Sample records for adaptive regularization methods

  1. A spatially adaptive total variation regularization method for electrical resistance tomography

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2015-12-01

    The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in flat regions, is often degraded by noise. To optimize the regularization term and the regularization factor according to the spatial features of the image and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. An effective spatial feature indicator, the difference curvature, is used to identify whether a region is flat or an edge. According to the local spatial feature, the SATV regularization method automatically adjusts both the regularization term and the regularization factor: at edges, the regularization term approximates the TV functional to preserve edges; in flat regions, it approximates the first-order Tikhonov (FOT) functional to stabilize the solution. Meanwhile, the adaptive regularization factor determined by the spatial feature constrains the regularization strength for different regions. In addition, a numerical scheme is adopted for implementing the second derivatives in the difference curvature to improve numerical stability. Several image reconstruction metrics are used to quantitatively evaluate the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV regularization method (mean relative error 0.259, mean correlation coefficient 0.738) can endure a relatively high level of noise and improve the resolution of reconstructed images.
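
    A minimal NumPy sketch of the two ingredients this abstract describes: a difference-curvature edge indicator and a region-dependent blend of TV and first-order Tikhonov penalties. The paper's exact discretization and weighting rule are not given here; the indicator definition below is a common one from the literature, and the function names and constants are illustrative assumptions.

    ```python
    import numpy as np

    def difference_curvature(u, eps=1e-8):
        """Edge indicator D = | |u_nn| - |u_ee| | (a common definition;
        the paper's exact discretization may differ)."""
        uy, ux = np.gradient(u)
        uxy, uxx = np.gradient(ux)
        uyy, _ = np.gradient(uy)
        g2 = ux**2 + uy**2 + eps
        u_nn = (ux**2 * uxx + 2 * ux * uy * uxy + uy**2 * uyy) / g2  # along gradient
        u_ee = (uy**2 * uxx - 2 * ux * uy * uxy + ux**2 * uyy) / g2  # along level set
        return np.abs(np.abs(u_nn) - np.abs(u_ee))

    def satv_penalty(u, alpha_edge=0.1, alpha_flat=1.0):
        """Blend TV (edges) and first-order Tikhonov (flat regions) using
        the normalized difference curvature as a spatial weight w in [0, 1].
        alpha_edge/alpha_flat are hypothetical adaptive factors."""
        d = difference_curvature(u)
        w = d / (d.max() + 1e-12)               # ~1 at edges, ~0 in flat regions
        uy, ux = np.gradient(u)
        grad_mag2 = ux**2 + uy**2
        tv = np.sqrt(grad_mag2 + 1e-12)         # TV integrand |grad u|
        tik = 0.5 * grad_mag2                   # first-order Tikhonov integrand
        alpha = w * alpha_edge + (1 - w) * alpha_flat  # adaptive factor
        return np.sum(alpha * (w * tv + (1 - w) * tik))
    ```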

  2. Application of Two-Parameter Stabilizing Functions in Solving a Convolution-Type Integral Equation by Regularization Method

    NASA Astrophysics Data System (ADS)

    Maslakov, M. L.

    2018-04-01

    This paper examines the solution of convolution-type integral equations of the first kind by applying the Tikhonov regularization method with two-parameter stabilizing functions. The class of stabilizing functions is expanded in order to improve the accuracy of the resulting solution. The features of the problem formulation for identification and adaptive signal correction are described. A method for choosing regularization parameters in problems of identification and adaptive signal correction is suggested.
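
    As a hedged illustration of Tikhonov regularization for first-kind convolution equations, here is a Fourier-domain sketch with a two-parameter stabilizer. The paper studies a broader class of stabilizing functions; the specific form M(w) = a1 + a2*w**2 below (zeroth- plus first-order smoothing), the circular-convolution assumption, and the function name are all illustrative assumptions.

    ```python
    import numpy as np

    def tikhonov_deconvolve(f, k, a1=1e-3, a2=1e-2):
        """Tikhonov-regularized deconvolution of f = k * u + noise (circular
        convolution; f and k are 1D arrays of equal length).  The stabilizer
        M(w) = a1 + a2*|w|^2 is one hypothetical two-parameter choice."""
        F = np.fft.fft(f)
        K = np.fft.fft(k)
        w = 2 * np.pi * np.fft.fftfreq(len(f))
        M = a1 + a2 * w**2                       # two-parameter stabilizer
        U = np.conj(K) * F / (np.abs(K)**2 + M)  # regularized inverse filter
        return np.real(np.fft.ifft(U))
    ```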

  3. Spatially adapted second-order total generalized variational image deblurring model under impulse noise

    NASA Astrophysics Data System (ADS)

    Zhong, Qiu-Xiang; Wu, Chuan-Sheng; Shu, Qiao-Ling; Liu, Ryan Wen

    2018-04-01

    Image deblurring under impulse noise is a typical ill-posed problem that requires regularization to guarantee high-quality imaging. The L1-norm data-fidelity term and the total variation (TV) regularizer have been combined to form a popular regularization method. However, the TV-regularized variational deblurring model often suffers from staircase-like artifacts that degrade image quality. To enhance image quality, the detail-preserving total generalized variation (TGV) was introduced to replace TV and eliminate these undesirable artifacts. The resulting nonconvex optimization problem was effectively solved using the alternating direction method of multipliers (ADMM). In addition, an automatic method for selecting spatially adapted regularization parameters was proposed to further improve deblurring performance. Our proposed image deblurring framework is able to remove blurring and impulse noise effects while maintaining the image edge details. Comprehensive experiments have been conducted to demonstrate the superior performance of our proposed method over several state-of-the-art image deblurring methods.

  4. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng Jinchao; Qin Chenghu; Jia Kebin

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise, so regularization methods are commonly used to find a regularized solution. The quality of the reconstructed bioluminescent source obtained by regularization methods depends crucially on the choice of the regularization parameters, and to date their selection remains challenging. To address these problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation to model bioluminescent photon transport; the diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. The relationship between the unknown source distribution and the multiview, multispectral boundary measurements is then established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated with an ℓ2 data-fidelity term and a general regularization term. To choose the regularization parameters, an efficient model function approach is proposed that does not require knowledge of the noise level; it requires only the computation of the residual and regularized solution norms. With this knowledge, a model function is constructed to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, a micro-CT based mouse phantom was used for simulation verification, and simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. The study conducted with an adaptive regularization parameter demonstrated the ability to accurately localize the bioluminescent source: with the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, 0.63 mm from the real source. The dual-source experiments further showed that the algorithm could localize bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm is computationally more efficient than the heuristic method, and its effectiveness was also confirmed by comparison with the L-curve method. Furthermore, various initial guesses of the regularization parameter were used to illustrate the convergence of the algorithm. Finally, an in vivo mouse experiment further illustrated the effectiveness of the proposed algorithm. Conclusions: Using numerical, physical phantom, and in vivo examples, the authors demonstrated that bioluminescent sources can be reconstructed accurately with automatically chosen regularization parameters. The proposed algorithm outperformed both the heuristic regularization parameter choice method and the L-curve method in terms of computational speed and localization error.
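
    The paper's model-function update is specific to its BLT formulation and is not reproduced here. As a hedged illustration of the L-curve baseline the abstract compares against, this sketch selects a Tikhonov parameter at the corner (maximum curvature) of the log residual-norm versus log solution-norm curve; names and the dense-matrix setup are illustrative assumptions.

    ```python
    import numpy as np

    def lcurve_tikhonov(A, b, alphas):
        """Pick the Tikhonov parameter at the L-curve corner (maximum
        curvature of the log-log residual/solution norm curve).  This is
        the baseline comparator, not the paper's model-function rule."""
        alphas = np.asarray(alphas)
        AtA, Atb = A.T @ A, A.T @ b
        rho, eta = [], []
        for a in alphas:
            x = np.linalg.solve(AtA + a * np.eye(A.shape[1]), Atb)
            rho.append(np.log(np.linalg.norm(A @ x - b)))  # residual norm
            eta.append(np.log(np.linalg.norm(x)))          # solution norm
        rho, eta = np.array(rho), np.array(eta)
        # discrete curvature of the parametric curve (rho(a), eta(a))
        d1r, d1e = np.gradient(rho), np.gradient(eta)
        d2r, d2e = np.gradient(d1r), np.gradient(d1e)
        kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
        return alphas[int(np.argmax(kappa))]
    ```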

  5. Image super-resolution via adaptive filtering and regularization

    NASA Astrophysics Data System (ADS)

    Ren, Jingbo; Wu, Hao; Dong, Weisheng; Shi, Guangming

    2014-11-01

    Image super-resolution (SR) is widely used in civil and military fields, especially for low-resolution remote sensing images limited by the sensor. Single-image SR refers to the task of restoring a high-resolution (HR) image from a single low-resolution image, coupled with prior knowledge in the form of a regularization term. Classic methods regularize the image by total variation (TV) and/or a wavelet or other transform, which can introduce artifacts. To overcome these shortcomings, a new framework for single-image SR is proposed that applies an adaptive filter before regularization. The key idea of our model is that the adaptive filter first removes the spatial correlation among pixels, so that only the high-frequency (HF) part, which is sparser in the TV and transform domains, enters the regularization term. Concretely, by transforming the original model, the SR problem can be solved by two alternating iterative sub-problems. Before each iteration, the adaptive filter is updated to estimate the initial HF. A high-quality HF part and HR image are obtained by solving the first and second sub-problems, respectively. In the experiments, a set of remote sensing images captured by Landsat satellites is used to demonstrate the effectiveness of the proposed framework. Experimental results show the outstanding performance of the proposed method in quantitative evaluation and visual fidelity compared with state-of-the-art methods.

  6. Adaptive regularization network based neural modeling paradigm for nonlinear adaptive estimation of cerebral evoked potentials.

    PubMed

    Zhang, Jian-Hua; Böhme, Johann F

    2007-11-01

    In this paper we report an adaptive regularization network (ARN) approach to realizing fast blind separation of cerebral evoked potentials (EPs) from background electroencephalogram (EEG) activity with no need to make any explicit assumption on the statistical (or deterministic) signal model. The ARNs are proposed to construct nonlinear EEG and EP signal models. A novel adaptive regularization training (ART) algorithm is proposed to improve the generalization performance of the ARN. Two adaptive neural modeling methods based on the ARN are developed and their implementation and performance analysis are also presented. The computer experiments using simulated and measured visual evoked potential (VEP) data have shown that the proposed ARN modeling paradigm yields computationally efficient and more accurate VEP signal estimation owing to its intrinsic model-free and nonlinear processing characteristics.

  7. Iterative Nonlocal Total Variation Regularization Method for Image Restoration

    PubMed Central

    Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen

    2013-01-01

    In this paper, a Bregman iteration based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experimental results show that the proposed algorithms outperform several other regularization methods. PMID:23776560
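
    A minimal sketch of the Bregman iteration idea for a TV denoising subproblem, using scikit-image's Chambolle TV solver for each inner step. This shows only the Osher-style "add the residual back" splitting; the paper's non-local regularizer and its locally adaptive filter-parameter choice are not reproduced, and the parameter values are illustrative.

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def bregman_tv_denoise(f, weight=0.1, n_iter=5):
        """Osher-type Bregman iteration: repeatedly add the residual back
        into the data and re-solve the TV subproblem."""
        v = np.zeros_like(f)                  # accumulated Bregman residual
        u = np.zeros_like(f)
        for _ in range(n_iter):
            u = denoise_tv_chambolle(f + v, weight=weight)  # TV subproblem
            v += f - u                        # return lost signal to the data
        return u
    ```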

  8. Adaptive tight frame based medical image reconstruction: a proof-of-concept study for computed tomography

    NASA Astrophysics Data System (ADS)

    Zhou, Weifeng; Cai, Jian-Feng; Gao, Hao

    2013-12-01

    A popular approach to medical image reconstruction has been sparsity regularization, which assumes the targeted image can be well approximated by sparse coefficients under some properly designed system. The wavelet tight frame is such a widely used system, owing to its capability for sparsely approximating piecewise-smooth functions such as medical images. However, using a fixed system may not always be optimal for reconstructing a variety of diversified images. Recently, methods based on adaptive over-complete dictionaries that are specific to the structures of the targeted images have demonstrated their superiority for image processing. This work develops an adaptive wavelet tight frame method for image reconstruction. The proposed scheme first constructs a task-specific adaptive wavelet tight frame, and then reconstructs the image of interest by solving an l1-regularized minimization problem under the constructed adaptive tight frame system. A proof-of-concept study is performed for computed tomography (CT), and the simulation results suggest that the adaptive tight frame method improves reconstructed CT image quality over the traditional fixed tight frame method.
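
    For orientation, here is the traditional fixed-frame baseline this work improves upon: one analysis/soft-threshold/synthesis pass under a fixed wavelet system, via PyWavelets. The paper's contribution, learning the tight frame adaptively from the data, is deliberately omitted; wavelet choice and threshold are illustrative.

    ```python
    import pywt

    def wavelet_soft_threshold(img, wavelet="db4", level=3, thresh=0.05):
        """Analysis, soft-thresholding of detail bands, then synthesis under
        a *fixed* wavelet system (the baseline; the paper learns the frame)."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        new_coeffs = [coeffs[0]]              # keep the approximation band
        for detail in coeffs[1:]:
            new_coeffs.append(tuple(pywt.threshold(d, thresh, mode="soft")
                                    for d in detail))
        return pywt.waverec2(new_coeffs, wavelet)
    ```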

  9. L1-norm locally linear representation regularization multi-source adaptation learning.

    PubMed

    Tao, Jianwen; Wen, Shiting; Hu, Wenjun

    2015-09-01

    In most supervised domain adaptation learning (DAL) tasks, one has access to only a small number of labeled examples from the target domain. The success of supervised DAL in this "small sample" regime therefore requires effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we use the geometric intuition of the manifold assumption to extend established frameworks in existing model-based DAL methods for function learning, incorporating additional information about the geometric structure of the target marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing so, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework that exploits the geometry of the probability distribution through two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Second, for robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, termed the L1-MSAL method. Moreover, to deal with nonlinear learning problems, we generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets covering faces, visual video, and objects.

  10. Adaptive eigenspace method for inverse scattering problems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Grote, Marcus J.; Kray, Marie; Nahum, Uri

    2017-02-01

    A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.
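
    A toy NumPy sketch of the core regularization-by-truncation idea: represent an unknown field in the smoothest m eigenfunctions of a discrete Laplacian, so that truncating at small m suppresses oscillatory (noise-driven) components. The paper adapts this basis during the Newton-type inversion, which is omitted here; the 1D setting and names are illustrative assumptions.

    ```python
    import numpy as np

    def eigenspace_project(field, m):
        """Project a 1D field onto the first m eigenvectors of the discrete
        Laplacian (the smoothest modes).  Small m acts as regularization."""
        n = field.size
        L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian
        _, vecs = np.linalg.eigh(L)       # eigenvectors, ascending eigenvalue
        basis = vecs[:, :m]               # smoothest m eigenfunctions
        return basis @ (basis.T @ field)  # least-squares projection
    ```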

  11. Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems.

    PubMed

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metric of this solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.

  12. Dictionary learning-based spatiotemporal regularization for 3D dense speckle tracking

    NASA Astrophysics Data System (ADS)

    Lu, Allen; Zontak, Maria; Parajuli, Nripesh; Stendahl, John C.; Boutagy, Nabil; Eberle, Melissa; O'Donnell, Matthew; Sinusas, Albert J.; Duncan, James S.

    2017-03-01

    Speckle tracking is a common method for non-rigid tissue motion analysis in 3D echocardiography, where unique texture patterns are tracked through the cardiac cycle. However, poor tracking often occurs due to inherent ultrasound issues, such as image artifacts and speckle decorrelation; thus regularization is required. Various methods, such as optical flow, elastic registration, and block matching, have been proposed to track speckle motion, but they typically apply spatial and temporal regularization separately. In this paper, we propose a joint spatiotemporal regularization method based on an adaptive dictionary representation of the dense 3D+time Lagrangian motion field. Sparse dictionaries have good signal-adaptive and noise-reduction properties; however, they are prone to quantization errors. Our method takes advantage of the desirable noise suppression while avoiding the undesirable quantization error: the idea is to enforce regularization only on the poorly tracked trajectories. Specifically, our method (1) builds a data-driven four-dimensional dictionary of Lagrangian displacements using sparse learning, (2) automatically identifies poorly tracked trajectories (outliers) based on sparse reconstruction errors, and (3) performs sparse reconstruction of the outliers only. Our approach can be applied to dense Lagrangian motion fields calculated by any method. We demonstrate the effectiveness of our approach on a baseline block matching speckle tracker and evaluate the performance of the proposed algorithm using tracking and strain accuracy analysis.
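
    A hedged scikit-learn sketch of steps (1) through (3) above. The array layout (one flattened trajectory per row), the outlier threshold tau, and all parameter values are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    def regularize_trajectories(X, n_atoms=32, alpha=1.0, tau=0.1):
        """X: (n_trajectories, n_features) flattened displacement trajectories.
        (1) learn a sparse dictionary, (2) flag poorly tracked trajectories
        by relative reconstruction error, (3) replace only those outliers."""
        dl = DictionaryLearning(n_components=n_atoms, alpha=alpha, max_iter=200)
        codes = dl.fit_transform(X)           # sparse codes per trajectory
        recon = codes @ dl.components_        # sparse reconstructions
        err = (np.linalg.norm(X - recon, axis=1)
               / (np.linalg.norm(X, axis=1) + 1e-12))
        outliers = err > tau                  # poorly tracked trajectories
        X_reg = X.copy()
        X_reg[outliers] = recon[outliers]     # reconstruct the outliers only
        return X_reg, outliers
    ```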

  13. Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers

    NASA Astrophysics Data System (ADS)

    Qin, Lilong; Wu, Manqing; Wang, Xuan; Dong, Zhen

    2017-04-01

    Motivated by the sparsity of filter coefficients in full-dimension space-time adaptive processing (STAP) algorithms, this paper proposes a fast ℓ1-regularized STAP algorithm based on the alternating direction method of multipliers to accelerate the convergence and reduce the calculations. The proposed algorithm uses a splitting variable to obtain an equivalent optimization formulation, which is addressed with an augmented Lagrangian method. Using the alternating recursive algorithm, the method can rapidly result in a low minimum mean-square error without a large number of calculations. Through theoretical analysis and experimental verification, we demonstrate that the proposed algorithm provides a better output signal-to-clutter-noise ratio performance than other algorithms.
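
    For reference, here is a minimal real-valued ADMM for a generic ℓ1-regularized least-squares problem, showing the splitting, augmented-Lagrangian, and soft-thresholding structure the abstract refers to. The STAP setting is complex-valued with space-time steering structure, which this sketch omits; all names and parameters are illustrative.

    ```python
    import numpy as np

    def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=100):
        """ADMM for min 0.5*||A x - b||^2 + lam*||x||_1 via the split x = z."""
        n = A.shape[1]
        AtA, Atb = A.T @ A, A.T @ b
        Q = np.linalg.inv(AtA + rho * np.eye(n))  # cached for every x-update
        x = z = u = np.zeros(n)
        soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
        for _ in range(n_iter):
            x = Q @ (Atb + rho * (z - u))     # quadratic (augmented) step
            z = soft(x + u, lam / rho)        # proximal step: soft-threshold
            u = u + x - z                     # scaled dual (multiplier) update
        return z
    ```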

  14. Adaptive regularization of the NL-means: application to image and video denoising.

    PubMed

    Sutour, Camille; Deledalle, Charles-Alban; Aujol, Jean-François

    2014-08-01

    Image denoising is a central problem in image processing and is often a necessary step prior to higher-level analysis such as segmentation, reconstruction, or super-resolution. The nonlocal means (NL-means) perform denoising by exploiting the natural redundancy of patterns inside an image: they compute a weighted average of pixels whose neighborhoods (patches) are close to each other. This significantly reduces the noise while preserving most of the image content. While the method performs well on flat areas and textures, it suffers from two opposite drawbacks: it may over-smooth low-contrasted areas or leave residual noise around edges and singular structures. Denoising can also be performed by total variation minimization (the Rudin, Osher and Fatemi model), which restores regular images but is prone to over-smoothing textures, staircasing effects, and contrast losses. In this paper we introduce a variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation. The proposed regularized NL-means algorithm combines these methods and reduces both of their respective defects by minimizing an adaptive total variation with a nonlocal data-fidelity term. Moreover, this model adapts to different noise statistics, and a fast solution can be obtained in the general case of the exponential family. We develop this model for image denoising and adapt it to video denoising with 3D patches.
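
    A deliberately crude two-stage proxy using scikit-image: NL-means first, then a TV pass on its output. The paper instead couples the two in a single variational problem (adaptive TV with a nonlocal data-fidelity term), which this sketch does not implement; it only mimics the "TV corrects NL-means residual noise" behavior, and the parameters are illustrative.

    ```python
    from skimage.restoration import denoise_nl_means, denoise_tv_chambolle

    def rnl_proxy(noisy, h=0.08, tv_weight=0.05):
        """Two-stage proxy: NL-means denoising, then a TV cleanup pass.
        Not the paper's joint adaptive minimization, just its rough spirit."""
        nlm = denoise_nl_means(noisy, patch_size=7, patch_distance=11, h=h)
        return denoise_tv_chambolle(nlm, weight=tv_weight)
    ```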

  15. Dynamic Experiment Design Regularization Approach to Adaptive Imaging with Array Radar/SAR Sensor Systems

    PubMed Central

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metric of this solution space. Next, the “model-free” variational analysis (VA)-based image enhancement approach and the “model-based” descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations. PMID:22163859

  16. Assessment and Evaluation in Adapted Physical Education.

    ERIC Educational Resources Information Center

    Kennedy, Charlotte I.; Bundschuh, Ernie

    Intended for physical education teachers, the booklet describes informal and formal methods for evaluating handicapped children to determine whether they can participate in a regular physical education program with nonhandicapped students, in a regular physical education program with modification, or in a specially designed physical education…

  17. Limited angle CT reconstruction by simultaneous spatial and Radon domain regularization based on TV and data-driven tight frame

    NASA Astrophysics Data System (ADS)

    Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin

    2018-02-01

    Limited-angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of object size, engine/armor inspection requirements, and limited scan flexibility. Limited-angle reconstruction necessitates optimization-based methods that utilize additional sparse priors. However, most conventional methods exploit sparsity priors only in the spatial domain; when the CT projection data suffer from serious deficiency or various noises, obtaining reconstructions of the required quality becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited-angle CT problem. The proposed method simultaneously uses a spatial- and Radon-domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from the wavelet transform, exploits sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data during iterative reconstruction to provide optimal sparse approximations for the given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial- and Radon-domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm performs better in artifact suppression and detail preservation than algorithms using a spatial-domain regularization model alone. Quantitative evaluations also indicate that the proposed algorithm, with its learning strategy, performs better than dual-domain algorithms without a learned regularization model.

  18. Flexible binding simulation by a novel and improved version of virtual-system coupled adaptive umbrella sampling

    NASA Astrophysics Data System (ADS)

    Dasgupta, Bhaskar; Nakamura, Haruki; Higo, Junichi

    2016-10-01

    Virtual-system coupled adaptive umbrella sampling (VAUS) enhances sampling along a reaction coordinate by using a virtual degree of freedom. However, both VAUS and regular adaptive umbrella sampling (AUS) are computationally expensive. To decrease the computational burden further, improvements of VAUS for all-atom explicit-solvent simulation are presented here. The improvements include probability distribution calculation by a Markov approximation, parameterization of biasing forces by iterative polynomial fitting, and force scaling. When applied to study Ala-pentapeptide dimerization in explicit solvent, these improvements showed an advantage over regular AUS. With the improved VAUS, larger biological systems become amenable to study.

  19. A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction

    PubMed Central

    Lu, Hongyang; Wei, Jingbo; Wang, Yuhao; Deng, Xiaohua

    2016-01-01

    Reconstructing images from noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels, largely avoiding oil-painting artifacts. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers details of images. The proposed model is solved by a variable splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms current state-of-the-art approaches in terms of higher PSNR and lower HFEN values. PMID:27110235

  20. A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction.

    PubMed

    Lu, Hongyang; Wei, Jingbo; Liu, Qiegen; Wang, Yuhao; Deng, Xiaohua

    2016-01-01

    Reconstructing images from noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels, largely avoiding oil-painting artifacts. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers details of images. The proposed model is solved by a variable splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.

  1. Implication of adaptive smoothness constraint and Helmert variance component estimation in seismic slip inversion

    NASA Astrophysics Data System (ADS)

    Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi

    2017-10-01

    When ill-posed problems are inverted, the regularization process is equivalent, from a Bayesian perspective, to adding constraint equations or prior information. The veracity of the constraints (or of the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint but also helps constrain the slip direction. A series of experiments with different magnitudes of imposed noise and different observation densities indicates that the ASC is superior to the LSC. With the proposed ASC, the Helmert variance component estimation method proves best for selecting the regularization parameter, compared with other methods such as generalized cross-validation or the mean squared error criterion. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
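
    Both the LSC and the proposed ASC instantiate the same penalized least-squares form common to slip inversions. This hedged summary uses generic symbols (d, G, m, R, alpha) that the abstract does not spell out:

    ```latex
    % Generic regularized slip inversion (symbols assumed, not from the paper):
    % d: observations, G: Green's function matrix, m: slip vector,
    % R: smoothness (regularization) matrix, alpha: regularization parameter.
    % The LSC fixes R to a Laplacian; the ASC adapts R (and constrains the
    % slip direction).  Helmert variance component estimation selects alpha.
    \min_{m} \; \lVert d - G m \rVert_2^2 \; + \; \alpha^2 \, \lVert R\, m \rVert_2^2
    ```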

  2. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
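
    A minimal sketch of the two accelerations the abstract names, momentum and adaptive (function-value) restart, applied to plain ISTA on an ℓ1 least-squares problem. BARISTA's B1-based majorizing matrices for SENSE-type MRI are omitted; the momentum schedule and names below are illustrative assumptions.

    ```python
    import numpy as np

    def fista_restart(A, b, lam=0.1, n_iter=200):
        """ISTA with Nesterov momentum and function-value adaptive restart
        for min 0.5*||A x - b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of gradient
        soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
        obj = lambda v: 0.5 * np.sum((A @ v - b) ** 2) + lam * np.sum(np.abs(v))
        x = x_prev = np.zeros(A.shape[1])
        k = 1                                  # momentum counter, reset on restart
        for _ in range(n_iter):
            beta = (k - 1.0) / (k + 2.0)       # Nesterov momentum weight
            y = x + beta * (x - x_prev)        # extrapolation step
            x_next = soft(y - A.T @ (A @ y - b) / L, lam / L)
            if obj(x_next) > obj(x):           # adaptive restart test
                k = 1                          # kill momentum, take plain step
                x_next = soft(x - A.T @ (A @ x - b) / L, lam / L)
            else:
                k += 1
            x_prev, x = x, x_next
        return x
    ```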

  3. Fast Parallel MR Image Reconstruction via B1-based, Adaptive Restart, Iterative Soft Thresholding Algorithms (BARISTA)

    PubMed Central

    Noll, Douglas C.; Fessler, Jeffrey A.

    2014-01-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484

  4. A Simple Method for Deriving the Confidence Regions for the Penalized Cox’s Model via the Minimand Perturbation

    PubMed Central

    Lin, Chen-Yen; Halabi, Susan

    2017-01-01

    We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for Cox’s proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite-sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description of the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer. PMID:29326496
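
    A hedged sketch of perturbation-style resampling for a penalized Cox model, assuming the lifelines library (CoxPHFitter with penalizer, l1_ratio, and weights_col, as in recent versions). Refitting under i.i.d. Exp(1) case weights stands in for perturbing the minimand; the paper's adaptive-LASSO weighting scheme is not reproduced, and column names and B are illustrative.

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    def perturbation_ci(df, duration_col="T", event_col="E", B=200,
                        penalizer=0.1, level=0.95):
        """Refit the L1-penalized Cox model B times under Exp(1) case
        weights and return percentile confidence limits per coefficient."""
        rng = np.random.default_rng(0)
        coefs = []
        for _ in range(B):
            d = df.copy()
            d["w"] = rng.exponential(1.0, size=len(d))  # perturbation weights
            cph = CoxPHFitter(penalizer=penalizer, l1_ratio=1.0)
            cph.fit(d, duration_col=duration_col, event_col=event_col,
                    weights_col="w")
            coefs.append(cph.params_.values)
        coefs = np.asarray(coefs)
        lo, hi = np.percentile(
            coefs, [(1 - level) / 2 * 100, (1 + level) / 2 * 100], axis=0)
        return pd.DataFrame({"lower": lo, "upper": hi},
                            index=cph.params_.index)
    ```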

  5. A Simple Method for Deriving the Confidence Regions for the Penalized Cox's Model via the Minimand Perturbation.

    PubMed

    Lin, Chen-Yen; Halabi, Susan

    2017-01-01

    We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for Cox's proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite-sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description of the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer.

  6. A Locally Adaptive Regularization Based on Anisotropic Diffusion for Deformable Image Registration of Sliding Organs

    PubMed Central

    Pace, Danielle F.; Aylward, Stephen R.; Niethammer, Marc

    2014-01-01

    We propose a deformable image registration algorithm that uses anisotropic smoothing for regularization to find correspondences between images of sliding organs. In particular, we apply the method for respiratory motion estimation in longitudinal thoracic and abdominal computed tomography scans. The algorithm uses locally adaptive diffusion tensors to determine the direction and magnitude with which to smooth the components of the displacement field that are normal and tangential to an expected sliding boundary. Validation was performed using synthetic, phantom, and 14 clinical datasets, including the publicly available DIR-Lab dataset. We show that motion discontinuities caused by sliding can be effectively recovered, unlike conventional regularizations that enforce globally smooth motion. In the clinical datasets, target registration error showed improved accuracy for lung landmarks compared to the diffusive regularization. We also present a generalization of our algorithm to other sliding geometries, including sliding tubes (e.g., needles sliding through tissue, or contrast agent flowing through a vessel). Potential clinical applications of this method include longitudinal change detection and radiotherapy for lung or abdominal tumours, especially those near the chest or abdominal wall. PMID:23899632

  7. A locally adaptive regularization based on anisotropic diffusion for deformable image registration of sliding organs.

    PubMed

    Pace, Danielle F; Aylward, Stephen R; Niethammer, Marc

    2013-11-01

    We propose a deformable image registration algorithm that uses anisotropic smoothing for regularization to find correspondences between images of sliding organs. In particular, we apply the method for respiratory motion estimation in longitudinal thoracic and abdominal computed tomography scans. The algorithm uses locally adaptive diffusion tensors to determine the direction and magnitude with which to smooth the components of the displacement field that are normal and tangential to an expected sliding boundary. Validation was performed using synthetic, phantom, and 14 clinical datasets, including the publicly available DIR-Lab dataset. We show that motion discontinuities caused by sliding can be effectively recovered, unlike conventional regularizations that enforce globally smooth motion. In the clinical datasets, target registration error showed improved accuracy for lung landmarks compared to the diffusive regularization. We also present a generalization of our algorithm to other sliding geometries, including sliding tubes (e.g., needles sliding through tissue, or contrast agent flowing through a vessel). Potential clinical applications of this method include longitudinal change detection and radiotherapy for lung or abdominal tumours, especially those near the chest or abdominal wall.

  8. Angular dependence of multiangle dynamic light scattering for particle size distribution inversion using a self-adapting regularization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Lei; Yu, Long; Yang, Kecheng; Li, Wei; Li, Kai; Xia, Min

    2018-04-01

    The multiangle dynamic light scattering (MDLS) technique can estimate particle size distributions (PSDs) better than single-angle dynamic light scattering. However, determining the inversion range, angular weighting coefficients, and scattering angle combination is difficult yet fundamental to the reconstruction of both unimodal and multimodal distributions. In this paper, we propose a self-adapting regularization method called the wavelet iterative recursion nonnegative Tikhonov-Phillips-Twomey (WIRNNT-PT) algorithm. This algorithm combines a wavelet multiscale strategy with an appropriate inversion method and self-adaptively resolves several key issues, including the choice of the weighting coefficients, the inversion range, and the better of two candidate regularization algorithms, for estimating the PSD from MDLS measurements. In addition, the angular dependence of MDLS for estimating the PSDs of polymeric latexes is thoroughly analyzed: the dependence of the results on the number and range of measurement angles is analyzed in depth to identify the optimal scattering angle combination. Numerical simulations and experimental results for unimodal and multimodal distributions demonstrate both the validity of the WIRNNT-PT algorithm and the angular dependence of MDLS, and show that the proposed algorithm with a six-angle analysis in the 30-130° range can be satisfactorily applied to retrieve PSDs from MDLS measurements.
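
    A minimal SciPy sketch of the nonnegative Tikhonov-Phillips-Twomey core step named in the algorithm: a smoothness-penalized nonnegative least-squares solve, obtained by stacking the smoothing operator into an NNLS problem. The wavelet multiscale recursion and the angle-weighted assembly of A and b are omitted; names and alpha are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def nn_tikhonov(A, b, alpha=1e-2):
        """Solve min ||A f - b||^2 + alpha*||L f||^2 s.t. f >= 0, where L is
        a second-difference smoothing operator, via a stacked NNLS problem."""
        n = A.shape[1]
        L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # second differences
        A_aug = np.vstack([A, np.sqrt(alpha) * L])
        b_aug = np.concatenate([b, np.zeros(n)])
        f, _ = nnls(A_aug, b_aug)             # nonnegative least squares
        return f
    ```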

  9. Adaptive multi-view clustering based on nonnegative matrix factorization and pairwise co-regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Tianzhen; Wang, Xiumei; Gao, Xinbo

    2018-04-01

    Nowadays, many datasets are represented by multiple views, which usually contain shared and complementary information. Multi-view clustering methods integrate the information of the views to obtain better clustering results. Nonnegative matrix factorization has become an essential and popular tool in clustering methods because of its interpretability. However, existing nonnegative matrix factorization based multi-view clustering algorithms do not consider the disagreement between views and neglect the fact that different views contribute differently to the data distribution. In this paper, we propose a new multi-view clustering method, named adaptive multi-view clustering based on nonnegative matrix factorization and pairwise co-regularization. The proposed algorithm obtains a parts-based representation of the multi-view data by nonnegative matrix factorization and uses pairwise co-regularization to measure the disagreement between views. Only one parameter is needed to automatically learn the weight of each view according to its contribution to the data distribution. Experimental results show that the proposed algorithm outperforms several state-of-the-art algorithms for multi-view clustering.

  10. Dancing with Black Holes

    NASA Astrophysics Data System (ADS)

    Aarseth, S. J.

    2008-05-01

    We describe efforts over the last six years to implement regularization methods suitable for studying one or more interacting black holes by direct N-body simulations. Three different methods have been adapted to large-N systems: (i) Time-Transformed Leapfrog, (ii) Wheel-Spoke, and (iii) Algorithmic Regularization. These methods have been tried out with some success on GRAPE-type computers. Special emphasis has also been devoted to including post-Newtonian terms, with application to moderately massive black holes in stellar clusters. Some examples of simulations leading to coalescence by gravitational radiation will be presented to illustrate the practical usefulness of such methods.

  11. A novel speckle pattern—Adaptive digital image correlation approach with robust strain calculation

    NASA Astrophysics Data System (ADS)

    Cofaru, Corneliu; Philips, Wilfried; Van Paepegem, Wim

    2012-02-01

    Digital image correlation (DIC) has seen widespread acceptance and usage as a non-contact method for determining full-field displacements and strains in experimental mechanics. Advances in imaging hardware over the last decades have made high-resolution, high-speed cameras more affordable than in the past, making large amounts of image data available in typical DIC experimental scenarios. The work presented in this paper aims at maximizing both the accuracy and speed of DIC methods when employed with such images. A low-level framework for speckle image partitioning is introduced, which replaces regularly shaped blocks with image-adaptive cells in the displacement calculation. The Newton-Raphson DIC method is modified to use the image pixels of the cells and to perform adaptive regularization that increases the spatial consistency of the displacements. Furthermore, a novel robust framework for strain calculation, also based on the Newton-Raphson algorithm, is introduced. The proposed methods are evaluated in five experimental scenarios, four using numerically deformed images and one using real experimental data. Results indicate that, as the desired strain density increases, significant computational gains can be obtained while maintaining or improving accuracy and rigid-body rotation sensitivity.

  12. Efficient energy stable schemes for isotropic and strongly anisotropic Cahn-Hilliard systems with the Willmore regularization

    NASA Astrophysics Data System (ADS)

    Chen, Ying; Lowengrub, John; Shen, Jie; Wang, Cheng; Wise, Steven

    2018-07-01

    We develop efficient energy stable numerical methods for solving isotropic and strongly anisotropic Cahn-Hilliard systems with the Willmore regularization. The scheme, which involves adaptive mesh refinement and a nonlinear multigrid finite difference method, is constructed based on a convex splitting approach. We prove that, for the isotropic Cahn-Hilliard system with the Willmore regularization, the total free energy of the system is non-increasing for any time step and mesh sizes. A straightforward modification of the scheme is then used to solve the regularized strongly anisotropic Cahn-Hilliard system, and it is numerically verified that the discrete energy of the anisotropic system is also non-increasing, and can be efficiently solved by using the modified stable method. We present numerical results in both two and three dimensions that are in good agreement with those in earlier work on the topics. Numerical simulations are presented to demonstrate the accuracy and efficiency of the proposed methods.

  13. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images

    PubMed Central

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-01-01

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744

  14. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    PubMed

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-05-22

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures.

  15. Minimum entropy deconvolution optimized sinusoidal synthesis and its application to vibration based fault detection

    NASA Astrophysics Data System (ADS)

    Li, Gang; Zhao, Qing

    2017-03-01

    In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits the frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model has demonstrated superior performance compared to the regular SS method and it also shows comparable or better performance with much less computational intensity than the ARMED method.

  16. A two-stage adaptive stochastic collocation method on nested sparse grids for multiphase flow in randomly heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi

    2017-02-01

    A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.

  17. Higher order total variation regularization for EIT reconstruction.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on electrical boundary conditions. This is an ill-posed inverse problem, and its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher-order differential operator were developed in several previous studies; one of them is total generalized variation (TGV) regularization, which has been successfully applied in image processing on regular grids. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulated and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: reconstructed conductivity changes along selected left and right vertical lines, comparing the ground truth (GT) with TV and TGV reconstructions; conductivity distributions reconstructed by the GREIT algorithm are also shown.

  18. Characterizing Reinforcement Learning Methods through Parameterized Learning Problems

    DTIC Science & Technology

    2011-06-03

    extraneous. The agent could potentially adapt these representational aspects by applying methods from feature selection (Kolter and Ng, 2009; Petrik et al. …). Cited reference: Kolter, J. Z. and Ng, A. Y. (2009). Regularization and feature selection in least-squares temporal difference learning. In A. P…, 611–616. AAAI Press.

  19. Towards Adaptive Grids for Atmospheric Boundary-Layer Simulations

    NASA Astrophysics Data System (ADS)

    van Hooft, J. Antoon; Popinet, Stéphane; van Heerwaarden, Chiel C.; van der Linden, Steven J. A.; de Roode, Stephan R.; van de Wiel, Bas J. H.

    2018-02-01

    We present a proof-of-concept for the adaptive mesh refinement method applied to atmospheric boundary-layer simulations. Such a method may form an attractive alternative to static grids for studies on atmospheric flows that have a high degree of scale separation in space and/or time. Examples include the diurnal cycle and a convective boundary layer capped by a strong inversion. For such cases, large-eddy simulations using regular grids often have to rely on a subgrid-scale closure for the most challenging regions in the spatial and/or temporal domain. Here we analyze a flow configuration that describes the growth and subsequent decay of a convective boundary layer using direct numerical simulation (DNS). We validate the obtained results and benchmark the performance of the adaptive solver against two runs using fixed regular grids. It appears that the adaptive-mesh algorithm is able to coarsen and refine the grid dynamically whilst maintaining an accurate solution. In particular, during the initial growth of the convective boundary layer a high resolution is required compared to the subsequent stage of decaying turbulence. More specifically, the number of grid cells varies by two orders of magnitude over the course of the simulation. For this specific DNS case, the adaptive solver was not yet more efficient than the more traditional solver that is dedicated to these types of flows. However, the overall analysis shows that the method has a clear potential for numerical investigations of the most challenging atmospheric cases.

  1. A Weight-Adaptive Laplacian Embedding for Graph-Based Clustering.

    PubMed

    Cheng, De; Nie, Feiping; Sun, Jiande; Gong, Yihong

    2017-07-01

    Graph-based clustering methods perform clustering on a fixed input data graph, so the clustering results are sensitive to the particular graph construction: if the initial construction is of low quality, the resulting clustering may also be of low quality. We address this drawback by allowing the data graph itself to be adjusted adaptively during the clustering procedure. In particular, our proposed weight-adaptive Laplacian (WAL) method learns a new data similarity matrix that can adaptively adjust the initial graph according to the similarity weights in the input data graph. We develop three versions of this method, based on the L2-norm, a fuzzy entropy regularizer, and an exponential-based weight strategy, which yield three new graph-based clustering objectives, and we derive optimization algorithms to solve them. Experimental results on synthetic datasets and real-world benchmark datasets demonstrate the effectiveness of these new graph-based clustering methods.

  2. A two-stage adaptive stochastic collocation method on nested sparse grids for multiphase flow in randomly heterogeneous porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Qinzhuo, E-mail: liaoqz@pku.edu.cn; Zhang, Dongxiao; Tchelepi, Hamdi

    A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod–Patterson–Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate the accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.

  3. ADAPTIVE FINITE ELEMENT MODELING TECHNIQUES FOR THE POISSON-BOLTZMANN EQUATION

    PubMed Central

    HOLST, MICHAEL; MCCAMMON, JAMES ANDREW; YU, ZEYUN; ZHOU, YOUNGCHENG; ZHU, YUNRONG

    2011-01-01

    We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme are demonstrated with FETK through comparisons with the original regularization approach for a model problem. The convergence and accuracy of the overall AFEM algorithm is also illustrated by numerical approximation of electrostatic solvation energy for an insulin protein. PMID:21949541
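
    For orientation, the regularization discussed in this record removes the singular Coulomb part of the potential analytically; a minimal sketch of the standard two-term splitting (our notation and unit conventions, not necessarily the exact variant analyzed in the paper) is

      u = G + u^{\mathrm{reg}}, \qquad G(x) = \frac{1}{\varepsilon_m} \sum_{i} \frac{q_i}{|x - x_i|},

    so that the remaining unknown u^{reg} solves a PBE-type equation with smooth data, which is what makes finite element approximation, and ultimately a provably convergent AFEM, tractable.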

  4. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    PubMed

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters; (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, regular common-value shrinkage estimators, or estimators that simply ignore the information contained in the sample mean. Finally, we show that these estimators feature interesting variance stabilization and normalization properties that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
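
    As a rough numerical illustration of the local-pooling idea (a deliberately simplified stand-in for the MVR procedure: quantile bins replace the paper's similarity-statistic clustering, and the shrinkage weight lam is a free assumption):

      import numpy as np

      def local_pool_shrink(X, n_bins=10, lam=0.5):
          """Toy joint shrinkage of per-variable means and variances
          toward bin-pooled values; X is (n samples x p variables)."""
          m = X.mean(axis=0)
          v = X.var(axis=0, ddof=1)
          # group variables with similar means (crude stand-in for clustering)
          edges = np.quantile(m, np.linspace(0.0, 1.0, n_bins + 1))
          idx = np.clip(np.digitize(m, edges[1:-1]), 0, n_bins - 1)
          m_shr, v_shr = m.copy(), v.copy()
          for b in range(n_bins):
              sel = idx == b
              if sel.any():
                  # pull each variable's moments toward the pooled bin values
                  m_shr[sel] = (1 - lam) * m[sel] + lam * m[sel].mean()
                  v_shr[sel] = (1 - lam) * v[sel] + lam * v[sel].mean()
          return m_shr, v_shr

    Regularized t-like statistics then use the shrunken moments in place of the raw sample estimates.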

  5. Visuomotor adaptability in older adults with mild cognitive decline.

    PubMed

    Schaffert, Jeffrey; Lee, Chi-Mei; Neill, Rebecca; Bo, Jin

    2017-02-01

    The current study examined the augmentation of error feedback on visuomotor adaptability in older adults with varying degrees of cognitive decline (assessed by the Montreal Cognitive Assessment; MoCA). Twenty-three participants performed a center-out computerized visuomotor adaptation task in which the visual feedback of their hand movement error was presented on a regular (ratio=1:1) or enhanced (ratio=1:2) error feedback schedule. Results showed that older adults with lower scores on the MoCA had less adaptability than those with higher MoCA scores during the regular feedback schedule. However, participants demonstrated similar adaptability during the enhanced feedback schedule, regardless of their cognitive ability. Furthermore, individuals with lower MoCA scores showed larger after-effects in spatial control during the enhanced schedule compared to the regular schedule, whereas individuals with higher MoCA scores displayed the opposite pattern. Additional neuro-cognitive assessments revealed that spatial working memory and processing speed were positively related to motor adaptability during the regular schedule but negatively related to adaptability during the enhanced schedule. We argue that individuals with mild cognitive decline employed different adaptation strategies when encountering enhanced visual feedback, suggesting that older adults with mild cognitive impairment (MCI) may benefit from enhanced visual error feedback during sensorimotor adaptation.

  6. An adaptive surface filter for airborne laser scanning point clouds by means of regularization and bending energy

    NASA Astrophysics Data System (ADS)

    Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng

    2014-06-01

    The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering processes are difficult because of the complex configuration of the terrain features. The classical filtering algorithms rely on the cautious tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset and to surmount the sensitivity of the parameters, in this study, we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance to the terrain smoothness, the ASF embeds bending energy, which quantitatively depicts the local terrain structure to self-adapt the filter threshold automatically. The ASF employs a step factor to control the data pyramid scheme in which the processing window sizes are reduced progressively, and the ASF gradually interpolates thin plate spline surfaces toward the ground with regularization to handle noise. Using the progressive densification strategy, regularization and self-adaption, both performance improvement and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by ISPRS, the ASF performs the best in comparison with all other filtering methods, yielding an average total error of 2.85% when optimized and 3.67% when using the same parameter set.
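
    For reference, the bending energy used as the smoothness indicator is, for a thin plate spline surface f(x, y), conventionally the integral of squared second derivatives:

      E(f) = \iint \left( f_{xx}^{2} + 2 f_{xy}^{2} + f_{yy}^{2} \right) dx\, dy,

    which is small over flat terrain and large over rugged terrain, so the filter threshold can be tightened or relaxed accordingly.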

  7. A Large-Scale Multi-Hop Localization Algorithm Based on Regularized Extreme Learning for Wireless Networks.

    PubMed

    Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan

    2017-12-20

    A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms that are applicable only to isotropic networks, and it therefore adapts well to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling, and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the model relating hop counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
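
    The regularized extreme learning machine at the heart of the modeling stage can be sketched as a generic ridge-regularized ELM (the hidden-layer size, tanh activation, and variable names below are our illustrative assumptions, not the paper's exact configuration):

      import numpy as np

      def relm_fit(X, y, n_hidden=100, lam=1e-2, seed=0):
          """Fit output weights of a ridge-regularized ELM.
          X: (n x d) hop-count features; y: (n,) physical distances."""
          rng = np.random.default_rng(seed)
          W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights, fixed
          b = rng.normal(size=n_hidden)                # random hidden biases, fixed
          H = np.tanh(X @ W + b)                       # hidden-layer features
          # closed-form regularized least squares for the output weights
          beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
          return W, b, beta

      def relm_predict(X, W, b, beta):
          return np.tanh(X @ W + b) @ beta

    Because only beta is learned, training reduces to a single regularized linear solve, which is what keeps the computational cost low.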

  8. Impedance computed tomography using an adaptive smoothing coefficient algorithm.

    PubMed

    Suzuki, A; Uchiyama, A

    2001-01-01

    In impedance computed tomography, a fixed-coefficient regularization algorithm has frequently been used to alleviate the ill-conditioning of the Newton-Raphson algorithm. However, a large amount of experimental data and a long computation time are needed to determine a good smoothing coefficient, because it must be chosen manually from many candidate coefficients and is held constant across iterations. As a result, the fixed-coefficient regularization algorithm sometimes distorts the reconstructed information or fails to have any effect. In this paper, a new adaptive smoothing coefficient algorithm is proposed. This algorithm automatically calculates the smoothing coefficient from the eigenvalues of the ill-conditioned matrix, so effective images can be obtained within a short computation time. The smoothing coefficient is also adjusted automatically according to information about the true resistivity distribution and the data collection method. In our impedance system, we have reconstructed the resistivity distributions of two phantoms using this algorithm. As a result, this algorithm needs only one-fifth of the computation time of the fixed-coefficient regularization algorithm, so the image is obtained rapidly enough to make the approach applicable to real-time monitoring of blood vessels.
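
    The flavor of such an eigenvalue-driven choice can be sketched with a damped Gauss-Newton update in which the smoothing coefficient is tied to the spectrum of the normal matrix (an illustrative rule of our own; the record's exact formula is not reproduced here):

      import numpy as np

      def damped_newton_step(J, r, R=None, alpha=1e-3):
          """One regularized Newton-Raphson/Gauss-Newton update.
          J: Jacobian, r: residual vector, R: regularization matrix."""
          JtJ = J.T @ J
          if R is None:
              R = np.eye(JtJ.shape[0])
          # scale the smoothing coefficient by the largest eigenvalue,
          # so the damping adapts to the conditioning of JtJ
          lam = alpha * np.linalg.eigvalsh(JtJ).max()
          return np.linalg.solve(JtJ + lam * R, J.T @ r)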

  9. Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction

    PubMed Central

    Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.

    2016-01-01

    X-ray imaging applications in medical and material sciences are frequently limited by the number of tomographic projections collected. The inversion of the limited projection data is an ill-posed problem and needs regularization. Traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains. With the proposed sparsity-seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is substantially reduced (by at least one order of magnitude). The presented reconstruction method can be directly applied to various big-data 4D (x, y, z + time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real dynamic X-ray microtomography (XMT) data of high resolution. Compared to the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding. PMID:27002902

  10. Total variation regularization for seismic waveform inversion using an adaptive primal dual hybrid gradient method

    NASA Astrophysics Data System (ADS)

    Yong, Peng; Liao, Wenyuan; Huang, Jianping; Li, Zhenchuan

    2018-04-01

    Full waveform inversion is an effective tool for recovering the properties of the Earth from seismograms. However, it suffers from local minima caused mainly by the limited accuracy of the starting model and the lack of a low-frequency component in the seismic data. Because of the high velocity contrast between salt and sediment, the relation between the waveform and the velocity perturbation is strongly nonlinear, so salt inversion can easily get trapped in local minima. Since the velocity of salt is nearly constant, we can exploit this characteristic with total variation regularization to mitigate the local minima. In this paper, we develop an adaptive primal dual hybrid gradient method to implement total variation regularization by projecting the solution onto a total-variation-norm constrained convex set, through which the total variation norm constraint is satisfied at every model iteration. The smooth background velocities are inverted first, and the perturbations are obtained gradually by successively relaxing the total variation norm constraints. A numerical experiment projecting the BP model onto the intersection of the total variation norm and box constraints demonstrates the accuracy and efficiency of our adaptive primal dual hybrid gradient method. A workflow is designed to recover complex salt structures in the BP 2004 model and the 2D SEG/EAGE salt model, starting from a linear gradient model without using low-frequency data below 3 Hz. The salt inversion processes demonstrate that wavefield reconstruction inversion with a total variation norm and box constraints is able to overcome local minima and inverts the complex salt velocity layer by layer.
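
    In compact form, the constrained inversion described above reads (our paraphrase of the formulation, with symbols defined here rather than taken from the record):

      \min_{m}\; \tfrac{1}{2}\,\| F(m) - d \|_2^2 \quad \text{subject to} \quad \mathrm{TV}(m) \le \tau, \qquad m_l \le m \le m_u,

    where F is the forward-modeling operator, d the observed seismograms, and m the velocity model; the TV-ball radius \tau is relaxed successively so that sharp salt boundaries enter the model only after the smooth background has been resolved.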

  11. Versatile robotic probe calibration for position tracking in ultrasound imaging.

    PubMed

    Bø, Lars Eirik; Hofstad, Erlend Fagertun; Lindseth, Frank; Hernes, Toril A N

    2015-05-07

    Within the field of ultrasound-guided procedures, there are a number of methods for ultrasound probe calibration. While these methods are usually developed for a specific probe, they are in principle easily adapted to other probes. In practice, however, the adaptation often proves tedious and this is impractical in a research setting, where new probes are tested regularly. Therefore, we developed a method which can be applied to a large variety of probes without adaptation. The method used a robot arm to move a plastic sphere submerged in water through the ultrasound image plane, providing a slow and precise movement. The sphere was then segmented from the recorded ultrasound images using a MATLAB programme and the calibration matrix was computed based on this segmentation in combination with tracking information. The method was tested on three very different probes demonstrating both great versatility and high accuracy.
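
    The calibration-matrix computation from corresponding sphere positions can be illustrated with a standard rigid point-set fit (Kabsch/Procrustes); this is a generic sketch under our own naming, not the authors' exact implementation:

      import numpy as np

      def rigid_fit(P, Q):
          """Least-squares rigid transform (R, t) such that Q ≈ P @ R.T + t.
          P: sphere centres segmented in image coordinates (n x 3),
          Q: the same centres in tracking coordinates (n x 3)."""
          Pc, Qc = P - P.mean(0), Q - Q.mean(0)
          U, _, Vt = np.linalg.svd(Pc.T @ Qc)          # cross-covariance SVD
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T                           # proper rotation only
          t = Q.mean(0) - R @ P.mean(0)
          return R, t

    Feeding the slowly swept sphere centres through such a fit yields the image-to-tracker transform regardless of the particular probe geometry, which is what makes the procedure probe-agnostic.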

  12. Versatile robotic probe calibration for position tracking in ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Eirik Bø, Lars; Fagertun Hofstad, Erlend; Lindseth, Frank; Hernes, Toril A. N.

    2015-05-01

    Within the field of ultrasound-guided procedures, there are a number of methods for ultrasound probe calibration. While these methods are usually developed for a specific probe, they are in principle easily adapted to other probes. In practice, however, the adaptation often proves tedious and this is impractical in a research setting, where new probes are tested regularly. Therefore, we developed a method which can be applied to a large variety of probes without adaptation. The method used a robot arm to move a plastic sphere submerged in water through the ultrasound image plane, providing a slow and precise movement. The sphere was then segmented from the recorded ultrasound images using a MATLAB programme and the calibration matrix was computed based on this segmentation in combination with tracking information. The method was tested on three very different probes demonstrating both great versatility and high accuracy.

  13. Efficient robust doubly adaptive regularized regression with applications.

    PubMed

    Karunamuni, Rohana J; Kong, Linglong; Tu, Wei

    2018-01-01

    We consider the problem of estimation and variable selection for general linear regression models. Regularized regression procedures have been widely used for variable selection, but most existing methods perform poorly in the presence of outliers. We construct a new penalized procedure that simultaneously attains full efficiency and maximum robustness. Furthermore, the proposed procedure satisfies the oracle properties. The new procedure is designed to achieve sparse and robust solutions by imposing adaptive weights on both the decision loss and the penalty function. The proposed method of estimation and variable selection attains full efficiency when the model is correct and, at the same time, achieves maximum robustness when outliers are present. We examine the robustness properties using the finite-sample breakdown point and an influence function. We show that the proposed estimator attains the maximum breakdown point. Furthermore, there is no loss in efficiency when there are no outliers or the error distribution is normal. For practical implementation of the proposed method, we present a computational algorithm. We examine the finite-sample and robustness properties using Monte Carlo studies. Two datasets are also analyzed.
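
    Schematically, such doubly adaptive procedures minimize an objective with weights on both the data side and the penalty side (a generic template in our notation, not the paper's precise estimator):

      \min_{\beta}\; \sum_{i=1}^{n} w_i\, \rho\!\left(y_i - x_i^{T}\beta\right) \;+\; \lambda \sum_{j=1}^{p} v_j\, |\beta_j|,

    where the observation weights w_i downweight outliers in the decision loss \rho and the adaptive penalty weights v_j (for instance v_j = 1/|\tilde{\beta}_j| for a pilot estimate \tilde{\beta}) yield oracle-type variable selection.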

  14. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data

    PubMed Central

    Dazard, Jean-Eudes; Rao, J. Sunil

    2012-01-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput “omics” data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel “similarity statistic”-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters; (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, regular common-value shrinkage estimators, or estimators that simply ignore the information contained in the sample mean. Finally, we show that these estimators feature interesting variance stabilization and normalization properties that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called ‘MVR’ (‘Mean-Variance Regularization’), downloadable from the CRAN website. PMID:22711950

  15. Wavelet domain image restoration with adaptive edge-preserving regularization.

    PubMed

    Belge, M; Kilmer, M E; Miller, E L

    2000-01-01

    In this paper, we consider a wavelet-based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge-preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.
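
    The flavor of scale-adaptive wavelet regularization can be conveyed with a simple shrinkage pass using PyWavelets (an illustrative denoising sketch, much simpler than the statistically derived restoration scheme in the record; the wavelet choice and threshold schedule are assumptions):

      import numpy as np
      import pywt

      def scale_adaptive_shrink(img, wavelet="db4", level=3, base_t=10.0):
          """Soft-threshold detail coefficients with a scale-dependent
          threshold, crudely adapting the regularization strength."""
          coeffs = pywt.wavedec2(img, wavelet, level=level)
          n_det = len(coeffs) - 1
          out = [coeffs[0]]                  # keep the approximation untouched
          for k, details in enumerate(coeffs[1:], start=1):
              t = base_t * k / n_det         # stronger shrinkage at finer scales
              out.append(tuple(pywt.threshold(d, t, mode="soft")
                               for d in details))
          return pywt.waverec2(out, wavelet)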

  16. Stimulated Deep Neural Network for Speech Recognition

    DTIC Science & Technology

    2016-09-08

    … making network regularization and robust adaptation challenging. Stimulated training has recently been proposed to address this problem by encouraging … potential to improve regularization and adaptation. This paper investigates stimulated training of DNNs for both of these options. These schemes take …

  17. A hierarchical Bayesian method for vibration-based time domain force reconstruction problems

    NASA Astrophysics Data System (ADS)

    Li, Qiaofeng; Lu, Qiuhai

    2018-05-01

    Traditional force reconstruction techniques require prior knowledge of the nature of the force to determine the regularization term. When such information is unavailable, an inappropriate term is easily chosen and the reconstruction result becomes unsatisfactory. In this paper, we propose a novel method to automatically determine the appropriate q in ℓq regularization and reconstruct the force history. The method incorporates all to-be-determined variables, such as the force history, the precision parameters, and q, into a hierarchical Bayesian formulation. The posterior distributions of the variables are evaluated by a Metropolis-within-Gibbs sampler, yielding point estimates of the variables together with their uncertainties. Simulations of a cantilever beam and a space truss under various loading conditions validate the proposed method, which provides adaptive determination of q and better reconstruction performance than existing Bayesian methods.
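
    The underlying estimate can be summarized (schematically, in our notation) as an ℓq-regularized least-squares problem whose exponent is itself inferred:

      \min_{f}\; \| y - H f \|_2^2 + \lambda \| f \|_q^q, \qquad \| f \|_q^q = \sum_k |f_k|^q, \quad 0 < q \le 2,

    with f the discretized force history, H the transfer matrix, and q and \lambda treated as random variables in the hierarchical Bayesian model and sampled alongside f, so the sparsity level of the regularizer adapts to the data.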

  18. Multidimensional deconvolution of optical microscope and ultrasound imaging using adaptive least-mean-square (LMS) inverse filtering

    NASA Astrophysics Data System (ADS)

    Sapia, Mark Angelo

    2000-11-01

    Three-dimensional microscope images typically suffer from reduced resolution due to the effects of convolution, optical aberrations, and out-of-focus blurring. Two-dimensional ultrasound images are also degraded by convolutional blurring and various sources of noise; speckle noise is a major problem in ultrasound images. In microscopy and ultrasound, various methods of digital filtering have been used to improve image quality. Several methods of deconvolution filtering have been used to improve resolution by reversing the convolutional effects, many of which are based on regularization techniques and non-linear constraints. The technique discussed here is a unique linear filter for deconvolving 3D fluorescence microscopy or 2D ultrasound images. The approach is to solve for the filter entirely in the spatial domain, using an adaptive algorithm that converges to an optimum solution for de-blurring and resolution improvement. There are two key advantages to using an adaptive solution: (1) it efficiently solves for the filter coefficients by taking into account all sources of noise and degraded resolution at the same time, and (2) it achieves near-perfect convergence to the ideal linear deconvolution filter. This linear adaptive technique has other advantages, such as avoiding artifacts of frequency-domain transformations and concurrent adaptation to suppress noise. Ultimately, this approach results in better signal-to-noise characteristics with virtually no edge-ringing. Many researchers have not adopted linear techniques because of poor convergence, noise instability, and negative-valued data in the results. The methods presented here overcome many of these well-documented disadvantages and provide results that clearly out-perform other linear methods and may also out-perform regularization and constrained algorithms. In particular, the adaptive solution is most responsible for overcoming the poor performance associated with linear techniques. This linear adaptive approach to deconvolution is demonstrated with results of restoring blurred phantoms for both microscopy and ultrasound and restoring 3D microscope images of biological cells and 2D ultrasound images of human subjects (courtesy of General Electric and Diasonics, Inc.).
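
    The core adaptive step is the classic least-mean-square recursion; below is a minimal 1D sketch of training an inverse (deconvolution) filter with LMS (illustrative only, not the dissertation's multidimensional implementation; x, d, and the step size are assumed inputs):

      import numpy as np

      def lms_inverse_filter(x, d, n_taps=32, mu=1e-3, n_epochs=5):
          """Adapt FIR weights w so that filtering x approximates d.
          x: blurred observation, d: desired (sharp) training signal."""
          w = np.zeros(n_taps)
          for _ in range(n_epochs):
              for n in range(n_taps, len(x)):
                  u = x[n - n_taps:n][::-1]   # most recent samples first
                  e = d[n] - w @ u            # instantaneous output error
                  w += 2.0 * mu * e * u       # LMS gradient-descent update
          return w

    Convergence and stability hinge on the step size mu relative to the input power, which is the sense in which the filter converges to an optimum solution entirely in the spatial domain.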

  19. Quantification of organ motion based on an adaptive image-based scale invariant feature method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paganelli, Chiara; Peroni, Marta; Baroni, Guido

    2013-11-15

    Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter- and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast. Methods: An adaptive scale invariant feature transform (SIFT) was developed from the integration of a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained in adaptive-SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale vs exhale phase. The residual distances between the warped manual landmarks and their reference position in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets. Results: The phantom study confirmed the method's invariance and robustness to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate motion detection of peak-to-peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT, providing a motion description comparable to expert manual identification, as confirmed by DIR. Conclusions: The application of the method to a 4D lung CT patient dataset demonstrated adaptive-SIFT's potential as an automatic tool to detect landmarks for DIR regularization and internal motion quantification. Future work should include the optimization of the computational cost and the application of the method to other anatomical sites and image modalities.

  20. Curriculum Adaptation for Inclusive Classrooms.

    ERIC Educational Resources Information Center

    Neary, Tom; And Others

    This manual on curriculum adaptation for inclusive classrooms was developed as part of the PEERS (Providing Education for Everyone in Regular Schools) Project, a 5-year collaborative systems change project in California to facilitate the integration of students with severe disabilities previously at special centers into services at regular school…

  1. Rapid Motion Adaptation Reveals the Temporal Dynamics of Spatiotemporal Correlation between ON and OFF Pathways

    PubMed Central

    Oluk, Can; Pavan, Andrea; Kafaligonul, Hulusi

    2016-01-01

    At the early stages of visual processing, information is processed by two major thalamic pathways encoding brightness increments (ON) and decrements (OFF). Accumulating evidence suggests that these pathways interact and merge as early as in primary visual cortex. Using regular and reverse-phi motion in a rapid adaptation paradigm, we investigated the temporal dynamics of within- and across-pathway mechanisms for motion processing. When the adaptation duration was short (188 ms), reverse-phi and regular motion led to similar adaptation effects, suggesting that the information from the two pathways is combined efficiently at early stages of motion processing. However, as the adaptation duration was increased to 752 ms, reverse-phi and regular motion showed distinct adaptation effects depending on the test pattern used, engaging spatiotemporal correlation between either the same or opposite contrast polarities. Overall, these findings indicate that spatiotemporal correlation within and across ON-OFF pathways for motion processing can be selectively adapted, and they support models that integrate within- and across-pathway mechanisms for motion processing. PMID:27667401

  2. Joint image reconstruction method with correlative multi-channel prior for x-ray spectral computed tomography

    NASA Astrophysics Data System (ADS)

    Kazantsev, Daniil; Jørgensen, Jakob S.; Andersen, Martin S.; Lionheart, William R. B.; Lee, Peter D.; Withers, Philip J.

    2018-06-01

    Rapid developments in photon-counting and energy-discriminating detectors have the potential to provide an additional spectral dimension to conventional x-ray grayscale imaging. Reconstructed spectroscopic tomographic data can be used to distinguish individual materials by characteristic absorption peaks. The acquired energy-binned data, however, suffer from low signal-to-noise ratio, acquisition artifacts, and, frequently, angularly undersampled conditions. New regularized iterative reconstruction methods have the potential to produce higher quality images and, since energy channels are mutually correlated, it can be advantageous to exploit this additional knowledge. In this paper, we propose a novel method which jointly reconstructs all energy channels while imposing a strong structural correlation. The core of the proposed algorithm is to employ a variational framework of parallel level sets to encourage joint smoothing directions. In particular, the method selects reference channels from which to propagate structure in an adaptive and stochastic way while preferring channels with a high data signal-to-noise ratio. The method is compared with current state-of-the-art multi-channel reconstruction techniques, including channel-wise total variation and correlative total nuclear variation regularization. Realistic simulation experiments demonstrate the performance improvements achievable by using correlative regularization methods.

  3. Extended Hansen solubility approach: naphthalene in individual solvents.

    PubMed

    Martin, A; Wu, P L; Adjei, A; Beerbower, A; Prausnitz, J M

    1981-11-01

    A multiple regression method using Hansen partial solubility parameters, delta D, delta P, and delta H, was used to reproduce the solubilities of naphthalene in pure polar and nonpolar solvents and to predict its solubility in untested solvents. The method, called the extended Hansen approach, was compared with the extended Hildebrand solubility approach and the universal-functional-group-activity-coefficient (UNIFAC) method. The Hildebrand regular solution theory was also used to calculate naphthalene solubility. Naphthalene, an aromatic molecule having no side chains or functional groups, is "well-behaved", i.e., its solubility in active solvents known to interact with drug molecules is fairly regular. Because of its simplicity, naphthalene is a suitable solute with which to initiate the difficult study of solubility phenomena. The three methods tested (Hildebrand regular solution theory was introduced only for comparison of solubilities in regular solutions) yielded similar results, reproducing naphthalene solubilities within approximately 30% of literature values; in some cases, however, the error was considerably greater. The UNIFAC calculation is superior in that it requires only the solute's heat of fusion, the melting point, and a knowledge of the chemical structures of the solute and solvent. The extended Hansen and extended Hildebrand methods need experimental solubility data on which to carry out regression analysis. The extended Hansen approach was the method of second choice because of its adaptability to solutes and solvents from various classes. Sample calculations are included to illustrate methods of predicting solubilities in untested solvents at various temperatures. The UNIFAC method was successful in this regard.
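
    The regression step of the extended Hansen approach can be sketched as an ordinary least-squares fit of log solubility on the partial solubility parameters (a schematic design matrix with linear and squared terms; the published approach may use different terms and regression weights):

      import numpy as np

      def fit_extended_hansen(logS, dD, dP, dH):
          """Fit log-solubility against Hansen partial solubility
          parameters for a set of solvents (arrays of equal length)."""
          X = np.column_stack([np.ones_like(dD),
                               dD, dD**2, dP, dP**2, dH, dH**2])
          coef, *_ = np.linalg.lstsq(X, logS, rcond=None)
          return coef   # reuse with the same design matrix for new solvents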

  4. Online Calibration Methods for the DINA Model with Independent Attributes in CD-CAT

    ERIC Educational Resources Information Center

    Chen, Ping; Xin, Tao; Wang, Chun; Chang, Hua-Hua

    2012-01-01

    Item replenishing is essential for item bank maintenance in cognitive diagnostic computerized adaptive testing (CD-CAT). In regular CAT, online calibration is commonly used to calibrate the new items continuously. However, until now no reference has publicly become available about online calibration for CD-CAT. Thus, this study investigates the…

  5. Prevalence of Sufficient Physical Activity among Parents Attending a University

    ERIC Educational Resources Information Center

    Sabourin, Sharon; Irwin, Jennifer

    2008-01-01

    Objective: The benefits of regular physical activity are well documented. However, approximately half of all university students are insufficiently active, and no research to date exists on the activity behavior of university students who are also parents. Participants and Methods: Using an adapted version of the Godin Leisure Time Exercise…

  6. Consumer Mathematics. Teacher's Guide [and Student Guide]. Parallel Alternative Strategies for Students (PASS).

    ERIC Educational Resources Information Center

    Walford, Sylvia B.; Thomas, Portia R.

    This teacher's guide and student guide are designed to accompany a consumer mathematics textbook that contains supplemental readings, activities, and methods adapted for secondary students who have disabilities and other students with diverse learning needs. The materials are designed to help these students succeed in regular education content…

  7. Behavior Management in the Adaptive Environment: Evaluation of a School-Community Program for Adjudicated Youth.

    ERIC Educational Resources Information Center

    Lazzaro, Edward L.; Hosie, Thomas W.

    1979-01-01

    The principles of behavioral engineering were applied to create a program for delinquent youth to facilitate their adjustment into the regular pattern of the normal public school. Results indicated that the behavioral engineering method produced a significantly greater number of placements and significantly less recidivism. (Author)

  8. Physical Science. Teacher's Guide [and Student Guide]. Parallel Alternative Strategies for Students (PASS).

    ERIC Educational Resources Information Center

    Danner, Greg, Ed.; Fresen, Sue, Ed.

    This teacher's guide and student guide unit contains supplemental readings, activities, and methods adapted for secondary students who have disabilities and other students with diverse learning needs. The materials are designed to help these students succeed in regular education content courses and include simplified text and smaller units of…

  9. Error Augmentation Enhances Adaptability in Adults With Low Motor Ability.

    PubMed

    Lee, Chi-Mei; Bo, Jin

    2016-01-01

    The authors focused on young adults with varying degrees of motor difficulty and examined their adaptability in a visuomotor adaptation task in which the visual feedback of participants' movement error was presented at either a 1:1 ratio (i.e., regular feedback schedule) or a 1:2 ratio (i.e., enhanced feedback schedule). A within-subject design was used, with the two feedback schedules counterbalanced and separated by 10 days. Results revealed that participants with greater motor difficulties showed less adaptability than those with normal motor abilities in the regular feedback schedule; however, all participants demonstrated a similar level of adaptability in the enhanced feedback schedule. The results suggest that error augmentation enhances adaptability in adults with low motor ability.

  10. Adaptive enhanced sampling by force-biasing using neural networks

    NASA Astrophysics Data System (ADS)

    Guo, Ashley Z.; Sevgen, Emre; Sidky, Hythem; Whitmer, Jonathan K.; Hubbell, Jeffrey A.; de Pablo, Juan J.

    2018-04-01

    A machine learning assisted method is presented for molecular simulation of systems with rugged free energy landscapes. The method is general and can be combined with other advanced sampling techniques. In the particular implementation proposed here, it is illustrated in the context of an adaptive biasing force approach where, rather than relying on discrete force estimates, one can resort to a self-regularizing artificial neural network to generate continuous, estimated generalized forces. By doing so, the proposed approach addresses several shortcomings common to adaptive biasing force and other algorithms. Specifically, the neural network enables (1) smooth estimates of generalized forces in sparsely sampled regions, (2) force estimates in previously unexplored regions, and (3) continuous force estimates with which to bias the simulation, as opposed to biases generated at specific points of a discrete grid. The usefulness of the method is illustrated with three different examples, chosen to highlight the wide range of applicability of the underlying concepts. In all three cases, the new method is found to enhance considerably the underlying traditional adaptive biasing force approach. The method is also found to provide improvements over previous implementations of neural network assisted algorithms.

  11. Robust dynamic myocardial perfusion CT deconvolution for accurate residue function estimation via adaptive-weighted tensor total variation regularization: a preclinical study.

    PubMed

    Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua

    2016-11-21

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed 'MPD-AwTTV'. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization come from the anisotropic edge property of the sequential MPCT images. To minimize the associated objective function, we propose an efficient iterative optimization strategy with a fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both a digital XCAT phantom and preclinical porcine data. The preliminary experimental results demonstrate that the presented MPD-AwTTV deconvolution algorithm achieves remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation compared with other existing deconvolution algorithms in the digital phantom studies, and similar gains are obtained in the porcine data experiment.

  12. Robust dynamic myocardial perfusion CT deconvolution for accurate residue function estimation via adaptive-weighted tensor total variation regularization: a preclinical study

    NASA Astrophysics Data System (ADS)

    Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua

    2016-11-01

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed ‘MPD-AwTTV’. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization come from the anisotropic edge property of the sequential MPCT images. To minimize the associated objective function, we propose an efficient iterative optimization strategy with a fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both a digital XCAT phantom and preclinical porcine data. The preliminary experimental results demonstrate that the presented MPD-AwTTV deconvolution algorithm achieves remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation compared with other existing deconvolution algorithms in the digital phantom studies, and similar gains are obtained in the porcine data experiment.

  13. The Regularity of Sustained Firing Reveals Two Populations of Slowly Adapting Touch Receptors in Mouse Hairy Skin

    PubMed Central

    Wellnitz, Scott A.; Lesniak, Daine R.; Gerling, Gregory J.

    2010-01-01

    Touch is initiated by diverse somatosensory afferents that innervate the skin. The ability to manipulate and classify receptor subtypes is prerequisite for elucidating sensory mechanisms. Merkel cell–neurite complexes, which distinguish shapes and textures, are experimentally tractable mammalian touch receptors that mediate slowly adapting type I (SAI) responses. The assessment of SAI function in mutant mice has been hindered because previous studies did not distinguish SAI responses from slowly adapting type II (SAII) responses, which are thought to arise from different end organs, such as Ruffini endings. Thus we sought methods to discriminate these afferent types. We developed an epidermis-up ex vivo skin–nerve chamber to record action potentials from afferents while imaging Merkel cells in intact receptive fields. Using model-based cluster analysis, we found that two types of slowly adapting receptors were readily distinguished based on the regularity of touch-evoked firing patterns. We identified these clusters as SAI (coefficient of variation = 0.78 ± 0.09) and SAII responses (0.21 ± 0.09). The identity of SAI afferents was confirmed by recording from transgenic mice with green fluorescent protein–expressing Merkel cells. SAI receptive fields always contained fluorescent Merkel cells (n = 10), whereas SAII receptive fields lacked these cells (n = 5). Consistent with reports from other vertebrates, mouse SAI and SAII responses arise from afferents exhibiting similar conduction velocities, receptive field sizes, mechanical thresholds, and firing rates. These results demonstrate that mice, like other vertebrates, have two classes of slowly adapting light-touch receptors, identify a simple method to distinguish these populations, and extend the utility of skin–nerve recordings for genetic dissection of touch receptor mechanisms. PMID:20393068
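
    The discriminating statistic is simply the coefficient of variation of touch-evoked interspike intervals; a minimal computation plus a crude 1D two-cluster split (an illustrative stand-in for the paper's model-based cluster analysis) looks like this:

      import numpy as np

      def isi_cv(spike_times):
          """Coefficient of variation of interspike intervals."""
          isi = np.diff(np.sort(spike_times))
          return isi.std(ddof=1) / isi.mean()

      def split_two_clusters(cvs, n_iter=50):
          """1D 2-means: label units as regular (SAII-like, low CV)
          or irregular (SAI-like, high CV)."""
          cvs = np.asarray(cvs, dtype=float)
          c = np.array([cvs.min(), cvs.max()])   # initial cluster centres
          for _ in range(n_iter):
              lab = np.abs(cvs[:, None] - c).argmin(axis=1)
              if (lab == 0).any() and (lab == 1).any():
                  c = np.array([cvs[lab == 0].mean(), cvs[lab == 1].mean()])
          return lab, c

    With the reported cluster centres near 0.78 and 0.21, even a fixed CV threshold around 0.5 would separate the two populations almost as cleanly.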

  14. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization

    PubMed Central

    Dazard, Jean-Eudes; Xu, Hua; Rao, J. Sunil

    2015-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited to handling difficult problems posed by high-dimensional multivariate datasets (the p ≫ n paradigm), such as in ‘omics’-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features, including: (i) a normalization and/or variance stabilization function; (ii) computation of mean-variance-regularized t and F statistics; (iii) generation of diverse diagnostic plots; (iv) synthetic and real ‘omics’ test datasets; (v) a computationally efficient implementation, using C interfacing, with an option for parallel computing; and (vi) a manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR (‘Mean-Variance Regularization’), downloadable from CRAN. PMID:26819572

  15. Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction.

    PubMed

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2018-06-01

    To improve the compression rates of lossless compression for medical images, an efficient algorithm based on irregular segmentation and region-based prediction is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method that combines geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation of medical images. Least square (LS)-based predictors are then adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
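
    The region-wise least-squares prediction can be illustrated for a single pixel (a generic causal LS predictor with four neighbours, trained over a causal window; the actual algorithm designs predictors per segmented region):

      import numpy as np

      def ls_predict(img, i, j, win=8):
          """Predict img[i, j] from four causal neighbours with
          coefficients fitted by least squares over nearby causal pixels.
          Assumes 1 <= i and 1 <= j <= img.shape[1] - 2."""
          def neigh(r, c):
              return [img[r, c - 1], img[r - 1, c],
                      img[r - 1, c - 1], img[r - 1, c + 1]]
          rows, targets = [], []
          for r in range(max(1, i - win), i + 1):
              c_hi = j if r == i else min(img.shape[1] - 2, j + win) + 1
              for c in range(max(1, j - win), c_hi):
                  rows.append(neigh(r, c))       # neighbour context
                  targets.append(img[r, c])      # known causal pixel
          a, *_ = np.linalg.lstsq(np.asarray(rows, float),
                                  np.asarray(targets, float), rcond=None)
          return float(np.asarray(neigh(i, j), float) @ a)

    The residual img[i, j] - ls_predict(...) is what gets entropy-coded, so better local fits translate directly into higher compression rates.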

  16. Comparison of detrending methods for fluctuation analysis in hydrology

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang; Zhou, Yu; Singh, Vijay P.; Chen, Yongqin David

    2011-03-01

    Trends within a hydrologic time series can significantly influence the scaling results of fluctuation analysis, such as rescaled range (RS) analysis and (multifractal) detrended fluctuation analysis (MF-DFA); removal of trends is therefore important in the study of the scaling properties of the time series. In this study, three detrending methods, the adaptive detrending algorithm (ADA), a Fourier-based method, and the average removing technique, were evaluated by analyzing numerically generated series and observed streamflow series with an obvious, relatively regular periodic trend. Results indicated that: (1) the Fourier-based detrending method and ADA are similar in detrending practice and, given proper parameters, can produce similarly satisfactory results; (2) series detrended by the Fourier-based method and ADA lose fluctuation information at larger time scales, and the location of crossover points is heavily impacted by the chosen parameters of these two methods; and (3) the average removing method has an advantage over the other two methods in that the fluctuation information at larger time scales is kept well, an indication of relatively reliable detrending performance. In addition, the average removing method performed reasonably well in detrending a time series with regular periods or trends. In this sense, the average removing method should be preferred in studies of the scaling properties of hydrometeorological series with relatively regular periodic trends using MF-DFA.
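
    The average removing technique favored above admits a very short implementation: estimate the mean cycle over all periods and subtract it (a sketch assuming the period length is known and the series is truncated to whole periods):

      import numpy as np

      def remove_periodic_mean(x, period):
          """Detrend by subtracting the average cycle (average removing)."""
          n = (len(x) // period) * period        # truncate to whole periods
          cycle = x[:n].reshape(-1, period).mean(axis=0)
          return x[:n] - np.tile(cycle, n // period)

    Because only the mean cycle is removed, fluctuations at scales larger than the period survive, which is exactly the property the comparison credits to this method.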

  17. Editing the Neuronal Genome: a CRISPR View of Chromatin Regulation in Neuronal Development, Function, and Plasticity.

    PubMed

    Yang, Marty G; West, Anne E

    2016-12-01

    The dynamic orchestration of gene expression is crucial for the proper differentiation, function, and adaptation of cells. In the brain, transcriptional regulation underlies the incredible diversity of neuronal cell types and contributes to the ability of neurons to adapt their function to the environment. Recently, novel methods for genome and epigenome editing have begun to revolutionize our understanding of gene regulatory mechanisms. The clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 system has proven to be a particularly accessible and adaptable technique for genome engineering. Here, we review the use of CRISPR/Cas9 in neurobiology and discuss how these studies have advanced understanding of nervous system development and plasticity. We cover four especially salient applications of CRISPR/Cas9: testing the consequences of enhancer mutations, tagging genes and gene products for visualization in live cells, directly activating or repressing enhancers in vivo, and manipulating the epigenome. In each case, we summarize findings from recent studies and discuss evolving adaptations of the method.

  18. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method, based on minimizing the l2-norm of the response residual, employs various basis functions to represent the unknown force; its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is determined adaptively by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, Db6 wavelets, Sym4 wavelets, and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. Discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square, and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
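
    For intuition, the ℓ1 problem at the heart of this approach can be solved by plain iterative shrinkage/thresholding (ISTA below, a simpler relative of SpaRSA; the step size from the spectral norm and the variable names are our assumptions):

      import numpy as np

      def ista_force_id(H, y, lam, n_iter=500):
          """Solve min_f 0.5*||y - H f||^2 + lam*||f||_1 by ISTA.
          H: transfer matrix (columns = responses to unit basis forces),
          y: measured operational response."""
          L = np.linalg.norm(H, 2) ** 2     # Lipschitz constant of the gradient
          f = np.zeros(H.shape[1])
          for _ in range(n_iter):
              g = f + H.T @ (y - H @ f) / L                        # gradient step
              f = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)  # soft threshold
          return f

    The soft threshold zeroes most coefficients, so the number of active basis functions is determined by the data rather than fixed in advance.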

  19. Strehl-constrained reconstruction of post-adaptive optics data and the Software Package AIRY, v. 6.1

    NASA Astrophysics Data System (ADS)

    Carbillet, Marcel; La Camera, Andrea; Deguignet, Jérémy; Prato, Marco; Bertero, Mario; Aristidi, Éric; Boccacci, Patrizia

    2014-08-01

    We first briefly present the latest version of the Software Package AIRY, version 6.1, a CAOS-based tool that includes various deconvolution methods, accelerations, regularizations, super-resolution, boundary-effect reduction, point-spread function extraction/extrapolation, stopping rules, and constraints in the case of iterative blind deconvolution (IBD). We then focus on a new formulation of our Strehl-constrained IBD, quantitatively compared here with the original formulation on simulated near-infrared data of an 8-m class telescope equipped with adaptive optics (AO), showing their equivalence. Next, we extend the application of the original method to the visible domain with simulated data of an AO-equipped 1.5-m telescope, also testing the robustness of the method with respect to the Strehl ratio estimation.

  20. Regularity in an environment produces an internal torque pattern for biped balance control.

    PubMed

    Ito, Satoshi; Kawasaki, Haruhisa

    2005-04-01

    In this paper, we present a control method for achieving biped static balance under unknown periodic external forces of which only the periods are known. To maintain static balance adaptively in an uncertain environment, it is essential to have information on the ground reaction forces. However, when the biped is exposed to a steady environment that provides an external force periodically, the uncertain factors in the regularity of that environment are gradually clarified through a learning process, and a torque pattern for the balancing motion is finally acquired. Consequently, static balance is maintained without feedback from the ground reaction forces, i.e., it is achieved in a feedforward manner.

  1. The Regular Education Initiative: A Blueprint for Success. A Description of a Statewide Implementation Project in Illinois.

    ERIC Educational Resources Information Center

    Whitworth, Jerry

    This paper defines the Regular Education Initiative (REI) as encouraging both regular and special education personnel to work together more effectively to provide the best education possible for all children, by adapting the regular education environment to better accommodate the student's needs. The paper discusses the results of a statewide…

  2. Learning free energy landscapes using artificial neural networks.

    PubMed

    Sidky, Hythem; Whitmer, Jonathan K

    2018-03-14

    Existing adaptive bias techniques, which seek to estimate free energies and physical properties from molecular simulations, are limited by their reliance on fixed kernels or basis sets which hinder their ability to efficiently conform to varied free energy landscapes. Further, user-specified parameters are in general non-intuitive yet significantly affect the convergence rate and accuracy of the free energy estimate. Here we propose a novel method, wherein artificial neural networks (ANNs) are used to develop an adaptive biasing potential which learns free energy landscapes. We demonstrate that this method is capable of rapidly adapting to complex free energy landscapes and is not prone to boundary or oscillation problems. The method is made robust to hyperparameters and overfitting through Bayesian regularization which penalizes network weights and auto-regulates the number of effective parameters in the network. ANN sampling represents a promising innovative approach which can resolve complex free energy landscapes in less time than conventional approaches while requiring minimal user input.
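
    The Bayesian regularization referred to here amounts to adding a squared-weight penalty to the data misfit; a hedged sketch of such an objective, assuming fixed hyperparameters alpha and beta (in full Bayesian regularization these are re-estimated during training, which is omitted here):

        import numpy as np

        def bayesian_regularized_loss(weights, residuals, alpha, beta):
            # beta scales the data misfit; alpha penalizes the squared network
            # weights and thereby auto-limits the number of effective parameters.
            return beta * np.sum(residuals ** 2) + alpha * np.sum(weights ** 2)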

  3. Learning free energy landscapes using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Sidky, Hythem; Whitmer, Jonathan K.

    2018-03-01

    Existing adaptive bias techniques, which seek to estimate free energies and physical properties from molecular simulations, are limited by their reliance on fixed kernels or basis sets which hinder their ability to efficiently conform to varied free energy landscapes. Further, user-specified parameters are in general non-intuitive yet significantly affect the convergence rate and accuracy of the free energy estimate. Here we propose a novel method, wherein artificial neural networks (ANNs) are used to develop an adaptive biasing potential which learns free energy landscapes. We demonstrate that this method is capable of rapidly adapting to complex free energy landscapes and is not prone to boundary or oscillation problems. The method is made robust to hyperparameters and overfitting through Bayesian regularization which penalizes network weights and auto-regulates the number of effective parameters in the network. ANN sampling represents a promising innovative approach which can resolve complex free energy landscapes in less time than conventional approaches while requiring minimal user input.

  4. Oxidative stress and inflammation: liver responses and adaptations to acute and regular exercise.

    PubMed

    Pillon Barcelos, Rômulo; Freire Royes, Luiz Fernando; Gonzalez-Gallego, Javier; Bresciani, Guilherme

    2017-02-01

    The liver is remarkably important during exercise outcomes due to its contribution to detoxification, synthesis, and release of biomolecules, and energy supply to the exercising muscles. Recently, liver has been also shown to play an important role in redox status and inflammatory modulation during exercise. However, while several studies have described the adaptations of skeletal muscles to acute and chronic exercise, hepatic changes are still scarcely investigated. Indeed, acute intense exercise challenges the liver with increased reactive oxygen species (ROS) and inflammation onset, whereas regular training induces hepatic antioxidant and anti-inflammatory improvements. Acute and regular exercise protocols in combination with antioxidant and anti-inflammatory supplementation have been also tested to verify hepatic adaptations to exercise. Although positive results have been reported in some acute models, several studies have shown an increased exercise-related stress upon liver. A similar trend has been observed during training: while synergistic effects of training and antioxidant/anti-inflammatory supplementations have been occasionally found, others reported a blunting of relevant adaptations to exercise, following the patterns described in skeletal muscles. This review discusses current data regarding liver responses and adaptation to acute and regular exercise protocols alone or combined with antioxidant and anti-inflammatory supplementation. The understanding of the mechanisms behind these modulations is of interest for both exercise-related health and performance outcomes.

  5. A Modeling Approach for Burn Scar Assessment Using Natural Features and Elastic Property

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsap, L V; Zhang, Y; Goldgof, D B

    2004-04-02

    A modeling approach is presented for quantitative burn scar assessment. Emphases are given to: (1) constructing a finite element model from natural image features with an adaptive mesh, and (2) quantifying the Young's modulus of scars using the finite element model and the regularization method. A set of natural point features is extracted from the images of burn patients. A Delaunay triangle mesh is then generated that adapts to the point features. A 3D finite element model is built on top of the mesh with the aid of range images providing the depth information. The Young's modulus of scars is quantified with a simplified regularization functional, assuming that knowledge of the scar's geometry is available. The consistency between the Relative Elasticity Index and the physician's rating based on the Vancouver Scale (a relative scale used to rate burn scars) indicates that the proposed modeling approach has high potential for image-based quantitative burn scar assessment.

  6. Prediction of the binding affinities of peptides to class II MHC using a regularized thermodynamic model

    PubMed Central

    2010-01-01

    Background The binding of peptide fragments of extracellular peptides to class II MHC is a crucial event in the adaptive immune response. Each MHC allotype generally binds a distinct subset of peptides and the enormous number of possible peptide epitopes prevents their complete experimental characterization. Computational methods can utilize the limited experimental data to predict the binding affinities of peptides to class II MHC. Results We have developed the Regularized Thermodynamic Average, or RTA, method for predicting the affinities of peptides binding to class II MHC. RTA accounts for all possible peptide binding conformations using a thermodynamic average and includes a parameter constraint for regularization to improve accuracy on novel data. RTA was shown to achieve higher accuracy, as measured by AUC, than SMM-align on the same data for all 17 MHC allotypes examined. RTA also gave the highest accuracy on all but three allotypes when compared with results from 9 different prediction methods applied to the same data. In addition, the method correctly predicted the peptide binding register of 17 out of 18 peptide-MHC complexes. Finally, we found that suboptimal peptide binding registers, which are often ignored in other prediction methods, made significant contributions of at least 50% of the total binding energy for approximately 20% of the peptides. Conclusions The RTA method accurately predicts peptide binding affinities to class II MHC and accounts for multiple peptide binding registers while reducing overfitting through regularization. The method has potential applications in vaccine design and in understanding autoimmune disorders. A web server implementing the RTA prediction method is available at http://bordnerlab.org/RTA/. PMID:20089173
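
    The thermodynamic average over binding conformations can be illustrated with a Boltzmann-weighted free energy over candidate binding registers; a minimal sketch, assuming register energies given in units of kT (illustrative, not the RTA implementation):

        import numpy as np

        def register_averaged_free_energy(register_energies, beta=1.0):
            # Every candidate binding register contributes, weighted by its
            # Boltzmann factor, so suboptimal registers are not discarded.
            e = np.asarray(register_energies, dtype=float)
            return -np.log(np.exp(-beta * e).sum()) / beta

    Averaging rather than taking the minimum is what lets suboptimal registers contribute their reported share of the total binding energy.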

  7. Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.

    PubMed

    Pang, Jiahao; Cheung, Gene

    2017-04-01

    Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recent popular prior, the graph Laplacian regularizer, assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
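
    With a quadratic data-fidelity term, imposing the graph Laplacian regularizer reduces denoising to a linear solve; a small sketch for a vectorized patch y on a graph with adjacency matrix W (names are illustrative):

        import numpy as np

        def graph_laplacian_denoise(y, W, lam):
            # argmin_x ||x - y||^2 + lam * x^T L x, with L = D - W the
            # combinatorial graph Laplacian; the minimizer solves (I + lam L) x = y.
            L = np.diag(W.sum(axis=1)) - W
            return np.linalg.solve(np.eye(len(y)) + lam * L, y)

    For patches of realistic size a sparse solver would replace the dense solve; the structure of the problem is unchanged.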

  8. Cone Photoreceptor Irregularity on Adaptive Optics Scanning Laser Ophthalmoscopy Correlates With Severity of Diabetic Retinopathy and Macular Edema

    PubMed Central

    Lammer, Jan; Prager, Sonja G.; Cheney, Michael C.; Ahmed, Amel; Radwan, Salma H.; Burns, Stephen A.; Silva, Paolo S.; Sun, Jennifer K.

    2016-01-01

    Purpose To determine whether cone density, spacing, or regularity in eyes with and without diabetes (DM) as assessed by high-resolution adaptive optics scanning laser ophthalmoscopy (AOSLO) correlates with presence of diabetes, diabetic retinopathy (DR) severity, or presence of diabetic macular edema (DME). Methods Participants with type 1 or 2 DM and healthy controls underwent AOSLO imaging of four macular regions. Cone assessment was performed by independent graders for cone density, packing factor (PF), nearest neighbor distance (NND), and Voronoi tile area (VTA). Regularity indices (mean/SD) of NND (RI-NND) and VTA (RI-VTA) were calculated. Results Fifty-three eyes (53 subjects) were assessed. Mean ± SD age was 44 ± 12 years; 81% had DM (duration: 22 ± 13 years; glycated hemoglobin [HbA1c]: 8.0 ± 1.7%; DM type 1: 72%). No significant relationship was found between DM, HbA1c, or DR severity and cone density or spacing parameters. However, decreased regularity of cone arrangement in the macular quadrants was correlated with presence of DM (RI-NND: P = 0.04; RI-VTA: P = 0.04), increasing DR severity (RI-NND: P = 0.04), and presence of DME (RI-VTA: P = 0.04). Eyes with DME were associated with decreased density (P = 0.04), PF (P = 0.03), and RI-VTA (0.04). Conclusions Although absolute cone density and spacing don't appear to change substantially in DM, decreased regularity of the cone arrangement is consistently associated with the presence of DM, increasing DR severity, and DME. Future AOSLO evaluation of cone regularity is warranted to determine whether these changes are correlated with, or predict, anatomic or functional deficits in patients with DM. PMID:27926754

  9. A novel scene-based non-uniformity correction method for SWIR push-broom hyperspectral sensors

    NASA Astrophysics Data System (ADS)

    Hu, Bin-Lin; Hao, Shi-Jing; Sun, De-Xin; Liu, Yin-Nian

    2017-09-01

    A novel scene-based non-uniformity correction (NUC) method for short-wavelength infrared (SWIR) push-broom hyperspectral sensors is proposed and evaluated. This method relies on the assumption that, for each band, ground objects with similar reflectance will form uniform regions once a sufficient number of scanning lines are acquired. The uniform regions are extracted automatically through a sorting algorithm and are used to compute the corresponding NUC coefficients. SWIR hyperspectral data from an airborne experiment are used to verify and evaluate the proposed method, and the results show that stripes in the scenes are well corrected without any significant information loss, with the residual non-uniformity below 0.5%. In addition, the proposed method is compared to two other standard methods, with all three evaluated on their adaptability to various scenes, non-uniformity, roughness and spectral fidelity. The proposed method shows strong adaptability, high accuracy and high efficiency.
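
    The correction step can be sketched as moment matching across detector columns: once enough scan lines are acquired, every column should see statistically similar ground radiance, so per-column gains follow from a robust column statistic. This is a simplified stand-in for the sorting-based uniform-region extraction described above, with illustrative names:

        import numpy as np

        def column_nuc_gains(band, q=50):
            # band: (n_lines, n_columns) push-broom image of one spectral band;
            # assumes enough lines that columns are statistically comparable.
            level = np.percentile(band, q, axis=0)   # robust per-column level
            return level.mean() / level              # multiplicative NUC gains

        def apply_nuc(band, gains):
            return band * gains[np.newaxis, :]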

  10. Adaptive laboratory evolution -- principles and applications for biotechnology.

    PubMed

    Dragosits, Martin; Mattanovich, Diethard

    2013-07-01

    Adaptive laboratory evolution is a frequently used method in biological studies to gain insights into the basic mechanisms of molecular evolution and the adaptive changes that accumulate in microbial populations during long-term selection under specified growth conditions. Although it has been performed regularly for more than 25 years, the advent of transcript profiling and cheap next-generation sequencing technologies has resulted in many recent studies that successfully applied this technique in order to engineer microbial cells for biotechnological applications. Adaptive laboratory evolution has some major benefits as compared with classical genetic engineering but also some inherent limitations. However, recent studies show how some of the limitations may be overcome in order to successfully incorporate adaptive laboratory evolution in microbial cell factory design. Over the last two decades important insights into the nutrient and stress metabolism of relevant model species were acquired, whereas some other aspects, such as niche-specific differences of non-conventional cell factories, are not completely understood. Altogether, the current status and its future perspectives highlight the importance and potential of adaptive laboratory evolution as an approach in biotechnological engineering.

  11. Image Reconstruction from Under sampled Fourier Data Using the Polynomial Annihilation Transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archibald, Richard K.; Gelb, Anne; Platte, Rodrigo

    Fourier samples are collected in a variety of applications including magnetic resonance imaging and synthetic aperture radar. The data are typically under-sampled and noisy. In recent years, l1 regularization has received considerable attention in designing image reconstruction algorithms from under-sampled and noisy Fourier data. The underlying image is assumed to have some sparsity features, that is, some measurable features of the image have sparse representation. The reconstruction algorithm is typically designed to solve a convex optimization problem, which consists of a fidelity term penalized by one or more l1 regularization terms. The Split Bregman Algorithm provides a fast explicit solution for the case when TV is used for the l1 regularization terms. Due to its numerical efficiency, it has been widely adopted for a variety of applications. A well known drawback in using TV as an l1 regularization term is that the reconstructed image will tend to default to a piecewise constant image. This issue has been addressed in several ways. Recently, the polynomial annihilation edge detection method was used to generate a higher order sparsifying transform, and was coined the "polynomial annihilation (PA) transform." This paper adapts the Split Bregman Algorithm for the case when the PA transform is used as the l1 regularization term. In so doing, we achieve a more accurate image reconstruction method from under-sampled and noisy Fourier data. Our new method compares favorably to the TV Split Bregman Algorithm, as well as to the popular TGV combined with shearlet approach.
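
    A hedged sketch of the Split Bregman iteration for such an l1-penalized reconstruction, with a generic sparsifying transform T standing in for the PA transform and small dense matrices used for clarity (a practical implementation would use the Fourier operator and fast solvers):

        import numpy as np

        def split_bregman(A, T, y, lam, mu, n_iter=50):
            # min_x 0.5*||A x - y||^2 + lam*||T x||_1, via the splitting d = T x.
            d = np.zeros(T.shape[0])
            b = np.zeros(T.shape[0])
            M = A.T @ A + mu * T.T @ T                 # fixed normal-equation matrix
            for _ in range(n_iter):
                x = np.linalg.solve(M, A.T @ y + mu * T.T @ (d - b))
                s = T @ x + b
                d = np.sign(s) * np.maximum(np.abs(s) - lam / mu, 0.0)  # shrinkage
                b = s - d                              # Bregman variable update
            return x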

  12. Z-Index Parameterization for Volumetric CT Image Reconstruction via 3-D Dictionary Learning.

    PubMed

    Bai, Ti; Yan, Hao; Jia, Xun; Jiang, Steve; Wang, Ge; Mou, Xuanqin

    2017-12-01

    Despite the rapid developments of X-ray cone-beam CT (CBCT), image noise still remains a major issue for low dose CBCT. To suppress the noise effectively while retaining the structures well in low dose CBCT images, in this paper, a sparse constraint based on a 3-D dictionary is incorporated into a regularized iterative reconstruction framework, defining the 3-D dictionary learning (3-DDL) method. In addition, by analyzing the sparsity level curve associated with different regularization parameters, a new adaptive parameter selection strategy is proposed to facilitate our 3-DDL method. To justify the proposed method, we first analyze the distributions of the representation coefficients associated with the 3-D dictionary and the conventional 2-D dictionary to compare their efficiencies in representing volumetric images. Then, multiple real data experiments are conducted for performance validation. Based on these results, we found: 1) the 3-D dictionary-based sparse coefficients have a Laplacian distribution three orders of magnitude narrower than that of the 2-D dictionary, suggesting the higher representation efficiency of the 3-D dictionary; 2) the sparsity level curve demonstrates a clear Z-shape, and is hence referred to as the Z-curve in this paper; 3) the parameter associated with the maximum curvature point of the Z-curve suggests a good parameter choice, which can be adaptively located with the proposed Z-index parameterization (ZIP) method; 4) the proposed 3-DDL algorithm equipped with the ZIP method delivers reconstructions with the lowest root mean squared errors and the highest structural similarity index compared with the competing methods; 5) noise performance similar to the regular dose FDK reconstruction, in terms of the standard deviation metric, can be achieved with the proposed method using (1/2)/(1/4)/(1/8) dose level projections. The contrast-noise ratio is improved by ~2.5/3.5 times with respect to two different cases under the (1/8) dose level compared with the low dose FDK reconstruction. The proposed method is expected to reduce the radiation dose by a factor of 8 for CBCT, considering the strongly discriminated low contrast tissues.
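
    The ZIP selection step can be sketched as locating the maximum-curvature point of the sparsity level plotted against the log-scaled regularization parameter; a minimal sketch with illustrative names (the curve values would come from reconstructions at each candidate parameter):

        import numpy as np

        def zip_select(lams, sparsity_levels):
            # Curvature of the Z-shaped curve s(log lam); the parameter at the
            # maximum-curvature point is returned as the adaptive choice.
            t = np.log(np.asarray(lams, dtype=float))
            s = np.asarray(sparsity_levels, dtype=float)
            ds = np.gradient(s, t)
            d2s = np.gradient(ds, t)
            kappa = np.abs(d2s) / (1.0 + ds ** 2) ** 1.5
            return lams[int(np.argmax(kappa))]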

  13. Lipschitz regularity for integro-differential equations with coercive Hamiltonians and application to large time behavior

    NASA Astrophysics Data System (ADS)

    Barles, Guy; Ley, Olivier; Topp, Erwin

    2017-02-01

    In this paper, we provide suitable adaptations of the ‘weak version of the Bernstein method’ introduced by the first author in 1991, in order to obtain Lipschitz regularity results and Lipschitz estimates for nonlinear integro-differential elliptic and parabolic equations set in the whole space. Our interest is to obtain such Lipschitz results for possibly degenerate equations, or for equations which are indeed ‘uniformly elliptic’ (maybe in the nonlocal sense) but which do not satisfy the usual ‘growth condition’ on the gradient term that allows one to use (for example) the Ishii-Lions method. We treat the case of a model equation with a superlinear coercivity on the gradient term, which has a leading role in the equation. This regularity result, together with the comparison principle provided for the problem, allows us to obtain the ergodic large time behavior of the evolution problem in the periodic setting.

  14. Participation in regular classroom of student with hearing loss: frequency modulation system use.

    PubMed

    Jacob, Regina Tangerino de Souza; Alves, Tacianne Kriscia Machado; Moret, Adriane Lima Mortari; Morettin, Marina; Santos, Larissa Germiniani Dos; Mondelli, Maria Fernanda Capoani Garcia

    2014-01-01

    To translate and adapt the Classroom Participation Questionnaire (CPQ) into Portuguese and to compare the participation in regular classrooms of students with hearing impairment with and without the use of a frequency modulation (FM) system. The translation and adaptation of the CPQ included the translation into Portuguese, linguistic adaptation, and review of grammatical and idiomatic equivalences. The questionnaire was administered to 15 children and teenagers using hearing aids (HA) and/or cochlear implants (CI), fitted with a personal FM system. The translation of the English CPQ into the Portuguese instrument resulted in the "Questionário de participação em sala de aula", with the same number of questions as the original version; regarding linguistic adaptation, no difficulty was observed in the understanding of the items proposed in the application for students with hearing loss. The CPQ instrument was translated and culturally adapted for the Brazilian population, being named "Questionário de participação em sala de aula" in the Portuguese version. The study contributes to observation and monitoring of participation in regular classrooms of students with hearing impairment using the FM system. In general, students reported increased confidence and participation in the classroom with the use of the FM system.

  15. Kalman Filtered Bio Heat Transfer Model Based Self-adaptive Hybrid Magnetic Resonance Thermometry.

    PubMed

    Zhang, Yuxin; Chen, Shuo; Deng, Kexin; Chen, Bingyao; Wei, Xing; Yang, Jiafei; Wang, Shi; Ying, Kui

    2017-01-01

    To develop a self-adaptive and fast thermometry method by combining the original hybrid magnetic resonance thermometry method with the bio heat transfer equation (BHTE) model. The proposed Kalman filtered Bio Heat Transfer Model Based Self-adaptive Hybrid Magnetic Resonance Thermometry, abbreviated as the KalBHT hybrid method, introduces the BHTE model to synthesize a window on the regularization term of the hybrid algorithm, which leads to a regularization that is self-adaptive both spatially and temporally with the change of temperature. Further, to decrease the sensitivity to the accuracy of the BHTE model, a Kalman filter is utilized to update the window at each iteration. To investigate the effect of the proposed model, a computer heating simulation, a phantom microwave heating experiment, and dynamic in-vivo model validation of the liver and a thoracic tumor were conducted in this study. The heating simulation indicates that the KalBHT hybrid algorithm achieves more accurate results without adjusting λ to a proper value, in comparison to the hybrid algorithm. The results of the phantom heating experiment illustrate that the proposed model is able to follow temperature changes in the presence of motion, and the estimated temperature also shows less noise in the background and surrounding the hot spot. The dynamic in-vivo model validation with heating simulation demonstrates that the proposed model has a higher convergence rate, more robustness to the susceptibility problem surrounding the hot spot, and more accurate temperature estimation. In the healthy liver experiment with heating simulation, the RMSE of the hot spot of the proposed model is reduced to about 50% of the RMSE of the original hybrid model, and the convergence time becomes only about one fifth of that of the hybrid model. The proposed model is able to improve the accuracy of the original hybrid algorithm and accelerate the convergence rate of MR temperature estimation.
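
    The Kalman step used to refine the BHTE-derived regularization window is, at its core, the standard measurement update; a generic sketch with the usual textbook matrix names (not the paper's notation):

        import numpy as np

        def kalman_update(x_pred, P_pred, z, H, R):
            # Blend the model prediction x_pred with the new measurement z
            # according to their covariances (P_pred: prediction covariance,
            # R: measurement noise covariance, H: observation matrix).
            S = H @ P_pred @ H.T + R                 # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
            x = x_pred + K @ (z - H @ x_pred)
            P = (np.eye(len(x_pred)) - K @ H) @ P_pred
            return x, P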

  16. A Discontinuous Petrov-Galerkin Methodology for Adaptive Solutions to the Incompressible Navier-Stokes Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, Nathan V.; Demkowiz, Leszek; Moser, Robert

    2015-11-15

    The discontinuous Petrov-Galerkin methodology with optimal test functions (DPG) of Demkowicz and Gopalakrishnan [18, 20] guarantees the optimality of the solution in an energy norm, and provides several features facilitating adaptive schemes. Whereas Bubnov-Galerkin methods use identical trial and test spaces, Petrov-Galerkin methods allow these function spaces to differ. In DPG, test functions are computed on the fly and are chosen to realize the supremum in the inf-sup condition; the method is equivalent to a minimum residual method. For well-posed problems with sufficiently regular solutions, DPG can be shown to converge at optimal rates—the inf-sup constants governing the convergence are mesh-independent, and of the same order as those governing the continuous problem [48]. DPG also provides an accurate mechanism for measuring the error, and this can be used to drive adaptive mesh refinements. We employ DPG to solve the steady incompressible Navier-Stokes equations in two dimensions, building on previous work on the Stokes equations, and focusing particularly on the usefulness of the approach for automatic adaptivity starting from a coarse mesh. We apply our approach to a manufactured solution due to Kovasznay as well as the lid-driven cavity flow, backward-facing step, and flow past a cylinder problems.

  17. Adaptive removal of background and white space from document images using seam categorization

    NASA Astrophysics Data System (ADS)

    Fillion, Claude; Fan, Zhigang; Monga, Vishal

    2011-03-01

    Document images are obtained regularly by rasterization of document content and as scans of printed documents. Resizing via background and white space removal is often desired for better consumption of these images, whether on displays or in print. While white space and background are easy to identify in images, existing methods such as naïve removal and content aware resizing (seam carving) each have limitations that can lead to undesirable artifacts, such as uneven spacing between lines of text or poor arrangement of content. An adaptive method based on image content is hence needed. In this paper we propose an adaptive method to intelligently remove white space and background content from document images. Document images are different from pictorial images in structure. They typically contain objects (text letters, pictures and graphics) separated by uniform background, which include both white paper space and other uniform color background. Pixels in uniform background regions are excellent candidates for deletion if resizing is required, as they introduce less change in document content and style, compared with deletion of object pixels. We propose a background deletion method that exploits both local and global context. The method aims to retain the document structural information and image quality.

  18. Fluorescence molecular tomography reconstruction via discrete cosine transform-based regularization

    NASA Astrophysics Data System (ADS)

    Shi, Junwei; Liu, Fei; Zhang, Jiulou; Luo, Jianwen; Bai, Jing

    2015-05-01

    Fluorescence molecular tomography (FMT) as a noninvasive imaging modality has been widely used for biomedical preclinical applications. However, FMT reconstruction suffers from severe ill-posedness, especially when a limited number of projections are used. In order to improve the quality of FMT reconstruction results, a discrete cosine transform (DCT) based reweighted L1-norm regularization algorithm is proposed. In each iteration of the reconstruction process, different reweighted regularization parameters are adaptively assigned according to the values of DCT coefficients to suppress the reconstruction noise. In addition, the permission region of the reconstructed fluorophores is adaptively constructed to increase the convergence speed. In order to evaluate the performance of the proposed algorithm, physical phantom and in vivo mouse experiments with a limited number of projections are carried out. For comparison, different L1-norm regularization strategies are employed. By quantifying the signal-to-noise ratio (SNR) of the reconstruction results in the phantom and in vivo mouse experiments with four projections, the proposed DCT-based reweighted L1-norm regularization shows higher SNR than other L1-norm regularizations employed in this work.
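
    The reweighting idea fits in a few lines: after each iteration, every DCT coefficient of the current estimate receives a regularization weight inversely related to its magnitude, so strong (signal) coefficients are penalized less than weak (noisy) ones. A hedged sketch; the epsilon floor and names are illustrative:

        import numpy as np
        from scipy.fft import dct

        def dct_reweights(x, eps=1e-3):
            # Per-coefficient weights for the next reweighted-L1 iteration,
            # adaptively assigned from the current estimate's DCT coefficients.
            c = dct(np.asarray(x, dtype=float), norm='ortho')
            return 1.0 / (np.abs(c) + eps)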

  19. Domain-Invariant Partial-Least-Squares Regression.

    PubMed

    Nikzad-Langerodi, Ramin; Zellinger, Werner; Lughofer, Edwin; Saminger-Platz, Susanne

    2018-05-11

    Multivariate calibration models often fail to extrapolate beyond the calibration samples because of changes associated with the instrumental response, environmental condition, or sample matrix. Most of the current methods used to adapt a source calibration model to a target domain exclusively apply to calibration transfer between similar analytical devices, while generic methods for calibration-model adaptation are largely missing. To fill this gap, we here introduce domain-invariant partial-least-squares (di-PLS) regression, which extends ordinary PLS by a domain regularizer in order to align the source and target distributions in the latent-variable space. We show that a domain-invariant weight vector can be derived in closed form, which allows the integration of (partially) labeled data from the source and target domains as well as entirely unlabeled data from the latter. We test our approach on a simulated data set where the aim is to desensitize a source calibration model to an unknown interfering agent in the target domain (i.e., unsupervised model adaptation). In addition, we demonstrate unsupervised, semisupervised, and supervised model adaptation by di-PLS on two real-world near-infrared (NIR) spectroscopic data sets.

  20. Editing the Neuronal Genome: a CRISPR View of Chromatin Regulation in Neuronal Development, Function, and Plasticity

    PubMed Central

    Yang, Marty G.; West, Anne E.

    2016-01-01

    The dynamic orchestration of gene expression is crucial for the proper differentiation, function, and adaptation of cells. In the brain, transcriptional regulation underlies the incredible diversity of neuronal cell types and contributes to the ability of neurons to adapt their function to the environment. Recently, novel methods for genome and epigenome editing have begun to revolutionize our understanding of gene regulatory mechanisms. In particular, the clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 system has proven to be an accessible and adaptable technique for genome engineering. Here, we review the use of CRISPR/Cas9 in neurobiology and discuss how these studies have advanced understanding of nervous system development and plasticity. We cover four especially salient applications of CRISPR/Cas9: testing the consequences of enhancer mutations, tagging genes and gene products for visualization in live cells, directly activating or repressing enhancers in vivo, and manipulating the epigenome. In each case, we summarize findings from recent studies and discuss evolving adaptations of the method. PMID:28018138

  1. Improved liver R2* mapping by pixel-wise curve fitting with adaptive neighborhood regularization.

    PubMed

    Wang, Changqing; Zhang, Xinyuan; Liu, Xiaoyun; He, Taigang; Chen, Wufan; Feng, Qianjin; Feng, Yanqiu

    2018-08-01

    To improve liver R2* mapping by incorporating adaptive neighborhood regularization into pixel-wise curve fitting. Magnetic resonance imaging R2* mapping remains challenging because of the serial images with low signal-to-noise ratio. In this study, we proposed to exploit the neighboring pixels as regularization terms and adaptively determine the regularization parameters according to the interpixel signal similarity. The proposed algorithm, called pixel-wise curve fitting with adaptive neighborhood regularization (PCANR), was compared with the conventional nonlinear least squares (NLS) and nonlocal means filter-based NLS algorithms on simulated, phantom, and in vivo data. Visually, the PCANR algorithm generates R2* maps with significantly reduced noise and well-preserved tiny structures. Quantitatively, the PCANR algorithm produces R2* maps with lower root mean square errors at varying R2* values and signal-to-noise-ratio levels compared with the NLS and nonlocal means filter-based NLS algorithms. For high R2* values under low signal-to-noise-ratio levels, the PCANR algorithm outperforms the NLS and nonlocal means filter-based NLS algorithms in accuracy and precision, in terms of the mean and standard deviation of R2* measurements in selected regions of interest, respectively. The PCANR algorithm can reduce the effect of noise on liver R2* mapping, and the improved measurement precision will benefit the assessment of hepatic iron in clinical practice. Magn Reson Med 80:792-801, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
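
    The neighborhood regularization can be illustrated with a weighted log-linear fit of the mono-exponential decay S(TE) = S0 * exp(-R2* * TE), where each neighboring pixel's decay curve enters the fit with a weight derived from its similarity to the center pixel. A simplified sketch under these assumptions, not the published PCANR algorithm:

        import numpy as np

        def r2star_neighborhood_fit(te, curves, h=0.05):
            # curves[0] is the center pixel's multi-echo signal (assumed positive);
            # the rest are neighbors. Similar neighbors get weights near 1,
            # dissimilar ones near 0, giving adaptive regularization strength.
            te = np.asarray(te, dtype=float)
            center = np.asarray(curves[0], dtype=float)
            A = np.column_stack([np.ones_like(te), -te])  # log S = log S0 - R2* * TE
            lhs, rhs = np.zeros((2, 2)), np.zeros(2)
            for c in curves:
                c = np.asarray(c, dtype=float)
                w = np.exp(-np.mean((c - center) ** 2) / h)
                lhs += w * A.T @ A
                rhs += w * A.T @ np.log(c)
            log_s0, r2s = np.linalg.solve(lhs, rhs)
            return r2s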

  2. Motion-adaptive spatio-temporal regularization for accelerated dynamic MRI.

    PubMed

    Asif, M Salman; Hamilton, Lei; Brummer, Marijn; Romberg, Justin

    2013-09-01

    Accelerated magnetic resonance imaging techniques reduce signal acquisition time by undersampling k-space. A fundamental problem in accelerated magnetic resonance imaging is the recovery of quality images from undersampled k-space data. Current state-of-the-art recovery algorithms exploit the spatial and temporal structures in underlying images to improve the reconstruction quality. In recent years, compressed sensing theory has helped formulate mathematical principles and conditions that ensure recovery of (structured) sparse signals from undersampled, incoherent measurements. In this article, a new recovery algorithm, motion-adaptive spatio-temporal regularization, is presented that uses the spatial and temporal structured sparsity of MR images in the compressed sensing framework to recover dynamic MR images from highly undersampled k-space data. In contrast to existing algorithms, our proposed algorithm models temporal sparsity using motion-adaptive linear transformations between neighboring images. The efficiency of motion-adaptive spatio-temporal regularization is demonstrated with experiments on cardiac magnetic resonance imaging for a range of reduction factors. Results are also compared with k-t FOCUSS with motion estimation and compensation, another recently proposed recovery algorithm for dynamic magnetic resonance imaging. Copyright © 2012 Wiley Periodicals, Inc.

  3. Regularized spherical polar fourier diffusion MRI with optimal dictionary learning.

    PubMed

    Cheng, Jian; Jiang, Tianzi; Deriche, Rachid; Shen, Dinggang; Yap, Pew-Thian

    2013-01-01

    Compressed Sensing (CS) takes advantage of signal sparsity or compressibility and allows superb signal reconstruction from relatively few measurements. Based on CS theory, a suitable dictionary for sparse representation of the signal is required. In diffusion MRI (dMRI), CS methods proposed for reconstruction of the diffusion-weighted signal and the Ensemble Average Propagator (EAP) utilize two kinds of Dictionary Learning (DL) methods: 1) Discrete Representation DL (DR-DL), and 2) Continuous Representation DL (CR-DL). DR-DL is susceptible to numerical inaccuracy owing to interpolation and regridding errors in a discretized q-space. In this paper, we propose a novel CR-DL approach, called Dictionary Learning - Spherical Polar Fourier Imaging (DL-SPFI), for effective compressed-sensing reconstruction of the q-space diffusion-weighted signal and the EAP. In DL-SPFI, a dictionary that sparsifies the signal is learned from the space of continuous Gaussian diffusion signals. The learned dictionary is then adaptively applied to different voxels using a weighted LASSO framework for robust signal reconstruction. Compared with the state-of-the-art CR-DL and DR-DL methods proposed by Merlet et al. and Bilgic et al., respectively, our work offers the following advantages. First, the learned dictionary is proved to be optimal for Gaussian diffusion signals. Second, to our knowledge, this is the first work to learn a voxel-adaptive dictionary. The importance of the adaptive dictionary in EAP reconstruction is demonstrated theoretically and empirically. Third, optimization in DL-SPFI is performed only in a small subspace spanned by the SPF coefficients, as opposed to the q-space approach utilized by Merlet et al. We experimentally evaluated DL-SPFI with respect to L1-norm regularized SPFI (L1-SPFI), which uses the original SPF basis, and the DR-DL method proposed by Bilgic et al. The experimental results on synthetic and real data indicate that the learned dictionary produces sparser coefficients than the original SPF basis and results in significantly lower reconstruction error than Bilgic et al.'s method.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Q; Han, H; Xing, L

    Purpose: Dictionary learning based methods have attracted increasing attention in low-dose CT due to their superior performance in suppressing noise and preserving structural details. Considering that the structures and noise vary from region to region in one imaging object, we propose a region-specific dictionary learning method to improve low-dose CT reconstruction. Methods: A set of normal-dose images was used for dictionary learning. Segmentations were performed on these images, so that the training patch sets corresponding to different regions could be extracted. After that, region-specific dictionaries were learned from these training sets. For the low-dose CT reconstruction, a conventional reconstruction, such as filtered back-projection (FBP), was performed first, and segmentation then followed to divide the image into different regions. Sparsity constraints of each region based on its dictionary were used as regularization terms. The regularization parameters were selected adaptively according to the different regions. A low-dose human thorax dataset was used to evaluate the proposed method. The single dictionary based method was performed for comparison. Results: Since the lung region is very different from the other parts of the thorax, two dictionaries, corresponding to the lung region and the rest of the thorax respectively, were learned to better express the structural details and avoid artifacts. With only one dictionary, some artifacts appeared in the body region, caused by the spot atoms corresponding to structures in the lung region, and some structures in the lung region could not be recovered well. The quantitative indices of the result by the proposed method were also slightly improved compared to the single dictionary based method. Conclusion: A region-specific dictionary can make the dictionary more adaptive to different region characteristics, which is much desirable for enhancing the performance of dictionary learning based methods.

  5. Robust dynamic myocardial perfusion CT deconvolution using adaptive-weighted tensor total variation regularization

    NASA Astrophysics Data System (ADS)

    Gong, Changfei; Zeng, Dong; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua

    2016-03-01

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for diagnosis and risk stratification of coronary artery disease by assessing the myocardial perfusion hemodynamic maps (MPHM). Meanwhile, the repeated scanning of the same region potentially results in a relatively large radiation dose to patients. In this work, we present a robust MPCT deconvolution algorithm with adaptive-weighted tensor total variation regularization to estimate the residue function accurately in the low-dose context, termed 'MPD-AwTTV'. More specifically, the AwTTV regularization takes into account the anisotropic edge property of the MPCT images compared with the conventional total variation (TV) regularization, which can mitigate the drawbacks of TV regularization. Subsequently, an effective iterative algorithm was adopted to minimize the associated objective function. Experimental results on a modified XCAT phantom demonstrated that the present MPD-AwTTV algorithm outperforms other existing deconvolution algorithms in terms of noise-induced artifact suppression, edge detail preservation, and accurate MPHM estimation.

  6. A Time-of-Flight Method to Measure the Speed of Sound Using a Stereo Sound Card

    ERIC Educational Resources Information Center

    Carvalho, Carlos C.; dos Santos, J. M. B. Lopes; Marques, M. B.

    2008-01-01

    Most homes in developed countries have a sophisticated data acquisition board, namely the PC sound board. Designed to be able to reproduce CD-quality stereo sound, it must have a sampling rate of at least 44 kHz and have very accurate timing between the two stereo channels. With a very simple adaptation of a pair of regular PC microphones, a…

  7. The quasi-optimality criterion in the linear functional strategy

    NASA Astrophysics Data System (ADS)

    Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey

    2018-07-01

    The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications that include the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules, taking into account the smoothness of the solution and the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both a deterministic and a stochastic setup and verify that, for mildly ill-posed problems and Gaussian noise, these conditions are satisfied almost surely, whereas in the severely ill-posed case and in a similar setup, the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.
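
    The heuristic quasi-optimality rule itself is simple to state: over a geometric grid of regularization parameters, pick the one at which consecutive regularized solutions change the least. A minimal sketch, assuming the regularized solutions have already been computed:

        import numpy as np

        def quasi_optimality(lams, solutions):
            # solutions[k] is the regularized solution for parameter lams[k] on a
            # geometric grid; return the parameter minimizing ||x_{k+1} - x_k||.
            diffs = [np.linalg.norm(solutions[k + 1] - solutions[k])
                     for k in range(len(solutions) - 1)]
            return lams[int(np.argmin(diffs))]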

  8. Regularization with numerical extrapolation for finite and UV-divergent multi-loop integrals

    NASA Astrophysics Data System (ADS)

    de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Kapenga, J.; Olagbemi, O.

    2018-03-01

    We give numerical integration results for Feynman loop diagrams such as those covered by Laporta (2000) and by Baikov and Chetyrkin (2010), and which may give rise to loop integrals with UV singularities. We explore automatic adaptive integration using multivariate techniques from the PARINT package for multivariate integration, as well as iterated integration with programs from the QUADPACK package, and a trapezoidal method based on a double exponential transformation. PARINT is layered over MPI (Message Passing Interface), and incorporates advanced parallel/distributed techniques including load balancing among processes that may be distributed over a cluster or a network/grid of nodes. Results are included for 2-loop vertex and box diagrams and for sets of 2-, 3- and 4-loop self-energy diagrams with or without UV terms. Numerical regularization of integrals with singular terms is achieved by linear and non-linear extrapolation methods.

  9. AIDA: an adaptive image deconvolution algorithm with application to multi-frame and three-dimensional data

    PubMed Central

    Hom, Erik F. Y.; Marchis, Franck; Lee, Timothy K.; Haase, Sebastian; Agard, David A.; Sedat, John W.

    2011-01-01

    We describe an adaptive image deconvolution algorithm (AIDA) for myopic deconvolution of multi-frame and three-dimensional data acquired through astronomical and microscopic imaging. AIDA is a reimplementation and extension of the MISTRAL method developed by Mugnier and co-workers and shown to yield object reconstructions with excellent edge preservation and photometric precision [J. Opt. Soc. Am. A 21, 1841 (2004)]. Written in Numerical Python with calls to a robust constrained conjugate gradient method, AIDA has significantly improved run times over the original MISTRAL implementation. Included in AIDA is a scheme to automatically balance maximum-likelihood estimation and object regularization, which significantly decreases the amount of time and effort needed to generate satisfactory reconstructions. We validated AIDA using synthetic data spanning a broad range of signal-to-noise ratios and image types and demonstrated the algorithm to be effective for experimental data from adaptive optics–equipped telescope systems and wide-field microscopy. PMID:17491626

  10. Neuronal adaptation, novelty detection and regularity encoding in audition

    PubMed Central

    Malmierca, Manuel S.; Sanchez-Vives, Maria V.; Escera, Carles; Bendixen, Alexandra

    2014-01-01

    The ability to detect unexpected stimuli in the acoustic environment and determine their behavioral relevance to plan an appropriate reaction is critical for survival. This perspective article brings together several viewpoints and discusses current advances in understanding the mechanisms the auditory system implements to extract relevant information from incoming inputs and to identify unexpected events. This extraordinary sensitivity relies on the capacity to codify acoustic regularities, and is based on encoding properties that are present as early as the auditory midbrain. We review state-of-the-art studies on the processing of stimulus changes using non-invasive methods to record the summed electrical potentials in humans, and those that examine single-neuron responses in animal models. Human data will be based on mismatch negativity (MMN) and enhanced middle latency responses (MLR). Animal data will be based on the activity of single neurons at the cortical and subcortical levels, relating selective responses to novel stimuli to the MMN and to stimulus-specific neural adaptation (SSA). Theoretical models of the neural mechanisms that could create SSA and novelty responses will also be discussed. PMID:25009474

  11. A cellular automaton - finite volume method for the simulation of dendritic and eutectic growth in binary alloys using an adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Dobravec, Tadej; Mavrič, Boštjan; Šarler, Božidar

    2017-11-01

    A two-dimensional model to simulate the dendritic and eutectic growth in binary alloys is developed. A cellular automaton method is adopted to track the movement of the solid-liquid interface. The diffusion equation is solved in the solid and liquid phases by using an explicit finite volume method. The computational domain is divided into square cells that can be hierarchically refined or coarsened using an adaptive mesh based on the quadtree algorithm. Such a mesh refines the regions of the domain near the solid-liquid interface, where the highest concentration gradients are observed. In the regions where the lowest concentration gradients are observed the cells are coarsened. The originality of the work is in the novel, adaptive approach to the efficient and accurate solution of the posed multiscale problem. The model is verified and assessed by comparison with the analytical results of the Lipton-Glicksman-Kurz model for the steady growth of a dendrite tip and the Jackson-Hunt model for regular eutectic growth. Several examples of typical microstructures are simulated and the features of the method as well as further developments are discussed.

  12. A systematic review of gait analysis methods based on inertial sensors and adaptive algorithms.

    PubMed

    Caldas, Rafael; Mundt, Marion; Potthast, Wolfgang; Buarque de Lima Neto, Fernando; Markert, Bernd

    2017-09-01

    The conventional methods to assess human gait are either too expensive or too complex to be applied regularly in clinical practice. To reduce the cost and simplify the evaluation, inertial sensors and adaptive algorithms have been utilized, respectively. This paper aims to summarize studies that applied adaptive, also called artificial intelligence (AI), algorithms to gait analysis based on inertial sensor data, verifying whether they can support the clinical evaluation. Articles were identified through searches of the main databases, covering the period from 1968 to October 2016. We identified 22 studies that met the inclusion criteria. The included papers were analyzed with respect to their data acquisition and processing methods using specific questionnaires. Concerning data acquisition, the mean score is 6.1±1.62, which implies that 13 of the 22 papers failed to report relevant outcomes. The quality assessment of the AI algorithms presents an above-average rating (8.2±1.84). Therefore, AI algorithms seem to be able to support gait analysis based on inertial sensor data. Further research, however, is necessary to enhance and standardize the application in patients, since most of the studies used distinct methods to evaluate healthy subjects. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Adaptive laboratory evolution – principles and applications for biotechnology

    PubMed Central

    2013-01-01

    Adaptive laboratory evolution is a frequently used method in biological studies to gain insights into the basic mechanisms of molecular evolution and the adaptive changes that accumulate in microbial populations during long-term selection under specified growth conditions. Although it has been performed regularly for more than 25 years, the advent of transcript profiling and cheap next-generation sequencing technologies has resulted in many recent studies that successfully applied this technique in order to engineer microbial cells for biotechnological applications. Adaptive laboratory evolution has some major benefits as compared with classical genetic engineering but also some inherent limitations. However, recent studies show how some of the limitations may be overcome in order to successfully incorporate adaptive laboratory evolution in microbial cell factory design. Over the last two decades important insights into the nutrient and stress metabolism of relevant model species were acquired, whereas some other aspects, such as niche-specific differences of non-conventional cell factories, are not completely understood. Altogether, the current status and its future perspectives highlight the importance and potential of adaptive laboratory evolution as an approach in biotechnological engineering. PMID:23815749

  14. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    PubMed Central

    Zhang, Jie; Fan, Shangang; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki

    2017-01-01

    Both L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1)…

  15. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    PubMed

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    Both L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1)…

  16. Spatially Regularized Machine Learning for Task and Resting-state fMRI

    PubMed Central

    Song, Xiaomu; Panych, Lawrence P.; Chen, Nan-kuei

    2015-01-01

    Background Reliable mapping of brain function across sessions and/or subjects in task- and resting-state has been a critical challenge for quantitative fMRI studies although it has been intensively addressed in the past decades. New Method A spatially regularized support vector machine (SVM) technique was developed for the reliable brain mapping in task- and resting-state. Unlike most existing SVM-based brain mapping techniques, which implement supervised classifications of specific brain functional states or disorders, the proposed method performs a semi-supervised classification for the general brain function mapping where spatial correlation of fMRI is integrated into the SVM learning. The method can adapt to intra- and inter-subject variations induced by fMRI nonstationarity, and identify a true boundary between active and inactive voxels, or between functionally connected and unconnected voxels in a feature space. Results The method was evaluated using synthetic and experimental data at the individual and group level. Multiple features were evaluated in terms of their contributions to the spatially regularized SVM learning. Reliable mapping results in both task- and resting-state were obtained from individual subjects and at the group level. Comparison with Existing Methods A comparison study was performed with independent component analysis, general linear model, and correlation analysis methods. Experimental results indicate that the proposed method can provide a better or comparable mapping performance at the individual and group level. Conclusions The proposed method can provide accurate and reliable mapping of brain function in task- and resting-state, and is applicable to a variety of quantitative fMRI studies. PMID:26470627

  17. 3D CSEM inversion based on goal-oriented adaptive finite element method

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Key, K.

    2016-12-01

    We present a parallel 3D frequency domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method that efficiently calculates the responses and sensitivity kernels in parallel, using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt the dual-grid approach, where the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent of the fine meshing generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multiple-scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the map of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability is obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and the potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled-source EM data generated for a complex 3D offshore model with significant seafloor topography.

  18. Moving mesh finite element simulation for phase-field modeling of brittle fracture and convergence of Newton's iteration

    NASA Astrophysics Data System (ADS)

    Zhang, Fei; Huang, Weizhang; Li, Xianping; Zhang, Shicheng

    2018-03-01

    A moving mesh finite element method is studied for the numerical solution of a phase-field model for brittle fracture. The moving mesh partial differential equation approach is employed to dynamically track crack propagation. Meanwhile, the decomposition of the strain tensor into tensile and compressive components is essential for the success of phase-field modeling of brittle fracture, but results in a non-smooth elastic energy and stronger nonlinearity in the governing equation. This makes the governing equation much more difficult to solve; in particular, Newton's iteration often fails to converge. Three regularization methods are proposed to smooth out the decomposition of the strain tensor. Numerical examples of fracture propagation under quasi-static load demonstrate that all of the methods can effectively improve the convergence of Newton's iteration for relatively small values of the regularization parameter, without compromising the accuracy of the numerical solution. They also show that the moving mesh finite element method is able to adaptively concentrate mesh elements around propagating cracks and handle multiple and complex crack systems.
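
    The paper's three regularizations are not reproduced here, but the flavor of such smoothing can be seen in a standard smooth surrogate for the positive-part function used in tensile/compressive splits of the strain energy; eps plays the role of the regularization parameter. This is a generic sketch, not the authors' exact construction.

    ```python
    import numpy as np

    def pos_smooth(x, eps):
        # Smooth surrogate for the positive part max(x, 0); it converges to the
        # exact non-smooth split as eps -> 0, which is what restores a
        # well-defined Jacobian for Newton's iteration.
        return 0.5 * (x + np.sqrt(x * x + eps * eps))

    def neg_smooth(x, eps):
        # Smooth compressive counterpart, so that pos + neg still equals x.
        return x - pos_smooth(x, eps)
    ```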

  19. Subcortical processing of speech regularities underlies reading and music aptitude in children

    PubMed Central

    2011-01-01

    Background Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. Methods We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Results Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech, as well as to auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. Conclusions These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input. Definition of common biological underpinnings for music and reading supports the usefulness of music for promoting child literacy, with the potential to improve reading remediation. PMID:22005291

  20. Moving force identification based on modified preconditioned conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Chen, Zhen; Chan, Tommy H. T.; Nguyen, Andy

    2018-06-01

    This paper develops a modified preconditioned conjugate gradient (M-PCG) method for moving force identification (MFI) by improving the conjugate gradient (CG) and preconditioned conjugate gradient (PCG) methods with a modified Gram-Schmidt algorithm. The method aims to obtain more accurate and more efficient identification results from the responses of a bridge deck caused by passing vehicles, a problem known to be sensitive to the ill-posedness of the underlying inverse problem. A simply supported beam model with biaxial time-varying forces is used to generate numerical simulations with various analysis scenarios to assess the effectiveness of the method. Evaluation results show that the regularization matrix L and the number of iterations j are important factors influencing the identification accuracy and noise immunity of M-PCG. Compared with the conventional SVD-based counterpart embedded in the time domain method (TDM) and the standard form of CG, M-PCG with a proper regularization matrix offers advantages such as better adaptability and more robustness to ill-posed problems. More importantly, the average optimal number of iterations of M-PCG can be reduced by more than 70% compared with PCG, which makes M-PCG a preferred choice for field MFI applications.
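
    As background for how regularization enters a CG-type solver, the following sketch runs plain CG on Tikhonov-regularized normal equations. It is a simplified stand-in (the paper's M-PCG additionally builds its preconditioner with a modified Gram-Schmidt algorithm, which is not reproduced here), and the names A, L, lam are illustrative.

    ```python
    import numpy as np

    def cg_regularized(A, L, b, lam, n_iter):
        """Plain CG on the regularized normal equations
        (A^T A + lam * L^T L) x = A^T b."""
        M = A.T @ A + lam * (L.T @ L)
        rhs = A.T @ b
        x = np.zeros_like(rhs)
        r = rhs - M @ x          # initial residual
        p = r.copy()             # initial search direction
        for _ in range(n_iter):
            Mp = M @ p
            alpha = (r @ r) / (p @ Mp)
            x = x + alpha * p
            r_new = r - alpha * Mp
            beta = (r_new @ r_new) / (r @ r)
            p = r_new + beta * p
            r = r_new
        return x
    ```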

  1. Cardiac C-arm computed tomography using a 3D + time ROI reconstruction method with spatial and temporal regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mory, Cyril, E-mail: cyril.mory@philips.com; Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes; Auvray, Vincent

    2014-02-15

    Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, and structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single-sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (4D ROOSTER) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp-Logan phantom, and outperforms both the ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied to human cardiac C-arm CT, and potentially to other dynamic tomography areas. It can easily be adapted to other problems, since regularization is decoupled from projection and back projection.
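
    A schematic of the alternation described above, showing only the two closed-form regularization steps (positivity and temporal averaging outside the motion mask); the conjugate-gradient data-fidelity step and the two TV minimizations would slot in around it. The array layout and names are assumptions, not taken from the authors' implementation.

    ```python
    import numpy as np

    def rooster_regularize(f, heart_mask):
        """One pass of the non-TV regularization steps in a 4D-ROOSTER-style loop.

        f          : 4D array (phase, z, y, x)
        heart_mask : 3D boolean motion mask covering the heart and vessels

        The 3D spatial and 1D temporal TV-minimization steps would follow here,
        e.g. a few iterations of any standard TV denoiser per step.
        """
        f = np.maximum(f, 0.0)                      # enforce positivity
        outside = ~heart_mask
        f[:, outside] = f[:, outside].mean(axis=0)  # average along time outside the mask
        return f
    ```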

  2. Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization

    PubMed Central

    Canales-Rodríguez, Erick J.; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M.; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond

    2015-01-01

    Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data. PMID:26470024
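
    For orientation, here is the classic Richardson-Lucy multiplicative update that these spherical deconvolution schemes build on; RUMBA-SD replaces the likelihood ratio term with Rician/noncentral-Chi expressions (not shown). H and d are illustrative names for the spherical-convolution matrix and the measured signal.

    ```python
    import numpy as np

    def richardson_lucy(H, d, n_iter=100):
        """Classic Richardson-Lucy multiplicative update.
        H maps fiber-orientation weights f to modeled signals; d is the data."""
        f = np.full(H.shape[1], 1.0 / H.shape[1])   # uniform nonnegative start
        Ht1 = H.T @ np.ones(H.shape[0])
        for _ in range(n_iter):
            est = H @ f                              # current signal estimate
            f = f * (H.T @ (d / np.maximum(est, 1e-12))) / np.maximum(Ht1, 1e-12)
        return f
    ```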

  3. Adaptation and Validation of a Nutrition Environment Measures Survey for University Grab-and-Go Establishments.

    PubMed

    Lo, Brian K C; Minaker, Leia; Chan, Alicia N T; Hrgetic, Jessica; Mah, Catherine L

    2016-03-01

    To adapt and validate a survey instrument to assess the nutrition environment of grab-and-go establishments at a university campus. A version of the Nutrition Environment Measures Survey for grab-and-go establishments (NEMS-GG) was adapted from existing NEMS instruments and tested for reliability and validity through a cross-sectional assessment of the grab-and-go establishments at the University of Toronto. Product availability, price, and presence of nutrition information were evaluated. Cohen's kappa coefficient and intra-class correlation coefficients (ICC) were assessed for inter-rater reliability, and construct validity was assessed using the known-groups comparison method (via store scores). Fifteen grab-and-go establishments were assessed. Inter-rater reliability was high with an almost perfect agreement for availability (mean κ = 0.995) and store scores (ICC = 0.999). The tool demonstrated good face and construct validity. About half of the venues carried fruit and vegetables (46.7% and 53.3%, respectively). Regular and healthier entrée items were generally the same price. Healthier grains were cheaper than regular options. Six establishments displayed nutrition information. Establishments operated by the university's Food Services consistently scored the highest across all food premise types for nutrition signage, availability, and cost of healthier options. Health promotion strategies are needed to address availability and variety of healthier grab-and-go options in university settings.

  4. Effects of volume-based overload plyometric training on maximal-intensity exercise adaptations in young basketball players.

    PubMed

    Asadi, Abbas; Ramirez-Campillo, Rodrigo; Meylan, Cesar; Nakamura, Fabio Y; Cañas-Jamett, Rodrigo; Izquierdo, Mikel

    2017-12-01

    The aim of the present study was to compare maximal-intensity exercise adaptations in young basketball players (who were strong individuals at baseline) participating in regular basketball training versus regular plus volume-based plyometric training in the pre-season period. Young basketball players were recruited and assigned either to a plyometric with regular basketball training group (experimental group [EG]; N.=8), or a basketball training only group (control group [CG]; N.=8). The athletes in the EG performed periodized (i.e., from 117 to 183 jumps per session) plyometric training for eight weeks. Before and after the intervention, players were assessed in vertical and broad jump, change of direction, maximal strength, and a 60-meter sprint test. No significant improvements were found in the CG, while the EG improved vertical jump (effect size [ES] = 2.8), broad jump (ES = 2.4), agility T test (ES = 2.2), Illinois agility test (ES = 1.4), maximal strength (ES = 1.8), and 60-m sprint (ES = 1.6) (P<0.05) after the intervention, and the improvements were greater compared to the CG (P<0.05). Plyometric training in addition to regular basketball practice can lead to meaningful improvements in maximal-intensity exercise adaptations among young basketball players during the pre-season.

  5. Semantic Segmentation of Forest Stands of Pure Species as a Global Optimization Problem

    NASA Astrophysics Data System (ADS)

    Dechesne, C.; Mallet, C.; Le Bris, A.; Gouet-Brunet, V.

    2017-05-01

    Forest stand delineation is a fundamental task for forest management purposes that is still mainly performed manually, through visual inspection of geospatial (very) high spatial resolution images. Stand detection has barely been addressed in the literature, which has mainly focused, in forested environments, on individual tree extraction and tree species classification. From a methodological point of view, stand detection can be considered a semantic segmentation problem. This offers two advantages. First, one can retrieve the dominant tree species per segment. Secondly, one can benefit from existing low-level tree species label maps from the literature as a basis for high-level object extraction. Thus, the semantic segmentation issue becomes a regularization issue in a weakly structured environment and can be formulated in an energy-based framework. This paper aims at investigating which regularization strategies from the literature are best adapted to delineating and classifying forest stands of pure species. Both airborne lidar point clouds and multispectral very high spatial resolution images are integrated for that purpose. Local methods (such as filtering and probabilistic relaxation) are not well suited to this problem, since they increase the classification accuracy by less than 5%. Global methods, based on an energy model, tend to be more efficient, with an accuracy gain of up to 15%. The segmentation results using such models have an accuracy ranging from 96% to 99%.

  6. CosmosDG: An hp -adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anninos, Peter; Lau, Cheuk; Bryant, Colton

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge–Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  7. Adaptive spatio-temporal filtering of disturbed ECGs: a multi-channel approach to heartbeat detection in smart clothing.

    PubMed

    Wiklund, Urban; Karlsson, Marcus; Ostlund, Nils; Berglin, Lena; Lindecrantz, Kaj; Karlsson, Stefan; Sandsjö, Leif

    2007-06-01

    Intermittent disturbances are common in ECG signals recorded with smart clothing, mainly because of displacement of the electrodes over the skin. We evaluated a novel adaptive method for spatio-temporal filtering for heartbeat detection in noisy multi-channel ECGs, including short signal interruptions in single channels. Using multi-channel database recordings (12-channel ECGs from 10 healthy subjects), the results showed that multi-channel spatio-temporal filtering outperformed regular independent component analysis. We also recorded seven channels of ECG using a T-shirt with textile electrodes. Ten healthy subjects performed different sequences during a 10-min recording: resting, standing, flexing breast muscles, walking, and pushups. Using adaptive multi-channel filtering, the sensitivity and precision were above 97% in nine subjects. Adaptive multi-channel spatio-temporal filtering can be used to detect heartbeats in ECGs with high noise levels. One application is heartbeat detection in noisy ECG recordings obtained by integrated textile electrodes in smart clothing.

  8. CosmosDG: An hp-adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    NASA Astrophysics Data System (ADS)

    Anninos, Peter; Bryant, Colton; Fragile, P. Chris; Holgado, A. Miguel; Lau, Cheuk; Nemergut, Daniel

    2017-08-01

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge-Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  9. Generating Models of Infinite-State Communication Protocols Using Regular Inference with Abstraction

    NASA Astrophysics Data System (ADS)

    Aarts, Fides; Jonsson, Bengt; Uijen, Johan

    In order to facilitate model-based verification and validation, effort is underway to develop techniques for generating models of communication system components from observations of their external behavior. Most previous such work has employed regular inference techniques, which generate modest-size finite-state models. They typically suppress parameters of messages, although these have a significant impact on control flow in many communication protocols. We present a framework that adapts regular inference to include data parameters in messages and states, in order to generate models of components with large or infinite message alphabets. A main idea is to adapt the framework of predicate abstraction, successfully used in formal verification. Since we are in a black-box setting, the abstraction must be supplied externally, using information about how the component manages data parameters. We have implemented our techniques by connecting the LearnLib tool for regular inference with the protocol simulator ns-2, and generated a model of the SIP component as implemented in ns-2.

  10. Self-organizing adaptive map: autonomous learning of curves and surfaces from point samples.

    PubMed

    Piastra, Marco

    2013-05-01

    Competitive Hebbian Learning (CHL) (Martinetz, 1993) is a simple and elegant method for estimating the topology of a manifold from point samples. The method has been adopted in a number of self-organizing networks described in the literature and has given rise to related studies in the fields of geometry and computational topology. Recent results from these fields have shown that a faithful reconstruction can be obtained using the CHL method only for curves and surfaces. Within these limitations, these findings constitute a basis for defining a CHL-based, growing self-organizing network that produces a faithful reconstruction of an input manifold. The SOAM (Self-Organizing Adaptive Map) algorithm adapts its local structure autonomously in such a way that it can match the features of the manifold being learned. The adaptation process is driven by the defects arising when the network structure is inadequate, which cause a growth in the density of units. Regions of the network undergo a phase transition and change their behavior whenever a simple, local condition of topological regularity is met. The phase transition is eventually completed across the entire structure and the adaptation process terminates. In specific conditions, the structure thus obtained is homeomorphic to the input manifold. During the adaptation process, the network also has the capability to focus on the acquisition of input point samples in critical regions, with a substantial increase in efficiency. The behavior of the network has been assessed experimentally with typical data sets for surface reconstruction, including suboptimal conditions, e.g. with undersampling and noise.

  11. From Particular to Popular: Facilitating EFL Mobile-Supported Cooperative Reading

    ERIC Educational Resources Information Center

    Lan, Yu-Ju; Sung, Yao-Ting; Chang, Kuo-En

    2013-01-01

    This paper reports the results of an action research-based study that adapted a mobile-supported cooperative reading system into regular English as a foreign language (EFL) classes at one Taiwanese elementary school. The current study was comprised of two stages: adaptation and evaluation. During the adaptation stage, a mobile-supported…

  12. Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry

    2014-07-01

    The ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Due to the high computational cost of a large ensemble size, EnKF is limited to small ensemble sets in practice. This results in spurious correlations in the covariance structure, leading to incorrect updates or probable divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions are considered for thresholding the forecast covariance and gain matrices: hard, soft, lasso, and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water-flooding cases (in petroleum reservoirs) with different levels of heterogeneity/nonlinearity. Besides adaptive thresholding, standard distance-dependent localization and the bootstrap Kalman gain are also implemented for comparison purposes. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is the most robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding and that it should be performed judiciously during the early assimilation cycles. The proposed adaptive thresholding scheme outperforms the other methods for subsurface characterization on the underlying benchmarks.
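
    A minimal sketch of the thresholding functions named above, applied entry-wise to a forecast covariance (or Kalman gain) estimate. The SCAD form follows the usual Fan-Li rule with a = 3.7; the threshold t would be chosen universally or adaptively, as the paper describes.

    ```python
    import numpy as np

    def hard_threshold(C, t):
        # Keep entries whose magnitude exceeds t; zero out the rest.
        return np.where(np.abs(C) >= t, C, 0.0)

    def soft_threshold(C, t):
        # Shrink every entry toward zero by t (small, likely spurious,
        # correlations are removed entirely).
        return np.sign(C) * np.maximum(np.abs(C) - t, 0.0)

    def scad_threshold(C, t, a=3.7):
        # Smoothly Clipped Absolute Deviation: soft near zero, identity for
        # large entries, linear interpolation in between.
        absC = np.abs(C)
        out = np.where(absC <= 2 * t, soft_threshold(C, t), C)
        mid = (absC > 2 * t) & (absC <= a * t)
        out = np.where(mid, ((a - 1) * C - np.sign(C) * a * t) / (a - 2), out)
        return out
    ```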

  13. An immune clock of human pregnancy

    PubMed Central

    Aghaeepour, Nima; Ganio, Edward A.; Mcilwain, David; Tsai, Amy S.; Tingle, Martha; Van Gassen, Sofie; Gaudilliere, Dyani K.; Baca, Quentin; McNeil, Leslie; Okada, Robin; Ghaemi, Mohammad S.; Furman, David; Wong, Ronald J.; Winn, Virginia D.; Druzin, Maurice L.; El-Sayed, Yaser Y.; Quaintance, Cecele; Gibbs, Ronald; Darmstadt, Gary L.; Shaw, Gary M.; Stevenson, David K.; Tibshirani, Robert; Nolan, Garry P.; Lewis, David B.; Angst, Martin S.; Gaudilliere, Brice

    2017-01-01

    The maintenance of pregnancy relies on finely tuned immune adaptations. We demonstrate that these adaptations are precisely timed, reflecting an immune clock of pregnancy in women delivering at term. Using mass cytometry, the abundance and functional responses of all major immune cell subsets were quantified in serial blood samples collected throughout pregnancy. Cell signaling–based Elastic Net, a regularized regression method adapted from the elastic net algorithm, was developed to infer and prospectively validate a predictive model of interrelated immune events that accurately captures the chronology of pregnancy. Model components highlighted existing knowledge and revealed previously unreported biology, including a critical role for the interleukin-2–dependent STAT5ab signaling pathway in modulating T cell function during pregnancy. These findings unravel the precise timing of immunological events occurring during a term pregnancy and provide the analytical framework to identify immunological deviations implicated in pregnancy-related pathologies. PMID:28864494
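
    As a rough illustration of the regression machinery only (not the authors' pipeline or data), an elastic net fit on synthetic "immune feature" data with scikit-learn; all names and dimensions here are invented for the example.

    ```python
    import numpy as np
    from sklearn.linear_model import ElasticNetCV

    # Rows are blood draws, columns are immune features, y is time into
    # pregnancy; the data below are synthetic stand-ins.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 200))                  # 60 samples, 200 features
    y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=60)

    # The elastic net penalty mixes l1 (sparsity) and l2 (grouping) terms;
    # cross-validation picks the mixing ratio and overall strength.
    model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(X, y)
    print(model.l1_ratio_, model.alpha_)
    ```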

  14. Estimation and Selection via Absolute Penalized Convex Minimization And Its Multistage Adaptive Applications

    PubMed Central

    Huang, Jian; Zhang, Cun-Hui

    2013-01-01

    The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100
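
    A toy sketch of the multistage idea: approximate a concave penalty by refitting a weighted Lasso whose per-feature weights come from the previous stage. The weight rule 1/(|beta| + eps) is one common choice and is an assumption here, not the paper's exact construction.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def multistage_adaptive_lasso(X, y, lam=0.1, n_stages=3, eps=1e-4):
        """Recursive adaptive Lasso: each stage penalizes small coefficients
        from the previous stage more heavily."""
        p = X.shape[1]
        w = np.ones(p)                      # stage 1 is a plain Lasso
        for _ in range(n_stages):
            Xw = X / w                      # column scaling implements per-feature weights
            model = Lasso(alpha=lam, max_iter=10000).fit(Xw, y)
            beta = model.coef_ / w          # map back to the original parametrization
            w = 1.0 / (np.abs(beta) + eps)  # heavier penalty on small coefficients
        return beta
    ```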

  15. Solving regularly and singularly perturbed reaction-diffusion equations in three space dimensions

    NASA Astrophysics Data System (ADS)

    Moore, Peter K.

    2007-06-01

    In [P.K. Moore, Effects of basis selection and h-refinement on error estimator reliability and solution efficiency for higher-order methods in three space dimensions, Int. J. Numer. Anal. Mod. 3 (2006) 21-51] a fixed, high-order h-refinement finite element algorithm, Href, was introduced for solving reaction-diffusion equations in three space dimensions. In this paper, Href is coupled with continuation, creating an automatic method for solving regularly and singularly perturbed reaction-diffusion equations. The simple quasilinear Newton solver of Moore (2006) is replaced by the nonlinear solver NITSOL [M. Pernice, H.F. Walker, NITSOL: a Newton iterative solver for nonlinear systems, SIAM J. Sci. Comput. 19 (1998) 302-318]. Good initial guesses for the nonlinear solver are obtained using continuation in the small parameter ɛ. Two strategies allow adaptive selection of ɛ: the first depends on the rate of convergence of the nonlinear solver, and the second implements backtracking in ɛ. Finally, a simple method is used to select the initial ɛ. Several examples illustrate the effectiveness of the algorithm.
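
    The continuation strategy, sketched schematically: solve_nonlinear stands in for the Href/NITSOL solve and is assumed to return a (solution, converged) pair; the backtracking rule shown (a gentler reduction after a failure) is illustrative, not the paper's exact rule.

    ```python
    import numpy as np

    def continue_in_eps(solve_nonlinear, u0, eps_start=1.0, eps_target=1e-4, max_tries=50):
        """March eps down from eps_start to eps_target, warm-starting each
        nonlinear solve from the previous solution."""
        u, eps, step = u0, eps_start, 0.5
        for _ in range(max_tries):
            if eps <= eps_target:
                break
            trial = max(step * eps, eps_target)   # propose a smaller eps
            u_new, ok = solve_nonlinear(u, trial)
            if ok:
                u, eps = u_new, trial
            else:
                step = np.sqrt(step)              # backtrack: milder reduction next try
        return u
    ```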

  16. Background field removal using a region adaptive kernel for quantitative susceptibility mapping of human brain

    NASA Astrophysics Data System (ADS)

    Fang, Jinsheng; Bao, Lijun; Li, Xu; van Zijl, Peter C. M.; Chen, Zhong

    2017-08-01

    Background field removal is an important MR phase preprocessing step for quantitative susceptibility mapping (QSM). It separates the local field induced by tissue magnetic susceptibility sources from the background field generated by sources outside the region of interest, e.g. the brain, such as the air-tissue interface. In the vicinity of the air-tissue boundary, e.g. the skull and paranasal sinuses, where large susceptibility variations exist, present background field removal methods are usually insufficient, and these regions often need to be excluded by brain mask erosion at the expense of losing local field information, and thus susceptibility measures, in these regions. In this paper, we propose an extension of the variable-kernel sophisticated harmonic artifact reduction for phase data (V-SHARP) background field removal method using a region adaptive kernel (R-SHARP), in which a scalable spherical Gaussian kernel (SGK) is employed, with its kernel radius and weights adjustable according to an energy "functional" reflecting the magnitude of field variation. Such an energy functional is defined in terms of a contour and two fitting functions incorporating regularization terms, from which a curve evolution model in level set formulation is derived for energy minimization. We utilize it to detect regions with a large field gradient caused by strong susceptibility variation. In such regions, the SGK will have a small radius and a high weight at the sphere center, in a manner adaptive to the voxel energy of the field perturbation. Using the proposed method, the background field generated by external sources can be effectively removed to obtain a more accurate estimation of the local field, and thus of the QSM dipole inversion used to map local tissue susceptibility sources. Numerical simulation, phantom, and in vivo human brain data demonstrate improved performance of R-SHARP compared to the V-SHARP and RESHARP (regularization enabled SHARP) methods, even when the whole paranasal sinus regions are preserved in the brain mask. Shadow artifacts due to strong susceptibility variations in the derived QSM maps could also be largely eliminated using the R-SHARP method, leading to more accurate QSM reconstruction.

  17. Background field removal using a region adaptive kernel for quantitative susceptibility mapping of human brain.

    PubMed

    Fang, Jinsheng; Bao, Lijun; Li, Xu; van Zijl, Peter C M; Chen, Zhong

    2017-08-01

    Background field removal is an important MR phase preprocessing step for quantitative susceptibility mapping (QSM). It separates the local field induced by tissue magnetic susceptibility sources from the background field generated by sources outside the region of interest, e.g. the brain, such as the air-tissue interface. In the vicinity of the air-tissue boundary, e.g. the skull and paranasal sinuses, where large susceptibility variations exist, present background field removal methods are usually insufficient, and these regions often need to be excluded by brain mask erosion at the expense of losing local field information, and thus susceptibility measures, in these regions. In this paper, we propose an extension of the variable-kernel sophisticated harmonic artifact reduction for phase data (V-SHARP) background field removal method using a region adaptive kernel (R-SHARP), in which a scalable spherical Gaussian kernel (SGK) is employed, with its kernel radius and weights adjustable according to an energy "functional" reflecting the magnitude of field variation. Such an energy functional is defined in terms of a contour and two fitting functions incorporating regularization terms, from which a curve evolution model in level set formulation is derived for energy minimization. We utilize it to detect regions with a large field gradient caused by strong susceptibility variation. In such regions, the SGK will have a small radius and a high weight at the sphere center, in a manner adaptive to the voxel energy of the field perturbation. Using the proposed method, the background field generated by external sources can be effectively removed to obtain a more accurate estimation of the local field, and thus of the QSM dipole inversion used to map local tissue susceptibility sources. Numerical simulation, phantom, and in vivo human brain data demonstrate improved performance of R-SHARP compared to the V-SHARP and RESHARP (regularization enabled SHARP) methods, even when the whole paranasal sinus regions are preserved in the brain mask. Shadow artifacts due to strong susceptibility variations in the derived QSM maps could also be largely eliminated using the R-SHARP method, leading to more accurate QSM reconstruction.

  18. Nonlinear inversion of electrical resistivity imaging using pruning Bayesian neural networks

    NASA Astrophysics Data System (ADS)

    Jiang, Fei-Bo; Dai, Qian-Wei; Dong, Li

    2016-06-01

    Conventional artificial neural networks used to solve the electrical resistivity imaging (ERI) inversion problem suffer from overfitting and local minima. To solve these problems, we propose a pruning Bayesian neural network (PBNN) nonlinear inversion method and a sample design method based on the K-medoids clustering algorithm. In the sample design method, the training samples of the neural network are designed according to the prior information provided by the K-medoids clustering results; thus, the training process of the neural network is well guided. The proposed PBNN, based on Bayesian regularization, is used to select the hidden layer structure by assessing the effect of each hidden neuron on the inversion results. Then, the hyperparameter αk, which is based on the generalized mean, is chosen to guide the pruning process according to the prior distribution of the training samples under the small-sample condition. The proposed algorithm is more efficient than other common adaptive regularization methods in geophysics. The inversion of synthetic and field data suggests that the proposed method suppresses noise in the neural network training stage and enhances generalization. The inversion results with the proposed method are better than those of the BPNN, RBFNN, and RRBFNN inversion methods as well as the conventional least squares inversion.

  19. RZA-NLMF algorithm-based adaptive sparse sensing for realizing compressive sensing

    NASA Astrophysics Data System (ADS)

    Gui, Guan; Xu, Li; Adachi, Fumiyuki

    2014-12-01

    Nonlinear sparse sensing (NSS) techniques have been adopted for realizing compressive sensing in many applications, such as radar imaging. Unlike NSS, in this paper we propose an adaptive sparse sensing (ASS) approach using the reweighted zero-attracting normalized least mean fourth (RZA-NLMF) algorithm, which depends on several given parameters: the reweighting factor, the regularization parameter, and the initial step size. First, based on an independence assumption, the Cramer-Rao lower bound (CRLB) is derived as a benchmark for performance comparison. In addition, a reweighting factor selection method is proposed for achieving robust estimation performance. Finally, to verify the algorithm, Monte Carlo computer simulations are given to show that ASS achieves much better mean square error (MSE) performance than NSS.
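
    One plausible form of the per-sample update, shown only to fix ideas; the exact normalization and reweighting constants used in the cited paper may differ, and mu, rho, eps_w, delta are illustrative.

    ```python
    import numpy as np

    def rza_nlmf_step(w, x, d, mu=0.5, rho=1e-4, eps_w=10.0, delta=1e-8):
        """One sketched update of a reweighted zero-attracting normalized
        least-mean-fourth adaptive filter.
        w: filter taps, x: input regressor, d: desired sample."""
        e = d - w @ x                                          # instantaneous error
        w = w + mu * (e ** 3) * x / ((x @ x + delta) ** 2)     # normalized 4th-order step
        w = w - rho * np.sign(w) / (1.0 + eps_w * np.abs(w))   # reweighted zero attractor
        return w
    ```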

  20. Diffuse optical correlation tomography of cerebral blood flow during cortical spreading depression in rat brain

    NASA Astrophysics Data System (ADS)

    Zhou, Chao; Yu, Guoqiang; Furuya, Daisuke; Greenberg, Joel; Yodh, Arjun; Durduran, Turgut

    2006-02-01

    Diffuse optical correlation methods were adapted for three-dimensional (3D) tomography of cerebral blood flow (CBF) in small animal models. The image reconstruction was optimized using a noise model for diffuse correlation tomography, which enabled better data selection and regularization. The tomographic approach was demonstrated with simulated data and during in-vivo cortical spreading depression (CSD) in rat brain. Three-dimensional images of CBF were obtained through the intact skull in tissues ~4 mm deep below the cortex.

  1. Gwyscan: a library to support non-equidistant scanning probe microscope measurements

    NASA Astrophysics Data System (ADS)

    Klapetek, Petr; Yacoot, Andrew; Grolich, Petr; Valtr, Miroslav; Nečas, David

    2017-03-01

    We present a software library and related methodology for enabling easy integration of adaptive-step (non-equidistant) scanning techniques into metrological scanning probe microscopes, or scanning probe microscopes where individual x, y position data are recorded during measurements. Scanning with adaptive steps can reduce the amount of data collected in SPM measurements, thereby leading to faster data acquisition, a smaller amount of data collection required for a specific analytical task, and less sensitivity to mechanical and thermal drift. Implementation of adaptive scanning routines in a custom-built microscope is not normally an easy task: regular data are much easier to handle for previewing (e.g. levelling) and storage. We present an environment that makes the implementation of adaptive scanning easier for an instrument developer, specifically taking into account the data acquisition approaches used in high-accuracy microscopes such as those developed by National Metrology Institutes. This includes a library with algorithms written in C and LabVIEW for handling data storage, generating regular-mesh previews, and planning the scan path on the basis of different assumptions. A set of modules for the Gwyddion open-source software for handling these data and for their further analysis is presented. Using this combination of data acquisition and processing tools, one can implement adaptive scanning relatively easily in an instrument that previously measured on a regular grid. The performance of the presented approach is shown, and general non-equidistant data processing steps are discussed.

  2. A ℓ2, 1 norm regularized multi-kernel learning for false positive reduction in Lung nodule CAD.

    PubMed

    Cao, Peng; Liu, Xiaoli; Zhang, Jian; Li, Wei; Zhao, Dazhe; Huang, Min; Zaiane, Osmar

    2017-03-01

    The aim of this paper is to describe a novel algorithm for false positive reduction in lung nodule computer-aided detection (CAD). We describe a new CT lung CAD method that aims to detect solid nodules. Specifically, we propose a multi-kernel classifier with an ℓ2,1-norm regularizer for heterogeneous feature fusion and selection at the feature-subset level, and design two efficient strategies to optimize the kernel weights in the non-smooth ℓ2,1-regularized multiple kernel learning algorithm. The first optimization algorithm adapts a proximal gradient method for solving the ℓ2,1 norm of the kernel weights and uses an accelerated scheme based on FISTA; the second employs an iterative scheme based on an approximate gradient descent method. The results demonstrate that the FISTA-style accelerated proximal descent method is efficient for the ℓ2,1-norm formulation of multiple kernel learning, with a theoretical guarantee on the convergence rate. Moreover, the experimental results demonstrate the effectiveness of the proposed methods in terms of geometric mean (G-mean) and area under the ROC curve (AUC), significantly outperforming the competing methods. The proposed approach exhibits remarkable advantages in both the heterogeneous feature-subset fusion and classification phases. Compared with feature-level and decision-level fusion strategies, the proposed ℓ2,1-norm multi-kernel learning algorithm is able to accurately fuse the complementary and heterogeneous feature sets and automatically prune the irrelevant and redundant feature subsets to form a more discriminative feature set, leading to promising classification performance. Moreover, the proposed algorithm consistently outperforms comparable classification approaches in the literature.
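
    The core building block of such proximal-gradient/FISTA schemes is the closed-form proximal operator of the ℓ2,1 norm (row-wise group soft-thresholding). A minimal sketch, with the rows of K taken as the groups of kernel weights; the names are illustrative.

    ```python
    import numpy as np

    def prox_l21(K, t):
        """Proximal operator of t * ||K||_{2,1} with rows as groups:
        each row is shrunk toward zero by t in Euclidean norm, and rows
        with norm below t are zeroed out entirely (group sparsity)."""
        norms = np.linalg.norm(K, axis=1, keepdims=True)
        scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
        return K * scale
    ```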

  3. An HP Adaptive Discontinuous Galerkin Method for Hyperbolic Conservation Laws. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1994-01-01

    This dissertation addresses various issues for model classes of hyperbolic conservation laws. The basic approach developed in this work employs a new family of adaptive, hp-version finite element methods based on a special discontinuous Galerkin formulation for hyperbolic problems. The discontinuous Galerkin formulation admits high-order local approximations on domains of quite general geometry, while providing a natural framework for finite element approximations and for theoretical developments. The use of hp-versions of the finite element method makes possible exponentially convergent schemes with very high accuracies in certain cases; the use of adaptive hp-schemes allows h-refinement in regions of low regularity and p-enrichment to deliver high accuracy, while keeping problem sizes manageable and dramatically smaller than in many conventional approaches. The use of discontinuous Galerkin methods is uncommon in applications, but the methods rest on a reasonable mathematical basis for low-order cases and have local approximation features that can be exploited to produce very efficient schemes, especially in a parallel, multiprocessor environment. This work first and primarily focuses on a model class of linear hyperbolic conservation laws for which concrete mathematical results, methodologies, error estimates, convergence criteria, and parallel adaptive strategies can be developed, and then briefly explores some extensions to more general cases. Next, we provide preliminaries to the study and a review of some aspects of the theory of hyperbolic conservation laws. We also provide a review of relevant literature on this subject and on the numerical analysis of these types of problems.

  4. A denoising algorithm for CT image using low-rank sparse coding

    NASA Astrophysics Data System (ADS)

    Lei, Yang; Xu, Dong; Zhou, Zhengyang; Wang, Tonghe; Dong, Xue; Liu, Tian; Dhabaan, Anees; Curran, Walter J.; Yang, Xiaofeng

    2018-03-01

    We propose a CT image denoising method based on low-rank sparse coding. The proposed method constructs an adaptive dictionary of image patches and estimates the sparse coding regularization parameters using a Bayesian interpretation. A low-rank approximation approach is used to simultaneously construct the dictionary and achieve sparse representation through clustering similar image patches. A variable-splitting scheme and a quadratic optimization are used to reconstruct the CT image from the achieved sparse coefficients. We tested this denoising technique using phantom, brain, and abdominal CT images. The experimental results showed that the proposed method delivers state-of-the-art denoising performance, both in terms of objective criteria and visual quality.
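
    The low-rank step on a cluster of similar patches has a simple truncated-SVD core, sketched below; the patch clustering, dictionary update, and Bayesian parameter estimation of the full method are omitted, and the interface is an assumption.

    ```python
    import numpy as np

    def low_rank_denoise_group(patches, rank):
        """Denoise a group of similar image patches (stacked as rows) by
        truncated SVD: keeping only the leading singular components is the
        low-rank approximation step used when jointly coding a cluster."""
        U, s, Vt = np.linalg.svd(patches, full_matrices=False)
        s[rank:] = 0.0               # discard the noise-dominated components
        return (U * s) @ Vt
    ```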

  5. Low-dose CT reconstruction via L1 dictionary learning regularization using iteratively reweighted least-squares.

    PubMed

    Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian

    2016-06-18

    In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic, since it provides the possibility of high-quality recovery from sparsely sampled data. Recently, algorithms based on DL (dictionary learning) were developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with an L2-norm regularization term, which leads to deteriorating reconstruction quality as the sampling rate declines further. Therefore, it is essential to improve the DL method to meet the demand for further dose reduction. In this paper, we replace the L2-norm regularization term with an L1-norm one. It is expected that the proposed L1-DL method can alleviate the over-smoothing effect of the L2 minimization and preserve more image details. The proposed algorithm solves the L1-minimization problem by a weighting strategy, solving the new weighted L2-minimization problem based on IRLS (iteratively reweighted least squares). Through numerical simulation, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. It is revealed that the proposed algorithm is more accurate than the other algorithms, especially when the sampling rate is reduced further or the noise is increased. The proposed L1-DL algorithm can utilize more prior information on image sparsity than ADSIR. By transforming the L2-norm regularization term of ADSIR into an L1-norm one and solving the L1-minimization problem by the IRLS strategy, L1-DL can reconstruct the image more exactly.
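
    A minimal dense sketch of the IRLS strategy for the L1 term: each outer iteration replaces |x_i| by x_i^2/(|x_i| + eps) and solves the resulting weighted least-squares problem. A, b, lam are illustrative; the actual algorithm applies this within dictionary-based CT reconstruction.

    ```python
    import numpy as np

    def irls_l1(A, b, lam=0.1, n_iter=20, eps=1e-6):
        """Solve min_x ||A x - b||^2 + lam * ||x||_1 by iteratively
        reweighted least squares."""
        x = np.linalg.lstsq(A, b, rcond=None)[0]       # unregularized warm start
        for _ in range(n_iter):
            # |x|_1 ~ sum_i x_i^2 / (|x_i| + eps) with weights from the last iterate
            W = np.diag(lam / (np.abs(x) + eps))
            x = np.linalg.solve(A.T @ A + W, A.T @ b)
        return x
    ```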

  6. The Cluster AgeS Experiment (CASE). Detecting Aperiodic Photometric Variability with the Friends of Friends Algorithm

    NASA Astrophysics Data System (ADS)

    Rozyczka, M.; Narloch, W.; Pietrukowicz, P.; Thompson, I. B.; Pych, W.; Poleski, R.

    2018-03-01

    We adapt the friends of friends algorithm to the analysis of light curves, and show that it can be successfully applied to searches for transient phenomena in large photometric databases. As a test case we search OGLE-III light curves for known dwarf novae. A single combination of control parameters allows us to narrow the search to 1% of the data while reaching a ≈90% detection efficiency. A search involving ≈2% of the data and three combinations of control parameters can be significantly more effective: in our case a 100% efficiency is reached. The method can also quite efficiently detect semi-regular variability. In particular, 28 new semi-regular variables have been found in the field of the globular cluster M22, which was examined earlier with the help of periodicity-searching algorithms.
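
    A schematic of a friends-of-friends pass over a single light curve: measurements that deviate from the median brightness are chained together when they are close in time, and long chains flag candidate transients. The linking parameters below are illustrative, not the control parameters tuned in the paper.

    ```python
    import numpy as np

    def fof_transient_groups(t, mag, dt_link=1.0, dmag=0.3, min_size=5):
        """Chain together outlying measurements closer than dt_link in time;
        chains with at least min_size members are candidate transients."""
        order = np.argsort(t)
        t, mag = np.asarray(t)[order], np.asarray(mag)[order]
        outlier = np.abs(mag - np.median(mag)) > dmag
        groups, current = [], []
        for i in np.flatnonzero(outlier):
            if current and t[i] - t[current[-1]] <= dt_link:
                current.append(i)            # i is a "friend" of the chain
            else:
                if len(current) >= min_size:
                    groups.append(current)
                current = [i]                # start a new chain
        if len(current) >= min_size:
            groups.append(current)
        return groups
    ```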

  7. A fast 4D cone beam CT reconstruction method based on the OSC-TV algorithm.

    PubMed

    Mascolo-Fortin, Julia; Matenine, Dmitri; Archambault, Louis; Després, Philippe

    2018-01-01

    Four-dimensional cone beam computed tomography allows for temporally resolved imaging with useful applications in radiotherapy, but raises particular challenges in terms of image quality and computation time. The purpose of this work is to develop a fast and accurate 4D algorithm by adapting a GPU-accelerated ordered subsets convex algorithm (OSC), combined with the total variation minimization regularization technique (TV). Different initialization schemes were studied to adapt the OSC-TV algorithm to 4D reconstruction: each respiratory phase was initialized either with a 3D reconstruction or a blank image. Reconstruction algorithms were tested on a dynamic numerical phantom and on a clinical dataset. 4D iterations were implemented for a cluster of 8 GPUs. All developed methods allowed for an adequate visualization of the respiratory movement and compared favorably to the McKinnon-Bates and adaptive steepest descent projection onto convex sets algorithms, while the 4D reconstructions initialized from a prior 3D reconstruction led to better overall image quality. The most suitable adaptation of OSC-TV to 4D CBCT was found to be a combination of a prior FDK reconstruction and a 4D OSC-TV reconstruction with a reconstruction time of 4.5 minutes. This relatively short reconstruction time could facilitate a clinical use.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Besse, Nicolas; Latu, Guillaume; Ghizzo, Alain

    In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and the regularity of the distribution function. The multiscale expansion of the distribution function therefore allows one to obtain a sparse representation of the data and thus save memory space and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. Interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wavebreaking. However, the wavelet-based adaptive method that we developed here does not yield significant improvements compared to Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless it might be a first step towards more efficient adaptive solvers based on different ideas for the grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase-space where the development of thin filaments, strongly amplified by relativistic effects, requires an important increase of the total number of points of the phase-space grid as the filaments get finer as time goes on. The adaptive method could be more useful in cases where the thin filaments that need to be resolved are a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments. Moreover, the main way to improve the efficiency of the adaptive method is to increase the locality in phase-space of the numerical scheme, by considering multiscale reconstructions with more compact support and by replacing the semi-Lagrangian method with more local (in space) numerical schemes such as compact finite difference schemes, the discontinuous Galerkin method, or finite element residual schemes, which are well suited for parallel domain decomposition techniques.

  9. Lower Tropospheric Ozone Retrievals from Infrared Satellite Observations Using a Self-Adapting Regularization Method

    NASA Astrophysics Data System (ADS)

    Eremenko, M.; Sgheri, L.; Ridolfi, M.; Dufour, G.; Cuesta, J.

    2017-12-01

    Lower tropospheric ozone (O3) retrieval from nadir sounders is challenging due to the lack of vertical sensitivity of the measurements towards the lowest layers. Even though improvements have been made during the last decade, it remains important to explore possibilities to improve the retrieval algorithms themselves. O3 retrieval from nadir satellite observations is an ill-conditioned problem, which requires regularization using constraint matrices. Up to now, most retrieval algorithms have relied on a fixed constraint, determined beforehand on the basis of sensitivity tests. This does not allow one to take advantage of the full capabilities of the satellite measurements, which vary with the thermal conditions of the observed scenes. To overcome this limitation, we developed a self-adapting, altitude-dependent regularization scheme. A crucial step is the choice of the strength of the constraint. This choice is made during an iterative process and depends on the measurement errors and on the sensitivity of the measurements to the target parameters at the different altitudes. The challenge is to limit the a priori constraint to the minimal amount needed to perform the inversion. The algorithm has been tested on synthetic observations matching the future IASI-NG satellite instrument. IASI-NG measurements are simulated on the basis of O3 concentrations taken from an atmospheric model and retrieved using two retrieval schemes (the standard and the self-adapting one). Comparison of the results shows that the sensitivity of the observations to the O3 amount in the lowest layers (given by the degrees of freedom for the solution) is increased, which allows a better description of the ozone distribution, especially in the case of large ozone plumes. Biases are reduced and the spatial correlation is improved. A tentative application to real observations from IASI, currently on board the Metop satellite, will also be presented.
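
    One way to picture the iterative strength selection: compute the averaging kernel for a candidate constraint strength and relax the constraint while it still buys degrees of freedom for the solution. Everything here (names, the stopping rule) is a schematic assumption, not the operational algorithm.

    ```python
    import numpy as np

    def averaging_kernel(K, Sinv, R, lam):
        # A = (K^T S^-1 K + lam R)^-1 K^T S^-1 K; trace(A) gives the
        # degrees of freedom for signal of the regularized retrieval.
        KtSK = K.T @ Sinv @ K
        return np.linalg.solve(KtSK + lam * R, KtSK)

    def tune_lambda(K, Sinv, R, lam0=1.0, shrink=0.5, min_dof_gain=0.01, max_steps=30):
        """Relax the constraint strength while each relaxation still increases
        the degrees of freedom by a meaningful amount."""
        lam = lam0
        dof = np.trace(averaging_kernel(K, Sinv, R, lam))
        for _ in range(max_steps):
            trial = lam * shrink
            dof_trial = np.trace(averaging_kernel(K, Sinv, R, trial))
            if dof_trial - dof < min_dof_gain:
                break                      # no useful sensitivity gained; stop relaxing
            lam, dof = trial, dof_trial
        return lam
    ```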

  10. Catalytic micromotor generating self-propelled regular motion through random fluctuation.

    PubMed

    Yamamoto, Daigo; Mukai, Atsushi; Okita, Naoaki; Yoshikawa, Kenichi; Shioi, Akihisa

    2013-07-21

    Most of the current studies on nano/microscale motors to generate regular motion have adopted the strategy of fabricating a composite of different materials. In this paper, we report that a simple object made solely of platinum generates regular motion driven by a catalytic chemical reaction with hydrogen peroxide. Depending on the morphological symmetry of the catalytic particles, a rich variety of random and regular motions are observed. The experimental trend is well reproduced by a simple theoretical model that takes into account the anisotropic viscous effect on the self-propelled active Brownian fluctuation.

  11. Catalytic micromotor generating self-propelled regular motion through random fluctuation

    NASA Astrophysics Data System (ADS)

    Yamamoto, Daigo; Mukai, Atsushi; Okita, Naoaki; Yoshikawa, Kenichi; Shioi, Akihisa

    2013-07-01

    Most of the current studies on nano/microscale motors to generate regular motion have adopted the strategy of fabricating a composite of different materials. In this paper, we report that a simple object made solely of platinum generates regular motion driven by a catalytic chemical reaction with hydrogen peroxide. Depending on the morphological symmetry of the catalytic particles, a rich variety of random and regular motions are observed. The experimental trend is well reproduced by a simple theoretical model that takes into account the anisotropic viscous effect on the self-propelled active Brownian fluctuation.

  12. Computer-aided detection of clustered microcalcifications in multiscale bilateral filtering regularized reconstructed digital breast tomosynthesis volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samala, Ravi K., E-mail: rsamala@umich.edu; Chan, Heang-Ping; Lu, Yao

    Purpose: Develop a computer-aided detection (CADe) system for clustered microcalcifications in digital breast tomosynthesis (DBT) volumes enhanced with multiscale bilateral filtering (MSBF) regularization. Methods: With Institutional Review Board approval and written informed consent, two-view DBT of 154 breasts, of which 116 had biopsy-proven microcalcification (MC) clusters and 38 were free of MCs, was imaged with a General Electric GEN2 prototype DBT system. The DBT volumes were reconstructed with the MSBF-regularized simultaneous algebraic reconstruction technique (SART), designed to enhance MCs and reduce background noise while preserving the quality of other tissue structures. The contrast-to-noise ratio (CNR) of MCs was further improved with enhancement-modulated calcification response (EMCR) preprocessing, which combined a multiscale Hessian response to enhance MCs by shape with bandpass filtering to remove the low-frequency structured background. MC candidates were then located in the EMCR volume using iterative thresholding and segmented by adaptive region growing. Two sets of potential MC objects, cluster centroid objects and MC seed objects, were generated, and the CNR of each object was calculated. The number of candidates in each set was controlled based on the breast volume. Dynamic clustering around the centroid objects grouped the MC candidates to form clusters. Adaptive criteria were designed to reduce false positive (FP) clusters based on the size, the CNR values, the number of MCs in the cluster, the cluster shape, and a cluster-based maximum intensity projection. Free-response receiver operating characteristic (FROC) and jackknife alternative FROC (JAFROC) analyses were used to assess the performance and compare it with that of a previous study. Results: An unpaired two-tailed t-test showed a significant increase (p < 0.0001) in the ratio of CNRs for MCs with and without MSBF regularization compared to similar ratios for FPs. For view-based detection, a sensitivity of 85% was achieved at an FP rate of 2.16 per DBT volume. For case-based detection, a sensitivity of 85% was achieved at an FP rate of 0.85 per DBT volume. JAFROC analysis showed a significant improvement in the performance of the current CADe system compared to that of our previous system (p = 0.003). Conclusions: MSBF-regularized SART reconstruction enhances MCs. The enhancement of the signals, in combination with properly designed adaptive threshold criteria, effective MC feature analysis, and false positive reduction techniques, leads to a significant improvement in the detection of clustered MCs in DBT.

  13. Nonlocal sparse model with adaptive structural clustering for feature extraction of aero-engine bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang

    2016-04-01

    Fault information of aero-engine bearings presents two particular phenomena, i.e., waveform distortion and impulsive feature frequency band dispersion, which pose a challenging problem for current techniques of bearing fault diagnosis. Moreover, although much progress has been made in sparse representation theory for feature extraction of fault information, the theory still suffers inevitable performance degradation because relatively weak fault information does not have sufficiently prominent and sparse representations. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithmic framework are proposed in this paper, which go beyond simple sparsity by introducing more intrinsic structures of feature information. This work exploits the underlying prior that feature information exhibits nonlocal self-similarity, by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term and a sparse optimization problem is formulated, which can be solved by the block coordinate descent (BCD) method. Additionally, an adaptive structural clustering sparse dictionary learning technique, which utilizes k-Nearest-Neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further enable sufficient sparsity of the feature information. Moreover, the selection rule for the regularization parameter and the computational complexity are described in detail. The performance of the proposed framework is evaluated through numerical experiments, and its superiority with respect to the state-of-the-art method in the field is demonstrated on vibration signals from an experimental aircraft engine bearing rig.
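
    As an illustration of the nonlocal self-similarity prior described above, the sketch below groups overlapping fragments of a vibration signal with their k nearest neighbours and learns a small PCA dictionary per group. It is a schematic stand-in for the paper's adaptive structural clustering, not the authors' implementation; the fragment length, step and k are arbitrary choices:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.decomposition import PCA

def nonlocal_groups(signal, frag_len=64, step=16, k=8, n_components=4):
    # Split the 1-D vibration signal into overlapping fragments.
    frags = np.stack([signal[i:i + frag_len]
                      for i in range(0, len(signal) - frag_len, step)])
    # Group each fragment with its k most similar fragments (kNN clustering).
    nbrs = NearestNeighbors(n_neighbors=k).fit(frags)
    _, idx = nbrs.kneighbors(frags)
    # Learn a local PCA dictionary per group; the leading components give
    # a sparse, group-adapted representation of each fragment.
    return [PCA(n_components=n_components).fit(frags[i]) for i in idx]
```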

  14. Effects of Cooperative Education on Student Adaptation to University.

    ERIC Educational Resources Information Center

    Carrell, Suzanne E.; Rowe, Patricia M.

    1993-01-01

    In a comparison of cooperative education and regular students in arts, math, and science (n=267), co-op students reported better social adjustment and attachment to the university and greater commitment to educational goals. Arts students were better adapted to university than others. (SK)

  15. The Relationship between Gentle Tactile Stimulation on the Fetus and Its Temperament 3 Months after Birth

    PubMed Central

    Wang, Zhe-Wei; Hua, Jing; Xu, Yu-Hong

    2015-01-01

    Objective. The aim of this study was to evaluate the effect of gentle tactile stimulation of the fetus on its temperament 3 months after birth. Method. A total of 302 mother-3-month-infant dyads were enrolled in this retrospective cohort study. 76 mothers had practiced regular gentle tactile stimulation of the fetus during pregnancy, 62 mothers had practiced irregular tactile stimulation, and the remaining 164 mothers, who had practiced no tactile stimulation, served as the nonexposure group. Temperament was assessed using the EITS (a nine-dimensional scale of temperament). Results. A significant difference in temperament type was found among the infants in the 3 groups at 3 months of age. In the regular practice group, babies with easy-type temperament accounted for 73.7%, which was higher than in the irregular practice group (53.2%, P = 0.012) and in the control group (42.1%, P < 0.001). Compared to infants in the no practice group, the infants who had received regular gentle tactile stimulation before birth scored lower in negative mood (P = 0.047) and higher in adaptability (P < 0.001), approach (P = 0.001), and persistence (P = 0.001), respectively. Conclusion. Regular gentle tactile stimulation of the fetus may promote the formation of an easy-type infant temperament. PMID:26180374

  16. How evolution learns to generalise: Using the principles of learning theory to understand the evolution of developmental organisation.

    PubMed

    Kouvaris, Kostas; Clune, Jeff; Kounios, Loizos; Brede, Markus; Watson, Richard A

    2017-04-01

    One of the most intriguing questions in evolution is how organisms exhibit suitable phenotypic variation to rapidly adapt in novel selective environments. Such variability is crucial for evolvability, but poorly understood. In particular, how can natural selection favour developmental organisations that facilitate adaptive evolution in previously unseen environments? Such a capacity suggests foresight that is incompatible with the short-sighted concept of natural selection. A potential resolution is provided by the idea that evolution may discover and exploit information not only about the particular phenotypes selected in the past, but their underlying structural regularities: new phenotypes, with the same underlying regularities, but novel particulars, may then be useful in new environments. If true, we still need to understand the conditions in which natural selection will discover such deep regularities rather than exploiting 'quick fixes' (i.e., fixes that provide adaptive phenotypes in the short term, but limit future evolvability). Here we argue that the ability of evolution to discover such regularities is formally analogous to learning principles, familiar in humans and machines, that enable generalisation from past experience. Conversely, natural selection that fails to enhance evolvability is directly analogous to the learning problem of over-fitting and the subsequent failure to generalise. We support the conclusion that evolving systems and learning systems are different instantiations of the same algorithmic principles by showing that existing results from the learning domain can be transferred to the evolution domain. Specifically, we show that conditions that alleviate over-fitting in learning systems successfully predict which biological conditions (e.g., environmental variation, regularity, noise or a pressure for developmental simplicity) enhance evolvability. This equivalence provides access to a well-developed theoretical framework from learning theory that enables a characterisation of the general conditions for the evolution of evolvability.

  17. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
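
    The joint log-likelihood referred to above can be written down directly: for frames y_k with PSFs h_k and object x, the Poisson model gives log L = sum_k sum_pixels (y_k log(h_k * x) - h_k * x) up to constants. A minimal numpy sketch of this objective only (the paper's PSF estimation model and iterative solver are omitted):

```python
import numpy as np
from scipy.signal import fftconvolve

def joint_poisson_loglik(obj, psfs, frames, eps=1e-12):
    """Joint log-likelihood of multi-frame AO data under Poisson noise.

    For each frame y_k with PSF h_k, the mean is m_k = h_k * obj and
    log L = sum_k sum_pixels (y_k * log m_k - m_k), constants dropped.
    """
    total = 0.0
    for psf, y in zip(psfs, frames):
        m = np.clip(fftconvolve(obj, psf, mode="same"), eps, None)
        total += np.sum(y * np.log(m) - m)
    return total
```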

  18. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    PubMed Central

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-01-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503

  19. Verifying Three-Dimensional Skull Model Reconstruction Using Cranial Index of Symmetry

    PubMed Central

    Kung, Woon-Man; Chen, Shuo-Tsung; Lin, Chung-Hsiang; Lu, Yu-Mei; Chen, Tzu-Hsuan; Lin, Muh-Shi

    2013-01-01

    Background Difficulty exists in scalp adaptation for cranioplasty with customized computer-assisted design/manufacturing (CAD/CAM) implants in situations of excessive wound tension and sub-cranioplasty dead space. To solve this clinical problem, the CAD/CAM technique should include algorithms to reconstruct a depressed contour to cover the skull defect. Satisfactory CAM-derived alloplastic implants are based on highly accurate three-dimensional (3-D) CAD modeling. Thus, it is quite important to establish a symmetrically regular CAD/CAM reconstruction prior to depressing the contour. The purpose of this study is to verify the aesthetic outcomes of CAD models with regular contours using the cranial index of symmetry (CIS). Materials and methods From January 2011 to June 2012, decompressive craniectomy (DC) was performed for 15 consecutive patients in our institute. 3-D CAD models of skull defects were reconstructed using commercial software. These models were checked in terms of symmetry by CIS scores. Results CIS scores of CAD reconstructions were 99.24±0.004% (range 98.47–99.84). CIS scores of these CAD models were statistically significantly greater than 95%, not significantly different from 99.5%, and significantly lower than 99.6% (p<0.001, p = 0.064, p = 0.021, respectively, Wilcoxon matched pairs signed rank test). These data evidence the highly accurate symmetry of these CAD models with regular contours. Conclusions CIS calculation is beneficial for assessing aesthetic outcomes of CAD-reconstructed skulls in terms of cranial symmetry. This enables further accurate CAD models and CAM cranial implants with depressed contours, which are essential in patients with difficult scalp adaptation. PMID:24204566

  20. Higher and lowest order mixed finite element approximation of subsurface flow problems with solutions of low regularity

    NASA Astrophysics Data System (ADS)

    Bause, Markus

    2008-02-01

    In this work we study mixed finite element approximations of Richards' equation for simulating variably saturated subsurface flow and simultaneous reactive solute transport. Whereas higher order schemes have proved their ability to approximate reliably reactive solute transport (cf., e.g. [Bause M, Knabner P. Numerical simulation of contaminant biodegradation by higher order methods and adaptive time stepping. Comput Visual Sci 7;2004:61-78]), the Raviart-Thomas mixed finite element method (RT0) with a first order accurate flux approximation is popular for computing the underlying water flow field (cf. [Bause M, Knabner P. Computation of variably saturated subsurface flow by adaptive mixed hybrid finite element methods. Adv Water Resour 27;2004:565-581, Farthing MW, Kees CE, Miller CT. Mixed finite element methods and higher order temporal approximations for variably saturated groundwater flow. Adv Water Resour 26;2003:373-394, Starke G. Least-squares mixed finite element solution of variably saturated subsurface flow problems. SIAM J Sci Comput 21;2000:1869-1885, Younes A, Mosé R, Ackerer P, Chavent G. A new formulation of the mixed finite element method for solving elliptic and parabolic PDE with triangular elements. J Comp Phys 149;1999:148-167, Woodward CS, Dawson CN. Analysis of expanded mixed finite element methods for a nonlinear parabolic equation modeling flow into variably saturated porous media. SIAM J Numer Anal 37;2000:701-724]). This combination might be non-optimal. Higher order techniques could increase the accuracy of the flow field calculation and thereby improve the prediction of the solute transport. Here, we analyse the application of the Brezzi-Douglas-Marini element (BDM1) with a second order accurate flux approximation to elliptic, parabolic and degenerate problems whose solutions lack the regularity that is assumed in optimal order error analyses. For the flow field calculation a superiority of the BDM1 approach to the RT0 one is observed, which however is less significant for the accompanying solute transport.
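
    For readers who want to experiment with the element choice discussed here, the sketch below swaps the lowest-order Raviart-Thomas space for BDM1 in a mixed Poisson problem using the legacy FEniCS (dolfin) API. It is a toy stand-in, not the paper's Richards'-equation solver, and assumes a legacy FEniCS installation:

```python
from dolfin import *  # legacy FEniCS (dolfin)

mesh = UnitSquareMesh(32, 32)
f = Constant(1.0)

for family in ("RT", "BDM"):  # RT0 vs BDM1 flux spaces
    V = FiniteElement(family, mesh.ufl_cell(), 1)   # flux space
    Q = FiniteElement("DG", mesh.ufl_cell(), 0)     # scalar (head) space
    W = FunctionSpace(mesh, MixedElement([V, Q]))
    (u, p) = TrialFunctions(W)
    (v, q) = TestFunctions(W)
    # Mixed weak form of -div(grad p) = f; p = 0 on the boundary is the
    # natural condition, so no essential BCs are needed for this toy case.
    a = (dot(u, v) + div(v) * p + div(u) * q) * dx
    L = -f * q * dx
    w = Function(W)
    solve(a == L, w)
```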

  1. Frequent conjugative transfer accelerates adaptation of a broad-host-range plasmid to an unfavorable Pseudomonas putida host.

    PubMed

    Heuer, Holger; Fox, Randal E; Top, Eva M

    2007-03-01

    IncP-1 plasmids are known to be promiscuous, but it is not understood if they are equally well adapted to various species within their host range. Moreover, little is known about their fate in bacterial communities. We determined if the IncP-1beta plasmid pB10 was unstable in some Proteobacteria, and whether plasmid stability was enhanced after long-term carriage in a single host and when regularly switched between isogenic hosts. Plasmid pB10 was found to be very unstable in Pseudomonas putida H2, and conferred a high cost (c. 20% decrease in fitness relative to the plasmid-free host). H2(pB10) was then evolved under conditions that selected for plasmid maintenance, with or without regular plasmid transfer (host-switching). When tested in the ancestral host, the evolved plasmids were more stable and their cost was significantly reduced (9% and 16% for plasmids from host-switched and nonswitched lineages, respectively). Our findings suggest that IncP-1 plasmids can rapidly adapt to an unfavorable host by improving their overall stability, and that regular conjugative transfer accelerates this process.

  2. Comparative genomic analysis of Lactobacillus plantarum ZJ316 reveals its genetic adaptation and potential probiotic profiles

    PubMed Central

    Li, Ping; Li, Xuan; Gu, Qing; Lou, Xiu-yu; Zhang, Xiao-mei; Song, Da-feng; Zhang, Chen

    2016-01-01

    Objective: In previous studies, Lactobacillus plantarum ZJ316 showed probiotic properties, such as antimicrobial activity against various pathogens and the capacity to significantly improve pig growth and pork quality. The purpose of this study was to reveal the genes potentially related to its genetic adaptation and probiotic profiles based on comparative genomic analysis. Methods: The genome sequence of L. plantarum ZJ316 was compared with those of eight L. plantarum strains deposited in GenBank. BLASTN, Mauve, and MUMmer programs were used for genome alignment and comparison. CRISPRFinder was applied for searching the clustered regularly interspaced short palindromic repeats (CRISPRs). Results: We identified genes that encode proteins related to genetic adaptation and probiotic profiles, including carbohydrate transport and metabolism, proteolytic enzyme systems and amino acid biosynthesis, CRISPR adaptive immunity, stress responses, bile salt resistance, ability to adhere to the host intestinal wall, exopolysaccharide (EPS) biosynthesis, and bacteriocin biosynthesis. Conclusions: Comparative characterization of the L. plantarum ZJ316 genome provided the genetic basis for further elucidating the functional mechanisms of its probiotic properties. ZJ316 could be considered a potential probiotic candidate. PMID:27487802

  3. Interactive prostate segmentation using atlas-guided semi-supervised learning and adaptive feature selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Sang Hyun; Gao, Yaozong, E-mail: yzgao@cs.unc.edu; Shi, Yinghuan, E-mail: syh@nju.edu.cn

    Purpose: Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy of prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and interobserver variations across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required due to the limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate the editing result with a few scribbles or dots provided by a clinician, (2) quickly deliver intermediate results to the clinician, and (3) sequentially correct the segmentations from any type of automatic or interactive segmentation method. Methods: The authors formulate the editing problem as a semisupervised learning problem which can utilize a priori knowledge of training data and also the valuable information from user interactions. Specifically, from a region of interest near the given user interactions, the appropriate training labels, which are well matched with the user interactions, can be locally searched from a training set. With voting from the selected training labels, both confident prostate and background voxels, as well as unconfident voxels, can be estimated. To reflect the informative relationship between voxels, location-adaptive features are selected from the confident voxels by using regression forest and the Fisher separation criterion. Then, the manifold configuration computed in the derived feature space is enforced into the semisupervised learning algorithm. The labels of unconfident voxels are then predicted by the regularized semisupervised learning algorithm. Results: The proposed interactive segmentation method was applied to correct automatic segmentation results of 30 challenging CT images. The correction was conducted three times with different user interactions performed at different time periods, in order to evaluate both the efficiency and the robustness. The automatic segmentation results with the original average Dice similarity coefficient of 0.78 were improved to 0.865–0.872 after conducting 55–59 interactions by using the proposed method, where each editing procedure took less than 3 s. In addition, the proposed method obtained the most consistent editing results with respect to different user interactions, compared to other methods. Conclusions: The proposed method obtains robust editing results with few interactions for various wrong segmentation cases, by selecting the location-adaptive features and further imposing the manifold regularization. The authors expect the proposed method to largely reduce the laborious burdens of manual editing, as well as both the intra- and interobserver variability across clinicians.
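
    The semisupervised core of this pipeline, in which confident voxels act as labels and unconfident voxels are predicted under a graph/manifold regularization, can be illustrated with scikit-learn's LabelSpreading. This is an analogue of the idea, not the authors' algorithm; the features and label counts below are hypothetical:

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

# Hypothetical feature vectors for voxels in a region of interest:
# confident prostate voxels -> 1, confident background -> 0, unconfident -> -1.
X = np.random.rand(1000, 10)      # location-adaptive features per voxel
y = -np.ones(1000, dtype=int)     # -1 marks unlabeled (unconfident) voxels
y[:100] = 1                       # confident prostate voxels
y[100:200] = 0                    # confident background voxels

model = LabelSpreading(kernel="knn", n_neighbors=7, alpha=0.2)
model.fit(X, y)                   # graph-based manifold regularization
predicted = model.transduction_   # inferred labels for unconfident voxels
```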

  4. Gradient maintenance: A new algorithm for fast online replanning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahunbay, Ergun E., E-mail: eahunbay@mcw.edu; Li, X. Allen

    2015-06-15

    Purpose: Clinical use of online adaptive replanning has been hampered by the impractically long time required to delineate volumes based on the image of the day. The authors propose a new replanning algorithm, named gradient maintenance (GM), which does not require the delineation of organs at risk (OARs), and can enhance automation, drastically reducing planning time and improving the consistency and throughput of online replanning. Methods: The proposed GM algorithm is based on the hypothesis that if the dose gradient toward each OAR in the daily anatomy can be maintained the same as that in the original plan, the intended plan quality of the original plan will be preserved in the adaptive plan. The algorithm requires a series of partial concentric rings (PCRs) to be automatically generated around the target toward each OAR on the planning and the daily images. The PCRs are used in the daily optimization objective function. The PCR dose constraints are generated with dose–volume data extracted from the original plan. To demonstrate this idea, GM plans generated using daily images acquired using an in-room CT were compared to regular optimization and image guided radiation therapy repositioning plans for representative prostate and pancreatic cancer cases. Results: The adaptive replanning using the GM algorithm, requiring only the target contour from the CT of the day, can be completed within 5 min without using high-power hardware. The obtained adaptive plans were almost as good as the regular optimization plans and were better than the repositioning plans for the cases studied. Conclusions: The newly proposed GM replanning algorithm, requiring only target delineation, not full delineation of OARs, substantially increased planning speed for online adaptive replanning. The preliminary results indicate that the GM algorithm may be a solution to improve the ability for automation and may be especially suitable for sites with small-to-medium size targets surrounded by several critical structures.
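
    The ring construction lends itself to a simple distance-transform implementation: each ring is a level band of the distance to the target surface. A hedged sketch with hypothetical names (restricting each ring to the sector facing a particular OAR, as the PCRs require, is omitted):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def concentric_rings(target_mask, ring_width_mm, n_rings, voxel_mm=1.0):
    """Binary ring masks at increasing distance from the target contour."""
    # Distance (in mm) from every voxel outside the target to its surface.
    dist = distance_transform_edt(~target_mask) * voxel_mm
    rings = []
    for i in range(n_rings):
        inner, outer = i * ring_width_mm, (i + 1) * ring_width_mm
        rings.append((dist > inner) & (dist <= outer))
    return rings
```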

  5. A weakly-compressible Cartesian grid approach for hydrodynamic flows

    NASA Astrophysics Data System (ADS)

    Bigay, P.; Oger, G.; Guilcher, P.-M.; Le Touzé, D.

    2017-11-01

    The present article aims at proposing an original strategy to solve hydrodynamic flows. In the introduction, the motivations for this strategy are developed. It aims at modeling viscous and turbulent flows including complex moving geometries, while avoiding meshing constraints. The proposed approach relies on a weakly-compressible formulation of the Navier-Stokes equations. Unlike most hydrodynamic CFD (Computational Fluid Dynamics) solvers, which are usually based on implicit incompressible formulations, a fully-explicit temporal scheme is used. A purely Cartesian grid is adopted for numerical accuracy and algorithmic simplicity purposes. This characteristic allows easy use of Adaptive Mesh Refinement (AMR) methods embedded within a massively parallel framework. Geometries are automatically immersed within the Cartesian grid with an AMR-compatible treatment. The method proposed uses an Immersed Boundary Method (IBM) adapted to the weakly-compressible formalism and imposed smoothly through a regularization function, which stands as another originality of this work. All these features have been implemented within an in-house solver based on this WCCH (Weakly-Compressible Cartesian Hydrodynamic) method, which meets the above requirements whilst allowing the use of high-order (> 3) spatial schemes rarely used in existing hydrodynamic solvers. The details of this WCCH method are presented and validated in this article.
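
    The article's specific regularization function is not reproduced in this record; a common choice for imposing an immersed boundary smoothly is a smeared Heaviside of the signed distance to the body, as in this hedged sketch:

```python
import numpy as np

def smooth_body_mask(signed_dist, eps):
    """Regularized indicator of an immersed body.

    signed_dist < 0 inside the body; eps controls the smoothing width
    (typically a few grid cells). chi goes smoothly from 1 inside to 0
    outside, so the IBM forcing is applied without a sharp jump on the
    Cartesian grid.
    """
    return 0.5 * (1.0 - np.tanh(signed_dist / eps))
```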

  6. The Detroit Approach to Adapted Physical Education and Recreation.

    ERIC Educational Resources Information Center

    Elkins, Bruce; Czapski, Stephen

    The report describes Detroit's Adaptive Physical Education Consortium Project in Michigan. Among the main objectives of the project are to coordinate all physical education and recreation services to the handicapped in the Detroit area; to facilitate the mainstreaming of capable handicapped individuals into existing "regular" physical…

  7. On the Use of Nonlinear Regularization in Inverse Methods for the Solar Tachocline Profile Determination

    NASA Astrophysics Data System (ADS)

    Corbard, T.; Berthomieu, G.; Provost, J.; Blanc-Feraud, L.

    Inferring the solar rotation from observed frequency splittings represents an ill-posed problem in the sense of Hadamard, and the traditional approach used to override this difficulty consists in regularizing the problem by adding some a priori information on the global smoothness of the solution, defined as the norm of its first or second derivative. Nevertheless, inversions of rotational splittings (e.g. Corbard et al., 1998; Schou et al., 1998) have shown that the surface layers and the so-called solar tachocline (Spiegel & Zahn 1992) at the base of the convection zone are regions in which high radial gradients of the rotation rate occur. Therefore, the global smoothness a priori, which tends to smooth out every high gradient in the solution, may not be appropriate for the study of a zone like the tachocline, which is of particular interest for the study of solar dynamics (e.g. Elliot 1997). In order to infer the fine structure of such regions with high gradients by inverting helioseismic data, we have to find a way to preserve these zones in the inversion process. Setting a more adapted constraint on the solution leads to non-linear regularization methods that are in current use for edge-preserving regularization in computed imaging (e.g. Blanc-Feraud et al. 1995). In this work, we investigate their use in the helioseismic context of rotational inversions.
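
    A schematic 1-D version of such edge-preserving inversion replaces the quadratic smoothness penalty ||Dx||^2 with sum phi(Dx) for a phi that grows only linearly at large gradients, here phi(t) = sqrt(t^2 + delta^2). This is an illustrative sketch, not the authors' scheme; the operator A, data d, and all parameters are hypothetical:

```python
import numpy as np

def edge_preserving_invert(A, d, lam=1e-2, delta=1e-3,
                           n_iter=2000, lr=1e-3):
    """Minimize ||A x - d||^2 + lam * sum phi(D x), phi(t) = sqrt(t^2 + d^2).

    Unlike a quadratic (Tikhonov) penalty, phi grows linearly for large
    gradients, so sharp features such as the tachocline are preserved.
    Plain gradient descent; lr must suit the scaling of A.
    """
    n = A.shape[1]
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]   # first-difference operator
    x = np.zeros(n)
    for _ in range(n_iter):
        g = D @ x
        grad = 2 * A.T @ (A @ x - d) \
             + lam * D.T @ (g / np.sqrt(g**2 + delta**2))  # phi'(Dx)
        x -= lr * grad
    return x
```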

  8. Perspectives on the Integration of Regular and Special Education: Eliminating the Knowledge Dichotomy at the University Level.

    ERIC Educational Resources Information Center

    Aldinger, Loviah E., Ed.

    Five papers describe ways to integrate knowledge from regular and special education at the university level. L. Hudson and M. Carroll ("The Preservice Teacher Experiences Variation in the Meaning Making of Handicapped and Nonhandicapped Learners") review adaptations in a competency based teacher education program to include information on high…

  9. A numerical study of adaptive space and time discretisations for Gross–Pitaevskii equations

    PubMed Central

    Thalhammer, Mechthild; Abhau, Jochen

    2012-01-01

    As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross–Pitaevskii equation arising in the description of Bose–Einstein condensates in dilute gases. The performance of the Fourier pseudo-spectral method constrained to uniform meshes versus the locally adaptive finite element method, and of higher-order exponential operator splitting methods with variable time stepsizes, is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations, with an enhancement in either efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross–Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0<ε≪1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constricts the feasibility of numerical computations for both the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values, locally adaptive time discretisations make it feasible to choose the time stepsizes sufficiently small that the numerical approximation captures correctly the behaviour of the analytical solution. Further illustrations for Gross–Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study. PMID:25550676
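
    The local time stepsize control mentioned above follows the standard embedded-pair pattern. A generic sketch of such a controller is shown below; the GPE-specific splitting step and error estimator are abstracted into callables, and all names are hypothetical:

```python
def adaptive_steps(step, err_est, t_end, y0, dt0, tol, p):
    """Advance y with a method of order p, adapting dt to a local error tol.

    `step(y, dt)` performs one time step; `err_est(y, dt)` returns a local
    error estimate, e.g. from an embedded splitting pair.
    """
    t, y, dt = 0.0, y0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        err = err_est(y, dt)
        if err <= tol:                       # accept the step
            y, t = step(y, dt), t + dt
        # Standard controller: steer dt toward the tolerance, with safety
        # factor 0.9 and growth/shrink limits.
        dt *= min(2.0, max(0.2,
                  0.9 * (tol / max(err, 1e-16)) ** (1.0 / (p + 1))))
    return y
```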

  10. A numerical study of adaptive space and time discretisations for Gross-Pitaevskii equations.

    PubMed

    Thalhammer, Mechthild; Abhau, Jochen

    2012-08-15

    As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross-Pitaevskii equation arising in the description of Bose-Einstein condensates in dilute gases. The performance of the Fourier pseudo-spectral method constrained to uniform meshes versus the locally adaptive finite element method, and of higher-order exponential operator splitting methods with variable time stepsizes, is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations, with an enhancement in either efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross-Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0<ε≪1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constricts the feasibility of numerical computations for both the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values, locally adaptive time discretisations make it feasible to choose the time stepsizes sufficiently small that the numerical approximation captures correctly the behaviour of the analytical solution. Further illustrations for Gross-Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study.

  11. Multidimensional NMR inversion without Kronecker products: Multilinear inversion

    NASA Astrophysics Data System (ADS)

    Medellín, David; Ravi, Vivek R.; Torres-Verdín, Carlos

    2016-08-01

    Multidimensional NMR inversion using Kronecker products poses several challenges. First, kernel compression is only possible when the kernel matrices are separable, and in recent years, there has been an increasing interest in NMR sequences with non-separable kernels. Second, in three or more dimensions, the singular value decomposition is not unique; therefore kernel compression is not well-defined for higher dimensions. Without kernel compression, the Kronecker product yields matrices that require large amounts of memory, making the inversion intractable for personal computers. Finally, incorporating arbitrary regularization terms is not possible using the Lawson-Hanson (LH) or the Butler-Reeds-Dawson (BRD) algorithms. We develop a minimization-based inversion method that circumvents the above problems by using multilinear forms to perform multidimensional NMR inversion without using kernel compression or Kronecker products. The new method is memory efficient, requiring less than 0.1% of the memory required by the LH or BRD methods. It can also be extended to arbitrary dimensions and adapted to include non-separable kernels, linear constraints, and arbitrary regularization terms. Additionally, it is easy to implement because only a cost function and its first derivative are required to perform the inversion.
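
    The gain from avoiding Kronecker products is easy to see for a separable 2-D kernel: the forward model and the gradient of a least-squares cost can both be evaluated as multilinear forms on the small kernel matrices. A sketch with hypothetical sizes (the paper's solver, non-negativity constraints and regularization terms are omitted):

```python
import numpy as np

n1, n2, m1, m2 = 200, 200, 50, 60
K1 = np.random.rand(m1, n1)      # kernel in dimension 1 (e.g. T1 relaxation)
K2 = np.random.rand(m2, n2)      # kernel in dimension 2 (e.g. T2 relaxation)
F = np.random.rand(n1, n2)       # 2-D distribution to be recovered

# Kronecker route: an (m1*m2) x (n1*n2) matrix, ~1.2e8 entries here.
# Multilinear route: two small matrix products, no large matrix formed.
M = np.einsum('ij,jk,lk->il', K1, F, K2)   # M = K1 @ F @ K2.T

# A least-squares cost and its gradient (all a first-order minimization
# method needs) are equally Kronecker-free:
def cost_and_grad(F, data):
    R = np.einsum('ij,jk,lk->il', K1, F, K2) - data   # residual K1 F K2^T - M
    return 0.5 * np.sum(R**2), np.einsum('ij,il,lk->jk', K1, R, K2)  # K1^T R K2
```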

  12. Blur kernel estimation with algebraic tomography technique and intensity profiles of object boundaries

    NASA Astrophysics Data System (ADS)

    Ingacheva, Anastasia; Chukalina, Marina; Khanipov, Timur; Nikolaev, Dmitry

    2018-04-01

    Motion blur caused by camera vibration is a common source of degradation in photographs. In this paper we study the problem of finding the point spread function (PSF) of a blurred image using the tomography technique. The PSF reconstruction result strongly depends on the particular tomography technique used. We present a tomography algorithm with regularization adapted specifically for this task. We use the algebraic reconstruction technique (ART algorithm) as the starting algorithm and introduce regularization. We use the conjugate gradient method for the numerical implementation of the proposed approach. The algorithm is tested on a dataset of 9 kernels extracted by Adobe from real photographs for which the point spread function is known. We also investigate the influence of noise on the quality of image reconstruction and how the number of projections influences the reconstruction error.
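
    As a simplified stand-in for the regularized reconstruction (the paper uses a regularized ART solved with conjugate gradients), one can pose Tikhonov-regularized least squares for the flattened PSF and hand the normal equations to scipy's CG. The system matrix A and data b below are hypothetical placeholders for the projection geometry and measured profiles:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def estimate_psf(A, b, lam, n):
    """Solve (A^T A + lam I) k = A^T b for the flattened blur kernel k.

    A maps a flattened PSF to the stacked projection measurements
    (one row per ray); b holds the measured intensity profiles.
    """
    def matvec(k):
        return A.T @ (A @ k) + lam * k
    op = LinearOperator((n, n), matvec=matvec)
    k, info = cg(op, A.T @ b, atol=1e-8)
    assert info == 0, "CG did not converge"
    return k
```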

  13. Reading Proficiency and Adaptability in Orthographic Processing: An Examination of the Effect of Type of Orthography Read on Brain Activity in Regular and Dyslexic Readers

    PubMed Central

    Bar-Kochva, Irit; Breznitz, Zvia

    2014-01-01

    Regular readers were found to adjust the routine of reading to the demands of processing imposed by different orthographies. Dyslexic readers may lack such adaptability in reading. This hypothesis was tested among readers of Hebrew, as Hebrew has two forms of script differing in phonological transparency. Event-related potentials were recorded from 24 regular and 24 dyslexic readers while they carried out a lexical decision task in these two forms of script. The two forms of script elicited distinct amplitudes and latencies at ∼165 ms after target onset, and these effects were larger in regular than in dyslexic readers. These early effects appeared not to be merely a result of the visual difference between the two forms of script (the presence of diacritics). The next effect of form of script was obtained on amplitudes elicited at latencies associated with orthographic-lexical processing and the categorization of stimuli, and these appeared earlier in regular readers (∼340 ms) than in dyslexic readers (∼400 ms). The behavioral measures showed inferior reading skills of dyslexic readers compared to regular readers in reading of both forms of script. Taken together, the results suggest that although dyslexic readers are not indifferent to the type of orthography read, they fail to adjust the routine of reading to the demands of processing imposed by both a transparent and an opaque orthography. PMID:24465844

  14. Bas-relief generation using adaptive histogram equalization.

    PubMed

    Sun, Xianfang; Rosin, Paul L; Martin, Ralph R; Langbein, Frank C

    2009-01-01

    An algorithm is presented to automatically generate bas-reliefs based on adaptive histogram equalization (AHE), starting from an input height field. A mesh model may alternatively be provided, in which case a height field is first created via orthogonal or perspective projection. The height field is regularly gridded and treated as an image, enabling a modified AHE method to be used to generate a bas-relief with a user-chosen height range. We modify the original image-contrast-enhancement AHE method to use gradient weights also to enhance the shape features of the bas-relief. To effectively compress the height field, we limit the height-dependent scaling factors used to compute relative height variations in the output from height variations in the input; this prevents any height differences from having too great effect. Results of AHE over different neighborhood sizes are averaged to preserve information at different scales in the resulting bas-relief. Compared to previous approaches, the proposed algorithm is simple and yet largely preserves original shape features. Experiments show that our results are, in general, comparable to and in some cases better than the best previously published methods.
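
    The core compression step, treating the gridded height field as an image and equalizing it adaptively, can be reproduced with scikit-image's CLAHE. The gradient weighting and multi-neighborhood averaging the authors add on top are omitted in this hedged sketch:

```python
import numpy as np
from skimage import exposure

def basrelief_from_heightfield(height, out_range=1.0, clip_limit=0.03):
    """Compress a height field into a bas-relief height range via AHE.

    Only the central step of the paper's pipeline: normalize the gridded
    height field, apply contrast-limited adaptive histogram equalization,
    then rescale to the user-chosen output height range.
    """
    h = (height - height.min()) / (np.ptp(height) + 1e-12)  # map to [0, 1]
    eq = exposure.equalize_adapthist(h, clip_limit=clip_limit)
    return eq * out_range
```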

  15. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism, but this parallelism can be difficult to exploit, particularly on complex problems. One approach to extracting this parallelism is the use of special-purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  16. Leveraging Multi-University Collaboration to Develop Portable and Adaptable Online Course Content

    ERIC Educational Resources Information Center

    Frolik, Jeff; Flikkema, Paul G.; Weller, Tom; Haden, Carol; Shiroma, Wayne; Franklin, Rhonda

    2013-01-01

    Individual faculty and institutions regularly develop novel educational materials that could benefit others, but these innovations often fail to gain traction outside the developers' circle as barriers to adoption are numerous. We present evidence that development targeting adaptation, rather than complete adoption, of innovative materials and…

  17. Leveling the Playing Field: Adapted PE Brings Together Kids with and without Disabilities.

    ERIC Educational Resources Information Center

    Jarrett, Denise

    2000-01-01

    Adapted physical education (APE) makes whatever adjustments are needed to allow students with disabilities to participate in regular physical education classes. APE at Beaverton School District (Oregon) is described, as well as the individualized special physical education classes provided to children with severe disabilities. (SV)

  18. A practical globalization of one-shot optimization for optimal design of tokamak divertors

    NASA Astrophysics Data System (ADS)

    Blommaert, Maarten; Dekeyser, Wouter; Baelmans, Martine; Gauger, Nicolas R.; Reiter, Detlev

    2017-01-01

    In past studies, nested optimization methods were successfully applied to the design of the magnetic divertor configuration in nuclear fusion reactors. In this paper, so-called one-shot optimization methods are pursued. Due to convergence issues, a globalization strategy for the one-shot solver is sought. Whereas Griewank introduced a globalization strategy using a doubly augmented Lagrangian function that includes primal and adjoint residuals, its practical usability is limited by the necessity of second-order derivatives and expensive line search iterations. In this paper, a practical alternative is offered that avoids these drawbacks by using a regular augmented Lagrangian merit function that penalizes only state residuals. Additionally, robust rank-two Hessian estimation is achieved by adaptation of Powell's damped BFGS update rule. The application of the novel one-shot approach to magnetic divertor design is considered in detail. For this purpose, the approach is adapted to be complementary with practical in-parts adjoint sensitivities. Using the globalization strategy, stable convergence of the one-shot approach is achieved.
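
    Powell's damped BFGS update referenced above has a compact standard form, stated below in numpy independently of the divertor application (B is the current Hessian approximation, s the step, y the gradient change):

```python
import numpy as np

def damped_bfgs_update(B, s, y, theta_min=0.2):
    """Powell-damped BFGS update of the Hessian approximation B.

    If the curvature s^T y is too small (or negative), y is blended with
    B s so that the update stays positive definite; this damping is what
    keeps the estimate robust inside an augmented-Lagrangian one-shot loop.
    """
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy < theta_min * sBs:                     # insufficient curvature
        theta = (1.0 - theta_min) * sBs / (sBs - sy)
        y = theta * y + (1.0 - theta) * Bs       # damped gradient change
        sy = s @ y
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy
```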

  19. Improving thoracic four-dimensional cone-beam CT reconstruction with anatomical-adaptive image regularization (AAIR)

    PubMed Central

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T; Cooper, Benjamin J; Kuncic, Zdenka; Keall, Paul J

    2015-01-01

    Total-variation (TV) minimization reconstructions can significantly reduce noise and streaks in thoracic four-dimensional cone-beam computed tomography (4D CBCT) images compared to the Feldkamp-Davis-Kress (FDK) algorithm currently used in practice. TV minimization reconstructions are, however, prone to over-smoothing anatomical details and are also computationally inefficient. The aim of this study is to demonstrate a proof of concept that these disadvantages can be overcome by incorporating general knowledge of the thoracic anatomy, via anatomy segmentation, into the reconstruction. The proposed method, referred to as the anatomical-adaptive image regularization (AAIR) method, utilizes the adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS) framework, but introduces an additional anatomy segmentation step in every iteration. The anatomy segmentation information is implemented in the reconstruction using a heuristic approach to adaptively suppress over-smoothing at anatomical structures of interest. The performance of AAIR depends on parameters describing the weighting of the anatomy segmentation prior and segmentation threshold values. A sensitivity study revealed that the reconstruction outcome is not sensitive to these parameters as long as they are chosen within a suitable range. AAIR was validated using a digital phantom and a patient scan, and was compared to FDK, ASD-POCS, and the prior image constrained compressed sensing (PICCS) method. For the phantom case, AAIR reconstruction was quantitatively shown to be the most accurate as indicated by the mean absolute difference and the structural similarity index. For the patient case, AAIR resulted in the highest signal-to-noise ratio (i.e. the lowest level of noise and streaking) and the highest contrast-to-noise ratios for the tumor and the bony anatomy (i.e. the best visibility of anatomical details). Overall, AAIR was much less prone to over-smoothing anatomical details compared to ASD-POCS, and did not suffer from residual noise/streaking and motion blur migrated from the prior image as in PICCS. AAIR was also found to be more computationally efficient than both ASD-POCS and PICCS, with a reduction in computation time of over 50% compared to ASD-POCS. The use of anatomy segmentation was, for the first time, demonstrated to significantly improve image quality and computational efficiency for thoracic 4D CBCT reconstruction. Further developments are required to facilitate AAIR for practical use. PMID:25565244
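
    In the spirit of AAIR's heuristic (though not its actual implementation), segmentation information can enter a TV-type reconstruction as a per-voxel weight that relaxes smoothing inside the anatomy of interest. A single 2-D descent step on such a weighted TV term might look like this, with all parameter values hypothetical:

```python
import numpy as np

def weighted_tv_step(img, seg_mask, step=0.1, w_anatomy=0.2, eps=1e-8):
    """One descent step on a spatially weighted TV term (2-D slice sketch).

    Voxels inside the segmented anatomy get a small weight so edges and
    fine structures are smoothed less; background keeps full TV strength.
    """
    w = np.where(seg_mask, w_anatomy, 1.0)
    gx = np.diff(img, axis=0, append=img[-1:, :])    # forward differences
    gy = np.diff(img, axis=1, append=img[:, -1:])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    # Divergence of the normalized gradient gives the (negative) TV
    # subgradient; adding it smooths the image.
    div = (np.diff(gx / mag, axis=0, prepend=0) +
           np.diff(gy / mag, axis=1, prepend=0))
    return img + step * w * div
```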

  20. Adaptation of the World Health Organization's Selected Practice Recommendations for Contraceptive Use for the United States.

    PubMed

    Curtis, Kathryn M; Tepper, Naomi K; Jamieson, Denise J; Marchbanks, Polly A

    2013-05-01

    The Centers for Disease Control and Prevention (CDC) recently adapted global guidance on contraceptive use from the World Health Organization (WHO) to create the US Selected Practice Recommendations for Contraceptive Use (US SPR). The WHO guidance includes evidence-based recommendations on common, yet sometimes complex, contraceptive management questions. We determined the need and scope for the adaptation, conducted 30 systematic reviews of the scientific evidence and convened a meeting of health care professionals to discuss translation of the evidence into recommendations. The US SPR provides recommendations on contraceptive management issues such as how to initiate contraceptive methods, what regular follow-up is needed, and how to address problems, including missed pills and side effects such as unscheduled bleeding. The US SPR is intended to serve as a source of clinical guidance for providers in assisting women and men to initiate and successfully use contraception to prevent unintended pregnancy. Published by Elsevier Inc.

  1. CRISPR/Cas9 Platforms for Genome Editing in Plants: Developments and Applications.

    PubMed

    Ma, Xingliang; Zhu, Qinlong; Chen, Yuanling; Liu, Yao-Guang

    2016-07-06

    The clustered regularly interspaced short palindromic repeat (CRISPR)-associated protein 9 (Cas9) genome editing system (CRISPR/Cas9) is adapted from the prokaryotic type II adaptive immunity system. The CRISPR/Cas9 tool surpasses other programmable nucleases, such as ZFNs and TALENs, in its simplicity and high efficiency. Various plant-specific CRISPR/Cas9 vector systems have been established to adapt this technology to many plant species. In this review, we present an overview of current advances in the application of this technology in plants, emphasizing general considerations for the establishment of CRISPR/Cas9 vector platforms, strategies for multiplex editing, methods for analyzing the induced mutations, factors affecting editing efficiency and specificity, and features of the induced mutations, as well as applications of the CRISPR/Cas9 system in plants. In addition, we provide a perspective on the challenges of CRISPR/Cas9 technology and its significance for basic plant research and crop genetic improvement. Copyright © 2016 The Author. Published by Elsevier Inc. All rights reserved.

  2. Self-Avoiding Walks Over Adaptive Triangular Grids

    NASA Technical Reports Server (NTRS)

    Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)

    1999-01-01

    Space-filling curves are a popular approach, based on a geometric embedding, for linearizing computational meshes. We present a new O(n log n) combinatorial algorithm for constructing a self-avoiding walk through a two-dimensional mesh containing n triangles. We show that for hierarchical adaptive meshes, the algorithm can be locally adapted and easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the runtime partitioning and load balancing of adaptive unstructured grids.

  3. On the adaptive flexibility of evaluative priming.

    PubMed

    Fiedler, Klaus; Bluemke, Matthias; Unkelbach, Christian

    2011-05-01

    If priming effects serve an adaptive function, they have to be both robust and flexible. In four experiments, we demonstrated regular evaluative-priming effects for relatively long stimulus-onset asynchronies, which can, however, be eliminated or reversed strategically. When participants responded to both primes and targets, rather than only to targets, the standard congruity effect disappeared. In Experiments 1a-1c, this result was regularly obtained, independently of the prime response (valence or gender classification) and the response mode (pronunciation or keystroke). In Experiment 2, we showed that once the default congruity effect was eliminated, strategic-priming effects reflected the statistical contingency between prime valence and target valence. Positive contingencies produced congruity, whereas negative contingencies produced equally strong incongruity effects. Altogether, these findings are consistent with an adaptive-cognitive perspective, which highlights the role of flexible strategic processes in working memory as opposed to fixed structures in semantic long-term memory or in the sensorimotor system.

  4. Adaptive non-local means on local principle neighborhood for noise/artifacts reduction in low-dose CT images.

    PubMed

    Zhang, Yuanke; Lu, Hongbing; Rong, Junyan; Meng, Jing; Shang, Junliang; Ren, Pinghong; Zhang, Junying

    2017-09-01

    Low-dose CT (LDCT) techniques can reduce the x-ray radiation exposure to patients, at the cost of degraded images with severe noise and artifacts. Non-local means (NLM) filtering has shown its potential in improving LDCT image quality. However, currently most NLM-based approaches employ a weighted average operation directly on all neighbor pixels with a fixed filtering parameter throughout the NLM filtering process, ignoring the non-stationary noise nature of LDCT images. In this paper, an adaptive NLM filtering scheme on local principle neighborhoods (PC-NLM) is proposed for structure-preserving noise/artifacts reduction in LDCT images. Instead of using neighboring patches directly, in the PC-NLM scheme principal component analysis (PCA) is first applied to local neighboring patches of the target patch to decompose the local patches into uncorrelated principal components (PCs); NLM filtering is then used to regularize each PC of the target patch, and finally the regularized components are transformed back to obtain the target patch in the image domain. In particular, in the NLM scheme the filtering parameter is estimated adaptively from the local noise level of the neighborhood as well as the signal-to-noise ratio (SNR) of the corresponding PC, which guarantees "weaker" NLM filtering on PCs with higher SNR and "stronger" filtering on PCs with lower SNR. The PC-NLM procedure is performed iteratively several times for better removal of the noise and artifacts, and an adaptive iteration strategy is developed to reduce the computational load by determining whether a patch should be processed in the next round of PC-NLM filtering. The effectiveness of the presented PC-NLM algorithm is validated by experimental phantom studies and clinical studies. The results show that it can achieve promising gains over some state-of-the-art methods in terms of artifact suppression and structure preservation. With the use of PCA on local neighborhoods to extract principal structural components, as well as adaptive NLM filtering on PCs of the target patch using a filtering parameter estimated from the local noise level and corresponding SNR, the proposed PC-NLM method shows its efficacy in preserving fine anatomical structures and suppressing noise/artifacts in LDCT images. © 2017 American Association of Physicists in Medicine.
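
    The essence of PC-NLM, filtering each principal component of a patch according to its SNR, can be sketched for a single target patch as below. A Wiener-type shrinkage stands in for the per-component NLM filtering, and the noise level sigma_noise is assumed known; this is an illustration, not the authors' algorithm:

```python
import numpy as np

def pc_filter_patch(target, neighbors, sigma_noise):
    """Filter one patch via PCA of its local neighborhood (PC-NLM essence).

    `neighbors` is an (N, d) stack of flattened nearby patches, `target`
    a flattened (d,) patch. Each principal component is attenuated
    according to its SNR: high-SNR components pass nearly unchanged
    ("weaker" filtering), low-SNR components are shrunk ("stronger").
    """
    mean = neighbors.mean(axis=0)
    X = neighbors - mean
    cov = X.T @ X / len(X)
    evals, evecs = np.linalg.eigh(cov)             # PCs of the local patches
    coeffs = evecs.T @ (target - mean)             # target in PC coordinates
    snr = np.maximum(evals - sigma_noise**2, 0.0)  # signal power per PC
    shrink = snr / (snr + sigma_noise**2 + 1e-12)  # Wiener-type attenuation
    return mean + evecs @ (shrink * coeffs)
```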

  5. Regularized minimum I-divergence methods for the inverse blackbody radiation problem

    NASA Astrophysics Data System (ADS)

    Choi, Kerkil; Lanterman, Aaron D.; Shin, Jaemin

    2006-08-01

    This paper proposes iterative methods for estimating the area temperature distribution of a blackbody from its total radiated power spectrum measurements. This is called the inverse blackbody radiation problem. This problem is inherently ill-posed due to the characteristics of the kernel in the underlying integral equation given by Planck's law. The functions involved in the problem are all non-negative. Csiszár's I-divergence is an information-theoretic discrepancy measure between two non-negative functions. We derive iterative methods for minimizing Csiszár's I-divergence between the measured power spectrum and the power spectrum arising from the estimate according to the integral equation. Due to the ill-posedness of the problem, unconstrained algorithms often produce poor estimates, especially when the measurements are corrupted by noise. To alleviate this difficulty, we apply regularization methods to our algorithms. Penalties based on Shannon's entropy, the L1-norm and Good's roughness are chosen to suppress the undesirable artefacts. When a penalty is applied, the pertinent optimization that needs to be performed at each iteration is no longer trivial. In particular, Good's roughness causes couplings between estimate components. To handle this issue, we adapt Green's one-step-late method. This choice is based on the important fact that our minimum I-divergence algorithms can be interpreted as asymptotic forms of certain expectation-maximization algorithms. The effectiveness of our methods is illustrated via various numerical experiments.
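
    Because the minimum I-divergence iterations are EM-type, the unregularized update has the familiar multiplicative (Richardson-Lucy) form. The sketch below shows that baseline iteration; the paper's penalties and Green's one-step-late correction are noted in a comment but omitted:

```python
import numpy as np

def min_idiv(K, y, n_iter=200, eps=1e-12):
    """Multiplicative minimum I-divergence iteration for y ≈ K a.

    K: discretized Planck-law kernel (power-spectrum bins x temperature
    bins), y: measured total radiated power spectrum, a: area-temperature
    distribution. All quantities are non-negative; each update is the
    EM / Richardson-Lucy step that decreases Csiszar's I-divergence.
    Green's one-step-late regularization would divide the denominator by
    (1 + lam * dPenalty(a)); it is omitted in this unregularized sketch.
    """
    a = np.full(K.shape[1], y.sum() / K.sum())   # flat non-negative start
    norm = K.sum(axis=0)                         # K^T 1
    for _ in range(n_iter):
        ratio = y / np.clip(K @ a, eps, None)
        a *= (K.T @ ratio) / np.clip(norm, eps, None)
    return a
```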

  6. Myocardium Segmentation From DE MRI Using Multicomponent Gaussian Mixture Model and Coupled Level Set.

    PubMed

    Liu, Jie; Zhuang, Xiahai; Wu, Lianming; An, Dongaolei; Xu, Jianrong; Peters, Terry; Gu, Lixu

    2017-11-01

    Objective: In this paper, we propose a fully automatic framework for myocardium segmentation of delayed-enhancement (DE) MRI images without relying on prior patient-specific information. Methods: We employ a multicomponent Gaussian mixture model to deal with the intensity heterogeneity of myocardium caused by the infarcts. To differentiate the myocardium from other tissues with similar intensities, while at the same time maintaining spatial continuity, we introduce a coupled level set (CLS) to regularize the posterior probability. The CLS, as a spatial regularization, can be adapted to the image characteristics dynamically. We also introduce an image intensity gradient based term into the CLS, adding an extra force to the posterior probability based framework, to improve the accuracy of myocardium boundary delineation. The prebuilt atlases are propagated to the target image to initialize the framework. Results: The proposed method was tested on datasets of 22 clinical cases, and achieved Dice similarity coefficients of 87.43 ± 5.62% (endocardium), 90.53 ± 3.20% (epicardium) and 73.58 ± 5.58% (myocardium), which outperformed three variants of the classic segmentation methods. Conclusion: The results can provide a benchmark for myocardial segmentation in the literature. Significance: DE MRI provides an important tool to assess the viability of myocardium. The accurate segmentation of myocardium, which is a prerequisite for further quantitative analysis of the myocardial infarction (MI) region, can provide important support for the diagnosis and treatment management of MI patients.
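
    The intensity model at the heart of this framework, a multicomponent Gaussian mixture, is straightforward to set up with scikit-learn; the sketch below fits one on hypothetical data (the coupled level set regularization of the posteriors is beyond this sketch):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical voxel intensities from a DE-MRI region of interest.
intensities = np.random.rand(5000, 1)

# Two components for myocardium (healthy + infarcted, the intensity
# heterogeneity the paper models) would sit alongside components for
# blood pool and background in the full framework.
gmm = GaussianMixture(n_components=4, covariance_type="full").fit(intensities)
posteriors = gmm.predict_proba(intensities)   # later regularized by the CLS
```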

  7. European information on climate change impacts, vulnerability and adaptation

    NASA Astrophysics Data System (ADS)

    Jol, A.; Isoard, S.

    2010-09-01

    Vulnerability to natural and technological disasters is increasing due to a combination of intensifying land use, increasing industrial development, further urban expansion, expanding infrastructure and climate change. At EU level, the European Commission's White Paper on adaptation to climate change (published in 2009) highlights that adaptation actions should be focused on the most vulnerable areas and communities in Europe (e.g. mountains, coastal areas, river flood prone areas, the Mediterranean, the Arctic). Mainstreaming of climate change into existing EU policies will be a key policy, including within the Water Framework Directive, the Marine Strategy Framework Directive, nature protection and biodiversity policies, integrated coastal zone management, other (sectoral) policies (agriculture, forestry, energy, transport, health) and disaster risk prevention. 2010 is the international year of biodiversity, and the Conference of the Parties of the biodiversity convention will meet in autumn 2010 (Japan) to discuss, among other topics, post-2010 strategies, objectives and indicators. Within both the Biodiversity Convention (CBD) and the Climate Change Convention (UNFCCC) there is increasing recognition of the need to integrate biodiversity conservation into climate change mitigation and adaptation activities. Furthermore, a number of European countries and some regions have started to prepare and/or have adopted national adaptation plans or frameworks. Sharing of good practices on climate change vulnerability methods and adaptation actions is so far limited, but it is essential for improving such plans at national, subnational and local level, where much of the adaptation action is already taking place and will expand in future, increasingly involving the business community. The EU Clearinghouse on climate change impacts, vulnerability and adaptation should address these needs and is planned to be operational by the end of 2011. The EEA is expected to have a role in its development in 2010 and is likely to manage the system after 2011. The European Commission, in its 2009 Communication on disaster risk prevention, also calls for improving and better sharing of data on disasters, disaster risk mapping and disaster risk management, in the context of the EU civil protection mechanism. Such information might also be linked to the planned EU Clearinghouse on climate change adaptation. The activities of the EEA on climate change impacts, vulnerability and adaptation (including disaster risk reduction) include indicators of the impacts of climate change; a regularly updated overview of national assessments and adaptation plans on the EEA web site; specific focused reports, e.g. on adaptation to the challenges of changing water resources in the Alps (2009) and on analysis of past trends in natural disasters (due in 2010); and regular expert meetings and workshops with EEA member countries. The ECAC presentation will include the latest developments in the EU Clearinghouse on adaptation and progress in relevant EEA activities.

  8. [Advances in clustered regularly interspaced short palindromic repeats--a review].

    PubMed

    Wang, Lili; He, Jin; Wang, Jieping

    2011-08-01

    The recently discovered Clustered Regularly Interspaced Short Palindromic Repeats (CRISPRs) can protect bacteria and archaea with adaptive and heritable defense systems against the invasion of phage- and plasmid-associated mobile genetic elements. Here, we review the structure, diversity, mechanism of interference and self versus non-self discrimination of CRISPR systems. We also discuss the potential applications of this novel interference system.

  9. A Demons algorithm for image registration with locally adaptive regularization.

    PubMed

    Cahill, Nathan D; Noble, J Alison; Hawkes, David J

    2009-01-01

    Thirion's Demons is a popular algorithm for nonrigid image registration because of its linear computational complexity and ease of implementation. It approximately solves the diffusion registration problem by successively estimating force vectors that drive the deformation toward alignment and smoothing the force vectors by Gaussian convolution. In this article, we show how the Demons algorithm can be generalized to allow image-driven locally adaptive regularization in a manner that preserves both the linear complexity and ease of implementation of the original Demons algorithm. We show that the proposed algorithm exhibits lower target registration error and requires less computational effort than the original Demons algorithm on the registration of serial chest CT scans of patients with lung nodules.
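
    A sketch of one Demons iteration with image-driven, locally varying smoothing, assuming a simple edge-map blend of two Gaussian widths; the paper's exact adaptive regularizer is not reproduced, and all parameter values are illustrative.

        import numpy as np
        from scipy.ndimage import gaussian_filter, map_coordinates

        def demons_step(fixed, moving, u, v, sigma_flat=3.0, sigma_edge=1.0):
            """One simplified Demons iteration; (u, v) is the displacement field.
            Regularization blends strong smoothing in flat regions with weak
            smoothing near edges -- a stand-in for locally adaptive filtering."""
            ny, nx = fixed.shape
            yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
            warped = map_coordinates(moving, [yy + v, xx + u], order=1, mode='nearest')

            gy, gx = np.gradient(fixed)
            diff = warped - fixed
            denom = gx**2 + gy**2 + diff**2 + 1e-9
            fu, fv = -diff * gx / denom, -diff * gy / denom  # force vectors

            edge = np.hypot(gx, gy)
            edge = edge / (edge.max() + 1e-9)                # ~1 near edges, ~0 flat
            u, v = u + fu, v + fv
            u = edge * gaussian_filter(u, sigma_edge) + (1 - edge) * gaussian_filter(u, sigma_flat)
            v = edge * gaussian_filter(v, sigma_edge) + (1 - edge) * gaussian_filter(v, sigma_flat)
            return u, v

        # Toy registration: a Gaussian blob shifted by 4 pixels in x.
        yy0, xx0 = np.mgrid[0:64, 0:64]
        fixed = np.exp(-((yy0 - 32)**2 + (xx0 - 32)**2) / 100.0)
        moving = np.exp(-((yy0 - 32)**2 + (xx0 - 28)**2) / 100.0)
        u, v = np.zeros_like(fixed), np.zeros_like(fixed)
        for _ in range(100):
            u, v = demons_step(fixed, moving, u, v)
        print("mean x-displacement near center:", u[28:36, 28:36].mean().round(2))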

  10. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity.

    PubMed

    Brette, Romain; Gerstner, Wulfram

    2005-11-01

    We introduce a two-dimensional integrate-and-fire model that combines an exponential spike mechanism with an adaptation equation, based on recent theoretical findings. We describe a systematic method to estimate its parameters with simple electrophysiological protocols (current-clamp injection of pulses and ramps) and apply it to a detailed conductance-based model of a regular spiking neuron. Our simple model predicts correctly the timing of 96% of the spikes (+/-2 ms) of the detailed model in response to injection of noisy synaptic conductances. The model is especially reliable in high-conductance states, typical of cortical activity in vivo, in which intrinsic conductances were found to have a reduced role in shaping spike trains. These results are promising because this simple model has enough expressive power to reproduce qualitatively several electrophysiological classes described in vitro.
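
    For reference, the model itself is compact enough to state in a few lines. The forward-Euler sketch below uses parameter values in the range reported for a regular-spiking cell; treat the exact numbers, input current and step size as illustrative.

        import numpy as np

        # Adaptive exponential integrate-and-fire (AdEx), SI units.
        C, gL, EL = 281e-12, 30e-9, -70.6e-3      # capacitance, leak, rest
        VT, DT = -50.4e-3, 2e-3                   # threshold, slope factor
        a, tau_w, b = 4e-9, 144e-3, 0.0805e-9     # adaptation parameters
        Vr, Vcut = -70.6e-3, -30e-3               # reset, numerical cutoff

        dt, T = 1e-4, 0.5
        V, w, I = EL, 0.0, 0.8e-9                 # step-current injection
        spikes = []
        for k in range(int(T / dt)):
            dV = (-gL*(V - EL) + gL*DT*np.exp((V - VT)/DT) - w + I) / C
            dw = (a*(V - EL) - w) / tau_w
            V, w = V + dt*dV, w + dt*dw
            if V > Vcut:                          # spike: reset V, jump w
                V, w = Vr, w + b
                spikes.append(k * dt)
        print(f"{len(spikes)} spikes in {T} s")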

  11. Adaptation of Instructional Materials for Use with Mainstreamed Students. Final Report.

    ERIC Educational Resources Information Center

    Macro Systems, Inc., Silver Spring, MD.

    The report documents a project to adapt and design instructional media and materials of a regular senior high social studies curriculum for use with mainstreamed mildly handicapped students. Supplementary materials were developed and tested at five sites to accompany the world history textbook, "Our Common Heritage." Products included the…

  12. Spatially adaptive bases in wavelet-based coding of semi-regular meshes

    NASA Astrophysics Data System (ADS)

    Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter

    2010-05-01

    In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results also show that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements reach 1.47 dB and 0.95 dB, respectively.
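
    The per-region predictor choice is a plain Lagrangian rate-distortion decision. A minimal sketch, with hypothetical predictor names and made-up (distortion, rate) measurements:

        # Rate-distortion optimal predictor selection per region.
        def select_predictor(candidates, lam):
            """Return the predictor minimizing the Lagrangian cost J = D + lam * R."""
            return min(candidates, key=lambda p: candidates[p][0] + lam * candidates[p][1])

        candidates = {"butterfly": (4.2, 1.0),   # (distortion, rate in bits/vertex)
                      "average":   (5.1, 0.6),
                      "midpoint":  (6.0, 0.3)}
        for lam in (0.5, 5.0):                    # small lam favors low distortion,
            print(lam, select_predictor(candidates, lam))  # large lam favors low rate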

  13. Research on the Forward and Reverse Calculation Based on the Adaptive Zero-Velocity Interval Adjustment for the Foot-Mounted Inertial Pedestrian-Positioning System

    PubMed Central

    Wang, Qiuying; Guo, Zheng; Sun, Zhiguo; Cui, Xufei; Liu, Kaiyue

    2018-01-01

    Pedestrian-positioning technology based on a foot-mounted micro inertial measurement unit (MIMU) plays an important role in the field of indoor navigation and has received extensive attention in recent years. However, the positioning accuracy of inertial-based pedestrian positioning degrades rapidly because of the relatively low measurement accuracy of the sensors. The zero-velocity update (ZUPT) is an error-correction method that exploits the fact that the foot is regularly stationary during ordinary gait; it was proposed to correct the cumulative error and reduce the position error growth of the system. However, the traditional ZUPT performs poorly when pedestrians move faster, because the foot touchdown time is then short, which decreases the positioning accuracy. Considering these problems, a forward and reverse calculation method based on adaptive zero-velocity interval adjustment for the foot-mounted MIMU location method is proposed in this paper. To address the inaccuracy of the zero-velocity interval detector during fast pedestrian movement, where the contact time of the foot on the ground is short, an adaptive zero-velocity interval detection algorithm based on fuzzy logic reasoning is presented. In addition, to improve the effectiveness of the ZUPT algorithm, forward and reverse multiple solutions are presented. Finally, after presenting the basic principles and derivation of this method, tests are completed with the MTi-G710 produced by the XSENS company. The experimental results verify the correctness and applicability of the proposed method. PMID:29883399
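
    For orientation, a fixed-threshold stance detector of the kind the paper improves upon takes only a few lines; the fuzzy-logic interval adaptation is not reproduced here, and the thresholds and synthetic signals are illustrative.

        import numpy as np

        def zero_velocity_intervals(acc, gyro, g=9.81,
                                    acc_tol=0.35, gyro_tol=0.6, win=5):
            """Flag samples where the foot is (approximately) stationary, using
            fixed thresholds on accelerometer magnitude and gyro rate."""
            acc_mag = np.linalg.norm(acc, axis=1)
            gyro_mag = np.linalg.norm(gyro, axis=1)
            still = (np.abs(acc_mag - g) < acc_tol) & (gyro_mag < gyro_tol)
            # Require `win` consecutive still samples to suppress spurious hits.
            counts = np.convolve(still.astype(float), np.ones(win), mode='same')
            return counts >= win

        # Usage on synthetic data: 1 s of stance, then 1 s of swing, at 100 Hz.
        fs = 100
        rng = np.random.default_rng(1)
        acc = np.vstack([rng.normal([0, 0, 9.81], 0.05, (fs, 3)),
                         rng.normal([2, 1, 9.81], 2.0, (fs, 3))])
        gyro = np.vstack([rng.normal(0, 0.05, (fs, 3)),
                          rng.normal(0, 3.0, (fs, 3))])
        zv = zero_velocity_intervals(acc, gyro)
        print("stance fraction:", zv.mean())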

  14. Electrooptical adaptive switching network for the hypercube computer

    NASA Technical Reports Server (NTRS)

    Chow, E.; Peterson, J.

    1988-01-01

    An all-optical network design for the hyperswitch network using regular free-space interconnects between electronic processor nodes is presented. The adaptive routing model used is described, and an adaptive routing control example is presented. The design demonstrates that existing electrooptical techniques are sufficient for implementing efficient parallel architectures without the need for more complex means of implementing arbitrary interconnection schemes. The electrooptical hyperswitch network significantly improves the communication performance of the hypercube computer.

  15. How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography

    PubMed Central

    Jørgensen, J. S.; Sidky, E. Y.

    2015-01-01

    We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization. PMID:25939620

  16. How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography.

    PubMed

    Jørgensen, J S; Sidky, E Y

    2015-06-13

    We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization.

  17. Sparse graph regularization for robust crop mapping using hyperspectral remotely sensed imagery with very few in situ data

    NASA Astrophysics Data System (ADS)

    Xue, Zhaohui; Du, Peijun; Li, Jun; Su, Hongjun

    2017-02-01

    The generally limited availability of training data relative to the usually high data dimension poses a great challenge to accurate classification of hyperspectral imagery, especially for identifying crops characterized by highly correlated spectra. Moreover, traditional parametric classification models are problematic due to the need for non-singular class-specific covariance matrices. In this research, a novel sparse graph regularization (SGR) method is presented, aiming at robust crop mapping using hyperspectral imagery with very few in situ data. The core of SGR lies in propagating labels from known data to unknown, which is triggered by: (1) the fraction matrix generated for the large unknown data by using an effective sparse representation algorithm with respect to the few training data serving as the dictionary; (2) the prediction function estimated for the few training data by formulating a regularization model based on a sparse graph. Then, the labels of the large unknown data can be obtained by maximizing the posterior probability distribution based on the two ingredients. SGR is discriminative, data-adaptive, robust to noise, and efficient, which sets it apart from previously proposed approaches and gives it high potential for discriminating crops, especially when facing insufficient training data and a high-dimensional spectral space. The study area is located at Zhangye basin in the middle reaches of Heihe watershed, Gansu, China, where eight crop types were mapped with Compact Airborne Spectrographic Imager (CASI) and Shortwave Infrared Airborne Spectrographic Imager (SASI) hyperspectral data. Experimental results demonstrate that the proposed method significantly outperforms other traditional and state-of-the-art methods.
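
    A toy sketch of the first ingredient, the sparse fraction matrix: unlabeled spectra are sparsely coded over the few labeled spectra used as the dictionary (OMP here; the paper's coding algorithm and the graph-regularized prediction step are not reproduced, and all data are synthetic).

        import numpy as np
        from sklearn.decomposition import SparseCoder

        rng = np.random.default_rng(0)
        n_bands, n_train, n_unknown = 50, 12, 200
        class_means = rng.normal(0, 1, (3, n_bands))      # 3 hypothetical crops
        train_y = np.repeat([0, 1, 2], n_train // 3)
        train_X = class_means[train_y] + rng.normal(0, .05, (n_train, n_bands))
        test_y = rng.integers(0, 3, n_unknown)
        test_X = class_means[test_y] + rng.normal(0, .05, (n_unknown, n_bands))

        # Dictionary rows (the few labeled spectra) are normalized for OMP.
        D = train_X / np.linalg.norm(train_X, axis=1, keepdims=True)
        coder = SparseCoder(dictionary=D, transform_algorithm='omp',
                            transform_n_nonzero_coefs=3)
        fractions = np.abs(coder.transform(test_X))       # (n_unknown, n_train)

        # Naive label assignment: accumulate fraction mass per class.
        class_mass = np.stack([fractions[:, train_y == c].sum(1) for c in range(3)], 1)
        pred = class_mass.argmax(1)
        print("accuracy:", (pred == test_y).mean())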

  18. Crowd density estimation based on convolutional neural networks with mixed pooling

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Zheng, Hong; Zhang, Ying; Zhang, Dongming

    2017-09-01

    Crowd density estimation is an important topic in the fields of machine learning and video surveillance. Existing methods do not provide satisfactory classification accuracy; moreover, they have difficulty in adapting to complex scenes. Therefore, we propose a method based on convolutional neural networks (CNNs). The proposed method improves the performance of crowd density estimation in two key ways. First, we propose a feature pooling method named mixed pooling to regularize the CNNs; it replaces deterministic pooling operations with a learned parameter that combines the conventional max pooling and average pooling methods. Second, we present a classification strategy in which an image is divided into two cells that are categorized separately. The proposed approach was evaluated on three datasets: two ground truth image sequences and the University of California, San Diego, anomaly detection dataset. The results demonstrate that the proposed approach performs more effectively and is easier to apply than other methods.
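
    A minimal sketch of such a mixed-pooling layer, assuming a single learnable mixing weight squashed through a sigmoid (the paper's exact parameterization may differ):

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class MixedPool2d(nn.Module):
            """Learnable convex combination of max and average pooling."""
            def __init__(self, kernel_size, stride=None):
                super().__init__()
                self.kernel_size, self.stride = kernel_size, stride
                self.logit = nn.Parameter(torch.zeros(1))  # alpha = sigmoid(logit)

            def forward(self, x):
                alpha = torch.sigmoid(self.logit)
                mx = F.max_pool2d(x, self.kernel_size, self.stride)
                av = F.avg_pool2d(x, self.kernel_size, self.stride)
                return alpha * mx + (1 - alpha) * av

        pool = MixedPool2d(2)
        y = pool(torch.randn(1, 8, 32, 32))
        print(y.shape, float(torch.sigmoid(pool.logit)))

    During training the mixing weight is learned jointly with the other network parameters, so each layer can settle anywhere between pure max and pure average pooling.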

  19. SkyFACT: high-dimensional modeling of gamma-ray emission with adaptive templates and penalized likelihoods

    NASA Astrophysics Data System (ADS)

    Storm, Emma; Weniger, Christoph; Calore, Francesca

    2017-08-01

    We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳ 10^5) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data as well as on gamma-ray emission from the inner Galaxy, |l| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as basis for future studies of diffuse emission in and outside the Galactic disk.
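
    At its core this is penalized Poisson likelihood regression. The toy below fits two hypothetical spatial templates to Poisson counts with L-BFGS-B, using a quadratic penalty on the modulation parameters where the paper uses maximum-entropy-motivated regularizers (and, of course, vastly more parameters):

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        npix = 100
        A = np.stack([np.ones(npix), np.linspace(0, 2, npix)], axis=1)  # templates
        true_mod = np.array([1.0, 1.3])
        d = rng.poisson(A @ true_mod * 50)            # observed counts

        lam = 1.0
        def neg_loglike(x):
            # Poisson negative log-likelihood plus a quadratic pull toward 1.
            mu = np.maximum(A @ x * 50, 1e-12)
            return np.sum(mu - d * np.log(mu)) + lam * np.sum((x - 1.0) ** 2)

        res = minimize(neg_loglike, x0=np.ones(2), method='L-BFGS-B',
                       bounds=[(1e-6, None)] * 2)     # modulations stay positive
        print(res.x)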

  20. Psychometric survey of nursing competences illustrated with nursing students and apprentices

    PubMed

    Reichardt, Christoph; Wernecke, Frances; Giesler, Marianne; Petersen-Ewert, Corinna

    2016-09-01

    Background: The term competences is discussed differently in various disciplines of science. Furthermore, there is no internationally or cross-disciplinarily accepted definition of the term. Problem: So far, there are few practical, reliable and valid measuring instruments for surveying general nursing competences. This article describes the adaptation process of a measuring instrument for medical skills into one for nursing competences. Method: The measurement quality of the questionnaire was examined using a sample from two different courses of study and regular nursing apprentices. A further research question was whether the adapted questionnaire is able to detect a change in nursing skills. For the assessment of reliability and validity, data from the first measurement point were used (n = 240). Data from the second measurement point, conducted two years later (n = 163), were used to examine whether the questionnaire is able to detect a change in nursing competences. Results/Conclusions: The results indicate that the adapted version of the questionnaire is reliable and valid. The questionnaire was also able to detect significant, partly even strong, effects of change in nursing skills (d = 0.17–1.04). It was possible to adapt the questionnaire for the measurement of nursing competences.

  1. Evaluating Descriptive Metrics of the Human Cone Mosaic

    PubMed Central

    Cooper, Robert F.; Wilk, Melissa A.; Tarima, Sergey; Carroll, Joseph

    2016-01-01

    Purpose To evaluate how metrics used to describe the cone mosaic change in response to simulated photoreceptor undersampling (i.e., cell loss or misidentification). Methods Using an adaptive optics ophthalmoscope, we acquired images of the cone mosaic from the center of fixation to 10° along the temporal, superior, inferior, and nasal meridians in 20 healthy subjects. Regions of interest (n = 1780) were extracted at regular intervals along each meridian. Cone mosaic geometry was assessed using a variety of metrics: density, density recovery profile distance (DRPD), nearest neighbor distance (NND), intercell distance (ICD), farthest neighbor distance (FND), percentage of six-sided Voronoi cells, nearest neighbor regularity (NNR), number of neighbors regularity (NoNR), and Voronoi cell area regularity (VCAR). The "performance" of each metric was evaluated by determining the level of simulated loss necessary to obtain 80% statistical power. Results Of the metrics assessed, NND and DRPD were the least sensitive to undersampling, classifying mosaics that lost 50% of their coordinates as indistinguishable from normal. The NoNR was the most sensitive, detecting a significant deviation from normal with only a 10% cell loss. Conclusions The robustness of cone spacing metrics makes them unsuitable for reliably detecting small deviations from normal or for tracking small changes in the mosaic over time. In contrast, regularity metrics are more sensitive to diffuse loss and, therefore, better suited for detecting such changes, provided the fraction of misidentified cells is minimal. Combining metrics with a variety of sensitivities may provide a more complete picture of the integrity of the photoreceptor mosaic. PMID:27273598
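
    Two of the listed metrics are easy to reproduce on synthetic coordinates. The sketch below computes NND with a k-d tree and a number-of-neighbors regularity as mean/std of Voronoi neighbor counts, which is one plausible reading of NoNR (the paper defines its indices precisely; treat this as an assumption):

        import numpy as np
        from scipy.spatial import cKDTree, Voronoi

        rng = np.random.default_rng(0)
        grid = np.stack(np.meshgrid(np.arange(20), np.arange(20)), -1).reshape(-1, 2)
        cones = grid + rng.normal(0, 0.15, grid.shape)     # jittered lattice

        nnd = cKDTree(cones).query(cones, k=2)[0][:, 1]    # k=2 skips self-match
        print("mean NND:", nnd.mean().round(3))

        vor = Voronoi(cones)
        counts = np.zeros(len(cones))
        for p, q in vor.ridge_points:                      # each shared cell edge
            counts[p] += 1
            counts[q] += 1
        inner = counts[(grid[:, 0] > 0) & (grid[:, 0] < 19)   # ignore border cells
                       & (grid[:, 1] > 0) & (grid[:, 1] < 19)]
        print("NoNR (mean/std of neighbor count):", (inner.mean() / inner.std()).round(2))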

  2. A guideline for adults with an indwelling urinary catheter in different health care Settings - methodological procedures

    PubMed

    Barbezat, Isabelle; Willener, Rita; Jenni, Giovanna; Hürlimann, Barbara; Geese, Franziska; Spichiger, Elisabeth

    2017-07-01

    Background: People with an indwelling urinary catheter often suffer from complications, and health care professionals are regularly confronted with questions about catheter management. Clinical guidelines are widely accepted to promote evidence-based practice. In the literature, the adaptation of a guideline is described as a valid alternative to the development of a new one. Aim: To translate a guideline for the care of adults with an indwelling urinary catheter in the acute and long-term care settings as well as for home care, and to adapt the guideline to the Swiss context. Method: In a systematic and pragmatic process, clinical questions were identified, and guidelines were searched and evaluated regarding clinical relevance and quality. After each step, the next steps were defined. Results: An English guideline was translated, adapted to the local context and supplemented. The adapted guideline was reviewed by experts, adapted again and approved. After 34 months and a total investment of 145 person-days, a guideline for the care of people with an indwelling urinary catheter is available for both institutions. Conclusions: Translation and adaptation of a guideline was a valuable alternative to the development of a new one; nevertheless, the efforts necessary should not be underestimated. For such a project, sufficient professional and methodological resources should be made available to achieve efficient guideline work by a constant team.

  3. Minimal residual method provides optimal regularization parameter for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.

    2012-10-01

    The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on a regularized minimal residual method (MRM), is proposed and is compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

  4. Minimal residual method provides optimal regularization parameter for diffuse optical tomography.

    PubMed

    Jagannath, Ravi Prasad K; Yalavarthy, Phaneendra K

    2012-10-01

    The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on a regularized minimal residual method (MRM), is proposed and is compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.
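
    The abstract's baseline, generalized cross-validation, is compact enough to sketch. Below is the standard SVD-based GCV selection of a Tikhonov parameter for a generic ill-conditioned linear problem; this is the comparison method, not the authors' MRM criterion, and the test problem is made up.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 50
        J = rng.normal(0, 1, (n, n)) @ np.diag(0.9 ** np.arange(n))  # ill-conditioned
        x_true = np.sign(np.sin(np.arange(n) / 5.0))
        b = J @ x_true + rng.normal(0, 0.01, n)

        U, s, Vt = np.linalg.svd(J)
        beta = U.T @ b
        def gcv(lam):
            f = s**2 / (s**2 + lam)            # Tikhonov filter factors
            resid = np.sum(((1 - f) * beta) ** 2)
            return resid / (n - f.sum()) ** 2  # GCV functional

        lams = np.logspace(-8, 0, 50)
        lam_star = lams[np.argmin([gcv(l) for l in lams])]
        x_hat = Vt.T @ (s * beta / (s**2 + lam_star))
        print("chosen lambda:", lam_star,
              "rel. error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))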

  5. The Influence of Motivation and Adaptation on Students' Subjective Well-Being, Meaning in Life and Academic Performance

    ERIC Educational Resources Information Center

    Bailey, Thomas Hamilton; Phillips, Lisa J.

    2016-01-01

    High rates of mental illness among students and discontinuation with university studies are regularly reported. The current study sought to explore relationships between motivation, university adaptation and indicators of mental health and well-being and academic performance of 184 first-year university students (73% female, mean age = 19.3…

  6. Introducing Adaptivity Features to a Regular Learning Management System to Support Creation of Advanced eLessons

    ERIC Educational Resources Information Center

    Komlenov, Zivana; Budimac, Zoran; Ivanovic, Mirjana

    2010-01-01

    In order to improve the learning process for students with different pre-knowledge, personal characteristics and preferred learning styles, a certain degree of adaptability must be introduced to online courses. In learning environments that support such kind of functionalities students can explicitly choose different paths through course contents…

  7. Adding Statistical Machine Translation Adaptation to Computer-Assisted Translation

    DTIC Science & Technology

    2013-09-01

    Translation memories (TMs) are automatically searched and used to suggest possible translations; further aids include spell-checkers, glossaries, dictionaries, alignment and fuzzy matching against TMs to propose translations, support for multiple file formats, and regular expressions.

  8. Weighted low-rank sparse model via nuclear norm minimization for bearing fault detection

    NASA Astrophysics Data System (ADS)

    Du, Zhaohui; Chen, Xuefeng; Zhang, Han; Yang, Boyuan; Zhai, Zhi; Yan, Ruqiang

    2017-07-01

    It is a fundamental task in the machine fault diagnosis community to detect impulsive signatures generated by the localized faults of bearings. The main goal of this paper is to exploit the low-rank physical structure of periodic impulsive features and further establish a weighted low-rank sparse model for bearing fault detection. The proposed model mainly consists of three basic components: an adaptive partition window, a nuclear norm regularization and a weighted sequence. Firstly, owing to the periodic repetition of the impulsive feature, an adaptive partition window can be designed to transform the impulsive feature into a data matrix; the purpose of the partition window is to accumulate all local feature information and align it. All columns of the data matrix then share similar waveforms, and a core physical phenomenon arises: the singular values of the data matrix exhibit a sparse distribution pattern. Therefore, a nuclear norm regularization is enforced to capture that sparse prior. However, the nuclear norm regularization treats all singular values equally and thus ignores the basic fact that larger singular values carry more information about the impulsive features and should be preserved as much as possible. Therefore, a weighted sequence, with weights adaptively tuned to be inversely proportional to the singular value amplitudes, is adopted to preserve the large singular values. On the other hand, the proposed model is difficult to solve due to its non-convexity, and thus a new algorithm is developed that searches for a satisfactory stationary solution by alternately applying a proximal operator step and least-squares fitting. Moreover, the sensitivity and selection principles of the algorithmic parameters are comprehensively investigated through a set of numerical experiments, which show that the proposed method is robust and has only a few adjustable parameters. Lastly, the proposed model is applied to wind turbine (WT) bearing fault detection and its effectiveness is verified. Compared with the currently popular bearing fault diagnosis techniques of wavelet analysis and spectral kurtosis, our model achieves a higher diagnostic accuracy.
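
    The weighted nuclear-norm idea reduces, for a single proximal step, to weighted singular value thresholding. A sketch, with weights inversely proportional to singular value magnitude as described above (the threshold level and test signal are illustrative, and this one-step operator is not the paper's full alternating algorithm):

        import numpy as np

        def weighted_svt(Y, tau, eps=1e-6):
            """Soft-threshold the singular values of Y with weights inversely
            proportional to their magnitude, so large (feature-bearing)
            singular values are preserved."""
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            w = 1.0 / (s + eps)                      # adaptive weights
            s_shrunk = np.maximum(s - tau * w, 0.0)
            return U @ np.diag(s_shrunk) @ Vt

        # Periodic impulses arranged column-wise (the "partition window"): the
        # resulting matrix is near rank-1, so WSVT removes noise, keeps the pulse.
        rng = np.random.default_rng(0)
        pulse = np.exp(-np.arange(64) / 4.0) * np.sin(np.arange(64))
        Y = np.tile(pulse, (20, 1)).T + rng.normal(0, 0.3, (64, 20))
        D = weighted_svt(Y, tau=5.0)
        print("rank in:", np.linalg.matrix_rank(Y),
              "rank out:", np.linalg.matrix_rank(D))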

  9. A medical imaging analysis system for trigger finger using an adaptive texture-based active shape model (ATASM) in ultrasound images

    PubMed Central

    Chuang, Bo-I; Kuo, Li-Chieh; Yang, Tai-Hua; Su, Fong-Chin; Jou, I-Ming; Lin, Wei-Jr; Sun, Yung-Nien

    2017-01-01

    Trigger finger has become a prevalent disease that greatly affects occupational activity and daily life. Ultrasound imaging is commonly used for the clinical diagnosis of trigger finger severity. Due to image property variations, traditional methods cannot effectively segment the finger joint's tendon structure. In this study, an adaptive texture-based active shape model method is used for segmenting the tendon and synovial sheath. Adaptive weights are applied in the segmentation process to adjust the contribution of energy terms depending on image characteristics at different positions. The pathology is then determined according to the wavelet and co-occurrence texture features of the segmented tendon area. In the experiments, the segmentation results have fewer errors, with respect to the ground truth, than contours drawn by regular users. The mean values of the absolute segmentation difference of the tendon and synovial sheath are 3.14 and 4.54 pixels, respectively. The average accuracy of pathological determination is 87.14%. The segmentation results are acceptable for both clear- and fuzzy-boundary cases across the 74 images, and the symptom classifications of the 42 cases also provide a good reference for diagnosis according to the expert clinicians' opinions. PMID:29077737

  10. Imaging objects behind small obstacles using axicon lens

    NASA Astrophysics Data System (ADS)

    Perinchery, Sandeep M.; Shinde, Anant; Murukeshan, V. M.

    2017-06-01

    Axicon lenses are conical prisms, which are known to focus a light source to a line consisting of multiple points along the optical axis. In this study, we analyze the potential of axicon lenses to view, image and record objects behind opaque obstacles in free space. The advantage of an axicon lens over a regular lens is demonstrated experimentally. Parameters such as obstacle size and the positions of the object and the obstacle, in the context of imaging behind obstacles, are tested using Zemax optical simulation. The proposed concept can be easily adapted to most optical imaging methods and microscopy modalities.

  11. SU-F-R-41: Regularized PCA Can Model Treatment-Related Changes in Head and Neck Patients Using Daily CBCTs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chetvertkov, M; Henry Ford Health System, Detroit, MI; Siddiqui, F

    2016-06-15

    Purpose: To use daily cone beam CTs (CBCTs) to develop regularized principal component analysis (PCA) models of anatomical changes in head and neck (H&N) patients, to guide replanning decisions in adaptive radiation therapy (ART). Methods: Known deformations were applied to planning CT (pCT) images of 10 H&N patients to model several different systematic anatomical changes. A Pinnacle plugin was used to interpolate systematic changes over 35 fractions, generating a set of 35 synthetic CTs for each patient. Deformation vector fields (DVFs) were acquired between the pCT and synthetic CTs, and random fraction-to-fraction changes were superimposed on the DVFs. Standard non-regularized and regularized patient-specific PCA models were built using the DVFs. The ability of PCA to extract the known deformations was quantified. PCA models were also generated from clinical CBCTs, for which the deformations and DVFs were not known. It was hypothesized that the resulting eigenvectors/eigenfunctions with the largest eigenvalues represent the major anatomical deformations during the course of treatment. Results: As demonstrated by quantitative results in the supporting document, regularized PCA is more successful than standard PCA at capturing systematic changes early in the treatment. Regularized PCA is able to detect smaller systematic changes against the background of random fraction-to-fraction changes. To be successful at guiding ART, regularized PCA should be coupled with models of when anatomical changes occur: early, late or throughout the treatment course. Conclusion: The leading eigenvector/eigenfunction from both PCA approaches can tentatively be identified as a major systematic change during the radiotherapy course when systematic changes are large enough with respect to random fraction-to-fraction changes. In all cases the regularized PCA approach appears to be more reliable at capturing systematic changes, enabling dosimetric consequences to be projected once trends are established early in the treatment course. This work is supported in part by a grant from Varian Medical Systems, Palo Alto, CA.

  12. Some problems of human adaptation and ecology under the aspect of general pathology

    NASA Technical Reports Server (NTRS)

    Kaznacheyev, V. P.

    1980-01-01

    The main problems of human adaptation at the level of the body and the population in connection with the features of current morbidity of the population and certain demographic processes are analyzed. The concepts of health and adaptation of the individual and human populations are determined. The importance of the anthropo-ecological approach to the investigation of the adaptation process of human populations is demonstrated. Certain features of the etiopathogenesis of diseases are considered in connection with the population-ecological regularities of human adaptation. The importance of research on general pathology aspects of adaptation and the ecology of man for planning, and organization of public health protection is discussed.

  13. On dynamical systems approaches and methods in f(R) cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alho, Artur; Carloni, Sante; Uggla, Claes, E-mail: aalho@math.ist.utl.pt, E-mail: sante.carloni@tecnico.ulisboa.pt, E-mail: claes.uggla@kau.se

    We discuss dynamical systems approaches and methods applied to flat Robertson-Walker models in f(R) gravity. We argue that a complete description of the solution space of a model requires a global state space analysis that motivates globally covering state space adapted variables. This is shown explicitly by an illustrative example, f(R) = R + αR², α > 0, for which we introduce new regular dynamical systems on global compactly extended state spaces for the Jordan and Einstein frames. This example also allows us to illustrate several local and global dynamical systems techniques involving, e.g., blow-ups of nilpotent fixed points, center manifold analysis, averaging, and use of monotone functions. As a result of applying dynamical systems methods to globally state space adapted dynamical systems formulations, we obtain pictures of the entire solution spaces in both the Jordan and the Einstein frames. This shows, e.g., that due to the domain of the conformal transformation between the Jordan and Einstein frames, not all the solutions in the Jordan frame are completely contained in the Einstein frame. We also make comparisons with previous dynamical systems approaches to f(R) cosmology and discuss their advantages and disadvantages.

  14. Photoreceptor counting and montaging of en-face retinal images from an adaptive optics fundus camera

    PubMed Central

    Xue, Bai; Choi, Stacey S.; Doble, Nathan; Werner, John S.

    2008-01-01

    A fast and efficient method for quantifying photoreceptor density in images obtained with an en-face flood-illuminated adaptive optics (AO) imaging system is described. To improve accuracy of cone counting, en-face images are analyzed over extended areas. This is achieved with two separate semiautomated algorithms: (1) a montaging algorithm that joins retinal images with overlapping common features without edge effects and (2) a cone density measurement algorithm that counts the individual cones in the montaged image. The accuracy of the cone density measurement algorithm is high, with >97% agreement for a simulated retinal image (of known density, with low contrast) and for AO images from normal eyes when compared with previously reported histological data. Our algorithms do not require spatial regularity in cone packing and are, therefore, useful for counting cones in diseased retinas, as demonstrated for eyes with Stargardt’s macular dystrophy and retinitis pigmentosa. PMID:17429482

  15. Photoreceptor counting and montaging of en-face retinal images from an adaptive optics fundus camera

    NASA Astrophysics Data System (ADS)

    Xue, Bai; Choi, Stacey S.; Doble, Nathan; Werner, John S.

    2007-05-01

    A fast and efficient method for quantifying photoreceptor density in images obtained with an en-face flood-illuminated adaptive optics (AO) imaging system is described. To improve accuracy of cone counting, en-face images are analyzed over extended areas. This is achieved with two separate semiautomated algorithms: (1) a montaging algorithm that joins retinal images with overlapping common features without edge effects and (2) a cone density measurement algorithm that counts the individual cones in the montaged image. The accuracy of the cone density measurement algorithm is high, with >97% agreement for a simulated retinal image (of known density, with low contrast) and for AO images from normal eyes when compared with previously reported histological data. Our algorithms do not require spatial regularity in cone packing and are, therefore, useful for counting cones in diseased retinas, as demonstrated for eyes with Stargardt's macular dystrophy and retinitis pigmentosa.

  16. Joint Inversion of Body-Wave Arrival Times and Surface-Wave Dispersion Data in the Wavelet Domain Constrained by Sparsity Regularization

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Fang, H.; Yao, H.; Maceira, M.; van der Hilst, R. D.

    2014-12-01

    Recently, Zhang et al. (2014, Pure and Applied Geophysics) have developed a joint inversion code incorporating body-wave arrival times and surface-wave dispersion data. The joint inversion code was based on the regional-scale version of the double-difference tomography algorithm tomoDD. The surface-wave inversion part uses the propagator matrix solver in the algorithm DISPER80 (Saito, 1988) for forward calculation of dispersion curves from layered velocity models and the related sensitivities. The application of the joint inversion code to the SAFOD site in central California shows that the fault structure is better imaged in the new model, which is able to fit both the body-wave and surface-wave observations adequately. Here we present a new joint inversion method that solves for the model in the wavelet domain constrained by sparsity regularization. Compared to the previous method, it has the following advantages: (1) The method is both data- and model-adaptive. The velocity model can be represented by different wavelet coefficients at different scales, which are generally sparse. By constraining the model wavelet coefficients to be sparse, the inversion in the wavelet domain can inherently adapt to the data distribution so that the model has higher spatial resolution in the good data coverage zone. Fang and Zhang (2014, Geophysical Journal International) have shown the superior performance of the wavelet-based double-difference seismic tomography method compared to the conventional method. (2) For the surface wave inversion, the joint inversion code takes advantage of the recent development of direct inversion of surface wave dispersion data for 3-D variations of shear wave velocity without the intermediate step of phase or group velocity maps (Fang et al., 2014, Geophysical Journal International). A fast marching method is used to compute, at each period, surface wave traveltimes and ray paths between sources and receivers. We will test the new joint inversion code at the SAFOD site to compare its performance with the previous code. We will also select another fault zone, such as the San Jacinto Fault Zone, to better image its structure.
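
    A 1-D caricature of wavelet-domain, sparsity-regularized inversion: ISTA iterations that soft-threshold the wavelet coefficients of a blocky "velocity model". The kernel, wavelet, step size and penalty level are all stand-ins for the tomographic setting, not the authors' implementation.

        import numpy as np
        import pywt

        rng = np.random.default_rng(4)
        n = 128
        m_true = np.concatenate([np.full(40, 3.0), np.full(50, 4.5), np.full(38, 3.5)])
        G = rng.normal(0, 1.0 / np.sqrt(n), (80, n))     # underdetermined kernel
        d = G @ m_true + rng.normal(0, 0.01, 80)

        lam, step, wav = 0.02, 0.2, 'db4'
        m = np.zeros(n)
        for _ in range(300):
            grad = G.T @ (G @ m - d)                     # data-misfit gradient
            coeffs = pywt.wavedec(m - step * grad, wav, mode='periodization')
            # Soft-threshold detail coefficients only; keep the approximation.
            coeffs[1:] = [pywt.threshold(c, lam * step, mode='soft')
                          for c in coeffs[1:]]
            m = pywt.waverec(coeffs, wav, mode='periodization')
        print("relative error:", np.linalg.norm(m - m_true) / np.linalg.norm(m_true))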

  17. Adult lactose digestion status and effects on disease

    PubMed Central

    Szilagyi, Andrew

    2015-01-01

    BACKGROUND: Adult assimilation of lactose divides humans into dominant lactase-persistent and recessive nonpersistent phenotypes. OBJECTIVES: To review three medical parameters of lactose digestion, namely: the changing concept of lactose intolerance; the possible impact on diseases of microbial adaptation in lactase-nonpersistent populations; and the possibility that the evolution of lactase has influenced some disease pattern distributions. METHODS: A PubMed, Google Scholar and manual review of articles were used to provide a narrative review of the topic. RESULTS: The concept of lactose intolerance is changing and merging with food intolerances. Microbial adaptation to regular lactose consumption in lactase-nonpersistent individuals is supported by limited evidence. There is evidence suggestive of a relationship among geographical distributions of latitude, sunshine exposure and lactase proportional distributions worldwide. DISCUSSION: The definition of lactose intolerance has shifted away from association with lactose maldigestion. Lactose sensitivity is described equally in lactose digesters and maldigesters. The important medical consequence of withholding dairy foods could have a detrimental impact on several diseases; in addition, microbial adaptation in lactase-nonpersistent populations may alter risk for some diseases. There is suggestive evidence that the emergence of lactase persistence, together with human migrations before and after the emergence of lactase persistence, have impacted modern-day diseases. CONCLUSIONS: Lactose maldigestion and lactose intolerance are not synonymous. Withholding dairy foods is a poor method to treat lactose intolerance. Further epidemiological work could shed light on the possible effects of microbial adaptation in lactose maldigesters. The evolutionary impact of lactase may be still ongoing. PMID:25855879

  18. Adaptive low-rank subspace learning with online optimization for robust visual tracking.

    PubMed

    Liu, Risheng; Wang, Di; Han, Yuzhuo; Fan, Xin; Luo, Zhongxuan

    2017-04-01

    In recent years, sparse and low-rank models have been widely used to formulate appearance subspaces for visual tracking. However, most existing methods only consider the sparsity or low-rankness of the coefficients, which is not sufficient for appearance subspace learning on complex video sequences. Moreover, as both the low-rank and the column sparse measures are tightly related to all the samples in the sequences, it is challenging to incrementally solve optimization problems with both nuclear norm and column sparse norm on sequentially obtained video data. To address the above limitations, this paper develops a novel low-rank subspace learning with adaptive penalization (LSAP) framework for subspace based robust visual tracking. Different from previous work, which often simply decomposes observations as low-rank features and sparse errors, LSAP simultaneously learns the subspace basis, low-rank coefficients and column sparse errors to formulate the appearance subspace. Within the LSAP framework, we introduce a Hadamard product based regularization to incorporate rich generative/discriminative structure constraints to adaptively penalize the coefficients for subspace learning. It is shown that such adaptive penalization can significantly improve the robustness of LSAP on severely corrupted datasets. To utilize LSAP for online visual tracking, we also develop an efficient incremental optimization scheme for nuclear norm and column sparse norm minimizations. Experiments on 50 challenging video sequences demonstrate that our tracker outperforms other state-of-the-art methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Adaptive density trajectory cluster based on time and space distance

    NASA Astrophysics Data System (ADS)

    Liu, Fagui; Zhang, Zhijie

    2017-10-01

    Several open problems remain in trajectory clustering for discovering regularities in mobile behavior, such as computing the distance between sub-trajectories, setting the parameter values of the clustering algorithm, and handling the uncertainty/boundary problem of the data set. Accordingly, this paper defines a method for calculating the distance between sub-trajectories based on time and space. The significance of this distance calculation is that it clearly reveals the differences between moving trajectories and promotes the accuracy of the clustering algorithm. Besides, a novel adaptive density trajectory clustering algorithm is proposed, in which the cluster radius is computed from the density of the data distribution. In addition, cluster centers and the number of clusters are selected automatically by a certain strategy, and the uncertainty/boundary problem of the data set is solved by a designed weighted rough c-means. Experimental results demonstrate that the proposed algorithm can perform fuzzy trajectory clustering effectively on the basis of the time and space distance, and can adaptively obtain the optimal cluster centers and rich cluster information for excavating the features of mobile behavior in mobile and social networks.

  20. Effect of sensory adaptation on anxiety of children with developmental disabilities: a new approach.

    PubMed

    Shapiro, Michele; Melmed, Raphael N; Sgan-Cohen, Harold D; Parush, Shula

    2009-01-01

    The aim of this study was to evaluate the effect of a sensory-adapted dental environment (SADE) on anxiety, relaxation, and cooperation of children with developmental disabilities (CDDs). Pharmacological treatment has been widely used to reduce anxiety, but nonpharmacological methods may be similarly effective. The standardized clinical situation chosen was a dental hygiene cleaning. A SADE was structured. Sixteen CDDs participated in an open cross-over intervention trial measuring behavioral and psychophysiological variables. There was a substantial increase in relaxation and cooperation in the SADE as opposed to the regular dental environment (RDE). This was reflected by: mean duration of anxious behaviors (SADE = 9.04 minutes vs. RDE = 23.44 minutes; P < .01); mean magnitude of anxious behaviors (SADE = 8.49 vs. RDE = 15.50; P < .01); cooperation levels (SADE = 331 vs. RDE = 1.94; P < .01); mean electrodermal activity (EDA; SADE = 1230 vs. RDE = 446; P < .001); and difference in degree of relaxation by EDA (SADE = 2014 vs. RDE = 763; P < .004). The findings indicate the potential importance of considering the sensory-adapted environment as a preferable dental environment for this population.

  1. Glycation products in infant formulas: chemical, analytical and physiological aspects.

    PubMed

    Pischetsrieder, Monika; Henle, Thomas

    2012-04-01

    Infant formulas are milk-based products, which are adapted to the composition of human milk. To ensure microbiological safety and long shelf life, infant formulas usually undergo rigid heat treatment. As a consequence of the special composition and the heat regimen, infant formulas are more prone to thermally induced degradation reactions than regular milk products. Degradation reactions observed during milk processing comprise lactosylation yielding the Amadori product lactulosyllysine, the formation of advanced glycation end products (AGEs), and protein-free sugar degradation products, as well as protein or lipid oxidation. Several methods have been developed to estimate the heat impact applied during the manufacturing of infant formulas, including indirect methods such as fluorescence analysis as well as the analysis of defined reaction products. Most studies confirm a higher degree of damage in infant formulas compared to regular milk products. Differences between various types of infant formulas, such as liquid, powdered or hypoallergenic formulas depend on the analyzed markers and brands. A considerable portion of protein degradation products in infant formulas can be avoided when process parameters and the quality of the ingredients are carefully controlled. The nutritional consequences of thermal degradation products in infant formulas are largely unknown.

  2. A practical globalization of one-shot optimization for optimal design of tokamak divertors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blommaert, Maarten, E-mail: maarten.blommaert@kuleuven.be; Dekeyser, Wouter; Baelmans, Martine

    In past studies, nested optimization methods were successfully applied to the design of the magnetic divertor configuration in nuclear fusion reactors. In this paper, so-called one-shot optimization methods are pursued. Due to convergence issues, a globalization strategy for the one-shot solver is sought. Whereas Griewank introduced a globalization strategy using a doubly augmented Lagrangian function that includes primal and adjoint residuals, its practical usability is limited by the necessity of second order derivatives and expensive line search iterations. In this paper, a practical alternative is offered that avoids these drawbacks by using a regular augmented Lagrangian merit function that penalizes only state residuals. Additionally, robust rank-two Hessian estimation is achieved by adaptation of Powell's damped BFGS update rule. The application of the novel one-shot approach to magnetic divertor design is considered in detail. For this purpose, the approach is adapted to be complementary with practical in parts adjoint sensitivities. Using the globalization strategy, stable convergence of the one-shot approach is achieved.
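
    Powell's damped BFGS update, mentioned above for robust rank-two Hessian estimation, is short enough to state exactly; the damping constant 0.2 is the conventional choice.

        import numpy as np

        def damped_bfgs_update(B, s, y, delta=0.2):
            """Powell's damped BFGS update of the Hessian approximation B.

            Replaces y by r = theta*y + (1 - theta)*B s whenever the curvature
            condition s^T y >= delta * s^T B s fails, which keeps B positive
            definite even when the merit function is locally nonconvex."""
            Bs = B @ s
            sBs = s @ Bs
            sy = s @ y
            theta = 1.0 if sy >= delta * sBs else (1 - delta) * sBs / (sBs - sy)
            r = theta * y + (1 - theta) * Bs
            return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)

        # Sanity check: B stays symmetric positive definite after an update
        # with negative curvature (s^T y < 0), where plain BFGS would fail.
        B = np.eye(3)
        s = np.array([1.0, 0.0, 0.0])
        y = np.array([-0.5, 0.1, 0.0])       # violates the curvature condition
        print(np.linalg.eigvalsh(damped_bfgs_update(B, s, y)))  # all positive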

  3. Implementing an Adaptive Physical Education Program for Educable Mentally Retarded Children, Kindergarten through Third Grade.

    ERIC Educational Resources Information Center

    Keller, Mimi

    An adaptive physical education program was implemented for two special classes of educable mentally retarded children, grades K-3 in California. Children from a regular kindergarten class also participated in the program. The program operated for 5 months, with children receiving motor skills training 40 minutes per day, 4 days per week. Analysis…

  4. Long-term adaptation to change in implicit contextual learning.

    PubMed

    Zellin, Martina; von Mühlenen, Adrian; Müller, Hermann J; Conci, Markus

    2014-08-01

    The visual world consists of spatial regularities that are acquired through experience in order to guide attentional orienting. For instance, in visual search, detection of a target is faster when a layout of nontarget items is encountered repeatedly, suggesting that learned contextual associations can guide attention (contextual cuing). However, scene layouts sometimes change, requiring observers to adapt previous memory representations. Here, we investigated the long-term dynamics of contextual adaptation after a permanent change of the target location. We observed fast and reliable learning of initial context-target associations after just three repetitions. However, adaptation of acquired contextual representations to relocated targets was slow and effortful, requiring 3 days of training with overall 80 repetitions. A final test 1 week later revealed equivalent effects of contextual cuing for both target locations, and these were comparable to the effects observed on day 1. That is, observers learned both initial target locations and relocated targets, given extensive training combined with extended periods of consolidation. Thus, while implicit contextual learning efficiently extracts statistical regularities of our environment at first, it is rather insensitive to change in the longer term, especially when subtle changes in context-target associations need to be acquired.

  5. Dimension-based statistical learning of vowels

    PubMed Central

    Liu, Ran; Holt, Lori L.

    2015-01-01

    Speech perception depends on long-term representations that reflect regularities of the native language. However, listeners rapidly adapt when speech acoustics deviate from these regularities due to talker idiosyncrasies such as foreign accents and dialects. To better understand these dual aspects of speech perception, we probe native English listeners’ baseline perceptual weighting of two acoustic dimensions (spectral quality and vowel duration) towards vowel categorization and examine how they subsequently adapt to an “artificial accent” that deviates from English norms in the correlation between the two dimensions. At baseline, listeners rely relatively more on spectral quality than vowel duration to signal vowel category, but duration nonetheless contributes. Upon encountering an “artificial accent” in which the spectral-duration correlation is perturbed relative to English language norms, listeners rapidly down-weight reliance on duration. Listeners exhibit this type of short-term statistical learning even in the context of nonwords, confirming that lexical information is not necessary to this form of adaptive plasticity in speech perception. Moreover, learning generalizes to both novel lexical contexts and acoustically-distinct altered voices. These findings are discussed in the context of a mechanistic proposal for how supervised learning may contribute to this type of adaptive plasticity in speech perception. PMID:26280268

  6. On convergence and convergence rates for Ivanov and Morozov regularization and application to some parameter identification problems in elliptic PDEs

    NASA Astrophysics Data System (ADS)

    Kaltenbacher, Barbara; Klassen, Andrej

    2018-05-01

    In this paper we provide a convergence analysis of some variational methods alternative to the classical Tikhonov regularization, namely Ivanov regularization (also called the method of quasi solutions) with some versions of the discrepancy principle for choosing the regularization parameter, and Morozov regularization (also called the method of the residuals). After motivating nonequivalence with Tikhonov regularization by means of an example, we prove well-definedness of the Ivanov and the Morozov method, convergence in the sense of regularization, as well as convergence rates under variational source conditions. Finally, we apply these results to some linear and nonlinear parameter identification problems in elliptic boundary value problems.
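
    To make the three strategies concrete, the sketch below solves a small ill-posed least-squares toy problem with each of them (SLSQP for the constrained Ivanov and Morozov forms; the Ivanov radius uses the true solution norm as an oracle, and all problem data are made up):

        import numpy as np
        from scipy.optimize import minimize

        # Tikhonov: min ||Ax-b||^2 + alpha*||x||^2
        # Ivanov  : min ||Ax-b||^2  s.t. ||x||^2 <= rho^2      (quasi solutions)
        # Morozov : min ||x||^2     s.t. ||Ax-b||^2 <= delta^2 (residual method)
        rng = np.random.default_rng(5)
        n = 20
        A = rng.normal(0, 1, (n, n)) @ np.diag(0.8 ** np.arange(n))
        x_true = np.ones(n)
        delta = 0.05
        b = A @ x_true + rng.normal(0, delta / np.sqrt(n), n)

        misfit = lambda x: np.sum((A @ x - b) ** 2)
        norm2 = lambda x: np.sum(x ** 2)

        tikh = np.linalg.solve(A.T @ A + 1e-3 * np.eye(n), A.T @ b)
        ivanov = minimize(misfit, tikh, method='SLSQP',
                          constraints=[{'type': 'ineq',
                                        'fun': lambda x: norm2(x_true) - norm2(x)}]).x
        morozov = minimize(norm2, tikh, method='SLSQP',
                           constraints=[{'type': 'ineq',
                                         'fun': lambda x: delta**2 - misfit(x)}]).x
        for name, x in [("Tikhonov", tikh), ("Ivanov", ivanov), ("Morozov", morozov)]:
            print(name, round(float(np.linalg.norm(x - x_true)), 3))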

  7. A Piecewise Deterministic Markov Toy Model for Traffic/Maintenance and Associated Hamilton–Jacobi Integrodifferential Systems on Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goreac, Dan, E-mail: Dan.Goreac@u-pem.fr; Kobylanski, Magdalena, E-mail: Magdalena.Kobylanski@u-pem.fr; Martinez, Miguel, E-mail: Miguel.Martinez@u-pem.fr

    2016-10-15

    We study optimal control problems in infinite horizon when the dynamics belong to a specific class of piecewise deterministic Markov processes constrained to star-shaped networks (corresponding to a toy traffic model). We adapt the results in Soner (SIAM J Control Optim 24(6):1110–1122, 1986) to prove the regularity of the value function and the dynamic programming principle. Extending the networks and Krylov's "shaking the coefficients" method, we prove that the value function can be seen as the solution to a linearized optimization problem set on a convenient set of probability measures. The approach relies entirely on viscosity arguments. As a by-product, the dual formulation guarantees that the value function is the pointwise supremum over regular subsolutions of the associated Hamilton–Jacobi integrodifferential system. This ensures that the value function satisfies Perron's preconization for the (unique) candidate to viscosity solution.

  8. Ecological Rationality: A Framework for Understanding and Aiding the Aging Decision Maker

    PubMed Central

    Mata, Rui; Pachur, Thorsten; von Helversen, Bettina; Hertwig, Ralph; Rieskamp, Jörg; Schooler, Lael

    2012-01-01

    The notion of ecological rationality sees human rationality as the result of the adaptive fit between the human mind and the environment. Ecological rationality focuses the study of decision making on two key questions: First, what are the environmental regularities to which people’s decision strategies are matched, and how frequently do these regularities occur in natural environments? Second, how well can people adapt their use of specific strategies to particular environmental regularities? Research on aging suggests a number of changes in cognitive function, for instance, deficits in learning and memory that may impact decision-making skills. However, it has been shown that simple strategies can work well in many natural environments, which suggests that age-related deficits in strategy use may not necessarily translate into reduced decision quality. Consequently, we argue that predictions about the impact of aging on decision performance depend not only on how aging affects decision-relevant capacities but also on the decision environment in which decisions are made. In sum, we propose that the concept of ecological rationality is crucial to understanding and aiding the aging decision maker. PMID:22347843

  9. Ecological rationality: a framework for understanding and aiding the aging decision maker.

    PubMed

    Mata, Rui; Pachur, Thorsten; von Helversen, Bettina; Hertwig, Ralph; Rieskamp, Jörg; Schooler, Lael

    2012-01-01

    The notion of ecological rationality sees human rationality as the result of the adaptive fit between the human mind and the environment. Ecological rationality focuses the study of decision making on two key questions: First, what are the environmental regularities to which people's decision strategies are matched, and how frequently do these regularities occur in natural environments? Second, how well can people adapt their use of specific strategies to particular environmental regularities? Research on aging suggests a number of changes in cognitive function, for instance, deficits in learning and memory that may impact decision-making skills. However, it has been shown that simple strategies can work well in many natural environments, which suggests that age-related deficits in strategy use may not necessarily translate into reduced decision quality. Consequently, we argue that predictions about the impact of aging on decision performance depend not only on how aging affects decision-relevant capacities but also on the decision environment in which decisions are made. In sum, we propose that the concept of ecological rationality is crucial to understanding and aiding the aging decision maker.

  10. SU-G-IeP4-03: Cone Beam X-Ray Luminescence Computed Tomography Based On Generalized Gaussian Markov Random Field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, G; Xing, L

    2016-06-15

    Purpose: Cone beam X-ray luminescence computed tomography (CB-XLCT), which aims to achieve molecular and functional imaging by X-rays, has recently been proposed as a new imaging modality. However, the inverse problem of CB-XLCT is seriously ill-conditioned, hindering good image quality. In this work, a novel reconstruction method based on Bayesian theory is proposed to tackle this problem. Methods: Bayesian theory provides a natural framework for utilizing various kinds of available prior information to improve the reconstruction image quality. A generalized Gaussian Markov random field (GGMRF) model is proposed here to construct the prior model within the Bayesian framework. The most important feature of the GGMRF model is the adjustable shape parameter p, which can be continuously adjusted from 1 to 2. The reconstructed image tends to have a more edge-preserving property as p approaches 1, and more noise tolerance as p approaches 2, analogous to the behavior of the L1 and L2 regularization methods, respectively. The proposed method provides a flexible regularization framework that adapts to a wide range of applications. Results: Numerical simulations were implemented to test the performance of the proposed method. The Digimouse atlas was employed to construct a three-dimensional mouse model, and two small cylinders were placed inside to serve as the targets. Reconstruction results show that the proposed method tends to obtain better spatial resolution with a smaller shape parameter and a better signal-to-noise ratio with a larger one. Quantitative indexes, contrast-to-noise ratio (CNR) and full-width at half-maximum (FWHM), were used to assess the performance of the proposed method and confirmed its effectiveness in CB-XLCT reconstruction. Conclusion: A novel reconstruction method for CB-XLCT is proposed based on the GGMRF model, which enables an adjustable performance tradeoff between the L1 and L2 regularization methods. Numerical simulations were conducted to demonstrate its performance.
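
    The role of the shape parameter p is easy to make concrete. Below is a minimal 1-D numpy sketch of penalized reconstruction under a GGMRF-style neighborhood prior; the system matrix, data, weight lam, and chain neighborhood are toy assumptions, not the paper's CB-XLCT forward model.

    import numpy as np

    def ggmrf_grad(x, p, eps=1e-8):
        """Gradient of sum_i |x[i+1] - x[i]|^p over a 1-D chain of neighbors."""
        d = np.diff(x)
        g = p * np.sign(d) * (np.abs(d) + eps) ** (p - 1)   # eps smooths the p -> 1 limit
        out = np.zeros_like(x)
        out[:-1] -= g                                       # d/dx_i   of |x_{i+1} - x_i|^p
        out[1:] += g                                        # d/dx_{i+1}
        return out

    def reconstruct(A, y, p=1.2, lam=0.1, steps=2000):
        """Gradient descent on ||A x - y||^2 + lam * GGMRF_p(x)."""
        x = np.zeros(A.shape[1])
        lr = 0.5 / (np.linalg.norm(A, 2) ** 2 + lam)        # conservative step size
        for _ in range(steps):
            x -= lr * (2 * A.T @ (A @ x - y) + lam * ggmrf_grad(x, p))
        return x
    # p near 1 behaves like an edge-preserving L1 penalty; p = 2 recovers a
    # quadratic (L2-like) smoothness penalty, matching the tradeoff described above.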

  11. In vivo optical coherence tomography in endoscopic diagnostics of bladder disease

    NASA Astrophysics Data System (ADS)

    Daniltchenko, Dmitri; Lankenau, Eva; Konig, Frank; Shay, Brian; Huettmann, Gereon; Sachs, Markus D.; Schnorr, Dietmar; Loening, Stefan A.

    2004-07-01

    Purpose: OCT is a new imaging method which produces a 3 mm wide × 2.5 mm deep 2D picture with a resolution of 15 μm. Materials and Methods: We utilised the Tomograph Sirius 713, developed at the Medical Laser Centre in cooperation with 4-Optics AG, Lübeck, Germany. This apparatus uses a special Super-Luminescence-Diode (SLD) that produces light within the near infrared wavelength, with a central wavelength of 1300 nm and spectral width of 45 nm. The coherence length is reduced to 15 μm. The light is introduced into a fibreglass optic which is a couple of meters long and is easy to handle. To measure the depth of invasion and position of urothelial bladder tumours, the fibreglass optic is attached to a regular endoscope (Wolf, Knittlingen, Germany) via an OCT adapter. That way, in parallel to the regular endoscopic view of the bladder mucosa with or without pathologic findings, an OCT picture of the superficial as well as the deeper muscle layers is visible online. OCT was used to obtain 275 images from the bladder of 30 patients. Results: OCT of normal bladder mucosa produces an image with a cross section of up to 2.5 mm. It is possible to distinguish transitional epithelium, lamina propria, smooth muscles and capillaries. In cystitis the thickness of the mucosa is constant, but the distinction between the different layers is blurred. In squamous metaplasia there is thickening of the epithelial layer, with preservation of lamination of the lower layers. In transitional cell carcinoma there is a complete loss of the regular layered structure. Thus, the border between tumour and normal bladder tissue can be easily distinguished. Conclusions: This method can provide valuable information on tumour invasion and extension in real time and therefore influence therapeutic strategies.

  12. Positive Contrast Visualization of Nitinol Devices using Susceptibility Gradient Mapping

    PubMed Central

    Vonken, Evert-jan P.A.; Schär, Michael; Stuber, Matthias

    2008-01-01

    MRI visualization of devices is traditionally based on the signal loss due to T2* effects originating from the local susceptibility differences. To visualize nitinol devices with positive contrast a recently introduced post processing method is adapted to map the induced susceptibility gradients. This method operates on regular gradient echo MR images and maps the shift in k-space in a (small) neighborhood of every voxel by Fourier analysis followed by a center of mass calculation. The quantitative map of the local shifts generates the positive contrast image of the devices, while areas without susceptibility gradients render a background with noise only. The positive signal response of this method depends only on the choice of the voxel neighborhood size. The properties of the method are explained and the visualization of a nitinol wire and two stents are shown for illustration. PMID:18727096
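
    The post-processing step is compact enough to sketch. The numpy fragment below estimates, for every pixel of a complex gradient-echo image, the local k-space shift by Fourier-transforming a small neighborhood and taking the center of mass of its spectrum, as described above; the neighborhood size w and the edge padding are assumptions of this sketch.

    import numpy as np

    def sgm(image, w=9):
        """Map the magnitude of the local k-space shift for each pixel."""
        pad = w // 2
        padded = np.pad(image, pad, mode="edge")
        ky, kx = np.meshgrid(np.arange(w) - pad, np.arange(w) - pad, indexing="ij")
        shift = np.zeros(image.shape, dtype=float)
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                patch = padded[i:i + w, j:j + w]             # local neighborhood
                spec = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
                m = spec.sum()
                cy = (ky * spec).sum() / m                   # k-space center of mass
                cx = (kx * spec).sum() / m
                shift[i, j] = np.hypot(cy, cx)               # distance from k-space origin
        return shift
    # Voxels near a nitinol device acquire a large shift (bright); background with
    # no susceptibility gradient stays near zero, giving the positive contrast.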

  13. Sensor-Topology Based Simplicial Complex Reconstruction from Mobile Laser Scanning

    NASA Astrophysics Data System (ADS)

    Guinard, S.; Vallet, B.

    2018-05-01

    We propose a new method for the reconstruction of simplicial complexes (combining points, edges and triangles) from 3D point clouds from Mobile Laser Scanning (MLS). Our main goal is to produce a reconstruction of a scene that is adapted to the local geometry of objects. Our method uses the inherent topology of the MLS sensor to define a spatial adjacency relationship between points. We then investigate each possible connection between adjacent points and filter them by searching for collinear structures in the scene, or structures perpendicular to the laser beams. Next, we create triangles for each triplet of self-connected edges. Last, we improve this method with a regularization based on the co-planarity of triangles and the collinearity of remaining edges. We compare our results to a naive simplicial complex reconstruction based on edge length.

  14. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    PubMed

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
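
    A minimal numpy sketch of this kind of estimator (remove a few principal-component factors, then adaptively soft-threshold the residual covariance) is given below. The factor count K and threshold constant c are illustrative choices, not the data-driven rules analyzed in the paper.

    import numpy as np

    def factor_threshold_cov(X, K=3, c=0.5):
        """X: n x p data matrix (rows = observations). Returns a p x p estimate."""
        n, p = X.shape
        S = np.cov(X, rowvar=False)
        vals, vecs = np.linalg.eigh(S)                  # eigenvalues in ascending order
        low_rank = (vecs[:, -K:] * vals[-K:]) @ vecs[:, -K:].T   # K-factor part
        R = S - low_rank                                # residual (idiosyncratic) covariance
        # entry-adaptive threshold ~ sqrt(log p / n), scaled by entry variances
        tau = c * np.sqrt(np.outer(np.diag(R), np.diag(R)) * np.log(p) / n)
        R_thr = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)
        np.fill_diagonal(R_thr, np.diag(R))             # keep the diagonal untouched
        return low_rank + R_thr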

  15. Selection of regularization parameter in total variation image restoration.

    PubMed

    Liao, Haiyong; Li, Fang; Ng, Michael K

    2009-11-01

    We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with an automatic selection of the regularization parameter scheme to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for testing different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 × 256 in approximately 20 s in the MATLAB computing environment.
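
    The GCV idea is easy to state in code. The sketch below selects the regularization parameter for a plain Tikhonov step via the SVD; the paper applies GCV inside each iteration of its TV restoration scheme, so this standalone version is a simplified stand-in.

    import numpy as np

    def gcv_alpha(A, b, alphas):
        """Pick alpha minimizing the GCV score for min ||A x - b||^2 + alpha ||x||^2."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        Utb = U.T @ b
        b_perp = np.linalg.norm(b - U @ Utb) ** 2        # component outside range(U)
        best, best_score = None, np.inf
        for a in alphas:
            f = s**2 / (s**2 + a)                        # Tikhonov filter factors
            resid = np.linalg.norm((1 - f) * Utb) ** 2 + b_perp
            denom = (len(b) - f.sum()) ** 2              # [trace(I - influence matrix)]^2
            score = resid / denom
            if score < best_score:
                best, best_score = a, score
        return best

    # usage: alpha = gcv_alpha(A, b, np.logspace(-8, 2, 60))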

  16. Phenotypic plasticity and climate change: can polar bears respond to longer Arctic summers with an adaptive fast?

    PubMed

    Whiteman, John P; Harlow, Henry J; Durner, George M; Regehr, Eric V; Amstrup, Steven C; Ben-David, Merav

    2018-02-01

    Plasticity in the physiological and behavioural responses of animals to prolonged food shortages may determine the persistence of species under climate warming. This is particularly applicable for species that can "adaptively fast" by conserving protein to protect organ function while catabolizing endogenous tissues. Some ursids, including polar bears (Ursus maritimus), adaptively fast during winter hibernation, and it has been suggested that polar bears also employ this strategy during summer. We captured 57 adult female polar bears in the Southern Beaufort Sea (SBS) during summer 2008 and 2009 and measured blood variables that indicate feeding, regular fasting, and adaptive fasting. We also assessed tissue δ13C and δ15N to infer diet, and body condition via mass and length. We found that bears on shore maintained lipid and protein stores by scavenging on bowhead whale (Balaena mysticetus) carcasses from human harvest, while those that followed the retreating sea ice beyond the continental shelf were food deprived. They had low ratios of blood urea to creatinine (U:C), normally associated with adaptive fasting. However, they also exhibited low albumin and glucose (indicative of protein loss) and elevated alanine aminotransferase and ghrelin (which fall during adaptive fasting). Thus, the ~70% of the SBS subpopulation that spends summer on the ice experiences more of a regular, rather than adaptive, fast. This fast will lengthen as summer ice declines. The resulting protein loss prior to winter could be a mechanism driving the reported correlation between summer ice and polar bear reproduction and survival in the SBS.

  17. SkyFACT: high-dimensional modeling of gamma-ray emission with adaptive templates and penalized likelihoods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storm, Emma; Weniger, Christoph; Calore, Francesca, E-mail: e.m.storm@uva.nl, E-mail: c.weniger@uva.nl, E-mail: francesca.calore@lapth.cnrs.fr

    We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳ 10⁵) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data as well as on gamma-ray emission from the inner Galaxy, |ℓ| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as basis for future studies of diffuse emission in and outside the Galactic disk.
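
    At its core this is penalized Poisson likelihood regression with bound constraints, which can be sketched directly with scipy's L-BFGS-B. The templates, data, and quadratic penalty below are toy stand-ins; the paper's maximum-entropy regularizers and ~10⁵-parameter scale are not reproduced.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    T = rng.uniform(0.5, 1.5, size=(200, 10))        # templates (pixels x parameters)
    theta_true = rng.uniform(0.8, 1.2, size=10)
    d = rng.poisson(T @ theta_true)                  # observed counts
    lam = 10.0                                       # penalty weight (assumed)

    def objective(theta):
        mu = T @ theta                               # expected counts
        nll = np.sum(mu - d * np.log(mu + 1e-12))    # Poisson negative log-likelihood
        return nll + lam * np.sum((theta - 1.0) ** 2)   # pull nuisance params toward 1

    res = minimize(objective, x0=np.ones(10), method="L-BFGS-B",
                   bounds=[(1e-6, None)] * 10)       # keep intensities positive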

  18. An improved Bayesian tensor regularization and sampling algorithm to track neuronal fiber pathways in the language circuit.

    PubMed

    Mishra, Arabinda; Anderson, Adam W; Wu, Xi; Gore, John C; Ding, Zhaohua

    2010-08-01

    The purpose of this work is to design a neuronal fiber tracking algorithm which will be more suitable for reconstruction of fibers associated with functionally important regions in the human brain. The functional activations in the brain normally occur in the gray matter regions. Hence the fibers bordering these regions are weakly myelinated, resulting in poor performance of conventional tractography methods in tracing the fiber links between them. A lower fractional anisotropy in this region makes it even more difficult to track the fibers in the presence of noise. In this work, the authors focused on a stochastic approach to reconstruct these fiber pathways based on a Bayesian regularization framework. To estimate the true fiber direction (propagation vector), the a priori and conditional probability density functions are calculated in advance and are modeled as multivariate normal. The variance of the estimated tensor element vector is associated with the uncertainty due to noise and partial volume averaging (PVA). An adaptive and multiple sampling of the estimated tensor element vector, which is a function of the pre-estimated variance, overcomes the effect of noise and PVA in this work. The algorithm has been rigorously tested using a variety of synthetic data sets. The quantitative comparison of the results to standard algorithms motivated the authors to implement it for in vivo DTI data analysis. The algorithm has been implemented to delineate fibers in two major language pathways (Broca's to SMA and Broca's to Wernicke's) across 12 healthy subjects. Though the mean standard deviation was marginally larger than that of the conventional (Euler's) approach [P. J. Basser et al., "In vivo fiber tractography using DT-MRI data," Magn. Reson. Med. 44(4), 625-632 (2000)], the number of extracted fibers in this approach was significantly higher. The authors also compared the performance of the proposed method to Lu's method [Y. Lu et al., "Improved fiber tractography with Bayesian tensor regularization," Neuroimage 31(3), 1061-1074 (2006)] and Friman's stochastic approach [O. Friman et al., "A Bayesian approach for stochastic white matter tractography," IEEE Trans. Med. Imaging 25(8), 965-978 (2006)]. Overall performance of the approach is found to be superior to the above two methods, particularly when the signal-to-noise ratio was low. The authors observed that an adaptive sampling of the tensor element vectors, estimated as a function of the variance in a Bayesian framework, can effectively delineate neuronal fibers for analyzing the structure-function relationship in the human brain. The simulated and in vivo results are in good agreement with the theoretical aspects of the algorithm.

  19. Methods for prostate stabilization during transperineal LDR brachytherapy.

    PubMed

    Podder, Tarun; Sherman, Jason; Rubens, Deborah; Messing, Edward; Strang, John; Ng, Wan-Sing; Yu, Yan

    2008-03-21

    In traditional prostate brachytherapy procedures for a low-dose-rate (LDR) radiation seed implant, stabilizing needles are first inserted to provide some rigidity and support to the prostate. Ideally this will provide better seed placement and an overall improved treatment. However, there is much speculation regarding the effectiveness of using regular brachytherapy needles as stabilizers. In this study, we explored the efficacy of two types of needle geometries (regular brachytherapy needle and hooked needle) and several clinically feasible configurations of the stabilization needles. To understand and assess prostate movement during seed implantation, we collected in vivo data from patients during actual brachytherapy procedures. In vitro experimentation with tissue-equivalent phantoms allowed us to further understand the mechanics behind prostate stabilization. We observed superior stabilization with the hooked needles compared to the regular brachytherapy needles (more than 40% in the bilateral parallel needle configuration). Prostate movement was also reduced significantly when regular brachytherapy needles were in an angulated configuration as compared to the parallel configuration (more than 60%). When the hooked needles were angulated for stabilization, further reduction in prostate displacement was observed. In general, for convenience of dosimetric planning and to avoid needle collision, all needles are desired to be in a parallel configuration. In this configuration, hooked needles provide improved stabilization of the prostate. On the other hand, both regular and hooked needles appear to be equally effective in reducing prostate movement when they are in angulated configurations, which will be useful in seed implantation using a robotic system. We have developed a nonlinear spring-damper model for prostate movement which can be used for adapting dosimetric planning during brachytherapy as well as for developing more realistic haptic devices and training simulators.

  20. Children with Brittle Bones.

    ERIC Educational Resources Information Center

    Alston, Jean

    1982-01-01

    Special help given to children with Osteogenesis Imperfecta (brittle bone disease) is described, including adapted equipment to allow for writing and use of a classroom assistant to aid participation in a regular classroom. (CL)

  1. Use of intervention mapping to adapt a health behavior change intervention for endometrial cancer survivors: the shape-up following cancer treatment program.

    PubMed

    Koutoukidis, Dimitrios A; Lopes, Sonia; Atkins, Lou; Croker, Helen; Knobf, M Tish; Lanceley, Anne; Beeken, Rebecca J

    2018-03-27

    About 80% of endometrial cancer survivors (ECS) are overweight or obese and have sedentary behaviors. Lifestyle behavior interventions are promising for improving dietary and physical activity behaviors, but the constructs associated with their effectiveness are often inadequately reported. The aim of this study was to systematically adapt an evidence-based behavior change program to improve healthy lifestyle behaviors in ECS. Following a review of the literature, focus groups and interviews were conducted with ECS (n = 16). An intervention mapping protocol was used for the program adaptation, which consisted of six steps: a needs assessment, formulation of matrices of change objectives, selection of theoretical methods and practical applications, program production, adoption and implementation planning, and evaluation planning. Social Cognitive Theory and Control Theory guided the adaptation of the intervention. The process consisted of eight 90-min group sessions focusing on shaping outcome expectations, knowledge, self-efficacy, and goals about healthy eating and physical activity. The adapted performance objectives included establishment of regular eating, balanced diet, and portion sizes, reduction in sedentary behaviors, increase in lifestyle and organized activities, formulation of a discrepancy-reducing feedback loop for all above behaviors, and trigger management. Information on managing fatigue and bowel issues unique to ECS was added. Systematic intervention mapping provided a framework to design a cancer survivor-centered lifestyle intervention. ECS welcomed the intervention and provided essential feedback for its adaptation. The program has been evaluated through a randomized controlled trial.

  2. Techniques to derive geometries for image-based Eulerian computations

    PubMed Central

    Dillard, Seth; Buchholz, James; Vigmostad, Sarah; Kim, Hyunggun; Udaykumar, H.S.

    2014-01-01

    Purpose: The performance of three frequently used level set-based segmentation methods is examined for the purpose of defining features and boundary conditions for image-based Eulerian fluid and solid mechanics models. The focus of the evaluation is to identify an approach that produces the best geometric representation from a computational fluid/solid modeling point of view. In particular, extraction of geometries from a wide variety of imaging modalities and noise intensities, to supply to an immersed boundary approach, is targeted. Design/methodology/approach: Two- and three-dimensional images, acquired from optical, X-ray CT, and ultrasound imaging modalities, are segmented with active contours, k-means, and adaptive clustering methods. Segmentation contours are converted to level sets and smoothed as necessary for use in fluid/solid simulations. Results produced by the three approaches are compared visually and with contrast ratio, signal-to-noise ratio, and contrast-to-noise ratio measures. Findings: While the active contours method possesses built-in smoothing and regularization and produces continuous contours, the clustering methods (k-means and adaptive clustering) produce discrete (pixelated) contours that require smoothing using speckle-reducing anisotropic diffusion (SRAD). Thus, for images with high contrast and low to moderate noise, active contours are generally preferable. However, adaptive clustering is found to be far superior to the other two methods for images possessing high levels of noise and global intensity variations, due to its more sophisticated use of local pixel/voxel intensity statistics. Originality/value: It is often difficult to know a priori which segmentation will perform best for a given image type, particularly when geometric modeling is the ultimate goal. This work offers insight to the algorithm selection process, as well as outlining a practical framework for generating useful geometric surfaces in an Eulerian setting. PMID:25750470

  3. Heat adaptation from regular hot water immersion decreases proinflammatory responses, HSP70 expression, and physical heat stress.

    PubMed

    Yang, Fwu-Lin; Lee, Chia-Chi; Subeq, Yi-Maun; Lee, Chung-Jen; Ke, Chun-Yen; Lee, Ru-Ping

    2017-10-01

    Hot-water immersion (HWI) is a type of thermal therapy for treating various diseases. In our study, the physiological responses to occasional and regular HWI have been explored. The rats were divided into a control group, an occasional group (1D), and a regular group (7D). The 1D and 7D groups received HWI at 42°C for 15 min, for 1 and 7 days, respectively. Blood samples were collected for proinflammatory cytokine examinations; the heart, liver, and kidney were excised for subsequent IHC analysis to measure the level of heat shock protein 70 (HSP70). The results revealed that the body temperature increased significantly during HWI on Day 3 and significantly declined on Days 6 and 7. For the 7D group, body weight, heart rate, hematocrit, platelet count, osmolarity, and lactate level were lower than those in the 1D group. Furthermore, the levels of granulocyte counts, tumor necrosis factor-α, and interleukin-6 were lower in the 7D group than in the 1D group. The induction of HSP70 in the 1D group was higher than in the other groups. Physiological responses to occasional HWI are disadvantageous because of heat stress. However, adaptation to heat from regular HWI resulted in decreased proinflammatory responses and physical heat stress.

  4. A comparison of viscous-plastic sea ice solvers with and without replacement pressure

    NASA Astrophysics Data System (ADS)

    Kimmritz, Madlen; Losch, Martin; Danilov, Sergey

    2017-07-01

    Recent developments of the explicit elastic-viscous-plastic (EVP) solvers call for a new comparison with implicit solvers for the equations of viscous-plastic sea ice dynamics. In Arctic sea ice simulations, the modified and the adaptive EVP solvers, and the implicit Jacobian-free Newton-Krylov (JFNK) solver are compared against each other. The adaptive EVP method shows convergence rates that are generally similar or even better than those of the modified EVP method, but the convergence of the EVP methods is found to depend dramatically on the use of the replacement pressure (RP). Apparently, using the RP can affect the pseudo-elastic waves in the EVP methods by introducing extra non-physical oscillations so that, in the extreme case, convergence to the VP solution can be lost altogether. The JFNK solver also suffers from higher failure rates with RP implying that with RP the momentum equations are stiffer and more difficult to solve. For practical purposes, both EVP methods can be used efficiently with an unexpectedly low number of sub-cycling steps without compromising the solutions. The differences between the RP solutions and the NoRP solutions (when the RP is not being used) can be reduced with lower thresholds of viscous regularization at the cost of increasing stiffness of the equations, and hence the computational costs of solving them.

  5. Continuous Dropout.

    PubMed

    Shen, Xu; Tian, Xinmei; Liu, Tongliang; Xu, Fang; Tao, Dacheng

    2017-10-03

    Dropout has been proven to be an effective algorithm for training robust deep networks because of its ability to prevent overfitting by avoiding the co-adaptation of feature detectors. Current explanations of dropout include bagging, naive Bayes, regularization, and sex in evolution. According to the activation patterns of neurons in the human brain, when faced with different situations, the firing rates of neurons are random and continuous, not binary as in current dropout. Inspired by this phenomenon, we extend the traditional binary dropout to continuous dropout. On the one hand, continuous dropout is considerably closer to the activation characteristics of neurons in the human brain than traditional binary dropout. On the other hand, we demonstrate that continuous dropout has the property of avoiding the co-adaptation of feature detectors, which suggests that we can extract more independent feature detectors for model averaging in the test stage. We introduce the proposed continuous dropout to a feedforward neural network and comprehensively compare it with binary dropout, adaptive dropout, and DropConnect on MNIST, CIFAR-10, SVHN, NORB, and ILSVRC-12. Thorough experiments demonstrate that our method performs better in preventing the co-adaptation of feature detectors and improves test performance.
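
    A minimal numpy sketch of the idea for a single layer: the Bernoulli on/off mask of standard dropout is replaced by draws from a continuous distribution (uniform here), with a rescaling that approximately preserves the expected activation. The distribution choice and the normalization are assumptions of this sketch, not the paper's exact formulation.

    import numpy as np

    def continuous_dropout(h, rng, low=0.0, high=1.0, training=True):
        """Scale each unit by a continuous random draw instead of a 0/1 mask."""
        if not training:
            return h                                  # no masking at test time
        mask = rng.uniform(low, high, size=h.shape)   # graded "firing rates"
        return h * mask / mask.mean()                 # roughly preserve the expectation

    rng = np.random.default_rng(0)
    h = rng.normal(size=(32, 128))                    # a batch of hidden activations
    h_train = continuous_dropout(h, rng)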

  6. Microleakage Evaluation at Implant-Abutment Interface Using Radiotracer Technique

    PubMed Central

    Siadat, Hakimeh; Arshad, Mahnaz; Mahgoli, Hossein-Ali; Fallahi, Babak

    2016-01-01

    Objectives: Microbial leakage through the implant-abutment (I-A) interface results in bacterial colonization in two-piece implants. The aim of this study was to compare microleakage rates in three types of Replace abutments, namely Snappy, GoldAdapt, and customized ceramic, using radiotracing. Materials and Methods: Three groups, one for each abutment type, of five implants and one positive and one negative control were considered (a total of 17 regular body implants). A torque of 35 Ncm was applied to the abutments. The samples were immersed in thallium-201 radioisotope solution for 24 hours to let the radiotracers leak through the I-A interface. Then, gamma photons received from the radiotracers were counted using a gamma counter device. In the next phase, a cyclic fatigue loading process was applied, followed by the same steps of immersion in the radioactive solution and photon counting. Results: The rate of microleakage increased significantly (P≤0.05) in all three types of abutments (i.e., Snappy, GoldAdapt, and ceramic) after cyclic loading. No statistically significant differences were observed between abutment types after cyclic loading. Conclusions: Microleakage significantly increases after cyclic loading in all three Replace abutments (GoldAdapt, Snappy, ceramic). The lowest microleakage before and after cyclic loading was observed in GoldAdapt, followed by Snappy and ceramic. PMID:28392814

  7. Twenty years of experience with particulate silicone in plastic surgery.

    PubMed

    Planas, J; del Cacho, C

    1992-01-01

    The use of particulate silicone in plastic surgery involves the introduction of solid silicone into the body. The silicone is in small pieces so that it can adapt to the shape of the defect. This way, large quantities can be introduced through small incisions. It is also possible to distribute the silicone particles from outside the skin to make the corrections more regular. This method has been very useful for correcting post-traumatic depressions in the face and all areas where the depression has a rigid back support. We consider it the treatment of choice for correcting the funnel chest deformity.

  8. Self-organization leads to supraoptimal performance in public transportation systems.

    PubMed

    Gershenson, Carlos

    2011-01-01

    The performance of public transportation systems affects a large part of the population. Current theory assumes that passengers are served optimally when vehicles arrive at stations with regular intervals. In this paper, it is shown that self-organization can improve the performance of public transportation systems beyond the theoretical optimum by responding adaptively to local conditions. This is possible because of a "slower-is-faster" effect, where passengers wait longer at stations but total travel times are reduced. The proposed self-organizing method uses "antipheromones" to regulate headways, inspired by the stigmergy (communication via the environment) of some ant colonies.

  9. A CRISPR view of development

    PubMed Central

    Harrison, Melissa M.; Jenkins, Brian V.; O’Connor-Giles, Kate M.

    2014-01-01

    The CRISPR (clustered regularly interspaced short palindromic repeat)–Cas9 (CRISPR-associated nuclease 9) system is poised to transform developmental biology by providing a simple, efficient method to precisely manipulate the genome of virtually any developing organism. This RNA-guided nuclease (RGN)-based approach has already been used effectively to induce targeted mutations in multiple genes simultaneously, create conditional alleles, and generate endogenously tagged proteins. Illustrating the adaptability of RGNs, the genomes of >20 different plant and animal species as well as multiple cell lines and primary cells have been successfully modified. Here we review the current and potential uses of RGNs to investigate genome function during development. PMID:25184674

  10. The compatibility of reform initiatives in inclusion and science education: Perceptions of science teachers

    NASA Astrophysics Data System (ADS)

    Chung, Su-Hsiang

    The purposes of this investigation were to examine science teachers' instructional adaptations, testing and grading policies, as well as their perceptions toward inclusion. In addition, whether the perceptions and adaptations differ among three disability areas (learning disabilities, emotional handicaps, and mental handicaps), school level (elementary, middle, and high school), course content (life and physical science), instructional approach (textbook-oriented or activity-oriented), and other related variables was examined. In particular, the intention was to determine whether the two educational reform efforts (inclusion and excellence in science education) are compatible. In this study, 900 questionnaires were mailed to teachers in Indiana and 424 (47%) were returned. Due to incomplete or blank data, 38 (4%) responses were excluded. The final results were derived from a total of 386 respondents contributing to this investigation. The descriptive data indicated that teachers adapted their instruction moderately to accommodate students' special needs. In particular, these adaptations were made more frequently for students with mental handicaps (MH) or learning disabilities (LD), but less for students with emotional handicaps (EH). With respect to testing policies, less than half of the teachers (44.5%) used "same testing standards as regular students" for integrated LD students, while a majority of the teachers (57%) used such a policy for EH students. Unfortunately, considerably fewer teachers modified their grading policies for these two groups of students. In contrast, approximately two thirds of the teachers indicated that they used different testing or grading policies for MH students who were in the regular settings. Moreover, the results also showed that changes in classroom procedure were infrequent in the science teachers' classrooms. Perceptions of science teachers toward inclusion practices were somewhat mixed. Overall, teachers had neutral attitudes. Among the three groups of teachers, elementary teachers were the most positive toward inclusion practices, while high school teachers were the least positive. The present investigation adds several new insights to the existing research: school level, science teaching approach, and science course content are associated with the likelihood that science teachers make adaptations. With respect to instructional adaptations, variables such as special education coursework, workshops attended, gender, and years of teaching experience were also found to be significantly associated with teachers' instructional adjustments. Teachers who were more likely to adjust their teaching instruction included those who taught life science, those who used hands-on activities as the major teaching method, those who took more courses in special education, and those who attended more mainstreaming-related workshops. On the contrary, teachers with more experience made fewer adaptations in their general science classrooms.

  11. Beamspace fast fully adaptive brain source localization for limited data sequences

    NASA Astrophysics Data System (ADS)

    Ravan, Maryam

    2017-05-01

    In the electroencephalogram (EEG) or magnetoencephalogram (MEG) context, brain source localization methods that rely on estimating second-order statistics often fail when the observations are taken over a short time interval, especially when the number of electrodes is large. To address this issue, in a previous study, we developed a multistage adaptive processing scheme called the fast fully adaptive (FFA) approach, which can significantly reduce the required sample support while still processing all available degrees of freedom (DOFs). This approach processes the observed data in stages through a decimation procedure. In this study, we introduce a new form of the FFA approach called beamspace FFA. We first divide the brain into smaller regions and transform the measured data from the source space to the beamspace in each region. The FFA approach is then applied to the beamspaced data of each region. The goal of this modification is to reduce the sensitivity to correlation between sources in different brain regions. To demonstrate the performance of the beamspace FFA approach in the limited data scenario, simulation results with multiple deep and cortical sources, as well as experimental results, are compared with the regular FFA and the widely used FINE approaches. Both simulation and experimental results demonstrate that the beamspace FFA method can localize different types of multiple correlated brain sources at low signal-to-noise ratios more accurately with limited data.

  12. Polarimetric image reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Valenzuela, John R.

    In the field of imaging polarimetry, Stokes parameters are sought and must be inferred from noisy and blurred intensity measurements. Using a penalized-likelihood estimation framework we investigate reconstruction quality when estimating intensity images and then transforming to Stokes parameters (traditional estimator), and when estimating Stokes parameters directly (Stokes estimator). We define our cost function for reconstruction by a weighted least squares data fit term and a regularization penalty. It is shown that under quadratic regularization, the traditional and Stokes estimators can be made equal by appropriate choice of regularization parameters. It is empirically shown that, when using edge-preserving regularization, estimating the Stokes parameters directly leads to lower RMS error in reconstruction. Also, the addition of a cross channel regularization term further lowers the RMS error for both methods, especially in the case of low SNR. The technique of phase diversity has been used in traditional incoherent imaging systems to jointly estimate an object and optical system aberrations. We extend the technique of phase diversity to polarimetric imaging systems. Specifically, we describe penalized-likelihood methods for jointly estimating Stokes images and optical system aberrations from measurements that contain phase diversity. Jointly estimating Stokes images and optical system aberrations involves a large parameter space. A closed-form expression for the estimate of the Stokes images in terms of the aberration parameters is derived and used in a formulation that reduces the dimensionality of the search space to the number of aberration parameters only. We compare the performance of the joint estimator under both quadratic and edge-preserving regularization. The joint estimator with edge-preserving regularization yields higher fidelity polarization estimates than with quadratic regularization. Under quadratic regularization, using the reduced-parameter search strategy, accurate aberration estimates can be obtained without recourse to regularization "tuning". Phase-diverse wavefront sensing is emerging as a viable candidate wavefront sensor for adaptive-optics systems. In a quadratically penalized weighted least squares estimation framework, a closed-form expression for the object being imaged in terms of the aberrations in the system is available. This expression offers a dramatic reduction of the dimensionality of the estimation problem and thus is of great interest for practical applications. We have derived an expression for an approximate joint covariance matrix for object and aberrations in the phase diversity context. Our expression for the approximate joint covariance is compared with the "known-object" Cramér-Rao lower bound that is typically used for system parameter optimization. Estimates of the optimal amount of defocus in a phase-diverse wavefront sensor derived from the joint-covariance matrix, the known-object Cramér-Rao bound, and Monte Carlo simulations are compared for an extended scene and a point object. It is found that our variance approximation, which incorporates the uncertainty of the object, leads to an improvement in predicting the optimal amount of defocus to use in a phase-diverse wavefront sensor.
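
    The "Stokes estimator" with quadratic regularization admits a closed form per pixel, sketched below for a toy four-angle linear polarimeter; the measurement matrix M, weights W, and penalty lam are assumptions of this sketch rather than the dissertation's full imaging model (which also includes blur and spatial regularization).

    import numpy as np

    # Toy measurement matrix for a linear polarimeter sampling 4 analyzer angles:
    # I(theta) = 0.5 * (S0 + S1*cos(2*theta) + S2*sin(2*theta))
    angles = np.deg2rad([0, 45, 90, 135])
    M = 0.5 * np.stack([np.ones(4), np.cos(2 * angles), np.sin(2 * angles)], axis=1)

    def stokes_pwls(i, W, lam):
        """Closed-form minimizer of (i - M s)^T W (i - M s) + lam ||s||^2."""
        H = M.T @ W @ M + lam * np.eye(3)
        return np.linalg.solve(H, M.T @ W @ i)

    i_meas = np.array([0.6, 0.5, 0.4, 0.5])              # intensities at the 4 angles
    s_hat = stokes_pwls(i_meas, W=np.eye(4), lam=1e-3)   # estimated [S0, S1, S2]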

  13. Adaptation in CRISPR-Cas Systems.

    PubMed

    Sternberg, Samuel H; Richter, Hagen; Charpentier, Emmanuelle; Qimron, Udi

    2016-03-17

    Clustered regularly interspaced short palindromic repeats (CRISPR) and CRISPR-associated (Cas) proteins constitute an adaptive immune system in prokaryotes. The system preserves memories of prior infections by integrating short segments of foreign DNA, termed spacers, into the CRISPR array in a process termed adaptation. During the past 3 years, significant progress has been made on the genetic requirements and molecular mechanisms of adaptation. Here we review these recent advances, with a focus on the experimental approaches that have been developed, the insights they generated, and a proposed mechanism for self- versus non-self-discrimination during the process of spacer selection. We further describe the regulation of adaptation and the protein players involved in this fascinating process that allows bacteria and archaea to harbor adaptive immunity. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. ℓ1-Regularized full-waveform inversion with prior model information based on orthant-wise limited memory quasi-Newton method

    NASA Astrophysics Data System (ADS)

    Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian

    2017-07-01

    Full-waveform inversion (FWI) is an ill-posed optimization problem which is sensitive to noise and the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method extends the widely used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of the ℓ1-regularized method and the prior model information obtained from sonic logs and geological information, we implement the OWL-QN algorithm in ℓ1-regularized FWI with prior model information in this paper. Numerical experiments show that this method not only improves the inversion results but also has strong noise robustness.
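
    The OWL-QN update itself is involved, but the objective it minimizes can be illustrated with a simpler solver. The sketch below uses ISTA (proximal gradient), not OWL-QN, on a linear toy problem with an ℓ1 penalty centered on a prior model, loosely mimicking the paper's use of prior information; all inputs and the solver choice are assumptions of this sketch.

    import numpy as np

    def ista_l1(A, d, m_prior, lam, steps=1000):
        """Minimize ||A m - d||^2 + lam * ||m - m_prior||_1 by proximal gradient."""
        m = m_prior.copy()
        L = 2 * np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the misfit
        for _ in range(steps):
            z = m - (2 * A.T @ (A @ m - d)) / L          # gradient step on the data misfit
            u = z - m_prior                              # soft-threshold around the prior
            m = m_prior + np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)
        return m
    # The l1 term keeps the update sparse relative to m_prior, so only features
    # demanded by the data depart from the prior model (edges are preserved).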

  15. 3D undersampled golden-radial phase encoding for DCE-MRA using inherently regularized iterative SENSE.

    PubMed

    Prieto, Claudia; Uribe, Sergio; Razavi, Reza; Atkinson, David; Schaeffter, Tobias

    2010-08-01

    One of the current limitations of dynamic contrast-enhanced MR angiography is the requirement of both high spatial and high temporal resolution. Several undersampling techniques have been proposed to overcome this problem. However, in most of these methods the tradeoff between spatial and temporal resolution is constant for all the time frames and needs to be specified prior to data collection. This is not optimal for dynamic contrast-enhanced MR angiography where the dynamics of the process are difficult to predict and the image quality requirements are changing during the bolus passage. Here, we propose a new highly undersampled approach that allows the retrospective adaptation of the spatial and temporal resolution. The method combines a three-dimensional radial phase encoding trajectory with the golden angle profile order and non-Cartesian Sensitivity Encoding (SENSE) reconstruction. Different regularization images, obtained from the same acquired data, are used to stabilize the non-Cartesian SENSE reconstruction for the different phases of the bolus passage. The feasibility of the proposed method was demonstrated on a numerical phantom and in three-dimensional intracranial dynamic contrast-enhanced MR angiography of healthy volunteers. The acquired data were reconstructed retrospectively with temporal resolutions from 1.2 sec to 8.1 sec, providing a good depiction of small vessels, as well as distinction of different temporal phases.
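
    The golden-angle ordering that enables this retrospective flexibility is simple to generate: successive profiles advance by roughly 111.25°, so any contiguous run of profiles covers k-space nearly uniformly. The sketch below shows the ordering and two retrospective binnings of the same toy acquisition; the profile and frame counts are arbitrary assumptions.

    import numpy as np

    GOLDEN_ANGLE = np.pi * (np.sqrt(5) - 1) / 2        # ~111.25 degrees in radians

    def profile_angles(n_profiles):
        """Angle of each successive profile under golden-angle ordering."""
        return (np.arange(n_profiles) * GOLDEN_ANGLE) % (2 * np.pi)

    # Retrospective binning: the same acquired data regrouped into two
    # different temporal windows after the scan.
    angles = profile_angles(600)
    frames_coarse = angles.reshape(6, 100)             # 6 frames, 100 profiles each
    frames_fine = angles.reshape(24, 25)               # 24 frames, 25 profiles each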

  16. Virtual Training and Coaching of Health Behavior: Example from Mindfulness Meditation Training

    PubMed Central

    Hudlicka, Eva

    2014-01-01

    Objective: Computer-based virtual coaches are increasingly being explored for patient education, counseling, and health behavior training and coaching. The objective of this research was to develop and evaluate a Virtual Mindfulness Coach for training and coaching in mindfulness meditation. Method: The coach was implemented as an embodied conversational character, providing mindfulness training and coaching via mixed initiative, text-based, natural language dialogue with the student, and emphasizing affect-adaptive interaction. (The term ‘mixed initiative dialog’ refers to a human-machine dialogue where either party can initiate a conversation or a change in the conversation topic.) Results: Findings from a pilot evaluation study indicate that the coach-based training is more effective in helping students establish a regular practice than self-administered training using written and audio materials. The coached group also appeared to be in more advanced stages of change in terms of the transtheoretical model, and to have a higher sense of self-efficacy regarding establishment of a regular mindfulness practice. Conclusion: These results suggest that virtual coach-based training of mindfulness is both feasible, and potentially more effective, than a self-administered program. Of particular interest is the identification of the specific coach features that contribute to its effectiveness. Practice Implications: Virtual coaches could provide easily-accessible and cost-effective customized training for a range of health behaviors. The affect-adaptive aspect of these coaches is particularly relevant for helping patients establish long-term behavior changes. PMID:23809167

  17. Accelerated dynamic EPR imaging using fast acquisition and compressive recovery

    NASA Astrophysics Data System (ADS)

    Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L.

    2016-12-01

    Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite regularization based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations, without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that a combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image requiring less than 10 s of scan time. In addition to image fidelity, the time constants derived from FASTAR also match closely to the ground truth even when a small number of projections are used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques.

  18. Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.

    PubMed

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-type methods.

  19. [Clustered regularly interspaced short palindromic repeats: structure, function and application--a review].

    PubMed

    Cui, Yujun; Li, Yanjun; Yan, Yanfeng; Yang, Ruifu

    2008-11-01

    CRISPRs (Clustered Regularly Interspaced Short Palindromic Repeats), the basis of spoligotyping technology, can provide prokaryotes with heritable adaptive immunity against phage invasion. Studies on CRISPR loci and their associated elements, including various CAS (CRISPR-associated) proteins and leader sequences, are still in their infancy. We introduce the brief history, structure, function, bioinformatics research and application of this amazing immunity system in prokaryotic organisms, to inspire more scientists to take an interest in this developing topic.

  20. Effects of regular exercise training on skeletal muscle contractile function

    NASA Technical Reports Server (NTRS)

    Fitts, Robert H.

    2003-01-01

    Skeletal muscle function is critical to movement and one's ability to perform daily tasks, such as eating and walking. One objective of this article is to review the contractile properties of fast and slow skeletal muscle and single fibers, with particular emphasis on the cellular events that control or rate limit the important mechanical properties. Another important goal of this article is to present the current understanding of how the contractile properties of limb skeletal muscle adapt to programs of regular exercise.

  1. Profiling interest of students in science: Learning in school and beyond

    NASA Astrophysics Data System (ADS)

    Dierks, Pay O.; Höffler, Tim N.; Parchmann, Ilka

    2014-05-01

    Background: Interest is assumed to be relevant for students' learning processes. Many studies have investigated students' interest in science; most of them, however, have not offered differentiated insights into the structure and elements of this interest. Purpose: The aim of this study is to obtain a precise image of secondary school students' interest in school and out-of-school learning opportunities, both formal and informal. The study is part of a larger project on measuring students' Individual Concept about the Natural Sciences (ICoN), including self-efficacy, beliefs and achievements in addition to interest variables. Sample: In addition to regular school students, a specific cohort was analyzed as well: participants of science competitions, who are regarded as having high interest, and perhaps different interest profiles than regular students. In the study described here, participants of the International Junior Science Olympiad (N = 133) and regular students from secondary schools (N = 305), age cohorts 10 to 17 years, participated. Design and methods: We adapted Holland's well-established RIASEC framework to analyze if and how it can also be used to assess students' interest within science and in in-school and out-of-school (leisure-time and enrichment) activities. The resulting questionnaire was piloted according to quality criteria and applied to analyze profiles of different groups (boys vs. girls, contest participants vs. non-participants). Results: The RIASEC adaptation to investigate profiles within science apparently works well for school and leisure-time activities. Concerning interest in fostering measures, different emphases seem to appear. More research in this field needs to be done to adjust measures better to students' interests and other pre-conditions in the future. Contrasting groups by gender and by participation in a junior science contest uncovered specific interest profiles. Conclusions: The instrument seems to offer a promising approach to identifying different interest profiles for different environments and groups of students. Based on the results, further studies will be carried out to form a solid foundation for the design of enrichment measures.

  2. Use of regularized principal component analysis to model anatomical changes during head and neck radiation therapy for treatment adaptation and response assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chetvertkov, Mikhail A., E-mail: chetvertkov@wayne

    2016-10-15

    Purpose: To develop standard (SPCA) and regularized (RPCA) principal component analysis models of anatomical changes from daily cone beam CTs (CBCTs) of head and neck (H&N) patients and assess their potential use in adaptive radiation therapy, and for extracting quantitative information for treatment response assessment. Methods: Planning CT images of ten H&N patients were artificially deformed to create “digital phantom” images, which modeled systematic anatomical changes during radiation therapy. Artificial deformations closely mirrored patients’ actual deformations and were interpolated to generate 35 synthetic CBCTs, representing evolving anatomy over 35 fractions. Deformation vector fields (DVFs) were acquired between pCT and synthetic CBCTs (i.e., digital phantoms) and between pCT and clinical CBCTs. Patient-specific SPCA and RPCA models were built from these synthetic and clinical DVF sets. EigenDVFs (EDVFs) having the largest eigenvalues were hypothesized to capture the major anatomical deformations during treatment. Results: Principal component analysis (PCA) models achieve variable results, depending on the size and location of anatomical change. Random changes prevent or degrade PCA’s ability to detect underlying systematic change. RPCA is able to detect smaller systematic changes against the background of random fraction-to-fraction changes and is therefore more successful than SPCA at capturing systematic changes early in treatment. SPCA models were less successful at modeling systematic changes in clinical patient images, which contain a wider range of random motion than synthetic CBCTs, while the regularized approach was able to extract major modes of motion. Conclusions: Leading EDVFs from both PCA approaches have the potential to capture systematic anatomical change during H&N radiotherapy when systematic changes are large enough with respect to random fraction-to-fraction changes. In all cases the RPCA approach appears to be more reliable at capturing systematic changes, enabling dosimetric consequences to be projected once trends are established early in a treatment course, or based on population models.
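
    The standard-PCA half of the pipeline can be sketched compactly: DVFs from successive fractions are flattened into rows, and the leading right singular vectors serve as eigenDVFs. The array shapes below are toy assumptions, and the regularized (RPCA) variant, which separates random fraction-to-fraction motion, is not reproduced here.

    import numpy as np

    def eigen_dvfs(dvfs, n_modes=3):
        """dvfs: (n_fractions, nx, ny, nz, 3) array. Returns leading eigenDVFs."""
        n = dvfs.shape[0]
        X = dvfs.reshape(n, -1)
        X = X - X.mean(axis=0)                       # center across fractions
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        modes = Vt[:n_modes].reshape((n_modes,) + dvfs.shape[1:])
        scores = U[:, :n_modes] * s[:n_modes]        # per-fraction mode weights
        return modes, scores

    rng = np.random.default_rng(0)
    dvfs = rng.normal(size=(35, 16, 16, 8, 3))       # 35 fractions of toy DVFs
    modes, scores = eigen_dvfs(dvfs)                 # plot scores vs. fraction for trends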

  3. Numerical and experimental validation of a particle Galerkin method for metal grinding simulation

    NASA Astrophysics Data System (ADS)

    Wu, C. T.; Bui, Tinh Quoc; Wu, Youcai; Luo, Tzui-Liang; Wang, Morris; Liao, Chien-Chih; Chen, Pei-Yin; Lai, Yu-Sheng

    2018-03-01

    In this paper, a numerical approach with experimental validation is introduced for modelling high-speed metal grinding processes in 6061-T6 aluminum alloys. The derivation of the present numerical method starts with the establishment of a stabilized particle Galerkin approximation. A non-residual penalty term from strain smoothing is introduced as a means of stabilizing the particle Galerkin method. Additionally, second-order strain gradients are introduced to the penalized functional for the regularization of the damage-induced strain localization problem. To handle the severe deformation in metal grinding simulation, an adaptive anisotropic Lagrangian kernel is employed. Finally, the formulation incorporates a bond-based failure criterion to avoid spurious damage growth in material failure and cutting-debris simulation. A three-dimensional metal grinding problem is analyzed and compared with the experimental results to demonstrate the effectiveness and accuracy of the proposed numerical approach.

  4. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on covariance matrix estimation based on the factor structure is then studied. PMID:22661790
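
    As a rough illustration of the factor-plus-thresholded-residual idea described above: estimate the factor part from the leading eigenpairs of the sample covariance, then apply entry-adaptive thresholding to the residual covariance. This is a sketch only; the constant c, the threshold rate, and all names are assumptions, not the paper's estimator.

```python
import numpy as np

def factor_thresholded_cov(X, n_factors=3, c=0.5):
    """Covariance estimate: low-rank factor part + adaptively thresholded
    residual part. X has shape (n_obs, p)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                       # sample covariance
    w, V = np.linalg.eigh(S)
    idx = np.argsort(w)[::-1][:n_factors]   # leading eigenpairs -> factor part
    low_rank = (V[:, idx] * w[idx]) @ V[:, idx].T
    R = S - low_rank                        # residual (idiosyncratic) covariance
    d = np.clip(np.diag(R), 0.0, None)
    # Entry-adaptive threshold, scaled by sqrt(R_ii * R_jj).
    thr = c * np.sqrt(np.log(p) / n) * np.sqrt(np.outer(d, d))
    R_thr = np.where(np.abs(R) >= thr, R, 0.0)
    np.fill_diagonal(R_thr, np.diag(R))     # never threshold the diagonal
    return low_rank + R_thr

rng = np.random.default_rng(0)
Sigma_hat = factor_thresholded_cov(rng.normal(size=(200, 100)), n_factors=3)
```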

  5. Teaching adolescents with learning disabilities to generate and use task-specific strategies.

    PubMed

    Ellis, E S; Deshler, D D; Schumaker, J B

    1989-02-01

    The effects of an intervention designed to enhance students' roles as control agents for strategic functioning were investigated. The goal was to increase the ability of students labeled learning disabled to generate new strategies or adapt existing task-specific strategies for meeting varying demands of the regular classroom. Measures were taken in three areas: (a) metacognitive knowledge related to generating or adapting strategies, (b) ability to generate problem-solving strategies for novel problems, and (c) the effects of the intervention on students' regular classroom grades and teachers' perceptions of the students' self-reliance and work quality. A multiple baseline across subjects design was used. The intervention resulted in dramatic increases in the subjects' verbal expression of metacognitive knowledge and ability to generate task-specific strategies. Students' regular class grades increased; for those students who did not spontaneously generalize use of the strategy to problems encountered in these classes, providing instruction to target specific classes resulted in improved grades. Teacher perceptions of students' self-reliance and work quality did not change, probably because baseline measures were already high in both areas. Implications for instruction and future research are discussed.

  6. Cone Photoreceptor Irregularity on Adaptive Optics Scanning Laser Ophthalmoscopy Correlates With Severity of Diabetic Retinopathy and Macular Edema.

    PubMed

    Lammer, Jan; Prager, Sonja G; Cheney, Michael C; Ahmed, Amel; Radwan, Salma H; Burns, Stephen A; Silva, Paolo S; Sun, Jennifer K

    2016-12-01

    To determine whether cone density, spacing, or regularity in eyes with and without diabetes (DM), as assessed by high-resolution adaptive optics scanning laser ophthalmoscopy (AOSLO), correlates with presence of diabetes, diabetic retinopathy (DR) severity, or presence of diabetic macular edema (DME). Participants with type 1 or 2 DM and healthy controls underwent AOSLO imaging of four macular regions. Cone assessment was performed by independent graders for cone density, packing factor (PF), nearest neighbor distance (NND), and Voronoi tile area (VTA). Regularity indices (mean/SD) of NND (RI-NND) and VTA (RI-VTA) were calculated. Fifty-three eyes (53 subjects) were assessed. Mean ± SD age was 44 ± 12 years; 81% had DM (duration: 22 ± 13 years; glycated hemoglobin [HbA1c]: 8.0 ± 1.7%; DM type 1: 72%). No significant relationship was found between DM, HbA1c, or DR severity and cone density or spacing parameters. However, decreased regularity of the cone arrangement in the macular quadrants was correlated with presence of DM (RI-NND: P = 0.04; RI-VTA: P = 0.04), increasing DR severity (RI-NND: P = 0.04), and presence of DME (RI-VTA: P = 0.04). Eyes with DME were associated with decreased density (P = 0.04), PF (P = 0.03), and RI-VTA (P = 0.04). Although absolute cone density and spacing do not appear to change substantially in DM, decreased regularity of the cone arrangement is consistently associated with the presence of DM, increasing DR severity, and DME. Future AOSLO evaluation of cone regularity is warranted to determine whether these changes correlate with, or predict, anatomic or functional deficits in patients with DM.
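
    The regularity index used above is simply the mean of a spacing metric divided by its standard deviation. A minimal sketch of RI-NND for a cone mosaic, assuming SciPy's KD-tree for nearest-neighbor distances (the graders' actual pipeline is not described in the abstract):

```python
import numpy as np
from scipy.spatial import cKDTree

def nnd_regularity(points):
    """Nearest-neighbor distance (NND) and its regularity index RI = mean/SD;
    a more regular mosaic gives a higher RI."""
    tree = cKDTree(points)
    d, _ = tree.query(points, k=2)   # nearest neighbor of each point is itself
    nnd = d[:, 1]
    return nnd.mean(), nnd.mean() / nnd.std()

rng = np.random.default_rng(1)
# Jittered hexagonal lattice: nearly regular packing -> high RI.
hex_grid = np.array([(x + 0.5 * (y % 2), y * np.sqrt(3) / 2)
                     for x in range(20) for y in range(20)], dtype=float)
hex_grid += rng.normal(scale=0.05, size=hex_grid.shape)
print(nnd_regularity(hex_grid))                    # high RI
print(nnd_regularity(rng.random((400, 2)) * 20))   # Poisson points: low RI
```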

  7. Adaptive Unscented Kalman Filter Phase Unwrapping Method and Its Application on Gaofen-3 Interferometric SAR Data.

    PubMed

    Gao, Yandong; Zhang, Shubi; Li, Tao; Chen, Qianfu; Li, Shijin; Meng, Pengfei

    2018-06-02

    Phase unwrapping (PU) is a key step in the reconstruction of digital elevation models (DEMs) and the monitoring of surface deformation from interferometric synthetic aperture radar (SAR, InSAR) data. In this paper, an improved PU method is proposed that combines an amended matrix pencil model, an adaptive unscented Kalman filter (AUKF), an efficient heapsort-based quality-guided strategy, and a circular median filter. PU theory and the existing UKFPU method are reviewed. Then, the improved method is presented, with emphasis on the AUKF and the circular median filter. The AUKF has been used successfully in other fields, but to the best of our knowledge this is the first time it has been applied to PU of interferometric images. First, the amended matrix pencil model is used to estimate the phase gradient. Then, an AUKF model is used to unwrap the interferometric phase, guided by the efficient heapsort-based quality-guided strategy. Finally, the key results are obtained by filtering with a circular median filter. The proposed method is compared with the minimum cost network flow (MCF), statistical cost network flow (SNAPHU), regularized phase tracking technique (RPTPU), and UKFPU methods using two sets of simulated data and two sets of experimental GF-3 SAR data. The improved method is shown to yield the greatest accuracy in the interferometric phase maps among the methods considered in this paper. Furthermore, the improved method is shown to be the most robust to noise and is thus most suitable for PU of GF-3 SAR data in high-noise and low-coherence regions.
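
    The heap-based quality-guided strategy is a standard flood-fill scheme: starting from the highest-quality pixel, the frontier pixel with the best quality is always unwrapped next against its already-unwrapped neighbor. Below is a self-contained sketch using Python's heapq; the AUKF and matrix-pencil stages are omitted, and the quality map and variable names are illustrative assumptions.

```python
import heapq
import numpy as np

def quality_guided_unwrap(psi, quality):
    """Flood-fill 2D phase unwrapping: grow from the highest-quality pixel,
    always unwrapping the best-quality frontier pixel next (via a max-heap)."""
    h, w = psi.shape
    phi = np.zeros_like(psi)
    done = np.zeros((h, w), dtype=bool)
    start = np.unravel_index(np.argmax(quality), (h, w))
    phi[start] = psi[start]
    done[start] = True
    heap = []

    def push_neighbors(i, j):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not done[ni, nj]:
                heapq.heappush(heap, (-quality[ni, nj], ni, nj, i, j))

    push_neighbors(*start)
    while heap:
        _, i, j, pi, pj = heapq.heappop(heap)
        if done[i, j]:
            continue
        # Unwrap relative to the already-unwrapped parent pixel (pi, pj).
        d = psi[i, j] - psi[pi, pj]
        d -= 2 * np.pi * np.round(d / (2 * np.pi))
        phi[i, j] = phi[pi, pj] + d
        done[i, j] = True
        push_neighbors(i, j)
    return phi

# Synthetic ramp: unwrapping recovers the ramp up to a constant offset.
true_phi = np.linspace(0, 12 * np.pi, 64)[None, :] * np.ones((64, 1))
psi = np.angle(np.exp(1j * true_phi))
rec = quality_guided_unwrap(psi, np.ones_like(psi))
print(np.allclose(rec - rec[0, 0], true_phi - true_phi[0, 0], atol=1e-6))
```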

  8. Total variation-based method for radar coincidence imaging with model mismatch for extended target

    NASA Astrophysics Data System (ADS)

    Cao, Kaicheng; Zhou, Xiaoli; Cheng, Yongqiang; Fan, Bo; Qin, Yuliang

    2017-11-01

    Originating from traditional optical coincidence imaging, radar coincidence imaging (RCI) is a staring/forward-looking imaging technique. In RCI, the reference matrix must be computed precisely for the image to be reconstructed well; unfortunately, such precision is almost impossible to achieve in practice because of model mismatch. Although some conventional sparse recovery algorithms have been proposed to solve the model-mismatch problem, they are inapplicable to nonsparse targets. We therefore derive the signal model of RCI with model mismatch by replacing the sparsity constraint term with total variation (TV) regularization in the sparse total least squares optimization problem; in this manner, we obtain the objective function of RCI with model mismatch for an extended target. A more robust and efficient algorithm called TV-TLS is proposed, in which the objective function is divided into two parts and the perturbation matrix and scattering coefficients are updated alternately. Moreover, owing to the ability of TV regularization to recover signals or images with sparse gradients, the TV-TLS method is also applicable to sparse recovery. Results of numerical experiments demonstrate that, for uniform extended targets, sparse targets, and real extended targets, the algorithm achieves good imaging performance both in suppressing noise and in adapting to model mismatch.

  9. Novel Harmonic Regularization Approach for Variable Selection in Cox's Proportional Hazards Model

    PubMed Central

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression, and a number of variable selection methods involving nonconvex penalty functions have been proposed. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which produces solutions that closely approximate those for the convex loss function with the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, including the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-type methods. PMID:25506389

  10. Selection of regularization parameter for l1-regularized damage detection

    NASA Astrophysics Data System (ADS)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection by exploiting the sparsity of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies for selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that both are small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than a single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates how sensitive the damage identification problem is to the regularization parameter.
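
    The discrepancy-principle strategy can be sketched in a few lines: sweep the regularization parameter and keep the value whose residual variance best matches the known noise variance. The sketch below uses scikit-learn's Lasso on a synthetic sparse-damage problem; the parameter grid, sizes, and noise level are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, sigma = 100, 50, 0.1
A = rng.normal(size=(n, p))
x_true = np.zeros(p); x_true[[3, 17]] = [1.0, -0.7]   # sparse "damage" vector
y = A @ x_true + rng.normal(scale=sigma, size=n)

# Discrepancy principle: pick lambda where the residual variance matches
# the (assumed known) measurement-noise variance sigma^2.
lambdas = np.logspace(-4, 0, 60)
best = min(lambdas, key=lambda lam: abs(
    np.var(y - A @ Lasso(alpha=lam, fit_intercept=False).fit(A, y).coef_)
    - sigma**2))
print(best)
```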

  11. Blind estimation of blur in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Zhang, Mo; Vozel, Benoit; Chehdi, Kacem; Uss, Mykhail; Abramov, Sergey; Lukin, Vladimir

    2017-10-01

    Hyperspectral images acquired by remote sensing systems are generally degraded by noise and can sometimes be more severely degraded by blur. When no knowledge is available about the degradations present in the original image, only blind restoration methods can be considered. By blind, we mean absolutely no knowledge of the blur point spread function (PSF), the original latent channel, or the noise level. In this study, we address the blind restoration of the degraded channels component-wise, according to a sequential scheme. For each degraded channel, the sequential scheme estimates the blur PSF in a first stage and deconvolves the degraded channel in a second and final stage using the previously estimated PSF. We propose a new component-wise blind method for effectively and accurately estimating the blur PSF. This method follows recent approaches suggesting the detection, selection and use of sufficiently salient edges in the currently processed channel to support the regularized blur PSF estimation. Several modifications are beneficially introduced in our work. A new selection of salient edges through adequate thresholding of the cumulative distribution of their gradient magnitudes is introduced. In addition, quasi-automatic and spatially adaptive tuning of the involved regularization parameters is considered. To demonstrate the applicability and higher efficiency of the proposed method, we compare it against the method it originates from and four representative edge-sparsifying regularized methods from the literature already assessed in a previous work. Our attention is mainly paid to the objective analysis (via the l1-norm) of the blur PSF estimation error. The tests are performed on a synthetic hyperspectral image. This synthetic hyperspectral image was built from samples of classified areas of a real-life hyperspectral image, in order to benefit from a realistic spatial distribution of reference spectral signatures to recover after synthetic degradation. The synthetic hyperspectral image was successively degraded with eight real blurs taken from the literature, each with a different support size. Conclusions, practical recommendations and perspectives are drawn from the experimental results.

  12. Event-Based Media Enrichment Using an Adaptive Probabilistic Hypergraph Model.

    PubMed

    Liu, Xueliang; Wang, Meng; Yin, Bao-Cai; Huet, Benoit; Li, Xuelong

    2015-11-01

    Nowadays, with the continual development of digital capture technologies and social media services, a vast number of media documents are captured and shared online to help attendees record their experience during events. In this paper, we present a method combining semantic inference and multimodal analysis for automatically finding media content to illustrate events using an adaptive probabilistic hypergraph model. In this model, media items are taken as vertices in the weighted hypergraph and the task of enriching media to illustrate events is formulated as a ranking problem. In our method, each hyperedge is constructed using the K-nearest neighbors of a given media document. We also employ a probabilistic representation, which assigns each vertex to a hyperedge in a probabilistic way, to further exploit the correlation among media data. Furthermore, we optimize the hypergraph weights in a regularization framework, which is solved as a second-order cone problem. The approach is initiated by seed media and then used to rank the media documents using a transductive inference process. The results obtained from validating the approach on an event dataset collected from EventMedia demonstrate the effectiveness of the proposed approach.

  13. An Efficient Augmented Lagrangian Method for Statistical X-Ray CT Image Reconstruction.

    PubMed

    Li, Jiaojiao; Niu, Shanzhou; Huang, Jing; Bian, Zhaoying; Feng, Qianjin; Yu, Gaohang; Liang, Zhengrong; Chen, Wufan; Ma, Jianhua

    2015-01-01

    Statistical iterative reconstruction (SIR) for X-ray computed tomography (CT) under penalized weighted least-squares criteria can yield significant gains over conventional analytical reconstruction from noisy measurements. However, due to the nonlinear expression of the objective function, most existing algorithms for SIR unavoidably suffer from a heavy computation load and a slow convergence rate, especially when an edge-preserving or sparsity-based penalty or regularization is incorporated. In this work, to address the abovementioned issues, we propose an adaptive nonmonotone alternating direction algorithm in the framework of the augmented Lagrangian multiplier method, termed "ALM-ANAD". The algorithm effectively combines an alternating direction technique with an adaptive nonmonotone line search to minimize the augmented Lagrangian function at each iteration. To evaluate the present ALM-ANAD algorithm, both qualitative and quantitative studies were conducted using digital and physical phantoms. Experimental results show that the present ALM-ANAD algorithm achieves noticeable gains over the classical nonlinear conjugate gradient algorithm and the state-of-the-art split Bregman algorithm in terms of noise reduction, contrast-to-noise ratio, convergence rate, and universal quality index metrics.
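
    ALM-ANAD itself involves a CT forward model and a nonmonotone line search; as a much-simplified illustration of the alternating-direction idea it builds on, here is a toy ADMM solver for a 1D TV-penalized weighted least-squares problem. Operator sizes, the penalty weight, and the step parameter are all illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def admm_wls_tv(A, W, b, lam=0.1, rho=1.0, n_iter=300):
    """ADMM for min_x 0.5*||W^(1/2)(Ax - b)||^2 + lam*||Dx||_1,
    where D is the 1D forward-difference operator and W a weight vector."""
    n = A.shape[1]
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]          # forward differences
    z = np.zeros(n - 1); u = np.zeros(n - 1)
    lhs = A.T @ (W[:, None] * A) + rho * D.T @ D      # fixed normal-equations matrix
    rhs0 = A.T @ (W * b)
    for _ in range(n_iter):
        x = np.linalg.solve(lhs, rhs0 + rho * D.T @ (z - u))
        Dx = D @ x
        z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0)  # shrink
        u += Dx - z
    return x

rng = np.random.default_rng(2)
x_true = np.repeat([0.0, 1.0, 0.3], 30)               # piecewise-constant phantom
A = rng.normal(size=(120, 90)); W = np.ones(120)
x_hat = admm_wls_tv(A, W, A @ x_true + 0.05 * rng.normal(size=120), lam=0.5)
```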

  14. Strategies for Competition Beyond Open Architecture (OA): Acquisition at the Edge of Chaos

    DTIC Science & Technology

    2014-11-08

    critical region in which the global properties of the system take on regular behavior, such as a power-law distribution of event sizes. Such ideas are...standardization can improve system developments. For example, satellite development can be improved by using standard interfaces for the sensors...installed on the satellite bus. This allows for more adaptability within the system.1 The adaptability is a key foundation for realizing capability on demand

  15. Subcortical processing of speech regularities underlies reading and music aptitude in children.

    PubMed

    Strait, Dana L; Hornickel, Jane; Kraus, Nina

    2011-10-17

    Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input. Definition of common biological underpinnings for music and reading supports the usefulness of music for promoting child literacy, with the potential to improve reading remediation.

  16. Coarse-graining as a downward causation mechanism

    NASA Astrophysics Data System (ADS)

    Flack, Jessica C.

    2017-11-01

    Downward causation is the controversial idea that `higher' levels of organization can causally influence behaviour at `lower' levels of organization. Here I propose that we can gain traction on downward causation by being operational and examining how adaptive systems identify regularities in evolutionary or learning time and use these regularities to guide behaviour. I suggest that in many adaptive systems components collectively compute their macroscopic worlds through coarse-graining. I further suggest we move from simple feedback to downward causation when components tune behaviour in response to estimates of collectively computed macroscopic properties. I introduce a weak and strong notion of downward causation and discuss the role the strong form plays in the origins of new organizational levels. I illustrate these points with examples from the study of biological and social systems and deep neural networks. This article is part of the themed issue 'Reconceptualizing the origins of life'.

  17. Coarse-graining as a downward causation mechanism

    PubMed Central

    2017-01-01

    Downward causation is the controversial idea that ‘higher’ levels of organization can causally influence behaviour at ‘lower’ levels of organization. Here I propose that we can gain traction on downward causation by being operational and examining how adaptive systems identify regularities in evolutionary or learning time and use these regularities to guide behaviour. I suggest that in many adaptive systems components collectively compute their macroscopic worlds through coarse-graining. I further suggest we move from simple feedback to downward causation when components tune behaviour in response to estimates of collectively computed macroscopic properties. I introduce a weak and strong notion of downward causation and discuss the role the strong form plays in the origins of new organizational levels. I illustrate these points with examples from the study of biological and social systems and deep neural networks. This article is part of the themed issue ‘Reconceptualizing the origins of life’. PMID:29133440

  18. Algorithm for Lossless Compression of Calibrated Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2010-01-01

    A two-stage predictive method was developed for lossless compression of calibrated hyperspectral imagery. The first prediction stage uses a conventional linear predictor intended to exploit spatial and/or spectral dependencies in the data. The compressor tabulates counts of the past values of the difference between this initial prediction and the actual sample value. To form the ultimate predicted value, in the second stage, these counts are combined with an adaptively updated weight function intended to capture information about data regularities introduced by the calibration process. Finally, prediction residuals are losslessly encoded using adaptive arithmetic coding. Algorithms of this type are commonly tested on a readily available collection of images from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral imager. On the standard calibrated AVIRIS hyperspectral images that are most widely used for compression benchmarking, the new compressor provides more than 0.5 bits/sample improvement over the previous best compression results. The algorithm has been implemented in Mathematica. The compression algorithm was demonstrated as beneficial on 12-bit calibrated AVIRIS images.
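
    A hypothetical miniature of the two-stage idea (not the NASA algorithm): stage one predicts each sample from the previous band, and stage two corrects that prediction with a running per-band mean of past prediction errors, standing in for the count-based weight function; the adaptive arithmetic-coding stage is omitted. The function name and structure are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

def two_stage_residuals(cube):
    """Stage 1: predict each sample from the previous band (linear predictor).
    Stage 2: correct with the running mean of past errors in that band (a
    crude stand-in for the adaptively updated weight function). Returns the
    residuals that would feed an entropy coder; uses only past data, so a
    decoder can mirror the process."""
    bands, h, w = cube.shape
    residuals = np.zeros_like(cube)
    err_sum = defaultdict(float); err_cnt = defaultdict(int)
    for z in range(bands):
        for y in range(h):
            for x in range(w):
                pred = cube[z - 1, y, x] if z > 0 else 0.0       # stage 1
                bias = err_sum[z] / err_cnt[z] if err_cnt[z] else 0.0
                residuals[z, y, x] = cube[z, y, x] - (pred + bias)
                err_sum[z] += cube[z, y, x] - pred; err_cnt[z] += 1
    return residuals

cube = np.cumsum(np.random.default_rng(5).normal(size=(4, 8, 8)), axis=0)
r = two_stage_residuals(cube)   # residuals are near zero-mean per band
```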

  19. Feasibility of a Sensory-Adapted Dental Environment for Children With Autism

    PubMed Central

    Stein Duker, Leah I.; Williams, Marian E.; Lane, Christianne Joy; Dawson, Michael E.; Borreson, Ann E.; Polido, José C.

    2015-01-01

    OBJECTIVE. To provide an example of an occupational therapy feasibility study and evaluate the implementation of a randomized controlled pilot and feasibility trial examining the impact of a sensory-adapted dental environment (SADE) to enhance oral care for children with autism spectrum disorder (ASD). METHOD. Twenty-two children with ASD and 22 typically developing children, ages 6–12 yr, attended a dental clinic in an urban hospital. Participants completed two dental cleanings, 3–4 mo apart, one in a regular environment and one in a SADE. Feasibility outcome measures were recruitment, retention, accrual, dropout, and protocol adherence. Intervention outcome measures were physiological stress, behavioral distress, pain, and cost. RESULTS. We successfully recruited and retained participants. Parents expressed satisfaction with research study participation. Dentists stated that the intervention could be incorporated in normal practice. Intervention outcome measures favored the SADE condition. CONCLUSION. Preliminary positive benefit of SADE in children with ASD warrants moving forward with a large-scale clinical trial. PMID:25871593

  20. Implementation of real-time energy management strategy based on reinforcement learning for hybrid electric vehicles and simulation validation

    PubMed Central

    Kong, Zehui; Liu, Teng

    2017-01-01

    To further improve the fuel economy of series hybrid electric tracked vehicles, a reinforcement learning (RL)-based real-time energy management strategy is developed in this paper. In order to effectively utilize the statistical characteristics of the online driving schedule, a recursive algorithm for the transition probability matrix (TPM) of power requests is derived. Reinforcement learning (RL) is applied to calculate and update the control policy at regular intervals, adapting to the varying driving conditions. A forward-facing powertrain model is built in detail, including the engine-generator model, battery model and vehicle dynamics model. The robustness and adaptability of the real-time energy management strategy are validated through comparison with a stationary control strategy based on an initial TPM generated from a long naturalistic driving cycle in simulation. Results indicate that the proposed method achieves better fuel economy than the stationary one and is more effective in real-time control. PMID:28671967
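
    The recursive TPM estimate at the core of this strategy can be sketched simply: maintain a count matrix over quantized power-request transitions and renormalize rows whenever the policy is updated. The state binning, class name, and kW range below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class RecursiveTPM:
    """Running estimate of the power-request transition probability matrix:
    each observed transition increments a count, and rows are renormalized on
    demand, so the TPM tracks the driving conditions online."""
    def __init__(self, n_states):
        self.counts = np.ones((n_states, n_states))   # Laplace prior avoids zeros
    def update(self, s_prev, s_next):
        self.counts[s_prev, s_next] += 1
    def tpm(self):
        return self.counts / self.counts.sum(axis=1, keepdims=True)

levels = np.linspace(0, 60, 10)                       # kW bin edges (assumed)
est = RecursiveTPM(len(levels))
for p_prev, p_next in [(5, 8), (8, 12), (12, 9)]:     # toy power-request stream
    est.update(np.digitize(p_prev, levels) - 1, np.digitize(p_next, levels) - 1)
P = est.tpm()   # feed P to the RL policy update at regular intervals
```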

  1. Implementation of real-time energy management strategy based on reinforcement learning for hybrid electric vehicles and simulation validation.

    PubMed

    Kong, Zehui; Zou, Yuan; Liu, Teng

    2017-01-01

    To further improve the fuel economy of series hybrid electric tracked vehicles, a reinforcement learning (RL)-based real-time energy management strategy is developed in this paper. In order to effectively utilize the statistical characteristics of the online driving schedule, a recursive algorithm for the transition probability matrix (TPM) of power requests is derived. Reinforcement learning (RL) is applied to calculate and update the control policy at regular intervals, adapting to the varying driving conditions. A forward-facing powertrain model is built in detail, including the engine-generator model, battery model and vehicle dynamics model. The robustness and adaptability of the real-time energy management strategy are validated through comparison with a stationary control strategy based on an initial TPM generated from a long naturalistic driving cycle in simulation. Results indicate that the proposed method achieves better fuel economy than the stationary one and is more effective in real-time control.

  2. Active Recovery After High-Intensity Interval-Training Does Not Attenuate Training Adaptation.

    PubMed

    Wiewelhove, Thimo; Schneider, Christoph; Schmidt, Alina; Döweling, Alexander; Meyer, Tim; Kellmann, Michael; Pfeiffer, Mark; Ferrauti, Alexander

    2018-01-01

    Objective: High-intensity interval training (HIIT) can be extremely demanding and can consequently produce high blood lactate levels. Previous studies have shown that lactate is a potent metabolic stimulus, which is important for adaptation. Active recovery (ACT) after intensive exercise, however, enhances blood lactate removal in comparison with passive recovery (PAS) and, consequently, may attenuate endurance performance improvements. Therefore, the aim of this study was to examine the influence of regular ACT on training adaptations during a HIIT mesocycle. Methods: Twenty-six well-trained male intermittent sport athletes (age: 23.5 ± 2.5 years; VO2max: 55.36 ± 3.69 ml·min-1·kg-1) participated in a randomized controlled trial consisting of 4 weeks of a running-based HIIT mesocycle with a total of 12 HIIT sessions. After each training session, participants completed 15 min of either moderate jogging (ACT) or PAS. Subjects were matched to the ACT or PAS groups according to age and performance. Before the HIIT program and 1 week after the last training session, the athletes performed a progressive incremental exercise test on a motor-driven treadmill to determine VO2max, maximum running velocity (vmax), the running velocity at which VO2max occurs (vVO2max), and the anaerobic lactate threshold (AT). Furthermore, repeated sprint ability (RSA) was determined. Results: In the whole group the HIIT mesocycle induced significant small-to-moderate changes in vmax (p < 0.001, effect size [ES] = 0.65), vVO2max (p < 0.001, ES = 0.62), and AT (p < 0.001, ES = 0.56) compared with the values before the intervention. VO2max and RSA remained unchanged throughout the study. In addition, no significant differences in the changes were noted in any of the parameters between ACT and PAS except for AT (p < 0.05, ES = 0.57). Conclusion: Regular use of individualized ACT did not attenuate training adaptations during a HIIT mesocycle compared to PAS. Interestingly, we found that the ACT group obtained a significantly higher AT following the training program compared to the PAS group. This could be because ACT allows a continuation of the training at a low intensity and may activate specific adaptive mechanisms that are not triggered during PAS.

  3. Active Recovery After High-Intensity Interval-Training Does Not Attenuate Training Adaptation

    PubMed Central

    Wiewelhove, Thimo; Schneider, Christoph; Schmidt, Alina; Döweling, Alexander; Meyer, Tim; Kellmann, Michael; Pfeiffer, Mark; Ferrauti, Alexander

    2018-01-01

    Objective: High-intensity interval training (HIIT) can be extremely demanding and can consequently produce high blood lactate levels. Previous studies have shown that lactate is a potent metabolic stimulus, which is important for adaptation. Active recovery (ACT) after intensive exercise, however, enhances blood lactate removal in comparison with passive recovery (PAS) and, consequently, may attenuate endurance performance improvements. Therefore, the aim of this study was to examine the influence of regular ACT on training adaptations during a HIIT mesocycle. Methods: Twenty-six well-trained male intermittent sport athletes (age: 23.5 ± 2.5 years; VO2max: 55.36 ± 3.69 ml·min-1·kg-1) participated in a randomized controlled trial consisting of 4 weeks of a running-based HIIT mesocycle with a total of 12 HIIT sessions. After each training session, participants completed 15 min of either moderate jogging (ACT) or PAS. Subjects were matched to the ACT or PAS groups according to age and performance. Before the HIIT program and 1 week after the last training session, the athletes performed a progressive incremental exercise test on a motor-driven treadmill to determine VO2max, maximum running velocity (vmax), the running velocity at which VO2max occurs (vVO2max), and the anaerobic lactate threshold (AT). Furthermore, repeated sprint ability (RSA) was determined. Results: In the whole group the HIIT mesocycle induced significant small-to-moderate changes in vmax (p < 0.001, effect size [ES] = 0.65), vVO2max (p < 0.001, ES = 0.62), and AT (p < 0.001, ES = 0.56) compared with the values before the intervention. VO2max and RSA remained unchanged throughout the study. In addition, no significant differences in the changes were noted in any of the parameters between ACT and PAS except for AT (p < 0.05, ES = 0.57). Conclusion: Regular use of individualized ACT did not attenuate training adaptations during a HIIT mesocycle compared to PAS. Interestingly, we found that the ACT group obtained a significantly higher AT following the training program compared to the PAS group. This could be because ACT allows a continuation of the training at a low intensity and may activate specific adaptive mechanisms that are not triggered during PAS. PMID:29720949

  4. Hierarchical image segmentation via recursive superpixel with adaptive regularity

    NASA Astrophysics Data System (ADS)

    Nakamura, Kensuke; Hong, Byung-Woo

    2017-11-01

    A fast and accurate hierarchical segmentation algorithm based on a recursive superpixel technique is presented. We propose a superpixel energy formulation in which the trade-off between data fidelity and regularization is determined dynamically from the local residual in the energy optimization procedure. We also present an energy optimization algorithm that allows a pixel to be shared by multiple regions, improving the accuracy and yielding an appropriate number of segments. Qualitative and quantitative evaluations demonstrate that our algorithm, combining the proposed energy and optimization, outperforms the conventional k-means algorithm by up to 29.10% in F-measure. We also perform a comparative analysis with state-of-the-art hierarchical segmentation algorithms. Our algorithm yields smooth regions throughout the hierarchy, as opposed to the others, which include insignificant details. Our algorithm surpasses the other algorithms in terms of the balance between accuracy and computational time. Specifically, our method runs 36.48% faster than the region-merging approach, the fastest of the compared algorithms, while achieving comparable accuracy.
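
    The paper's energy model is its own; purely as a sketch of residual-adaptive regularity, the same idea can be mimicked with off-the-shelf SLIC (assuming scikit-image >= 0.17 for the mask argument): superpixels whose internal color variance (the local residual) is high are re-segmented with a lower compactness, i.e. weaker regularity. Thresholds and names are illustrative assumptions.

```python
import numpy as np
from skimage import data, segmentation

def recursive_slic(img, n_segments=50, compactness=20.0, var_thresh=0.01):
    """Coarse SLIC first; high-variance superpixels get refined with a
    lower compactness (less regularity, more data fidelity)."""
    labels = segmentation.slic(img, n_segments=n_segments,
                               compactness=compactness, start_label=1)
    out = labels.copy()
    for lab in np.unique(labels):
        mask = labels == lab
        if img[mask].var() > var_thresh:          # large residual -> refine here
            sub = segmentation.slic(img, n_segments=4,
                                    compactness=compactness / 4,
                                    mask=mask, start_label=1)
            out[sub > 0] = out.max() + sub[sub > 0]   # unique ids for sub-regions
    return out

img = data.astronaut() / 255.0
labels = recursive_slic(img)
```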

  5. Spectral X-ray Radiography for Safeguards at Nuclear Fuel Fabrication Facilities: A Feasibility Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, Andrew J.; McDonald, Benjamin S.; Smith, Leon E.

    The methods currently used by the International Atomic Energy Agency to account for nuclear materials at fuel fabrication facilities are time consuming and require in-field chemistry and operation by experts. Spectral X-ray radiography, along with advanced inverse algorithms, is an alternative inspection that could be completed noninvasively, without any in-field chemistry, with inspection times of tens of seconds. The proposed inspection system and algorithms are presented here. The inverse algorithm uses total variation regularization and adaptive regularization parameter selection with the unbiased predictive risk estimator. Performance of the system is quantified with simulated X-ray inspection data, and sensitivity of the output is tested against various inspection system instabilities. Material quantification from a fully characterized inspection system is shown to be very accurate, with biases on nuclear material estimates of < 0.02%. It is shown that the results are sensitive to variations in fuel powder sample density and detector pixel gain, which increase biases to 1%. Options to mitigate these inaccuracies are discussed.
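
    Unbiased predictive risk estimation (UPRE) for parameter selection can be illustrated on a standard-form Tikhonov problem, where the SVD gives the residual norm and the influence-matrix trace in closed form. This is a sketch under those simplifying assumptions, not the inspection-system code; here lam plays the role of the squared regularization parameter.

```python
import numpy as np

def upre_lambda(A, b, sigma, lambdas):
    """Pick the Tikhonov parameter minimizing the unbiased predictive risk
    estimator, computed via the SVD of A (standard-form penalty ||x||^2)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    b_perp2 = b @ b - beta @ beta        # residual energy outside range(U)
    m = len(b)
    best, best_lam = np.inf, None
    for lam in lambdas:
        f = s**2 / (s**2 + lam)                   # Tikhonov filter factors
        res2 = np.sum(((1 - f) * beta) ** 2) + b_perp2
        upre = res2 / m + 2 * sigma**2 * f.sum() / m - sigma**2
        if upre < best:
            best, best_lam = upre, lam
    return best_lam

rng = np.random.default_rng(3)
A = rng.normal(size=(80, 40)); x = rng.normal(size=40)
b = A @ x + 0.5 * rng.normal(size=80)
print(upre_lambda(A, b, sigma=0.5, lambdas=np.logspace(-3, 3, 50)))
```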

  6. Prediction of adaptive self-regulatory responses to arthritis pain anxiety in exercising adults: Does pain acceptance matter?

    PubMed Central

    Cary, Miranda A; Gyurcsik, Nancy C; Brawley, Lawrence R

    2015-01-01

    BACKGROUND: Exercising for ≥150 min/week is a recommended strategy for self-managing arthritis. However, exercise nonadherence is a problem. Arthritis pain anxiety may interfere with regular exercise. According to the fear-avoidance model, individuals may confront their pain anxiety by using adaptive self-regulatory responses (eg, changing exercise type or duration). Furthermore, the anxiety-self-regulatory responses relationship may vary as a function of individuals’ pain acceptance levels. OBJECTIVES: To investigate pain acceptance as a moderator of the pain anxiety-adaptive self-regulatory responses relationship. The secondary objective was to examine whether groups of patients who differed in meeting exercise recommendations also differed in pain-related and self-regulatory responses. METHODS: Adults (mean [± SD] age 49.75±13.88 years) with medically diagnosed arthritis completed online measures of arthritis pain-related variables and self-regulatory responses at baseline, and exercise participation two weeks later. Individuals meeting (n=87) and not meeting (n=49) exercise recommendations were identified. RESULTS: Hierarchical multiple regression analysis revealed that pain acceptance moderated the anxiety-adaptive self-regulatory responses relationship. When pain anxiety was lower, greater pain acceptance was associated with less frequent use of adaptive responses. When anxiety was higher, adaptive responses were used regardless of pain acceptance level. MANOVA findings revealed that participants meeting the recommended exercise dose reported significantly lower pain and pain anxiety, and greater pain acceptance (P<0.05) than those not meeting the dose. CONCLUSIONS: Greater pain acceptance may help individuals to focus their efforts to adapt to their pain anxiety only when it is higher, leaving self-regulatory capacity to cope with additional challenges to exercise adherence (eg, busy schedule). PMID:25621990

  7. Optimal Tikhonov regularization for DEER spectroscopy

    NASA Astrophysics Data System (ADS)

    Edwards, Thomas H.; Stoll, Stefan

    2018-03-01

    Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α, and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.
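
    A minimal sketch of one of the well-performing selectors named above, generalized cross validation with a second-derivative operator, on a small dense problem; the DEER kernel itself is omitted, and K below is just a stand-in matrix.

```python
import numpy as np

def gcv_select(K, y, alphas):
    """Generalized cross validation for Tikhonov regularization with a
    second-derivative penalty operator L: minimize ||r||^2 / tr(I - H)^2."""
    m, n = K.shape
    # Second-difference operator: rows give x[i] - 2*x[i+1] + x[i+2].
    L = np.eye(n, k=-1)[1:-1] - 2 * np.eye(n)[1:-1] + np.eye(n, k=1)[1:-1]
    scores = []
    for a in alphas:
        H_inv = np.linalg.inv(K.T @ K + a**2 * (L.T @ L))
        influence = K @ H_inv @ K.T               # maps y to the fitted K x_alpha
        r = y - influence @ y
        scores.append((r @ r) / (m - np.trace(influence)) ** 2)
    return alphas[int(np.argmin(scores))]

rng = np.random.default_rng(6)
K = rng.normal(size=(60, 40))
y = K @ np.sin(np.linspace(0, 3, 40)) + 0.1 * rng.normal(size=60)
print(gcv_select(K, y, np.logspace(-3, 2, 40)))
```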

  8. Retaining both discrete and smooth features in 1D and 2D NMR relaxation and diffusion experiments

    NASA Astrophysics Data System (ADS)

    Reci, A.; Sederman, A. J.; Gladden, L. F.

    2017-11-01

    A new method of regularization of 1D and 2D NMR relaxation and diffusion experiments is proposed and a robust algorithm for its implementation is introduced. The new form of regularization, termed Modified Total Generalized Variation (MTGV) regularization, offers a compromise between distinguishing discrete and smooth features in the reconstructed distributions. The method is compared to the conventional method of Tikhonov regularization and the recently proposed method of L1 regularization, when applied to simulated data of 1D spin-lattice relaxation, T1, 1D spin-spin relaxation, T2, and 2D T1-T2 NMR experiments. A range of simulated distributions composed of two lognormally distributed peaks was studied. The distributions differed in the variances of the peaks, which were chosen to cover distributions containing only discrete features, only smooth features, or both in the same distribution. Three signal-to-noise ratios were studied: 2000, 200 and 20. A new metric is proposed to compare the distributions reconstructed by the different regularization methods with the true distributions. The metric is designed to penalise reconstructed distributions that show artefact peaks. Based on this metric, MTGV regularization performs better than Tikhonov and L1 regularization in all cases except when the distribution is known to comprise only discrete peaks, in which case L1 regularization is slightly more accurate than MTGV regularization.

  9. Can We Improve Structured Sequence Processing? Exploring the Direct and Indirect Effects of Computerized Training Using a Mediational Model

    PubMed Central

    Smith, Gretchen N. L.; Conway, Christopher M.; Bauernschmidt, Althea; Pisoni, David B.

    2015-01-01

    Recent research suggests that language acquisition may rely on domain-general learning abilities, such as structured sequence processing, which is the ability to extract, encode, and represent structured patterns in a temporal sequence. If structured sequence processing supports language, then it may be possible to improve language function by enhancing this foundational learning ability. The goal of the present study was to use a novel computerized training task as a means to better understand the relationship between structured sequence processing and language function. Participants first were assessed on pre-training tasks to provide baseline behavioral measures of structured sequence processing and language abilities. Participants were then quasi-randomly assigned to either a treatment group involving adaptive structured visuospatial sequence training, a treatment group involving adaptive non-structured visuospatial sequence training, or a control group. Following four days of sequence training, all participants were assessed with the same pre-training measures. Overall comparison of the post-training means revealed no group differences. However, in order to examine the potential relations between sequence training, structured sequence processing, and language ability, we used a mediation analysis that showed two competing effects. In the indirect effect, adaptive sequence training with structural regularities had a positive impact on structured sequence processing performance, which in turn had a positive impact on language processing. This finding not only identifies a potential novel intervention to treat language impairments but also may be the first demonstration that structured sequence processing can be improved and that this, in turn, has an impact on language processing. However, in the direct effect, adaptive sequence training with structural regularities had a direct negative impact on language processing. This unexpected finding suggests that adaptive training with structural regularities might potentially interfere with language processing. Taken together, these findings underscore the importance of pursuing designs that promote a better understanding of the mechanisms underlying training-related changes, so that regimens can be developed that help reduce these types of negative effects while simultaneously maximizing the benefits to outcome measures of interest. PMID:25946222

  10. Can we improve structured sequence processing? Exploring the direct and indirect effects of computerized training using a mediational model.

    PubMed

    Smith, Gretchen N L; Conway, Christopher M; Bauernschmidt, Althea; Pisoni, David B

    2015-01-01

    Recent research suggests that language acquisition may rely on domain-general learning abilities, such as structured sequence processing, which is the ability to extract, encode, and represent structured patterns in a temporal sequence. If structured sequence processing supports language, then it may be possible to improve language function by enhancing this foundational learning ability. The goal of the present study was to use a novel computerized training task as a means to better understand the relationship between structured sequence processing and language function. Participants first were assessed on pre-training tasks to provide baseline behavioral measures of structured sequence processing and language abilities. Participants were then quasi-randomly assigned to either a treatment group involving adaptive structured visuospatial sequence training, a treatment group involving adaptive non-structured visuospatial sequence training, or a control group. Following four days of sequence training, all participants were assessed with the same pre-training measures. Overall comparison of the post-training means revealed no group differences. However, in order to examine the potential relations between sequence training, structured sequence processing, and language ability, we used a mediation analysis that showed two competing effects. In the indirect effect, adaptive sequence training with structural regularities had a positive impact on structured sequence processing performance, which in turn had a positive impact on language processing. This finding not only identifies a potential novel intervention to treat language impairments but also may be the first demonstration that structured sequence processing can be improved and that this, in turn, has an impact on language processing. However, in the direct effect, adaptive sequence training with structural regularities had a direct negative impact on language processing. This unexpected finding suggests that adaptive training with structural regularities might potentially interfere with language processing. Taken together, these findings underscore the importance of pursuing designs that promote a better understanding of the mechanisms underlying training-related changes, so that regimens can be developed that help reduce these types of negative effects while simultaneously maximizing the benefits to outcome measures of interest.

  11. Point target detection utilizing super-resolution strategy for infrared scanning oversampling system

    NASA Astrophysics Data System (ADS)

    Wang, Longguang; Lin, Zaiping; Deng, Xinpu; An, Wei

    2017-11-01

    To improve the resolution of remote sensing infrared images, an infrared scanning oversampling system is employed, which quadruples the amount of information and thus aids target detection. Generally, the image data from the double-line detector of an infrared scanning oversampling system are interleaved into a single oversampled image for post-processing, but the aliasing between neighboring pixels degrades the image, with a great impact on target detection. This paper formulates a point target detection method utilizing a super-resolution (SR) strategy for infrared scanning oversampling systems, with an accelerated SR strategy proposed to realize fast de-aliasing of the oversampled image and an adaptive MRF-based regularization designed to achieve the preservation and aggregation of target energy. Extensive experiments demonstrate the superior detection performance, robustness and efficiency of the proposed method compared with other state-of-the-art approaches.

  12. CRISPR-Cas Targeting of Host Genes as an Antiviral Strategy.

    PubMed

    Chen, Shuliang; Yu, Xiao; Guo, Deyin

    2018-01-16

    Currently, a new gene-editing tool, the Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR)-associated (Cas) system, is becoming a promising approach for genetic manipulation at the genomic level. This simple method, originating from the adaptive immune defense system in prokaryotes, has been developed and applied to antiviral research in humans. Based on the characteristics of virus-host interactions and the basic rules of nucleic acid cleavage or gene activation of the CRISPR-Cas system, it can be used to target both the virus genome and host factors to clear viral reservoirs and prevent virus infection or replication. Here, we summarize recent progress of CRISPR-Cas technology in editing host genes as an antiviral strategy.

  13. Visual communications and image processing '92; Proceedings of the Meeting, Boston, MA, Nov. 18-20, 1992

    NASA Astrophysics Data System (ADS)

    Maragos, Petros

    The topics discussed at the conference include hierarchical image coding, motion analysis, feature extraction and image restoration, video coding, and morphological and related nonlinear filtering. Attention is also given to vector quantization, morphological image processing, fractals and wavelets, architectures for image and video processing, image segmentation, biomedical image processing, and model-based analysis. Papers are presented on affine models for motion and shape recovery, filters for directly detecting surface orientation in an image, tracking of unresolved targets in infrared imagery using a projection-based method, adaptive-neighborhood image processing, and regularized multichannel restoration of color images using cross-validation. (For individual items see A93-20945 to A93-20951)

  14. An iterative algorithm for L1-TV constrained regularization in image restoration

    NASA Astrophysics Data System (ADS)

    Chen, K.; Loli Piccolomini, E.; Zama, F.

    2015-11-01

    We consider the problem of restoring blurred images affected by impulsive noise. The adopted method restores the images by solving a sequence of constrained minimization problems where the data fidelity function is the ℓ1 norm of the residual and the constraint, chosen as the image Total Variation, is automatically adapted to improve the quality of the restored images. Although this approach is general, we report here the case of vectorial images where the blurring model involves contributions from the different image channels (cross channel blur). A computationally convenient extension of the Total Variation function to vectorial images is used and the results reported show that this approach is efficient for recovering nearly optimal images.

  15. Condition Number Regularized Covariance Estimation*

    PubMed Central

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  16. Condition Number Regularized Covariance Estimation.

    PubMed

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
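
    The paper derives the optimal truncation level in closed form; the sketch below conveys the idea with a simple grid search instead: keep the sample eigenvectors, clip the eigenvalues into an interval whose condition number is at most kappa, and choose the interval by Gaussian likelihood. Names, the grid, and the default kappa are assumptions.

```python
import numpy as np

def cond_reg_cov(X, kappa=10.0, grid=200):
    """Covariance estimate with condition number <= kappa: clip the sample
    eigenvalues into [u, kappa*u], picking u by minimizing the Gaussian
    negative log-likelihood over a grid of candidate values."""
    S = np.cov(X, rowvar=False)
    l, V = np.linalg.eigh(S)
    l = np.maximum(l, 1e-12)                 # guard against zero eigenvalues

    def nll(u):
        d = np.clip(l, u, kappa * u)
        return np.sum(np.log(d) + l / d)     # Gaussian NLL in the eigenbasis

    us = np.geomspace(l.min(), l.max(), grid)
    u_best = min(us, key=nll)
    d = np.clip(l, u_best, kappa * u_best)
    return (V * d) @ V.T

rng = np.random.default_rng(4)
Sigma_hat = cond_reg_cov(rng.normal(size=(30, 50)))   # "large p small n": p=50, n=30
print(np.linalg.cond(Sigma_hat))                      # <= kappa by construction
```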

  17. X-ray luminescence computed tomography imaging based on X-ray distribution model and adaptively split Bregman method

    PubMed Central

    Chen, Dongmei; Zhu, Shouping; Cao, Xu; Zhao, Fengjun; Liang, Jimin

    2015-01-01

    X-ray luminescence computed tomography (XLCT) has become a promising imaging technology for biological applications based on phosphor nanoparticles. There are mainly three kinds of XLCT imaging systems: pencil beam XLCT, narrow beam XLCT and cone beam XLCT. Narrow beam XLCT can be regarded as a balance between the pencil beam mode and the cone beam mode in terms of imaging efficiency and image quality. The collimated X-ray beams are assumed to be parallel in traditional narrow beam XLCT. However, we observe that in our prototype narrow beam XLCT the cone beam X-rays are collimated into beams with fan-shaped broadening rather than parallel ones. Hence, we incorporate the distribution of the X-ray beams into the physical model and collect the optical data from only two perpendicular directions to further shorten the scanning time. Meanwhile, we propose a depth-related adaptive regularized split Bregman (DARSB) method for reconstruction. The simulation experiments show that the proposed physical model and method achieve better results in location error, Dice coefficient, mean square error and intensity error than the traditional split Bregman method, validating the feasibility of the method. The phantom experiment achieves a location error of less than 1.1 mm and validates that incorporating fan-shaped X-ray beams in our model achieves better results than assuming parallel X-rays. PMID:26203388

  18. Applying Behavioral Conditioning to Identify Anticipatory Behaviors.

    PubMed

    Krebs, Bethany L; Torres, Erika; Chesney, Charlie; Kantoniemi Moon, Veronica; Watters, Jason V

    2017-01-01

    The ability to predict regular events can be adaptive for nonhuman animals living in an otherwise unpredictable environment. Animals may exhibit behavioral changes preceding a predictable event; such changes reflect anticipatory behavior. Anticipatory behavior is broadly defined as a goal-directed increase in activity preceding a predictable event and can be useful for assessing well being in animals in captivity. Anticipation may look different in different animals, however, necessitating methods to generate and study anticipatory behaviors across species. This article includes a proposed method for generating and describing anticipatory behavior in zoos using behavioral conditioning. The article also includes discussion of case studies of the proposed method with 2 animals at the San Francisco Zoo: a silverback gorilla (Gorilla gorilla gorilla) and a red panda (Ailurus fulgens). The study evidence supports anticipation in both animals. As behavioral conditioning can be used with many animals, the proposed method provides a practical approach for using anticipatory behavior to assess animal well being in zoos.

  19. Dissecting and Culturing Animal Cap Explants.

    PubMed

    Dingwell, Kevin S; Smith, James C

    2018-05-16

    The animal cap explant is a simple but adaptable tool available to developmental biologists. The use of animal cap explants in demonstrating the presence of mesoderm-inducing activity in the Xenopus embryo vegetal pole is one of many elegant examples of their worth. Animal caps respond to a range of growth factors (e.g., Wnts, FGF, TGF-β), making them especially useful for studying signal transduction pathways and gene regulatory networks. Explants are also suitable for examining cell behavior and have provided key insights into the molecular mechanisms controlling vertebrate morphogenesis. In this protocol, we outline two methods to isolate animal cap explants from Xenopus laevis, both of which can be applied easily to Xenopus tropicalis. The first method is a standard manual method that can be used in any laboratory equipped with a standard dissecting microscope. For labs planning on dissecting large numbers of explants on a regular basis, a second, high-throughput method is described that uses a specialized microcautery surgical instrument. © 2018 Cold Spring Harbor Laboratory Press.

  20. The three-dimensional flight of red-footed boobies: adaptations to foraging in a tropical environment?

    PubMed Central

    Weimerskirch, H.; Le Corre, M.; Ropert-Coudert, Y.; Kato, A.; Marsac, F.

    2005-01-01

    In seabirds a broad variety of morphologies, flight styles and feeding methods exist as an adaptation to optimal foraging in contrasted marine environments for a wide variety of prey types. Because of the low productivity of tropical waters it is expected that specific flight and foraging techniques have been selected there, but very few data are available. By using five different types of high-precision miniaturized logger (global positioning systems, accelerometers, time depth recorders, activity recorders, altimeters) we studied the way a seabird forages over tropical waters. Red-footed boobies forage during the day and never at night, probably as a result of predation risks. They make extensive use of wind conditions, flying preferentially with crosswinds at a median speed of 38 km h−1 and reaching the highest speeds with tail winds. They spent 66% of the foraging trip in flight, using a flap–glide flight and gliding for 68% of the flight. Low-cost travelling was regularly interrupted by extremely active foraging periods during which birds very frequently touch the water for landing, plunge diving or surface diving (30 landings h−1). Dives were shallow (maximum 2.4 m) but frequent (4.5 dives h−1), most being plunge dives. While chasing highly mobile prey such as flying fish, boobies have adopted a very active and specific hunting behaviour, but the use of wind allows them to reduce travelling costs through their extensive use of gliding. During the foraging and travelling phases birds climb regularly to altitudes of 20–50 m to spot prey or congeners. During the final phase of the flight, they climb to high altitudes, up to 500 m, probably to avoid attacks by frigatebirds along the coasts. This study demonstrates the use by boobies of a series of very specific flight and activity patterns that have probably been selected as adaptations to the conditions of tropical waters. PMID:15875570

  1. The three-dimensional flight of red-footed boobies: adaptations to foraging in a tropical environment?

    PubMed

    Weimerskirch, H; Le Corre, M; Ropert-Coudert, Y; Kato, A; Marsac, F

    2005-01-07

    In seabirds a broad variety of morphologies, flight styles and feeding methods exist as an adaptation to optimal foraging in contrasted marine environments for a wide variety of prey types. Because of the low productivity of tropical waters it is expected that specific flight and foraging techniques have been selected there, but very few data are available. By using five different types of high-precision miniaturized logger (global positioning systems, accelerometers, time depth recorders, activity recorders, altimeters) we studied the way a seabird forages over tropical waters. Red-footed boobies forage during the day and never at night, probably as a result of predation risks. They make extensive use of wind conditions, flying preferentially with crosswinds at a median speed of 38 km h(-1) and reaching the highest speeds with tail winds. They spent 66% of the foraging trip in flight, using a flap-glide flight and gliding for 68% of the flight. Low-cost travelling was regularly interrupted by extremely active foraging periods during which birds very frequently touch the water for landing, plunge diving or surface diving (30 landings h(-1)). Dives were shallow (maximum 2.4 m) but frequent (4.5 dives h(-1)), most being plunge dives. While chasing highly mobile prey such as flying fish, boobies have adopted a very active and specific hunting behaviour, but the use of wind allows them to reduce travelling costs through their extensive use of gliding. During the foraging and travelling phases birds climb regularly to altitudes of 20-50 m to spot prey or congeners. During the final phase of the flight, they climb to high altitudes, up to 500 m, probably to avoid attacks by frigatebirds along the coasts. This study demonstrates the use by boobies of a series of very specific flight and activity patterns that have probably been selected as adaptations to the conditions of tropical waters.

  2. Deconvolution of post-adaptive optics images of faint circumstellar environments by means of the inexact Bregman procedure

    NASA Astrophysics Data System (ADS)

    Benfenati, A.; La Camera, A.; Carbillet, M.

    2016-02-01

    Aims: High-dynamic range images of astrophysical objects present some difficulties in their restoration because of the presence of very bright point-wise sources surrounded by faint and smooth structures. We propose a method that enables the restoration of this kind of image by taking these sources into account and, at the same time, improving the contrast enhancement in the final image. Moreover, the proposed approach can help to detect the position of the bright sources. Methods: The classical variational scheme in the presence of Poisson noise aims to find the minimum of a functional composed of the generalized Kullback-Leibler function and a regularization functional: the latter function is employed to preserve some characteristic in the restored image. The inexact Bregman procedure substitutes the regularization function with its inexact Bregman distance. This scheme allows us to keep under control the level of inexactness arising in the computed solution and permits us to employ an overestimation of the regularization parameter (which balances the trade-off between the Kullback-Leibler function and the Bregman distance). This aspect is fundamental, since the estimation of this kind of parameter is very difficult in the presence of Poisson noise. Results: The inexact Bregman procedure is tested on a bright unresolved binary star with a faint circumstellar environment. When the sources' position is exactly known, this scheme provides us with very satisfactory results. In case of inexact knowledge of the sources' position, it can in addition give some useful information on the true positions. Finally, the inexact Bregman scheme can also be used when information about the binary star's position concerns a connected region instead of isolated pixels.
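
    Schematically, the variational problem described above can be written as follows (notation illustrative: H is the blurring operator, b the background, y the observed counts, and beta the regularization parameter); in the inexact variant the subgradient p is known only approximately, and monitoring that inexactness is what makes an overestimated beta safe to use.

```latex
\min_{x \ge 0}\; \mathrm{KL}(Hx + b,\, y) \;+\; \beta\, D_R(x, z),
\qquad
D_R(x, z) = R(x) - R(z) - \langle p,\, x - z \rangle,
\quad p \in \partial R(z).
```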

  3. An improved robust blind motion de-blurring algorithm for remote sensing images

    NASA Astrophysics Data System (ADS)

    He, Yulong; Liu, Jin; Liang, Yonghui

    2016-10-01

    Shift-invariant motion blur can be modeled as a convolution of the true latent image and the blur kernel with additive noise. Blind motion de-blurring estimates a sharp image from a motion-blurred image without knowledge of the blur kernel. This paper proposes an improved edge-specific motion de-blurring algorithm that proves well suited to processing remote sensing images. We find that an inaccurate blur kernel is the main cause of low-quality restored images. To improve image quality, we make the following contributions. For robust kernel estimation, first, we adapt the multi-scale scheme to make sure that the edge map can be constructed accurately; second, an effective salient edge selection method based on RTV (Relative Total Variation) is used to extract salient structure from texture; third, an alternating iterative method is introduced to perform kernel optimization; in this step, we adopt the l1 and l0 norms as priors to remove noise and ensure the continuity of the blur kernel. For the final latent image reconstruction, an improved adaptive deconvolution algorithm based on the TV-l2 model is used to recover the latent image; we control the regularization weight adaptively in different regions according to local image characteristics in order to preserve tiny details and eliminate noise and ringing artifacts. Some synthetic remote sensing images are used to test the proposed algorithm, and the results demonstrate that the proposed algorithm obtains an accurate blur kernel and achieves better de-blurring results.

  4. GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications

    PubMed Central

    Papież, Bartłomiej W.; Franklin, James M.; Heinrich, Mattias P.; Gleeson, Fergus V.; Brady, Michael; Schnabel, Julia A.

    2018-01-01

    Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and to provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rules out application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity-preserving prior for motions such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset. PMID:29662918
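
    As a flavor of the key ingredient, here is a minimal 2D guided filter (He et al.) built from box filters; in a Demons-style loop each component of the displacement field would be passed through it with the fixed image as guide, in place of Gaussian smoothing. This is an illustrative sketch only: the registration loop, the multi-scale handling, and the supervoxel-based guidance of the paper are omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=7, eps=1e-3):
    """Edge-preserving smoothing of `src` steered by `guide`;
    all statistics are local means computed with box filters."""
    box = lambda a: uniform_filter(a, size=2 * radius + 1)
    m_i, m_p = box(guide), box(src)
    cov = box(guide * src) - m_i * m_p     # local covariance guide/src
    var = box(guide * guide) - m_i * m_i   # local variance of guide
    a = cov / (var + eps)                  # eps controls edge preservation
    b = m_p - a * m_i
    return box(a) * guide + box(b)

# Inside a Demons-style iteration one would regularize each displacement
# component, e.g. ux = guided_filter(fixed_image, ux), so smoothing is
# suppressed across intensity edges such as sliding organ interfaces.
```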

  5. GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications.

    PubMed

    Papież, Bartłomiej W; Franklin, James M; Heinrich, Mattias P; Gleeson, Fergus V; Brady, Michael; Schnabel, Julia A

    2018-04-01

    Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and to provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rules out application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity-preserving prior for motions such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset.

  6. Shakeout: A New Approach to Regularized Deep Neural Network Training.

    PubMed

    Kang, Guoliang; Li, Jun; Tao, Dacheng

    2018-05-01

    Recent years have witnessed the success of deep neural networks in dealing with plenty of practical problems. Dropout has played an essential role in many successful deep neural networks by inducing regularization in the model training. In this paper, we present a new regularized training approach: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, Shakeout randomly chooses to enhance or reverse each unit's contribution to the next layer. This minor modification of Dropout has a statistical trait: the regularizer induced by Shakeout adaptively combines L0, L1, and L2 regularization terms. Our classification experiments with representative deep architectures on the image datasets MNIST, CIFAR-10 and ImageNet show that Shakeout deals with over-fitting effectively and outperforms Dropout. We empirically demonstrate that Shakeout leads to sparser weights under both unsupervised and supervised settings. Shakeout also leads to a grouping effect of the input units in a layer. As the weights reflect the importance of connections, Shakeout is superior to Dropout, which is valuable for deep model compression. Moreover, we demonstrate that Shakeout can effectively reduce the instability of the training process of the deep architecture.
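
    To make the operation concrete, below is one way to write the Shakeout weight perturbation as a sketch (the exact scaling follows our reading of the paper and should be treated as an assumption): each input unit is either kept, with its weights rescaled and pushed along sign(W), or "reversed", contributing -c*sign(W) instead of the zero that Dropout would produce. The expectation of the perturbed weights equals W, and c = 0 recovers inverted Dropout.

```python
import numpy as np

def shakeout_weights(W, tau=0.5, c=0.1, rng=None):
    """One stochastic draw of Shakeout-perturbed weights (sketch).

    W   : (n_in, n_out) weight matrix
    tau : probability of 'reversing' an input unit
    c   : reversal strength; c == 0 reduces to inverted Dropout
    """
    rng = rng or np.random.default_rng()
    s = np.sign(W)
    keep = rng.random(W.shape[0]) < (1.0 - tau)         # one draw per input unit
    kept = W / (1.0 - tau) + c * tau / (1.0 - tau) * s  # enhanced contribution
    reversed_ = -c * s                                  # reversed contribution
    return np.where(keep[:, None], kept, reversed_)     # E[result] == W
```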

  7. 34 CFR 300.39 - Special education.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... hospitals and institutions, and in other settings; and (ii) Instruction in physical education. (2) Special... their parents as a part of the regular education program. (2) Physical education means— (i) The... (ii) Includes special physical education, adapted physical education, movement education, and motor...

  8. Application of Turchin's method of statistical regularization

    NASA Astrophysics Data System (ADS)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

    During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization, based on the Bayesian approach to the regularization strategy.
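
    In the Gaussian case, the posterior-mean solution of such a statistically regularized deconvolution coincides with Tikhonov regularization with a smoothness operator; the sketch below is that special case (illustrative only: in Turchin's method the strength alpha is itself treated statistically rather than fixed by hand).

```python
import numpy as np

def posterior_mean_deconvolution(K, f, Omega, alpha, sigma=1.0):
    """Posterior mean for f = K @ phi + noise under the smoothness
    prior ~ exp(-alpha/2 * phi^T Omega phi); equals Tikhonov
    regularization with operator Omega and parameter alpha*sigma**2."""
    A = K.T @ K / sigma**2 + alpha * Omega
    return np.linalg.solve(A, K.T @ f / sigma**2)

# A common smoothness prior: Omega penalizes second differences.
n = 50
D2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2, n) second-difference operator
Omega = D2.T @ D2
```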

  9. Effects of Spinal Cord Injury in Heart Rate Variability After Acute and Chronic Exercise: A Systematic Review.

    PubMed

    Buker, Daniel Bueno; Oyarce, Cristóbal Castillo; Plaza, Raúl Smith

    2018-01-01

    Background: Spinal cord injury (SCI) above T6 is followed by a loss of sympathetic supraspinal control of the heart, disturbing the autonomic balance and increasing cardiovascular risk. Heart rate variability (HRV) is a widely used tool for assessing the cardiac autonomic nervous system and positive adaptations after regular exercise in able-bodied subjects. However, adaptations in SCI subjects are not well known. Objectives: To compare HRV between able-bodied and SCI subjects and analyze the effects of chronic and acute exercise on HRV in the SCI group. Methods: We searched MEDLINE, Embase, Web of Science, SciELO, and Google Scholar databases to July 2016. We selected English and Spanish observational or experimental studies reporting HRV after training or acute exercise in SCI patients. We also included studies comparing HRV in SCI individuals with able-bodied subjects. Animal studies and nontraumatic SCI studies were excluded. We screened 279 articles by title and abstract; of these, we fully reviewed 29 articles. Eighteen articles fulfilled the criteria for inclusion in this study. Results: SCI individuals showed lower HRV values in the low frequency band compared to able-bodied subjects. Regular exercise improved HRV in SCI subjects; however, time and intensity data were lacking. HRV decreases after an acute bout of exercise in SCI subjects, but the recovery kinetics are unknown. Conclusion: HRV is affected following SCI. Able-bodied subjects and SCI individuals have different values of HRV. Acute bouts of exercise change HRV temporarily, and chronic exercise might improve autonomic balance in SCI.

  10. Accelerated dynamic EPR imaging using fast acquisition and compressive recovery.

    PubMed

    Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L

    2016-12-01

    Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo, including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite regularization based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations, without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that a combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image requiring less than 10 s of scan time. In addition to image fidelity, the time constants derived from FASTAR also match closely to the ground truth even when a small number of projections are used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Correction of engineering servicing regularity of transporttechnological machines in operational process

    NASA Astrophysics Data System (ADS)

    Makarova, A. N.; Makarov, E. I.; Zakharov, N. S.

    2018-03-01

    In this article, the issue of correcting engineering servicing regularity on the basis of actual dependability data of cars in operation is considered. The purpose of the research is to increase the dependability of transport-technological machines by correcting engineering servicing regularity. The subject of the research is the mechanism by which engineering servicing regularity influences the reliability measure. On the basis of an analysis of previous research, a method of nonparametric estimation of the car failure measure from actual time-to-failure data was chosen. The possibility of describing the dependence of the failure measure on engineering servicing regularity by various mathematical models is considered, and it is shown that the exponential model is the most appropriate for that purpose. The obtained results can be used as a stand-alone method of engineering servicing regularity correction that takes specific operational conditions into account, as well as for improving the technical-economical and economical-stochastic methods. Thus, on the basis of this research, a method for correcting the engineering servicing regularity of transport-technological machines in the operational process was developed. The use of this method will allow decreasing the number of failures.

  12. Stochastic calibration and learning in nonstationary hydroeconomic models

    NASA Astrophysics Data System (ADS)

    Maneta, M. P.; Howitt, R.

    2014-05-01

    Concern about water scarcity and adverse climate events over agricultural regions has motivated a number of efforts to develop operational integrated hydroeconomic models to guide adaptation and the optimal use of water. Once calibrated, these models are used for water management and analysis assuming they remain valid under future conditions. In this paper, we present and demonstrate a methodology that permits the recursive calibration of economic models of agricultural production from noisy but frequently available data. We use a standard economic calibration approach, namely positive mathematical programming (PMP), integrated in a data assimilation algorithm based on the ensemble Kalman filter (EnKF) equations to identify the economic model parameters. A moving average kernel ensures that new and past information on agricultural activity is blended during the calibration process, avoiding loss of information and overcalibration for the conditions of a single year. A regularization constraint akin to standard Tikhonov regularization is included in the filter to ensure its stability even in the presence of parameters with low sensitivity to observations. The results show that the implementation of the PMP methodology within a data assimilation framework based on the EnKF equations is an effective method to calibrate models of agricultural production even with noisy information. The recursive nature of the method incorporates new information as an added value to the known previous observations of agricultural activity without the need to store historical information. The robustness of the method opens the door to the use of new remote sensing algorithms for operational water management.
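
    For orientation, a generic stochastic EnKF analysis step is sketched below; the moving-average kernel and the Tikhonov-like stabilization constraint that the paper adds to the filter are omitted, and all shapes and names are illustrative.

```python
import numpy as np

def enkf_update(E, y, H, R, rng=np.random.default_rng(0)):
    """One stochastic EnKF analysis step.

    E : (n, N) ensemble of parameter vectors (N members)
    y : (m,)   noisy observation of agricultural activity
    H : (m, n) linear observation operator
    R : (m, m) observation-error covariance
    """
    n, N = E.shape
    X = E - E.mean(axis=1, keepdims=True)           # ensemble anomalies
    Pf = X @ X.T / (N - 1)                          # forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain
    Yp = y[:, None] + rng.multivariate_normal(np.zeros_like(y), R, size=N).T
    return E + K @ (Yp - H @ E)                     # updated ensemble
```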

  13. 3D first-arrival traveltime tomography with modified total variation regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher-resolution models than conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacked section shows significant improvements with static corrections from the MTV traveltime tomography.
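
    The decoupling described above can be written schematically as an alternating pair of subproblems (notation illustrative: d denotes observed traveltimes, G the forward operator, m the velocity/slowness model, and u an auxiliary model variable):

```latex
m^{k+1} = \arg\min_{m}\; \|d - G(m)\|_2^2 + \alpha \|m - u^{k}\|_2^2
\quad \text{(Tikhonov-type subproblem, conjugate gradient)},
\qquad
u^{k+1} = \arg\min_{u}\; \alpha \|m^{k+1} - u\|_2^2 + \lambda\,\mathrm{TV}(u)
\quad \text{(L2-TV subproblem, split Bregman)}.
```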

  14. Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection

    PubMed Central

    Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin

    2014-01-01

    Purpose: To enable fast reconstruction of quantitative susceptibility maps with a total variation penalty and automatic regularization parameter selection. Methods: ℓ1-regularized susceptibility mapping is accelerated by variable splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results: Compared to the nonlinear conjugate gradient (CG) solver, the proposed method offers a 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering, and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation, compared to 22 minutes using the conjugate gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. The proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than with the nonlinear CG approach. The utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion: Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
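
    A stripped-down sketch of the "soft thresholding plus FFTs" structure is given below. It uses the simplest possible sparsifier (the image itself) instead of the paper's gradient-domain penalty and magnitude weighting, and the dipole kernel D, scaling, and parameters are illustrative; the point is that the quadratic update reduces to a pointwise division in k-space.

```python
import numpy as np

def soft(v, t):
    """Soft thresholding: the closed-form proximal step of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_dipole_inversion(phi, D, lam=1e-2, mu=1e-1, iters=50):
    """Variable splitting for min_x ||D * FFT(x) - FFT(phi)||^2
    + lam * ||x||_1; every iteration costs a few FFTs plus
    closed-form updates."""
    Phi = np.fft.fftn(phi)
    z = np.zeros_like(phi)
    u = np.zeros_like(phi)
    for _ in range(iters):
        rhs = 2.0 * np.conj(D) * Phi + mu * np.fft.fftn(z - u)
        x = np.real(np.fft.ifftn(rhs / (2.0 * np.abs(D) ** 2 + mu)))
        z = soft(x + u, lam / mu)   # sparsity step
        u = u + x - z               # dual/Bregman update
    return x
```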

  15. Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.

    PubMed

    Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A

    2017-05-01

    Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.
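
    In generic form (schematic; the notation is not taken from the paper), the regularized dual averaging update keeps a running average of data-fidelity gradients and handles the regularizer R only through a minimization/proximal step, so R is never differentiated:

```latex
\bar g^{\,k} = \frac{1}{k} \sum_{t=1}^{k} \nabla \mathcal{D}(x^{t}),
\qquad
x^{k+1} = \arg\min_{x} \Big\{ \langle \bar g^{\,k},\, x \rangle
  + \lambda R(x) + \frac{\beta_k}{k}\, \tfrac{1}{2} \|x\|_2^2 \Big\}.
```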

  16. Efficient integration method for fictitious domain approaches

    NASA Astrophysics Data System (ADS)

    Duczek, Sascha; Gabbert, Ulrich

    2015-10-01

    In the current article, we present an efficient and accurate numerical method for the integration of the system matrices in fictitious domain approaches such as the finite cell method (FCM). In the framework of the FCM, the physical domain is embedded in a geometrically larger domain of simple shape, which is discretized using a regular Cartesian grid of cells. Therefore, a spacetree-based adaptive quadrature technique is normally deployed to resolve the geometry of the structure. Depending on the complexity of the structure under investigation, this method accounts for most of the computational effort. To reduce the computational cost of computing the system matrices, an efficient quadrature scheme based on the divergence theorem (Gauß-Ostrogradsky theorem) is proposed. Using this theorem, the dimension of the integral is reduced by one, i.e. instead of solving the integral over the whole domain, only its contour needs to be considered. In the current paper, we present the general principles of the integration method and its implementation. The results for several two-dimensional benchmark problems highlight its properties. The efficiency of the proposed method is compared to conventional spacetree-based integration techniques.
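
    The dimension-reduction idea rests on the divergence theorem: choosing a vector field F with div F = f (for instance, an antiderivative of f in one coordinate direction), the cell integral becomes a boundary integral, so quadrature is only needed on the cell contour.

```latex
\int_{\Omega} f \,\mathrm{d}\Omega
  = \int_{\Omega} \nabla \cdot \mathbf{F} \,\mathrm{d}\Omega
  = \oint_{\partial \Omega} \mathbf{F} \cdot \mathbf{n} \,\mathrm{d}\Gamma .
```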

  17. Spatial resolution properties of motion-compensated tomographic image reconstruction methods.

    PubMed

    Chun, Se Young; Fessler, Jeffrey A

    2012-07-01

    Many motion-compensated image reconstruction (MCIR) methods have been proposed to correct for subject motion in medical imaging. MCIR methods incorporate motion models to improve image quality by reducing motion artifacts and noise. This paper analyzes the spatial resolution properties of MCIR methods and shows that nonrigid local motion can lead to nonuniform and anisotropic spatial resolution for conventional quadratic regularizers. This undesirable property is akin to the known effects of interactions between heteroscedastic log-likelihoods (e.g., Poisson likelihood) and quadratic regularizers. This effect may lead to quantification errors in small or narrow structures (such as small lesions or rings) of reconstructed images. This paper proposes novel spatial regularization design methods for three different MCIR methods that account for known nonrigid motion. We develop MCIR regularization designs that provide approximately uniform and isotropic spatial resolution and that match a user-specified target spatial resolution. Two-dimensional PET simulations demonstrate the performance and benefits of the proposed spatial regularization design methods.

  18. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    We develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. Here it is deployed within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method was able to overcome the inherent limitation of the computationally expensive MRM-based automated search for the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for deployment in real time.
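
    A minimal sketch of the ingredients using SciPy is shown below: damped LSQR (which runs on Lanczos bidiagonalization internally) solves the regularized problem, and a Nelder-Mead simplex search picks the parameter. The criterion shown is a crude L-curve-style placeholder, not the objective optimized in the paper; J, b, and all parameters are assumptions.

```python
import numpy as np
from scipy.sparse.linalg import lsqr
from scipy.optimize import minimize

def reconstruct(J, b, lam):
    """Damped LSQR: solves min ||J x - b||^2 + lam^2 ||x||^2."""
    return lsqr(J, b, damp=lam)[0]

def criterion(log_lam, J, b):
    """Placeholder model-fit criterion evaluated at exp(log_lam)."""
    lam = np.exp(log_lam).item()
    x = reconstruct(J, b, lam)
    r = J @ x - b
    return np.linalg.norm(r) * np.linalg.norm(x)  # L-curve-style proxy

# Simplex (Nelder-Mead) search over log(lambda):
# res = minimize(criterion, x0=[np.log(1e-2)], args=(J, b),
#                method="Nelder-Mead")
```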

  19. 29 CFR 778.209 - Method of inclusion of bonus in regular rate.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the "Regular Rate" Bonuses § 778.209 Method of inclusion of...

  20. 29 CFR 778.209 - Method of inclusion of bonus in regular rate.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 3 2013-07-01 2013-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the "Regular Rate" Bonuses § 778.209 Method of inclusion of...

  1. 29 CFR 778.209 - Method of inclusion of bonus in regular rate.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the "Regular Rate" Bonuses § 778.209 Method of inclusion of...

  2. 29 CFR 778.209 - Method of inclusion of bonus in regular rate.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the "Regular Rate" Bonuses § 778.209 Method of inclusion of...

  3. 29 CFR 778.209 - Method of inclusion of bonus in regular rate.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 3 2011-07-01 2011-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the "Regular Rate" Bonuses § 778.209 Method of inclusion of...

  4. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography

    PubMed Central

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-01-01

    Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290
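
    The maxi-min design objective can be summarized compactly (schematic; theta collects the Gaussian basis-function coefficients of the FFM pattern, beta the regularization strength, and d'_j the detectability index at sample location j):

```latex
\max_{\theta,\; \beta} \;\; \min_{j \,\in\, \text{locations}} \; d'_{j}(\theta, \beta).
```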

  5. Joint optimization of fluence field modulation and regularization in task-driven computed tomography

    NASA Astrophysics Data System (ADS)

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-03-01

    Purpose: This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods: We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results: The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions: The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.

  6. Low complexity Reed-Solomon-based low-density parity-check design for software defined optical transmission system based on adaptive puncturing decoding algorithm

    NASA Astrophysics Data System (ADS)

    Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua

    2016-08-01

    We propose and demonstrate a low-complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Part of the received codeword and the corresponding columns of the parity-check matrix can be punctured to reduce the calculation complexity, with the parity-check matrix adapted during the decoding process. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five iterations.

  7. Multiple graph regularized protein domain ranking.

    PubMed

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
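
    One way to write a multiple-graph regularized ranking objective of this kind is sketched below (schematic; the notation is not taken from the paper): f denotes the ranking scores, y the query labels, L_k the Laplacian of the k-th graph, and mu_k its learned weight; f and mu are then optimized alternately.

```latex
\min_{f,\;\mu \ge 0,\; \sum_k \mu_k = 1} \;\;
  \sum_{k=1}^{K} \mu_k\, f^{\top} L_k f
  \;+\; \alpha \, \|f - y\|_2^2
  \;+\; \gamma \, \|\mu\|_2^2 .
```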

  8. The hypergraph regularity method and its applications

    PubMed Central

    Rödl, V.; Nagle, B.; Skokan, J.; Schacht, M.; Kohayakawa, Y.

    2005-01-01

    Szemerédi's regularity lemma asserts that every graph can be decomposed into relatively few random-like subgraphs. This random-like behavior enables one to find and enumerate subgraphs of a given isomorphism type, yielding the so-called counting lemma for graphs. The combined application of these two lemmas is known as the regularity method for graphs and has proved useful in graph theory, combinatorial geometry, combinatorial number theory, and theoretical computer science. Here, we report on recent advances in the regularity method for k-uniform hypergraphs, for arbitrary k ≥ 2. This method, purely combinatorial in nature, gives alternative proofs of density theorems originally due to E. Szemerédi, H. Furstenberg, and Y. Katznelson. Further results in extremal combinatorics also have been obtained with this approach. The two main components of the regularity method for k-uniform hypergraphs, the regularity lemma and the counting lemma, have been obtained recently: Rödl and Skokan (based on earlier work of Frankl and Rödl) generalized Szemerédi's regularity lemma to k-uniform hypergraphs, and Nagle, Rödl, and Schacht succeeded in proving a counting lemma accompanying the Rödl–Skokan hypergraph regularity lemma. The counting lemma is proved by reducing the counting problem to a simpler one previously investigated by Kohayakawa, Rödl, and Skokan. Similar results were obtained independently by W. T. Gowers, following a different approach. PMID:15919821

  9. Multiple graph regularized protein domain ranking

    PubMed Central

    2012-01-01

    Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. PMID:23157331

  10. Harnessing the Prokaryotic Adaptive Immune System as a Eukaryotic Antiviral Defense

    PubMed Central

    Price, Aryn A.; Grakoui, Arash; Weiss, David S.

    2016-01-01

    Clustered regularly interspaced short palindromic repeats-CRISPR associated (CRISPR-Cas) systems are sequence-specific, RNA-directed endonuclease complexes that bind and cleave nucleic acids. These systems evolved within prokaryotes as adaptive immune defenses to target and degrade nucleic acids derived from bacteriophages and other foreign genetic elements. The antiviral function of these systems has now been exploited to combat eukaryotic viruses throughout the viral life cycle. Here we discuss current advances in CRISPR-Cas9 technology as a eukaryotic antiviral defense. PMID:26852268

  11. [Dominant structural and functional adaptations of distant chemosensory systems in phylogenesis of Gastropoda].

    PubMed

    Zaĭtseva, O V

    2000-08-01

    The invasion of land by gastropods, independent and repeated in the course of their evolution, was shown to be accompanied by the appearance of organisationally similar olfactory tentacular organs and special integrative centres. In different phylogenetic groups, the majority of primarily and secondarily aquatic gastropods have a different, more primitive organisation of the tentacular sensory system compared to the terrestrial species. Regularities of the phylogenetic adaptations of mollusks to their habitat and lifestyle are discussed.

  12. Local/non-local regularized image segmentation using graph-cuts: application to dynamic and multispectral MRI.

    PubMed

    Hanson, Erik A; Lundervold, Arvid

    2013-11-01

    Multispectral, multichannel, or time series image segmentation is important for image analysis in a wide range of applications. Regularization of the segmentation is commonly performed using local image information, causing the segmented image to be locally smooth or piecewise constant. A new spatial regularization method, incorporating non-local information, was developed and tested. Our spatial regularization method applies to feature space classification in multichannel images such as color images and MR image sequences. The spatial regularization involves local edge properties, region boundary minimization, as well as non-local similarities. The method is implemented in a discrete graph-cut setting allowing fast computations. The method was tested on multidimensional MRI recordings from human kidney and brain in addition to simulated MRI volumes. The proposed method successfully segments regions with both smooth and complex non-smooth shapes with a minimum of user interaction.

  13. Image deblurring based on nonlocal regularization with a non-convex sparsity constraint

    NASA Astrophysics Data System (ADS)

    Zhu, Simiao; Su, Zhenming; Li, Lian; Yang, Yi

    2018-04-01

    In recent years, nonlocal regularization methods for image restoration (IR) have drawn more and more attention due to the promising results obtained when compared to traditional local regularization methods. Despite the success of this technique, most existing methods exploit a convex regularizing functional in order to obtain computational efficiency, which is equivalent to imposing a convex prior on the nonlocal difference operator output. However, our experiments illustrate that the empirical distribution of the output of the nonlocal difference operator, especially in the seminal work of Kheradmand et al., is extremely heavy-tailed and is therefore better modeled by a non-convex prior than by a convex one. Therefore, in this paper, we propose a nonlocal regularization-based method with a non-convex sparsity constraint for image deblurring. Finally, an effective algorithm is developed to solve the corresponding non-convex optimization problem. The experimental results demonstrate the effectiveness of the proposed method.
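
    A standard example of a non-convex sparsity constraint matching a heavy-tailed empirical distribution is an l_p quasi-norm with 0 < p < 1 on the nonlocal difference operator output (illustrative; the exponent and any weights are assumptions, not the paper's exact choice):

```latex
R(u) = \sum_{i} \big| (\nabla_{\mathrm{NL}}\, u)_{i} \big|^{p},
\qquad 0 < p < 1 .
```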

  14. Word Recognition Reflects Dimension-Based Statistical Learning

    ERIC Educational Resources Information Center

    Idemaru, Kaori; Holt, Lori L.

    2011-01-01

    Speech processing requires sensitivity to long-term regularities of the native language yet demands listeners to flexibly adapt to perturbations that arise from talker idiosyncrasies such as nonnative accent. The present experiments investigate whether listeners exhibit "dimension-based statistical learning" of correlations between acoustic…

  15. 75 FR 63438 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-15

    ...: National Oceanic and Atmospheric Administration (NOAA). Title: Conflict of Interest Disclosure for.... Form Number(s): NA. Type of Request: Regular submission (renewal of a currently approved information... adapt the National Academy of Sciences (NAS) policy for evaluating conflicts of interest when selecting...

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Z.; Department of Applied Mathematics and Mechanics, University of Science and Technology Beijing, Beijing 100083; Lin, P.

    In this paper, we investigate numerically a diffuse interface model for the Navier–Stokes equation with fluid–fluid interface when the fluids have different densities [48]. Under minor reformulation of the system, we show that there is a continuous energy law underlying the system, assuming that all variables have reasonable regularities. It is shown in the literature that an energy law preserving method will perform better for multiphase problems. Thus for the reformulated system, we design a C^0 finite element method and a special temporal scheme where the energy law is preserved at the discrete level. Such a discrete energy law (almost the same as the continuous energy law) for this variable density two-phase flow model has never been established before with C^0 finite elements. A Newton method is introduced to linearise the highly non-linear system of our discretization scheme. Some numerical experiments are carried out using the adaptive mesh to investigate the scenario of coalescing and rising drops with differing density ratio. The snapshots for the evolution of the interface together with the adaptive mesh at different times are presented to show that the evolution, including the break-up/pinch-off of the drop, can be handled smoothly by our numerical scheme. The discrete energy functional for the system is examined to show that the energy law at the discrete level is preserved by our scheme.

  17. TEMPERAMENTAL ADAPTABILITY, PERSISTENCE, AND REGULARITY: PARENTAL RATINGS OF NORWEGIAN INFANTS AGED 6 TO 12 MONTHS, WITH SOME IMPLICATIONS FOR PREVENTIVE PRACTICE.

    PubMed

    Olafsen, Kåre S; Ulvund, Stein Erik; Torgersen, Anne Mari; Wentzel-Larsen, Tore; Smith, Lars; Moe, Vibeke

    2018-03-01

    There is a need for standardized measures of infant temperament to strengthen current practices in prevention and early intervention. The present study provides Norwegian data on the Cameron-Rice Infant Temperament Questionnaire (CRITQ; J.R. Cameron & D.C. Rice, 1986a), which comprises 46 items and is used within a U.S. health maintenance organization. The CRITQ was filled out by mothers and fathers at 6 and again at 12 months as part of a longitudinal study of mental health during the first years of life (the "Little in Norway" study, N = 1,041 families enrolled; V. Moe & L. Smith, 2010). Results showed that internal consistencies were comparable with those in the U.S. data. The temperament dimensions of persistence, adaptability, and regularity had acceptable or close-to-acceptable reliabilities in the U.S. study as well as in this study, and were unifactorial in confirmatory factor analysis. These dimensions are the focus of this article. Findings concerning parents' differential ratings of their infants on the three dimensions are reported, as is the stability of parents' ratings of temperament from 6 to 12 months. In addition, results on the relation between temperament and parenting stress are presented. The study suggests that temperamental adaptability, persistence, and regularity may be relevant when assessing infant behavior, and may be applied in systematic prevention trials for families with infants. The inclusion of concepts related to individual differences in response tendencies and regulatory efforts may broaden the understanding of parent-infant transactions, and thus enrich prevention and sensitizing interventions with the aim of assisting infants' development. © 2018 Michigan Association for Infant Mental Health.

  18. Cerebellum, temporal predictability and the updating of a mental model.

    PubMed

    Kotz, Sonja A; Stockert, Anika; Schwartze, Michael

    2014-12-19

    We live in a dynamic and changing environment, which necessitates that we adapt to and efficiently respond to changes of stimulus form ('what') and stimulus occurrence ('when'). Consequently, behaviour is optimal when we can anticipate both the 'what' and 'when' dimensions of a stimulus. For example, to perceive a temporally expected stimulus, a listener needs to establish a fairly precise internal representation of its external temporal structure, a function ascribed to classical sensorimotor areas such as the cerebellum. Here we investigated how patients with cerebellar lesions and healthy matched controls exploit temporal regularity during auditory deviance processing. We expected modulations of the N2b and P3b components of the event-related potential in response to deviant tones, and also a stronger P3b response when deviant tones are embedded in temporally regular compared to irregular tone sequences. We further tested to what degree structural damage to the cerebellar temporal processing system affects the N2b and P3b responses associated with voluntary attention to change detection and the predictive adaptation of a mental model of the environment, respectively. Results revealed that healthy controls and cerebellar patients display an increased N2b response to deviant tones independent of temporal context. However, while healthy controls showed the expected enhanced P3b response to deviant tones in temporally regular sequences, the P3b response in cerebellar patients was significantly smaller in these sequences. The current data provide evidence that structural damage to the cerebellum affects the predictive adaptation to the temporal structure of events and the updating of a mental model of the environment under voluntary attention. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  19. Cerebellum, temporal predictability and the updating of a mental model

    PubMed Central

    Kotz, Sonja A.; Stockert, Anika; Schwartze, Michael

    2014-01-01

    We live in a dynamic and changing environment, which necessitates that we adapt to and efficiently respond to changes of stimulus form (‘what’) and stimulus occurrence (‘when’). Consequently, behaviour is optimal when we can anticipate both the ‘what’ and ‘when’ dimensions of a stimulus. For example, to perceive a temporally expected stimulus, a listener needs to establish a fairly precise internal representation of its external temporal structure, a function ascribed to classical sensorimotor areas such as the cerebellum. Here we investigated how patients with cerebellar lesions and healthy matched controls exploit temporal regularity during auditory deviance processing. We expected modulations of the N2b and P3b components of the event-related potential in response to deviant tones, and also a stronger P3b response when deviant tones are embedded in temporally regular compared to irregular tone sequences. We further tested to what degree structural damage to the cerebellar temporal processing system affects the N2b and P3b responses associated with voluntary attention to change detection and the predictive adaptation of a mental model of the environment, respectively. Results revealed that healthy controls and cerebellar patients display an increased N2b response to deviant tones independent of temporal context. However, while healthy controls showed the expected enhanced P3b response to deviant tones in temporally regular sequences, the P3b response in cerebellar patients was significantly smaller in these sequences. The current data provide evidence that structural damage to the cerebellum affects the predictive adaptation to the temporal structure of events and the updating of a mental model of the environment under voluntary attention. PMID:25385781

  20. The persistence of the attentional bias to regularities in a changing environment.

    PubMed

    Yu, Ru Qi; Zhao, Jiaying

    2015-10-01

    The environment often is stable, but some aspects may change over time. The challenge for the visual system is to discover and flexibly adapt to the changes. We examined how attention is shifted in the presence of changes in the underlying structure of the environment. In six experiments, observers viewed four simultaneous streams of objects while performing a visual search task. In the first half of each experiment, the stream in the structured location contained regularities, the shapes in the random location were randomized, and gray squares appeared in two neutral locations. In the second half, the stream in the structured or the random location may change. In the first half of all experiments, visual search was facilitated in the structured location, suggesting that attention was consistently biased toward regularities. In the second half, this bias persisted in the structured location when no change occurred (Experiment 1), when the regularities were removed (Experiment 2), or when new regularities embedded in the original or novel stimuli emerged in the previously random location (Experiments 3 and 6). However, visual search was numerically but no longer reliably faster in the structured location when the initial regularities were removed and new regularities were introduced in the previously random location (Experiment 4), or when novel random stimuli appeared in the random location (Experiment 5). This suggests that the attentional bias was weakened. Overall, the results demonstrate that the attentional bias to regularities was persistent but also sensitive to changes in the environment.

  1. A multiplicative regularization for force reconstruction

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2017-02-01

    Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter, which can be computed numerically using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this particular reason, it could be of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach to provide consistent reconstructions.
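
    A minimal numerical sketch of the multiplicative idea (a generic product functional, not necessarily the authors' exact formulation): minimizing J(x) = ||Ax - b||^2 * ||Lx||^2 and setting its gradient to zero yields a Tikhonov-like system whose parameter equals the current ratio of residual to penalty, so the amount of regularization re-adjusts itself at every iteration without being chosen beforehand.

        import numpy as np

        def multiplicative_regularization(A, b, L=None, n_iter=50, tol=1e-8):
            """Minimize J(x) = ||Ax - b||^2 * ||Lx||^2 by fixed-point iteration.

            Zeroing the gradient of the product functional gives the system
            (A^T A + lam_k L^T L) x = A^T b with
            lam_k = ||A x_k - b||^2 / ||L x_k||^2, so the effective
            regularization parameter adapts itself at each step.
            """
            n = A.shape[1]
            L = np.eye(n) if L is None else L
            x = np.linalg.lstsq(A, b, rcond=None)[0]  # plain least-squares start
            lam = 0.0
            for _ in range(n_iter):
                lam = np.sum((A @ x - b) ** 2) / max(np.sum((L @ x) ** 2), 1e-30)
                x_new = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
                if np.linalg.norm(x_new - x) <= tol * (np.linalg.norm(x) + 1e-30):
                    return x_new, lam
                x = x_new
            return x, lam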

  2. Regularization techniques for backward--in--time evolutionary PDE problems

    NASA Astrophysics Data System (ADS)

    Gustafsson, Jonathan; Protas, Bartosz

    2007-11-01

    Backward-in-time evolutionary PDE problems have applications in the recently proposed retrograde data assimilation. We consider the terminal value problem for the Kuramoto-Sivashinsky equation (KSE) in a 1D periodic domain as our model system. The KSE, proposed as a model for interfacial and combustion phenomena, is also often adopted as a toy model for hydrodynamic turbulence because of its multiscale and chaotic dynamics. Backward-in-time problems are typical examples of ill-posed problems, in which disturbances are amplified exponentially during the backward march. Regularization is required to solve such problems efficiently, and we consider approaches in which the original ill-posed problem is approximated with a less ill-posed problem obtained by adding a regularization term to the original equation. While such techniques are relatively well understood for linear problems, they are less understood in the present nonlinear setting. We consider regularization terms with fixed magnitudes and also explore a novel approach in which these magnitudes are adapted dynamically using simple concepts from control theory.
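
    As an illustration of the "less ill-posed approximation" strategy, the sketch below marches the 1D heat equation (a much simpler stand-in for the KSE) backward in time in Fourier space, with a hypothetical fourth-order regularization term of fixed magnitude eps that caps the exponential amplification of high wavenumbers.

        import numpy as np

        def backward_heat_regularized(u_final, nu=1.0, eps=1e-2, T=0.1, nt=100,
                                      Lx=2 * np.pi):
            """March u_t = -nu * u_xx backward over [0, T] from data at t = T.

            The exact backward solution amplifies Fourier mode k by
            exp(nu * k^2 * T), which blows up for noisy data.  The added
            regularization term (spectral symbol -eps * k^4) makes the
            effective growth rate nu * k^2 - eps * k^4 negative for large k,
            yielding a less ill-posed approximate problem.
            """
            n = u_final.size
            k = 2 * np.pi * np.fft.fftfreq(n, d=Lx / n)  # angular wavenumbers
            dt = T / nt
            u_hat = np.fft.fft(u_final)
            for _ in range(nt):
                u_hat *= np.exp((nu * k ** 2 - eps * k ** 4) * dt)
            return np.fft.ifft(u_hat).real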

  3. 3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pei, Yuru, E-mail: peiyuru@cis.pku.edu.cn; Ai, Xin

    Purpose: Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. Methods: The authors propose a 3D exemplar-based random walk method of tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. To begin with, the pure random walk method is used to obtain an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors conduct a regularization using 3D exemplar registration, as well as label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours obtained from the random-walk-based segmentation. The soft constraints on voxel labeling are defined by the shape-based foreground dentine probability acquired by the exemplar registration, as well as the appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume-of-interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of one-shot label propagation in the VOI, this iterative refinement process achieves a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. Results: The proposed method was applied for tooth segmentation of twenty clinically captured CBCT images. Three metrics, including the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the mean surface deviation (MSD), were used to quantitatively analyze the segmentation of anterior teeth including incisors and canines, premolars, and molars. The segmentation of the anterior teeth achieved a DSC up to 98%, a JSC of 97%, and an MSD of 0.11 mm compared with manual segmentation. For the premolars, the average values of DSC, JSC, and MSD were 98%, 96%, and 0.12 mm, respectively. The proposed method yielded a DSC of 95%, a JSC of 89%, and an MSD of 0.26 mm for molars. Aside from the interactive definition of label priors by the user, automatic tooth segmentation can be achieved in an average of 1.18 min. Conclusions: The proposed technique enables an efficient and reliable tooth segmentation from CBCT images. This study makes it clinically practical to segment teeth from CBCT images, thus facilitating pre- and interoperative uses of dental morphologies in maxillofacial and orthodontic treatments.

  4. Image denoising for real-time MRI.

    PubMed

    Klosowski, Jakob; Frahm, Jens

    2017-03-01

    To develop an image noise filter suitable for MRI in real time (acquisition and display), which preserves small isolated details and efficiently removes background noise without introducing blur, smearing, or patch artifacts. The proposed method extends the nonlocal means algorithm to adapt the influence of the original pixel value according to a simple measure for patch regularity. Detail preservation is improved by a compactly supported weighting kernel that closely approximates the commonly used exponential weight, while an oracle step ensures efficient background noise removal. Denoising experiments were conducted on real-time images of healthy subjects reconstructed by regularized nonlinear inversion from radial acquisitions with pronounced undersampling. The filter leads to a signal-to-noise ratio (SNR) improvement of at least 60% without noticeable artifacts or loss of detail. The method visually compares to more complex state-of-the-art filters such as the block-matching three-dimensional (BM3D) filter and in certain cases better matches the underlying noise model. Acceleration of the computation to more than 100 complex frames per second using graphics processing units is straightforward. The sensitivity of nonlocal means to small details can be significantly increased by the simple strategies presented here, which allows partial restoration of SNR in iteratively reconstructed images without introducing a noticeable time delay or image artifacts. Magn Reson Med 77:1340-1352, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
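
    The sketch below illustrates the two ingredients on a toy scale: a compactly supported quadratic kernel in place of the usual exponential weight, and a centre-pixel weight adapted by a crude patch-regularity measure. The parameter values and the regularity heuristic are illustrative assumptions, not the published filter, and the oracle step is omitted.

        import numpy as np

        def nlm_adaptive_center(img, search=3, patch=3, h=0.1):
            """Toy nonlocal means with a regularity-adapted centre weight."""
            img = np.asarray(img, dtype=float)
            pad = patch // 2
            padded = np.pad(img, pad, mode='reflect')
            out = np.empty_like(img)
            rows, cols = img.shape
            for i in range(rows):
                for j in range(cols):
                    p0 = padded[i:i + patch, j:j + patch]
                    num = den = w_max = 0.0
                    for di in range(-search, search + 1):
                        for dj in range(-search, search + 1):
                            ii, jj = i + di, j + dj
                            if (di == 0 and dj == 0) or not (0 <= ii < rows and 0 <= jj < cols):
                                continue
                            q = padded[ii:ii + patch, jj:jj + patch]
                            d2 = np.mean((p0 - q) ** 2)
                            # compact support: weight is exactly zero once d2 >= h^2
                            w = max(1.0 - d2 / (h * h), 0.0) ** 2
                            num += w * img[ii, jj]
                            den += w
                            w_max = max(w_max, w)
                    if den == 0.0:          # nothing matched: keep the pixel
                        out[i, j] = img[i, j]
                        continue
                    # crude regularity measure: mean neighbour weight in [0, 1]
                    regularity = den / ((2 * search + 1) ** 2 - 1)
                    # irregular patch -> boost the original pixel's influence
                    w_self = w_max * (1.0 + 4.0 * (1.0 - regularity))
                    out[i, j] = (num + w_self * img[i, j]) / (den + w_self)
            return out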

  5. [Legg-Calvé-Perthes disease (LCPD). Principles of diagnosis and treatment].

    PubMed

    Manig, M

    2013-10-01

    The clinical course of Legg-Calvé-Perthes disease (LCPD) is variable. Diagnosis and both nonsurgical and surgical methods of treatment have evolved over many decades, from abduction casts and braces to advanced surgical containment methods, which are now the mainstay of treatment. This article presents a general view and a critical evaluation of the literature. The main prognostic factors are patient age at the onset of LCPD, the range of motion and the extent of the necrotic process according to the classifications of Herring and Catterall. The main aims of surgical and nonsurgical treatment of LCPD are to prevent prearthrotic deformity of the femoral head, relieve symptoms, contain the femoral head and restore congruence of the hip joint. Each patient needs to be evaluated individually. Every child must receive an adapted treatment and continued follow-up at regular intervals.

  6. Parameter selection in limited data cone-beam CT reconstruction using edge-preserving total variation algorithms

    NASA Astrophysics Data System (ADS)

    Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr

    2017-12-01

    There are a number of powerful total variation (TV) regularization methods that have great promise in limited data cone-beam CT reconstruction with an enhancement of image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements the edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better with fewer sensitive parameters to tune.

  7. A Fast, Efficient Domain Adaptation Technique for Cross-Domain Electroencephalography (EEG)-Based Emotion Recognition

    PubMed Central

    Chai, Xin; Wang, Qisong; Zhao, Yongping; Li, Yongqiang; Liu, Dan; Liu, Xin; Bai, Ou

    2017-01-01

    Electroencephalography (EEG)-based emotion recognition is an important element in psychiatric health diagnosis for patients. However, the underlying EEG sensor signals are always non-stationary if they are sampled from different experimental sessions or subjects. This results in the deterioration of the classification performance. Domain adaptation methods offer an effective way to reduce the discrepancy of marginal distribution. However, for EEG sensor signals, both marginal and conditional distributions may be mismatched. In addition, the existing domain adaptation strategies always require a high level of additional computation. To address this problem, a novel strategy named adaptive subspace feature matching (ASFM) is proposed in this paper in order to integrate both the marginal and conditional distributions within a unified framework (without any labeled samples from target subjects). Specifically, we develop a linear transformation function which matches the marginal distributions of the source and target subspaces without a regularization term. This significantly decreases the time complexity of our domain adaptation procedure. As a result, both marginal and conditional distribution discrepancies between the source domain and unlabeled target domain can be reduced, and logistic regression (LR) can be applied to the new source domain in order to train a classifier for use in the target domain, since the aligned source domain follows a distribution which is similar to that of the target domain. We compare our ASFM method with six typical approaches using a public EEG dataset with three affective states: positive, neutral, and negative. Both offline and online evaluations were performed. The subject-to-subject offline experimental results demonstrate that our method achieves a mean accuracy and standard deviation of 80.46% and 6.84%, respectively, as compared with a state-of-the-art method, the subspace alignment auto-encoder (SAAE), which achieves values of 77.88% and 7.33% on average, respectively. For the online analysis, the average classification accuracy and standard deviation of ASFM in the subject-to-subject evaluation for all 15 subjects in the dataset were 75.11% and 7.65%, respectively, a significant performance improvement compared to the best baseline LR, which achieves 56.38% and 7.48%, respectively. The experimental results confirm the effectiveness of the proposed method relative to state-of-the-art methods. Moreover, the computational efficiency of the proposed ASFM method is much better than that of standard domain adaptation; if the numbers of training samples and test samples are controlled within a certain range, it is suitable for real-time classification. It can be concluded that ASFM is a useful and effective tool for decreasing domain discrepancy and reducing performance degradation across subjects and sessions in the field of EEG-based emotion recognition. PMID:28467371
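
    The marginal-distribution step of such subspace matching can be sketched as closed-form PCA subspace alignment (the source basis is mapped onto the target basis by M = Ps^T Pt, with no regularization term and hence no extra hyperparameter), followed by logistic regression on the aligned source features. The iterative conditional-distribution matching with pseudo-labels is omitted, so this is a skeleton of the approach, not the published ASFM.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LogisticRegression

        def subspace_align_and_train(Xs, ys, Xt, dim=30):
            """Align source EEG features to the target subspace, then train LR."""
            dim = min(dim, Xs.shape[1], Xs.shape[0], Xt.shape[0])
            pca_s = PCA(n_components=dim).fit(Xs)
            pca_t = PCA(n_components=dim).fit(Xt)
            Ps, Pt = pca_s.components_.T, pca_t.components_.T  # (d, dim) bases
            M = Ps.T @ Pt                          # closed-form alignment map
            Zs = (Xs - pca_s.mean_) @ Ps @ M       # source features, aligned
            Zt = (Xt - pca_t.mean_) @ Pt           # target features
            clf = LogisticRegression(max_iter=1000).fit(Zs, ys)
            return clf, Zt                         # predict via clf.predict(Zt)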

  8. A Fast, Efficient Domain Adaptation Technique for Cross-Domain Electroencephalography (EEG)-Based Emotion Recognition.

    PubMed

    Chai, Xin; Wang, Qisong; Zhao, Yongping; Li, Yongqiang; Liu, Dan; Liu, Xin; Bai, Ou

    2017-05-03

    Electroencephalography (EEG)-based emotion recognition is an important element in psychiatric health diagnosis for patients. However, the underlying EEG sensor signals are always non-stationary if they are sampled from different experimental sessions or subjects. This results in the deterioration of the classification performance. Domain adaptation methods offer an effective way to reduce the discrepancy of marginal distribution. However, for EEG sensor signals, both marginal and conditional distributions may be mismatched. In addition, the existing domain adaptation strategies always require a high level of additional computation. To address this problem, a novel strategy named adaptive subspace feature matching (ASFM) is proposed in this paper in order to integrate both the marginal and conditional distributions within a unified framework (without any labeled samples from target subjects). Specifically, we develop a linear transformation function which matches the marginal distributions of the source and target subspaces without a regularization term. This significantly decreases the time complexity of our domain adaptation procedure. As a result, both marginal and conditional distribution discrepancies between the source domain and unlabeled target domain can be reduced, and logistic regression (LR) can be applied to the new source domain in order to train a classifier for use in the target domain, since the aligned source domain follows a distribution which is similar to that of the target domain. We compare our ASFM method with six typical approaches using a public EEG dataset with three affective states: positive, neutral, and negative. Both offline and online evaluations were performed. The subject-to-subject offline experimental results demonstrate that our method achieves a mean accuracy and standard deviation of 80.46% and 6.84%, respectively, as compared with a state-of-the-art method, the subspace alignment auto-encoder (SAAE), which achieves values of 77.88% and 7.33% on average, respectively. For the online analysis, the average classification accuracy and standard deviation of ASFM in the subject-to-subject evaluation for all 15 subjects in the dataset were 75.11% and 7.65%, respectively, a significant performance improvement compared to the best baseline LR, which achieves 56.38% and 7.48%, respectively. The experimental results confirm the effectiveness of the proposed method relative to state-of-the-art methods. Moreover, the computational efficiency of the proposed ASFM method is much better than that of standard domain adaptation; if the numbers of training samples and test samples are controlled within a certain range, it is suitable for real-time classification. It can be concluded that ASFM is a useful and effective tool for decreasing domain discrepancy and reducing performance degradation across subjects and sessions in the field of EEG-based emotion recognition.

  9. Regularized Generalized Canonical Correlation Analysis

    ERIC Educational Resources Information Center

    Tenenhaus, Arthur; Tenenhaus, Michel

    2011-01-01

    Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…

  10. Green Jobs: Definition and Method of Appraisal of Chemical and Biological Risks

    PubMed Central

    Cheneval, Erwan; Busque, Marc-Antoine; Ostiguy, Claude; Lavoie, Jacques; Bourbonnais, Robert; Labrèche, France; Bakhiyi, Bouchra; Zayed, Joseph

    2016-01-01

    In the wake of sustainable development, green jobs are developing rapidly, changing the work environment. However, a green job is not automatically a safe job. The aim of the study was to define green jobs and to establish a preliminary risk assessment of chemical substances and biological agents for workers in Quebec. An operational definition was developed, along with criteria and sustainable development principles to discriminate green jobs from regular jobs. The potential toxicity or hazard associated with their chemical and biological exposures was assessed, and the workers’ exposure was appraised using an expert assessment method. A control banding approach was then used to assess risks for workers in selected green jobs. A double-entry model allowed us to set priorities in terms of chemical or biological risk. Among the jobs that present the highest risk potential, several are related to waste management. The developed method is flexible and could be adapted to better appraise the risks that workers are facing or to propose control measures. PMID:26718400

  11. CRISPR-Cas Targeting of Host Genes as an Antiviral Strategy

    PubMed Central

    Chen, Shuliang; Yu, Xiao; Guo, Deyin

    2018-01-01

    Currently, a new gene editing tool—the Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) associated (Cas) system—is becoming a promising approach for genetic manipulation at the genomic level. This simple method, originating from the adaptive immune defense system in prokaryotes, has been developed and applied to antiviral research in humans. Based on the characteristics of virus-host interactions and the basic rules of nucleic acid cleavage or gene activation of the CRISPR-Cas system, it can be used to target both the virus genome and host factors to clear viral reservoirs and prohibit virus infection or replication. Here, we summarize recent progress of the CRISPR-Cas technology in editing host genes as an antiviral strategy. PMID:29337866

  12. Adaptive Sliding Mode Control of Dynamic Systems Using Double Loop Recurrent Neural Network Structure.

    PubMed

    Fei, Juntao; Lu, Cheng

    2018-04-01

    In this paper, an adaptive sliding mode control system using a double loop recurrent neural network (DLRNN) structure is proposed for a class of nonlinear dynamic systems. A new three-layer RNN is proposed to approximate unknown dynamics with two different kinds of feedback loops, where the firing weights and the output signal calculated in the last step are stored and used as the feedback signals in each feedback loop. Since the new structure combines the advantages of internal-feedback and external-feedback NNs, it can acquire the internal state information while the output signal is also captured; thus the newly designed DLRNN can achieve better approximation performance than regular NNs without feedback loops or regular RNNs with a single feedback loop. The proposed DLRNN structure is employed in an equivalent controller to approximate the unknown nonlinear system dynamics, and the parameters of the DLRNN are updated online by adaptive laws to obtain favorable approximation performance. To investigate the effectiveness of the proposed controller, the designed adaptive sliding mode controller with the DLRNN is applied to a z-axis microelectromechanical system (MEMS) gyroscope to control the vibrating dynamics of the proof mass. Simulation results demonstrate that the proposed methodology achieves good tracking, and comparisons of the approximation performance between the radial basis function NN, the RNN, and the DLRNN show that the DLRNN can accurately estimate the unknown dynamics quickly while its internal states remain more stable.

  13. CRISPR-Cas: Adapting to change.

    PubMed

    Jackson, Simon A; McKenzie, Rebecca E; Fagerlund, Robert D; Kieper, Sebastian N; Fineran, Peter C; Brouns, Stan J J

    2017-04-07

    Bacteria and archaea are engaged in a constant arms race to defend against the ever-present threats of viruses and invasion by mobile genetic elements. The most flexible weapons in the prokaryotic defense arsenal are the CRISPR-Cas adaptive immune systems. These systems are capable of selective identification and neutralization of foreign DNA and/or RNA. CRISPR-Cas systems rely on stored genetic memories to facilitate target recognition. Thus, to keep pace with a changing pool of hostile invaders, the CRISPR memory banks must be regularly updated with new information through a process termed CRISPR adaptation. In this Review, we outline the recent advances in our understanding of the molecular mechanisms governing CRISPR adaptation. Specifically, the conserved protein machinery Cas1-Cas2 is the cornerstone of adaptive immunity in a range of diverse CRISPR-Cas systems. Copyright © 2017, American Association for the Advancement of Science.

  14. How does playing adapted sports affect quality of life of people with mobility limitations? Results from a mixed-method sequential explanatory study.

    PubMed

    Côté-Leclerc, Félix; Boileau Duchesne, Gabrielle; Bolduc, Patrick; Gélinas-Lafrenière, Amélie; Santerre, Corinne; Desrosiers, Johanne; Levasseur, Mélanie

    2017-01-25

    Occupations, including physical activity, are a strong determinant of health. However, mobility limitations can restrict opportunities to perform these occupations, which may affect quality of life. Some people will turn to adapted sports to meet their need to be involved in occupations. Little is known, however, about how participation in adapted sports affects the quality of life of people with mobility limitations. This study thus aimed to explore the influence of adapted sports on quality of life in adult wheelchair users. A mixed-method sequential explanatory design was used, including a quantitative and a qualitative component with a clinical research design. A total of 34 wheelchair users aged 18 to 62, who regularly played adapted sports, completed the Quality of Life Index (/30). Their scores were compared to those obtained by people of similar age without limitations (general population). Ten of the wheelchair users also participated in individual semi-structured interviews exploring their perceptions regarding how sports-related experiences affected their quality of life. The participants were 9 women and 25 men with paraplegia, the majority of whom worked and played an individual adapted sport (athletics, tennis or rugby) at the international or national level. People with mobility limitations who participated in adapted sports had a quality of life comparable to the group without limitations (21.9 ± 3.3 vs 22.3 ± 2.9 respectively), except for poorer family-related quality of life (21.0 ± 5.3 vs 24.1 ± 4.9 respectively). Based on the interviews, participants reported that the positive effect of adapted sports on the quality of life of people with mobility limitations operates mainly through the following: personal factors (behavior-related abilities and health), social participation (in general and through interpersonal relationships), and environmental factors (society's perceptions and support from the environment). Some contextual factors, such as resources and the accessibility of organizations and training facilities, are important and contributed indirectly to quality of life. Negative aspects, such as performance-related stress and injury, also have an effect. People with mobility limitations playing adapted sports and people without limitations have a similar quality of life. Participation in adapted sports was identified as having positive effects on self-esteem, self-efficacy, sense of belonging, participation in meaningful activities, society's attitude towards people with mobility limitations, and physical well-being. However, participants stated that this involvement, especially at higher levels, had a negative impact on their social life.

  15. Improving public transportation systems with self-organization: A headway-based model and regulation of passenger alighting and boarding.

    PubMed

    Carreón, Gustavo; Gershenson, Carlos; Pineda, Luis A

    2017-01-01

    The equal headway instability (the fact that a configuration with regular time intervals between vehicles tends to be volatile) is a common regulation problem in public transportation systems. An unsatisfactory regulation results in low efficiency and possible collapses of the service. Computational simulations have shown that self-organizing methods can regulate the headway adaptively beyond the theoretical optimum. In this work, we develop a computer simulation for metro systems fed with real data from the Mexico City Metro to test the current regulatory method with a novel self-organizing approach. The current model considers overall system's data such as minimum and maximum waiting times at stations, while the self-organizing method regulates the headway in a decentralized manner using local information such as the passenger's inflow and the positions of neighboring trains. The simulation shows that the self-organizing method improves the performance over the current one as it adapts to environmental changes at the timescale they occur. The correlation between the simulation of the current model and empirical observations carried out in the Mexico City Metro provides a base to calculate the expected performance of the self-organizing method in case it is implemented in the real system. We also performed a pilot study at the Balderas station to regulate the alighting and boarding of passengers through guide signs on platforms. The analysis of empirical data shows a delay reduction of the waiting time of trains at stations. Finally, we provide recommendations to improve public transportation systems.

  16. Improving public transportation systems with self-organization: A headway-based model and regulation of passenger alighting and boarding

    PubMed Central

    Gershenson, Carlos; Pineda, Luis A.

    2017-01-01

    The equal headway instability—the fact that a configuration with regular time intervals between vehicles tends to be volatile—is a common regulation problem in public transportation systems. An unsatisfactory regulation results in low efficiency and possible collapses of the service. Computational simulations have shown that self-organizing methods can regulate the headway adaptively beyond the theoretical optimum. In this work, we develop a computer simulation for metro systems fed with real data from the Mexico City Metro to test the current regulatory method with a novel self-organizing approach. The current model considers overall system’s data such as minimum and maximum waiting times at stations, while the self-organizing method regulates the headway in a decentralized manner using local information such as the passenger’s inflow and the positions of neighboring trains. The simulation shows that the self-organizing method improves the performance over the current one as it adapts to environmental changes at the timescale they occur. The correlation between the simulation of the current model and empirical observations carried out in the Mexico City Metro provides a base to calculate the expected performance of the self-organizing method in case it is implemented in the real system. We also performed a pilot study at the Balderas station to regulate the alighting and boarding of passengers through guide signs on platforms. The analysis of empirical data shows a delay reduction of the waiting time of trains at stations. Finally, we provide recommendations to improve public transportation systems. PMID:29287120

  17. Haloarcula hispanica CRISPR authenticates PAM of a target sequence to prime discriminative adaptation

    PubMed Central

    Li, Ming; Wang, Rui; Xiang, Hua

    2014-01-01

    The prokaryotic immune system CRISPR/Cas (Clustered Regularly Interspaced Short Palindromic Repeats/CRISPR-associated genes) adapts to foreign invaders by acquiring their short deoxyribonucleic acid (DNA) fragments as spacers, which guide subsequent interference to foreign nucleic acids based on sequence matching. The adaptation mechanism avoiding acquiring ‘self’ DNA fragments is poorly understood. In Haloarcula hispanica, we previously showed that CRISPR adaptation requires being primed by a pre-existing spacer partially matching the invader DNA. Here, we further demonstrate that flanking a fully-matched target sequence, a functional PAM (protospacer adjacent motif) is still required to prime adaptation. Interestingly, interference utilizes only four PAM sequences, whereas adaptation-priming tolerates as many as 23 PAM sequences. This relaxed PAM selectivity explains how adaptation-priming maximizes its tolerance of PAM mutations (that escape interference) while avoiding mis-targeting the spacer DNA within CRISPR locus. We propose that the primed adaptation, which hitches and cooperates with the interference pathway, distinguishes target from non-target by CRISPR ribonucleic acid guidance and PAM recognition. PMID:24803673

  18. TH-AB-BRA-02: Automated Triplet Beam Orientation Optimization for MRI-Guided Co-60 Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, D; Thomas, D; Cao, M

    2016-06-15

    Purpose: MRI guided Co-60 provides daily and intrafractional MRI soft tissue imaging for improved target tracking and adaptive radiotherapy. To remedy the low output limitation, the system uses three Co-60 sources at 120° apart, but using all three sources in planning is considerably unintuitive. We automate the beam orientation optimization using column generation, and then solve a novel fluence map optimization (FMO) problem while regularizing the number of MLC segments. Methods: Three patients—1 prostate (PRT), 1 lung (LNG), and 1 head-and-neck boost plan (H&NBoost)—were evaluated. The beamlet dose for 180 equally spaced coplanar beams under a 0.35 T magnetic field was calculated using Monte Carlo. The 60 triplets were selected utilizing the column generation algorithm. The FMO problem was formulated using an L2-norm minimization with an anisotropic total variation (TV) regularization term, which allows for control over the number of MLC segments. Our Fluence Regularized and Optimized Selection of Triplets (FROST) plans were compared against the clinical treatment plans (CLN) produced by an experienced dosimetrist. Results: The mean PTV D95, D98, and D99 differ by −0.02%, +0.12%, and +0.44% of the prescription dose between planning methods, showing the same PTV dose coverage. The mean PTV homogeneity (D95/D5) was 0.9360 (FROST) and 0.9356 (CLN). R50 decreased by 0.07 with FROST. On average, FROST reduced the Dmax and Dmean of OARs by 6.56% and 5.86% of the prescription dose. The manual CLN planning required iterative trial-and-error runs, which is very time consuming, while FROST required minimal human intervention. Conclusions: MRI guided Co-60 therapy needs the output of all sources yet suffers from an unintuitive and laborious manual beam selection process. Automated triplet orientation optimization is shown to be essential to overcome this difficulty and improves the dosimetry. A novel FMO with regularization provides additional control over the number of MLC segments and the treatment time. Varian Medical Systems; NIH grant R01CA188300; NIH grant R43CA183390.

  19. SU-E-T-278: Realization of Dose Verification Tool for IMRT Plan Based On DPM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Jinfeng; Cao, Ruifen; Dai, Yumei

    Purpose: To build a Monte Carlo dose verification tool for IMRT plans by implementing an irradiation source model into the DPM code, and to extend the ability of DPM to handle arbitrary incident angles and irregular, inhomogeneous fields. Methods: The virtual source and the energy spectrum unfolded from accelerator measurement data were combined with optimized intensity maps to calculate the dose distribution of the irregular, inhomogeneous irradiation field. The irradiation source model of the accelerator was substituted by a grid-based surface source. The contour and the intensity distribution of the surface source were optimized by the ARTS (Accurate/Advanced Radiotherapy System) optimization module based on the tumor configuration. The weight of each emitter was decided by the grid intensity, and its direction by the combination of the virtual source and the emitting position. The photon energy spectrum unfolded from the accelerator measurement data was adjusted by compensating for the electron contamination source. For verification, measured data and a realistic clinical IMRT plan were compared with the DPM dose calculation. Results: The regular field was verified by comparison with the measured data; the differences were acceptable (<2% inside the field, 2–3 mm in the penumbra). The dose calculation of the irregular field by DPM simulation was also compared with that of FSPB (Finite Size Pencil Beam), and the passing rate of the gamma analysis was 95.1% for a peripheral lung cancer case. The regular field and the irregular rotational field were both within the permitted error range. The computing time for regular fields was less than 2 h, and the peripheral lung cancer test took 160 min. Through parallel processing, the adapted DPM could complete the calculation of an IMRT plan within half an hour. Conclusion: The adapted, parallelized DPM code with the irradiation source model is faster than classic Monte Carlo codes. Its computational accuracy and speed satisfy clinical requirements, and it is expected to become a Monte Carlo dose verification tool for IMRT plans. Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03040000); National Natural Science Foundation of China (81101132)

  20. Transfer of Training from Virtual to Real Baseball Batting

    PubMed Central

    Gray, Rob

    2017-01-01

    The use of virtual environments (VE) for training perceptual-motor skills in sports continues to be a rapidly growing area. However, there is a dearth of research examining whether training in a sports simulation transfers to the real task. In this study, the transfer of perceptual-motor skills trained in an adaptive baseball batting VE to real baseball performance was investigated. Eighty participants were assigned equally to groups undertaking adaptive hitting training in the VE, extra sessions of batting practice in the VE, extra sessions of real batting practice, and a control condition involving no additional training beyond the players’ regular practice. Training involved two 45 min sessions per week for 6 weeks. Performance on a batting test in the VE, in an on-field test of batting, and on a pitch recognition test was measured pre- and post-training. League batting statistics in the season following training and the highest level of competition reached in the following 5 years were also analyzed. For the majority of performance measures, the adaptive VE training group showed a significantly greater improvement from pre- to post-training as compared to the other groups. In addition, players in this group had superior batting statistics in league play and reached higher levels of competition. Training in a VE can be used to improve real, on-field performance, especially when designers take advantage of simulation to provide training methods (e.g., adaptive training) that do not simply recreate the real training situation. PMID:29326627

  1. Adult lactose digestion status and effects on disease.

    PubMed

    Szilagyi, Andrew

    2015-04-01

    Adult assimilation of lactose divides humans into dominant lactase-persistent and recessive nonpersistent phenotypes. To review three medical parameters of lactose digestion, namely: the changing concept of lactose intolerance; the possible impact on diseases of microbial adaptation in lactase-nonpersistent populations; and the possibility that the evolution of lactase has influenced some disease pattern distributions. A PubMed, Google Scholar and manual review of articles was used to provide a narrative review of the topic. The concept of lactose intolerance is changing and merging with food intolerances. Microbial adaptation to regular lactose consumption in lactase-nonpersistent individuals is supported by limited evidence. There is evidence suggestive of a relationship among the geographical distributions of latitude, sunshine exposure and lactase proportional distributions worldwide. The definition of lactose intolerance has shifted away from association with lactose maldigestion. Lactose sensitivity is described equally in lactose digesters and maldigesters. The important medical consequence of withholding dairy foods could have a detrimental impact on several diseases; in addition, microbial adaptation in lactase-nonpersistent populations may alter the risk for some diseases. There is suggestive evidence that the emergence of lactase persistence, together with human migrations before and after its emergence, has impacted modern-day diseases. Lactose maldigestion and lactose intolerance are not synonymous. Withholding dairy foods is a poor method of treating lactose intolerance. Further epidemiological work could shed light on the possible effects of microbial adaptation in lactose maldigesters. The evolutionary impact of lactase may be still ongoing.

  2. Effect of adapted karate training on quality of life and body balance in 50-year-old men

    PubMed Central

    Marie-Ludivine, Chateau-Degat; Papouin, Gérard; Saint-Val, Philippe; Lopez, Antonio

    2010-01-01

    Background Aging is associated with a decrease in physical skills, sometimes accompanied by a change in quality of life (QOL). Long-term martial arts practice has been proposed as an avenue to counter these deleterious effects. The general purpose of this pilot study was to identify the effects of an adapted karate training program on QOL, depression, and motor skills in 50-year-old men. Methods and design Fifteen 50-year-old men were enrolled in a one-year prospective experiment. Participants practiced adapted karate training for 90 minutes three times a week. Testing sessions, involving completion of the MOS 36-item Short Form Health Survey (SF36) and Beck Depression Inventory, as well as motor and effort evaluation, were done at baseline, and six and 12 months. Results Compared with baseline, participants had better Beck Depression Inventory scores after one year of karate training (P < 0.01) and better perception of their physical health (P < 0.01), but not on the mental dimension (P < 0.49). They also improved their reaction time scores for the nondominant hand and sway parameters in the eyes-closed position (P < 0.01). Conclusion Regular long-term karate practice had favorable effects on mood, perception of physical health confirmed by better postural control, and improved performance on objective physical testing. Adapted karate training would be an interesting option for maintaining physical activity in aging. PMID:24198552

  3. Sensor network based solar forecasting using a local vector autoregressive ridge framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J.; Yoo, S.; Heiser, J.

    2016-04-04

    The significant improvements and falling costs of photovoltaic (PV) technology make solar energy a promising resource, yet the cloud-induced variability of surface solar irradiance inhibits its effective use in grid-tied PV generation. Short-term irradiance forecasting, especially on the minute scale, is critically important for grid system stability and auxiliary power source management. Compared to the trending sky imaging devices, irradiance sensors are inexpensive and easy to deploy, but related forecasting methods have not been well researched. The prominent challenge of applying classic time series models to a network of irradiance sensors is to address their varying spatio-temporal correlations due to local changes in cloud conditions. We propose a local vector autoregressive framework with ridge regularization to forecast irradiance without explicitly determining the wind field or cloud movement. By using local training data, our learned forecast model is adaptive to local cloud conditions, and by using regularization, we overcome the risk of overfitting from the limited training data. Our systematic experimental results showed an average of 19.7% RMSE and 20.2% MAE improvement over the benchmark Persistent Model for 1-5 minute forecasts on a comprehensive 25-day dataset.
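
    A minimal sketch of such a local vector-autoregressive ridge forecaster, assuming a short window of recent multi-sensor readings is the only training data; the lag order and ridge weight below are illustrative choices, not the tuned values of the study.

        import numpy as np

        def var_ridge_forecast(history, lam=1.0, order=3):
            """One-step VAR(order) ridge forecast from a (T, n_sensors) array.

            Fitting on the local window only keeps the learned spatio-temporal
            weights adapted to current cloud conditions, while the ridge
            penalty guards against overfitting the limited training data.
            """
            T, n = history.shape
            # design matrix: row for time t holds [x_{t-1}, ..., x_{t-order}]
            X = np.hstack([history[order - k - 1:T - k - 1] for k in range(order)])
            Y = history[order:]                               # targets x_t
            d = X.shape[1]
            W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
            x_last = np.hstack([history[T - k - 1] for k in range(order)])
            return x_last @ W                                 # forecast of x_T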

  4. Water Level Controls on Sap Flux of Canopy Species in Black Ash Wetlands

    Treesearch

    Joseph Shannon; Matthew Van Grinsven; Joshua Davis; Nicholas Bolton; Nam Noh; Thomas Pypker; Randall Kolka

    2018-01-01

    Black ash (Fraxinus nigra Marsh.) exhibits canopy dominance in regularly inundated wetlands, suggesting advantageous adaptation. Black ash mortality due to emerald ash borer (Agrilus planipennis Fairmaire) will alter canopy composition and site hydrology. Retention of these forested wetlands requires understanding black ash...

  5. Does Resistance Training Stimulate Cardiac Muscle Hypertrophy?

    ERIC Educational Resources Information Center

    Bloomer, Richard J.

    2003-01-01

    Reviews the literature on the left ventricular structural adaptations induced by resistance/strength exercise, focusing on human work, particularly well-trained strength athletes engaged in regular, moderate- to high-intensity resistance training (RT). The article discusses both genders and examines the use of anabolic-androgenic steroids in…

  6. Diasporic Communication: Cultural Deviance and Accommodation among Tibetan Exiles in India

    ERIC Educational Resources Information Center

    Dorjee, Tenzin; Giles, Howard; Barker, Valerie

    2011-01-01

    Diasporic communities around the world regularly encounter challenges of preserving their identities and communication practices while adapting to their host social-cultural environment. Grounded in communication accommodation theory (CAT) and informed by recent research on deviance, this study investigated the relationships between Tibetan…

  7. Sport for All. Exercise and Health.

    ERIC Educational Resources Information Center

    Astrand, Per-Olof

    This booklet is divided into seven sections that include the following topics: (a) physical performance, (b) adaptation to inactivity and activity, (c) physiological and medical motives for regular physical activity, (d) training, (e) physical fitness for everyday life, and (f) testing physical fitness and condition. Section one discusses energy…

  8. Combining kernel matrix optimization and regularization to improve particle size distribution retrieval

    NASA Astrophysics Data System (ADS)

    Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei

    2018-05-01

    A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
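
    A sketch of the Tikhonov step with the second-order differential matrix that the study found appropriate: the penalty acts on the discrete curvature of the retrieved distribution rather than on its magnitude.

        import numpy as np

        def tikhonov_second_order(K, b, lam=1e-3):
            """Solve min ||K f - b||^2 + lam * ||D f||^2 for the PSD f."""
            n = K.shape[1]
            # second-order difference (discrete curvature) operator, (n-2, n)
            D = np.zeros((n - 2, n))
            for i in range(n - 2):
                D[i, i:i + 3] = [1.0, -2.0, 1.0]
            return np.linalg.solve(K.T @ K + lam * (D.T @ D), K.T @ b)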

  9. Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Li, Y.

    2016-12-01

    We present a 2D inverse algorithm for frequency-domain marine controlled-source electromagnetic (CSEM) data, which is based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting to bias refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver (MUMPS), we can efficiently compute the electromagnetic fields for multiple sources, along with the parametric sensitivities. We also implement the parallel data domain decomposition approach of Key and Ovall (2011), with the goal of being able to compute accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out using the Gauss-Newton algorithm, and the model perturbations at each iteration step are obtained using the inexact conjugate gradient iteration method. Synthetic test inversions are presented.

  10. Resource Optimization Techniques and Security Levels for Wireless Sensor Networks Based on the ARSy Framework.

    PubMed

    Parenreng, Jumadi Mabe; Kitagawa, Akio

    2018-05-17

    Wireless Sensor Networks (WSNs) with limited battery, central processing units (CPUs), and memory resources are a widely implemented technology for early warning detection systems. The main advantage of WSNs is their ability to be deployed in areas that are difficult to access by humans. In such areas, regular maintenance may be impossible; therefore, WSN devices must utilize their limited resources to operate for as long as possible, but longer operations require maintenance. One method of maintenance is to apply a resource adaptation policy when a system reaches a critical threshold. This study discusses the application of a security level adaptation model, such as the ARSy Framework, for using resources more efficiently. A single node comprising a Raspberry Pi 3 Model B and a DS18B20 temperature sensor was tested in a laboratory under normal and stressful conditions. The result shows that under normal conditions, the system operates approximately three times longer than under stressful conditions. Maintaining the stability of the resources also enables the security level of a network's data output to stay at a high or medium level.
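
    A toy sketch of such a threshold-triggered adaptation policy; the thresholds and level names are illustrative placeholders, not the calibrated values of the ARSy framework.

        def adapt_security_level(battery_pct, cpu_load, level='high'):
            """Step the node's security level down under resource stress."""
            levels = ['low', 'medium', 'high']
            i = levels.index(level)
            if battery_pct < 20 or cpu_load > 0.9:     # critical threshold hit
                i = max(i - 1, 0)                      # shed security cost
            elif battery_pct > 60 and cpu_load < 0.5:  # resources recovered
                i = min(i + 1, 2)                      # restore security
            return levels[i]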

  11. Robust estimation of adaptive tensors of curvature by tensor voting.

    PubMed

    Tong, Wai-Shun; Tang, Chi-Keung

    2005-03-01

    Although curvature estimation from a given mesh or regularly sampled point set is a well-studied problem, it is still challenging when the input consists of a cloud of unstructured points corrupted by misalignment error and outlier noise. Such input is ubiquitous in computer vision. In this paper, we propose a three-pass tensor voting algorithm to robustly estimate curvature tensors, from which accurate principal curvatures and directions can be calculated. Our quantitative estimation is an improvement over the previous two-pass algorithm, where only qualitative curvature estimation (the sign of the Gaussian curvature) is performed. To overcome misalignment errors, our improved method automatically corrects input point locations at subvoxel precision, and also rejects outliers that are uncorrectable. To adapt to different scales locally, we define the RadiusHit of a curvature tensor to quantify estimation accuracy and applicability. Our curvature estimation algorithm has been validated by detailed quantitative experiments, performing better in a variety of standard error metrics (percentage error in curvature magnitudes, absolute angle difference in curvature direction) in the presence of a large amount of misalignment noise.

  12. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    PubMed Central

    Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng

    2012-01-01

    In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making with uncertainty is proposed via incorporating non-adaptive, data-independent random projections and nonparametric kernelized least-squares policy iteration (KLSPI). Random projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for reinforcement learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how random projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, while at lower computational costs. The theoretical foundation underlying this approach is a fast approximation of singular value decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces. PMID:22736969
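
    The core of this pipeline can be sketched as a data-independent Gaussian random projection followed by the least-squares temporal-difference (LSTD) solve at the heart of LSPI; kernelization, sparsification and the policy-improvement loop are omitted.

        import numpy as np

        def lstd_random_projection(phi, phi_next, rewards, d_low=50,
                                   gamma=0.99, seed=0):
            """LSTD value estimation after projecting (T, D) features to d_low dims.

            The random lower-dimensional subspace cuts the cost of the linear
            solve from O(D^3) to O(d_low^3) while approximately preserving
            inner products between feature vectors.
            """
            rng = np.random.default_rng(seed)
            D = phi.shape[1]
            R = rng.normal(0.0, 1.0 / np.sqrt(d_low), size=(D, d_low))
            z, z_next = phi @ R, phi_next @ R
            A = z.T @ (z - gamma * z_next) + 1e-6 * np.eye(d_low)
            b = z.T @ rewards
            w = np.linalg.solve(A, b)   # value of state s: phi(s) @ R @ w
            return w, R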

  13. Subspace Clustering via Learning an Adaptive Low-Rank Graph.

    PubMed

    Yin, Ming; Xie, Shengli; Wu, Zongze; Zhang, Yun; Gao, Junbin

    2018-08-01

    By using a sparse representation or low-rank representation of data, graph-based subspace clustering has recently attracted considerable attention in computer vision, given its capability and efficiency in clustering data. However, the graph weights built using the representation coefficients are not the exact ones as in the traditional deterministic definition. The two steps of representation and clustering are conducted in an independent manner, so an overall optimal result cannot be guaranteed. Furthermore, it is unclear how the clustering performance will be affected by using this graph. For example, the graph parameters, i.e., the weights on edges, have to be artificially pre-specified, while it is very difficult to choose the optimum. To this end, in this paper, a novel subspace clustering method that learns an adaptive low-rank graph affinity matrix is proposed, where the affinity matrix and the representation coefficients are learned in a unified framework. As such, the pre-computed graph regularizer is effectively obviated and better performance can be achieved. Experimental results on several famous databases demonstrate that the proposed method performs better than the state-of-the-art approaches in clustering.

  14. Resource Optimization Techniques and Security Levels for Wireless Sensor Networks Based on the ARSy Framework

    PubMed Central

    Kitagawa, Akio

    2018-01-01

    Wireless Sensor Networks (WSNs) with limited battery, central processing units (CPUs), and memory resources are a widely implemented technology for early warning detection systems. The main advantage of WSNs is their ability to be deployed in areas that are difficult to access by humans. In such areas, regular maintenance may be impossible; therefore, WSN devices must utilize their limited resources to operate for as long as possible, but longer operations require maintenance. One method of maintenance is to apply a resource adaptation policy when a system reaches a critical threshold. This study discusses the application of a security level adaptation model, such as an ARSy Framework, for using resources more efficiently. A single node comprising a Raspberry Pi 3 Model B and a DS18B20 temperature sensor was tested in a laboratory under normal and stressful conditions. The result shows that under normal conditions, the system operates approximately three times longer than under stressful conditions. Maintaining the stability of the resources also enables the security level of a network’s data output to stay at a high or medium level. PMID:29772773

  15. Regularization techniques on least squares non-uniform fast Fourier transform.

    PubMed

    Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena

    2013-05-01

    Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while at the same time preserving the image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT. We used truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the value of the interpolator size after which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator size, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted. Copyright © 2013 John Wiley & Sons, Ltd.
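
    The three regularizers act on the singular values of the interpolator system in different ways; the sketch below contrasts TSVD (small singular values dropped outright) with Tikhonov (all singular values damped), for a generic ill-conditioned matrix, with tol playing the role of the regularization parameter. The L1-regularized variant requires an iterative solver and is omitted.

        import numpy as np

        def regularized_pinv(A, method='tsvd', tol=1e-3):
            """Regularized pseudoinverse of an ill-conditioned (complex) matrix."""
            U, s, Vh = np.linalg.svd(A, full_matrices=False)
            if method == 'tsvd':
                # truncate: discard singular values below tol * s_max
                inv_s = np.where(s >= tol * s[0], 1.0 / np.maximum(s, 1e-300), 0.0)
            elif method == 'tikhonov':
                # damp: s / (s^2 + mu^2) with mu = tol * s_max
                inv_s = s / (s ** 2 + (tol * s[0]) ** 2)
            else:
                raise ValueError(method)
            return (Vh.conj().T * inv_s) @ U.conj().T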

  16. X-ray computed tomography using curvelet sparse regularization.

    PubMed

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
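
    In simplified form, the transform-sparse reconstruction can be sketched with ISTA and a generic orthogonal sparsifying transform W standing in for the curvelet frame (the paper itself uses ADMM): soft-thresholding the transform coefficients at each step is what promotes sparse, edge-like features.

        import numpy as np

        def ista_transform_sparse(A, b, W, lam=0.1, n_iter=200):
            """Solve min 0.5 * ||A x - b||^2 + lam * ||c||_1 with x = W.T @ c.

            W is assumed orthogonal (W.T @ W = W @ W.T = I), so the gradient
            step has Lipschitz constant ||A||_2^2 in coefficient space.
            """
            step = 1.0 / (np.linalg.norm(A, 2) ** 2)
            c = np.zeros(W.shape[0])
            for _ in range(n_iter):
                x = W.T @ c
                grad = W @ (A.T @ (A @ x - b))   # gradient in coefficient space
                c = c - step * grad
                c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)
            return W.T @ c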

  17. Coarse-graining time series data: Recurrence plot of recurrence plots and its application for music

    NASA Astrophysics Data System (ADS)

    Fukino, Miwa; Hirata, Yoshito; Aihara, Kazuyuki

    2016-02-01

    We propose a nonlinear time series method for characterizing two layers of regularity simultaneously. The key of the method is using the recurrence plots hierarchically, which allows us to preserve the underlying regularities behind the original time series. We demonstrate the proposed method with musical data. The proposed method enables us to visualize both the local and the global musical regularities or two different features at the same time. Furthermore, the determinism scores imply that the proposed method may be useful for analyzing emotional response to the music.
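
    A compact sketch of the two-layer construction: an ordinary recurrence plot at the first level, then the same thresholded-distance construction applied to the recurrence plots of successive windows; the window, hop and threshold choices below are illustrative.

        import numpy as np

        def recurrence_plot(x, eps=None):
            """Binary recurrence plot: R[i, j] = 1 when |x_i - x_j| <= eps."""
            x = np.asarray(x, dtype=float)
            d = np.abs(x[:, None] - x[None, :])
            eps = 0.1 * d.max() if eps is None else eps
            return (d <= eps).astype(np.uint8)

        def recurrence_plot_of_recurrence_plots(x, window=64, hop=32, eps=None):
            """Second layer: compare recurrence plots of successive windows."""
            rps = [recurrence_plot(x[s:s + window])
                   for s in range(0, len(x) - window + 1, hop)]
            # distance between two windows = fraction of differing plot entries
            d = np.array([[np.mean(a != b) for b in rps] for a in rps])
            eps = 0.1 * d.max() if eps is None else eps
            return (d <= eps).astype(np.uint8)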

  18. Coarse-graining time series data: Recurrence plot of recurrence plots and its application for music.

    PubMed

    Fukino, Miwa; Hirata, Yoshito; Aihara, Kazuyuki

    2016-02-01

    We propose a nonlinear time series method for characterizing two layers of regularity simultaneously. The key to the method is using recurrence plots hierarchically, which allows us to preserve the underlying regularities behind the original time series. We demonstrate the proposed method with musical data. The proposed method enables us to visualize both the local and the global musical regularities or two different features at the same time. Furthermore, the determinism scores imply that the proposed method may be useful for analyzing emotional response to the music.

  19. Regularization of soft-X-ray imaging in the DIII-D tokamak

    DOE PAGES

    Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...

    2015-03-02

    We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise to signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
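
    The L-curve idea can be sketched with a plain SVD standing in for the generalized SVD used in the SXRIS scheme: scan the regularization parameter, record residual norm against solution norm, and look for the corner of the resulting curve. An illustrative sketch, not the diagnostic's code:

      import numpy as np

      def lcurve_points(A, b, lambdas):
          # For each parameter, record (residual norm, solution norm); the
          # corner of this curve balances data fit against regularization.
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          beta = U.T @ b
          pts = []
          for lam in lambdas:
              x = Vt.T @ (s * beta / (s**2 + lam))  # Tikhonov via SVD
              pts.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
          return pts

      lambdas = np.logspace(-8, 0, 30)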

  20. A novel approach of ensuring layout regularity correct by construction in advanced technologies

    NASA Astrophysics Data System (ADS)

    Ahmed, Shafquat Jahan; Vaderiya, Yagnesh; Gupta, Radhika; Parthasarathy, Chittoor; Marin, Jean-Claude; Robert, Frederic

    2017-03-01

    In advanced technology nodes, layout regularity has become a mandatory prerequisite to create robust designs less sensitive to manufacturing process variations, in order to improve yield and minimize electrical variability. In this paper we describe a method for designing regular full custom layouts based on design and process co-optimization. The method includes various design rule checks that can be used on-the-fly during leaf-cell layout development. We extract a Layout Regularity Index (LRI) from the layouts based on the jogs, alignments and pitches used in the design for any given metal layer. The Regularity Index of a layout is a direct indicator of manufacturing yield and is used to compare the relative health of different layout blocks in terms of process friendliness. The method has been deployed for 28nm and 40nm technology nodes for Memory IP and is being extended to other IPs (IO, standard-cell). We have quantified the gain of layout regularity with the deployed method on printability and electrical characteristics by process-variation (PV) band simulation analysis and have achieved up to 5 nm reduction in PV band.

  1. Development of a Countermeasure to Enhance Postflight Locomotor Adaptability

    NASA Technical Reports Server (NTRS)

    Bloomberg, Jacob J.

    2006-01-01

    Astronauts returning from space flight experience locomotor dysfunction following their return to Earth. Our laboratory is currently developing a gait adaptability training program that is designed to facilitate recovery of locomotor function following a return to a gravitational environment. The training program exploits the ability of the sensorimotor system to generalize from exposure to multiple adaptive challenges during training so that the gait control system essentially learns to learn and therefore can reorganize more rapidly when faced with a novel adaptive challenge. We have previously confirmed that subjects participating in adaptive generalization training programs using a variety of visuomotor distortions can enhance their ability to adapt to a novel sensorimotor environment. Importantly, this increased adaptability was retained even one month after completion of the training period. Adaptive generalization has been observed in a variety of other tasks requiring sensorimotor transformations including manual control tasks and reaching (Bock et al., 2001, Seidler, 2003) and obstacle avoidance during walking (Lam and Dietz, 2004). Taken together, the evidence suggests that a training regimen exposing crewmembers to variation in locomotor conditions, with repeated transitions among states, may enhance their ability to learn how to reassemble appropriate locomotor patterns upon return from microgravity. We believe exposure to this type of training will extend crewmembers' locomotor behavioral repertoires, facilitating the return of functional mobility after long-duration space flight. Our proposed training protocol will compel subjects to develop new behavioral solutions under varying sensorimotor demands. Over time subjects will learn to create appropriate locomotor solutions more rapidly, enabling acquisition of mobility sooner after long-duration space flight. Our laboratory is currently developing adaptive generalization training procedures and the associated flight hardware to implement such a training program during regular inflight treadmill operations. A visual display system will provide variation in visual flow patterns during treadmill exercise. Crewmembers will be exposed to a virtual scene that can translate and rotate in six degrees of freedom during their regular treadmill exercise period. Associated ground-based studies are focused on determining optimal combinations of sensory manipulations (visual flow, body loading and support surface variation) and training schedules that will produce the greatest potential for adaptive flexibility in gait function during exposure to challenging and novel environments. An overview of our progress in these areas will be discussed during the presentation.

  2. Expanding the Universe of "Astronomy on Tap" Public Outreach Events

    NASA Astrophysics Data System (ADS)

    Rice, Emily L.; Levine, Brian; Livermore, Rachael C.; Silverman, Jeffrey M.; LaMassa, Stephanie M.; Tyndall, Amy; Muna, Demitri; Garofali, Kristen; Morris, Brett; Byler, Nell; Fyhrie, Adalyn; Rehnberg, Morgan; Hart, Quyen N.; Connelly, Jennifer L.; Silvia, Devin W.; Morrison, Sarah J.; Agarwal, Bhaskar; Tremblay, Grant; Schwamb, Megan E.

    2016-01-01

    Astronomy on Tap (AoT, astronomyontap.org) is a free public outreach event featuring engaging science presentations in bars, often combined with music, games, and prizes, to encourage a fun, interactive atmosphere. AoT events feature several short astronomy-related presentations primarily by local professional scientists, but also by visiting scientists, students, educators, amateur astronomers, writers, and artists. Events are held in social venues (bars, coffee shops, art galleries, etc.) in order to bring science directly to the public in a relaxed, informal atmosphere. With this we hope to engage a more diverse audience than typical lectures at academic and cultural institutions and to develop enthusiasm for science among voting, tax-paying adults. The flexible format and content of an AoT event is easy to adapt and expand based on the priorities, resources, and interests of local organizers. The social nature of AoT events provides important professional development and networking opportunities in science communication. Since the first New York City event in April 2013, Astronomy on Tap has expanded to more than ten cities globally, including monthly events in NYC, Austin, Seattle, and Tucson; semi-regular events in Columbus, New Haven, Santiago, Toronto, and Denver; occasional (so far) events in Rochester (NY), Baltimore, Lansing, and Washington, DC; and one-off events in Chicago and Taipei. Several venues regularly attract audiences of over 200 people. We have received media coverage online, in print, and occasionally even on radio and television. In this poster we describe the overarching goals and characteristics of AoT events, distinct adaptations of various locations, resources we have developed, and the methods we use to coordinate among the worldwide local organizers.

  3. Sinogram-based adaptive iterative reconstruction for sparse view x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Trinca, D.; Zhong, Y.; Wang, Y.-Z.; Mamyrbayev, T.; Libin, E.

    2016-10-01

    With the availability of more powerful computing processors, iterative reconstruction algorithms have recently been successfully implemented as an approach to achieving significant dose reduction in X-ray CT. In this paper, we propose an adaptive iterative reconstruction algorithm for X-ray CT that is shown to provide results comparable to those obtained by proprietary algorithms, both in terms of reconstruction accuracy and execution time. The proposed algorithm is thus provided for free to the scientific community, for regular use, and for possible further optimization.

  4. A Dictionary Learning Approach with Overlap for the Low Dose Computed Tomography Reconstruction and Its Vectorial Application to Differential Phase Tomography

    PubMed Central

    Mirone, Alessandro; Brun, Emmanuel; Coan, Paola

    2014-01-01

    X-ray based Phase-Contrast Imaging (PCI) techniques have been demonstrated to enhance the visualization of soft tissues in comparison to conventional imaging methods. Nevertheless, the delivered dose as reported in the literature of biomedical PCI applications often equals or exceeds the limits prescribed in clinical diagnostics. The optimization of new computed tomography strategies which include the development and implementation of advanced image reconstruction procedures is thus a key aspect. In this scenario, we implemented a dictionary learning method with a new form of convex functional. This functional contains, in addition to the usual sparsity-inducing and fidelity terms, a new term which forces similarity between overlapping patches in the superimposed regions. The functional depends on two free regularization parameters: a coefficient multiplying the sparsity-inducing norm of the patch basis functions coefficients, and a coefficient multiplying the norm of the differences between patches in the overlapping regions. The solution is found by applying the iterative proximal gradient descent method with FISTA acceleration. The gradient is computed by calculating projection of the solution and its error backprojection at each iterative step. We study the quality of the solution, as a function of the regularization parameters and noise, on synthetic data for which the solution is known a priori. We apply the method on experimental data in the case of Differential Phase Tomography. For this case we use an original approach which consists of using vectorial patches, each patch having two components, one for each gradient component. The resulting algorithm, implemented in the European Synchrotron Radiation Facility tomography reconstruction code PyHST, has proven to be efficient and well-adapted to strongly reduce the required dose and the number of projections in medical tomography. PMID:25531987

  5. A dictionary learning approach with overlap for the low dose computed tomography reconstruction and its vectorial application to differential phase tomography.

    PubMed

    Mirone, Alessandro; Brun, Emmanuel; Coan, Paola

    2014-01-01

    X-ray based Phase-Contrast Imaging (PCI) techniques have been demonstrated to enhance the visualization of soft tissues in comparison to conventional imaging methods. Nevertheless, the delivered dose as reported in the literature of biomedical PCI applications often equals or exceeds the limits prescribed in clinical diagnostics. The optimization of new computed tomography strategies which include the development and implementation of advanced image reconstruction procedures is thus a key aspect. In this scenario, we implemented a dictionary learning method with a new form of convex functional. This functional contains, in addition to the usual sparsity-inducing and fidelity terms, a new term which forces similarity between overlapping patches in the superimposed regions. The functional depends on two free regularization parameters: a coefficient multiplying the sparsity-inducing L1 norm of the patch basis functions coefficients, and a coefficient multiplying the L2 norm of the differences between patches in the overlapping regions. The solution is found by applying the iterative proximal gradient descent method with FISTA acceleration. The gradient is computed by calculating projection of the solution and its error backprojection at each iterative step. We study the quality of the solution, as a function of the regularization parameters and noise, on synthetic data for which the solution is known a priori. We apply the method on experimental data in the case of Differential Phase Tomography. For this case we use an original approach which consists of using vectorial patches, each patch having two components, one for each gradient component. The resulting algorithm, implemented in the European Synchrotron Radiation Facility tomography reconstruction code PyHST, has proven to be efficient and well-adapted to strongly reduce the required dose and the number of projections in medical tomography.
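
    The optimization backbone named here, proximal gradient descent with FISTA acceleration, looks as follows for the simplest case of a single L1 penalty on the coefficients; the paper's functional adds a second quadratic term coupling overlapping patches, which is omitted in this sketch. A and b are small dense stand-ins, and lam is arbitrary.

      import numpy as np

      def fista(A, b, lam, iters=300):
          # FISTA for min_w 0.5||A w - b||^2 + lam * ||w||_1.
          L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of gradient
          w = np.zeros(A.shape[1]); y = w.copy(); t = 1.0
          for _ in range(iters):
              g = A.T @ (A @ y - b)          # gradient of smooth term at y
              v = y - g / L
              w_new = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)
              t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
              y = w_new + ((t - 1) / t_new) * (w_new - w)  # momentum step
              w, t = w_new, t_new
          return w

      rng = np.random.default_rng(0)
      A = rng.standard_normal((60, 100))
      b = A @ np.concatenate([np.ones(5), np.zeros(95)])
      w_hat = fista(A, b, lam=0.1)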

  6. 3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images.

    PubMed

    Pei, Yuru; Ai, Xingsheng; Zha, Hongbin; Xu, Tianmin; Ma, Gengyu

    2016-09-01

    Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. The authors propose a 3D exemplar-based random walk method of tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. To begin with, the pure random walk method is used to obtain an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors conduct a regularization by using 3D exemplar registration, as well as label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours, which are obtained from the random-walk-based segmentation. The soft constraints on voxel labeling are defined by shape-based foreground dentine probability acquired by the exemplar registration, as well as the appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume-of-interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of the one-shot label propagation in the VOI, an iterative refinement process can achieve a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. The proposed method was applied for tooth segmentation of twenty clinically captured CBCT images. Three metrics, including the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the mean surface deviation (MSD), were used to quantitatively analyze the segmentation of anterior teeth including incisors and canines, premolars, and molars. The segmentation of the anterior teeth achieved a DSC up to 98%, a JSC of 97%, and an MSD of 0.11 mm compared with manual segmentation. For the premolars, the average values of DSC, JSC, and MSD were 98%, 96%, and 0.12 mm, respectively. The proposed method yielded a DSC of 95%, a JSC of 89%, and an MSD of 0.26 mm for molars. Aside from the interactive definition of label priors by the user, automatic tooth segmentation can be achieved in an average of 1.18 min. The proposed technique enables an efficient and reliable tooth segmentation from CBCT images. This study makes it clinically practical to segment teeth from CBCT images, thus facilitating pre- and interoperative uses of dental morphologies in maxillofacial and orthodontic treatments.
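
    The label-propagation stage has an off-the-shelf analogue in scikit-image's random_walker, which can illustrate the first pass on a toy volume. The exemplar registration and SVM-derived soft priors that drive the paper's refinement loop are beyond this sketch, and all sizes and seed positions below are arbitrary.

      import numpy as np
      from skimage.segmentation import random_walker

      # Synthetic volume: a bright blob (tooth stand-in) on noisy background.
      rng = np.random.default_rng(0)
      vol = rng.normal(0.0, 0.1, (40, 40, 40))
      vol[12:28, 12:28, 12:28] += 1.0

      # Sparse seeds: 1 = tooth, 2 = background, 0 = unlabeled.
      labels = np.zeros(vol.shape, dtype=int)
      labels[20, 20, 20] = 1
      labels[2, 2, 2] = 2

      seg = random_walker(vol, labels, beta=130, mode='cg')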

  7. C library for topological study of the electronic charge density.

    PubMed

    Vega, David; Aray, Yosslen; Rodríguez, Jesús

    2012-12-05

    The topological study of the electronic charge density is useful to obtain information about the kinds of bonds (ionic or covalent) and the atom charges on a molecule or crystal. For this study, it is necessary to calculate, at every space point, the electronic density and its derivatives up to second order. In this work, a grid-based method for these calculations is described. The library, implemented for three dimensions, is based on a multidimensional Lagrange interpolation in a regular grid; by differentiating the resulting polynomial, the gradient vector, the Hessian matrix and the Laplacian formulas were obtained for every space point. More complex functions such as the Newton-Raphson method (to find the critical points, where the gradient is null) and the Cash-Karp Runge-Kutta method (used to trace the gradient paths) were programmed. Since in some crystals the unit cell has angles different from 90°, the library includes linear transformations to correct the gradient and Hessian when the grid is distorted (inclined). Functions were also developed to handle grid-containing files (grd from the DMol® program, CUBE from the Gaussian® program and CHGCAR from the VASP® program). Each one of these files contains the data for a molecular or crystal electronic property (such as charge density, spin density, electrostatic potential, and others) in a three-dimensional (3D) grid. The library can be adapted to perform the topological study on any regular 3D grid by modifying the code of these functions. Copyright © 2012 Wiley Periodicals, Inc.
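
    The pipeline, grid derivatives followed by a Newton-Raphson search for points where the gradient vanishes, can be sketched compactly. In this illustration central differences and a standard SciPy interpolator stand in for the library's Lagrange-polynomial machinery, and the sampled Gaussian field is an arbitrary toy density.

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      # Sample a scalar field (stand-in for a charge density) on a grid.
      ax = np.linspace(-2, 2, 81)
      X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')
      rho = np.exp(-(X**2 + Y**2 + Z**2))     # single maximum at the origin

      # Grid derivatives (central differences stand in for Lagrange forms).
      h = ax[1] - ax[0]
      grads = np.gradient(rho, h)
      interp_g = [RegularGridInterpolator((ax, ax, ax), g) for g in grads]
      hess = [[np.gradient(grads[i], h)[j] for j in range(3)]
              for i in range(3)]
      interp_h = [[RegularGridInterpolator((ax, ax, ax), hess[i][j])
                   for j in range(3)] for i in range(3)]

      # Newton-Raphson search for a critical point (vanishing gradient).
      p = np.array([0.4, -0.3, 0.2])
      for _ in range(20):
          g = np.array([f(p[None])[0] for f in interp_g])
          H = np.array([[interp_h[i][j](p[None])[0] for j in range(3)]
                        for i in range(3)])
          p = p - np.linalg.solve(H, g)       # converges to the maximum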

  8. Adapted Physical Education Program. 1968 Report.

    ERIC Educational Resources Information Center

    Pittsburgh Public Schools, PA. Office of Research.

    A program was introduced in 1965 to provide individualized physical education for students in grades 1 through 12 who could not participate in regular physical education programs. Twenty-one schools and 1,640 students with a variety of conditions participated. The most frequent limitations of participants were low physical fitness, overweight, and…

  9. Organizational Development Interventions for Enhancing Creativity in the Workplace.

    ERIC Educational Resources Information Center

    Basadur, Min

    1997-01-01

    Evaluates traditional organizational development approaches to crises in commitment and adaptability, and presents a new approach to organizational development based on organizational creativity. Discusses the need to encourage employees to master new thinking skills and create an infrastructure that ensures these skills will be used regularly.…

  10. Mainstreaming: Sharing Ideas, Strategies, Materials, Techniques.

    ERIC Educational Resources Information Center

    Hillside School, Cushing, OK.

    The manual provides teaching approaches based on a model of least to highest modification of instruction, which may be used for a continuum of special education placements ranging from regular classroom through hospital settings. The first section on adaptive techniques (requiring the least modification) includes suggestions to adjust time for…

  11. Improving Middle School Science Learning Using Diagrammatic Reasoning

    ERIC Educational Resources Information Center

    Cromley, Jennifer G.; Weisberg, Steven M.; Dai, Ting; Newcombe, Nora S.; Schunn, Christian D.; Massey, Christine; Merlino, F. Joseph

    2016-01-01

    We explored whether existing curricula can be adapted to increase students' skills at comprehending and learning from diagrams using an intervention delivered by regular middle-school teachers in intact classrooms. Ninety-two teachers in three states implemented our modified materials over six curricular units from two publishers: "Holt"…

  12. Physiological Adaptations to Chronic Endurance Exercise Training in Patients with Coronary Artery Disease.

    ERIC Educational Resources Information Center

    Physician and Sportsmedicine, 1987

    1987-01-01

    In a roundtable format, five doctors explore the reasons why regular physical activity should continue to play a significant role in the rehabilitation of patients with coronary artery disease. Endurance exercise training improves aerobic capacity, reduces blood pressure, and decreases risk. (Author/MT)

  13. Bayesian prestack seismic inversion with a self-adaptive Huber-Markov random-field edge protection scheme

    NASA Astrophysics Data System (ADS)

    Tian, Yu-Kun; Zhou, Hui; Chen, Han-Ming; Zou, Ya-Ming; Guan, Shou-Jun

    2013-12-01

    Seismic inversion is a highly ill-posed problem, due to many factors such as the limited seismic frequency bandwidth and inappropriate forward modeling. To obtain a unique solution, some smoothing constraints, e.g., the Tikhonov regularization, are usually applied. The Tikhonov method can maintain a globally smooth solution, but causes fuzzy structure edges. In this paper we use the Huber-Markov random-field edge protection method in the procedure of inverting three parameters: P-velocity, S-velocity and density. The method avoids blurring structure edges and resists noise. For the parameter to be inverted, the Huber-Markov random field constructs a neighborhood system, which further acts as the vertical and lateral constraints. We use a quadratic Huber edge penalty function within the layer to suppress noise and a linear one on the edges to avoid a fuzzy result. The effectiveness of our method is proved by inverting synthetic data with and without noise. The relationship between the adopted constraints and the inversion results is analyzed as well.
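
    One common form of the Huber penalty behind such schemes is quadratic below a threshold and linear above it. The paper's self-adaptive variant chooses the threshold from the data, whereas in the sketch below T is a free assumption.

      import numpy as np

      def huber(r, T):
          # Quadratic inside the layer (|r| <= T) to suppress noise,
          # linear on edges (|r| > T) to avoid blurring them.
          r = np.asarray(r, dtype=float)
          quad = r**2
          lin = 2.0 * T * np.abs(r) - T**2
          return np.where(np.abs(r) <= T, quad, lin)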

  14. FOREWORD: Tackling inverse problems in a Banach space environment: from theory to applications Tackling inverse problems in a Banach space environment: from theory to applications

    NASA Astrophysics Data System (ADS)

    Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara

    2012-10-01

    Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise.
A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov type but also other regularization methods in Banach spaces are assumptions of the type of variational inequalities that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider an application of Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity-promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1. Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumption on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a practically and highly relevant issue.
It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically and highly relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results. Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.

  15. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
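
    The underlying iteration can be sketched generically: linearize the forward operator at the current iterate and solve a Tikhonov-regularized normal equation with a geometrically decaying parameter. The sketch below is a standard iteratively regularized Gauss-Newton template with illustrative handles F, J and decay factor q; the paper's actual contribution, the heuristic rule that picks the stopping index without knowing the noise level, is not reproduced here.

      import numpy as np

      def irgn(F, J, d, m0, alpha0=1.0, q=2/3, iters=15):
          # At each step, linearize F at m_k and solve a Tikhonov problem
          # whose regularization parameter alpha_k decays to zero.
          m, alpha = m0.astype(float).copy(), alpha0
          for _ in range(iters):
              Jk = J(m)
              rhs = Jk.T @ (d - F(m)) + alpha * (m0 - m)
              m = m + np.linalg.solve(Jk.T @ Jk + alpha * np.eye(m.size), rhs)
              alpha *= q                     # geometric decay
          return m

      # Toy nonlinear problem: recover m from d = exp(A m) + noise.
      rng = np.random.default_rng(0)
      A = rng.standard_normal((20, 5))
      m_true = rng.standard_normal(5)
      d = np.exp(A @ m_true) + 1e-3 * rng.standard_normal(20)
      F = lambda m: np.exp(A @ m)
      J = lambda m: A * np.exp(A @ m)[:, None]   # Jacobian of F at m
      m_rec = irgn(F, J, d, m0=np.zeros(5))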

  16. Method of locating related items in a geometric space for data mining

    DOEpatents

    Hendrickson, B.A.

    1999-07-27

    A method for locating related items in a geometric space transforms relationships among items to geometric locations. The method locates items in the geometric space so that the distance between items corresponds to the degree of relatedness. The method facilitates communication of the structure of the relationships among the items. The method is especially beneficial for communicating databases with many items, and with non-regular relationship patterns. Examples of such databases include databases containing items such as scientific papers or patents, related by citations or keywords. A computer system adapted for practice of the present invention can include a processor, a storage subsystem, a display device, and computer software to direct the location and display of the entities. The method comprises assigning numeric values as a measure of similarity between each pairing of items. A matrix is constructed, based on the numeric values. The eigenvectors and eigenvalues of the matrix are determined. Each item is located in the geometric space at coordinates determined from the eigenvectors and eigenvalues. Proper construction of the matrix and proper determination of coordinates from eigenvectors can ensure that distance between items in the geometric space is representative of the numeric value measure of the items' similarity. 12 figs.

  17. Method of locating related items in a geometric space for data mining

    DOEpatents

    Hendrickson, Bruce A.

    1999-01-01

    A method for locating related items in a geometric space transforms relationships among items to geometric locations. The method locates items in the geometric space so that the distance between items corresponds to the degree of relatedness. The method facilitates communication of the structure of the relationships among the items. The method is especially beneficial for communicating databases with many items, and with non-regular relationship patterns. Examples of such databases include databases containing items such as scientific papers or patents, related by citations or keywords. A computer system adapted for practice of the present invention can include a processor, a storage subsystem, a display device, and computer software to direct the location and display of the entities. The method comprises assigning numeric values as a measure of similarity between each pairing of items. A matrix is constructed, based on the numeric values. The eigenvectors and eigenvalues of the matrix are determined. Each item is located in the geometric space at coordinates determined from the eigenvectors and eigenvalues. Proper construction of the matrix and proper determination of coordinates from eigenvectors can ensure that distance between items in the geometric space is representative of the numeric value measure of the items' similarity.
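
    The eigenvector construction is in the spirit of classical multidimensional scaling: double-center a similarity matrix and read coordinates off the leading eigenpairs, so that distance in the layout tracks relatedness. The sketch below illustrates that general idea, not the patent's specific matrix construction, and the toy similarity matrix is an arbitrary assumption.

      import numpy as np

      def spectral_layout(S, dim=2):
          # S: symmetric similarity matrix (larger value = more related).
          n = S.shape[0]
          J = np.eye(n) - np.ones((n, n)) / n     # centering projector
          B = J @ S @ J
          w, V = np.linalg.eigh(B)
          idx = np.argsort(w)[::-1][:dim]         # largest eigenvalues
          return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

      # Toy similarity: items are related when their indices are close.
      n = 30
      S = np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 5.0)
      coords = spectral_layout(S)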

  18. Domain Adaptation for Alzheimer’s Disease Diagnostics

    PubMed Central

    Wachinger, Christian; Reuter, Martin

    2016-01-01

    With the increasing prevalence of Alzheimer’s disease, research focuses on the early computer-aided diagnosis of dementia with the goal to understand the disease process, determine risk and preserving factors, and explore preventive therapies. By now, large amounts of data from multi-site studies have been made available for developing, training, and evaluating automated classifiers. Yet, their translation to the clinic remains challenging, in part due to their limited generalizability across different datasets. In this work, we describe a compact classification approach that mitigates overfitting by regularizing the multinomial regression with the mixed ℓ1/ℓ2 norm. We combine volume, thickness, and anatomical shape features from MRI scans to characterize neuroanatomy for the three-class classification of Alzheimer’s disease, mild cognitive impairment and healthy controls. We demonstrate high classification accuracy via independent evaluation within the scope of the CADDementia challenge. We, furthermore, demonstrate that variations between source and target datasets can substantially influence classification accuracy. The main contribution of this work addresses this problem by proposing an approach for supervised domain adaptation based on instance weighting. Integration of this method into our classifier allows us to assess different strategies for domain adaptation. Our results demonstrate (i) that training on only the target training set yields better results than the naïve combination (union) of source and target training sets, and (ii) that domain adaptation with instance weighting yields the best classification results, especially if only a small training component of the target dataset is available. These insights imply that successful deployment of systems for computer-aided diagnostics to the clinic depends not only on accurate classifiers that avoid overfitting, but also on a dedicated domain adaptation strategy. PMID:27262241
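
    The instance-weighting idea can be illustrated with an off-the-shelf classifier: weights on source instances enter the training loss, up-weighting those that resemble the target distribution. This is a hedged sketch, not the paper's pipeline: scikit-learn's logistic regression supports only the plain l2 penalty (the mixed l1/l2 norm of the paper needs a group-sparse solver), and the data and likelihood-ratio weights below are entirely synthetic.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X_source = rng.normal(0.0, 1.0, (200, 5))
      y_source = (X_source[:, 0] > 0).astype(int)
      X_target = rng.normal(0.5, 1.2, (40, 5))        # shifted domain
      y_target = (X_target[:, 0] > 0.5).astype(int)

      # Toy weights: density ratio of target vs source along the shifted
      # feature (a stand-in for the paper's learned instance weights).
      w_src = (np.exp(-((X_source[:, 0] - 0.5) ** 2) / 2)
               / np.exp(-(X_source[:, 0] ** 2) / 2))

      X = np.vstack([X_source, X_target])
      y = np.concatenate([y_source, y_target])
      w = np.concatenate([w_src, np.ones(len(y_target))])

      clf = LogisticRegression(penalty='l2', max_iter=1000)
      clf.fit(X, y, sample_weight=w)   # instance weighting enters here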

  19. ColDICE: A parallel Vlasov–Poisson solver using moving adaptive simplicial tessellation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sousbie, Thierry, E-mail: tsousbie@gmail.com; Department of Physics, The University of Tokyo, Tokyo 113-0033; Research Center for the Early Universe, School of Science, The University of Tokyo, Tokyo 113-0033

    2016-09-15

    Numerically resolving the Vlasov–Poisson equations for initially cold systems can be reduced to following the evolution of a three-dimensional sheet evolving in six-dimensional phase-space. We describe a public parallel numerical algorithm consisting of representing the phase-space sheet with a conforming, self-adaptive simplicial tessellation of which the vertices follow the Lagrangian equations of motion. The algorithm is implemented both in six- and four-dimensional phase-space. Refinement of the tessellation mesh is performed using the bisection method and a local representation of the phase-space sheet at second order relying on additional tracers created when needed at runtime. In order to preserve in the best way the Hamiltonian nature of the system, refinement is anisotropic and constrained by measurements of local Poincaré invariants. Resolution of the Poisson equation is performed using the fast Fourier method on a regular rectangular grid, similarly to particle-in-cell codes. To compute the density projected onto this grid, the intersection of the tessellation and the grid is calculated using the method of Franklin and Kankanhalli [65–67] generalised to linear order. As preliminary tests of the code, we study in four-dimensional phase-space the evolution of an initially small patch in a chaotic potential and the cosmological collapse of a fluctuation composed of two sinusoidal waves. We also perform a “warm” dark matter simulation in six-dimensional phase-space that we use to check the parallel scaling of the code.
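
    The grid-based Poisson step admits a compact sketch: on a periodic regular grid the Laplacian diagonalizes in Fourier space, so the solve reduces to a division by -k^2. Below is a minimal sketch assuming periodic boundaries and unit normalization; ColDICE's actual solver, units and constants may differ.

      import numpy as np

      def poisson_fft(rho, L):
          # Periodic Poisson solve lap(phi) = rho on an n^3 grid of box
          # size L, as in particle-in-cell codes.
          n = rho.shape[0]
          k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
          kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
          k2 = kx**2 + ky**2 + kz**2
          rho_hat = np.fft.fftn(rho)
          phi_hat = np.zeros_like(rho_hat)
          nz = k2 != 0
          phi_hat[nz] = -rho_hat[nz] / k2[nz]  # zero-mean fixes k = 0 mode
          return np.real(np.fft.ifftn(phi_hat))

      rng = np.random.default_rng(0)
      rho = rng.standard_normal((32, 32, 32))
      rho -= rho.mean()                        # compatible source term
      phi = poisson_fft(rho, 1.0)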

  20. Damage identification method for continuous girder bridges based on spatially-distributed long-gauge strain sensing under moving loads

    NASA Astrophysics Data System (ADS)

    Wu, Bitao; Wu, Gang; Yang, Caiqian; He, Yi

    2018-05-01

    A novel damage identification method for concrete continuous girder bridges based on spatially-distributed long-gauge strain sensing is presented in this paper. First, the variation regularity of the long-gauge strain influence line of continuous girder bridges, which changes with the location of vehicles on the bridge, is studied. According to this variation regularity, a calculation method for the distribution regularity of the area of the long-gauge strain history is investigated. Second, a numerical simulation of damage identification based on the distribution regularity of the area of the long-gauge strain history is conducted, and the results indicate that this method is effective for identifying damage and is not affected by the speed, axle number and weight of vehicles. Finally, a real bridge test on a highway is conducted, and the experimental results also show that this method is very effective for identifying damage in continuous girder bridges, and the local element stiffness distribution regularity can be revealed at the same time. This identified information is useful for the maintenance of continuous girder bridges on highways.

  1. Personalized Preventive Medicine: Genetics and the Response to Regular Exercise in Preventive Interventions

    PubMed Central

    Bouchard, Claude; Antunes-Correa, Ligia M.; Ashley, Euan A.; Franklin, Nina; Hwang, Paul M.; Mattsson, C. Mikael; Negrao, Carlos E.; Phillips, Shane A.; Sarzynski, Mark A.; Wang, Ping-yuan; Wheeler, Matthew T.

    2014-01-01

    Regular exercise and a physically active lifestyle have favorable effects on health. Several issues related to this theme are addressed in this report. A comment on the requirements of personalized exercise medicine and in-depth biological profiling along with the opportunities that they offer is presented. This is followed by a brief overview of the evidence for the contributions of genetic differences to the ability to benefit from regular exercise. Subsequently, studies showing that mutations in TP53 influence exercise capacity in mice and humans are succinctly described. The evidence for effects of exercise on endothelial function in health and disease also is covered. Finally, changes in cardiac and skeletal muscle in response to exercise and their implications for patients with cardiac disease are summarized. Innovative research strategies are needed to define the molecular mechanisms involved in adaptation to exercise and to translate them into useful clinical and public health applications. PMID:25559061

  2. Episodic foresight deficits in regular, but not recreational, cannabis users.

    PubMed

    Mercuri, Kimberly; Terrett, Gill; Henry, Julie D; Curran, H Valerie; Elliott, Morgan; Rendell, Peter G

    2018-06-01

    Cannabis use is associated with a range of neurocognitive deficits, including impaired episodic memory. However, no study to date has assessed whether these difficulties extend to episodic foresight, a core component of which is the ability to mentally travel into one's personal future. This is a particularly surprising omission given that episodic memory is considered to be critical to engage episodic foresight. In the present study, we provide the first test of how episodic foresight is affected in the context of differing levels of cannabis use, and the degree to which performance on a measure of this construct is related to episodic memory. Fifty-seven cannabis users (23 recreational, 34 regular) and 57 controls were assessed using an adapted version of the Autobiographical Interview. The results showed that regular users exhibited greater impairment of episodic foresight and episodic memory than both recreational users and cannabis-naïve controls. These data therefore show for the first time that cannabis-related disruption of cognitive functioning extends to the capacity for episodic foresight, and they are discussed in relation to their potential implications for functional outcomes in this group.

  3. High Resolution DNS of Turbulent Flows using an Adaptive, Finite Volume Method

    NASA Astrophysics Data System (ADS)

    Trebotich, David

    2014-11-01

    We present a new computational capability for high resolution simulation of incompressible viscous flows. Our approach is based on cut cell methods where an irregular geometry such as a bluff body is intersected with a rectangular Cartesian grid, resulting in cut cells near the boundary. In the cut cells we use a conservative discretization based on a discrete form of the divergence theorem to approximate fluxes for elliptic and hyperbolic terms in the Navier-Stokes equations. Away from the boundary the method reduces to a finite difference method. The algorithm is implemented in the Chombo software framework which supports adaptive mesh refinement and massively parallel computations. The code is scalable to 200,000+ processor cores on DOE supercomputers, resulting in DNS studies at unprecedented scale and resolution. For flow past a cylinder in transition (Re = 300) we observe a number of secondary structures in the far wake in 2D where the wake is over 120 cylinder diameters in length. These are compared with the more regularized wake structures in 3D at the same scale. For flow past a sphere (Re = 600) we resolve an arrowhead structure in the velocity in the near wake. The effectiveness of AMR is further highlighted in a simulation of turbulent flow (Re = 6000) in the contraction of an oil well blowout preventer. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics program under Contract Number DE-AC02-05-CH11231.

  4. High sustained +Gz acceleration: physiological adaptation to high-G tolerance

    NASA Technical Reports Server (NTRS)

    Convertino, V. A.

    1998-01-01

    Since the early 1940s, a significant volume of research has been conducted in an effort to describe the impact of acute exposures to high-G acceleration on the cardiovascular mechanisms responsible for maintaining cerebral perfusion and consciousness in high-performance aircraft pilots during aerial combat maneuvers. The value of understanding hemodynamic characteristics that underlie G-induced loss of consciousness has been instrumental in the evolution of optimal technology development (e.g., G-suits, positive pressure breathing, COMBAT EDGE, etc.) and pilot training (e.g., anti-G straining maneuvers). Although the emphasis of research has been placed on the development of protection against acute high +Gz acceleration effects, recent observations suggest that adaptation of cardiovascular mechanisms associated with blood pressure regulation may contribute to a protective 'G-training' effect. Regular training at high G enhances G tolerance in humans, rats, guinea pigs, and dogs, while prolonged layoff from exposure to high-G profiles (G-layoff) can result in reduced G endurance. It seems probable that adaptations in physiological functions following chronically-repeated high-G exposure (G training) or G-layoff could have significant impacts on performance during sustained high-G acceleration, since protective technology such as G-suits and anti-G straining maneuvers is applied consistently during these periods of training. The purpose of this paper is to present a review of new data from three experiments that support the notion that repeated exposure on a regular basis to high sustained +Gz acceleration induces significant physiological adaptations which are associated with improved blood pressure regulation and subsequent protection of cerebral perfusion during orthostatic challenges.

  5. Regularized matrix regression

    PubMed Central

    Zhou, Hua; Li, Lexin

    2014-01-01

    Summary Modern technologies are producing a wealth of data with complex structures. For instance, in two-dimensional digital imaging, flow cytometry and electroencephalography, matrix-type covariates frequently arise when measurements are obtained for each combination of two underlying variables. To address scientific questions arising from those data, new regression methods that take matrices as covariates are needed, and sparsity or other forms of regularization are crucial owing to the ultrahigh dimensionality and complex structure of the matrix data. The popular lasso and related regularization methods hinge on the sparsity of the true signal in terms of the number of its non-zero coefficients. However, for the matrix data, the true signal is often of, or can be well approximated by, a low rank structure. As such, the sparsity is frequently in the form of low rank of the matrix parameters, which may seriously violate the assumption of the classical lasso. We propose a class of regularized matrix regression methods based on spectral regularization. A highly efficient and scalable estimation algorithm is developed, and a degrees-of-freedom formula is derived to facilitate model selection along the regularization path. Superior performance of the method proposed is demonstrated on both synthetic and real examples. PMID:24648830
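
    Spectral regularization with the nuclear norm has a simple proximal operator, singular value thresholding, which makes a proximal gradient solver easy to sketch. The snippet below is an illustrative trace-regression solver under a crude Lipschitz bound, not the authors' scalable algorithm or their degrees-of-freedom machinery; all sizes and lam are arbitrary.

      import numpy as np

      def svt(B, tau):
          # Proximal operator of the nuclear norm: shrink singular values.
          U, s, Vt = np.linalg.svd(B, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

      def matrix_regression(X_list, y, lam, iters=500):
          # Proximal gradient for min_B sum_i (y_i - <X_i,B>)^2 + lam*||B||_*
          p, q = X_list[0].shape
          B = np.zeros((p, q))
          L = 2.0 * sum(np.sum(Xi * Xi) for Xi in X_list)  # Lipschitz bound
          step = 1.0 / L
          for _ in range(iters):
              r = np.array([np.sum(Xi * B) for Xi in X_list]) - y
              grad = 2.0 * sum(ri * Xi for ri, Xi in zip(r, X_list))
              B = svt(B - step * grad, step * lam)
          return B

      rng = np.random.default_rng(0)
      X_list = [rng.standard_normal((8, 8)) for _ in range(60)]
      B_true = np.outer(rng.standard_normal(8), rng.standard_normal(8))
      y = np.array([np.sum(Xi * B_true) for Xi in X_list])
      B_hat = matrix_regression(X_list, y, lam=1.0)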

  6. Nonsmooth, nonconvex regularizers applied to linear electromagnetic inverse problems

    NASA Astrophysics Data System (ADS)

    Hidalgo-Silva, H.; Gomez-Trevino, E.

    2017-12-01

    Tikhonov's regularization method is the standard technique applied to obtain models of the subsurface conductivity distribution from electric or electromagnetic measurements, by minimizing U(m) = ||F(m) - d||^2 + λP(m). The second term corresponds to the stabilizing functional, with P(m) = ||∇m||^2 the usual choice, and λ the regularization parameter. Due to the roughness penalizer inclusion, the model developed by Tikhonov's algorithm tends to smear discontinuities, a feature that may be undesirable. An important requirement for the regularizer is to allow the recovery of edges, and smooth the homogeneous parts. As is well known, Total Variation (TV) is now the standard approach to meet this requirement. Recently, Wang et al. proved convergence of the alternating direction method of multipliers for nonconvex, nonsmooth optimization. In this work we present a study of several algorithms for model recovery from geosounding data based on infimal convolution, and also on hybrid TV, second-order TV and nonsmooth, nonconvex regularizers, observing their performance on synthetic and real data. The algorithms are based on Bregman iteration and the Split Bregman method, and the geosounding method is the low induction number magnetic dipole method. Non-smooth regularizers are considered using the Legendre-Fenchel transform.
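
    As a convex baseline for the regularizers discussed here, a 1-D Split Bregman TV denoiser fits in a dozen lines; the nonconvex variants studied in such work replace the soft-threshold shrinkage with nonconvex proximal maps. Parameters lam and mu below are arbitrary assumptions.

      import numpy as np

      def tv_denoise_split_bregman(f, lam=1.0, mu=5.0, iters=100):
          # 1-D TV denoising via Split Bregman: edges survive, flat parts
          # are smoothed. Solves min_u 0.5||u - f||^2 + lam*||Du||_1.
          n = f.size
          D = np.diff(np.eye(n), axis=0)      # forward-difference operator
          M = np.linalg.inv(np.eye(n) + mu * D.T @ D)
          u = f.copy(); d = np.zeros(n - 1); b = np.zeros(n - 1)
          for _ in range(iters):
              u = M @ (f + mu * D.T @ (d - b))
              Du = D @ u
              d = np.sign(Du + b) * np.maximum(np.abs(Du + b) - lam / mu, 0)
              b = b + Du - d
          return u

      rng = np.random.default_rng(0)
      f = np.concatenate([np.zeros(50), np.ones(50)])
      u = tv_denoise_split_bregman(f + 0.1 * rng.standard_normal(100))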

  7. Flux Calculation Using CARIBIC DOAS Aircraft Measurements: SO2 Emission of Norilsk

    NASA Technical Reports Server (NTRS)

    Walter, D.; Heue, K.-P.; Rauthe-Schoech, A.; Brenninkmeijer, C. A. M.; Lamsal, L. N.; Krotkov, N. A.; Platt, U.

    2012-01-01

    Based on a case-study of the nickel smelter in Norilsk (Siberia), the retrieval of trace gas fluxes using airborne remote sensing is discussed. A DOAS system onboard an Airbus 340 detected large amounts of SO2 and NO2 near Norilsk during a regular passenger flight within the CARIBIC project. The remote sensing data were combined with ECMWF wind data to estimate the SO2 output of the Norilsk industrial complex to be around 1 Mt per year, which is in agreement with independent estimates. This value is compared to results using data from satellite remote sensing (GOME, OMI). The validity of the assumptions underlying our estimate is discussed, including the adaptation of this method to other gases and sources like the NO2 emissions of large industries or cities.
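
    The flux estimate behind such numbers is essentially a line integral of the vertical column densities across the plume, multiplied by the wind component perpendicular to the transect. A back-of-the-envelope sketch with entirely hypothetical values, not the CARIBIC retrieval:

      import numpy as np

      # Hypothetical SO2 vertical columns (molecules/cm^2) sampled along a
      # transect across the plume, segment lengths (cm), perpendicular
      # wind (cm/s).
      columns = np.array([2.0e17, 5.0e17, 8.0e17, 4.0e17, 1.5e17])
      seg_len = np.full(5, 5.0e5)        # 5 km segments, in cm
      v_perp = 8.0e2                     # 8 m/s, in cm/s

      molec_per_s = np.sum(columns * seg_len) * v_perp
      kg_per_s = molec_per_s * 64.066e-3 / 6.022e23  # SO2: 64.066 g/mol
      mt_per_yr = kg_per_s * 3.15e7 / 1e9            # 1 Mt = 1e9 kg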

  8. Fractional-order TV-L2 model for image denoising

    NASA Astrophysics Data System (ADS)

    Chen, Dali; Sun, Shenshen; Zhang, Congrong; Chen, YangQuan; Xue, Dingyu

    2013-10-01

    This paper proposes a new fractional-order total variation (TV) denoising method, which provides a more elegant and effective way of treating algorithm implementation, the ill-posed inverse problem, regularization parameter selection and the blocky effect. Two fractional-order TV-L2 models are constructed for image denoising. The majorization-minimization (MM) algorithm is used to decompose these two complex fractional TV optimization problems into a set of linear optimization problems, which can be solved by the conjugate gradient algorithm. The final adaptive numerical procedure is given. Finally, we report experimental results which show that the proposed methodology avoids the blocky effect and achieves state-of-the-art performance. In addition, two medical image processing experiments are presented to demonstrate the validity of the proposed methodology.
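
    The MM step can be sketched for the integer-order case: majorizing |t| at the current iterate by t^2/(2|t_k|) + |t_k|/2 turns each outer iteration into a weighted linear least-squares problem, which is where the conjugate gradient solver enters at scale. A sketch with a first-order difference operator standing in for the fractional-order one, and lam arbitrary:

      import numpy as np

      def tv_mm_denoise(f, lam=1.0, iters=30, eps=1e-8):
          # MM for min_u 0.5||u - f||^2 + lam*||Du||_1: each step solves
          # a reweighted quadratic problem (direct solve here; CG at scale).
          n = f.size
          D = np.diff(np.eye(n), axis=0)
          u = f.copy()
          for _ in range(iters):
              w = 1.0 / (np.abs(D @ u) + eps)     # IRLS-style weights
              A = np.eye(n) + lam * D.T @ (w[:, None] * D)
              u = np.linalg.solve(A, f)
          return u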

  9. A two-component Matched Interface and Boundary (MIB) regularization for charge singularity in implicit solvation

    NASA Astrophysics Data System (ADS)

    Geng, Weihua; Zhao, Shan

    2017-12-01

    We present a new Matched Interface and Boundary (MIB) regularization method for treating charge singularity in solvated biomolecules whose electrostatics are described by the Poisson-Boltzmann (PB) equation. In a regularization method, by decomposing the potential function into two or three components, the singular component can be analytically represented by the Green's function, while the other components possess a higher regularity. Our new regularization combines the efficiency of two-component schemes with the accuracy of the three-component schemes. Based on this regularization, a new MIB finite difference algorithm is developed for solving both linear and nonlinear PB equations, where the nonlinearity is handled by using the inexact-Newton's method. Compared with the existing MIB PB solver based on a three-component regularization, the present algorithm is simpler to implement by circumventing the work to solve a boundary value Poisson equation inside the molecular interface and to compute related interface jump conditions numerically. Moreover, the new MIB algorithm becomes computationally less expensive, while maintaining the same second order accuracy. This is numerically verified by calculating the electrostatic potential and solvation energy on the Kirkwood sphere, on which the analytical solutions are available, and on a series of proteins with various sizes.
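
    The decomposition underlying such regularization schemes can be written out; in a generic two-component form (the notation here is illustrative, not the paper's), the singular part collects the Coulomb terms analytically and the grid solver only sees the smoother remainder:

      \phi = \phi_{\mathrm{reg}} + \phi_{\mathrm{s}}, \qquad
      \phi_{\mathrm{s}}(\mathbf{r}) = \frac{1}{4\pi\epsilon_m}
      \sum_{i=1}^{N_c} \frac{q_i}{|\mathbf{r}-\mathbf{r}_i|},

    so the point charges q_i at positions r_i (with ε_m the molecular dielectric constant) never reach the finite-difference grid, and φ_reg satisfies a source-free problem with modified interface jump conditions.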

  10. Consistency-based rectification of nonrigid registrations

    PubMed Central

    Gass, Tobias; Székely, Gábor; Goksel, Orcun

    2015-01-01

    We present a technique to rectify nonrigid registrations by improving their group-wise consistency, which is a widely used unsupervised measure to assess pair-wise registration quality. While pair-wise registration methods cannot guarantee any group-wise consistency, group-wise approaches typically enforce perfect consistency by registering all images to a common reference. However, errors in individual registrations to the reference then propagate, distorting the mean and accumulating in the pair-wise registrations inferred via the reference. Furthermore, the assumption that perfect correspondences exist is not always true, e.g., for interpatient registration. The proposed consistency-based registration rectification (CBRR) method addresses these issues by minimizing the group-wise inconsistency of all pair-wise registrations using a regularized least-squares algorithm. The regularization controls the adherence to the original registration, which is additionally weighted by the local postregistration similarity. This allows CBRR to adaptively improve consistency while locally preserving accurate pair-wise registrations. We show that the resulting registrations are not only more consistent, but also have lower average transformation error when compared to known transformations in simulated data. On clinical data, we show improvements of up to 50% target registration error in breathing motion estimation from four-dimensional MRI and improvements in atlas-based segmentation quality of up to 65% in terms of mean surface distance in three-dimensional (3-D) CT. Such improvement was observed consistently using different registration algorithms, dimensionality (two-dimensional/3-D), and modalities (MRI/CT). PMID:26158083

  11. Advances in the U.S. Navy Non-hydrostatic Unified Model of the Atmosphere (NUMA): LES as a Stabilization Methodology for High-Order Spectral Elements in the Simulation of Deep Convection

    NASA Astrophysics Data System (ADS)

    Marras, Simone; Giraldo, Frank

    2015-04-01

    The prediction of extreme weather sufficiently ahead of its occurrence impacts society as a whole and coastal communities specifically (e.g. Hurricane Sandy that impacted the eastern seaboard of the U.S. in the fall of 2012). With the final goal of solving hurricanes at very high resolution and numerical accuracy, we have been developing the Non-hydrostatic Unified Model of the Atmosphere (NUMA) to solve the Euler and Navier-Stokes equations by arbitrary high-order element-based Galerkin methods on massively parallel computers. NUMA is a unified model with respect to the following criteria: (a) it is based on unified numerics in that element-based Galerkin methods allow the user to choose between continuous (spectral elements, CG) or discontinuous Galerkin (DG) methods and from a large spectrum of time integrators, (b) it is unified across scales in that it can solve flow in limited-area mode (flow in a box) or in global mode (flow on the sphere). NUMA is the dynamical core that powers the U.S. Naval Research Laboratory's next-generation global weather prediction system NEPTUNE (Navy's Environmental Prediction sysTem Utilizing the NUMA corE). Because the solution of the Euler equations by high order methods is prone to instabilities that must be damped in some way, we approach the problem of stabilization via an adaptive Large Eddy Simulation (LES) scheme meant to treat such instabilities by modeling the sub-grid scale features of the flow. The novelty of our effort lies in the extension to high order spectral elements for low Mach number stratified flows of a method that was originally designed for low order, adaptive finite elements in the high Mach number regime [1]. The Euler equations are regularized by means of a dynamically adaptive stress tensor that is proportional to the residual of the unperturbed equations. Its effect is close to none where the solution is sufficiently smooth, whereas it increases elsewhere, with a direct contribution to the stabilization of the otherwise oscillatory solution. As a first step toward the Large Eddy Simulation of a hurricane, we verify the model via a high-order and high resolution idealized simulation of deep convection on the sphere. References [1] M. Nazarov and J. Hoffman (2013) Residual-based artificial viscosity for simulation of turbulent compressible flow using adaptive finite element methods Int. J. Numer. Methods Fluids, 71:339-357
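
    The residual-based regularization of [1] can be summarized schematically; the constants, norms and the exact capping term vary between implementations, so this rendering is indicative only:

      \mu_e\big|_K = \min\!\left( c_{\max}\, h_K\, \|\mathbf{u}_h\|_{\infty,K},\;
      c_R\, h_K^2\, \frac{\|R(\mathbf{u}_h)\|_{\infty,K}}
      {\|\mathbf{u}_h-\overline{\mathbf{u}}_h\|_{\infty,\Omega}} \right),

    where R(u_h) is the residual of the unperturbed equations on element K: the artificial viscosity is negligible where the discrete solution nearly satisfies the equations and grows where it does not.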

  12. Multipole Vortex Blobs (MVB): Symplectic Geometry and Dynamics.

    PubMed

    Holm, Darryl D; Jacobs, Henry O

    2017-01-01

    Vortex blob methods are typically characterized by a regularization length scale, below which the dynamics are trivial for isolated blobs. In this article, we observe that the dynamics need not be trivial if one is willing to consider distributional derivatives of Dirac delta functionals as valid vorticity distributions. More specifically, a new singular vortex theory is presented for regularized Euler fluid equations of ideal incompressible flow in the plane. We determine the conditions under which such regularized Euler fluid equations may admit vorticity singularities which are stronger than delta functions, e.g., derivatives of delta functions. We also describe the symplectic geometry associated with these augmented vortex structures, and we characterize the dynamics as Hamiltonian. Applications to the design of numerical methods similar to vortex blob methods are also discussed. Such findings illuminate the rich dynamics which occur below the regularization length scale and enlighten our perspective on the potential for regularized fluid models to capture multiscale phenomena.
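
    For orientation, a minimal planar vortex blob sketch (standard blob dynamics with a Gaussian-regularized Biot-Savart kernel, not the multipole extension proposed in the paper; parameter values are illustrative): delta is the regularization length scale below which the induced velocity is smoothed:

    ```python
    # Point vortices in the plane with a Gaussian-regularized Biot-Savart kernel.
    # Positions are complex numbers z = x + i y; the complex velocity induced at
    # z by a vortex of circulation gamma_j at z_j is i*gamma_j*(z - z_j)/(2*pi*|z - z_j|^2),
    # multiplied here by a Gaussian cutoff that smooths the field below scale delta.
    import numpy as np

    def blob_velocity(z, gamma, delta):
        dz = z[:, None] - z[None, :]                   # pairwise separations
        r2 = np.abs(dz) ** 2 + np.eye(len(z))          # avoid 0/0 on the diagonal
        cutoff = 1.0 - np.exp(-np.abs(dz) ** 2 / delta ** 2)
        np.fill_diagonal(cutoff, 0.0)                  # no self-induction
        w = (1j * dz / r2) * cutoff * gamma[None, :] / (2 * np.pi)
        return w.sum(axis=1)                           # u + i v at each blob

    z = np.array([0.0 + 0.0j, 1.0 + 0.0j])             # two co-rotating blobs
    gamma = np.array([1.0, 1.0])
    dt = 0.01
    for _ in range(1000):                              # forward Euler in time
        z = z + dt * blob_velocity(z, gamma, delta=0.3)
    print("positions:", z)
    ```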

  13. Solving ill-posed control problems by stabilized finite element methods: an alternative to Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Burman, Erik; Hansbo, Peter; Larson, Mats G.

    2018-03-01

    Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems, developed in the setting of convection-dominated convection-diffusion problems, can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson's equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and for the reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and the error in the measurements.
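
    A minimal numerical illustration of the Tikhonov construction being discussed (not the authors' stabilized-FEM alternative), on a discrete ill-posed deconvolution problem with a first-derivative penalty; the operator, weight, and noise level are arbitrary choices for the sketch:

    ```python
    # Tikhonov regularization: min ||Ax - b||^2 + alpha ||Lx||^2, solved via the
    # normal equations (A'A + alpha L'L) x = A'b, with L a discrete derivative.
    import numpy as np

    n = 100
    t = np.linspace(0, 1, n)
    A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.01) / n   # smoothing kernel
    x_true = np.sin(2 * np.pi * t)
    b = A @ x_true + 1e-3 * np.random.default_rng(1).normal(size=n)

    L = (np.eye(n - 1, n, 1) - np.eye(n - 1, n)) * n           # forward differences
    alpha = 1e-6                                               # weight chosen by hand
    x_tik = np.linalg.solve(A.T @ A + alpha * (L.T @ L), A.T @ b)
    print("relative error:", np.linalg.norm(x_tik - x_true) / np.linalg.norm(x_true))
    ```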

  14. Clinical abnormalities, early intervention program of Down syndrome children: Queen Sirikit National Institute of Child Health experience.

    PubMed

    Fuengfoo, Adidsuda; Sakulnoom, Kim

    2014-06-01

    Queen Sirikit National Institute of Child Health is a tertiary children's institute in Thailand, where early intervention programs, especially for Down syndrome children, have been provided since 1990 through a multidisciplinary approach. The aim of the present study is to follow the impact of early intervention on the outcome of Down syndrome children; school attendance was compared between children receiving regular and non-regular early intervention. The study group consists of 210 Down syndrome children who attended early intervention programs at Queen Sirikit National Institute of Child Health between June 2008 and January 2012. Data include clinical features, school attendance, and developmental quotient (DQ) at 3 years of age using the Capute Scales Cognitive Adaptive Test/Clinical Linguistic and Auditory Milestone Scale (CAT/CLAMS). Developmental milestones were recorded as the time of appearance of gross motor, fine motor, language, and personal-social skills, compared with those of non-regular intervention patients. Of the 210 Down syndrome children, 117 were boys and 93 were girls. About 87% received regular intervention, and 68% attended speech training. Mean DQ at 3 years of age was 65. Of the 184 children still in follow-up at the developmental department, 124 (59%) attended school: 78 (63%) in mainstream schools and 46 (37%) in special schools. The mean age at school entrance was 5.8 ± 1.4 years. School attendance of children receiving regular early intervention was statistically significantly higher than that of children receiving non-regular early intervention, and school attendance correlated with maternal education and regular early intervention attendance. Regular early intervention together with maternal education are thus the contributing factors influencing school attendance in Down syndrome children in the present study.

  15. Beyond Special and Regular Schooling? An Inclusive Education Reform Agenda

    ERIC Educational Resources Information Center

    Slee, Roger

    2008-01-01

    Following Edward Said's (2001) observations on traveling theories this paper considers the origins of inclusive education as a field of education research and policy that is in jeopardy of being undermined by its broadening popularity, institutional adoption and subsequent adaptations. Schools were not an invention for all and subsequently the…

  16. Adapting Physical Education: A Guide for Individualizing Physical Education Programs.

    ERIC Educational Resources Information Center

    Buckanavage, Robert, Ed.; And Others

    Guidelines are presented for organizing programs and modifying activities in physical education programs for children with a wide range of physical and emotional disabilities. The guidelines should result in a program that allows students to work to their maximum potential within the framework of regular physical education classes. In planning the…

  17. Integrating Refugee Children.

    ERIC Educational Resources Information Center

    Vershok, Anna

    2003-01-01

    Notes that the Moscow Center for the Integration and Education of Refugee Children attempts to provide real schooling for refugee children displaced from the Chechen Republic. Suggests main task was to prepare the children for regular school, and to help them adapt to both their new school and their new home, Moscow. Hopes this experience may help…

  18. Familiar Technology Promotes Academic Success for Students with Exceptional Learning Needs

    ERIC Educational Resources Information Center

    Wilson, Carolyn H.; Brice, Costeena; Carter, Emanuel I.; Fleming, Jeffery C.; Hay, Dontia D.; Hicks, John D.; Picot, Ebony; Taylor, Aashja M.; Weaver, Jessica

    2011-01-01

    Children with exceptional learning needs find it very difficult to retain content information from the regular curriculum. Many content teachers also find it difficult to adapt curriculum to the learning needs of these exceptional children within the confines of the classroom and without any assistance. Although many schools are equipped with…

  19. Discovery of "Escherichia coli" CRISPR Sequences in an Undergraduate Laboratory

    ERIC Educational Resources Information Center

    Militello, Kevin T.; Lazatin, Justine C.

    2017-01-01

    Clustered regularly interspaced short palindromic repeats (CRISPRs) represent a novel type of adaptive immune system found in eubacteria and archaebacteria. CRISPRs have recently generated a lot of attention due to their unique ability to catalog foreign nucleic acids, their ability to destroy foreign nucleic acids in a mechanism that shares some…

  20. SEVENTH GRADE MATHEMATICS FOR THE ACADEMICALLY TALENTED, TEACHERS' GUIDE.

    ERIC Educational Resources Information Center

    HORN, R.A.

    MATERIALS FOR BOTH ENRICHED AND ACCELERATED MATHEMATICS COURSES ARE GIVEN. THE PRESENTATION OF THE MATERIALS IS INTENDED FOR EASY ADAPTATION OR MODIFICATION TO MEET THE NEEDS OF MOST MATHEMATICS CLASSES. SUGGESTIONS AND SUPPLEMENTARY REFERENCES IN EACH UNIT ARE OFFERED AS AIDS TO THE MATHEMATICS TEACHER IN REGULAR OR ACCELERATED CLASSES. UNITS…

  1. Differentiation and the Twice-Exceptional Student

    ERIC Educational Resources Information Center

    Franklin-Rohr, Cheryl

    2012-01-01

    Tier 1, the first level of instruction in the RtI (Response to Intervention) framework, is designed to meet the needs of 80% of students within the regular classroom. The only way to accomplish this goal is to use differentiation. Differentiation is not a singular process; it is rather a complicated process of adapting instructional strategies so…

  2. Providing Physical Activity for Students with Intellectual Disabilities: The Motivate, Adapt, and Play Program

    ERIC Educational Resources Information Center

    Davis, Kathy; Hodson, Patricia; Zhang, Guili; Boswell, Boni; Decker, Jim

    2010-01-01

    Research has shown that regular physical activity helps to prevent major health problems, such as heart disease, obesity, and diabetes. However, little research has been conducted on classroom-based physical activity programs for students with disabilities. In North Carolina, the Healthy Active Children Policy was implemented in 2006, requiring…

  3. Adaptation of Timing Behavior to a Regular Change in Criterion

    PubMed Central

    Sanabria, Federico; Oldenburg, Liliana

    2013-01-01

    This study examined how operant behavior adapted to an abrupt but regular change in the timing of reinforcement. Pigeons were trained on a fixed interval (FI) 15-s schedule of reinforcement during half of each experimental session, and on an FI 45-s (Experiment 1), FI 60-s (Experiment 2), or extinction schedule (Experiment 3) during the other half. FI performance was well characterized by a mixture of two gamma-shaped distributions of responses. When a longer FI schedule was in effect in the first half of the session (Experiment 1), a constant interference by the shorter FI was observed. When a shorter FI schedule was in effect in the first half of the session (Experiments 1, 2, and 3), the transition between schedules involved a decline in responding and a progressive rightward shift in the mode of the response distribution initially centered around the short FI. These findings are discussed in terms of the constraints they impose on quantitative models of timing, and in relation to their implications for information-based models of associative learning. PMID:23962672

  4. Pearson correlation estimation for irregularly sampled time series

    NASA Astrophysics Data System (ADS)

    Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.

    2012-04-01

    Many applications in the geosciences call for the joint and objective analysis of irregular time series. For automated processing, robust measures of linear and nonlinear association are needed. Up to now, the standard approach would have been to reconstruct the time series on a regular grid, using linear or spline interpolation. Interpolation, however, comes with systematic side-effects, as it increases the auto-correlation in the time series. We have searched for the best method to estimate Pearson correlation for irregular time series, i.e. the one with the lowest estimation bias and variance. We adapted a kernel-based approach, using Gaussian weights. Pearson correlation is calculated, in principle, as a mean over products of previously centralized observations. In the regularly sampled case, observations in both time series were observed at the same time and thus the allocation of measurement values into pairs of products is straightforward. In the irregularly sampled case, however, measurements were not necessarily observed at the same time. Now, the key idea of the kernel-based method is to calculate weighted means of products, with the weight depending on the time separation between the observations. If the lagged correlation function is desired, the weights depend on the absolute difference between observation time separation and the estimation lag. To assess the applicability of the approach we used extensive simulations to determine the extent of interpolation side-effects with increasing irregularity of time series. We compared different approaches, based on (linear) interpolation, the Lomb-Scargle Fourier Transform, the sinc kernel and the Gaussian kernel. We investigated the role of kernel bandwidth and signal-to-noise ratio in the simulations. We found that the Gaussian kernel approach offers significant advantages and low Root-Mean Square Errors for regular, slightly irregular and very irregular time series. We therefore conclude that it is a good (linear) similarity measure that is appropriate for irregular time series with skewed inter-sampling time distributions.
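
    A sketch of the kernel-based estimator as described (function name, bandwidth, and test signal are illustrative): all pairwise products of centralized observations enter a weighted mean, with Gaussian weights in the time separation minus the estimation lag:

    ```python
    # Gaussian-kernel Pearson correlation for two irregularly sampled series:
    # a weighted mean of products of standardized observations, weights decaying
    # with the mismatch between observation-time separation and the desired lag.
    import numpy as np

    def gaussian_kernel_corr(tx, x, ty, y, lag=0.0, h=1.0):
        xc = (x - x.mean()) / x.std()
        yc = (y - y.mean()) / y.std()
        dt = ty[None, :] - tx[:, None] - lag          # all pairwise separations
        w = np.exp(-0.5 * (dt / h) ** 2)              # Gaussian weights
        return np.sum(w * (xc[:, None] * yc[None, :])) / np.sum(w)

    rng = np.random.default_rng(2)
    tx = np.sort(rng.uniform(0, 100, 200))            # irregular sampling times
    ty = np.sort(rng.uniform(0, 100, 200))
    signal = lambda t: np.sin(0.3 * t)                # common underlying signal
    x = signal(tx) + 0.3 * rng.normal(size=tx.size)
    y = signal(ty) + 0.3 * rng.normal(size=ty.size)
    print("kernel correlation:", gaussian_kernel_corr(tx, x, ty, y, h=1.0))
    ```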

  5. s-SMOOTH: Sparsity and Smoothness Enhanced EEG Brain Tomography

    PubMed Central

    Li, Ying; Qin, Jing; Hsin, Yue-Loong; Osher, Stanley; Liu, Wentai

    2016-01-01

    EEG source imaging enables us to reconstruct current density in the brain from the electrical measurements with excellent temporal resolution (~ ms). The corresponding EEG inverse problem is an ill-posed one that has infinitely many solutions. This is due to the fact that the number of EEG sensors is usually much smaller than that of the potential dipole locations, as well as noise contamination in the recorded signals. To obtain a unique solution, regularizations can be incorporated to impose additional constraints on the solution. An appropriate choice of regularization is critically important for the reconstruction accuracy of a brain image. In this paper, we propose a novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve the reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and ℓ1−2 regularization. TGV is able to preserve the source edge and recover the spatial distribution of the source intensity with high accuracy. Compared to the relevant total variation (TV) regularization, TGV enhances the smoothness of the image and reduces staircasing artifacts. The traditional TGV defined on a 2D image has been widely used in the image processing field. In order to handle 3D EEG source images, we propose a voxel-based Total Generalized Variation (vTGV) regularization that extends the definition of second-order TGV from 2D planar images to 3D irregular surfaces such as the cortex surface. In addition, the ℓ1−2 regularization is utilized to promote sparsity on the current density itself. We demonstrate that ℓ1−2 regularization enhances sparsity and accelerates computation compared with ℓ1 regularization. The proposed model is solved by an efficient and robust algorithm based on the difference of convex functions algorithm (DCA) and the alternating direction method of multipliers (ADMM). Numerical experiments using synthetic data demonstrate the advantages of the proposed method over other state-of-the-art methods in terms of total reconstruction accuracy, localization accuracy and focalization degree. The application to the source localization of event-related potential data further demonstrates the performance of the proposed method in real-world scenarios. PMID:27965529
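
    The ℓ1−2 piece can be isolated in a small sketch (a difference-of-convex algorithm that linearizes the concave -ℓ2 term and solves each convex subproblem by ISTA), here on a generic sparse least-squares problem rather than the paper's full vTGV + ℓ1−2 model; names and parameters are illustrative:

    ```python
    # l1 - l2 regularization via DCA: at each outer step the concave -lam*||x||_2
    # term is replaced by its linearization v'x with v = lam*x/||x||_2, leaving a
    # convex lasso-type subproblem solved by ISTA (soft-thresholded gradient steps).
    import numpy as np

    def soft(u, t):                               # prox of the l1 norm
        return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

    def l1_minus_l2(A, b, lam=0.1, outer=10, inner=200):
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2    # ISTA step from spectral norm
        for _ in range(outer):
            nrm = np.linalg.norm(x)
            v = lam * x / nrm if nrm > 0 else np.zeros_like(x)
            for _ in range(inner):                # ISTA on 0.5||Ax-b||^2 - v'x + lam||x||_1
                grad = A.T @ (A @ x - b) - v
                x = soft(x - step * grad, step * lam)
        return x

    rng = np.random.default_rng(3)
    A = rng.normal(size=(50, 200))                # underdetermined system
    x_true = np.zeros(200); x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
    b = A @ x_true + 0.01 * rng.normal(size=50)
    x_hat = l1_minus_l2(A, b)
    print("support found:", np.nonzero(np.abs(x_hat) > 0.1)[0])
    ```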

  6. Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.

    PubMed

    Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver

    2018-02-15

    Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns, across different samples, can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes. A common choice to measure similarity is to compute the sample correlation matrix. Dimensionality reduction is another popular data analysis task which is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise are: sampling variations, the presence of outlying sample units, and the fact that in most cases the number of genes is much larger than the number of sample units. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method achieves remarkable performance. Our correlation metric is more robust to outliers compared with the existing alternatives in two gene expression datasets. It is also shown how the regularization allows spurious correlations to be automatically detected and filtered. The same regularization is also extended to other less robust correlation measures. Finally, we apply the ARACNE algorithm on the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input. The R software is available at https://github.com/angy89/RobustSparseCorrelation.

  7. Generalizations of Tikhonov's regularized method of least squares to non-Euclidean vector norms

    NASA Astrophysics Data System (ADS)

    Volkov, V. V.; Erokhin, V. I.; Kakaev, V. V.; Onufrei, A. Yu.

    2017-09-01

    Tikhonov's regularized method of least squares and its generalizations to non-Euclidean norms, including polyhedral, are considered. The regularized method of least squares is reduced to mathematical programming problems obtained by "instrumental" generalizations of the Tikhonov lemma on the minimal (in a certain norm) solution of a system of linear algebraic equations with respect to an unknown matrix. Further studies are needed for problems concerning the development of methods and algorithms for solving reduced mathematical programming problems in which the objective functions and admissible domains are constructed using polyhedral vector norms.

  8. Firing patterns in the adaptive exponential integrate-and-fire model.

    PubMed

    Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram

    2008-11-01

    For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting and two types of tonic spiking. Also, we report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to recordings of real cortical neurons under step current stimulation. The results provide support for the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
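
    A minimal AdEx simulation under step current, with forward-Euler integration and the standard reset rule; the parameter values below are common illustrative choices, and varying (a, b, tau_w, V_r) moves the model between the firing patterns listed above:

    ```python
    # Adaptive exponential integrate-and-fire (AdEx) neuron:
    #   C dV/dt = -g_L (V - E_L) + g_L D_T exp((V - V_T)/D_T) - w + I
    #   tau_w dw/dt = a (V - E_L) - w;  on spike: V -> V_r, w -> w + b
    import numpy as np

    # Units: mV, ms, pF, nS; currents converted from nA to pA to match pF/ms.
    C, g_L, E_L, V_T, D_T = 281.0, 30.0, -70.6, -50.4, 2.0
    a, tau_w, b, V_r, V_peak = 4.0, 144.0, 0.0805, -70.6, 20.0
    dt, T, I = 0.1, 500.0, 1.0                     # step size, duration, input (nA)

    V, w, spikes = E_L, 0.0, []
    for step in range(int(T / dt)):
        dV = (-g_L * (V - E_L) + g_L * D_T * np.exp((V - V_T) / D_T)
              - w + I * 1e3) / C
        dw = (a * (V - E_L) - w) / tau_w
        V, w = V + dt * dV, w + dt * dw
        if V >= V_peak:                            # spike: reset and adapt
            V, w = V_r, w + b * 1e3
            spikes.append(step * dt)
    print(f"{len(spikes)} spikes; first ISIs (ms):", np.diff(spikes[:5]))
    ```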

  9. Principles of exercise physiology: responses to acute exercise and long-term adaptations to training.

    PubMed

    Rivera-Brown, Anita M; Frontera, Walter R

    2012-11-01

    Physical activity and fitness are associated with a lower prevalence of chronic diseases, such as heart disease, cancer, high blood pressure, and diabetes. This review discusses the body's response to an acute bout of exercise and long-term physiological adaptations to exercise training with an emphasis on endurance exercise. An overview is provided of skeletal muscle actions, muscle fiber types, and the major metabolic pathways involved in energy production. The importance of adequate fluid intake during exercise sessions to prevent impairments induced by dehydration on endurance exercise, muscular power, and strength is discussed. Physiological adaptations that result from regular exercise training such as increases in cardiorespiratory capacity and strength are mentioned. The review emphasizes the cardiovascular and metabolic adaptations that lead to improvements in maximal oxygen capacity. Copyright © 2012 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  10. An Excel‐based implementation of the spectral method of action potential alternans analysis

    PubMed Central

    Pearman, Charles M.

    2014-01-01

    Abstract Action potential (AP) alternans has been well established as a mechanism of arrhythmogenesis and sudden cardiac death. Proper interpretation of AP alternans requires a robust method of alternans quantification. Traditional methods of alternans analysis neglect higher order periodicities that may have greater pro‐arrhythmic potential than classical 2:1 alternans. The spectral method of alternans analysis, already widely used in the related study of microvolt T‐wave alternans, has also been used to study AP alternans. Software to meet the specific needs of AP alternans analysis is not currently available in the public domain. An AP analysis tool is implemented here, written in Visual Basic for Applications and using Microsoft Excel as a shell. This performs a sophisticated analysis of alternans behavior allowing reliable distinction of alternans from random fluctuations, quantification of alternans magnitude, and identification of which phases of the AP are most affected. In addition, the spectral method has been adapted to allow detection and quantification of higher order regular oscillations. Analysis of action potential morphology is also performed. A simple user interface enables easy import, analysis, and export of collated results. PMID:25501439
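
    A language-agnostic sketch of the spectral method on a beat-by-beat series (here in Python rather than VBA; the band limits and k-score form are common conventions from the T-wave alternans literature, not necessarily this tool's exact settings): alternans shows up as spectral power at 0.5 cycles/beat, compared against a reference noise band:

    ```python
    # Spectral alternans analysis of a per-beat measurement (e.g., APD per beat):
    # the power spectrum across beats has its alternans peak at 0.5 cycles/beat;
    # the k-score measures that peak against the mean and spread of a noise band.
    import numpy as np

    rng = np.random.default_rng(4)
    n_beats = 128
    apd = 200 + 2.0 * (-1) ** np.arange(n_beats) + rng.normal(0, 1.0, n_beats)

    x = apd - apd.mean()
    power = np.abs(np.fft.rfft(x)) ** 2 / n_beats
    freq = np.fft.rfftfreq(n_beats, d=1.0)            # cycles/beat, up to 0.5

    alt_power = power[-1]                             # bin at 0.5 cycles/beat
    noise = power[(freq >= 0.33) & (freq <= 0.48)]    # reference noise band
    k_score = (alt_power - noise.mean()) / noise.std()
    alt_mag = np.sqrt(max(alt_power - noise.mean(), 0) / n_beats)
    print(f"k-score = {k_score:.1f}, alternans magnitude = {alt_mag:.2f} ms")
    ```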

  11. Nested Conjugate Gradient Algorithm with Nested Preconditioning for Non-linear Image Restoration.

    PubMed

    Skariah, Deepak G; Arigovindan, Muthuvel

    2017-06-19

    We develop a novel optimization algorithm, which we call the Nested Non-Linear Conjugate Gradient algorithm (NNCG), for image restoration based on quadratic data fitting and smooth non-quadratic regularization. The algorithm is constructed as a nesting of two conjugate gradient (CG) iterations. The outer iteration is a preconditioned non-linear CG algorithm; the preconditioning is performed by the inner CG iteration, which is linear and is itself accelerated by a further FFT-based, non-iterative preconditioner. We prove that the method converges to a stationary point for both convex and non-convex regularization functionals. We demonstrate experimentally that the proposed method outperforms the well-known majorization-minimization method used for convex regularization, and a non-convex inertial-proximal method for non-convex regularization functionals.
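
    A simplified sketch of the nesting (not the authors' exact algorithm): an outer Polak-Ribiere non-linear CG whose preconditioner is applied by an inner linear CG on a fixed SPD approximation of the Hessian; the further FFT-based preconditioner of the inner iteration, and all operator choices below, are omitted or stand-ins:

    ```python
    # Outer: preconditioned nonlinear CG (PR+) on a smooth non-quadratic objective
    # F(x) = 0.5||Ax-b||^2 + lam * sum sqrt((Dx)^2 + eps).
    # Inner: linear CG applies M^{-1} with M = A'A + lam D'D (fixed SPD approx).
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import cg

    rng = np.random.default_rng(5)
    n = 200
    A = diags([1.0, 0.5, 0.25], [0, -1, -2], shape=(n, n)).tocsr()   # toy blur
    D = diags([1.0, -1.0], [0, 1], shape=(n - 1, n)).tocsr()         # differences
    x_true = np.repeat([0.0, 1.0, -0.5, 0.5], n // 4)
    b = A @ x_true + 0.01 * rng.normal(size=n)
    lam, eps = 0.05, 1e-3

    def F(y):
        return 0.5 * np.sum((A @ y - b) ** 2) + lam * np.sum(np.sqrt((D @ y) ** 2 + eps))

    def grad(y):
        u = D @ y
        return A.T @ (A @ y - b) + lam * (D.T @ (u / np.sqrt(u * u + eps)))

    M = (A.T @ A + lam * (D.T @ D)).tocsc()       # fixed SPD Hessian approximation

    def precond(g):                               # inner linear CG applies M^{-1}
        z, _ = cg(M, g, maxiter=50)
        return z

    x = np.zeros(n)
    g = grad(x); z = precond(g); d = -z
    for _ in range(100):                          # outer preconditioned PR+ CG
        t, f0, slope = 1.0, F(x), g @ d
        for _ in range(30):                       # backtracking line search
            if F(x + t * d) <= f0 + 1e-4 * t * slope:
                break
            t *= 0.5
        x = x + t * d
        g_new = grad(x); z_new = precond(g_new)
        beta = max(0.0, g_new @ (z_new - z) / (g @ z))
        d = -z_new + beta * d
        g, z = g_new, z_new
    print("final objective:", round(F(x), 4))
    ```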

  12. Direct Regularized Estimation of Retinal Vascular Oxygen Tension Based on an Experimental Model

    PubMed Central

    Yildirim, Isa; Ansari, Rashid; Yetik, I. Samil; Shahidi, Mahnaz

    2014-01-01

    Phosphorescence lifetime imaging is commonly used to generate oxygen tension maps of retinal blood vessels by the classical least squares (LS) estimation method. A spatial regularization method was later proposed and provided improved results. However, both methods obtain oxygen tension values from the estimates of intermediate variables, and do not yield an optimum estimate of oxygen tension values, due to their nonlinear dependence on the ratio of intermediate variables. In this paper, we provide an improved solution by devising a regularized direct least squares (RDLS) method that exploits available knowledge in studies that provide models of oxygen tension in retinal arteries and veins, unlike the earlier regularized LS approach where knowledge about intermediate variables is limited. The performance of the proposed RDLS method is evaluated by investigating and comparing the bias, variance, oxygen tension maps, 1-D profiles of arterial oxygen tension, and mean absolute error with those of earlier methods, and its superior performance is demonstrated both quantitatively and qualitatively. PMID:23732915

  13. AutoLens: Automated Modeling of a Strong Lens's Light, Mass and Source

    NASA Astrophysics Data System (ADS)

    Nightingale, J. W.; Dye, S.; Massey, Richard J.

    2018-05-01

    This work presents AutoLens, the first entirely automated modeling suite for the analysis of galaxy-scale strong gravitational lenses. AutoLens simultaneously models the lens galaxy's light and mass whilst reconstructing the extended source galaxy on an adaptive pixel-grid. The method's approach to source-plane discretization is amorphous, adapting its clustering and regularization to the intrinsic properties of the lensed source. The lens's light is fitted using a superposition of Sersic functions, allowing AutoLens to cleanly deblend its light from the source. Single component mass models representing the lens's total mass density profile are demonstrated, which in conjunction with light modeling can detect central images using a centrally cored profile. Decomposed mass modeling is also shown, which can fully decouple a lens's light and dark matter and determine whether the two components are geometrically aligned. The complexity of the light and mass models is automatically chosen via Bayesian model comparison. These steps form AutoLens's automated analysis pipeline, such that all results in this work are generated without any user-intervention. This is rigorously tested on a large suite of simulated images, assessing its performance on a broad range of lens profiles, source morphologies and lensing geometries. The method's performance is excellent, with accurate light, mass and source profiles inferred for data sets representative of both existing Hubble imaging and future Euclid wide-field observations.

  14. A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise

    NASA Astrophysics Data System (ADS)

    Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno

    2017-09-01

    While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decompositions in the projection domain produce a projection mass density (PMD) per material. From the decomposed projections, a tomographic reconstruction creates a 3D material density volume. The decomposition is obtained by minimizing a cost function; the variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role when decomposing data. That is why, in this paper, a new data fidelity term is used to account for the photon noise. In this work two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods decompose materials from a numerical phantom of a mouse. Soft tissue and bone are decomposed in the projection domain; a tomographic reconstruction then creates a 3D material density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in both 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
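
    The two data-fidelity terms being compared can be written down in a few lines; the sketch below evaluates both on simulated low-count Poisson data (the forward models are toy stand-ins, and the WLS weights are a common convention):

    ```python
    # WLS (Gaussian) vs Kullback-Leibler (Poisson) data fidelity for count data y
    # against forward-modeled counts yhat. Both are minimized by the true model;
    # KL is the statistically matched choice at low photon counts.
    import numpy as np

    def wls(y, yhat):
        """Weighted least squares, weights = 1/var, with var ~ y for counts."""
        w = 1.0 / np.maximum(y, 1.0)
        return 0.5 * np.sum(w * (yhat - y) ** 2)

    def kl(y, yhat):
        """Kullback-Leibler divergence matched to Poisson noise (yhat > 0)."""
        y = np.maximum(y, 1e-12)
        return np.sum(yhat - y + y * np.log(y / yhat))

    rng = np.random.default_rng(6)
    true_counts = np.full(1000, 20.0)                 # low photon counts
    y = rng.poisson(true_counts).astype(float)
    for scale in (0.8, 1.0, 1.2):                     # candidate forward models
        print(scale, "WLS:", round(wls(y, scale * true_counts), 1),
                     "KL:",  round(kl(y, scale * true_counts), 1))
    ```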

  15. Estimation of High-Dimensional Graphical Models Using Regularized Score Matching

    PubMed Central

    Lin, Lina; Drton, Mathias; Shojaie, Ali

    2017-01-01

    Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498
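
    In the Gaussian case the score matching loss is quadratic in the precision matrix K, namely J(K) = tr(K S K)/2 - tr(K) with S the sample covariance (minimized by K = S^{-1}), so ℓ1-regularized estimation reduces to a proximal-gradient iteration; a minimal sketch, penalizing off-diagonals only, which is a common convention rather than the paper's exact estimator:

    ```python
    # l1-regularized Gaussian score matching via ISTA: gradient of J(K) is
    # (SK + KS)/2 - I, followed by soft-thresholding of off-diagonal entries.
    import numpy as np

    rng = np.random.default_rng(7)
    p, n = 10, 500
    K_true = (np.eye(p) + np.diag(0.4 * np.ones(p - 1), 1)
              + np.diag(0.4 * np.ones(p - 1), -1))            # tridiagonal graph
    X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(K_true), size=n)
    S = X.T @ X / n

    lam = 0.1
    step = 1.0 / np.linalg.eigvalsh(S).max()          # Lipschitz constant of grad
    K = np.eye(p)
    for _ in range(500):
        grad = 0.5 * (S @ K + K @ S) - np.eye(p)
        K = K - step * grad
        off = ~np.eye(p, dtype=bool)                  # penalize off-diagonals only
        K[off] = np.sign(K[off]) * np.maximum(np.abs(K[off]) - step * lam, 0.0)
        K = 0.5 * (K + K.T)                           # keep the estimate symmetric
    print("recovered nonzero pattern:\n", (np.abs(K) > 0.05).astype(int))
    ```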

  16. Krylov subspace iterative methods for boundary element method based near-field acoustic holography.

    PubMed

    Valdivia, Nicolas; Williams, Earl G

    2005-02-01

    The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resultant matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence," i.e., the optimal regularized solution is obtained after a few iterations. If the iteration is not stopped, the method converges to a solution that is generally corrupted by the errors in the measurements. For these methods the number of iterations plays the role of the regularization parameter. We focus our attention on the regularizing properties of Krylov subspace methods such as conjugate gradients, least squares QR (LSQR), and the recently proposed hybrid method. A discussion and comparison of the available stopping rules is included. A vibrating plate is considered as an example to validate our results.
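
    Semi-convergence is easy to reproduce on a toy problem (operator, sizes, and noise level are illustrative): CGLS error first decreases and then grows as the iterates begin to fit the noise, so the stopping index acts as the regularization parameter:

    ```python
    # CGLS on a noisy discretized smoothing operator: the error to the true
    # solution dips at some iteration and then rises (semi-convergence).
    import numpy as np

    rng = np.random.default_rng(8)
    n = 100
    t = np.linspace(0, 1, n)
    A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.005) / n   # smoothing kernel
    x_true = np.sin(3 * np.pi * t)
    b = A @ x_true + 1e-3 * rng.normal(size=n)

    x = np.zeros(n)                                  # standard CGLS recursion
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma, norms = s @ s, []
    for _ in range(60):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
        norms.append(np.linalg.norm(x - x_true))
    best = int(np.argmin(norms))
    print(f"error minimal at iteration {best + 1}; errors at (1, best, 60):",
          [round(norms[i], 3) for i in (0, best, len(norms) - 1)])
    ```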

  17. Frontostriatal and behavioral adaptations to daily sugar-sweetened beverage intake: a randomized controlled trial.

    PubMed

    Burger, Kyle S

    2017-03-01

    Background: Current obesity theories suggest that the repeated intake of highly palatable high-sugar foods causes adaptations in the striatum, parietal lobe, and prefrontal and visual cortices in the brain that may serve to perpetuate consumption in a feed-forward manner. However, the data for humans are cross-sectional and observational, leaving little ability to determine the temporal precedence of repeated consumption on brain response. Objective: We tested the impact of regular sugar-sweetened beverage intake on brain and behavioral responses to beverage stimuli. Design: We performed an experiment with 20 healthy-weight individuals who were randomly assigned to consume 1 of 2 sugar-sweetened beverages daily for 21 d, underwent 2 functional MRI sessions, and completed behavioral and explicit hedonic assessments. Results: Consistent with preclinical experiments, daily beverage consumption resulted in decreases in dorsal striatal response during receipt of the consumed beverage (r = -0.46) and decreased ventromedial prefrontal response during logo-elicited anticipation (r = -0.44). This decrease in the prefrontal response correlated with increases in behavioral disinhibition toward the logo of the consumed beverage (r = 0.54; P = 0.02). Daily beverage consumption also increased precuneus response to both juice logos compared with a tasteless control (r = 0.45), suggesting a more generalized effect toward beverage cues. Last, the repeated consumption of 1 beverage resulted in an explicit hedonic devaluation of a similar nonconsumed beverage (P < 0.001). Conclusions: Analogous to previous reports, these initial results provide convergent data for a role of regular sugar-sweetened beverage intake in altering neurobehavioral responses to the regularly consumed beverage that may also extend to other beverage stimuli. Future research is required to provide evidence of replication in a larger sample and to establish whether the neurobehavioral adaptations observed herein are specific to high-sugar and/or nonnutritive-sweetened beverages or more generally related to the repeated consumption of any type of food. This trial was registered at clinicaltrials.gov as NCT02624206. © 2017 American Society for Nutrition.

  18. Frontostriatal and behavioral adaptations to daily sugar-sweetened beverage intake: a randomized controlled trial123

    PubMed Central

    2017-01-01

    Background: Current obesity theories suggest that the repeated intake of highly palatable high-sugar foods causes adaptations in the striatum, parietal lobe, and prefrontal and visual cortices in the brain that may serve to perpetuate consumption in a feed-forward manner. However, the data for humans are cross-sectional and observational, leaving little ability to determine the temporal precedence of repeated consumption on brain response. Objective: We tested the impact of regular sugar-sweetened beverage intake on brain and behavioral responses to beverage stimuli. Design: We performed an experiment with 20 healthy-weight individuals who were randomly assigned to consume 1 of 2 sugar-sweetened beverages daily for 21 d, underwent 2 functional MRI sessions, and completed behavioral and explicit hedonic assessments. Results: Consistent with preclinical experiments, daily beverage consumption resulted in decreases in dorsal striatal response during receipt of the consumed beverage (r = −0.46) and decreased ventromedial prefrontal response during logo-elicited anticipation (r = −0.44). This decrease in the prefrontal response correlated with increases in behavioral disinhibition toward the logo of the consumed beverage (r = 0.54; P = 0.02). Daily beverage consumption also increased precuneus response to both juice logos compared with a tasteless control (r = 0.45), suggesting a more generalized effect toward beverage cues. Last, the repeated consumption of 1 beverage resulted in an explicit hedonic devaluation of a similar nonconsumed beverage (P < 0.001). Conclusions: Analogous to previous reports, these initial results provide convergent data for a role of regular sugar-sweetened beverage intake in altering neurobehavioral responses to the regularly consumed beverage that may also extend to other beverage stimuli. Future research is required to provide evidence of replication in a larger sample and to establish whether the neurobehavioral adaptations observed herein are specific to high-sugar and/or nonnutritive-sweetened beverages or more generally related to the repeated consumption of any type of food. This trial was registered at clinicaltrials.gov as NCT02624206. PMID:28179221

  19. Probabilistic assessment of uncertain adaptive hybrid composites

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.

    1994-01-01

    Adaptive composite structures using actuation materials, such as piezoelectric fibers, were assessed probabilistically utilizing intraply hybrid composite mechanics in conjunction with probabilistic composite structural analysis. Uncertainties associated with the actuation material as well as the uncertainties in the regular (traditional) composite material properties were quantified and considered in the assessment. Static and buckling analyses were performed for rectangular panels with various boundary conditions and different control arrangements. The probability density functions of the structural behavior, such as maximum displacement and critical buckling load, were computationally simulated. The results of the assessment indicate that improved design and reliability can be achieved with actuation material.

  20. DIY: "Do Imaging Yourself" - Conventional microscopes as powerful tools for in vivo investigation.

    PubMed

    Antunes, Maísa Mota; Carvalho, Érika de; Menezes, Gustavo Batista

    2018-01-01

    Intravital imaging has been increasingly employed in cell biology studies and it is becoming one of the most powerful tools for in vivo investigation. Although some protocols can be extremely complex, most intravital imaging procedures can be performed using basic surgery and animal maintenance techniques. More importantly, regular confocal microscopes (the same ones used for imaging immunofluorescence slides) can also acquire high quality intravital images and movies after minor adaptations. Here we propose minimal adaptations in stock microscopes that allow major improvements in different fields of scientific investigation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. The ADaptation and Anticipation Model (ADAM) of sensorimotor synchronization

    PubMed Central

    van der Steen, M. C. (Marieke); Keller, Peter E.

    2013-01-01

    A constantly changing environment requires precise yet flexible timing of movements. Sensorimotor synchronization (SMS)—the temporal coordination of an action with events in a predictable external rhythm—is a fundamental human skill that contributes to optimal sensory-motor control in daily life. A large body of research related to SMS has focused on adaptive error correction mechanisms that support the synchronization of periodic movements (e.g., finger taps) with events in regular pacing sequences. The results of recent studies additionally highlight the importance of anticipatory mechanisms that support temporal prediction in the context of SMS with sequences that contain tempo changes. To investigate the role of adaptation and anticipatory mechanisms in SMS we introduce ADAM: an ADaptation and Anticipation Model. ADAM combines reactive error correction processes (adaptation) with predictive temporal extrapolation processes (anticipation) inspired by the computational neuroscience concept of internal models. The combination of simulations and experimental manipulations based on ADAM creates a novel and promising approach for exploring adaptation and anticipation in SMS. The current paper describes the conceptual basis and architecture of ADAM. PMID:23772211
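
    A toy numerical stand-in for the two ingredients (the equations below are illustrative simplifications, not the published ADAM formulation): reactive phase correction of each tap by a fraction of the last asynchrony (adaptation), plus linear extrapolation of the changing inter-onset interval (anticipation):

    ```python
    # Tapping to a pacing sequence with a gradual tempo change. Each next tap is
    # scheduled from a linearly extrapolated interval (anticipation) minus a
    # fraction alpha of the last asynchrony (adaptation), plus motor noise.
    import numpy as np

    rng = np.random.default_rng(9)
    n = 60
    ioi = np.linspace(600, 450, n)                # inter-onset intervals (ms)
    onsets = np.cumsum(ioi)

    alpha = 0.6                                   # phase-correction gain
    tap = onsets[0] + 30.0                        # first tap, 30 ms late
    asyncs = []
    for k in range(1, n - 1):
        e = tap - onsets[k - 1]                   # asynchrony at last event
        asyncs.append(e)
        predicted_ioi = 2 * ioi[k - 1] - ioi[k - 2] if k >= 2 else ioi[k - 1]
        tap = tap + predicted_ioi - alpha * e + rng.normal(0, 5.0)
    print("mean |asynchrony| (ms):", round(np.mean(np.abs(asyncs)), 1))
    ```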

  2. Are HIV and reproductive health services adapted to the needs of female sex workers? Results of a policy and situational analysis in Tete, Mozambique.

    PubMed

    Lafort, Yves; Jocitala, Osvaldo; Candrinho, Balthazar; Greener, Letitia; Beksinska, Mags; Smit, Jenni A; Chersich, Matthew; Delva, Wim

    2016-07-26

    In the context of an implementation research project aiming at improving the use of HIV and sexual and reproductive health (SRH) services for female sex workers (FSWs), a broad situational analysis was conducted in Tete, Mozambique, assessing whether services are adapted to the needs of FSWs. Methods comprised (1) a policy analysis including a review of national guidelines and interviews with policy makers, and (2) health facility assessments at 6 public and 1 private health facilities, and 1 clinic specifically targeting FSWs, consisting of an audit checklist, interviews with 18 HIV/SRH care providers and interviews of 99 HIV/SRH care users. There exist national guidelines for most HIV/SRH care services, but none provides guidance for care adapted to the needs of high-risk women such as FSWs. The Ministry of Health recently initiated the process of establishing guidelines for attendance of key populations, including FSWs, at public health facilities. Policy makers have different views on the best approach for providing services to FSWs (integrated in the general health services or through parallel services for key populations) and there exists no national strategy. The most important provider of HIV/SRH services in the study area is the government. Most basic services are widely available, with the exception of certain family planning methods, cervical cancer screening, services for victims of sexual and gender-based violence, and termination of pregnancy (TOP). The public facilities face serious limitations in terms of space, staff, equipment, regular supplies and adequate provider practices. A stand-alone clinic targeting key populations offers a limited range of services to the FSW population in part of the area. Private clinics offer only a few services, at commercial prices. There is a need to improve the availability of quality HIV/SRH services in general and to FSWs specifically, and to develop guidelines for care adapted to the needs of FSWs. Access for FSWs can be improved by either expanding the range of services and the coverage of the targeted clinic and/or by improving access to adapted care at the public health services, ensuring a minimum standard of quality.

  3. Sex Partnership and Self-Efficacy Influence Depression in Chinese Transgender Women: A Cross-Sectional Study

    PubMed Central

    Yang, Xiaoshi; Wang, Lie; Hao, Chun; Gu, Yuan; Song, Wei; Wang, Jian; Chang, Margaret M.; Zhao, Qun

    2015-01-01

    Background Transgender women often suffer from transition-related discrimination and loss of social support due to their gender transition, which may pose considerable psychological challenges and may lead to a high prevalence of depression in this population. Increased self-efficacy may combat the adverse effects of gender transition on depression. However, few available studies have investigated the protective effect of self-efficacy on depression among transgender women, and there is a scarcity of research describing the mental health of Chinese transgender women. This study aims to describe the prevalence of depression among Chinese transgender women and to explore the associated factors. Methods A cross-sectional study was conducted in Shenyang, Liaoning Province of China by convenience sampling from January 2014 to July 2014. Two hundred and nine Chinese transgender women were interviewed face-to-face with questionnaires that covered topics including the Zung Self-Rating Depression Scale (SDS), demographic characteristics, transition status, sex partnership, perceived transgender-related discrimination, the Multidimensional Scale of Perceived Social Support (MSPSS) and the adapted General Self-efficacy Scale (GSES). A hierarchical multiple regression analysis was performed to explore the factors associated with SDS scores. Results The prevalence of depression among transgender women was 45.35%. Transgender women with regular partners or casual partners exhibited higher SDS scores than those without regular partners or casual partners. Regression analyses showed that sex partnership explained most (16.6%) of the total variance in depression scores. Self-efficacy was negatively associated with depression. Conclusions Chinese transgender women experienced high levels of depression. Depression was best predicted by whether transgender women had a regular partner or a casual partner rather than transgender-related discrimination and transition status. Moreover, self-efficacy had positive effects on attenuating depression due to gender transition. Therefore, interventions should focus on improving the sense of self-efficacy among these women to enable them to cope with depression and to determine risky sex partnership characteristics, especially for regular and casual partners. PMID:26367265

  4. Context-aware adaptive spelling in motor imagery BCI

    NASA Astrophysics Data System (ADS)

    Perdikis, S.; Leeb, R.; Millán, J. d. R.

    2016-06-01

    Objective. This work presents a first motor imagery-based, adaptive brain-computer interface (BCI) speller, which is able to exploit application-derived context for improved, simultaneous classifier adaptation and spelling. Online spelling experiments with ten able-bodied users evaluate the ability of our scheme, first, to alleviate non-stationarity of brain signals for restoring the subject’s performances, second, to guide naive users into BCI control avoiding initial offline BCI calibration and, third, to outperform regular unsupervised adaptation. Approach. Our co-adaptive framework combines the BrainTree speller with smooth-batch linear discriminant analysis adaptation. The latter enjoys contextual assistance through BrainTree’s language model to improve online expectation-maximization maximum-likelihood estimation. Main results. Our results verify the possibility to restore single-sample classification and BCI command accuracy, as well as spelling speed for expert users. Most importantly, context-aware adaptation performs significantly better than its unsupervised equivalent and similar to the supervised one. Although no significant differences are found with respect to the state-of-the-art PMean approach, the proposed algorithm is shown to be advantageous for 30% of the users. Significance. We demonstrate the possibility to circumvent supervised BCI recalibration, saving time without compromising the adaptation quality. On the other hand, we show that this type of classifier adaptation is not as efficient for BCI training purposes.

  5. Context-aware adaptive spelling in motor imagery BCI.

    PubMed

    Perdikis, S; Leeb, R; Millán, J D R

    2016-06-01

    This work presents a first motor imagery-based, adaptive brain-computer interface (BCI) speller, which is able to exploit application-derived context for improved, simultaneous classifier adaptation and spelling. Online spelling experiments with ten able-bodied users evaluate the ability of our scheme, first, to alleviate non-stationarity of brain signals for restoring the subject's performances, second, to guide naive users into BCI control avoiding initial offline BCI calibration and, third, to outperform regular unsupervised adaptation. Our co-adaptive framework combines the BrainTree speller with smooth-batch linear discriminant analysis adaptation. The latter enjoys contextual assistance through BrainTree's language model to improve online expectation-maximization maximum-likelihood estimation. Our results verify the possibility to restore single-sample classification and BCI command accuracy, as well as spelling speed for expert users. Most importantly, context-aware adaptation performs significantly better than its unsupervised equivalent and similar to the supervised one. Although no significant differences are found with respect to the state-of-the-art PMean approach, the proposed algorithm is shown to be advantageous for 30% of the users. We demonstrate the possibility to circumvent supervised BCI recalibration, saving time without compromising the adaptation quality. On the other hand, we show that this type of classifier adaptation is not as efficient for BCI training purposes.

  6. SU-E-J-208: Fast and Accurate Auto-Segmentation of Abdominal Organs at Risk for Online Adaptive Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, V; Wang, Y; Romero, A

    2014-06-01

    Purpose: Various studies have demonstrated that online adaptive radiotherapy by real-time re-optimization of the treatment plan can improve organs-at-risk (OARs) sparing in the abdominal region. Its clinical implementation, however, requires fast and accurate auto-segmentation of OARs in CT scans acquired just before each treatment fraction. Auto-segmentation is particularly challenging in the abdominal region due to the frequently observed large deformations. We present a clinical validation of a new auto-segmentation method that uses fully automated non-rigid registration for propagating abdominal OAR contours from planning to daily treatment CT scans. Methods: OARs were manually contoured by an expert panel to obtain ground truth contours for repeat CT scans (3 per patient) of 10 patients. For the non-rigid alignment, we used a new non-rigid registration method that estimates the deformation field by optimizing local normalized correlation coefficient with smoothness regularization. This field was used to propagate planning contours to repeat CTs. To quantify the performance of the auto-segmentation, we compared the propagated and ground truth contours using two widely used metrics: Dice coefficient (Dc) and Hausdorff distance (Hd). The proposed method was benchmarked against translation and rigid alignment based auto-segmentation. Results: For all organs, the auto-segmentation performed better than the baseline (translation) with an average processing time of 15 s per fraction CT. The overall improvements ranged from 2% (heart) to 32% (pancreas) in Dc, and 27% (heart) to 62% (spinal cord) in Hd. For liver, kidneys, gall bladder, stomach, spinal cord and heart, Dc above 0.85 was achieved. Duodenum and pancreas were the most challenging organs with both showing relatively larger spreads and medians of 0.79 and 2.1 mm for Dc and Hd, respectively. Conclusion: Based on the achieved accuracy and computational time we conclude that the investigated auto-segmentation method overcomes an important hurdle to the clinical implementation of online adaptive radiotherapy. Partial funding for this work was provided by Accuray Incorporated as part of a research collaboration with Erasmus MC Cancer Institute.
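
    The two evaluation metrics used above are straightforward to compute for binary masks; a minimal sketch with a toy 3-D example (mask shapes and shifts are illustrative), using SciPy's directed Hausdorff distance, symmetrized:

    ```python
    # Dice coefficient (voxel overlap) and symmetric Hausdorff distance (on the
    # point sets of segmented voxels) between two binary segmentation masks.
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def dice(a, b):
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def hausdorff(a, b):
        pa, pb = np.argwhere(a), np.argwhere(b)
        return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

    grid = np.zeros((40, 40, 40), dtype=bool)          # toy 3-D volume
    truth, auto = grid.copy(), grid.copy()
    truth[10:30, 10:30, 10:30] = True                  # "ground truth" organ
    auto[12:32, 10:30, 10:30] = True                   # propagated contour, shifted
    print(f"Dice = {dice(auto, truth):.3f}, Hausdorff = {hausdorff(auto, truth):.1f} voxels")
    ```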

  7. Solving the hypersingular boundary integral equation in three-dimensional acoustics using a regularization relationship.

    PubMed

    Yan, Zai You; Hung, Kin Chew; Zheng, Hui

    2003-05-01

    Regularization of the hypersingular integral in the normal derivative of the conventional Helmholtz integral equation through a double surface integral method or regularization relationship has been studied. By introducing the new concept of discretized operator matrix, evaluation of the double surface integrals is reduced to calculate the product of two discretized operator matrices. Such a treatment greatly improves the computational efficiency. As the number of frequencies to be computed increases, the computational cost of solving the composite Helmholtz integral equation is comparable to that of solving the conventional Helmholtz integral equation. In this paper, the detailed formulation of the proposed regularization method is presented. The computational efficiency and accuracy of the regularization method are demonstrated for a general class of acoustic radiation and scattering problems. The radiation of a pulsating sphere, an oscillating sphere, and a rigid sphere insonified by a plane acoustic wave are solved using the new method with curvilinear quadrilateral isoparametric elements. It is found that the numerical results rapidly converge to the corresponding analytical solutions as finer meshes are applied.

  8. Primal-dual convex optimization in large deformation diffeomorphic metric mapping: LDDMM meets robust regularizers

    NASA Astrophysics Data System (ADS)

    Hernandez, Monica

    2017-12-01

    This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on the Chambolle and Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers liable to provide acceptable results in diffeomorphic registration: Huber, V-Huber and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented for running on the GPU using Cuda. For the most memory-consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study in the NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of the objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed a competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.
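
    For orientation, the Chambolle-Pock iteration itself can be shown on the simplest robust-regularization problem, TV (ROF) denoising, rather than the paper's LDDMM energies; the dual projection, primal proximal step, and over-relaxation below are the same ingredients (step sizes follow the standard tau*sigma*L^2 <= 1 rule):

    ```python
    # Chambolle-Pock primal-dual iteration for min_x 0.5||x - f||^2 + lam*TV(x):
    # dual ascent on y (projected onto |y| <= lam), primal proximal descent on x,
    # and over-relaxation x_bar = x + theta*(x - x_old).
    import numpy as np

    def grad(u):                                      # forward differences, 2-D
        gx = np.zeros_like(u); gy = np.zeros_like(u)
        gx[:-1, :] = u[1:, :] - u[:-1, :]
        gy[:, :-1] = u[:, 1:] - u[:, :-1]
        return gx, gy

    def div(px, py):                                  # negative adjoint of grad
        dx = px.copy(); dx[1:, :] -= px[:-1, :]
        dy = py.copy(); dy[:, 1:] -= py[:, :-1]
        return dx + dy

    rng = np.random.default_rng(10)
    f = np.zeros((64, 64)); f[16:48, 16:48] = 1.0
    f += 0.1 * rng.normal(size=f.shape)               # noisy piecewise image

    lam, L = 0.1, np.sqrt(8.0)                        # TV weight, ||grad|| bound
    tau = sigma = 1.0 / L
    theta = 1.0
    x = f.copy(); x_bar = x.copy()
    yx = np.zeros_like(f); yy = np.zeros_like(f)

    for _ in range(200):
        gx, gy = grad(x_bar)                          # dual step + projection
        yx, yy = yx + sigma * gx, yy + sigma * gy
        nrm = np.maximum(1.0, np.sqrt(yx ** 2 + yy ** 2) / lam)
        yx, yy = yx / nrm, yy / nrm
        x_old = x                                     # primal step + prox of G
        x = (x + tau * div(yx, yy) + tau * f) / (1.0 + tau)
        x_bar = x + theta * (x - x_old)

    tv = lambda u: np.hypot(*grad(u)).sum()
    print("TV before:", round(tv(f), 1), " TV after:", round(tv(x), 1))
    ```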

  9. Green Jobs: Definition and Method of Appraisal of Chemical and Biological Risks.

    PubMed

    Cheneval, Erwan; Busque, Marc-Antoine; Ostiguy, Claude; Lavoie, Jacques; Bourbonnais, Robert; Labrèche, France; Bakhiyi, Bouchra; Zayed, Joseph

    2016-04-01

    In the wake of sustainable development, green jobs are developing rapidly, changing the work environment. However, a green job is not automatically a safe job. The aim of the study was to define green jobs, and to establish a preliminary risk assessment of chemical substances and biological agents for workers in Quebec. An operational definition was developed, along with criteria and sustainable development principles to discriminate green jobs from regular jobs. The potential toxicity or hazard associated with their chemical and biological exposures was assessed, and the workers' exposure appraised using an expert assessment method. A control banding approach was then used to assess risks for workers in selected green jobs. A double entry model allowed us to set priorities in terms of chemical or biological risk. Among jobs that present the highest risk potential, several are related to waste management. The developed method is flexible and could be adapted to better appraise the risks that workers are facing or to propose control measures. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  10. A flexible motif search technique based on generalized profiles.

    PubMed

    Bucher, P; Karplus, K; Moeri, N; Hofmann, K

    1996-03-01

    A flexible motif search technique is presented which has two major components: (1) a generalized profile syntax serving as a motif definition language; and (2) a motif search method specifically adapted to the problem of finding multiple instances of a motif in the same sequence. The new profile structure, which is the core of the generalized profile syntax, combines the functions of a variety of motif descriptors implemented in other methods, including regular expression-like patterns, weight matrices, previously used profiles, and certain types of hidden Markov models (HMMs). The relationship between generalized profiles and other biomolecular motif descriptors is analyzed in detail, with special attention to HMMs. Generalized profiles are shown to be equivalent to a particular class of HMMs, and conversion procedures in both directions are given. The conversion procedures provide an interpretation for local alignment in the framework of stochastic models, allowing for clear, simple significance tests. A mathematical statement of the motif search problem defines the new method exactly without linking it to a specific algorithmic solution. Part of the definition includes a new definition of disjointness of alignments.
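
    A toy analogue of the weight-matrix component of generalized profiles (gaps, transitions, and the HMM machinery are omitted, and the matrix values are hypothetical): score every window of a sequence with a log-odds matrix and report all positions above a threshold, i.e., multiple motif instances per sequence:

    ```python
    # Scan a DNA sequence with a position weight matrix (rows = alphabet,
    # columns = motif positions) and report every window scoring >= threshold.
    import numpy as np

    alphabet = {c: i for i, c in enumerate("ACGT")}
    W = np.full((4, 4), -1.0)                     # hypothetical log-odds values
    for j, base in enumerate("ACGT"):             # motif with consensus "ACGT"
        W[alphabet[base], j] = 2.0

    def scan(seq, W, threshold):
        L = W.shape[1]
        hits = []
        for i in range(len(seq) - L + 1):
            score = sum(W[alphabet[c], j] for j, c in enumerate(seq[i:i + L]))
            if score >= threshold:
                hits.append((i, score))           # multiple instances allowed
        return hits

    print(scan("TTACGTGGACGTAACGA", W, threshold=5.0))
    ```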

  11. Introduction of Total Variation Regularization into Filtered Backprojection Algorithm

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Wiślicki, W.; Klimaszewski, K.; Krzemień, W.; Kowalski, P.; Shopa, R. Y.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    In this paper we extend the state-of-the-art filtered backprojection (FBP) method by applying the concept of total variation regularization. We compare the performance of the new algorithm with the most common form of regularization in FBP image reconstruction, via apodizing functions. The methods are validated in terms of the cross-correlation coefficient between the reconstructed and true images of the radioactive tracer distribution, using a standard Derenzo-type phantom. We demonstrate that the proposed approach results in higher cross-correlation values with respect to the standard FBP method.
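
    The conventional regularization that the paper compares against amounts to multiplying the ramp filter by an apodizing window before backprojection. A minimal sketch of that step follows (the TV-regularized variant itself requires an iterative solver and is not shown):

        import numpy as np

        def fbp_filter(n_det, apodize=True):
            freqs = np.fft.fftfreq(n_det)                # cycles per sample
            ramp = np.abs(freqs)                         # ideal ramp filter |f|
            if apodize:
                ramp *= 0.5 * (1.0 + np.cos(2 * np.pi * freqs))  # Hann window
            return ramp

        def filter_projection(proj, apodize=True):
            """Filter one sinogram row in Fourier space prior to backprojection."""
            H = fbp_filter(proj.size, apodize)
            return np.real(np.fft.ifft(np.fft.fft(proj) * H))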

  12. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.

    PubMed

    Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen

    2011-04-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.
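
    Schematically, the estimator solves a penalized least-squares problem with an adaptive penalty on the functional component norms, of the form

        \min_{f} \; \frac{1}{n} \sum_{i=1}^{n} \big( y_i - f(x_i) \big)^2
        + \lambda \sum_{j=1}^{p} w_j \, \lVert P_j f \rVert,
        \qquad w_j = \lVert P_j \tilde{f} \rVert^{-\gamma},

    where P_j f is the j-th functional component in the smoothing spline ANOVA decomposition and \tilde{f} is an initial consistent estimate; this is a generic adaptive form, and the exact weighting used in the paper may differ.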

  13. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property

    PubMed Central

    Storlie, Curtis B.; Bondell, Howard D.; Reich, Brian J.; Zhang, Hao Helen

    2010-01-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting. PMID:21603586

  14. Resolution of the 1D regularized Burgers equation using a spatial wavelet approximation

    NASA Technical Reports Server (NTRS)

    Liandrat, J.; Tchamitchian, PH.

    1990-01-01

    The Burgers equation with a small viscosity term and with initial and periodic boundary conditions is solved using a spatial approximation constructed from an orthonormal basis of wavelets. The algorithm is directly derived from the notions of multiresolution analysis and tree algorithms, which are first recalled before the numerical algorithm is described. The method makes extensive use of the localization properties of the wavelets in the physical and Fourier spaces. Moreover, the authors take advantage of the fact that the involved linear operators have constant coefficients. Finally, the algorithm can be considered as a time-marching version of the tree algorithm. The most important point is that an adaptive version of the algorithm exists: it allows one to significantly reduce the number of degrees of freedom required for a good computation of the solution. Numerical results and a description of the different elements of the algorithm are provided, together with mathematical comments on the method and comparisons with more classical numerical algorithms.
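
    For reference, the problem being solved is the viscous (regularized) Burgers equation with periodic boundary conditions,

        \partial_t u + u \, \partial_x u = \nu \, \partial_x^2 u,
        \qquad u(x, 0) = u_0(x),

    with small viscosity \nu, whose sharp quasi-shock layers are what make an adaptive wavelet representation effective.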

  15. Källén-Lehmann spectroscopy for (un)physical degrees of freedom

    NASA Astrophysics Data System (ADS)

    Dudal, David; Oliveira, Orlando; Silva, Paulo J.

    2014-01-01

    We consider the problem of "measuring" the Källén-Lehmann spectral density of a particle (be it elementary or a bound state) propagator by means of 4D lattice data. As the latter are obtained from operations at Euclidean momentum squared p^2 ≥ 0, we are facing the generically ill-posed problem of converting a limited data set over the positive real axis to an integral representation extending over the whole complex p^2 plane. We employ a linear regularization strategy, commonly known as the Tikhonov method with the Morozov discrepancy principle, with suitable adaptations to realistic data, e.g. with an unknown threshold. An important virtue over the (standard) maximum entropy method is the possibility to also probe unphysical spectral densities, for example, of a confined gluon. We apply our proposal here to "physical" mock spectral data as a litmus test and then to the lattice SU(3) Landau gauge gluon at zero temperature.
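
    A minimal numerical sketch of this inversion strategy assumes the Källén-Lehmann representation G(p^2) = ∫ ρ(μ) / (p^2 + μ) dμ, discretizes the kernel, and shrinks the Tikhonov parameter until the Morozov discrepancy is met; the grids, mock density, and noise level below are illustrative choices, not the authors' setup:

        import numpy as np

        p2 = np.linspace(0.1, 10.0, 60)           # Euclidean momenta squared
        mu = np.linspace(0.0, 12.0, 120)          # spectral variable
        K = (mu[1] - mu[0]) / (p2[:, None] + mu[None, :])  # discretized kernel

        rho_true = np.exp(-(mu - 3.0) ** 2)       # mock spectral density
        delta = 1e-3                              # assumed noise level
        G = K @ rho_true + delta * np.random.randn(p2.size)

        def tikhonov(lam):
            return np.linalg.solve(K.T @ K + lam * np.eye(mu.size), K.T @ G)

        # Morozov discrepancy principle: decrease lambda until the residual
        # norm matches the noise level.
        lam = 1.0
        while (np.linalg.norm(K @ tikhonov(lam) - G) > delta * np.sqrt(p2.size)
               and lam > 1e-12):
            lam *= 0.5
        rho = tikhonov(lam)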

  16. Generation of a conditional analog-sensitive kinase in human cells using CRISPR/Cas9-mediated genome engineering.

    PubMed

    Moyer, Tyler C; Holland, Andrew J

    2015-01-01

    The ability to rapidly and specifically modify the genome of mammalian cells has been a long-term goal of biomedical researchers. Recently, the clustered, regularly interspaced, short palindromic repeats (CRISPR)/Cas9 system from bacteria has been exploited for genome engineering in human cells. The CRISPR system directs the RNA-guided Cas9 nuclease to a specific genomic locus to induce a DNA double-strand break that may be subsequently repaired by homology-directed repair using an exogenous DNA repair template. Here we describe a protocol using CRISPR/Cas9 to achieve bi-allelic insertion of a point mutation in human cells. Using this method, homozygous clonal cell lines can be constructed in 5-6 weeks. This method can also be adapted to insert larger DNA elements, such as fluorescent proteins and degrons, at defined genomic locations. CRISPR/Cas9 genome engineering offers exciting applications in both basic science and translational research. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Graph Frequency Analysis of Brain Signals

    PubMed Central

    Huang, Weiyu; Goldsberry, Leah; Wymbs, Nicholas F.; Grafton, Scott T.; Bassett, Danielle S.; Ribeiro, Alejandro

    2016-01-01

    This paper presents methods to analyze functional brain networks and signals from graph spectral perspectives. The notions of frequency and filtering, traditionally defined for signals supported on regular domains such as discrete time and image grids, have recently been generalized to irregular graph domains and define brain graph frequencies associated with different levels of spatial smoothness across the brain regions. Brain network frequency also enables the decomposition of brain signals into pieces corresponding to smooth or rapid variations. We relate graph frequency to principal component analysis when the networks of interest denote functional connectivity. The methods are utilized to analyze brain networks and signals as subjects master a simple motor skill. We observe that brain signals corresponding to different graph frequencies exhibit different levels of adaptability throughout learning. Further, we notice a strong association between the graph spectral properties of brain networks and the level of exposure to the tasks performed, and recognize the most contributing and important frequency signatures at different levels of task familiarity. PMID:28439325
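
    A minimal version of the graph-frequency decomposition, assuming a symmetric functional connectivity matrix W, takes the eigenvectors of the graph Laplacian as the graph Fourier basis and splits a brain signal into smooth, moderate, and rapid spatial variations:

        import numpy as np

        def graph_frequency_split(W, x, n_low=10, n_high=10):
            L = np.diag(W.sum(axis=1)) - W     # combinatorial graph Laplacian
            eigval, U = np.linalg.eigh(L)      # columns ordered by graph frequency
            x_hat = U.T @ x                    # graph Fourier transform
            low = U[:, :n_low] @ x_hat[:n_low]        # smooth variation
            high = U[:, -n_high:] @ x_hat[-n_high:]   # rapid variation
            return low, x - low - high, high          # low / middle / high bands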

  18. Using intervention mapping to promote the receipt of clinical preventive services among women with physical disabilities.

    PubMed

    Suzuki, Rie; Peterson, Jana J; Weatherby, Amanda V; Buckley, David I; Walsh, Emily S; Kailes, June Isaacson; Krahn, Gloria L

    2012-01-01

    This article describes the development of Promoting Access to Health Services (PATHS), an intervention to promote regular use of clinical preventive services by women with physical disabilities. The intervention was developed using intervention mapping (IM), a theory-based logical process that incorporates the six steps of assessment of need, preparation of matrices, selection of theoretical methods and strategies, program design, program implementation, and evaluation. The development process used methods and strategies aligned with the social cognitive theory and the health belief model. PATHS was adapted from the workbook Making Preventive Health Care Work for You, developed by a disability advocate, and was informed by participant input at five points: at inception through consultation by the workbook author, in conceptualization through a town hall meeting, in pilot testing with feedback, in revision of the curriculum through an advisory group, and in implementation by trainers with disabilities. The resulting PATHS program is a 90-min participatory small-group workshop, followed by structured telephone support for 6 months.

  19. Helicobacter pylori Adaptation In Vivo in Response to a High-Salt Diet

    PubMed Central

    Loh, John T.; Gaddy, Jennifer A.; Algood, Holly M. Scott; Gaudieri, Silvana; Mallal, Simon

    2015-01-01

    Helicobacter pylori exhibits a high level of intraspecies genetic diversity. In this study, we investigated whether the diversification of H. pylori is influenced by the composition of the diet. Specifically, we investigated the effect of a high-salt diet (a known risk factor for gastric adenocarcinoma) on H. pylori diversification within a host. We analyzed H. pylori strains isolated from Mongolian gerbils fed either a high-salt diet or a regular diet for 4 months by proteomic and whole-genome sequencing methods. Compared to the input strain and output strains from animals fed a regular diet, the output strains from animals fed a high-salt diet produced higher levels of proteins involved in iron acquisition and oxidative-stress resistance. Several of these changes were attributable to a nonsynonymous mutation in fur (fur-R88H). Further experiments indicated that this mutation conferred increased resistance to high-salt conditions and oxidative stress. We propose a model in which a high-salt diet leads to high levels of gastric inflammation and associated oxidative stress in H. pylori-infected animals and that these conditions, along with the high intraluminal concentrations of sodium chloride, lead to selection of H. pylori strains that are most fit for growth in this environment. PMID:26438795

  20. 42 CFR 61.6 - Method of application.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false Method of application. 61.6 Section 61.6 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.6 Method of application. Application for a regular fellowship shall be...

  1. 42 CFR 61.6 - Method of application.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false Method of application. 61.6 Section 61.6 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.6 Method of application. Application for a regular fellowship shall be...

  2. 42 CFR 61.6 - Method of application.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false Method of application. 61.6 Section 61.6 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.6 Method of application. Application for a regular fellowship shall be...

  3. 42 CFR 61.6 - Method of application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Method of application. 61.6 Section 61.6 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.6 Method of application. Application for a regular fellowship shall be...

  4. 42 CFR 61.6 - Method of application.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false Method of application. 61.6 Section 61.6 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.6 Method of application. Application for a regular fellowship shall be...

  5. Self-Avoiding Walks over Adaptive Triangular Grids

    NASA Technical Reports Server (NTRS)

    Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)

    1998-01-01

    In this paper, we present a new approach to constructing a "self-avoiding" walk through a triangular mesh. Unlike the popular approach of visiting mesh elements using space-filling curves, which is based on a geometric embedding, our approach is combinatorial in the sense that it uses the mesh connectivity only. We present an algorithm for constructing a self-avoiding walk which can be applied to any unstructured triangular mesh. The complexity of the algorithm is O(n log n), where n is the number of triangles in the mesh. We show that for hierarchical adaptive meshes, the algorithm can be easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the run-time partitioning and load balancing of adaptive unstructured grids.
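
    Since the construction uses mesh connectivity only, its natural starting point is the dual graph of the mesh, in which two triangles are adjacent if and only if they share an edge. A small sketch of that step follows; the walk construction built on top of it in the paper is considerably more involved:

        from collections import defaultdict

        def dual_graph(triangles):
            """Map each triangle index to the set of edge-sharing neighbors."""
            edge_to_tris = defaultdict(list)
            for t, (a, b, c) in enumerate(triangles):
                for e in ((a, b), (b, c), (a, c)):
                    edge_to_tris[tuple(sorted(e))].append(t)
            adj = defaultdict(set)
            for tris in edge_to_tris.values():
                for i in tris:
                    adj[i].update(j for j in tris if j != i)
            return adj

        print(dual_graph([(0, 1, 2), (1, 2, 3), (2, 3, 4)]))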

  6. AOF LTAO mode: reconstruction strategy and first test results

    NASA Astrophysics Data System (ADS)

    Oberti, Sylvain; Kolb, Johann; Le Louarn, Miska; La Penna, Paolo; Madec, Pierre-Yves; Neichel, Benoit; Sauvage, Jean-François; Fusco, Thierry; Donaldson, Robert; Soenke, Christian; Suárez Valles, Marcos; Arsenault, Robin

    2016-07-01

    GALACSI is the Adaptive Optics (AO) system serving the instrument MUSE in the framework of the Adaptive Optics Facility (AOF) project. Its Narrow Field Mode (NFM) is a Laser Tomography AO (LTAO) mode delivering high resolution in the visible across a small Field of View (FoV) of 7.5" diameter around the optical axis. From a reconstruction standpoint, GALACSI NFM intends to optimize the correction on axis by estimating the turbulence in volume via a tomographic process, then projecting the turbulence profile onto one single Deformable Mirror (DM) located in the pupil, close to the ground. In this paper, the laser tomographic reconstruction process is described. Several methods (virtual DM, virtual layer projection) are studied, under the constraint of a single matrix-vector multiplication. The pseudo-synthetic interaction matrix model and the LTAO reconstructor design are analysed. Moreover, the reconstruction parameter space is explored, in particular the regularization terms. Furthermore, we present here the strategy to define the modal control basis and split the reconstruction between the Low Order (LO) loop and the High Order (HO) loop. Finally, closed-loop performance obtained with a 3D turbulence generator will be analysed with respect to the most relevant system parameters to be tuned.
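
    Schematically, a tomographic reconstructor of this kind that fits in a single matrix-vector multiplication can be written as

        R = P \left( A^{\top} C_\eta^{-1} A + \lambda \, C_\phi^{-1} \right)^{-1} A^{\top} C_\eta^{-1},

    where A is the tomographic interaction matrix from the turbulent volume to the laser wavefront sensors, C_\eta and C_\phi are the noise and turbulence covariances entering the regularization terms, and P projects the reconstructed volume onto the single pupil-conjugated DM. This is a generic MMSE-style form; the exact GALACSI reconstructor may differ.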

  7. High-Resolution Numerical Simulation and Analysis of Mach Reflection Structures in Detonation Waves in Low-Pressure H 2 –O 2 –Ar Mixtures: A Summary of Results Obtained with the Adaptive Mesh Refinement Framework AMROC

    DOE PAGES

    Deiterding, Ralf

    2011-01-01

    Numerical simulation can be key to the understanding of the multidimensional nature of transient detonation waves. However, the accurate approximation of realistic detonations is demanding, as a wide range of scales needs to be resolved. This paper describes a successful solution strategy that utilizes logically rectangular dynamically adaptive meshes. The hydrodynamic transport scheme and the treatment of the nonequilibrium reaction terms are sketched. A ghost fluid approach is integrated into the method to allow for embedded geometrically complex boundaries. Large-scale parallel simulations of unstable detonation structures of Chapman-Jouguet detonations in low-pressure hydrogen-oxygen-argon mixtures demonstrate the efficiency of the described techniques in practice. In particular, computations of regular cellular structures in two and three space dimensions and their development under transient conditions, that is, under diffraction and for propagation through bends, are presented. Some of the observed patterns are classified by shock polar analysis, and a diagram of the transition boundaries between possible Mach reflection structures is constructed.

  8. Advanced algorithms for radiographic material discrimination and inspection system design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, Andrew J.; McDonald, Benjamin S.; Deinert, Mark R.

    X-ray and neutron radiography are powerful tools for non-invasively inspecting the interior of objects. Materials can be discriminated by noting how the radiographic signal changes with variations in the input spectrum or inspection mode. However, current methods are limited in their ability to differentiate when multiple materials are present, especially within large and complex objects. With X-ray radiography, the inability to distinguish materials of a similar atomic number is especially problematic. To overcome these critical limitations, we augmented our existing inverse problem framework with two important expansions: 1) adapting the previous methodology for use with multi-modal radiography and energy-integrating detectors, and 2) applying the Cramer-Rao lower bound to select an optimal set of inspection modes for a given application a priori. Adding these expanded capabilities to our algorithmic framework with adaptive regularization, we observed improved discrimination between high-Z materials, specifically plutonium and tungsten. The combined system can estimate plutonium mass within our simulated system to within 1%. Three types of inspection modes were modeled: multi-endpoint X-ray radiography alone; in combination with neutron radiography using deuterium-deuterium (DD); or in combination with neutron radiography using deuterium-tritium (DT) sources.
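
    The mode-selection step can be illustrated with a small Cramer-Rao computation: for each candidate set of inspection modes, a linearized sensitivity matrix J and measurement variances v give a Fisher information whose inverse bounds the achievable variance of the material-mass estimates. All numbers below are placeholders:

        import numpy as np

        def crlb(J, v):
            F = J.T @ np.diag(1.0 / v) @ J     # Fisher information
            return np.diag(np.linalg.inv(F))   # variance lower bounds per material

        modes = {
            "xray_only": (np.array([[1.0, 0.9], [1.1, 1.0]]),
                          np.array([0.01, 0.01])),
            "xray_plus_DD": (np.array([[1.0, 0.9], [1.1, 1.0], [0.2, 1.5]]),
                             np.array([0.01, 0.01, 0.05])),
        }
        best = min(modes, key=lambda k: crlb(*modes[k]).sum())
        print(best)   # mode set with the smallest summed variance bound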

  9. Scatter correction in cone-beam CT via a half beam blocker technique allowing simultaneous acquisition of scatter and image information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Ho; Xing Lei; Lee, Rena

    2012-05-15

    Purpose: X-ray scatter incident on detectors degrades the quality of cone-beam computed tomography (CBCT) and represents a problem in volumetric image guided and adaptive radiation therapy. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, due to missing information resulting from the obstruction of the blocker, such methods require dual scanning or a dynamically moving blocker to obtain a complete volumetric image. Here, we propose a half beam blocker-based approach, in conjunction with a total variation (TV) regularized Feldkamp-Davis-Kress (FDK) algorithm, to correct scatter-induced artifacts by simultaneously acquiring image and scatter information from a single-rotation CBCT scan. Methods: A half beam blocker, comprising lead strips, is used to simultaneously acquire image data on one side of the projection data and scatter data on the other half side. One-dimensional cubic B-spline interpolation/extrapolation is applied to derive patient-specific scatter information by using the scatter distributions on the strips. The estimated scatter is subtracted from the projection image acquired at the opposite view. With scatter-corrected projections where this subtraction is completed, the FDK algorithm based on a cosine weighting function is performed to reconstruct the CBCT volume. To suppress the noise in the reconstructed CBCT images produced by geometric errors between two opposed projections and interpolated scatter information, total variation regularization is applied by a minimization using a steepest gradient descent optimization method. Experimental studies using Catphan504 and anthropomorphic phantoms were carried out to evaluate the performance of the proposed scheme. Results: The scatter-induced shading artifacts were markedly suppressed in CBCT using the proposed scheme. Compared with CBCT without a blocker, the nonuniformity value was reduced from 39.3% to 3.1%. The root mean square error relative to values inside the regions of interest selected from a benchmark scatter-free image was reduced from 50 to 11.3. The TV regularization also led to a better contrast-to-noise ratio. Conclusions: An asymmetric half beam blocker-based FDK acquisition and reconstruction technique has been established. The proposed scheme enables simultaneous detection of patient-specific scatter and complete volumetric CBCT reconstruction without additional requirements such as prior images, dual scans, or moving strips.
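
    The interpolation step can be sketched briefly: scatter sampled behind the blocker strips is fit with a 1D cubic spline and evaluated over all detector rows, then subtracted from the opposing projection. The row positions and signal values below are placeholders:

        import numpy as np
        from scipy.interpolate import CubicSpline

        def estimate_scatter(strip_rows, strip_signal, n_rows):
            """Interpolate/extrapolate scatter samples over the full detector."""
            spline = CubicSpline(strip_rows, strip_signal, extrapolate=True)
            return spline(np.arange(n_rows))

        strip_rows = np.array([0, 64, 128, 192, 255])       # rows behind strips
        strip_signal = np.array([5.1, 5.6, 6.0, 5.8, 5.2])  # sampled scatter
        scatter = estimate_scatter(strip_rows, strip_signal, 256)
        # corrected_projection = opposing_projection - scatter[:, None]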

  10. Adaptation of in vitro bioassays for the diagnosis of resistance to Fipronil and Ivermectin in R. (B.) microplus

    USDA-ARS?s Scientific Manuscript database

    R. microplus represents the most important pathological constraint to livestock production in Brazil and Uruguay. The infestation of cattle by ticks is controlled by chemical applications on a regular basis. Fipronil and ivermectin have been widely used in recent years to the benefit of cattle produ...

  11. Adapting Criterion-Referenced Measurement to Individualization of Instruction for Handicapped Children: Some Issues and a First Attempt.

    ERIC Educational Resources Information Center

    Proger, Barton B.; And Others

    Criterion-referenced measurement (CRM) has received increasing attention in regular education. However, it is in education for handicapped children that CRM's flexibility for individualization of both instruction and evaluation become even more fully realized. Research is described on one of the first CRM systems (Individual Achievement Monitoring…

  12. The Examination of Physical Education Teachers' Perceptions of Their Teacher Training to Include Students with Disabilities in General Physical Education

    ERIC Educational Resources Information Center

    Townsend, Amy

    2017-01-01

    Despite legislative mandates, only 32% of states require specific licensure in adapted physical education (APE); consequently, general physical educators are challenged with including students with disabilities into regular classrooms. Although physical education teachers are considered qualified personnel to teach students with disabilities in…

  13. Operation of AC Adapters Visualized Using Light-Emitting Diodes

    ERIC Educational Resources Information Center

    Regester, Jeffrey

    2016-01-01

    A bridge rectifier is a diamond-shaped configuration of diodes that serves to convert alternating current (AC) into direct current (DC). In our world of AC outlets and DC electronics, they are ubiquitous. Of course, most bridge rectifiers are built with regular diodes, not the light-emitting variety, because LEDs have a number of disadvantages. For…

  14. The Reception of German Progressive Education in Russia: On Regularities of International Educational Transfer

    ERIC Educational Resources Information Center

    Mchitarjan, Irina

    2015-01-01

    This article reports a historical case study of extensive educational transfer: the reception, adaptation, and use of German progressive education and German school reform ideas and practices in Russia at the beginning of the twentieth century. The reception of German educational ideas greatly enriched the theory and practice of the Russian school…

  15. Computers in My Curriculum? 18 Lesson Plans for Teaching Computer Awareness without a Computer. Adaptable Grades 3-12.

    ERIC Educational Resources Information Center

    Bailey, Suzanne Powers; Jeffers, Marcia

    Eighteen interrelated, sequential lesson plans and supporting materials for teaching computer literacy at the elementary and secondary levels are presented. The activities, intended to be infused into the regular curriculum, do not require the use of a computer. The introduction presents background information on computer literacy, suggests a…

  16. Machine Shop Suggested Job and Task Sheets. Part I. 25 Elementary Jobs.

    ERIC Educational Resources Information Center

    Texas A and M Univ., College Station. Vocational Instructional Services.

    This volume consists of elementary job and task sheets adaptable for use in the regular vocational industrial education programs for the training of machinists and machine shop operators. Twenty-five simple machine shop job sheets are included. Some or all of this material is provided for each job sheet: an introductory sheet with aim, checking…

  17. Machine Shop Suggested Job and Task Sheets. Part II. 21 Advanced Jobs.

    ERIC Educational Resources Information Center

    Texas A and M Univ., College Station. Vocational Instructional Services.

    This volume consists of advanced job and task sheets adaptable for use in the regular vocational industrial education programs for the training of machinists and machine shop operators. Twenty-one advanced machine shop job sheets are included. Some or all of this material is provided for each job: an introductory sheet with aim, checking…

  18. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    NASA Astrophysics Data System (ADS)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined from additional boundary conditions. Unlike the existing methods found in the literature, which usually employ a first-order-in-time gradient-like system (such as steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order-in-time dissipative gradient-like system and a dynamically selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
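
    Schematically, in place of a first-order gradient flow \dot{u} = -\nabla J(u), the method evolves a damped second-order system of the form

        \ddot{u}(t) + \eta \, \dot{u}(t) = -\nabla J\big(u(t); \lambda(t)\big),

    where J is the regularized data-misfit functional and the regularization parameter \lambda(t) is selected dynamically along the trajectory rather than fixed in advance; this is a schematic form, and the paper should be consulted for the precise system and its damped symplectic discretization.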

  19. Adaptation to shift work: physiologically based modeling of the effects of lighting and shifts' start time.

    PubMed

    Postnova, Svetlana; Robinson, Peter A; Postnov, Dmitry D

    2013-01-01

    Shift work has become an integral part of our life, with almost 20% of the population in developed countries being involved in different shift schedules. However, atypical work times, especially night shifts, are associated with reduced quality and quantity of sleep, leading to increased sleepiness that often culminates in accidents. It has been demonstrated that shift workers' sleepiness can be improved by proper scheduling of light exposure and by optimizing shift timing. Here, an integrated physiologically-based model of sleep-wake cycles is used to predict adaptation to shift work in different light conditions and for different shift start times for a schedule of four consecutive days of work. The integrated model combines a model of the ascending arousal system in the brain that controls the sleep-wake switch and a human circadian pacemaker model. To validate the application of the integrated model and demonstrate its utility, its dynamics are adjusted to achieve a fit to published experimental results showing adaptation of night shift workers (n = 8) in conditions of either bright or regular lighting. Further, the model is used to predict the shift workers' adaptation to the same shift schedule, but for conditions not considered in the experiment. The model demonstrates that the intensity of shift light can be reduced fourfold from that used in the experiment and still produce good adaptation to night work. The model predicts that sleepiness of the workers during night shifts on a protocol with either bright or regular lighting can be significantly improved by starting the shift earlier in the night, e.g., at 21:00 instead of 00:00. Finally, the study predicts that people of the same chronotype, i.e. with identical sleep times in normal conditions, can have drastically different responses to shift work depending on their intrinsic circadian and homeostatic parameters.

  20. Adaptation to Shift Work: Physiologically Based Modeling of the Effects of Lighting and Shifts’ Start Time

    PubMed Central

    Postnova, Svetlana; Robinson, Peter A.; Postnov, Dmitry D.

    2013-01-01

    Shift work has become an integral part of our life, with almost 20% of the population in developed countries being involved in different shift schedules. However, atypical work times, especially night shifts, are associated with reduced quality and quantity of sleep, leading to increased sleepiness that often culminates in accidents. It has been demonstrated that shift workers' sleepiness can be improved by proper scheduling of light exposure and by optimizing shift timing. Here, an integrated physiologically-based model of sleep-wake cycles is used to predict adaptation to shift work in different light conditions and for different shift start times for a schedule of four consecutive days of work. The integrated model combines a model of the ascending arousal system in the brain that controls the sleep-wake switch and a human circadian pacemaker model. To validate the application of the integrated model and demonstrate its utility, its dynamics are adjusted to achieve a fit to published experimental results showing adaptation of night shift workers (n = 8) in conditions of either bright or regular lighting. Further, the model is used to predict the shift workers' adaptation to the same shift schedule, but for conditions not considered in the experiment. The model demonstrates that the intensity of shift light can be reduced fourfold from that used in the experiment and still produce good adaptation to night work. The model predicts that sleepiness of the workers during night shifts on a protocol with either bright or regular lighting can be significantly improved by starting the shift earlier in the night, e.g., at 21:00 instead of 00:00. Finally, the study predicts that people of the same chronotype, i.e. with identical sleep times in normal conditions, can have drastically different responses to shift work depending on their intrinsic circadian and homeostatic parameters. PMID:23308206

  1. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
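
    In schematic form, the history-matching step minimizes a regularized output least-squares functional over the spline coefficients of the permeability (or porosity) field k,

        \min_{k} \; \sum_{i} \big( p_i^{\mathrm{obs}} - p_i(k) \big)^2
        + \beta \, \mathcal{R}(k),

    where p_i(k) are simulated well pressures, \mathcal{R} is a smoothing (regularizing) functional, and \beta is the regularization parameter chosen quasi-optimally; the minimization is carried out by conjugate gradients. This is a generic statement of the objective, not the paper's exact notation.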

  2. Specificity and timescales of cortical adaptation as inferences about natural movie statistics.

    PubMed

    Snow, Michoel; Coen-Cagli, Ruben; Schwartz, Odelia

    2016-10-01

    Adaptation is a phenomenological umbrella term under which a variety of temporal contextual effects are grouped. Previous models have shown that some aspects of visual adaptation reflect optimal processing of dynamic visual inputs, suggesting that adaptation should be tuned to the properties of natural visual inputs. However, the link between natural dynamic inputs and adaptation is poorly understood. Here, we extend a previously developed Bayesian modeling framework for spatial contextual effects to the temporal domain. The model learns temporal statistical regularities of natural movies and links these statistics to adaptation in primary visual cortex via divisive normalization, a ubiquitous neural computation. In particular, the model divisively normalizes the present visual input by the past visual inputs only to the degree that these are inferred to be statistically dependent. We show that this flexible form of normalization reproduces classical findings on how brief adaptation affects neuronal selectivity. Furthermore, prior knowledge acquired by the Bayesian model from natural movies can be modified by prolonged exposure to novel visual stimuli. We show that this updating can explain classical results on contrast adaptation. We also simulate the recent finding that adaptation maintains population homeostasis, namely, a balanced level of activity across a population of neurons with different orientation preferences. Consistent with previous disparate observations, our work further clarifies the influence of stimulus-specific and neuronal-specific normalization signals in adaptation.
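
    Generic temporal divisive normalization has the flavor of

        r_t = \frac{x_t^2}{\sigma^2 + \sum_{k \ge 1} w_k \, x_{t-k}^2},

    with the present response r_t suppressed by recent inputs x_{t-k}; the distinctive feature of the model described here is that the weights w_k are effectively gated by the inferred statistical dependence between present and past inputs. The equation above is a schematic form, not the paper's exact expression.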

  3. Specificity and timescales of cortical adaptation as inferences about natural movie statistics

    PubMed Central

    Snow, Michoel; Coen-Cagli, Ruben; Schwartz, Odelia

    2016-01-01

    Adaptation is a phenomenological umbrella term under which a variety of temporal contextual effects are grouped. Previous models have shown that some aspects of visual adaptation reflect optimal processing of dynamic visual inputs, suggesting that adaptation should be tuned to the properties of natural visual inputs. However, the link between natural dynamic inputs and adaptation is poorly understood. Here, we extend a previously developed Bayesian modeling framework for spatial contextual effects to the temporal domain. The model learns temporal statistical regularities of natural movies and links these statistics to adaptation in primary visual cortex via divisive normalization, a ubiquitous neural computation. In particular, the model divisively normalizes the present visual input by the past visual inputs only to the degree that these are inferred to be statistically dependent. We show that this flexible form of normalization reproduces classical findings on how brief adaptation affects neuronal selectivity. Furthermore, prior knowledge acquired by the Bayesian model from natural movies can be modified by prolonged exposure to novel visual stimuli. We show that this updating can explain classical results on contrast adaptation. We also simulate the recent finding that adaptation maintains population homeostasis, namely, a balanced level of activity across a population of neurons with different orientation preferences. Consistent with previous disparate observations, our work further clarifies the influence of stimulus-specific and neuronal-specific normalization signals in adaptation. PMID:27699416

  4. General phase regularized reconstruction using phase cycling.

    PubMed

    Ong, Frank; Cheng, Joseph Y; Lustig, Michael

    2018-07-01

    To develop a general phase regularized image reconstruction method, with applications to partial Fourier imaging, water-fat imaging and flow imaging. The problem of enforcing phase constraints in reconstruction was studied under a regularized inverse problem framework. A general phase regularized reconstruction algorithm was proposed to enable various joint reconstructions of partial Fourier imaging, water-fat imaging and flow imaging, along with parallel imaging (PI) and compressed sensing (CS). Since phase regularized reconstruction is inherently non-convex and sensitive to phase wraps in the initial solution, a reconstruction technique, named phase cycling, was proposed to render the overall algorithm invariant to phase wraps. The proposed method was applied to retrospectively under-sampled in vivo datasets and compared with state-of-the-art reconstruction methods. Phase cycling reconstructions showed a reduction of artifacts compared to reconstructions without phase cycling and achieved performance similar to state-of-the-art results in partial Fourier, water-fat and divergence-free regularized flow reconstruction. Joint reconstruction of partial Fourier + water-fat imaging + PI + CS, and of partial Fourier + divergence-free regularized flow imaging + PI + CS, were demonstrated. The proposed phase cycling reconstruction provides an alternative way to perform phase regularized reconstruction, without the need to perform phase unwrapping. It is robust to the choice of initial solutions and encourages the joint reconstruction of phase imaging applications. Magn Reson Med 80:112-125, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
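
    Schematically, phase regularized reconstruction separates the image into magnitude m and phase \phi and penalizes each,

        \min_{m, \phi} \; \tfrac{1}{2} \big\lVert A \big( m \, e^{i\phi} \big) - y \big\rVert_2^2
        + \lambda_m R_m(m) + \lambda_\phi R_\phi(\phi),

    where A is the (possibly undersampled, multi-coil) encoding operator and y the acquired k-space data; the non-convexity in \phi is what the proposed phase cycling is designed to mitigate. This is a schematic objective, not necessarily the paper's exact formulation.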

  5. Cluster-size entropy in the Axelrod model of social influence: Small-world networks and mass media

    NASA Astrophysics Data System (ADS)

    Gandica, Y.; Charmell, A.; Villegas-Febres, J.; Bonalde, I.

    2011-10-01

    We study Axelrod's cultural adaptation model using the concept of cluster-size entropy Sc, which gives information on the variability of the cultural cluster sizes present in the system. Using networks of different topologies, from regular to random, we find that the critical point of the well-known nonequilibrium monocultural-multicultural (order-disorder) transition of the Axelrod model is given by the maximum of the Sc(q) distributions. The width of the cluster entropy distributions can be used to qualitatively determine whether the transition is first or second order. By scaling the cluster entropy distributions we were able to obtain a relationship between the critical cultural trait qc and the number F of cultural features in two-dimensional regular networks. We also analyze the effect of the mass media (external field) on social systems within the Axelrod model in a square network. We find a partially ordered phase whose largest cultural cluster is not aligned with the external field, in contrast with a recent suggestion that this type of phase cannot be formed in regular networks. We draw a q-B phase diagram for the Axelrod model in regular networks.
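
    A cluster-size entropy of this kind is conventionally defined as

        S_c = - \sum_{s} P(s) \ln P(s),

    where P(s) is the fraction of cultural clusters of size s present in the system, so that S_c is largest when the cluster sizes are most variable; the paper's normalization may differ from this generic form.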

  6. Isotropic model for cluster growth on a regular lattice

    NASA Astrophysics Data System (ADS)

    Yates, Christian A.; Baker, Ruth E.

    2013-08-01

    There exists a plethora of mathematical models for cluster growth and/or aggregation on regular lattices. Almost all suffer from inherent anisotropy caused by the regular lattice upon which they are grown. We analyze the little-known model for stochastic cluster growth on a regular lattice first introduced by Ferreira Jr. and Alves [J. Stat. Mech.: Theory Exp. (2006) P11007; doi:10.1088/1742-5468/2006/11/P11007], which produces circular clusters with no discernible anisotropy. We demonstrate that even in the noise-reduced limit the clusters remain circular. We adapt the model by introducing a specific rearrangement algorithm so that, rather than adding elements to the cluster from the outside (corresponding to apical growth), our model uses mitosis-like cell splitting events to increase the cluster size. We analyze the surface scaling properties of our model and compare it to the behavior of more traditional models. In “1+1” dimensions we discover and explore a new, nonmonotonic surface thickness scaling relationship which differs significantly from the Family-Vicsek scaling relationship. This suggests that, for models whose clusters do not grow through particle additions which are solely dependent on surface considerations, the traditional classification into “universality classes” may not be appropriate.

  7. Cluster-size entropy in the Axelrod model of social influence: small-world networks and mass media.

    PubMed

    Gandica, Y; Charmell, A; Villegas-Febres, J; Bonalde, I

    2011-10-01

    We study Axelrod's cultural adaptation model using the concept of cluster-size entropy S(c), which gives information on the variability of the cultural cluster sizes present in the system. Using networks of different topologies, from regular to random, we find that the critical point of the well-known nonequilibrium monocultural-multicultural (order-disorder) transition of the Axelrod model is given by the maximum of the S(c)(q) distributions. The width of the cluster entropy distributions can be used to qualitatively determine whether the transition is first or second order. By scaling the cluster entropy distributions we were able to obtain a relationship between the critical cultural trait q(c) and the number F of cultural features in two-dimensional regular networks. We also analyze the effect of the mass media (external field) on social systems within the Axelrod model in a square network. We find a partially ordered phase whose largest cultural cluster is not aligned with the external field, in contrast with a recent suggestion that this type of phase cannot be formed in regular networks. We draw a q-B phase diagram for the Axelrod model in regular networks.

  8. EVOLUTION OF THE IMMUNE RESPONSE

    PubMed Central

    Papermaster, Ben W.; Condie, Richard M.; Finstad, Joanne; Good, Robert A.

    1964-01-01

    1. The California hagfish, Eptatretus stoutii, seems to be completely lacking in adaptive immunity: it forms no detectable circulating antibody despite intensive stimulation with a range of antigens; it does not show reactivity to old tuberculin following sensitization with BCG; and gives no evidence of homograft immunity. 2. Studies on the sea lamprey, Petromyzon marinus, have been limited to the response to bacteriophage T2 and hemocyanin in small groups of spawning animals. They suggest that the lamprey may have a low degree of immunologic reactivity. 3. One holostean, the bowfin (Amia calva) and the guitarfish (Rhinobatos productus), an elasmobranch, showed a low level of primary response to phage and hemocyanin. The response is slow and antibody levels low. Both the bowfin and the guitarfish showed a vigorous secondary response to phage, but neither showed much enhancement of reactivity to hemocyanin in the secondary response. The bowfin formed precipitating antibody to hemocyanin, but the guitarfish did not. Both hemagglutinating and precipitating antibody to hemocyanin were also observed in the primary response of the black bass. 4. The bowfin was successfully sensitized to Ascaris antigen, and lesions of the delayed type developed after challenge at varying intervals following sensitization. 5. The horned shark (Heterodontus franciscii) regularly cleared hemocyanin from the circulation after both primary and secondary antigenic stimulation, and regularly formed hemagglutinating antibody, but not precipitating antibody, after both primary and secondary stimulation with this antigen. These animals regularly cleared bacteriophage from the circulation after both the primary and secondary stimulation with bacteriophage T2. Significant but small amounts of antibody were produced in a few animals in the primary response, and larger amounts in the responding animals after secondary antigenic stimulation. 6. Studies by starch gel and immunoelectrophoresis show that the hagfish has no bands with mobilities of mammalian gamma globulins; that the lamprey has a single, relatively faint band of this type; and that multiple gamma bands are characteristic of the holostean, elasmobranchs, and teleosts studied. By this method of study, the bowfin appeared to have substantial amounts of gamma2 globulin. 7. We conclude that adaptive immunity and its cellular and humoral correlates developed in the lowest vertebrates, and that a rising level of immunologic reactivity and an increasingly differentiated and complex immunologic mechanism are observed going up the phylogenetic scale from the hagfish, to the lamprey, to the elasmobranchs, to the holosteans, and finally the teleosts. PMID:14113107

  9. Using dual-energy x-ray imaging to enhance automated lung tumor tracking during real-time adaptive radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menten, Martin J., E-mail: martin.menten@icr.ac.uk; Fast, Martin F.; Nill, Simeon

    2015-12-15

    Purpose: Real-time, markerless localization of lung tumors with kV imaging is often inhibited by ribs obscuring the tumor and poor soft-tissue contrast. This study investigates the use of dual-energy imaging, which can generate radiographs with reduced bone visibility, to enhance automated lung tumor tracking for real-time adaptive radiotherapy. Methods: kV images of an anthropomorphic breathing chest phantom were experimentally acquired and radiographs of actual lung cancer patients were Monte-Carlo-simulated at three imaging settings: low-energy (70 kVp, 1.5 mAs), high-energy (140 kVp, 2.5 mAs, 1 mm additional tin filtration), and clinical (120 kVp, 0.25 mAs). Regular dual-energy images were calculated by weighted logarithmic subtraction of high- and low-energy images and filter-free dual-energy images were generated from clinical and low-energy radiographs. The weighting factor to calculate the dual-energy images was determined by means of a novel objective score. The usefulness of dual-energy imaging for real-time tracking with an automated template matching algorithm was investigated. Results: Regular dual-energy imaging was able to increase tracking accuracy in left–right images of the anthropomorphic phantom as well as in 7 out of 24 investigated patient cases. Tracking accuracy remained comparable in three cases and decreased in five cases. Filter-free dual-energy imaging was only able to increase accuracy in 2 out of 24 cases. In four cases no change in accuracy was observed and tracking accuracy worsened in nine cases. In 9 out of 24 cases, it was not possible to define a tracking template due to poor soft-tissue contrast regardless of input images. The mean localization errors using clinical, regular dual-energy, and filter-free dual-energy radiographs were 3.85, 3.32, and 5.24 mm, respectively. Tracking success was dependent on tumor position, tumor size, imaging beam angle, and patient size. Conclusions: This study has highlighted the influence of patient anatomy on the success rate of real-time markerless tumor tracking using dual-energy imaging. Additionally, the importance of the spectral separation of the imaging beams used to generate the dual-energy images has been shown.
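
    The dual-energy combination itself is a weighted logarithmic subtraction; a minimal sketch follows, with the weight chosen by brute force against a simple residual-contrast criterion that stands in for the paper's objective score:

        import numpy as np

        def dual_energy(I_low, I_high, w, eps=1e-9):
            """Bone-suppressed image via weighted log subtraction."""
            return np.log(I_high + eps) - w * np.log(I_low + eps)

        def pick_weight(I_low, I_high, rib_mask,
                        weights=np.linspace(0.3, 1.0, 36)):
            """Choose w minimizing residual rib contrast inside a rib mask."""
            return min(weights,
                       key=lambda w: dual_energy(I_low, I_high, w)[rib_mask].std())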

  10. Enhanced nearfield acoustic holography for larger distances of reconstructions using fixed parameter Tikhonov regularization

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh G.; Muehleisen, Ralph T.

    2016-07-07

    This paper evaluates the performance of various regularization parameter choice methods applied to different approaches of nearfield acoustic holography when a very nearfield measurement is not possible. For a fixed grid resolution, the larger the hologram distance, the larger the error in the naive nearfield acoustic holography reconstructions. These errors can be smoothed out by using an appropriate order of regularization. In conclusion, this study shows that by using a fixed/manual choice of regularization parameter, instead of automated parameter choice methods, reasonably accurate reconstructions can be obtained even when the hologram distance is 16 times larger than the grid resolution.

  11. Evaluation of uncertainty for regularized deconvolution: A case study in hydrophone measurements.

    PubMed

    Eichstädt, S; Wilkens, V

    2017-06-01

    An estimation of the measurand in dynamic metrology usually requires a deconvolution based on a dynamic calibration of the measuring system. Since deconvolution is, mathematically speaking, an ill-posed inverse problem, some kind of regularization is required to render the problem stable and obtain usable results. Many approaches to regularized deconvolution exist in the literature, but the corresponding evaluation of measurement uncertainties is, in general, an unsolved issue. In particular, the uncertainty contribution of the regularization itself is a topic of great importance, because it has a significant impact on the estimation result. Here, a versatile approach is proposed to express prior knowledge about the measurand based on a flexible, low-dimensional modeling of an upper bound on the magnitude spectrum of the measurand. This upper bound allows the derivation of an uncertainty associated with the regularization method in line with the guidelines in metrology. As a case study for the proposed method, hydrophone measurements in medical ultrasound with an acoustic working frequency of up to 7.5 MHz are considered, but the approach is applicable for all kinds of estimation methods in dynamic metrology, where regularization is required and which can be expressed as a multiplication in the frequency domain.

  12. 50 years of the International Committee on Taxonomy of Viruses: progress and prospects.

    PubMed

    Adams, Michael J; Lefkowitz, Elliot J; King, Andrew M Q; Harrach, Balázs; Harrison, Robert L; Knowles, Nick J; Kropinski, Andrew M; Krupovic, Mart; Kuhn, Jens H; Mushegian, Arcady R; Nibert, Max L; Sabanadzovic, Sead; Sanfaçon, Hélène; Siddell, Stuart G; Simmonds, Peter; Varsani, Arvind; Zerbini, Francisco Murilo; Orton, Richard J; Smith, Donald B; Gorbalenya, Alexander E; Davison, Andrew J

    2017-05-01

    We mark the 50th anniversary of the International Committee on Taxonomy of Viruses (ICTV) by presenting a brief history of the organization since its foundation, showing how it has adapted to advancements in our knowledge of virus diversity and the methods used to characterize it. We also outline recent developments, supported by a grant from the Wellcome Trust (UK), that are facilitating substantial changes in the operations of the ICTV and promoting dialogue with the virology community. These developments will generate improved online resources, including a freely available and regularly updated ICTV Virus Taxonomy Report. They also include a series of meetings between the ICTV and the broader community focused on some of the major challenges facing virus taxonomy, with the outcomes helping to inform the future policy and practice of the ICTV.

  13. [Histological and histochemical characteristics of pancreas of deer at the Altay].

    PubMed

    Riadinskaia, N I; Siraziev, R Z

    2008-01-01

    Seasonal changes in the pancreas of animals belonging to the true deer subfamily have been investigated by histological, histochemical, and biometric methods. Glycogen is not found in the pancreatic cells in any season, pointing to high functional activity of the glandular cells, since glycogen is consumed for carbohydrate biopolymer synthesis rather than accumulated. Depending upon the season, the cytoplasm of pancreatic cells, cells of the excretory ducts, and pancreatic islets showed different intensities of the pyroninophilic reaction, indicating the presence of RNA. These data, coupled with the presence of protein in these cells, demonstrate the protein-synthesizing ability of the gland, adapted to its biorhythm. Changes in the quantity and types of web cells, as well as in the functional activity of pancreatic cells and pancreatic islets, showed seasonal regularity and reflected the functional lability of the cells and their constant involvement in many vitally important processes.

  14. Restoration of Monotonicity Respecting in Dynamic Regression

    PubMed Central

    Huang, Yijian

    2017-01-01

    Dynamic regression models, including the quantile regression model and Aalen’s additive hazards model, are widely adopted to investigate evolving covariate effects. Yet lack of monotonicity respecting with standard estimation procedures remains an outstanding issue. Advances have recently been made, but none provides a complete resolution. In this article, we propose a novel adaptive interpolation method to restore monotonicity respecting, by successively identifying and then interpolating nearest monotonicity-respecting points of an original estimator. Under mild regularity conditions, the resulting regression coefficient estimator is shown to be asymptotically equivalent to the original. Our numerical studies have demonstrated that the proposed estimator is much more smooth and may have better finite-sample efficiency than the original as well as, when available as only in special cases, other competing monotonicity-respecting estimators. Illustration with a clinical study is provided. PMID:29430068
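
    A toy one-dimensional version of the idea, for a coefficient expected to be nondecreasing, is sketched below; the paper's adaptive interpolation for dynamic regression is substantially more general:

        import numpy as np

        def restore_monotone(t, beta):
            """Keep monotonicity-respecting points; interpolate between them."""
            keep = [0]
            for i in range(1, len(beta)):
                if beta[i] >= beta[keep[-1]]:
                    keep.append(i)
            return np.interp(t, t[keep], beta[keep])

        t = np.linspace(0.0, 1.0, 9)
        beta = np.array([0.1, 0.3, 0.25, 0.4, 0.38, 0.5, 0.47, 0.6, 0.65])
        print(restore_monotone(t, beta))   # nondecreasing everywhere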

  15. Utilising flags to reduce drag around a short finite circular cylinder

    NASA Astrophysics Data System (ADS)

    Javadi, Kh.; Kiani, F.; Tahaye Abadi, M.

    2018-03-01

    This paper utilises flags to decrease the drag around a short finite circular cylinder. Wall-adapted large eddy simulation and two-way fluid-structure interaction methods were applied to resolve unsteady turbulent flow structure. The far-field Reynolds number of the current configuration based on the cylinder diameter was chosen to be 20,000. In addition, the length-to-diameter ratio of the cylinder was assumed to be L/D = 2 whereas the flexible flag had a width-to-diameter ratio of W/D = 1.5. The results were compared with the regular short finite circular cylinder and the rigid flagged cylinder in our previous work. The results indicate that utilising flags inside the near-wake region of the cylinder reduces the pressure drag. The physical mechanism of this drag reduction is presented.

  16. Adapting CRISPR/Cas9 for functional genomics screens.

    PubMed

    Malina, Abba; Katigbak, Alexandra; Cencic, Regina; Maïga, Rayelle Itoua; Robert, Francis; Miura, Hisashi; Pelletier, Jerry

    2014-01-01

    The use of CRISPR/Cas9 (clustered regularly interspaced short palindromic repeats/CRISPR-associated protein) for targeted genome editing has been widely adopted and is considered a "game changing" technology. The ease and rapidity by which this approach can be used to modify endogenous loci in a wide spectrum of cell types and organisms makes it a powerful tool for customizable genetic modifications as well as for large-scale functional genomics. The development of retrovirus-based expression platforms to simultaneously deliver the Cas9 nuclease and single guide (sg) RNAs provides unique opportunities by which to ensure stable and reproducible expression of the editing tools and a broad cell targeting spectrum, while remaining compatible with in vivo genetic screens. Here, we describe methods and highlight considerations for designing and generating sgRNA libraries in all-in-one retroviral vectors for such applications.

  17. Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R.; Kim, Jeehyun; Nelson, J. Stuart

    2008-03-01

    Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stain (PWS), a vascular skin lesion frequently studied with PPTR, as a strictly layered structure, since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The automated (objective) regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Similar, or better, accuracy reconstructions can be achieved with an automated regularization procedure, which enhances prospects for user-friendly implementation of PPTR to optimize laser therapy on an individual patient basis.

  18. Probing the Detailed Seismic Velocity Structure of Subduction Zones Using Advanced Seismic Tomography Methods

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Thurber, C. H.

    2005-12-01

    Subduction zones are one of the most important components of the Earth's plate tectonic system. Knowing the detailed seismic velocity structure within and around subducting slabs is vital to understanding the constitution of the slab, the cause of intermediate-depth earthquakes inside the slab, fluid distribution and recycling, and tremor occurrence [Hacker et al., 2001; Obara, 2002]. Thanks to the ability of double-difference tomography [Zhang and Thurber, 2003] to resolve the fine-scale structure near the source region, and to the favorable seismicity distribution inside many subducting slabs, it is now possible to characterize the fine details of the velocity structure and earthquake locations inside the slab, as shown in the study of the Japan subduction zone [Zhang et al., 2004]. We further develop the double-difference tomography method in two aspects: the first improvement is to use an adaptive inversion mesh rather than a regular inversion grid, and the second is to determine a reliable Vp/Vs structure using various strategies rather than directly from Vp and Vs [see our abstract "Strategies to solve for a better Vp/Vs model using P and S arrival time" at Session T29]. The adaptive mesh seismic tomography method is based on tetrahedral diagrams and can automatically adjust the inversion mesh according to the ray distribution, so that the inversion mesh nodes are denser where there are more rays and vice versa [Zhang and Thurber, 2005]. As a result, the number of inversion mesh nodes is greatly reduced compared to a regular inversion grid with comparable spatial resolution, and the tomographic system is more stable and better conditioned. This improvement is quite valuable for characterizing the fine structure of the subduction zone, considering the highly uneven distribution of earthquakes within and around the subducting slab. The second improvement, determining a reliable Vp/Vs model, lies in jointly inverting Vp, Vs, and Vp/Vs using P, S, and S-P times in a manner similar to double-difference tomography. A reliable Vp/Vs model of the subduction zone is helpful for understanding its mechanical and petrologic properties. Our applications of the original version of double-difference tomography to several subduction zones beneath northern Honshu, Japan; the Wellington region, New Zealand; and Alaska, United States, have shown evident velocity variations within and around the subducting slab, which are likely evidence of the dehydration reactions of various hydrous minerals hypothesized to be responsible for intermediate-depth earthquakes. We will show new velocity models for these subduction zones obtained by applying our advanced tomographic methods.
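
    The joint Vp, Vs, Vp/Vs inversion described above is beyond a short example, but the classical Wadati-diagram estimate conveys the underlying relation between P and S arrival times for a single event recorded at several stations: ts - tp = (Vp/Vs - 1)(tp - t0). The sketch below fits this line; it is an illustrative simplification, not the authors' method, and the data are synthetic.

    ```python
    import numpy as np

    def wadati_vp_vs(tp, ts):
        """Estimate Vp/Vs for one event from P and S arrival times at
        several stations by regressing (ts - tp) against tp; the slope
        of the Wadati diagram is Vp/Vs - 1."""
        tp, ts = np.asarray(tp, float), np.asarray(ts, float)
        slope, _ = np.polyfit(tp, ts - tp, 1)
        return 1.0 + slope

    # Toy usage: synthetic arrivals with Vp/Vs = 1.73, origin time t0 = 2 s
    t0, ratio = 2.0, 1.73
    tp = t0 + np.array([3.0, 5.5, 8.0, 12.0])
    ts = t0 + ratio * (tp - t0)
    print(wadati_vp_vs(tp, ts))  # ~1.73
    ```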

  19. Retained energy-based coding for EEG signals.

    PubMed

    Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando

    2012-09-01

    The use of long-term records in electroencephalography is becoming more frequent due to their diagnostic potential and the growth of novel signal processing methods that deal with these types of recordings. In these cases, the considerable volume of data to be managed makes compression necessary to reduce the bit rate for transmission and storage applications. In this paper, a new compression algorithm specifically designed to encode electroencephalographic (EEG) signals is proposed. Cosine modulated filter banks are used to decompose the EEG signal into a set of subbands well adapted to the characteristic frequency bands of the EEG. Given that no regular pattern can be easily extracted from the signal in the time domain, a thresholding-based method is applied to quantize the samples. The retained-energy method is designed to efficiently compute the threshold in the decomposition domain and, at the same time, allows the quality of the reconstructed EEG to be controlled. Experiments conducted over a large set of signals taken from two public databases available at Physionet show that the compression scheme yields better compression than other reported methods.
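
    The retained-energy rule can be stated compactly: sort the decomposition coefficients by magnitude and keep the largest ones until a target fraction of the total signal energy is preserved, zeroing the rest before quantization. The sketch below is a generic illustration of that thresholding idea, not the paper's codec; names and the retained fraction are assumptions.

    ```python
    import numpy as np

    def retained_energy_threshold(coeffs, retained=0.97):
        """Smallest magnitude that must be kept so that the retained
        coefficients hold `retained` of the total energy."""
        mags = np.sort(np.abs(coeffs).ravel())[::-1]
        cum = np.cumsum(mags ** 2)
        k = int(np.searchsorted(cum, retained * cum[-1])) + 1
        return mags[min(k, len(mags)) - 1]

    def threshold_coeffs(coeffs, retained=0.97):
        """Zero every coefficient below the retained-energy threshold."""
        t = retained_energy_threshold(coeffs, retained)
        return np.where(np.abs(coeffs) >= t, coeffs, 0.0)
    ```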

  20. Edge guided image reconstruction in linear scan CT by weighted alternating direction TV minimization.

    PubMed

    Cai, Ailong; Wang, Linyuan; Zhang, Hanming; Yan, Bin; Li, Lei; Xi, Xiaoqi; Li, Jianxin

    2014-01-01

    Linear scan computed tomography (CT) is a promising imaging configuration with high scanning efficiency, but the acquired data set is under-sampled and angularly limited, which makes high-quality image reconstruction challenging. In this work, an edge guided total variation minimization reconstruction (EGTVM) algorithm is developed to deal with this problem. The proposed method combines total variation (TV) regularization with an iterative edge detection strategy: the edge weights of intermediate reconstructions are incorporated into the TV objective function. The optimization is efficiently solved by applying the alternating direction method of multipliers (ADMM). A prudent and conservative edge detection strategy proposed in this paper obtains the true edges while restricting the errors to an acceptable degree. Comparisons on both simulation studies and real CT data set reconstructions show that EGTVM provides comparable or even better quality than the non-edge guided reconstruction and the adaptive steepest descent-projection onto convex sets method. With the utilization of weighted alternating direction TV minimization and edge detection, EGTVM achieves fast and robust convergence and reconstructs high-quality images when applied to linear scan CT with under-sampled data sets.
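
    The key ingredient of edge-guided TV is a spatial weight that relaxes the TV penalty where edges have been detected, so that smoothing acts mainly in flat regions. The sketch below shows this idea on a denoising surrogate solved by smoothed gradient descent; it is a conceptual stand-in for EGTVM (which uses ADMM and CT projection data), and all names and parameter values are assumptions.

    ```python
    import numpy as np

    def forward_diff(u):
        """Forward differences with periodic boundaries."""
        return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

    def edge_weights(u):
        """Small weights where gradients are strong, sparing detected edges."""
        gx, gy = forward_diff(u)
        mag = np.hypot(gx, gy)
        return 1.0 / (1.0 + (mag / (mag.mean() + 1e-12)) ** 2)

    def weighted_tv_denoise(f, lam=0.1, step=0.2, iters=200, eps=1e-3):
        """Gradient descent on 0.5*||u - f||^2 + lam * sum(w * |grad u|_eps)."""
        w = edge_weights(f)
        u = f.copy()
        for _ in range(iters):
            gx, gy = forward_diff(u)
            mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
            px, py = w * gx / mag, w * gy / mag
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            u -= step * ((u - f) - lam * div)
        return u

    # Toy usage: noisy piecewise-constant image
    rng = np.random.default_rng(0)
    f = np.zeros((64, 64)); f[16:48, 16:48] = 1.0
    f += 0.1 * rng.standard_normal(f.shape)
    u = weighted_tv_denoise(f)
    ```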
