Sample records for regularization parameter selection

  1. Selection of regularization parameter for l1-regularized damage detection

    NASA Astrophysics Data System (ADS)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection by exploiting the sparsity of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies for selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the degree of sensitivity of the damage identification problem to the regularization parameter.
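
    A minimal sketch of the discrepancy-principle strategy described above: scan a grid of regularization parameters, solve the l1-regularized least-squares problem for each, and keep the parameters whose residual variance is close to the assumed known noise variance. The sensitivity matrix, noise level, tolerance, and the use of scikit-learn's Lasso solver are all illustrative assumptions, not the authors' setup.

      # Hedged sketch: discrepancy-principle-based selection of the l1 parameter.
      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      A = rng.normal(size=(200, 50))                   # hypothetical sensitivity matrix
      x_true = np.zeros(50)
      x_true[[3, 17]] = -0.2                           # sparse "damage" at two elements
      sigma = 0.01                                     # assumed known noise standard deviation
      y = A @ x_true + sigma * rng.normal(size=200)

      accepted = []
      for alpha in np.logspace(-4, 0, 40):
          x_hat = Lasso(alpha=alpha, fit_intercept=False, max_iter=50000).fit(A, y).coef_
          if abs(np.var(y - A @ x_hat) - sigma**2) < 0.2 * sigma**2:   # discrepancy near noise level
              accepted.append(alpha)

      print("parameter range satisfying the discrepancy principle:",
            (min(accepted), max(accepted)) if accepted else "none")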

  2. Selection of regularization parameter in total variation image restoration.

    PubMed

    Liao, Haiyong; Li, Fang; Ng, Michael K

    2009-11-01

    We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with an automatic selection of the regularization parameter scheme to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for testing different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 × 256 in approximately 20 s in the MATLAB computing environment.
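
    As a stand-alone illustration of the GCV criterion the paper builds on, the sketch below evaluates the GCV score for plain Tikhonov regularization of a generic ill-conditioned linear system via the SVD; the TV restoration loop, blur model, and image data of the paper are not reproduced, and all names below are invented.

      # Hedged sketch: GCV score for Tikhonov regularization, computed via the SVD.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 100
      A = rng.normal(size=(n, n)) @ np.diag(1.0 / (1.0 + np.arange(n)))   # ill-conditioned operator
      y = A @ rng.normal(size=n) + 0.01 * rng.normal(size=n)

      U, s, _ = np.linalg.svd(A, full_matrices=False)
      Uty = U.T @ y

      def gcv(lam):
          filt = s**2 / (s**2 + lam**2)                # Tikhonov filter factors
          resid = np.sum(((1.0 - filt) * Uty) ** 2)    # squared residual norm
          return resid / (n - np.sum(filt)) ** 2       # GCV = RSS / trace(I - influence)^2

      lams = np.logspace(-6, 1, 200)
      print("GCV-selected parameter:", lams[np.argmin([gcv(l) for l in lams])])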

  3. Minimal residual method provides optimal regularization parameter for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.

    2012-10-01

    The inverse problem in the diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of regularization parameter that is based on regularized minimal residual method (MRM) is proposed and is compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

  4. Minimal residual method provides optimal regularization parameter for diffuse optical tomography.

    PubMed

    Jagannath, Ravi Prasad K; Yalavarthy, Phaneendra K

    2012-10-01

    The inverse problem in the diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of regularization parameter that is based on regularized minimal residual method (MRM) is proposed and is compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

  5. Design of a multiple kernel learning algorithm for LS-SVM by convex programming.

    PubMed

    Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou

    2011-06-01

    As a kernel based method, the performance of least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient in selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible enough to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection

    PubMed Central

    Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin

    2014-01-01

    Purpose: To enable fast reconstruction of quantitative susceptibility maps with Total Variation penalty and automatic regularization parameter selection. Methods: ℓ1-regularized susceptibility mapping is accelerated by variable-splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results: Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation, compared to 22 minutes using the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. The proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than the nonlinear CG approach. The utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion: Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
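
    The closed-form per-iteration update mentioned in the Methods hinges on the soft-thresholding (shrinkage) operator. The snippet below shows only that generic operator, not the authors' QSM pipeline (which additionally involves dipole-kernel FFTs and a magnitude-derived weighting mask).

      # Hedged sketch: element-wise soft thresholding, the proximal map of tau*||.||_1.
      import numpy as np

      def soft_threshold(z, tau):
          return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

      z = np.array([-0.8, -0.05, 0.0, 0.03, 1.2])
      print(soft_threshold(z, 0.1))    # small entries are set to zero, large ones shrink by 0.1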

  7. Focal-Plane Alignment Sensing

    DTIC Science & Technology

    1993-02-01

    amplification induced by the inverse filter. The problem of noise amplification that arises in conventional image deblurring problems has often been... noise sensitivity, and strategies for selecting a regularization parameter have been developed. The probability of convergence to within a prescribed... (Excerpted section headings: Strategies in Image Deblurring; CLS Parameter Selection; Wiener Parameter Selection)

  8. SU-D-12A-06: A Comprehensive Parameter Analysis for Low Dose Cone-Beam CT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, W; Southern Medical University, Guangzhou; Yan, H

    Purpose: There is always a parameter in compressive sensing based iterative reconstruction (IR) methods for low dose cone-beam CT (CBCT), which controls the weight of regularization relative to data fidelity. A clear understanding of the relationship between image quality and parameter values is important. The purpose of this study is to investigate this subject based on experimental data and a representative advanced IR algorithm using Tight-frame (TF) regularization. Methods: Three data sets of a Catphan phantom acquired at low, regular and high dose levels are used. For each test, 90 projections covering a 200-degree scan range are used for reconstruction. Three different regions-of-interest (ROIs) of different contrasts are used to calculate contrast-to-noise ratios (CNR) for contrast evaluation. A single point structure is used to measure the modulation transfer function (MTF) for spatial-resolution evaluation. Finally, we analyze CNRs and MTFs to study the relationship between image quality and parameter selections. Results: It was found that: 1) there is no universal optimal parameter; the optimal parameter value depends on the specific task and dose level. 2) There is a clear trade-off between CNR and resolution; the parameter for the best CNR is always smaller than that for the best resolution. 3) Optimal parameters are also dose-specific; data acquired under a high dose protocol require less regularization, yielding smaller optimal parameter values. 4) Compared with conventional FDK images, TF-based CBCT images are better under optimally selected parameters, and the advantages are more obvious for low dose data. Conclusion: We have investigated the relationship between image quality and parameter values in the TF-based IR algorithm. Preliminary results indicate optimal parameters are specific to both the task types and dose levels, providing guidance for selecting parameters in advanced IR algorithms. This work is supported in part by NIH (1R01CA154747-01).

  9. Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE

    PubMed Central

    Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2013-01-01

    Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization, however, requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein's unbiased risk estimate, SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
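
    The core of the Monte-Carlo SURE idea is estimating the divergence of a black-box reconstruction operator from one extra evaluation on a randomly perturbed input. The sketch below shows that trick for a real-valued denoising toy problem with a soft-thresholding "reconstruction"; the complex-valued, weighted k-space formulation of the paper is not reproduced, and all parameter values are placeholders.

      # Hedged sketch: Monte-Carlo SURE for a black-box denoiser f on y = x + noise.
      import numpy as np

      def mc_sure(f, y, sigma, eps=1e-4, rng=np.random.default_rng(2)):
          n = y.size
          fy = f(y)
          b = rng.choice([-1.0, 1.0], size=n)                 # random probe vector
          div = b @ (f(y + eps * b) - fy) / eps               # Monte-Carlo divergence estimate
          return np.sum((fy - y) ** 2) - n * sigma**2 + 2.0 * sigma**2 * div

      sigma = 0.1
      rng = np.random.default_rng(3)
      x = np.concatenate([np.zeros(900), rng.normal(size=100)])
      y = x + sigma * rng.normal(size=1000)

      for lam in (0.05, 0.1, 0.2, 0.4):
          f = lambda v, lam=lam: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
          print(lam, mc_sure(f, y, sigma), np.sum((f(y) - x) ** 2))   # SURE vs true squared error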

  10. Optimal Tikhonov regularization for DEER spectroscopy

    NASA Astrophysics Data System (ADS)

    Edwards, Thomas H.; Stoll, Stefan

    2018-03-01

    Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α, and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.
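
    For a flavor of how such criteria operate, the sketch below applies Tikhonov regularization with a second-derivative operator to a toy exponential-kernel problem and scores each parameter with a generic AIC for linear smoothers (n·log(RSS/n) + 2·df, with df the trace of the influence matrix). The kernel, distribution, and exact form of the criterion are stand-ins and may differ from the authors' DEER-specific definitions.

      # Hedged sketch: Tikhonov with a second-derivative operator, alpha chosen by a generic AIC.
      import numpy as np

      rng = np.random.default_rng(4)
      n, m = 80, 60
      t, r = np.linspace(0, 3, n), np.linspace(1, 8, m)
      K = np.exp(-t[:, None] * r[None, :])                       # toy (non-oscillatory) kernel
      p_true = np.exp(-0.5 * ((r - 4.0) / 0.4) ** 2)             # toy distance distribution
      S = K @ p_true + 0.01 * rng.normal(size=n)

      L = np.diff(np.eye(m), 2, axis=0)                          # second-derivative operator

      def aic(alpha):
          A = np.vstack([K, alpha * L])
          b = np.concatenate([S, np.zeros(m - 2)])
          p = np.linalg.lstsq(A, b, rcond=None)[0]               # stacked Tikhonov solution
          H = K @ np.linalg.solve(K.T @ K + alpha**2 * L.T @ L, K.T)   # influence matrix
          return n * np.log(np.sum((S - K @ p) ** 2) / n) + 2.0 * np.trace(H)

      alphas = np.logspace(-3, 1, 50)
      print("AIC-selected alpha:", alphas[np.argmin([aic(a) for a in alphas])])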

  11. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures, Predicted-SURE and Projected-SURE (which require knowledge of the noise variance σ²), and GCV (which does not need σ²) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently leads to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764

  12. On-Line Identification of Simulation Examples for Forgetting Methods to Track Time Varying Parameters Using the Alternative Covariance Matrix in Matlab

    NASA Astrophysics Data System (ADS)

    Vachálek, Ján

    2011-12-01

    The paper compares the abilities of forgetting methods to track time varying parameters of two different simulated models with different types of excitation. The quantities observed in the simulations are the integral sum of the Euclidean norm of the deviation of the parameter estimates from their true values, and a prediction error count within a selected band. As supplementary information, we observe the eigenvalues of the covariance matrix. In the paper we used a modified method of Regularized Exponential Forgetting with Alternative Covariance Matrix (REFACM) along with Directional Forgetting (DF) and three standard regularized methods.

  13. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    NASA Astrophysics Data System (ADS)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamically selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.

  14. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng Jinchao; Qin Chenghu; Jia Kebin

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regards to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l2 data-fidelity term plus a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires the computation of the residual and regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated our ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that our algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm is computationally more efficient than the heuristic method. The effectiveness of the new algorithm was also confirmed by comparing it with the L-curve method. Furthermore, various initial guesses of the regularization parameter were used to illustrate the convergence of our algorithm. Finally, an in vivo mouse experiment further illustrates the effectiveness of the proposed algorithm. Conclusions: Utilizing numerical, physical phantom and in vivo examples, we demonstrated that the bioluminescent sources could be reconstructed accurately with automatically chosen regularization parameters. The proposed algorithm exhibited superior performance compared with both the heuristic regularization parameter choice method and the L-curve method in terms of computational speed and localization error.

  15. Gene selection in cancer classification using sparse logistic regression with Bayesian regularization.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2006-10-01

    Gene selection algorithms for cancer classification, based on the expression of a small number of biomarker genes, have been the subject of considerable research in recent years. Shevade and Keerthi propose a gene selection algorithm based on sparse logistic regression (SLogReg) incorporating a Laplace prior to promote sparsity in the model parameters, and provide a simple but efficient training procedure. The degree of sparsity obtained is determined by the value of a regularization parameter, which must be carefully tuned in order to optimize performance. This normally involves a model selection stage, based on a computationally intensive search for the minimizer of the cross-validation error. In this paper, we demonstrate that a simple Bayesian approach can be taken to eliminate this regularization parameter entirely, by integrating it out analytically using an uninformative Jeffreys prior. The improved algorithm (BLogReg) is then typically two or three orders of magnitude faster than the original algorithm, as there is no longer a need for a model selection step. The BLogReg algorithm is also free from selection bias in performance estimation, a common pitfall in the application of machine learning algorithms in cancer classification. The SLogReg, BLogReg and Relevance Vector Machine (RVM) gene selection algorithms are evaluated over the well-studied colon cancer and leukaemia benchmark datasets. The leave-one-out estimates of the probability of test error and cross-entropy of the BLogReg and SLogReg algorithms are very similar; however, the BLogReg algorithm is found to be considerably faster than the original SLogReg algorithm. Using nested cross-validation to avoid selection bias, performance estimation for SLogReg on the leukaemia dataset takes almost 48 h, whereas the corresponding result for BLogReg is obtained in only 1 min 24 s, making BLogReg by far the more practical algorithm. BLogReg also demonstrates better estimates of conditional probability than the RVM, which are of great importance in medical applications, with similar computational expense. A MATLAB implementation of the sparse logistic regression algorithm with Bayesian regularization (BLogReg) is available from http://theoval.cmp.uea.ac.uk/~gcc/cbl/blogreg/

  16. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters.

    PubMed

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.
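
    A minimal sketch of the regularized minimum norm inverse operator at the center of this comparison, applied with two different regularization weights to the same data; the leadfield and sensor time series are random stand-ins rather than an MEG pipeline, and the two lambda values merely mimic the power-versus-coherence contrast reported above.

      # Hedged sketch: Tikhonov-regularized minimum norm operator W = L^T (L L^T + reg*I)^-1.
      import numpy as np

      rng = np.random.default_rng(5)
      n_sens, n_src, n_t = 30, 500, 200
      L = rng.normal(size=(n_sens, n_src))              # hypothetical leadfield
      Y = rng.normal(size=(n_sens, n_t))                # hypothetical sensor data

      def mne_operator(L, lam):
          G = L @ L.T
          reg = lam * np.trace(G) / n_sens              # scale regularization to the leadfield
          return L.T @ np.linalg.inv(G + reg * np.eye(n_sens))

      for lam in (1e-1, 1e-3):                          # heavier vs lighter regularization
          X = mne_operator(L, lam) @ Y                  # estimated source time series
          print(f"lambda={lam}: mean source power = {np.mean(X**2):.4g}")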

  17. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters

    PubMed Central

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179

  18. A genetic algorithm approach to estimate glacier mass variations from GRACE data

    NASA Astrophysics Data System (ADS)

    Reimond, Stefan; Klinger, Beate; Krauss, Sandro; Mayer-Gürr, Torsten

    2017-04-01

    The application of a genetic algorithm (GA) to the inference of glacier mass variations with a point-mass modeling method is described. GRACE K-band ranging data (available since April 2002) processed at the Graz University of Technology serve as input for this study. The reformulation of the point-mass inversion method in terms of an optimization problem is motivated by two reasons: first, an improved choice of the positions of the modeled point-masses (with a particular focus on the depth parameter) is expected to increase the signal-to-noise ratio. Considering these coordinates as additional unknown parameters (besides the mass change magnitudes) results in a highly non-linear optimization problem. The second reason is that the mass inversion from satellite tracking data is an ill-posed problem, and hence regularization becomes necessary. The main task in this context is the determination of the regularization parameter, which is typically done by means of heuristic selection rules such as the L-curve criterion. In this study, however, the challenge of selecting a suitable balancing parameter (or even a matrix) is tackled by incorporating regularization into the overall optimization problem. Based on this novel approach, estimations of ice-mass changes in various alpine glacier systems (e.g. Svalbard) are presented and compared to existing results and alternative inversion methods.

  19. Parameter selection in limited data cone-beam CT reconstruction using edge-preserving total variation algorithms

    NASA Astrophysics Data System (ADS)

    Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr

    2017-12-01

    There are a number of powerful total variation (TV) regularization methods that show great promise in limited data cone-beam CT reconstruction by enhancing image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements an edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better, with fewer sensitive parameters to tune.

  20. A regularized variable selection procedure in additive hazards model with stratified case-cohort design.

    PubMed

    Ni, Ai; Cai, Jianwen

    2018-07-01

    Case-cohort designs are commonly used in large epidemiological studies to reduce the cost associated with covariate measurement. In many such studies the number of covariates is very large. An efficient variable selection method is needed for case-cohort studies where the covariates are only observed in a subset of the sample. Current literature on this topic has been focused on the proportional hazards model. However, in many studies the additive hazards model is preferred over the proportional hazards model either because the proportional hazards assumption is violated or because the additive hazards model provides more relevant information to the research question. Motivated by one such study, the Atherosclerosis Risk in Communities (ARIC) study, we investigate the properties of a regularized variable selection procedure in stratified case-cohort design under an additive hazards model with a diverging number of parameters. We establish the consistency and asymptotic normality of the penalized estimator and prove its oracle property. Simulation studies are conducted to assess the finite sample performance of the proposed method with a modified cross-validation tuning parameter selection method. We apply the variable selection procedure to the ARIC study to demonstrate its practical use.

  1. Further investigation on "A multiplicative regularization for force reconstruction"

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.

  2. Multiple graph regularized protein domain ranking.

    PubMed

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  3. Multiple graph regularized protein domain ranking

    PubMed Central

    2012-01-01

    Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. PMID:23157331

  4. Bayesian Recurrent Neural Network for Language Modeling.

    PubMed

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) is calculated as the probability of a word sequence that provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in the continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and apply it for continuous speech recognition. We aim to penalize an overly complicated RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.

  5. Semi-experimental equilibrium structure of pyrazinamide from gas-phase electron diffraction. How much experimental is it?

    NASA Astrophysics Data System (ADS)

    Tikhonov, Denis S.; Vishnevskiy, Yury V.; Rykov, Anatolii N.; Grikina, Olga E.; Khaikin, Leonid S.

    2017-03-01

    A semi-experimental equilibrium structure of free molecules of pyrazinamide has been determined for the first time using the gas electron diffraction method. The refinement was carried out using regularization of the geometry by calculated quantum chemical parameters. It is discussed to what extent the final structure is experimental. A numerical approach for estimating the amount of experimental information in the refined parameters is suggested. The following values of selected internuclear distances were determined (values are in Å with 1σ in the parentheses): re(Cpyrazine-Cpyrazine)av = 1.397(2), re(Npyrazine-Cpyrazine)av = 1.332(3), re(Cpyrazine-Camide) = 1.493(1), re(Namide-Camide) = 1.335(2), re(Oamide-Camide) = 1.219(1). The given standard deviations represent pure experimental uncertainties without the influence of regularization.

  6. Efficient and sparse feature selection for biomedical text classification via the elastic net: Application to ICU risk stratification from nursing notes.

    PubMed

    Marafino, Ben J; Boscardin, W John; Dudley, R Adams

    2015-04-01

    Sparsity is often a desirable property of statistical models, and various feature selection methods exist so as to yield sparser and interpretable models. However, their application to biomedical text classification, particularly to mortality risk stratification among intensive care unit (ICU) patients, has not been thoroughly studied. To develop and characterize sparse classifiers based on the free text of nursing notes in order to predict ICU mortality risk and to discover text features most strongly associated with mortality. We selected nursing notes from the first 24h of ICU admission for 25,826 adult ICU patients from the MIMIC-II database. We then developed a pair of stochastic gradient descent-based classifiers with elastic-net regularization. We also studied the performance-sparsity tradeoffs of both classifiers as their regularization parameters were varied. The best-performing classifier achieved a 10-fold cross-validated AUC of 0.897 under the log loss function and full L2 regularization, while full L1 regularization used just 0.00025% of candidate input features and resulted in an AUC of 0.889. Using the log loss (range of AUCs 0.889-0.897) yielded better performance compared to the hinge loss (0.850-0.876), but the latter yielded even sparser models. Most features selected by both classifiers appear clinically relevant and correspond to predictors already present in existing ICU mortality models. The sparser classifiers were also able to discover a number of informative - albeit nonclinical - features. The elastic-net-regularized classifiers perform reasonably well and are capable of reducing the number of features required by over a thousandfold, with only a modest impact on performance. Copyright © 2015 Elsevier Inc. All rights reserved.
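
    The sketch below shows an elastic-net-regularized text classifier of the general kind described (stochastic gradient descent with log loss), using scikit-learn; the toy notes, labels, and parameter values are invented, the loss name assumes a recent scikit-learn release, and nothing here reproduces the MIMIC-II experiments.

      # Hedged sketch: elastic-net-regularized bag-of-words classifier via SGD.
      import numpy as np
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import SGDClassifier

      notes = ["patient alert and oriented", "intubated sedated on pressors",
               "ambulating tolerating diet", "unresponsive family at bedside"]
      labels = [0, 1, 0, 1]                               # toy outcome labels

      X = CountVectorizer().fit_transform(notes)
      clf = SGDClassifier(loss="log_loss", penalty="elasticnet",
                          alpha=1e-4, l1_ratio=0.5, max_iter=1000, random_state=0)
      clf.fit(X, labels)
      print("nonzero coefficients:", int(np.sum(clf.coef_ != 0)), "of", clf.coef_.size)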

  7. Statistical approach to Higgs boson couplings in the standard model effective field theory

    NASA Astrophysics Data System (ADS)

    Murphy, Christopher W.

    2018-01-01

    We perform a parameter fit in the standard model effective field theory (SMEFT) with an emphasis on using regularized linear regression to tackle the issue of the large number of parameters in the SMEFT. In regularized linear regression, a positive definite function of the parameters of interest is added to the usual cost function. A cross-validation is performed to try to determine the optimal value of the regularization parameter to use, but it selects the standard model (SM) as the best model to explain the measurements. Nevertheless, as proof of principle of this technique we apply it to fitting Higgs boson signal strengths in SMEFT, including the latest Run-2 results. Results are presented in terms of the eigensystem of the covariance matrix of the least squares estimators, as it has a degree of model-independence to it. We find several results in this initial work: the SMEFT predicts the total width of the Higgs boson to be consistent with the SM prediction; the ATLAS and CMS experiments at the LHC are currently sensitive to non-resonant double Higgs boson production. Constraints are derived on the viable parameter space for electroweak baryogenesis in the SMEFT, reinforcing the notion that a first order phase transition requires fairly low-scale beyond-the-SM physics. Finally, we study which future experimental measurements would give the most improvement on the global constraints on the Higgs sector of the SMEFT.
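
    A toy sketch of the procedure described in the first half of the abstract, i.e. ridge-type regularized linear regression with the regularization strength chosen by cross-validation; the design matrix and "measurements" are random placeholders, not Higgs signal-strength data.

      # Hedged sketch: cross-validated selection of the ridge regularization strength.
      import numpy as np
      from sklearn.linear_model import RidgeCV

      rng = np.random.default_rng(6)
      X = rng.normal(size=(40, 25))                     # 25 toy coefficient directions
      y = 0.3 * X[:, 0] + 0.05 * rng.normal(size=40)    # data explained by a single direction

      fit = RidgeCV(alphas=np.logspace(-3, 3, 25)).fit(X, y)
      print("CV-selected regularization strength:", fit.alpha_)
      # A very large selected strength shrinks all coefficients toward zero, analogous
      # to cross-validation preferring the SM point in the fit described above.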

  8. Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.

    PubMed

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2017-06-01

    Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution and error surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has a better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.

  9. Spatially adapted second-order total generalized variational image deblurring model under impulse noise

    NASA Astrophysics Data System (ADS)

    Zhong, Qiu-Xiang; Wu, Chuan-Sheng; Shu, Qiao-Ling; Liu, Ryan Wen

    2018-04-01

    Image deblurring under impulse noise is a typical ill-posed problem which requires regularization methods to guarantee high-quality imaging. The L1-norm data-fidelity term and the total variation (TV) regularizer have been combined to form a popular regularization method. However, the TV-regularized variational image deblurring model often suffers from staircase-like artifacts leading to image quality degradation. To enhance image quality, the detail-preserving total generalized variation (TGV) was introduced to replace TV and eliminate the undesirable artifacts. The resulting nonconvex optimization problem was effectively solved using the alternating direction method of multipliers (ADMM). In addition, an automatic method for selecting spatially adapted regularization parameters was proposed to further improve deblurring performance. Our proposed image deblurring framework is able to remove blurring and impulse noise effects while maintaining the image edge details. Comprehensive experiments have been conducted to demonstrate the superior performance of our proposed method over several state-of-the-art image deblurring methods.

  10. Thermodynamic Modeling of the YO1.5-ZrO2 System

    NASA Technical Reports Server (NTRS)

    Jacobson, Nathan S.; Liu, Zi-Kui; Kaufman, Larry; Zhang, Fan

    2003-01-01

    The YO1.5-ZrO2 system consists of five solid solutions, one liquid solution, and one intermediate compound. A thermodynamic description of this system is developed, which allows calculation of the phase diagram and thermodynamic properties. Two different solution models are used: a neutral species model with YO1.5 and ZrO2 as the components, and a charged species model with Y(+3), Zr(+4), O(-2), and vacancies as components. For each model, regular and sub-regular solution parameters are derived from selected equilibrium phase and thermodynamic data.

  11. A trade-off solution between model resolution and covariance in surface-wave inversion

    USGS Publications Warehouse

    Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.

    2010-01-01

    Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
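
    A compact sketch of the selection rule described above: sort the singular values of the last-iteration Jacobian, take the first one that "approaches zero" (here, operationalized as falling below a small fraction of the largest), and use it as the damping parameter when forming the model resolution and unit covariance matrices. The Jacobian and the threshold are invented stand-ins for a surface-wave sensitivity matrix.

      # Hedged sketch: regularization parameter from the singular value plot, then R and C.
      import numpy as np

      rng = np.random.default_rng(7)
      J = rng.normal(size=(40, 15)) @ np.diag(np.logspace(0, -6, 15))   # ill-conditioned Jacobian
      U, s, Vt = np.linalg.svd(J, full_matrices=False)

      mu = next(sv for sv in s if sv < 1e-3 * s[0])     # first singular value "approaching zero"
      filt = s**2 / (s**2 + mu**2)                      # damped least-squares filter factors

      R = Vt.T @ np.diag(filt) @ Vt                     # model resolution matrix
      C = Vt.T @ np.diag(filt**2 / s**2) @ Vt           # unit covariance matrix
      print("selected parameter:", mu)
      print("diagonal of resolution matrix:", np.round(np.diag(R), 2))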

  12. Identification of moving vehicle forces on bridge structures via moving average Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin

    2017-08-01

    Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Many regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter into and exit a bridge deck, due to the low sensitivity of structural responses to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by combining it with the moving average concept. Firstly, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with stable average value (DFS-SAV). Secondly, the reasonable signal feature of DFS-SAV is quantified and introduced to improve the penalty function (||x||_2^2) defined in the classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed for assessing the accuracy and the feasibility of the proposed method. The illustrated results show that the moving forces can be accurately identified with a strong robustness. Some related issues, such as the selection of the moving window length, the effect of different penalty functions, and the effect of different car speeds, are discussed as well.

  13. Anaemia, iron deficiency and iron deficiency anaemia among blood donors in Port Harcourt, Nigeria.

    PubMed

    Jeremiah, Zaccheaus Awortu; Koate, Baribefe Banavule

    2010-04-01

    There is paucity of information on the effect of blood donation on iron stores in Port Harcourt, Nigeria. The present study was, therefore, designed to assess, using a combination of haemoglobin and iron status parameters, the development of anaemia and prevalence of iron deficiency anaemia in this area of Nigeria. Three hundred and forty-eight unselected consecutive whole blood donors, comprising 96 regular donors, 156 relatives of patients and 96 voluntary donors, constituted the study population. Three haematological parameters (haemoglobin, packed cell volume, and mean cell haemoglobin concentration) and four biochemical iron parameters (serum ferritin, serum iron, total iron binding capacity and transferrin saturation) were assessed using standard colorimetric and ELISA techniques. The prevalence of anaemia alone (haemoglobin <11.0 g/dL) was 13.7%. The prevalence of isolated iron deficiency (serum ferritin <12 ng/mL) was 20.6% while that of iron-deficiency anaemia (haemoglobin <11.0 g/dL + serum ferritin <12.0 ng/mL) was 12.0%. Among the three categories of the donors, the regular donors were found to be most adversely affected as shown by the reduction in mean values of both haematological and biochemical iron parameters. Interestingly, anaemia, iron deficiency and iron-deficiency anaemia were present almost exclusively among regular blood donors, all of whom were over 35 years old. Anaemia, iron deficiency and iron-deficiency anaemia are highly prevalent among blood donors in Port Harcourt, Nigeria. It will be necessary to review the screening tests for the selection of blood donors and also include serum ferritin measurement for the routine assessment of blood donors, especially among regular blood donors.

  14. Regularization of rupture dynamics along bi-material interfaces: a parametric study and simulations of the Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Scala, Antonio; Festa, Gaetano; Vilotte, Jean-Pierre

    2015-04-01

    Faults are often interfaces between materials with different elastic properties. This is generally the case of plate boundaries in subduction zones, where the ruptures extend for many kilometers crossing materials with strong impedance contrasts (oceanic crust, continental crust, mantle wedge, accretionary prism). From a physical point of view, several peculiar features have emerged from both analogue experiments and numerical simulations for a rupture propagating along a bimaterial interface. The elastodynamic flux at the rupture tip breaks its symmetry, inducing normal stress changes and an asymmetric propagation. The latter was widely shown for rupture velocity and slip rate (e.g. Xia et al., 2005) and was supposed to generate an asymmetric distribution of the aftershocks (Rubin and Ampuero, 2007). The bimaterial problem coupled with a Coulomb friction law is ill-posed for a wide range of impedance contrasts, due to a missing length scale in the instantaneous response to the normal traction changes. The ill-posedness also results in simulations that are no longer independent of the grid size. A regularization can be introduced by delaying the tangential traction from the normal traction, as suggested by Cochard and Rice (2000) and Ranjith and Rice (2000): ∂σ_eff/∂t = (|v| + v*)/δ_σ (σ_n - σ_eff), where σ_eff represents the effective normal stress to be used in the Coulomb friction. This regularization introduces two delays, depending on the slip rate and on a fixed time scale. In this study we performed a large number of 2D dynamic numerical simulations of in-plane rupture with the spectral element method, and we systematically investigated the effect of parameter selection on the rupture propagation, dissipation and radiation, also performing a direct comparison with solutions provided by numerical and experimental results. We found that a purely time-dependent regularization requires fine tuning, rapidly jumping from a too fast, ineffective delay to a slow, invasive regularization as a function of the actual slip rate. Conversely, the choice of a fixed relaxation length, smaller than the critical slip weakening distance, provides a reliable class of solutions for a wide range of elastic and frictional parameters. Nevertheless, critical rupture stages, such as the nucleation or the very fast steady-state propagation, may show resolution problems and may take advantage of adaptive schemes, with a space/time variation of the parameters. We used these recipes for bimaterial regularization to perform along-dip dynamic simulations of the Tohoku earthquake in the framework of a slip weakening model, with a realistic description of the geometry of the interface and the geological structure. We investigated in detail the role of the impedance contrasts on the evolution of the rupture and the short wavelength radiation. We also show that pathological effects may arise from a poor selection of the regularization parameters.

  15. On convergence and convergence rates for Ivanov and Morozov regularization and application to some parameter identification problems in elliptic PDEs

    NASA Astrophysics Data System (ADS)

    Kaltenbacher, Barbara; Klassen, Andrej

    2018-05-01

    In this paper we provide a convergence analysis of some variational methods alternative to the classical Tikhonov regularization, namely Ivanov regularization (also called the method of quasi solutions) with some versions of the discrepancy principle for choosing the regularization parameter, and Morozov regularization (also called the method of the residuals). After motivating nonequivalence with Tikhonov regularization by means of an example, we prove well-definedness of the Ivanov and the Morozov method, convergence in the sense of regularization, as well as convergence rates under variational source conditions. Finally, we apply these results to some linear and nonlinear parameter identification problems in elliptic boundary value problems.

  16. The quasi-optimality criterion in the linear functional strategy

    NASA Astrophysics Data System (ADS)

    Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey

    2018-07-01

    The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications that take into account the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules, taking into account the smoothness of the solution and the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both a deterministic and a stochastic setup and verify that for mildly ill-posed problems and Gaussian noise, these conditions are satisfied almost surely, whereas, on the contrary, in the severely ill-posed case and in a similar setup, the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.
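
    A minimal sketch of the basic (unmodified) quasi-optimality rule for ordinary Tikhonov regularization: on a geometric grid of parameters, pick the one minimizing the norm of the difference between successive regularized solutions. The operator and data are generic placeholders, and the linear-functional and smoothness refinements of the paper are not included.

      # Hedged sketch: quasi-optimality choice of the Tikhonov parameter on a geometric grid.
      import numpy as np

      rng = np.random.default_rng(8)
      n = 60
      A = rng.normal(size=(n, n)) @ np.diag(1.0 / (1.0 + np.arange(n)) ** 2)
      y = A @ rng.normal(size=n) + 1e-3 * rng.normal(size=n)

      def tikhonov(alpha):
          return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

      alphas = np.logspace(-10, 0, 60)
      sols = [tikhonov(a) for a in alphas]
      jumps = [np.linalg.norm(sols[k + 1] - sols[k]) for k in range(len(alphas) - 1)]
      print("quasi-optimality parameter:", alphas[int(np.argmin(jumps))])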

  17. Robust check loss-based variable selection of high-dimensional single-index varying-coefficient model

    NASA Astrophysics Data System (ADS)

    Song, Yunquan; Lin, Lu; Jian, Ling

    2016-07-01

    The single-index varying-coefficient model is an important mathematical modeling method for nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric components and parametric components. Under defined regularity conditions, with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, due to the robustness of the check loss function to outliers in finite samples, our proposed variable selection method is more robust than the ones based on the least squares criterion. Finally, the method is illustrated with numerical simulations.

  18. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    NASA Astrophysics Data System (ADS)

    Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan

    2010-12-01

    This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select effective Gabor features, i.e., a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. The RDA combines strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small sample size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.

  19. Regularized matrix regression

    PubMed Central

    Zhou, Hua; Li, Lexin

    2014-01-01

    Summary Modern technologies are producing a wealth of data with complex structures. For instance, in two-dimensional digital imaging, flow cytometry and electroencephalography, matrix-type covariates frequently arise when measurements are obtained for each combination of two underlying variables. To address scientific questions arising from those data, new regression methods that take matrices as covariates are needed, and sparsity or other forms of regularization are crucial owing to the ultrahigh dimensionality and complex structure of the matrix data. The popular lasso and related regularization methods hinge on the sparsity of the true signal in terms of the number of its non-zero coefficients. However, for the matrix data, the true signal is often of, or can be well approximated by, a low rank structure. As such, the sparsity is frequently in the form of low rank of the matrix parameters, which may seriously violate the assumption of the classical lasso. We propose a class of regularized matrix regression methods based on spectral regularization. A highly efficient and scalable estimation algorithm is developed, and a degrees-of-freedom formula is derived to facilitate model selection along the regularization path. Superior performance of the method proposed is demonstrated on both synthetic and real examples. PMID:24648830
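
    A minimal sketch of the spectral-regularization idea behind such estimators: a nuclear-norm penalty is handled by soft-thresholding singular values, which promotes low-rank matrix coefficients (a generic proximal step on synthetic data, not the estimation algorithm of the paper):

        import numpy as np

        def singular_value_soft_threshold(B, tau):
            # proximal operator of tau * ||B||_* : shrink the spectrum toward zero
            U, s, Vt = np.linalg.svd(B, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        rng = np.random.default_rng(2)
        low_rank = np.outer(rng.standard_normal(20), rng.standard_normal(15)) * 5.0
        B_noisy = low_rank + rng.standard_normal((20, 15))
        B_hat = singular_value_soft_threshold(B_noisy, tau=4.0)
        print(np.linalg.matrix_rank(B_hat, tol=1e-8))   # far fewer effective singular values remain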

  20. The addition of entropy-based regularity parameters improves sleep stage classification based on heart rate variability.

    PubMed

    Aktaruzzaman, M; Migliorini, M; Tenhunen, M; Himanen, S L; Bianchi, A M; Sassi, R

    2015-05-01

    The work considers automatic sleep stage classification, based on heart rate variability (HRV) analysis, with a focus on the distinction of wakefulness (WAKE) from sleep and rapid eye movement (REM) from non-REM (NREM) sleep. A set of 20 automatically annotated one-night polysomnographic recordings was considered, and artificial neural networks were selected for classification. For each inter-heartbeat (RR) series, besides features previously presented in the literature, we introduced a set of four parameters related to signal regularity. RR series of three different lengths were considered (corresponding to 2, 6, and 10 successive epochs, 30 s each, in the same sleep stage). Two sets of only four features captured 99% of the data variance in each classification problem, and both of them contained one of the new regularity features proposed. The accuracy of classification for REM versus NREM (68.4%, 2 epochs; 83.8%, 10 epochs) was higher than when distinguishing WAKE versus SLEEP (67.6%, 2 epochs; 71.3%, 10 epochs). Also, the reliability parameter (Cohen's Kappa) was higher (0.68 and 0.45, respectively). Sleep staging classification based on HRV was still less precise than other staging methods, employing a larger variety of signals collected during polysomnographic studies. However, cheap and unobtrusive HRV-only sleep classification proved sufficiently precise for a wide range of applications.

  1. Symmetry-plane model of 3D Euler flows: Mapping to regular systems and numerical solutions of blowup

    NASA Astrophysics Data System (ADS)

    Mulungye, Rachel M.; Lucas, Dan; Bustamante, Miguel D.

    2014-11-01

    We introduce a family of 2D models describing the dynamics on the so-called symmetry plane of the full 3D Euler fluid equations. These models depend on a free real parameter and can be solved analytically. For selected representative values of the free parameter, we apply the method introduced in [M.D. Bustamante, Physica D: Nonlinear Phenom. 240, 1092 (2011)] to map the fluid equations bijectively to globally regular systems. By comparing the analytical solutions with the results of numerical simulations, we establish that the numerical simulations of the mapped regular systems are far more accurate than the numerical simulations of the original systems, at the same spatial resolution and CPU time. In particular, the numerical integrations of the mapped regular systems produce robust estimates for the growth exponent and singularity time of the main blowup quantity (vorticity stretching rate), converging well to the analytically-predicted values even beyond the time at which the flow becomes under-resolved (i.e. the reliability time). In contrast, direct numerical integrations of the original systems develop unstable oscillations near the reliability time. We discuss the reasons for this improvement in accuracy, and explain how to extend the analysis to the full 3D case. Supported under the programme for Research in Third Level Institutions (PRTLI) Cycle 5 and co-funded by the European Regional Development Fund.

  2. An intelligent identification algorithm for the monoclonal picking instrument

    NASA Astrophysics Data System (ADS)

    Yan, Hua; Zhang, Rongfu; Yuan, Xujun; Wang, Qun

    2017-11-01

    Traditional colony selection is mainly performed manually, which suffers from low efficiency and strong subjectivity. Therefore, it is important to develop an automatic monoclonal-picking instrument. The critical stage of automatic monoclonal picking and intelligent optimal selection is the identification algorithm. An auto-screening algorithm based on the Support Vector Machine (SVM) is proposed in this paper; it uses supervised learning combined with colony morphological characteristics to classify colonies accurately. Furthermore, from the basic morphological features of the colony, the system computes a series of morphological parameters step by step. Through the establishment of a maximal-margin classifier, and based on an analysis of the colony growth trend, the selection of monoclonal colonies is carried out. The experimental results showed that the auto-screening algorithm could separate regular colonies from the others, meeting the requirements on the various parameters.

  3. Parameters Selection for Bivariate Multiscale Entropy Analysis of Postural Fluctuations in Fallers and Non-Fallers Older Adults.

    PubMed

    Ramdani, Sofiane; Bonnet, Vincent; Tallon, Guillaume; Lagarde, Julien; Bernard, Pierre Louis; Blain, Hubert

    2016-08-01

    Entropy measures are often used to quantify the regularity of postural sway time series. Recent methodological developments provided both multivariate and multiscale approaches allowing the extraction of complexity features from physiological signals; see "Dynamical complexity of human responses: A multivariate data-adaptive framework," in Bulletin of Polish Academy of Science and Technology, vol. 60, p. 433, 2012. The resulting entropy measures are good candidates for the analysis of bivariate postural sway signals exhibiting nonstationarity and multiscale properties. These methods are dependent on several input parameters such as embedding parameters. Using two data sets collected from institutionalized frail older adults, we numerically investigate the behavior of a recent multivariate and multiscale entropy estimator; see "Multivariate multiscale entropy: A tool for complexity analysis of multichannel data," Physics Review E, vol. 84, p. 061918, 2011. We propose criteria for the selection of the input parameters. Using these optimal parameters, we statistically compare the multivariate and multiscale entropy values of postural sway data of non-faller subjects to those of fallers. These two groups are discriminated by the resulting measures over multiple time scales. We also demonstrate that the typical parameter settings proposed in the literature lead to entropy measures that do not distinguish the two groups. This last result confirms the importance of the selection of appropriate input parameters.

  4. Enhanced nearfield acoustic holography for larger distances of reconstructions using fixed parameter Tikhonov regularization

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh G.; Muehleisen, Ralph T.

    2016-07-07

    This paper evaluates the performance of various regularization parameter choice methods applied to different approaches of nearfield acoustic holography when a very nearfield measurement is not possible. For a fixed grid resolution, the larger the hologram distance, the larger the error in the naive nearfield acoustic holography reconstructions. These errors can be smoothed out by using an appropriate order of regularization. In conclusion, this study shows that by using a fixed/manual choice of regularization parameter, instead of automated parameter choice methods, reasonably accurate reconstructions can be obtained even when the hologram distance is 16 times larger than the grid resolution.

  5. Evaluating large-scale propensity score performance through real-world and synthetic data experiments.

    PubMed

    Tian, Yuxi; Schuemie, Martijn J; Suchard, Marc A

    2018-06-22

    Propensity score adjustment is a popular approach for confounding control in observational studies. Reliable frameworks are needed to determine relative propensity score performance in large-scale studies, and to establish optimal propensity score model selection methods. We detail a propensity score evaluation framework that includes synthetic and real-world data experiments. Our synthetic experimental design extends the 'plasmode' framework and simulates survival data under known effect sizes, and our real-world experiments use a set of negative control outcomes with presumed null effect sizes. In reproductions of two published cohort studies, we compare two propensity score estimation methods that contrast in their model selection approach: L1-regularized regression that conducts a penalized likelihood regression, and the 'high-dimensional propensity score' (hdPS) that employs a univariate covariate screen. We evaluate methods on a range of outcome-dependent and outcome-independent metrics. L1-regularization propensity score methods achieve superior model fit, covariate balance and negative control bias reduction compared with the hdPS. Simulation results are mixed and fluctuate with simulation parameters, revealing a limitation of simulation under the proportional hazards framework. Including regularization with the hdPS reduces commonly reported non-convergence issues but has little effect on propensity score performance. L1-regularization incorporates all covariates simultaneously into the propensity score model and offers propensity score performance superior to the hdPS marginal screen.
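
    A minimal sketch of an L1-regularized propensity score model on synthetic covariates, using a generic scikit-learn logistic regression (an illustration of the general approach; the study's large-scale implementation and the hdPS comparator are not reproduced):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(3)
        X = rng.standard_normal((1000, 200))                        # high-dimensional covariates (synthetic)
        p_treat = 1.0 / (1.0 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))  # true treatment model (assumed)
        treatment = rng.binomial(1, p_treat)

        # C is the inverse regularization strength; in practice it would be tuned, e.g. by cross-validation
        ps_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        ps_model.fit(X, treatment)
        propensity = ps_model.predict_proba(X)[:, 1]                # estimated propensity scores
        n_kept = np.count_nonzero(ps_model.coef_)                   # covariates retained by the L1 penalty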

  6. Regularization of soft-X-ray imaging in the DIII-D tokamak

    DOE PAGES

    Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...

    2015-03-02

    We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise to signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
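
    A minimal sketch of the L-curve criterion mentioned above, for a plain Tikhonov problem on synthetic data: the corner of the log residual-norm versus log solution-norm curve, located here by maximum curvature, gives the regularization parameter (a generic illustration, not the SXRIS inversion code):

        import numpy as np

        rng = np.random.default_rng(4)
        A = rng.standard_normal((80, 60)) @ np.diag(0.8 ** np.arange(60))
        b = A @ rng.standard_normal(60) + 0.01 * rng.standard_normal(80)

        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        lams = np.geomspace(1e-8, 1e1, 100)
        rho, eta = [], []                               # residual norms and solution norms
        for lam in lams:
            x = Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ b))
            rho.append(np.linalg.norm(A @ x - b))
            eta.append(np.linalg.norm(x))

        # corner = point of maximum curvature of the (log rho, log eta) curve
        lr, le = np.log(rho), np.log(eta)
        d1r, d1e = np.gradient(lr), np.gradient(le)
        d2r, d2e = np.gradient(d1r), np.gradient(d1e)
        kappa = np.abs(d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
        lam_corner = lams[int(np.argmax(kappa))]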

  7. Solving regularly and singularly perturbed reaction-diffusion equations in three space dimensions

    NASA Astrophysics Data System (ADS)

    Moore, Peter K.

    2007-06-01

    In [P.K. Moore, Effects of basis selection and h-refinement on error estimator reliability and solution efficiency for higher-order methods in three space dimensions, Int. J. Numer. Anal. Mod. 3 (2006) 21-51] a fixed, high-order h-refinement finite element algorithm, Href, was introduced for solving reaction-diffusion equations in three space dimensions. In this paper Href is coupled with continuation creating an automatic method for solving regularly and singularly perturbed reaction-diffusion equations. The simple quasilinear Newton solver of Moore (2006) is replaced by the nonlinear solver NITSOL [M. Pernice, H.F. Walker, NITSOL: a Newton iterative solver for nonlinear systems, SIAM J. Sci. Comput. 19 (1998) 302-318]. Good initial guesses for the nonlinear solver are obtained using continuation in the small parameter ɛ. Two strategies allow adaptive selection of ɛ. The first depends on the rate of convergence of the nonlinear solver and the second implements backtracking in ɛ. Finally a simple method is used to select the initial ɛ. Several examples illustrate the effectiveness of the algorithm.

  8. A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.

    PubMed

    Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng

    To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.

  9. Laser-Induced Breakdown Spectroscopy (LIBS) for spectral characterization of regular coffee beans and luwak coffee bean

    NASA Astrophysics Data System (ADS)

    Nufiqurakhmah, Nufiqurakhmah; Nasution, Aulia; Suyanto, Hery

    2016-11-01

    Luwak (civet) coffee refers to a type of coffee whose cherries have first been digested and then defecated by a civet (Paradoxurus hermaphroditus), a catlike animal native to Indonesia. The civet selectively eats only ripe cherries, which undergo enzymatic fermentation in its digestive system. The defecated beans are then collected and cleaned. It is regarded as the world's most expensive coffee. Traditionally, coffee quality is determined subjectively by a taster. This research is motivated by the need to develop quantitative, more objective parameters for determining the quality of coffee beans and coffee products. The LIBS technique was used to identify the elemental contents of coffee beans based on their spectral characteristics in the range 200-900 nm. Samples of green beans of the arabica and robusta variants, both regular and luwak, were collected from 5 plantations in East Java. From the recorded spectra, intensity ratios of nitrogen (N), hydrogen (H), and oxygen (O), essential elements in coffee, were evaluated. In general, values extracted from luwak coffee beans are higher, with increases of 0.03%-79.93%. A Discriminant Function Analysis (DFA) was also applied to identify marker elements that characterize the regular and luwak beans. The elements Ca, W, Sr, Mg, and H differentiate regular from luwak beans of the arabica variant, while Ca and W differentiate regular from luwak beans of the robusta variant.

  10. Improving the performance of extreme learning machine for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Li, Jiaojiao; Du, Qian; Li, Wei; Li, Yunsong

    2015-05-01

    Extreme learning machine (ELM) and kernel ELM (KELM) can offer performance comparable to that of the standard powerful classifier, the support vector machine (SVM), but with much lower computational cost due to an extremely simple training step. However, their performance may be sensitive to several parameters, such as the number of hidden neurons. An empirical linear relationship between the number of training samples and the number of hidden neurons is proposed. Such a relationship can be easily estimated with two small training sets and extended to large training sets so as to greatly reduce computational cost. Other parameters, such as the steepness parameter in the sigmoidal activation function and the regularization parameter in the KELM, are also investigated. The experimental results show that classification performance is sensitive to these parameters; fortunately, simple selection strategies suffice to avoid suboptimal performance.

  11. Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.

    PubMed

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.

  12. What Drives the Variability of the Mid-Latitude Ionosphere?

    NASA Astrophysics Data System (ADS)

    Goncharenko, L. P.; Zhang, S.; Erickson, P. J.; Harvey, L.; Spraggs, M. E.; Maute, A. I.

    2016-12-01

    The state of the ionosphere is determined by the superposition of the regular changes and stochastic variations of the ionospheric parameters. Regular variations are represented by diurnal, seasonal and solar cycle changes, and can be well described by empirical models. Short-term perturbations that vary from a few seconds to a few hours or days can be induced in the ionosphere by solar flares, changes in solar wind, coronal mass ejections, travelling ionospheric disturbances, or meteorological influences. We use over 40 years of observations by the Millstone Hill incoherent scatter radar (42.6°N, 288.5°E) to develop an updated empirical model of ionospheric parameters, and wintertime data collected in 2004-2016 to study variability in ionospheric parameters. We also use NASA MERRA2 atmospheric reanalysis data to examine possible connections between the state of the stratosphere and mesosphere and the upper atmosphere (250-400 km). The major SSW of January 2013 is selected as a case for in-depth study and reveals large anomalies in ionospheric parameters. Modeling with the NCAR Thermosphere-Ionosphere-Mesosphere-Electrodynamics General Circulation Model (TIME-GCM) nudged by a WACCM-GEOS5 simulation indicates that during the 2013 SSW the neutral and ion temperature in the polar through mid-latitude region deviates from the seasonal behavior.

  13. A multi-frequency iterative imaging method for discontinuous inverse medium problem

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Feng, Lixin

    2018-06-01

    The inverse medium problem with a discontinuous refractive index is a challenging inverse problem. We employ primal-dual theory and fast solution of integral equations, and propose a new iterative imaging method. The regularization parameter is selected by the method of generalized cross-validation. Based on multi-frequency measurements of the scattered field, a recursive linearization algorithm is presented that proceeds from low to high frequency. We also discuss the initial-guess selection strategy based on semi-analytical approaches. Numerical experiments are presented to show the effectiveness of the proposed method.
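
    A minimal sketch of the generalized cross-validation rule used for the parameter choice, written for a plain Tikhonov-regularized linear problem on synthetic data (the paper applies GCV inside its integral-equation solver, which is not reproduced here):

        import numpy as np

        rng = np.random.default_rng(5)
        A = rng.standard_normal((60, 40)) @ np.diag(0.85 ** np.arange(40))
        b = A @ rng.standard_normal(40) + 0.02 * rng.standard_normal(60)

        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        n = A.shape[0]

        def gcv(lam):
            f = s**2 / (s**2 + lam**2)                    # Tikhonov filter factors
            x = Vt.T @ ((f / s) * (U.T @ b))
            resid2 = np.linalg.norm(A @ x - b) ** 2
            return n * resid2 / (n - np.sum(f)) ** 2      # GCV(lam) = n ||r||^2 / trace(I - A_lam)^2

        lams = np.geomspace(1e-8, 1e1, 100)
        lam_gcv = lams[int(np.argmin([gcv(lam) for lam in lams]))]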

  14. Traction cytometry: regularization in the Fourier approach and comparisons with finite element method.

    PubMed

    Kulkarni, Ankur H; Ghosh, Prasenjit; Seetharaman, Ashwin; Kondaiah, Paturu; Gundiah, Namrata

    2018-05-09

    Traction forces exerted by adherent cells are quantified using displacements of embedded markers on polyacrylamide substrates due to cell contractility. Fourier Transform Traction Cytometry (FTTC) is widely used to calculate tractions but has inherent limitations due to errors in the displacement fields; these are mitigated through a regularization parameter (γ) in the Reg-FTTC method. An alternate finite element (FE) approach computes tractions on a domain using known boundary conditions. Robust verification and recovery studies are lacking but essential in assessing the accuracy and noise sensitivity of the traction solutions from the different methods. We implemented the L2 regularization method and defined the point of maximum curvature in the plot of traction versus γ as the optimal regularization parameter (γ*) in the Reg-FTTC approach. Traction reconstructions using γ* yield accurate values of low and maximum tractions (Tmax) in the presence of up to 5% noise. Reg-FTTC is hence a clear improvement over the FTTC method but is inadequate to reconstruct low stresses such as those at nascent focal adhesions. FE, implemented using a node-by-node comparison, showed an intermediate reconstruction compared to Reg-FTTC. We performed experiments using mouse embryonic fibroblast (MEF) and compared results between these approaches. Tractions from FTTC and FE showed differences of ∼92% and 22% as compared to Reg-FTTC. Selection of an optimum value of γ for each cell reduced variability in the computed tractions as compared to using a single value of γ for all the MEF cells in this study.

  15. Refraction tomography mapping of near-surface dipping layers using landstreamer data at East Canyon Dam, Utah

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Markiewicz, R.D.; Xia, J.

    2008-01-01

    We apply the P-wave refraction-tomography method to seismic data collected with a landstreamer. Refraction-tomography inversion solutions were determined using regularization parameters that provided the most realistic near-surface solutions that best matched the dipping layer structure of nearby outcrops. A reasonably well matched solution was obtained using an unusual set of optimal regularization parameters. In comparison, the use of conventional regularization parameters did not provide as realistic results. Thus, we consider that even if there is only qualitative a-priori information about a site (i.e., visual) - in the case of the East Canyon Dam, Utah - it might be possible to minimize the refraction nonuniqueness by estimating the most appropriate regularization parameters.

  16. Selection of entropy-measure parameters for knowledge discovery in heart rate variability data

    PubMed Central

    2014-01-01

    Background: Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. Methods: This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. Results: The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. Conclusions: Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary. PMID:25078574

  17. Selection of entropy-measure parameters for knowledge discovery in heart rate variability data.

    PubMed

    Mayer, Christopher C; Bachler, Martin; Hörtenhuber, Matthias; Stocker, Christof; Holzinger, Andreas; Wassertheurer, Siegfried

    2014-01-01

    Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary.

  18. The study on injection parameters of selected alternative fuels used in diesel engines

    NASA Astrophysics Data System (ADS)

    Balawender, K.; Kuszewski, H.; Lejda, K.; Lew, K.

    2016-09-01

    The paper presents selected results concerning the fuel charging and spraying process for selected alternative fuels, including regular diesel fuel, rape oil, FAME, blends of these fuels in various proportions, and blends of rape oil with diesel fuel. Examination of the process included fuel charge measurements. To this end, a set-up for examining Common Rail-type injection systems was used, constructed on the basis of the Bosch EPS-815 test bench, from which the high-pressure pump drive system was adopted. For tests concerning the spraying process, a constant-volume visualisation chamber was utilised. The fuel spray development was recorded with the use of the VisioScope system (AVL).

  19. Novel Harmonic Regularization Approach for Variable Selection in Cox's Proportional Hazards Model

    PubMed Central

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods. PMID:25506389

  20. Regularization strategies for hyperplane classifiers: application to cancer classification with gene expression data.

    PubMed

    Andries, Erik; Hagstrom, Thomas; Atlas, Susan R; Willman, Cheryl

    2007-02-01

    Linear discrimination, from the point of view of numerical linear algebra, can be treated as solving an ill-posed system of linear equations. In order to generate a solution that is robust in the presence of noise, these problems require regularization. Here, we examine the ill-posedness involved in the linear discrimination of cancer gene expression data with respect to outcome and tumor subclasses. We show that a filter factor representation, based upon Singular Value Decomposition, yields insight into the numerical ill-posedness of the hyperplane-based separation when applied to gene expression data. We also show that this representation yields useful diagnostic tools for guiding the selection of classifier parameters, thus leading to improved performance.
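
    A minimal sketch of the filter-factor view of a regularized hyperplane classifier: Tikhonov-regularized least squares on a few-samples, many-genes matrix damps the small singular directions through filter factors, which is what exposes the ill-posedness discussed above (a generic illustration on synthetic "expression" data, not the authors' diagnostic tools):

        import numpy as np

        rng = np.random.default_rng(6)
        X = rng.standard_normal((40, 500))        # few samples, many genes: ill-posed discrimination
        y = np.sign(rng.standard_normal(40))      # two-class labels encoded as +/-1

        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        lam = 1.0                                 # regularization level (assumed)
        f = s**2 / (s**2 + lam)                   # filter factors in [0, 1]
        w = Vt.T @ ((f / s) * (U.T @ y))          # regularized discriminant direction
        scores = X @ w                            # projections used for class assignment
        # inspecting f shows which singular directions the regularization suppresses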

  1. Regularization Paths for Conditional Logistic Regression: The clogitL1 Package.

    PubMed

    Reid, Stephen; Tibshirani, Rob

    2014-07-01

    We apply the cyclic coordinate descent algorithm of Friedman, Hastie, and Tibshirani (2010) to the fitting of a conditional logistic regression model with lasso (ℓ1) and elastic net penalties. The sequential strong rules of Tibshirani, Bien, Hastie, Friedman, Taylor, Simon, and Tibshirani (2012) are also used in the algorithm and it is shown that these offer a considerable speed up over the standard coordinate descent algorithm with warm starts. Once implemented, the algorithm is used in simulation studies to compare the variable selection and prediction performance of the conditional logistic regression model against that of its unconditional (standard) counterpart. We find that the conditional model performs admirably on datasets drawn from a suitable conditional distribution, outperforming its unconditional counterpart at variable selection. The conditional model is also fit to a small real world dataset, demonstrating how we obtain regularization paths for the parameters of the model and how we apply cross validation for this method where natural unconditional prediction rules are hard to come by.

  2. Regularization Paths for Conditional Logistic Regression: The clogitL1 Package

    PubMed Central

    Reid, Stephen; Tibshirani, Rob

    2014-01-01

    We apply the cyclic coordinate descent algorithm of Friedman, Hastie, and Tibshirani (2010) to the fitting of a conditional logistic regression model with lasso (ℓ1) and elastic net penalties. The sequential strong rules of Tibshirani, Bien, Hastie, Friedman, Taylor, Simon, and Tibshirani (2012) are also used in the algorithm and it is shown that these offer a considerable speed up over the standard coordinate descent algorithm with warm starts. Once implemented, the algorithm is used in simulation studies to compare the variable selection and prediction performance of the conditional logistic regression model against that of its unconditional (standard) counterpart. We find that the conditional model performs admirably on datasets drawn from a suitable conditional distribution, outperforming its unconditional counterpart at variable selection. The conditional model is also fit to a small real world dataset, demonstrating how we obtain regularization paths for the parameters of the model and how we apply cross validation for this method where natural unconditional prediction rules are hard to come by. PMID:26257587

  3. Gluten-Free Precooked Rice-Yellow Pea Pasta: Effect of Extrusion-Cooking Conditions on Phenolic Acids Composition, Selected Properties and Microstructure.

    PubMed

    Bouasla, Abdallah; Wójtowicz, Agnieszka; Zidoune, Mohammed Nasereddine; Olech, Marta; Nowak, Renata; Mitrus, Marcin; Oniszczuk, Anna

    2016-05-01

    Rice/yellow pea flour blend (2/1 ratio) was used to produce gluten-free precooked pasta using a single-screw modified extrusion-cooker TS-45. The effect of moisture content (28%, 30%, and 32%) and screw speed (60, 80, and 100 rpm) on some quality parameters was assessed. The phenolic acids profile and selected pasta properties were tested, such as pasting properties, water absorption capacity, cooking loss, texture characteristics, microstructure, and overall sensory acceptability. Results indicated that dough moisture content influenced all tested quality parameters of precooked pasta except firmness. Screw speed showed an effect only on some quality parameters. The extrusion-cooking process at 30% dough moisture and 80 rpm is appropriate for obtaining rice-yellow pea precooked pasta with a high content of phenolics and adequate quality. These pasta products exhibited a firm texture, low stickiness, and a regular and compact internal structure, confirmed by a high score in overall sensory acceptability. © 2016 Institute of Food Technologists®

  4. Regularized quantile regression for SNP marker estimation of pig growth curves.

    PubMed

    Barroso, L M A; Nascimento, M; Nascimento, A C C; Silva, F F; Serão, N V L; Cruz, C D; Resende, M D V; Silva, F L; Azevedo, C F; Lopes, P S; Guimarães, S E F

    2017-01-01

    Genomic growth curves are generally defined only in terms of population mean; an alternative approach that has not yet been exploited in genomic analyses of growth curves is the Quantile Regression (QR). This methodology allows for the estimation of marker effects at different levels of the variable of interest. We aimed to propose and evaluate a regularized quantile regression for SNP marker effect estimation of pig growth curves, as well as to identify the chromosome regions of the most relevant markers and to estimate the genetic individual weight trajectory over time (genomic growth curve) under different quantiles (levels). The regularized quantile regression (RQR) enabled the discovery, at different levels of interest (quantiles), of the most relevant markers allowing for the identification of QTL regions. We found the same relevant markers simultaneously affecting different growth curve parameters (mature weight and maturity rate): two (ALGA0096701 and ALGA0029483) for RQR(0.2), one (ALGA0096701) for RQR(0.5), and one (ALGA0003761) for RQR(0.8). Three average genomic growth curves were obtained and the behavior was explained by the curve in quantile 0.2, which differed from the others. RQR allowed for the construction of genomic growth curves, which is the key to identifying and selecting the most desirable animals for breeding purposes. Furthermore, the proposed model enabled us to find, at different levels of interest (quantiles), the most relevant markers for each trait (growth curve parameter estimates) and their respective chromosomal positions (identification of new QTL regions for growth curves in pigs). These markers can be exploited under the context of marker assisted selection while aiming to change the shape of pig growth curves.
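
    A minimal sketch of L1-penalized quantile (check-loss) regression of a phenotype on synthetic SNP markers at several quantile levels, using a generic scikit-learn estimator (an illustration of the idea; it is not the RQR implementation or the data of the study):

        import numpy as np
        from sklearn.linear_model import QuantileRegressor

        rng = np.random.default_rng(7)
        M = rng.integers(0, 3, size=(150, 300)).astype(float)    # SNP genotypes coded 0/1/2 (synthetic)
        beta = np.zeros(300)
        beta[[10, 120, 250]] = [0.8, -0.5, 0.6]                  # a few truly relevant markers (assumed)
        y = M @ beta + rng.standard_normal(150)                  # phenotype, e.g. weight at a given age

        for q in (0.2, 0.5, 0.8):                                # quantile levels of interest
            model = QuantileRegressor(quantile=q, alpha=0.01, solver="highs")
            model.fit(M, y)
            selected = np.flatnonzero(np.abs(model.coef_) > 1e-6)
            print(q, selected)                                   # markers retained at this level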

  5. Application of Two-Parameter Stabilizing Functions in Solving a Convolution-Type Integral Equation by Regularization Method

    NASA Astrophysics Data System (ADS)

    Maslakov, M. L.

    2018-04-01

    This paper examines the solution of convolution-type integral equations of the first kind by applying the Tikhonov regularization method with two-parameter stabilizing functions. The class of stabilizing functions is expanded in order to improve the accuracy of the resulting solution. The features of the problem formulation for identification and adaptive signal correction are described. A method for choosing regularization parameters in problems of identification and adaptive signal correction is suggested.

  6. A multiplicative regularization for force reconstruction

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2017-02-01

    Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter, that can be numerically computed using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this particular reason, it could be of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach in providing consistent reconstructions.
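
    A minimal sketch of the multiplicative-regularization idea in its simplest form: the data-fit term is multiplied by a norm-based regularizer, and the resulting fixed-point iteration re-weights a Tikhonov-like system from the current residual, so no regularization parameter is fixed beforehand. This generic scheme is an assumption for illustration and is not the authors' force-reconstruction algorithm:

        import numpy as np

        rng = np.random.default_rng(8)
        A = rng.standard_normal((100, 60)) @ np.diag(0.9 ** np.arange(60))   # transfer matrix (synthetic)
        x_true = np.zeros(60)
        x_true[[5, 30]] = [1.0, -2.0]                                        # localized "forces" (assumed)
        b = A @ x_true + 0.01 * rng.standard_normal(100)

        x = np.zeros(60)
        alpha = 1.0                                  # initial weight; adjusted during iteration, not tuned beforehand
        for _ in range(30):
            x = np.linalg.solve(A.T @ A + alpha * np.eye(60), A.T @ b)
            alpha = np.linalg.norm(A @ x - b) ** 2 / (np.linalg.norm(x) ** 2 + 1e-12)
            # the weight shrinks as the residual shrinks, relaxing the regularization automatically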

  7. Selected Characteristics, Classified & Unclassified (Regular) Students; Community Colleges, Fall 1978.

    ERIC Educational Resources Information Center

    Hawaii Univ., Honolulu. Community Coll. System.

    Fall 1978 enrollment data for Hawaii's community colleges and data on selected characteristics of students enrolled in regular credit programs are presented. Of the 27,880 registrants, 74% were regular students, 1% were early admittees, 6% were registered in non-credit apprenticeship programs, and 18% were in special programs. Regular student…

  8. Implication of adaptive smoothness constraint and Helmert variance component estimation in seismic slip inversion

    NASA Astrophysics Data System (ADS)

    Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi

    2017-10-01

    When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments was conducted in which different magnitudes of noise were imposed and different observation densities were assumed; the results indicate that the ASC is superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method is highlighted as the best for selecting the regularization parameter compared with other methods, such as generalized cross-validation or the mean squared error criterion method. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.

  9. Regularities And Irregularities Of The Stark Parameters For Single Ionized Noble Gases

    NASA Astrophysics Data System (ADS)

    Peláez, R. J.; Djurovic, S.; Cirišan, M.; Aparicio, J. A.; Mar S.

    2010-07-01

    Spectroscopy of ionized noble gases is of great importance for laboratory and astrophysical plasmas. Generally, spectra of inert gases are important for many areas of physics, for example laser physics, fusion diagnostics, photoelectron spectroscopy, collision physics, and astrophysics. Stark halfwidths as well as shifts of spectral lines are usually employed for plasma diagnostic purposes. For example, atomic data for argon, krypton, and xenon will be useful for the spectral diagnostics of ITER. In addition, software used for stellar atmosphere simulation, such as TMAP and SMART, requires a large amount of atomic and spectroscopic data. Availability of these parameters will be useful for further development of stellar atmosphere and evolution models. Stark parameter data for spectral lines can also be useful for verification of theoretical calculations and for the investigation of regularities and systematic trends of these parameters within a multiplet, supermultiplet, or transition array. In recent years, different trends and regularities of Stark parameters (halfwidths and shifts of spectral lines) have been analyzed. The conditions related to the atomic structure of the element, as well as the plasma conditions, are responsible for regular or irregular behavior of the Stark parameters. The absence of very close perturbing levels makes Ne II a good candidate for the analysis of regularities. The other two considered species, Kr II and Xe II, have complex spectra with strong perturbations, and in some cases irregularities in the Stark parameters appear. In this work we analyze the influence of these perturbations on the Stark parameters within the multiplets.

  10. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    The aim is to develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. Here it is deployed within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality and are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method thus overcomes the computationally expensive nature of the MRM-based automated approach for finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
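
    A minimal sketch of pairing a damped LSQR solve with a derivative-free simplex (Nelder-Mead) search over the regularization parameter, on synthetic data. The scalar objective below is a crude residual-times-norm surrogate chosen only for illustration; the criterion actually optimized in the paper is not reproduced here:

        import numpy as np
        from scipy.sparse.linalg import lsqr
        from scipy.optimize import minimize

        rng = np.random.default_rng(9)
        J = rng.standard_normal((120, 80)) @ np.diag(0.9 ** np.arange(80))   # Jacobian (synthetic)
        d = J @ rng.standard_normal(80) + 0.02 * rng.standard_normal(120)    # measurements (synthetic)

        def objective(log_lam):
            lam = 10.0 ** float(log_lam[0])
            x = lsqr(J, d, damp=lam)[0]            # damped least squares via Lanczos bidiagonalization
            return np.linalg.norm(J @ x - d) * np.linalg.norm(x)   # placeholder selection criterion (assumed)

        best = minimize(objective, x0=[-3.0], method="Nelder-Mead")          # simplex search over log10(lambda)
        lam_opt = 10.0 ** best.x[0]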

  11. Characterizing the functional MRI response using Tikhonov regularization.

    PubMed

    Vakorin, Vasily A; Borowsky, Ron; Sarty, Gordon E

    2007-09-20

    The problem of evaluating an averaged functional magnetic resonance imaging (fMRI) response for repeated block design experiments was considered within a semiparametric regression model with autocorrelated residuals. We applied functional data analysis (FDA) techniques that use a least-squares fitting of B-spline expansions with Tikhonov regularization. To deal with the noise autocorrelation, we proposed a regularization parameter selection method based on the idea of combining temporal smoothing with residual whitening. A criterion based on a generalized χ²-test of the residuals for white noise was compared with a generalized cross-validation scheme. We evaluated and compared the performance of the two criteria, based on their effect on the quality of the fMRI response. We found that the regularization parameter can be tuned to improve the noise autocorrelation structure, but the whitening criterion provides too much smoothing when compared with the cross-validation criterion. The ultimate goal of the proposed smoothing techniques is to facilitate the extraction of temporal features in the hemodynamic response for further analysis. In particular, these FDA methods allow us to compute derivatives and integrals of the fMRI signal so that fMRI data may be correlated with behavioral and physiological models. For example, positive and negative hemodynamic responses may be easily and robustly identified on the basis of the first derivative at an early time point in the response. Ultimately, these methods allow us to verify previously reported correlations between the hemodynamic response and the behavioral measures of accuracy and reaction time, showing the potential to recover new information from fMRI data.

  12. Polarimetric image reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Valenzuela, John R.

    In the field of imaging polarimetry Stokes parameters are sought and must be inferred from noisy and blurred intensity measurements. Using a penalized-likelihood estimation framework we investigate reconstruction quality when estimating intensity images and then transforming to Stokes parameters (traditional estimator), and when estimating Stokes parameters directly (Stokes estimator). We define our cost function for reconstruction by a weighted least squares data fit term and a regularization penalty. It is shown that under quadratic regularization, the traditional and Stokes estimators can be made equal by appropriate choice of regularization parameters. It is empirically shown that, when using edge preserving regularization, estimating the Stokes parameters directly leads to lower RMS error in reconstruction. Also, the addition of a cross channel regularization term further lowers the RMS error for both methods especially in the case of low SNR. The technique of phase diversity has been used in traditional incoherent imaging systems to jointly estimate an object and optical system aberrations. We extend the technique of phase diversity to polarimetric imaging systems. Specifically, we describe penalized-likelihood methods for jointly estimating Stokes images and optical system aberrations from measurements that contain phase diversity. Jointly estimating Stokes images and optical system aberrations involves a large parameter space. A closed-form expression for the estimate of the Stokes images in terms of the aberration parameters is derived and used in a formulation that reduces the dimensionality of the search space to the number of aberration parameters only. We compare the performance of the joint estimator under both quadratic and edge-preserving regularization. The joint estimator with edge-preserving regularization yields higher fidelity polarization estimates than with quadratic regularization. Under quadratic regularization, using the reduced-parameter search strategy, accurate aberration estimates can be obtained without recourse to regularization "tuning". Phase-diverse wavefront sensing is emerging as a viable candidate wavefront sensor for adaptive-optics systems. In a quadratically penalized weighted least squares estimation framework a closed form expression for the object being imaged in terms of the aberrations in the system is available. This expression offers a dramatic reduction of the dimensionality of the estimation problem and thus is of great interest for practical applications. We have derived an expression for an approximate joint covariance matrix for object and aberrations in the phase diversity context. Our expression for the approximate joint covariance is compared with the "known-object" Cramer-Rao lower bound that is typically used for system parameter optimization. Estimates of the optimal amount of defocus in a phase-diverse wavefront sensor derived from the joint-covariance matrix, the known-object Cramer-Rao bound, and Monte Carlo simulations are compared for an extended scene and a point object. It is found that our variance approximation, that incorporates the uncertainty of the object, leads to an improvement in predicting the optimal amount of defocus to use in a phase-diverse wavefront sensor.

  13. Approximate isotropic cloak for the Maxwell equations

    NASA Astrophysics Data System (ADS)

    Ghosh, Tuhin; Tarikere, Ashwin

    2018-05-01

    We construct a regular isotropic approximate cloak for the Maxwell system of equations. The method of transformation optics has enabled the design of electromagnetic parameters that cloak a region from external observation. However, these constructions are singular and anisotropic, making practical implementation difficult. Thus, regular approximations to these cloaks have been constructed that cloak a given region to any desired degree of accuracy. In this paper, we show how to construct isotropic approximations to these regularized cloaks using homogenization techniques so that one obtains cloaking of arbitrary accuracy with regular and isotropic parameters.

  14. Wavelength selection in the crown splash

    NASA Astrophysics Data System (ADS)

    Zhang, Li V.; Brunet, Philippe; Eggers, Jens; Deegan, Robert D.

    2010-12-01

    The impact of a drop onto a liquid layer produces a splash that results from the ejection and dissolution of one or more liquid sheets, which expand radially from the point of impact. In the crown splash parameter regime, secondary droplets appear at fairly regularly spaced intervals along the rim of the sheet. By performing many experiments for the same parameter values, we measure the spectrum of small-amplitude perturbations growing on the rim. We show that for a range of parameters in the crown splash regime, the generation of secondary droplets results from a Rayleigh-Plateau instability of the rim, whose shape is almost cylindrical. In our theoretical calculation, we include the time dependence of the base state. The remaining irregularity of the pattern is explained by the finite width of the Rayleigh-Plateau dispersion relation. Alternative mechanisms, such as the Rayleigh-Taylor instability, can be excluded for the experimental parameters of our study.

  15. Constitutive Modeling of Porcine Liver in Indentation Using 3D Ultrasound Imaging

    PubMed Central

    Jordan, P.; Socrate, S.; Zickler, T.E.; Howe, R.D.

    2009-01-01

    In this work we present an inverse finite-element modeling framework for constitutive modeling and parameter estimation of soft tissues using full-field volumetric deformation data obtained from 3D ultrasound. The finite-element model is coupled to full-field visual measurements by regularization springs attached at nodal locations. The free ends of the springs are displaced according to the locally estimated tissue motion and the normalized potential energy stored in all springs serves as a measure of model-experiment agreement for material parameter optimization. We demonstrate good accuracy of estimated parameters and consistent convergence properties on synthetically generated data. We present constitutive model selection and parameter estimation for perfused porcine liver in indentation and demonstrate that a quasilinear viscoelastic model with shear modulus relaxation offers good model-experiment agreement in terms of indenter displacement (0.19 mm RMS error) and tissue displacement field (0.97 mm RMS error). PMID:19627823

  16. Heavyweight cement concrete with high stability of strength parameters

    NASA Astrophysics Data System (ADS)

    Kudyakov, Konstantin; Nevsky, Andrey; Danke, Ilia; Kudyakov, Aleksandr; Kudyakov, Vitaly

    2016-01-01

    The present paper establishes regularities of basalt fiber distribution in workable cement concrete mixes under different preparation conditions and with selective introduction of the fibers into the mixer during mixing. The optimum content of basalt fibers was defined as 0.5% of the cement weight, which provides a uniform distribution of fibers in the concrete volume and allows compressive strength to be increased by up to 51.2% and tensile strength by up to 28.8%. Microstructural analysis identified new formations on the surface of the basalt fibers, which indicates good adhesion of the hardened cement paste to the fibers. The stability of the concrete strength parameters increased significantly with the introduction of basalt fibers into the concrete mix.

  17. A comprehensive numerical analysis of background phase correction with V-SHARP.

    PubMed

    Özbay, Pinar Senay; Deistung, Andreas; Feng, Xiang; Nanz, Daniel; Reichenbach, Jürgen Rainer; Schweser, Ferdinand

    2017-04-01

    Sophisticated harmonic artifact reduction for phase data (SHARP) is a method to remove background field contributions in MRI phase images, which is an essential processing step for quantitative susceptibility mapping (QSM). To perform SHARP, a spherical kernel radius and a regularization parameter need to be defined. In this study, we carried out an extensive analysis of the effect of these two parameters on the corrected phase images and on the reconstructed susceptibility maps. As a result of the dependence of the parameters on acquisition and processing characteristics, we propose a new SHARP scheme with generalized parameters. The new SHARP scheme uses a high-pass filtering approach to define the regularization parameter. We employed the variable-kernel SHARP (V-SHARP) approach, using different maximum radii (R_m) between 1 and 15 mm and varying regularization parameters (f) in a numerical brain model. The local root-mean-square error (RMSE) between the ground-truth, background-corrected field map and the results from SHARP decreased towards the center of the brain. RMSE of susceptibility maps calculated with a spatial domain algorithm was smallest for R_m between 6 and 10 mm and f between 0 and 0.01 mm^-1, and for maps calculated with a Fourier domain algorithm for R_m between 10 and 15 mm and f between 0 and 0.0091 mm^-1. We demonstrated and confirmed the new parameter scheme in vivo. The novel regularization scheme allows the use of the same regularization parameter irrespective of other imaging parameters, such as image resolution. Copyright © 2016 John Wiley & Sons, Ltd.

  18. Demonstration of Aerosol Property Profiling by Multi-wavelength Lidar Under Varying Relative Humidity Conditions

    NASA Technical Reports Server (NTRS)

    Whiteman, D.N.; Veselovskii, I.; Kolgotin, A.; Korenskii, M.; Andrews, E.

    2008-01-01

    The feasibility of using a multi-wavelength Mie-Raman lidar based on a tripled Nd:YAG laser for profiling aerosol physical parameters in the planetary boundary layer (PBL) under varying conditions of relative humidity (RH) is studied. The lidar quantifies three aerosol backscattering and two extinction coefficients, and from these optical data the particle parameters such as concentration, size, and complex refractive index are retrieved through inversion with regularization. The column-integrated, lidar-derived parameters are compared with results from the AERONET sun photometer. The lidar and sun photometer agree well in the characterization of the fine-mode parameters; however, the lidar shows less sensitivity to the coarse mode. The lidar results reveal a strong dependence of particle properties on RH. The height regions with enhanced RH are characterized by an increase in the backscattering and extinction coefficients and a decrease in the Angstrom exponent, coinciding with an increase in the particle size. We present data selection techniques useful for selecting cases that can support the calculation of hygroscopic growth parameters using lidar. Hygroscopic growth factors calculated using these techniques agree with expectations despite the lack of co-located radiosonde data. Despite this limitation, the results demonstrate the potential of the multi-wavelength Raman lidar technique for the study of the aerosol humidification process.

  19. SPECT reconstruction using DCT-induced tight framelet regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej

    2015-03-01

    Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for the penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce image smoothness in the reconstruction. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Reconstructed images using the proposed method exhibited better noise suppression and improved lesion conspicuity, compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post filter (GPF). Also, the mean square error (MSE) was smaller, compared with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm of the DCT-induced tight framelet regularizer shows promise for SPECT image reconstruction with the PAPA method.
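
    A minimal sketch of the kind of regularizer discussed here, assuming a plain orthonormal 2-D DCT rather than the non-decimated framelet actually used in the paper; the weight beta stands in for the regularization parameter:

      import numpy as np
      from scipy.fft import dctn

      def dct_l1_penalty(image, beta):
          # l1 norm of the 2-D DCT coefficients of the current image estimate,
          # used as a sparsity-promoting penalty in penalized-likelihood reconstruction.
          coeffs = dctn(image, norm='ortho')
          return beta * np.abs(coeffs).sum()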

  20. Enhancing Our Knowledge of Northern Cepheids through Photometric Monitoring

    NASA Astrophysics Data System (ADS)

    Turner, D. G.; Majaess, D. J.; Lane, D. J.; Szabados, L.; Kovtyukh, V. V.; Usenko, I. A.; Berdnikov, L. N.

    2009-09-01

    A selection of known and newly-discovered northern hemisphere Cepheids and related objects are being monitored regularly through CCD observations at the automated Abbey Ridge Observatory, near Halifax, and photoelectric photometry from the Saint Mary's University Burke-Gaffney Observatory. Included is Polaris, which is displaying unusual fluctuations in its growing light amplitude, and a short-period, double-mode Cepheid, HDE 344787, with an amplitude smaller than that of Polaris, along with a selection of other classical Cepheids in need of additional observations. The observations are being used to establish basic parameters for the Cepheids, for application to the Galactic calibration of the Cepheid period-luminosity relation as well as studies of Galactic structure.

  1. Sparse Poisson noisy image deblurring.

    PubMed

    Carlavan, Mikael; Blanc-Féraud, Laure

    2012-04-01

    Deblurring noisy Poisson images has recently been the subject of an increasing amount of work in many areas such as astronomy and biological imaging. In this paper, we focus on confocal microscopy, which is a very popular technique for 3-D imaging of living biological specimens that gives images with very good resolution (several hundreds of nanometers), although degraded by both blur and Poisson noise. Deconvolution methods have been proposed to reduce these degradations, and in this paper we focus on techniques that promote the introduction of an explicit prior on the solution. One difficulty of these techniques is setting the value of the parameter that weights the tradeoff between the data term and the regularizing term. Only a few works have been devoted to the automatic selection of this regularizing parameter when considering Poisson noise; therefore, it is often set manually such that it gives the best visual results. We present here two recent methods to estimate this regularizing parameter, and we first propose an improvement of these estimators that takes advantage of confocal images. Building on these estimators, we then propose to express the problem of the deconvolution of Poisson noisy images as the minimization of a new constrained problem. The proposed constrained formulation is well suited to this application domain since it is directly expressed using the antilog likelihood of the Poisson distribution and therefore does not require any approximation. We show how to solve the unconstrained and constrained problems using the recent alternating-direction technique, and we present results on synthetic and real data using well-known priors, such as total variation and wavelet transforms. Among these wavelet transforms, we especially focus on the dual-tree complex wavelet transform and on the dictionary composed of curvelets and an undecimated wavelet transform.
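
    For orientation, the Poisson data-fidelity term referred to above can be sketched as follows (generic notation, not the authors' code; A stands for the blur operator and y for the observed counts):

      import numpy as np

      def poisson_neg_log_likelihood(x, A, y, eps=1e-12):
          # Negative log-likelihood of Poisson data, up to constants:
          # sum_i [ (A x)_i - y_i * log((A x)_i) ].
          ax = A @ x
          return float(np.sum(ax - y * np.log(ax + eps)))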

  2. Anxiety associated with parachute jumping as the cause of blood red-ox balance impairment.

    PubMed

    Kowalczyk, Mateusz; Kozak, Katarzyna; Ciećwierz, Julita; Sienkiewicz, Monika; Kura, Marcin; Jasiak, Łukasz; Kowalczyk, Edward

    2016-12-23

    The aim of the study was to assess the effect of anxiety associated with parachute jumps on selected redox balance parameters in regular soldiers from airborne forces. The study allows estimation of whether paratroopers exposed to a high level of mental stress are simultaneously under severe oxidative stress. The investigations were carried out on 46 professional soldiers from airborne forces, divided into groups depending on the number of parachute jumps performed. Peripheral venous blood samples were obtained under fasting conditions three times for the determination of selected parameters of red-ox balance: on an ordinary working day, on the day when the jump was performed, and on the day after the jump. The timing of the determinations was chosen to reflect the initial balance of the organism, the state at the moment of stress, and its effect on the organism. Our investigations showed no differences in the activity of the antioxidant enzymes CAT and SOD in response to mental stress depending on the parachuting experience of the investigated groups. A decrease in GSH-Px activity was demonstrated in response to mental stress in all the investigated groups. The TBARS level was higher in more experienced parachutists. The analysis of changes in selected redox balance parameters may be useful for monitoring anxiety associated with parachute jumps.

  3. A combined reconstruction-classification method for diffuse optical tomography.

    PubMed

    Hiltunen, P; Prince, S J D; Arridge, S

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.

  4. Influence of parameter settings in voxel-based morphometry 8 using DARTEL and region-of-interest on reproducibility in gray matter volumetry.

    PubMed

    Goto, M; Abe, O; Aoki, S; Hayashi, N; Miyati, T; Takao, H; Matsuda, H; Yamashita, F; Iwatsubo, T; Mori, H; Kunimatsu, A; Ino, K; Yano, K; Ohtomo, K

    2015-01-01

    To investigate whether reproducibility of gray matter volumetry is influenced by parameter settings for VBM 8 using Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra (DARTEL) with region-of-interest (ROI) analyses. We prepared three-dimensional T1-weighted magnetic resonance images (3D-T1WIs) of 21 healthy subjects. All subjects were imaged with each of five MRI systems. Voxel-based morphometry 8 (VBM 8) and WFU PickAtlas software were used for gray matter volumetry. The bilateral ROI labels used were those provided as default settings with the software: Frontal Lobe, Hippocampus, Occipital Lobe, Orbital Gyrus, Parietal Lobe, Putamen, and Temporal Lobe. All 3D-T1WIs were segmented to gray matter with six parameters of VBM 8, with each parameter having between three and eight selectable levels. Reproducibility was evaluated as the standard deviation (mm³) of measured values for the five MRI systems. Reproducibility was influenced by 'Bias regularization (BiasR)', 'Bias FWHM', and 'De-noising filter' settings, but not by 'MRF weighting', 'Sampling distance', or 'Warping regularization' settings. Reproducibility in BiasR was influenced by ROI. Superior reproducibility was observed in Frontal Lobe with the BiasR1 setting, and in Hippocampus, Parietal Lobe, and Putamen with the BiasR3*, BiasR1, and BiasR5 settings, respectively. Reproducibility of gray matter volumetry was influenced by parameter settings in VBM 8 using DARTEL and ROI. In multi-center studies, the use of appropriate settings in VBM 8 with DARTEL results in reduced scanner effect.

  5. Application of the L-curve in geophysical inverse problems: methodologies for the extraction of the optimal parameter

    NASA Astrophysics Data System (ADS)

    Bassrei, A.; Terra, F. A.; Santos, E. T.

    2007-12-01

    Inverse problems in applied geophysics are usually ill-posed. One way to reduce this deficiency is through derivative matrices, which are a particular case of a more general family of techniques known as regularization. Regularization by derivative matrices has an input parameter called the regularization parameter, whose choice is itself a problem. A heuristic approach, later called the L-curve, was suggested in the 1970s with the purpose of providing the optimum regularization parameter. The L-curve is a parametric curve, where each point is associated with a value of the parameter λ. The horizontal axis represents the error between the observed and calculated data, and the vertical axis represents the norm of the product between the regularization matrix and the estimated model. The ideal point is the knee of the L-curve, where there is a balance between the quantities represented on the Cartesian axes. The L-curve has been applied to a variety of inverse problems, including in geophysics. However, visualization of the knee is not always an easy task, especially when the L-curve does not have the L shape. In this work three methodologies are employed for the search and extraction of the optimal regularization parameter from the L-curve. The first criterion is the use of Hansen's toolbox, which extracts λ automatically. The second criterion consists of extracting the optimal parameter visually. The third criterion consists of constructing the first derivative of the L-curve and then automatically extracting the inflection point. The L-curve with the three above criteria was applied and validated in traveltime tomography and 2-D gravity inversion. After many simulations with synthetic data, both noise-free and corrupted with noise, with regularization orders 0, 1, and 2, we verified that the three criteria are valid and provide satisfactory results. The third criterion presented the best performance, especially in cases where the L-curve has an irregular shape.
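
    The corner-finding idea can be illustrated with a short numerical sketch (one of several possible criteria, not necessarily the toolbox implementation used in the paper): given residual norms and solution seminorms computed for a grid of regularization parameters, pick the point of maximum curvature of the log-log L-curve.

      import numpy as np

      def lcurve_corner(residual_norms, solution_norms):
          # Curvature of the parametric curve (log residual norm, log solution norm);
          # the "knee" is taken as the point of maximum curvature.
          x = np.log(np.asarray(residual_norms, dtype=float))
          y = np.log(np.asarray(solution_norms, dtype=float))
          dx, dy = np.gradient(x), np.gradient(y)
          ddx, ddy = np.gradient(dx), np.gradient(dy)
          curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
          return int(np.argmax(curvature))   # index of the chosen lambda on the grid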

  6. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography.

    PubMed

    Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2013-08-01

    A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares-QR decomposition which is a well-known dimensionality reduction technique for a large system of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of initial pressure distribution enabled via finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
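
    The connection between least squares QR (LSQR) and Tikhonov regularization can be sketched as follows (a generic damped least-squares solve, not the paper's projected parameter search): SciPy's lsqr accepts a damping parameter that plays the role of the Tikhonov regularization parameter.

      import numpy as np
      from scipy.sparse.linalg import lsqr

      def damped_lsqr_solution(A, b, lam, iters=100):
          # lsqr with damp=lam solves min ||A x - b||^2 + lam^2 ||x||^2
          # without forming A^T A, which keeps the cost low for large systems.
          result = lsqr(A, b, damp=lam, iter_lim=iters)
          return result[0]

      # Candidate regularization parameters could then be compared by evaluating,
      # e.g., the residual and solution norms of damped_lsqr_solution(A, b, lam).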

  7. RZA-NLMF algorithm-based adaptive sparse sensing for realizing compressive sensing

    NASA Astrophysics Data System (ADS)

    Gui, Guan; Xu, Li; Adachi, Fumiyuki

    2014-12-01

    Nonlinear sparse sensing (NSS) techniques have been adopted for realizing compressive sensing in many applications such as radar imaging. Unlike the NSS, in this paper we propose an adaptive sparse sensing (ASS) approach using the reweighted zero-attracting normalized least mean fourth (RZA-NLMF) algorithm, which depends on several given parameters, i.e., the reweighting factor, the regularization parameter, and the initial step size. First, based on the independence assumption, the Cramer-Rao lower bound (CRLB) is derived for performance comparison. In addition, a reweighting factor selection method is proposed for achieving robust estimation performance. Finally, to verify the algorithm, Monte Carlo-based computer simulations are given to show that the ASS achieves much better mean square error (MSE) performance than the NSS.

  8. Effect of Low-Dose MDCT and Iterative Reconstruction on Trabecular Bone Microstructure Assessment.

    PubMed

    Kopp, Felix K; Holzapfel, Konstantin; Baum, Thomas; Nasirudin, Radin A; Mei, Kai; Garcia, Eduardo G; Burgkart, Rainer; Rummeny, Ernst J; Kirschke, Jan S; Noël, Peter B

    2016-01-01

    We investigated the effects of low-dose multidetector computed tomography (MDCT) in combination with statistical iterative reconstruction algorithms on trabecular bone microstructure parameters. Twelve donated vertebrae were scanned with the routine radiation exposure used in our department (standard dose) and a low-dose protocol. Reconstructions were performed with filtered backprojection (FBP) and maximum-likelihood-based statistical iterative reconstruction (SIR). Trabecular bone microstructure parameters were assessed and statistically compared for each reconstruction. Moreover, fracture loads of the vertebrae were biomechanically determined and correlated to the assessed microstructure parameters. Trabecular bone microstructure parameters based on low-dose MDCT and SIR significantly correlated with vertebral bone strength. There was no significant difference between microstructure parameters calculated on low-dose SIR and standard-dose FBP images. However, the results revealed a strong dependency on the regularization strength applied during SIR. It was observed that stronger regularization might corrupt the microstructure analysis, because the trabecular structure is a very small detail that might get lost during the regularization process. As a consequence, the introduction of SIR for trabecular bone microstructure analysis requires a specific optimization of the regularization parameters. Moreover, in comparison to other approaches, superior noise-resolution trade-offs can be found with the proposed methods.

  9. [Physical activity by pregnant women and its influence on maternal and foetal parameters; a systematic review].

    PubMed

    Aguilar Cordero, M J; Sánchez López, A M; Rodríguez Blanque, R; Noack Segovia, J P; Pozo Cano, M D; López-Contreras, G; Mur Villar, N

    2014-10-01

    Regular physical activity is known to be very beneficial to health. While it is important at all stages of life, during pregnancy doubts may arise about the suitability of physical exercise, as well as the type of activity, its frequency, intensity and duration. To analyse major studies on the influence of physical activity on maternal and foetal parameters. Systematic review of physical activity programmes for pregnant women and the results achieved, during pregnancy, childbirth and postpartum. 45 items were identified through an automated database search in PubMed, Scopus and Google Scholar, carried out from October 2013 to March 2014. In selecting the items, the criteria applied included the usefulness and relevance of the subject matter and the credibility or experience of the research study authors. The internal and external validity of each of the articles reviewed was taken into account. The results of the review highlight the importance of physical activity during pregnancy, and show that the information currently available can serve as an initial benchmark for further investigation into the impact of regular physical exercise, in an aquatic environment, on maternal-foetal health. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  10. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.

  11. FIBER OPTICS. ACOUSTOOPTICS: Compression of random pulses in fiber waveguides

    NASA Astrophysics Data System (ADS)

    Aleshkevich, Viktor A.; Kozhoridze, G. D.

    1990-07-01

    An investigation is made of the compression of randomly modulated signal + noise pulses during their propagation in a fiber waveguide. An allowance is made for a cubic nonlinearity and quadratic dispersion. The relationships governing the kinetics of transformation of the time envelope, and those which determine the duration and intensity of a random pulse are derived. The expressions for the optimal length of a fiber waveguide and for the maximum degree of compression are compared with the available data for regular pulses and the recommendations on selection of the optimal parameters are given.

  12. Regular Decompositions for H(div) Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolev, Tzanio; Vassilevski, Panayot

    We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof.

  13. Neutron Tomography of a Fuel Cell: Statistical Learning Implementation of a Penalized Likelihood Method

    NASA Astrophysics Data System (ADS)

    Coakley, Kevin J.; Vecchia, Dominic F.; Hussey, Daniel S.; Jacobson, David L.

    2013-10-01

    At the NIST Neutron Imaging Facility, we collect neutron projection data for both the dry and wet states of a Proton-Exchange-Membrane (PEM) fuel cell. Transmitted thermal neutrons captured in a scintillator doped with lithium-6 produce scintillation light that is detected by an amorphous silicon detector. Based on joint analysis of the dry and wet state projection data, we reconstruct a residual neutron attenuation image with a Penalized Likelihood method with an edge-preserving Huber penalty function that has two parameters that control how well jumps in the reconstruction are preserved and how well noisy fluctuations are smoothed out. The choice of these parameters greatly influences the resulting reconstruction. We present a data-driven method that objectively selects these parameters, and study its performance for both simulated and experimental data. Before reconstruction, we transform the projection data so that the variance-to-mean ratio is approximately one. For both simulated and measured projection data, the Penalized Likelihood method reconstruction is visually sharper than a reconstruction yielded by a standard Filtered Back Projection method. In an idealized simulation experiment, we demonstrate that the cross validation procedure selects regularization parameters that yield a reconstruction that is nearly optimal according to a root-mean-square prediction error criterion.
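
    For reference, one common form of the edge-preserving Huber penalty mentioned above (a generic form; the paper's exact parameterization of its two tuning parameters may differ):

      import numpy as np

      def huber_penalty(t, delta):
          # Quadratic for small arguments, linear for large ones, so noisy
          # fluctuations are smoothed while jumps (edges) are penalized less
          # severely than with a purely quadratic penalty.
          a = np.abs(t)
          return np.where(a <= delta, 0.5 * t**2, delta * (a - 0.5 * delta))

      # In a reconstruction, t would be the differences between neighboring pixels,
      # and the summed penalty is scaled by a second (strength) parameter.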

  14. A derivation of the Cramer-Rao lower bound of euclidean parameters under equality constraints via score function

    NASA Astrophysics Data System (ADS)

    Susyanto, Nanang

    2017-12-01

    We propose a simple derivation of the Cramer-Rao Lower Bound (CRLB) of parameters under equality constraints from the CRLB without constraints in regular parametric models. When a regular parametric model and an equality constraint of the parameter are given, a parametric submodel can be defined by restricting the parameter under that constraint. The tangent space of this submodel is then computed with the help of the implicit function theorem. Finally, the score function of the restricted parameter is obtained by projecting the efficient influence function of the unrestricted parameter on the appropriate inner product spaces.
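
    For orientation, a commonly quoted form of the constrained bound (the paper's derivation via the score function and the implicit function theorem may state it differently) is the following: if the constraint is $g(\theta) = 0$ with full-row-rank Jacobian $G(\theta)$, and $U$ is a matrix whose columns form an orthonormal basis of the null space of $G$, then

      \[
        \operatorname{Cov}(\hat{\theta}) \;\succeq\; U \bigl( U^{\top} I(\theta)\, U \bigr)^{-1} U^{\top},
      \]

    where $I(\theta)$ denotes the Fisher information matrix of the unconstrained (regular) parametric model.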

  15. Assembly of the most topologically regular two-dimensional micro and nanocrystals with spherical, conical, and tubular shapes

    NASA Astrophysics Data System (ADS)

    Roshal, D. S.; Konevtsova, O. V.; Myasnikova, A. E.; Rochal, S. B.

    2016-11-01

    We consider how to control the extension of curvature-induced defects in the hexagonal order covering different curved surfaces. Within this framework we propose a physical mechanism for improving the structures of two-dimensional spherical colloidal crystals (SCCs). For any SCC comprising about 300 or fewer particles, the mechanism transforms all extended topological defects (ETDs) in the hexagonal order into point disclinations. Perfecting the structure is carried out by successive cycles of particle implantation and subsequent relaxation of the crystal. The mechanism is potentially suitable for obtaining colloidosomes with better selective permeability. Our approach enables modeling of the most topologically regular tubular and conical two-dimensional nanocrystals, including various possible polymorphic forms of the HIV viral capsid. Different HIV-like shells with an arbitrary number of structural units (SUs) and desired geometrical parameters are easily formed. Faceting of the obtained structures is performed by minimizing the suggested elastic energy.

  16. Automatic Aircraft Collision Avoidance System and Method

    NASA Technical Reports Server (NTRS)

    Skoog, Mark (Inventor); Hook, Loyd (Inventor); McWherter, Shaun (Inventor); Willhite, Jaimie (Inventor)

    2014-01-01

    The invention is a system and method of compressing a DTM to be used in an Auto-GCAS system using a semi-regular geometric compression algorithm. In general, the invention operates by first selecting the boundaries of the three dimensional map to be compressed and dividing the three dimensional map data into regular areas. Next, a type of free-edged, flat geometric surface is selected which will be used to approximate terrain data of the three dimensional map data. The flat geometric surface is used to approximate terrain data for each regular area. The approximations are checked to determine if they fall within selected tolerances. If the approximation for a specific regular area is within specified tolerance, the data is saved for that specific regular area. If the approximation for a specific area falls outside the specified tolerances, the regular area is divided and a flat geometric surface approximation is made for each of the divided areas. This process is recursively repeated until all of the regular areas are approximated by flat geometric surfaces. Finally, the compressed three dimensional map data is provided to the automatic ground collision system for an aircraft.
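
    The recursive subdivision idea can be sketched as follows (an illustrative quadtree-style approximation with least-squares planes; the function names, plane model, and tolerance handling are hypothetical and not the patented Auto-GCAS implementation):

      import numpy as np

      def fit_plane(z):
          # Least-squares plane z ~ a*x + b*y + c over a rectangular grid patch.
          ny, nx = z.shape
          xs, ys = np.meshgrid(np.arange(nx), np.arange(ny))
          A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(z.size)])
          coeffs, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
          return coeffs, (A @ coeffs).reshape(z.shape)

      def compress(z, tol, min_size=2):
          # Approximate the patch by a flat surface; if the error exceeds the
          # tolerance, split the patch into four and recurse on each piece.
          coeffs, approx = fit_plane(z)
          if np.max(np.abs(approx - z)) <= tol or min(z.shape) <= min_size:
              return [coeffs]
          hr, hc = z.shape[0] // 2, z.shape[1] // 2
          quads = (z[:hr, :hc], z[:hr, hc:], z[hr:, :hc], z[hr:, hc:])
          return [c for quad in quads for c in compress(quad, tol, min_size)]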

  17. Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion-the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation-is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite-a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.

  18. The predictive consequences of parameterization

    NASA Astrophysics Data System (ADS)

    White, J.; Hughes, J. D.; Doherty, J. E.

    2013-12-01

    In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error is comprised of two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loeve transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.

  19. Hematological and Biochemical Parameters in Elite Soccer Players During A Competitive Half Season

    PubMed Central

    Anđelković, Marija; Baralić, Ivana; Đorđević, Brižita; Stevuljević, Jelena Kotur; Radivojević, Nenad; Dikić, Nenad; Škodrić, Sanja Radojević; Stojković, Mirjana

    2015-01-01

    Summary Background The purpose of the present study was to report and discuss the hematological and biochemical behavior of elite soccer players, in order to gain more insight into the physiological characteristics of these sportsmen and to provide trainers and sports doctors with useful indicators. Methods Nineteen male soccer players volunteered to participate in this study. We followed the young elite soccer players during a competitive half season. Venous blood samples were collected between 9:00 and 10:00 a.m. after an overnight fast (10 h) at baseline and after 45 and 90 days, and hematological and biochemical parameters were measured. Results Hemoglobin and hematocrit levels were significantly reduced over the observational period (p<0.05), but erythrocyte count and iron levels remained unchanged. Bilirubin and ferritin levels significantly increased in response to regular soccer training (p<0.05). We observed a significant decrease in muscle enzyme plasma activity during the 90-day study period. ANOVA analysis revealed a significant increase in the leukocyte and neutrophil counts (p<0.05), in parallel with a significant decrease in the lymphocyte count (p<0.05), after the observational period of 90 days. Conclusions Elite soccer players are characterized by significant changes in biochemical and hematological parameters over the half season, which are linked to the training workload as well as to adaptation induced by the soccer training. Although the values of the measured parameters fell within the reference range, regular monitoring of the biochemical and hematological parameters is fundamental for sports doctors and trainers to identify a healthy status and related optimal performance, and to select a correct workload. PMID:28356856

  20. Systematic search for wide periodic windows and bounds for the set of regular parameters for the quadratic map.

    PubMed

    Galias, Zbigniew

    2017-05-01

    An efficient method to find positions of periodic windows for the quadratic map f(x)=ax(1-x) and a heuristic algorithm to locate the majority of wide periodic windows are proposed. Accurate rigorous bounds of positions of all periodic windows with periods below 37 and the majority of wide periodic windows with longer periods are found. Based on these results, we prove that the measure of the set of regular parameters in the interval [3,4] is above 0.613960137. The properties of periodic windows are studied numerically. The results of the analysis are used to estimate that the true value of the measure of the set of regular parameters is close to 0.6139603.
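
    The notion of "regular parameters" can be illustrated with a crude Monte Carlo sketch (nothing like the paper's rigorous interval-arithmetic bounds): sample parameters a in [3, 4], estimate the Lyapunov exponent of f(x) = a x (1 - x), and flag a as regular when the exponent is negative.

      import numpy as np

      def lyapunov_exponent(a, n_transient=1000, n_iter=5000, x0=0.5):
          # Average of log|f'(x)| along the orbit; negative values indicate
          # an attracting periodic orbit (a "regular" parameter).
          x = x0
          for _ in range(n_transient):
              x = a * x * (1.0 - x)
          total = 0.0
          for _ in range(n_iter):
              x = a * x * (1.0 - x)
              total += np.log(abs(a * (1.0 - 2.0 * x)) + 1e-300)
          return total / n_iter

      a_grid = np.linspace(3.0, 4.0, 2000)
      fraction = np.mean([lyapunov_exponent(a) < 0 for a in a_grid])
      print(f"crude estimate of the measure of regular parameters: {fraction:.3f}")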

  1. Effects of dance therapy on the selected hematological and rheological indicators in older women.

    PubMed

    Filar-Mierzwa, Katarzyna; Marchewka, Anna; Bac, Aneta; Kulis, Aleksandra; Dąbrowski, Zbigniew; Teległów, Aneta

    2017-01-01

    The aim of this study was to analyze the effects of dance therapy on selected hematological and rheological indicators in older women. The study included 30 women (aged 71.8±7.4), and the control group comprised 10 women of corresponding age. Women from the experimental group were subjected to a five-month dance therapy program (three 45-minute sessions per week); women from the control group were not involved in any regular physical activity. Blood samples from all the women were examined for hematological, rheological, and biochemical parameters prior to the study and five months thereafter. The dance therapy program resulted in a significant improvement in erythrocyte count and hematocrit. Furthermore, the dance therapy resulted in a significant increase in plasma viscosity, while no significant changes in glucose and fibrinogen levels were noted. Dance therapy modulates selected hematological parameters of older women; it leads to an increase in erythrocyte count and hematocrit level and to higher plasma viscosity. Concentrations of fibrinogen and glucose are not affected by the dance therapy in older women, suggesting maintenance of homeostasis. These findings advocate implementation of dance therapy programs in older women.

  2. Generating Models of Infinite-State Communication Protocols Using Regular Inference with Abstraction

    NASA Astrophysics Data System (ADS)

    Aarts, Fides; Jonsson, Bengt; Uijen, Johan

    In order to facilitate model-based verification and validation, effort is underway to develop techniques for generating models of communication system components from observations of their external behavior. Most previous such work has employed regular inference techniques which generate modest-size finite-state models. They typically suppress parameters of messages, although these have a significant impact on control flow in many communication protocols. We present a framework, which adapts regular inference to include data parameters in messages and states for generating components with large or infinite message alphabets. A main idea is to adapt the framework of predicate abstraction, successfully used in formal verification. Since we are in a black-box setting, the abstraction must be supplied externally, using information about how the component manages data parameters. We have implemented our techniques by connecting the LearnLib tool for regular inference with the protocol simulator ns-2, and generated a model of the SIP component as implemented in ns-2.

  3. ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION

    PubMed Central

    Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey

    2013-01-01

    MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which utilizes smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not suitable for model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053

  4. Accretion onto some well-known regular black holes

    NASA Astrophysics Data System (ADS)

    Jawad, Abdul; Shahzad, M. Umair

    2016-03-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sftesos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes.

  5. Investigation on the Effects of 12 Days Intensive Competition on Some Blood Parameters of Basketball Players

    ERIC Educational Resources Information Center

    Gencer, Yildirim Gokhan; Coskun, Funda; Sarikaya, Mucahit; Kaplan, Seyhmus

    2018-01-01

    The aim of this study is to investigate the effect of intensive basketball competitions (10 official basketball games in 12 days intensive competition period) on blood parameters of basketball players. Blood samples were taken from the basketball players of the university team. The players were training regularly and they had no regular health…

  6. Determination of the turbulence integral model parameters for a case of a coolant angular flow in regular rod-bundle

    NASA Astrophysics Data System (ADS)

    Bayaskhalanov, M. V.; Vlasov, M. N.; Korsun, A. S.; Merinov, I. G.; Philippov, M. Ph

    2017-11-01

    Research results on the dependence of the “k-ε” turbulence integral model (TIM) parameters on the angle of coolant flow in a regular bundle of smooth cylindrical rods are presented. The TIM is intended for the definition of effective momentum and heat transport coefficients in the averaged equations of heat and mass transfer in regular rod structures in an anisotropic porous media approximation. The TIM equations are obtained by volume-averaging the “k-ε” turbulence model equations over a periodic cell of the rod bundle. The water flow across the rod bundle at angles from 15 to 75 degrees was simulated by means of the ANSYS CFX code. As a result, the dependence of the TIM parameters on the flow angle was obtained.

  7. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method, using Lanczos bidiagonalization which is a computationally inexpensive approximation to L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem on a problem of the size of about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of its degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time-series of the candidate regularized solutions (Mar 2003-Feb 2010) show markedly reduced error stripes compared with the unconstrained GRACE release 4 solutions (RL04) from the Center for Space Research (CSR). Post-fit residual analysis shows that the regularized solutions fit the data to within the noise level of GRACE. A time series of filtered hydrological model is used to confirm that signal attenuation for basins in the Total Runoff Integrating Pathways (TRIP) database over 320 km radii is less than 1 cm equivalent water height RMS, which is within the noise level of GRACE.

  8. Iterative image reconstruction that includes a total variation regularization for radial MRI.

    PubMed

    Kojima, Shinya; Shinohara, Hiroyuki; Hashimoto, Takeyuki; Hirata, Masami; Ueno, Eiko

    2015-07-01

    This paper presents an iterative image reconstruction method for radial encodings in MRI based on a total variation (TV) regularization. The algebraic reconstruction method combined with total variation regularization (ART_TV) is implemented with a regularization parameter specifying the weight of the TV term in the optimization process. We used numerical simulations of a Shepp-Logan phantom, as well as experimental imaging of a phantom that included a rectangular-wave chart, to evaluate the performance of ART_TV, and to compare it with that of the Fourier transform (FT) method. The trade-off between spatial resolution and signal-to-noise ratio (SNR) was investigated for different values of the regularization parameter by experiments on a phantom and a commercially available MRI system. ART_TV was inferior to the FT with respect to the evaluation of the modulation transfer function (MTF), especially at high frequencies; however, it outperformed the FT with regard to the SNR. In accordance with the results of SNR measurement, visual impression suggested that the image quality of ART_TV was better than that of the FT for reconstruction of a noisy image of a kiwi fruit. In conclusion, ART_TV provides radial MRI with improved image quality for low-SNR data; however, the regularization parameter in ART_TV is a critical factor for obtaining improvement over the FT.
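
    As an illustration of the general ART-plus-TV idea (a schematic real-valued interleaving of a Kaczmarz sweep with an off-the-shelf TV denoising step; the paper's ART_TV formulation, weighting, and parameters may differ):

      import numpy as np
      from skimage.restoration import denoise_tv_chambolle

      def art_tv(A, b, shape, n_outer=20, relax=0.1, tv_weight=0.05):
          # A: dense system matrix (rows correspond to measured samples), b: data.
          x = np.zeros(A.shape[1])
          row_norms = np.einsum('ij,ij->i', A, A) + 1e-12
          for _ in range(n_outer):
              for i in range(A.shape[0]):            # ART (Kaczmarz) sweep
                  r = b[i] - A[i] @ x
                  x = x + relax * (r / row_norms[i]) * A[i]
              img = denoise_tv_chambolle(x.reshape(shape), weight=tv_weight)
              x = img.ravel()                        # TV regularization step
          return x.reshape(shape)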

  9. 3D near-to-surface conductivity reconstruction by inversion of VETEM data using the distorted Born iterative method

    USGS Publications Warehouse

    Wang, G.L.; Chew, W.C.; Cui, T.J.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.

    2004-01-01

    Three-dimensional (3D) subsurface imaging by inversion of data obtained from the very early time electromagnetic system (VETEM) was discussed. The study was carried out using the distorted Born iterative method to match the internal nonlinear property of the 3D inversion problem. The forward solver was based on the total-current formulation bi-conjugate gradient-fast Fourier transform (BCCG-FFT) method. It was found that the selection of the regularization parameter follows a heuristic rule, as used in the Levenberg-Marquardt algorithm, so that the iteration is stable.

  10. [The use of controlled physical training in patients with acute coronary syndrome treated with intervention - assessment of effects on biochemical and functional myocardial parameters].

    PubMed

    Kapusta, Joanna; Kapusta, Anna; Pawlicki, Lucjan; Irzmański, Robert

    2016-06-01

    Diseases of the cardiovascular system are one of the most common causes of death among people over 65 years of age. Due to their course and incidence, they are a major cause of disability and impaired quality of life for seniors, as well as a serious economic problem in health care. An important role in the prevention of cardiovascular disease is played by systematic physical activity, which is a component of any rehabilitation program. Regular physical training, through its cardio- and vasoprotective action, has a beneficial effect on cardiovascular status and physical performance in patients with diagnosed coronary heart disease, regardless of age. The aim of this study was to evaluate the effect of controlled physical training on selected biochemical and functional myocardial parameters. A group of 89 patients was divided into 3 subgroups. In group I (n = 30) a 2-week cardiac rehabilitation program was performed, in group II (n = 30) a 4-week program. The rehabilitation program consisted of a series of interval training sessions performed on a bicycle ergometer and general exercises. The remaining group (group III, n = 29) participated in an individually selected training program. In all subjects, before and after the training cycle, thoracic impedance plethysmography was performed, the level of the plasma natriuretic peptide NT-proBNP was determined, and echocardiography and an exercise test were carried out. After training, in the groups that carried out controlled physical training, improvement was observed in the exercise capacity of patients, in group I (p = 0.0003), group II (p = 0.0001), and group III (p = 0.032), as well as in stroke volume (SV), cardiac output (CO), and global myocardial contractility; there was also a reduction in the concentration of the natriuretic peptide NT-proBNP. Furthermore, correlations were shown between the plethysmography parameters and NT-proBNP, SV, CO, and EF. Regular physical training as part of cardiac rehabilitation has a beneficial effect on biochemical and functional myocardial parameters in patients with ACS. The size of the observed changes was conditioned by the nature and duration of the training. © 2016 MEDPRESS.

  11. Z-Index Parameterization for Volumetric CT Image Reconstruction via 3-D Dictionary Learning.

    PubMed

    Bai, Ti; Yan, Hao; Jia, Xun; Jiang, Steve; Wang, Ge; Mou, Xuanqin

    2017-12-01

    Despite the rapid developments of X-ray cone-beam CT (CBCT), image noise still remains a major issue for the low dose CBCT. To suppress the noise effectively while retain the structures well for low dose CBCT image, in this paper, a sparse constraint based on the 3-D dictionary is incorporated into a regularized iterative reconstruction framework, defining the 3-D dictionary learning (3-DDL) method. In addition, by analyzing the sparsity level curve associated with different regularization parameters, a new adaptive parameter selection strategy is proposed to facilitate our 3-DDL method. To justify the proposed method, we first analyze the distributions of the representation coefficients associated with the 3-D dictionary and the conventional 2-D dictionary to compare their efficiencies in representing volumetric images. Then, multiple real data experiments are conducted for performance validation. Based on these results, we found: 1) the 3-D dictionary-based sparse coefficients have three orders narrower Laplacian distribution compared with the 2-D dictionary, suggesting the higher representation efficiencies of the 3-D dictionary; 2) the sparsity level curve demonstrates a clear Z-shape, and hence referred to as Z-curve, in this paper; 3) the parameter associated with the maximum curvature point of the Z-curve suggests a nice parameter choice, which could be adaptively located with the proposed Z-index parameterization (ZIP) method; 4) the proposed 3-DDL algorithm equipped with the ZIP method could deliver reconstructions with the lowest root mean squared errors and the highest structural similarity index compared with the competing methods; 5) similar noise performance as the regular dose FDK reconstruction regarding the standard deviation metric could be achieved with the proposed method using (1/2)/(1/4)/(1/8) dose level projections. The contrast-noise ratio is improved by ~2.5/3.5 times with respect to two different cases under the (1/8) dose level compared with the low dose FDK reconstruction. The proposed method is expected to reduce the radiation dose by a factor of 8 for CBCT, considering the voted strongly discriminated low contrast tissues.

  12. Unified Bayesian Estimator of EEG Reference at Infinity: rREST (Regularized Reference Electrode Standardization Technique).

    PubMed

    Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A

    2018-01-01

    The choice of reference for the electroencephalogram (EEG) is a long-lasting unsolved issue resulting in inconsistent usages and endless debates. Currently, both the average reference (AR) and the reference electrode standardization technique (REST) are two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity, and (b) determination of the reference, as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are very particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes the prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop the regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter that is the noise-to-signal variance ratio. Traditional and new estimators are evaluated with this framework, by both simulations and analysis of real resting EEGs. Toward this end, we leverage the MRI and EEG data from 89 subjects who participated in the Cuban Human Brain Mapping Project. Generated artificial EEGs, with a known ground truth, show that the relative error in estimating the EEG potentials at infinity is lowest for rREST. It also reveals that realistic volume conductor models improve the performance of REST and rREST. Importantly, for practical applications, it is shown that an average lead field gives results comparable to the individual lead field. Finally, it is shown that the selection of the regularization parameter with Generalized Cross-Validation (GCV) is close to the "oracle" choice based on the ground truth. When evaluated with the real 89 resting-state EEGs, rREST consistently yields the lowest GCV. This study provides a novel perspective on the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance.
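
    The GCV criterion used above for selecting the regularization parameter can be sketched for a generic Tikhonov/ridge problem (illustrative only, not the rREST implementation):

      import numpy as np

      def gcv_score(A, y, lam):
          # GCV(lam) = n * ||(I - H) y||^2 / trace(I - H)^2,
          # where H = A (A^T A + lam I)^{-1} A^T is the "hat" matrix.
          n, p = A.shape
          H = A @ np.linalg.solve(A.T @ A + lam * np.eye(p), A.T)
          resid = y - H @ y
          return n * float(resid @ resid) / (n - np.trace(H)) ** 2

      # The regularization parameter is then chosen to minimize the score, e.g.:
      # lam_best = min(lam_grid, key=lambda lam: gcv_score(A, y, lam))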

  13. A new approach to blind deconvolution of astronomical images

    NASA Astrophysics Data System (ADS)

    Vorontsov, S. V.; Jefferies, S. M.

    2017-05-01

    We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem, when both the object and the point-spread function (PSF) have finite support. Our approach consists in addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The number of descents in the object- and in the PSF-space play a role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which brings an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach brings much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, where the iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, due apparently to inappropriate regularization properties.

  14. Improved liver R2* mapping by pixel-wise curve fitting with adaptive neighborhood regularization.

    PubMed

    Wang, Changqing; Zhang, Xinyuan; Liu, Xiaoyun; He, Taigang; Chen, Wufan; Feng, Qianjin; Feng, Yanqiu

    2018-08-01

    To improve liver R2* mapping by incorporating adaptive neighborhood regularization into pixel-wise curve fitting. Magnetic resonance imaging R2* mapping remains challenging because the serial images have a low signal-to-noise ratio. In this study, we proposed to exploit the neighboring pixels as regularization terms and adaptively determine the regularization parameters according to the interpixel signal similarity. The proposed algorithm, called pixel-wise curve fitting with adaptive neighborhood regularization (PCANR), was compared with the conventional nonlinear least squares (NLS) and nonlocal-means-filter-based NLS algorithms on simulated, phantom, and in vivo data. Visually, the PCANR algorithm generates R2* maps with significantly reduced noise and well-preserved tiny structures. Quantitatively, the PCANR algorithm produces R2* maps with lower root mean square errors at varying R2* values and signal-to-noise-ratio levels compared with the NLS and nonlocal-means-filter-based NLS algorithms. For high R2* values under low signal-to-noise-ratio levels, the PCANR algorithm outperforms the NLS and nonlocal-means-filter-based NLS algorithms in accuracy and precision, in terms of the mean and standard deviation of R2* measurements in selected regions of interest, respectively. The PCANR algorithm can reduce the effect of noise on liver R2* mapping, and the improved measurement precision will benefit the assessment of hepatic iron in clinical practice. Magn Reson Med 80:792-801, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
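
    A minimal sketch of the general idea above (not the published PCANR code): fit a mono-exponential decay per pixel while softly pulling R2* toward a neighborhood estimate, with a weight derived from inter-pixel signal similarity. The similarity-to-weight mapping and all parameter names here are assumptions.

```python
# Illustrative pixel-wise R2* fit with an adaptive neighborhood penalty.
import numpy as np
from scipy.optimize import least_squares

def similarity_weight(sig_a, sig_b, h=0.05, lam_max=50.0):
    """Map inter-pixel signal similarity to a regularization weight (assumed form)."""
    d2 = np.mean((sig_a - sig_b) ** 2)
    return lam_max * np.exp(-d2 / h)

def fit_pixel(te, signal, neighbor_r2s, weight):
    """Fit S(TE) = S0 * exp(-R2s * TE) with a soft pull toward a neighbor's R2*."""
    def residuals(p):
        s0, r2s = p
        data_term = s0 * np.exp(-r2s * te) - signal
        reg_term = np.sqrt(weight) * (r2s - neighbor_r2s)   # adaptive neighborhood term
        return np.append(data_term, reg_term)
    p0 = [max(signal[0], 1e-6), 1.0 / (te[-1] + 1e-9)]
    fit = least_squares(residuals, p0, bounds=([0.0, 0.0], [np.inf, np.inf]))
    return fit.x                                            # (S0, R2*)
```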

  15. Mixed linear-non-linear inversion of crustal deformation data: Bayesian inference of model, weighting and regularization parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, Jun'ichi; Johnson, Kaj M.

    2010-06-01

    We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
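
    A toy sketch of the mixed linear/non-linear strategy described above: the linearly entering parameters are solved analytically by least squares inside a Metropolis random walk over a non-linear parameter. The one-dimensional forward model, flat priors, and all names are illustrative assumptions, not the authors' geodetic implementation.

```python
# Toy mixed linear/non-linear Bayesian inversion: amplitudes m are profiled out
# analytically while a Metropolis walk samples the non-linear decay length L.
import numpy as np

def design_matrix(x, L):
    # Columns enter the prediction linearly; the decay length L enters non-linearly.
    return np.column_stack([np.exp(-x / L), np.ones_like(x)])

def profile_loglike(d, x, L, sigma):
    G = design_matrix(x, L)
    m_hat, *_ = np.linalg.lstsq(G, d, rcond=None)        # analytical linear solve
    r = d - G @ m_hat
    return -0.5 * np.sum(r ** 2) / sigma ** 2, m_hat

def metropolis(d, x, sigma, n_iter=5000, step=0.3, L0=5.0, seed=0):
    rng = np.random.default_rng(seed)
    L = L0
    ll, m = profile_loglike(d, x, L, sigma)
    samples = []
    for _ in range(n_iter):
        L_prop = abs(L + step * rng.standard_normal())    # reflect at zero to keep L > 0
        ll_prop, m_prop = profile_loglike(d, x, L_prop, sigma)
        if np.log(rng.uniform()) < ll_prop - ll:          # Metropolis accept/reject
            L, ll, m = L_prop, ll_prop, m_prop
        samples.append((L, *m))
    return np.array(samples)                              # columns: L, amplitude, offset
```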

  16. Filter Bank Regularized Common Spatial Pattern Ensemble for Small Sample Motor Imagery Classification.

    PubMed

    Park, Sang-Hoon; Lee, David; Lee, Sang-Goog

    2018-02-01

    In recent years, many feature extraction methods based on biological signals have been proposed. Among these, brain signals have the advantage that they can be obtained even from people with peripheral nervous system damage. Motor imagery electroencephalograms (EEG) are inexpensive to measure, offer a high temporal resolution, and are intuitive. Therefore, they have received a significant amount of attention in various fields, including signal processing, cognitive science, and medicine. The common spatial pattern (CSP) algorithm is a useful method for feature extraction from motor imagery EEG. However, performance degradation occurs in a small-sample setting (SSS), because the CSP depends on sample-based covariance. Since the active frequency range differs across subjects, it is also inconvenient to set the frequency range anew every time. In this paper, we propose a filter-bank-based feature extraction method to solve these problems. The proposed method consists of five steps. First, the motor imagery EEG is divided into sub-bands using a filter bank. Second, the regularized CSP (R-CSP) is applied to the divided EEG. Third, features are selected according to the mutual-information-based individual feature algorithm. Fourth, parameter sets are selected for the ensemble. Finally, classification is performed with an ensemble based on the selected features. The brain-computer interface competition III data set IVa is used to evaluate the performance of the proposed method. The proposed method improves the mean classification accuracy by 12.34%, 11.57%, 9%, 4.95%, and 4.47% compared with CSP, SR-CSP, R-CSP, filter bank CSP (FBCSP), and SR-FBCSP, respectively. Compared with filter bank R-CSP, a parameter-selection version of the proposed method, the classification accuracy is improved by 3.49%. In particular, the proposed method shows a large improvement in performance in the SSS.
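
    The sketch below illustrates only the basic building blocks named above: a band-pass filter bank, shrinkage-regularized class covariances, CSP filters from a generalized eigendecomposition, and log-variance features. The mutual-information selection and ensemble steps of the proposed method are omitted, and all parameter values are illustrative.

```python
# Minimal filter-bank regularized CSP feature extractor (illustrative only).
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh

def bandpass(trials, lo, hi, fs):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, trials, axis=-1)           # trials: (n_trials, n_ch, n_samp)

def reg_cov(trials, shrink=0.1):
    c = np.mean([np.cov(tr) for tr in trials], axis=0)
    return (1 - shrink) * c + shrink * np.trace(c) / c.shape[0] * np.eye(c.shape[0])

def csp_features(trials_a, trials_b, bands, fs, n_pairs=2):
    feats_a, feats_b = [], []
    for lo, hi in bands:
        xa, xb = bandpass(trials_a, lo, hi, fs), bandpass(trials_b, lo, hi, fs)
        ca, cb = reg_cov(xa), reg_cov(xb)
        _, W = eigh(ca, ca + cb)                      # generalized eigenvectors
        W = np.hstack([W[:, :n_pairs], W[:, -n_pairs:]])
        for src, dst in ((xa, feats_a), (xb, feats_b)):
            proj = np.einsum("cf,tcs->tfs", W, src)   # spatially filtered trials
            dst.append(np.log(proj.var(axis=-1)))     # log-variance features per band
    return np.hstack(feats_a), np.hstack(feats_b)
```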

  17. Information fusion in regularized inversion of tomographic pumping tests

    USGS Publications Warehouse

    Bohling, Geoffrey C.; ,

    2008-01-01

    In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. © 2008 Springer-Verlag Berlin Heidelberg.
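
    A generic sketch of the regularized-inversion idea described above: penalize deviations of the estimated parameters from a prior (background) model and sweep the regularization weight to trace the trade-off between data fit and model norm. This is a linear toy problem, not the steady-shape hydraulic-tomography inversion itself.

```python
# Tikhonov inversion with a penalty toward a prior model, swept over weights.
import numpy as np

def invert_with_prior(J, d, m_prior, weights):
    """Solve min ||J m - d||^2 + w^2 ||m - m_prior||^2 for each regularization weight w."""
    n = J.shape[1]
    results = []
    for w in weights:
        m = np.linalg.solve(J.T @ J + (w ** 2) * np.eye(n),
                            J.T @ d + (w ** 2) * m_prior)
        results.append((w, np.linalg.norm(J @ m - d),       # data misfit
                        np.linalg.norm(m - m_prior), m))     # deviation from the prior
    return results   # inspect (misfit, deviation) pairs to choose the weight
```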

  18. A novel scatter-matrix eigenvalues-based total variation (SMETV) regularization for medical image restoration

    NASA Astrophysics Data System (ADS)

    Huang, Zhenghua; Zhang, Tianxu; Deng, Lihua; Fang, Hao; Li, Qian

    2015-12-01

    Total variation (TV) regularization has proven to be a popular and effective model for image restoration because of its edge-preserving ability. However, as TV favors a piecewise-constant solution, flat regions of the processed image easily exhibit "staircase effects" and the amplitude of edges is underestimated; the underlying cause of the problem is that the regularization parameter cannot vary with the local spatial information of the image. In this paper, we propose a novel scatter-matrix-eigenvalue-based TV (SMETV) regularization with a blind restoration algorithm for deblurring medical images. The spatial information in different image regions is incorporated into the regularization by using an edge indicator, called the difference eigenvalue, to distinguish edges from flat areas. The proposed algorithm can effectively reduce the noise in flat regions as well as preserve edge and detail information. Moreover, it becomes more robust to changes in the regularization parameter. Extensive experiments demonstrate that the proposed approach produces results superior to most methods in both visual image quality and quantitative measures.
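
    As an illustration of the spatially adaptive idea only (not the SMETV blind-restoration algorithm), the sketch below builds an edge indicator from the eigenvalue difference of a local structure (scatter) matrix and uses it to lower the TV regularization weight at edges during a smoothed-TV denoising descent. Parameter values and the weight mapping are assumptions.

```python
# Spatially adaptive TV denoising guided by a scatter-matrix eigenvalue difference.
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_indicator(img, sigma=1.5):
    gy, gx = np.gradient(img)
    jxx = gaussian_filter(gx * gx, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    # Difference of the two structure-tensor eigenvalues: large at edges, ~0 in flat areas.
    return np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)

def adaptive_tv_denoise(noisy, lam_max=0.2, n_iter=300, step=0.05, eps=0.1):
    e = edge_indicator(noisy)
    lam = lam_max / (1.0 + e / (e.mean() + 1e-12))   # weaker smoothing at edges
    u = noisy.copy()
    for _ in range(n_iter):
        gy, gx = np.gradient(u)
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        div = (np.gradient(lam * gx / mag, axis=1) +
               np.gradient(lam * gy / mag, axis=0))
        u -= step * ((u - noisy) - div)              # descent on the adaptive-TV energy
    return u
```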

  19. On the regularized fermionic projector of the vacuum

    NASA Astrophysics Data System (ADS)

    Finster, Felix

    2008-03-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.

  20. A Model of Regularization Parameter Determination in Low-Dose X-Ray CT Reconstruction Based on Dictionary Learning.

    PubMed

    Zhang, Cheng; Zhang, Tao; Zheng, Jian; Li, Ming; Lu, Yanfei; You, Jiali; Guan, Yihui

    2015-01-01

    In recent years, X-ray computed tomography (CT) is becoming widely used to reveal patient's anatomical information. However, the side effect of radiation, relating to genetic or cancerous diseases, has caused great public concern. The problem is how to minimize radiation dose significantly while maintaining image quality. As a practical application of compressed sensing theory, one category of methods takes total variation (TV) minimization as the sparse constraint, which makes it possible and effective to get a reconstruction image of high quality in the undersampling situation. On the other hand, a preliminary attempt of low-dose CT reconstruction based on dictionary learning seems to be another effective choice. But some critical parameters, such as the regularization parameter, cannot be determined by detecting datasets. In this paper, we propose a reweighted objective function that contributes to a numerical calculation model of the regularization parameter. A number of experiments demonstrate that this strategy performs well with better reconstruction images and saving of a large amount of time.

  1. Stark widths regularities within spectral series of sodium isoelectronic sequence

    NASA Astrophysics Data System (ADS)

    Trklja, Nora; Tapalaga, Irinel; Dojčinović, Ivan P.; Purić, Jagoš

    2018-02-01

    Stark widths within spectral series of the sodium isoelectronic sequence have been studied. This is a unique approach that includes both neutrals and ions. Two levels of the problem are considered: if the required atomic parameters are known, Stark widths can be calculated by one of the known methods (in the present paper a modified semiempirical formula has been used), but if parameters are lacking, regularities enable determination of Stark broadening data. In the framework of regularity research, the dependence of Stark broadening on environmental conditions and certain atomic parameters has been investigated. The aim of this work is to give a simple model, with a minimum of required parameters, which can be used to calculate Stark broadening data for any chosen transition within sodium-like emitters. The obtained relations were used to predict Stark widths for transitions that have not yet been measured or calculated. This system enables fast data processing by means of the proposed theoretical model, and it provides quality control and verification of the obtained results.

  2. Regularized Semiparametric Estimation for Ordinary Differential Equations

    PubMed Central

    Li, Yun; Zhu, Ji; Wang, Naisyin

    2015-01-01

    Ordinary differential equations (ODEs) are widely used in modeling dynamic systems and have ample applications in the fields of physics, engineering, economics and biological sciences. The ODE parameters often possess physiological meanings and can help scientists gain better understanding of the system. One key interest is thus to well estimate these parameters. Ideally, constant parameters are preferred due to their easy interpretation. In reality, however, constant parameters can be too restrictive such that even after incorporating error terms, there could still be unknown sources of disturbance that lead to poor agreement between observed data and the estimated ODE system. In this paper, we address this issue and accommodate short-term interferences by allowing parameters to vary with time. We propose a new regularized estimation procedure on the time-varying parameters of an ODE system so that these parameters could change with time during transitions but remain constants within stable stages. We found, through simulation studies, that the proposed method performs well and tends to have less variation in comparison to the non-regularized approach. On the theoretical front, we derive finite-sample estimation error bounds for the proposed method. Applications of the proposed method to modeling the hare-lynx relationship and the measles incidence dynamic in Ontario, Canada lead to satisfactory and meaningful results. PMID:26392639

  3. Selective-area catalyst-free MBE growth of GaN nanowires using a patterned oxide layer.

    PubMed

    Schumann, T; Gotschke, T; Limbach, F; Stoica, T; Calarco, R

    2011-03-04

    GaN nanowires (NWs) were grown selectively in holes of a patterned silicon oxide mask, by rf-plasma-assisted molecular beam epitaxy (PAMBE), without any metal catalyst. The oxide was deposited on a thin AlN buffer layer previously grown on a Si(111) substrate. Regular arrays of holes in the oxide layer were obtained using standard e-beam lithography. The selectivity of growth has been studied varying the substrate temperature, gallium beam equivalent pressure and patterning layout. Adjusting the growth parameters, GaN NWs can be selectively grown in the holes of the patterned oxide with complete suppression of the parasitic growth in between the holes. The occupation probability of a hole with a single or multiple NWs depends strongly on its diameter. The selectively grown GaN NWs have one common crystallographic orientation with respect to the Si(111) substrate via the AlN buffer layer, as proven by x-ray diffraction (XRD) measurements. Based on the experimental data, we present a schematic model of the GaN NW formation in which a GaN pedestal is initially grown in the hole.

  4. MSL-RAD Cruise Operations Concept

    NASA Technical Reports Server (NTRS)

    Brinza, David E.; Zeitlin, Cary; Hassler, Donald; Weigle, Gerald E.; Boettcher, Stephan; Martin, Cesar; Wimmer-Schweingrubber, Robert

    2012-01-01

    The Mars Science Laboratory (MSL) payload includes the Radiation Assessment Detector (RAD) instrument, intended to fully characterize the radiation environment for the MSL mission. The RAD instrument operations concept is intended to reduce impact to spacecraft resources and effort for the MSL operations team. By design, RAD autonomously performs regular science observations without the need for frequent commanding from the Rover Compute Element (RCE). RAD operates with pre-defined "sleep" and "observe" periods, with an adjustable duty cycle for meeting power and data volume constraints during the mission. At the start of a new science observation, RAD performs a pre-observation activity to assess count rates for selected RAD detector elements. Based on this assessment, RAD can enter "solar event" mode, in which instrument parameters (including observation duration) are selected to more effectively characterize the environment. At the end of each observation period, RAD stores a time-tagged, fixed length science data packet in its non-volatile mass memory storage. The operating cadence is defined by adjustable parameters, also stored in non-volatile memory within the instrument. Periodically, the RCE executes an on-board sequence to transfer RAD science data packets from the instrument mass storage to the MSL downlink buffer. Infrequently, the RAD instrument operating configuration is modified by updating internal parameter tables and configuration entries.

  5. Fast incorporation of optical flow into active polygons.

    PubMed

    Unal, Gozde; Krim, Hamid; Yezzi, Anthony

    2005-06-01

    In this paper, we first reconsider, in a different light, the addition of a prediction step to active contour-based visual tracking using an optical flow and clarify the local computation of the latter along the boundaries of continuous active contours with appropriate regularizers. We subsequently detail our contribution of computing an optical flow-based prediction step directly from the parameters of an active polygon, and of exploiting it in object tracking. This is in contrast to an explicitly separate computation of the optical flow and its ad hoc application. It also provides an inherent regularization effect resulting from integrating measurements along polygon edges. As a result, we completely avoid the need of adding ad hoc regularizing terms to the optical flow computations, and the inevitably arbitrary associated weighting parameters. This direct integration of optical flow into the active polygon framework distinguishes this technique from most previous contour-based approaches, where regularization terms are theoretically, as well as practically, essential. The greater robustness and speed due to a reduced number of parameters of this technique are additional and appealing features.

  6. Hydrologic Process Regularization for Improved Geoelectrical Monitoring of a Lab-Scale Saline Tracer Experiment

    NASA Astrophysics Data System (ADS)

    Oware, E. K.; Moysey, S. M.

    2016-12-01

    Regularization stabilizes the geophysical imaging problem resulting from sparse and noisy measurements that render solutions unstable and non-unique. Conventional regularization constraints are, however, independent of the physics of the underlying process and often produce smoothed-out tomograms with mass underestimation. Cascaded time-lapse (CTL) is a widely used reconstruction technique for monitoring wherein a tomogram obtained from the background dataset is employed as starting model for the inversion of subsequent time-lapse datasets. In contrast, a proper orthogonal decomposition (POD)-constrained inversion framework enforces physics-based regularization based upon prior understanding of the expected evolution of state variables. The physics-based constraints are represented in the form of POD basis vectors. The basis vectors are constructed from numerically generated training images (TIs) that mimic the desired process. The target can be reconstructed from a small number of selected basis vectors, hence, there is a reduction in the number of inversion parameters compared to the full dimensional space. The inversion involves finding the optimal combination of the selected basis vectors conditioned on the geophysical measurements. We apply the algorithm to 2-D lab-scale saline transport experiments with electrical resistivity (ER) monitoring. We consider two transport scenarios with one and two mass injection points evolving into unimodal and bimodal plume morphologies, respectively. The unimodal plume is consistent with the assumptions underlying the generation of the TIs, whereas bimodality in plume morphology was not conceptualized. We compare difference tomograms retrieved from POD with those obtained from CTL. Qualitative comparisons of the difference tomograms with images of their corresponding dye plumes suggest that POD recovered more compact plumes in contrast to those of CTL. While mass recovery generally deteriorated with increasing number of time-steps, POD outperformed CTL in terms of mass recovery accuracy rates. POD is computationally superior requiring only 2.5 mins to complete each inversion compared to 3 hours for CTL to do the same.

  7. Development of an automated experimental setup for the study of ionic-exchange kinetics. Application to the ionic adsorption, equilibrium attainment and dissolution of apatite compounds.

    PubMed

    Thomann, J M; Gasser, P; Bres, E F; Voegel, J C; Gramain, P

    1990-02-01

    An ion-selective electrode and microcomputer-based experimental setup for the study of ionic-exchange kinetics between a powdered solid and the solution is described. The equipment is composed of easily available commercial devices and a data acquisition and regularization computer program is presented. The system, especially developed to investigate the ionic adsorption, equilibrium attainment and dissolution of hard mineralized tissues, provides good reliable results by taking into account the volume changes of the reacting solution and the electrode behaviour under different experimental conditions, and by avoiding carbonation of the solution. A second computer program, using the regularized data and the experimental parameters, calculates the quantities of protons consumed and calcium released in the case of equilibrium attainment and dissolution of apatite-like compounds. Finally, typical examples of ion-exchange and dissolution kinetics under constant pH of enamel and synthetic hydroxyapatite are examined.

  8. Hemodynamic changes in a rat parietal cortex after endothelin-1-induced middle cerebral artery occlusion monitored by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Ma, Yushu; Dou, Shidan; Wang, Yi; La, Dongsheng; Liu, Jianghong; Ma, Zhenhe

    2016-07-01

    A blockage of the middle cerebral artery (MCA) on the cortical branch will seriously affect the blood supply of the cerebral cortex. Real-time monitoring of MCA hemodynamic parameters is critical for therapy and rehabilitation. Optical coherence tomography (OCT) is a powerful imaging modality that can produce not only structural images but also functional information on the tissue. We use OCT to detect hemodynamic changes after MCA branch occlusion. We injected a selected dose of endothelin-1 (ET-1) at a depth of 1 mm near the MCA and let the blood vessels follow a process first of occlusion and then of slow reperfusion as realistically as possible to simulate local cerebral ischemia. During this period, we used optical microangiography and Doppler OCT to obtain multiple hemodynamic MCA parameters. The change trend of these parameters from before to after ET-1 injection clearly reflects the dynamic regularity of the MCA. These results show the mechanism of the cerebral ischemia-reperfusion process after a transient middle cerebral artery occlusion and confirm that OCT can be used to monitor hemodynamic parameters.

  9. Output-only modal parameter estimator of linear time-varying structural systems based on vector TAR model and least squares support vector machine

    NASA Astrophysics Data System (ADS)

    Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei

    2018-01-01

    Identification of time-varying modal parameters contributes to structural health monitoring, fault detection, vibration control, etc., of operational time-varying structural systems. However, it is a challenging task because no more information is available for identifying time-varying systems than for time-invariant ones. This paper presents a vector time-dependent autoregressive model and least-squares support vector machine based modal parameter estimator for linear time-varying structural systems in the case of output-only measurements. To reduce the computational cost, a Wendland's compactly supported radial basis function is used to achieve sparsity of the Gram matrix. A Gamma-test-based non-parametric approach for selecting the regularization factor is adapted for the proposed estimator to replace the time-consuming n-fold cross-validation. A series of numerical examples has illustrated the advantages of the proposed modal parameter estimator in suppressing overestimation and in handling short data records. A laboratory experiment has further validated the proposed estimator.

  10. Efficient Regular Perovskite Solar Cells Based on Pristine [70]Fullerene as Electron-Selective Contact.

    PubMed

    Collavini, Silvia; Kosta, Ivet; Völker, Sebastian F; Cabanero, German; Grande, Hans J; Tena-Zaera, Ramón; Delgado, Juan Luis

    2016-06-08

    [70]Fullerene is presented as an efficient alternative electron-selective contact (ESC) for regular-architecture perovskite solar cells (PSCs). A smart and simple, well-described solution processing protocol for the preparation of [70]- and [60]fullerene-based solar cells, namely the fullerene saturation approach (FSA), allowed us to obtain similar power conversion efficiencies for both fullerene materials (i.e., 10.4 and 11.4 % for [70]- and [60]fullerene-based devices, respectively). Importantly, despite the low electron mobility and significant visible-light absorption of [70]fullerene, the presented protocol allows the employment of [70]fullerene as an efficient ESC. The [70]fullerene film thickness and its solubility in the perovskite processing solutions are crucial parameters, which can be controlled by the use of this simple solution processing protocol. The damage to the [70]fullerene film through dissolution during the perovskite deposition is avoided through the saturation of the perovskite processing solution with [70]fullerene. Additionally, this fullerene-saturation strategy improves the performance of the perovskite film significantly and enhances the power conversion efficiency of solar cells based on different ESCs (i.e., [60]fullerene, [70]fullerene, and TiO2 ). Therefore, this universal solution processing protocol widens the opportunities for the further development of PSCs. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Perceived Effectiveness among College Students of Selected Statistical Measures in Motivating Exercise Behavior

    ERIC Educational Resources Information Center

    Merrill, Ray M.; Chatterley, Amanda; Shields, Eric C.

    2005-01-01

    This study explored the effectiveness of selected statistical measures at motivating or maintaining regular exercise among college students. The study also considered whether ease in understanding these statistical measures was associated with perceived effectiveness at motivating or maintaining regular exercise. Analyses were based on a…

  12. Higher order total variation regularization for EIT reconstruction.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular-grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes located on selected vertical lines. For each of the reconstructed images as well as the ground truth image, conductivity changes located along the selected left and right vertical lines are plotted. In these plots, the notation GT in the legend stands for ground truth, TV stands for the total variation method, and TGV stands for the total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also demonstrated.

  13. The Role of Visual Eccentricity on Preference for Abstract Symmetry

    PubMed Central

    O’ Sullivan, Noreen; Bertamini, Marco

    2016-01-01

    This study tested preference for abstract patterns, comparing random patterns to a two-fold bilateral symmetry. Stimuli were presented at random locations in the periphery. Preference for bilateral symmetry has been extensively studied in central vision, but evaluation at different locations had not been systematically investigated. Patterns were presented for 200 ms within a large circular region. On each trial participant changed fixation and were instructed to select any location. Eccentricity values were calculated a posteriori as the distance between ocular coordinates at pattern onset and coordinates for the centre of the pattern. Experiment 1 consisted of two Tasks. In Task 1, participants detected pattern regularity as fast as possible. In Task 2 they evaluated their liking for the pattern on a Likert-scale. Results from Task 1 revealed that with our parameters eccentricity did not affect symmetry detection. However, in Task 2, eccentricity predicted more negative evaluation of symmetry, but not random patterns. In Experiment 2 participants were either presented with symmetry or random patterns. Regularity was task-irrelevant in this task. Participants discriminated the proportion of black/white dots within the pattern and then evaluated their liking for the pattern. Even when only one type of regularity was presented and regularity was task-irrelevant, preference evaluation for symmetry decreased with increasing eccentricity, whereas eccentricity did not affect the evaluation of random patterns. We conclude that symmetry appreciation is higher for foveal presentation in a way not fully accounted for by sensitivity. PMID:27124081

  14. The Role of Visual Eccentricity on Preference for Abstract Symmetry.

    PubMed

    Rampone, Giulia; O' Sullivan, Noreen; Bertamini, Marco

    2016-01-01

    This study tested preference for abstract patterns, comparing random patterns to a two-fold bilateral symmetry. Stimuli were presented at random locations in the periphery. Preference for bilateral symmetry has been extensively studied in central vision, but evaluation at different locations had not been systematically investigated. Patterns were presented for 200 ms within a large circular region. On each trial participant changed fixation and were instructed to select any location. Eccentricity values were calculated a posteriori as the distance between ocular coordinates at pattern onset and coordinates for the centre of the pattern. Experiment 1 consisted of two Tasks. In Task 1, participants detected pattern regularity as fast as possible. In Task 2 they evaluated their liking for the pattern on a Likert-scale. Results from Task 1 revealed that with our parameters eccentricity did not affect symmetry detection. However, in Task 2, eccentricity predicted more negative evaluation of symmetry, but not random patterns. In Experiment 2 participants were either presented with symmetry or random patterns. Regularity was task-irrelevant in this task. Participants discriminated the proportion of black/white dots within the pattern and then evaluated their liking for the pattern. Even when only one type of regularity was presented and regularity was task-irrelevant, preference evaluation for symmetry decreased with increasing eccentricity, whereas eccentricity did not affect the evaluation of random patterns. We conclude that symmetry appreciation is higher for foveal presentation in a way not fully accounted for by sensitivity.

  15. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement.

    PubMed

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

    In many image restoration/resolution enhancement applications, the blurring process, i.e., point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problem from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation method (GCV). We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
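
    As a simplified stand-in for the Lanczos/Gauss-quadrature machinery above, the sketch below selects a PSF parameter (Gaussian blur width) and the Tikhonov regularization parameter jointly by minimizing GCV for a circulant blur, where the FFT makes the GCV score cheap to evaluate. The grid search and all names are illustrative.

```python
# Joint, data-driven selection of a PSF width and regularization parameter by GCV.
import numpy as np

def gaussian_otf(shape, width):
    n0, n1 = shape
    y, x = np.meshgrid(np.arange(n0) - n0 // 2, np.arange(n1) - n1 // 2, indexing="ij")
    psf = np.exp(-(x ** 2 + y ** 2) / (2.0 * width ** 2))
    psf /= psf.sum()
    return np.fft.fft2(np.fft.ifftshift(psf))            # optical transfer function

def gcv(blurred, width, lam):
    H = gaussian_otf(blurred.shape, width)
    Y = np.fft.fft2(blurred)
    filt = np.abs(H) ** 2 / (np.abs(H) ** 2 + lam)        # Tikhonov filter factors
    resid = np.fft.ifft2((1.0 - filt) * Y)                # residual of the regularized fit
    n = blurred.size
    return n * np.sum(np.abs(resid) ** 2) / (n - filt.sum()) ** 2

def select_psf_and_lambda(blurred, widths, lambdas):
    scores = [(gcv(blurred, w, l), w, l) for w in widths for l in lambdas]
    _, w_best, l_best = min(scores)
    return w_best, l_best
```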

  16. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.

  17. A fully Galerkin method for the recovery of stiffness and damping parameters in Euler-Bernoulli beam models

    NASA Technical Reports Server (NTRS)

    Smith, R. C.; Bowers, K. L.

    1991-01-01

    A fully Sinc-Galerkin method for recovering the spatially varying stiffness and damping parameters in Euler-Bernoulli beam models is presented. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which converges exponentially and is valid on the infinite time interval. Hence the method avoids the time-stepping which is characteristic of many of the forward schemes which are used in parameter recovery algorithms. Tikhonov regularization is used to stabilize the resulting inverse problem, and the L-curve method for determining an appropriate value of the regularization parameter is briefly discussed. Numerical examples are given which demonstrate the applicability of the method for both individual and simultaneous recovery of the material parameters.
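
    The following sketch illustrates the L-curve idea mentioned above for a generic Tikhonov problem: evaluate (log residual norm, log solution norm) over a grid of regularization parameters and pick the point of maximum curvature. It is not tied to the Sinc-Galerkin beam formulation; the discrete-curvature corner rule is a common heuristic.

```python
# L-curve selection of a Tikhonov regularization parameter (generic sketch).
import numpy as np

def l_curve_points(A, y, lambdas):
    pts = []
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
        pts.append((np.log(np.linalg.norm(A @ x - y)),
                    np.log(np.linalg.norm(x))))
    return np.array(pts)

def corner_index(pts):
    """Index of maximum discrete curvature along the L-curve (its 'corner')."""
    x, y = pts[:, 0], pts[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    curvature = np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
    return int(np.argmax(curvature[1:-1])) + 1        # ignore the endpoints

# usage sketch: lambdas = np.logspace(-8, 2, 50)
#               lam = lambdas[corner_index(l_curve_points(A, y, lambdas))]
```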

  18. Prospect theory reflects selective allocation of attention.

    PubMed

    Pachur, Thorsten; Schulte-Mecklenbeck, Michael; Murphy, Ryan O; Hertwig, Ralph

    2018-02-01

    There is a disconnect in the literature between analyses of risky choice based on cumulative prospect theory (CPT) and work on predecisional information processing. One likely reason is that for expectation models (e.g., CPT), it is often assumed that people behaved only as if they conducted the computations leading to the predicted choice and that the models are thus mute regarding information processing. We suggest that key psychological constructs in CPT, such as loss aversion and outcome and probability sensitivity, can be interpreted in terms of attention allocation. In two experiments, we tested hypotheses about specific links between CPT parameters and attentional regularities. Experiment 1 used process tracing to monitor participants' predecisional attention allocation to outcome and probability information. As hypothesized, individual differences in CPT's loss-aversion, outcome-sensitivity, and probability-sensitivity parameters (estimated from participants' choices) were systematically associated with individual differences in attention allocation to outcome and probability information. For instance, loss aversion was associated with the relative attention allocated to loss and gain outcomes, and a more strongly curved weighting function was associated with less attention allocated to probabilities. Experiment 2 manipulated participants' attention to losses or gains, causing systematic differences in CPT's loss-aversion parameter. This result indicates that attention allocation can to some extent cause choice regularities that are captured by CPT. Our findings demonstrate an as-if model's capacity to reflect characteristics of information processing. We suggest that the observed CPT-attention links can be harnessed to inform the development of process models of risky choice. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  19. Dense motion estimation using regularization constraints on local parametric models.

    PubMed

    Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein

    2004-11-01

    This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide the additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a regularization mean. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions large in magnitude, motion discontinuities, and produces accurate piecewise-smooth motion fields.

  20. Transition from regular to irregular reflection of cylindrical converging shock waves over convex obstacles

    NASA Astrophysics Data System (ADS)

    Vignati, F.; Guardone, A.

    2017-11-01

    An analytical model for the evolution of regular reflections of cylindrical converging shock waves over circular-arc obstacles is proposed. The model is based on a new (local) parameter, the perceived wedge angle, which substitutes for the (global) wedge angle of planar surfaces and accounts for the time-dependent curvature of both the shock and the obstacle at the reflection point. The new model compares fairly well with numerical results. Results from numerical simulations of the regular-to-Mach transition, which eventually occurs further downstream along the obstacle, point to the perceived wedge angle as the most significant parameter for identifying regular-to-Mach transitions. Indeed, at the transition point, the value of the perceived wedge angle is between 39° and 42° for all investigated configurations, whereas, e.g., the absolute local wedge angle varies between 10° and 45° under the same conditions.

  1. El Maestro de Sala Regular de Clases Ante el Proceso de Inclusion del Nino Con Impedimento

    ERIC Educational Resources Information Center

    Rosa Morales, Awilda

    2012-01-01

    The purpose of this research was to describe the experiences of regular class elementary school teachers with the Puerto Rico Department of Education who have worked with handicapped children who have been integrated to the regular classroom. Five elementary level regular class teachers were selected in the northwest zone of Puerto Rico who during…

  2. Evaluation of breathing patterns for respiratory-gated radiation therapy using the respiration regularity index

    NASA Astrophysics Data System (ADS)

    Cheong, Kwang-Ho; Lee, MeYeon; Kang, Sei-Kwon; Yoon, Jai-Woong; Park, SoAh; Hwang, Taejin; Kim, Haeyoung; Kim, KyoungJu; Han, Tae Jin; Bae, Hoonsik

    2015-01-01

    Despite the considerable importance of accurately estimating the respiration regularity of a patient in motion-compensation treatment, not to mention the necessity of maintaining that regularity through the following sessions, an effective and simply applicable method by which those goals can be accomplished has rarely been reported. The authors herein propose a simple respiration regularity index based on parameters derived from a correspondingly simplified respiration model. In order to simplify a patient's breathing pattern while preserving the data's intrinsic properties, we defined a respiration model as a cos⁴(ω(t)·t) waveform with a baseline drift. According to this respiration formula, breathing-pattern fluctuation could be explained using four factors: the sample standard deviation of the respiration period (s_f), the sample standard deviation of the amplitude (s_a), and the results of a simple regression of the baseline drift (slope β and standard deviation of residuals σ_r) of a respiration signal. The overall irregularity (δ) was defined in terms of a variable newly derived by applying principal component analysis (PCA) to the four fluctuation parameters; this variable has two principal components (ω_1, ω_2). The proposed respiration regularity index was defined as ρ = ln(1 + 1/δ)/2, a higher ρ indicating a more regular breathing pattern. We investigated its clinical relevance by comparing it with other known parameters. Subsequently, we applied it to 110 respiration signals acquired from five liver and five lung cancer patients by using real-time position management (RPM; Varian Medical Systems, Palo Alto, CA). Correlations between the regularity of the first session and the remaining fractions were investigated using Pearson's correlation coefficient. Additionally, the respiration regularity was compared between the liver and lung cancer patient groups. The respiration regularity was determined based on ρ; patients with ρ < 0.3 showed worse regularity than the others, whereas ρ > 0.7 was suitable for respiratory-gated radiation therapy (RGRT). Fluctuations in the breathing cycle and the amplitude were especially determinative of ρ. If the respiration regularity of a patient's first session was known, it could be estimated through subsequent sessions. Notably, the breathing patterns of the lung cancer patients were more irregular than those of the liver cancer patients. Respiration regularity could be objectively determined by using a composite index, ρ. Such single-index testing of respiration regularity can facilitate determination of RGRT availability in clinical settings, especially for free-breathing cases.
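
    The sketch below extracts the four fluctuation parameters named above (s_f, s_a, baseline slope β, and residual spread σ_r) from a respiration trace. Because the abstract does not reproduce the exact PCA-based definition of δ, the final combination into δ and ρ is a placeholder assumption rather than the published formula.

```python
# Breathing-fluctuation parameters from a respiration trace (illustrative only).
import numpy as np
from scipy.signal import find_peaks

def respiration_regularity(signal, fs):
    peaks, _ = find_peaks(signal, distance=int(0.5 * fs))
    troughs, _ = find_peaks(-signal, distance=int(0.5 * fs))
    n = min(len(peaks), len(troughs))
    periods = np.diff(peaks) / fs                         # breath-to-breath periods (s)
    amplitudes = signal[peaks[:n]] - signal[troughs[:n]]
    s_f = np.std(periods, ddof=1)                         # period fluctuation
    s_a = np.std(amplitudes, ddof=1)                      # amplitude fluctuation
    t = np.arange(len(signal)) / fs
    beta, intercept = np.polyfit(t, signal, 1)            # baseline-drift slope
    sigma_r = np.std(signal - (beta * t + intercept))     # residual spread of the drift fit
    # Placeholder combination (assumption, NOT the published PCA-based delta):
    delta = np.linalg.norm([s_f, s_a, abs(beta), sigma_r])
    rho = np.log(1.0 + 1.0 / delta) / 2.0                 # rho = ln(1 + 1/delta)/2
    return rho, (s_f, s_a, beta, sigma_r)
```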

  3. An entropy regularization method applied to the identification of wave distribution function for an ELF hiss event

    NASA Astrophysics Data System (ADS)

    Prot, Olivier; SantolíK, OndřEj; Trotignon, Jean-Gabriel; Deferaudy, Hervé

    2006-06-01

    An entropy regularization algorithm (ERA) has been developed to compute the wave-energy density from electromagnetic field measurements. It is based on the wave distribution function (WDF) concept. To assess its suitability and efficiency, the algorithm is applied to experimental data that has already been analyzed using other inversion techniques. The FREJA satellite data that is used consists of six spectral matrices corresponding to six time-frequency points of an ELF hiss-event spectrogram. The WDF analysis is performed on these six points and the results are compared with those obtained previously. A statistical stability analysis confirms the stability of the solutions. The WDF computation is fast and requires no prespecified parameters. The regularization parameter has been chosen in accordance with Morozov's discrepancy principle. The Generalized Cross-Validation and L-curve criteria are then tentatively used to provide a fully data-driven method. However, these criteria fail to determine a suitable value of the regularization parameter. Although the entropy regularization leads to solutions that agree fairly well with those already published, some differences are observed, and these are discussed in detail. The main advantage of the ERA is to return the WDF that exhibits the largest entropy and to avoid the use of a priori models, which sometimes seem to be more accurate but without any justification.
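
    The discrepancy principle mentioned above can be illustrated with a generic Tikhonov problem: choose the regularization parameter so that the residual norm matches the expected noise level. The sketch below does this by log-space bisection; it is not the entropy-regularization WDF code, and all names are illustrative.

```python
# Morozov's discrepancy principle for a generic Tikhonov problem.
import numpy as np

def residual_norm(A, y, lam):
    x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
    return np.linalg.norm(A @ x - y)

def discrepancy_lambda(A, y, noise_norm, lo=1e-8, hi=1e8, n_bisect=60):
    """Bisect in log space: the residual norm grows monotonically with lambda."""
    for _ in range(n_bisect):
        mid = np.sqrt(lo * hi)
        if residual_norm(A, y, mid) < noise_norm:
            lo = mid                                   # residual too small -> more regularization
        else:
            hi = mid
    return np.sqrt(lo * hi)

# usage sketch: lam = discrepancy_lambda(A, y, noise_norm=np.sqrt(len(y)) * sigma)
```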

  4. Regularization of the Perturbed Spatial Restricted Three-Body Problem by L-Transformations

    NASA Astrophysics Data System (ADS)

    Poleshchikov, S. M.

    2018-03-01

    Equations of motion for the perturbed circular restricted three-body problem have been regularized in canonical variables in a moving coordinate system. Two different L-matrices of the fourth order are used in the regularization. Conditions for generalized symplecticity of the constructed transform have been checked. In the unperturbed case, the regular equations have a polynomial structure. The regular equations have been numerically integrated using the Runge-Kutta-Fehlberg method. The results of numerical experiments are given for the Earth-Moon system parameters taking into account the perturbation of the Sun for different L-matrices.

  5. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection

    PubMed Central

    Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos

    2016-01-01

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328

  6. Ideal gas solubilities and solubility selectivities in a binary mixture of room-temperature ionic liquids.

    PubMed

    Finotello, Alexia; Bara, Jason E; Narayan, Suguna; Camper, Dean; Noble, Richard D

    2008-02-28

    This study focuses on the solubility behaviors of CO2, CH4, and N2 gases in binary mixtures of imidazolium-based room-temperature ionic liquids (RTILs) using 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide ([C2mim][Tf2N]) and 1-ethyl-3-methylimidazolium tetrafluoroborate ([C2mim][BF4]) at 40 degrees C and low pressures (approximately 1 atm). The mixtures tested were 0, 25, 50, 75, 90, 95, and 100 mol % [C2mim][BF4] in [C2mim][Tf2N]. Results show that regular solution theory (RST) can be used to describe the gas solubility and selectivity behaviors in RTIL mixtures using an average mixture solubility parameter or an average measured mixture molar volume. Interestingly, the solubility selectivity, defined as the ratio of gas mole fractions in the RTIL mixture, of CO2 with N2 or CH4 in pure [C2mim][BF4] can be enhanced by adding 5 mol % [C2mim][Tf2N].

  7. Method of Individual Forecasting of Technical State of Logging Machines

    NASA Astrophysics Data System (ADS)

    Kozlov, V. G.; Gulevsky, V. A.; Skrypnikov, A. V.; Logoyda, V. S.; Menzhulova, A. S.

    2018-03-01

    Developing a model that evaluates the possibility of failure requires knowledge of the regularities with which the technical-condition parameters of machines in use change. Studying these regularities creates the need for stochastic models that take into account the physical essence of the destruction processes of the machines' structural elements, the technology of their production, their degradation, and the stochastic properties of the technical-state parameters as well as the operating conditions and modes.

  8. A Model of Regularization Parameter Determination in Low-Dose X-Ray CT Reconstruction Based on Dictionary Learning

    PubMed Central

    Zhang, Cheng; Zhang, Tao; Li, Ming; Lu, Yanfei; You, Jiali; Guan, Yihui

    2015-01-01

    In recent years, X-ray computed tomography (CT) is becoming widely used to reveal patient's anatomical information. However, the side effect of radiation, relating to genetic or cancerous diseases, has caused great public concern. The problem is how to minimize radiation dose significantly while maintaining image quality. As a practical application of compressed sensing theory, one category of methods takes total variation (TV) minimization as the sparse constraint, which makes it possible and effective to get a reconstruction image of high quality in the undersampling situation. On the other hand, a preliminary attempt of low-dose CT reconstruction based on dictionary learning seems to be another effective choice. But some critical parameters, such as the regularization parameter, cannot be determined by detecting datasets. In this paper, we propose a reweighted objective function that contributes to a numerical calculation model of the regularization parameter. A number of experiments demonstrate that this strategy performs well with better reconstruction images and saving of a large amount of time. PMID:26550024

  9. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
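
    The sketch below shows only the acceleration ingredients discussed above, momentum (FISTA-type) iterative soft thresholding with adaptive restart, for a generic sparse least-squares problem. It does not use the B1-based majorizing matrices that define BARISTA; names and parameter values are illustrative.

```python
# Momentum-accelerated soft thresholding with adaptive restart for
# min 0.5*||Ax - y||^2 + lam*||x||_1 (generic sketch, not BARISTA itself).
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_restart(A, y, lam, n_iter=300):
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_prev = x
        grad = A.T @ (A @ z - y)
        x = soft(z - grad / L, lam / L)            # proximal gradient (soft-threshold) step
        if np.dot(z - x, x - x_prev) > 0:          # adaptive restart: momentum points uphill
            t, z = 1.0, x.copy()
        else:
            t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x + ((t - 1.0) / t_next) * (x - x_prev)
            t = t_next
    return x
```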

  10. Fast Parallel MR Image Reconstruction via B1-based, Adaptive Restart, Iterative Soft Thresholding Algorithms (BARISTA)

    PubMed Central

    Noll, Douglas C.; Fessler, Jeffrey A.

    2014-01-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484

  11. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
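
    A minimal sketch of the evaluation protocol described above: repeat the reconstruction with fresh measurement noise, estimate per-pixel bias and variance against the known test image, and report the image MSE as mean(bias² + variance) for each regularization parameter. The `forward` and `reconstruct` callables are placeholders for the imaging model and regularized solver under study.

```python
# Bias/variance decomposition of image MSE over repeated noisy reconstructions.
import numpy as np

def image_mse_components(x_true, forward, reconstruct, lam, sigma, n_rep=100, seed=0):
    rng = np.random.default_rng(seed)
    y_clean = forward(x_true)
    recons = []
    for _ in range(n_rep):
        y = y_clean + sigma * rng.standard_normal(y_clean.shape)
        recons.append(reconstruct(y, lam))
    recons = np.array(recons)
    bias = recons.mean(axis=0) - x_true            # per-pixel bias
    var = recons.var(axis=0)                       # per-pixel variance
    mse = np.mean(bias ** 2 + var)                 # image MSE = mean(bias^2 + variance)
    return mse, bias, var

# Sweeping lam and plotting the terms reproduces the trade-off described above:
# bias dominates at large lam, variance dominates as lam shrinks.
```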

  12. Model-based estimation with boundary side information or boundary regularization [cardiac emission CT].

    PubMed

    Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O

    1994-01-01

    The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low contrast and low count rate situations. Here they propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization through formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information.

  13. Theoretical Analysis of Penalized Maximum-Likelihood Patlak Parametric Image Reconstruction in Dynamic PET for Lesion Detection.

    PubMed

    Yang, Li; Wang, Guobao; Qi, Jinyi

    2016-04-01

    Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
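
    A minimal sketch of the indirect route described above: pixel-wise Patlak graphical analysis applied to reconstructed time-activity curves. The frame index marking the start of the linear Patlak regime and all names are illustrative assumptions.

```python
import numpy as np

def patlak_pixelwise(tacs, cp, times, t_start_idx):
    """Indirect Patlak analysis: fit slope (Ki) and intercept per pixel from
    reconstructed time-activity curves `tacs` (n_pixels, n_frames), plasma
    input `cp` (n_frames,) and frame mid-times `times` (n_frames,). Frames
    from `t_start_idx` on are assumed to lie in the linear Patlak regime."""
    # Cumulative trapezoidal integral of the plasma input ("Patlak time").
    int_cp = np.concatenate(
        ([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(times))))
    x = (int_cp / cp)[t_start_idx:]                  # (n_used,)
    y = tacs[:, t_start_idx:] / cp[t_start_idx:]     # (n_pixels, n_used)
    # Least-squares line fit y = Ki*x + b, vectorized over pixels.
    X = np.vstack([x, np.ones_like(x)]).T            # (n_used, 2)
    coef, *_ = np.linalg.lstsq(X, y.T, rcond=None)   # (2, n_pixels)
    ki, intercept = coef[0], coef[1]
    return ki, intercept
```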

  14. Assessment of gonadotropins and testosterone hormone levels in regular Mitragyna speciosa (Korth.) users.

    PubMed

    Singh, Darshan; Murugaiyah, Vikneswaran; Hamid, Shahrul Bariyah Sahul; Kasinather, Vicknasingam; Chan, Michelle Su Ann; Ho, Eric Tatt Wei; Grundmann, Oliver; Chear, Nelson Jeng Yeou; Mansor, Sharif Mahsufi

    2018-07-15

    Mitragyna speciosa (Korth.), also known as kratom, is a native medicinal plant of Southeast Asia with opioid-like effects. Kratom tea/juice has traditionally been used as a folk remedy and for controlling opiate withdrawal in Malaysia. Long-term opioid use is associated with depletion in testosterone levels. Since kratom is reported to deform sperm morphology and reduce sperm motility, we aimed to clinically investigate testosterone levels following long-term kratom tea/juice use in regular kratom users. A total of 19 regular kratom users were recruited for this cross-sectional study. A full-blood test was conducted, including determination of testosterone level, follicle stimulating hormone (FSH) and luteinizing hormone (LH) profile, as well as hematological and biochemical parameters of participants. We found that long-term kratom tea/juice consumption with a daily mitragynine dose of 76.23-94.15 mg did not impair testosterone levels, gonadotrophins, or hematological and biochemical parameters in regular kratom users. Regular kratom tea/juice consumption over prolonged periods (>2 years) was not associated with testosterone-impairing effects in humans. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Estimating parameter of influenza transmission using regularized least square

    NASA Astrophysics Data System (ADS)

    Nuraini, N.; Syukriah, Y.; Indratno, S. W.

    2014-02-01

    The transmission process of influenza can be represented mathematically as a system of non-linear differential equations. In this model, the transmission of influenza is governed by the contact-rate parameter between infected and susceptible hosts. This parameter is estimated using a regularized least square method, where the Finite Element Method and the Euler Method are used to approximate the solution of the SIR differential equations. New infection data for influenza from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the daily contact-rate proportion of the transmission probability, which influences the number of people infected by influenza. The relation between the estimated parameter and the number of people infected by influenza is measured by the coefficient of correlation. The numerical results show a positive correlation between the estimated parameters and the number of infected people.
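
    A minimal sketch of estimating a contact-rate parameter of an SIR model by regularized least squares against new-infection data. A generic ODE integrator and a simple Tikhonov penalty are used in place of the finite element/Euler discretization of the paper; all names and values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

def sir_rhs(t, y, beta, gamma):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

def model_incidence(beta, gamma, y0, t_obs):
    """New-infection rate beta*S*I implied by the SIR model at the observation times."""
    sol = solve_ivp(sir_rhs, (t_obs[0], t_obs[-1]), y0, args=(beta, gamma),
                    t_eval=t_obs, rtol=1e-8)
    return beta * sol.y[0] * sol.y[1]

def estimate_beta(data, gamma, y0, t_obs, lam=1e-3):
    """Regularized least-squares estimate of the contact rate beta:
    minimize ||model(beta) - data||^2 + lam * beta^2 (a plain Tikhonov term,
    used here only to illustrate the regularized fit described above)."""
    def cost(beta):
        resid = model_incidence(beta, gamma, y0, t_obs) - data
        return np.sum(resid**2) + lam * beta**2
    res = minimize_scalar(cost, bounds=(1e-6, 5.0), method="bounded")
    return res.x
```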

  16. Using Bayesian variable selection to analyze regular resolution IV two-level fractional factorial designs

    DOE PAGES

    Chipman, Hugh A.; Hamada, Michael S.

    2016-06-02

    Regular two-level fractional factorial designs have complete aliasing in which the associated columns of multiple effects are identical. Here, we show how Bayesian variable selection can be used to analyze experiments that use such designs. In addition to sparsity and hierarchy, Bayesian variable selection naturally incorporates heredity. This prior information is used to identify the most likely combinations of active terms. We also demonstrate the method on simulated and real experiments.

  17. Using Bayesian variable selection to analyze regular resolution IV two-level fractional factorial designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chipman, Hugh A.; Hamada, Michael S.

    Regular two-level fractional factorial designs have complete aliasing in which the associated columns of multiple effects are identical. Here, we show how Bayesian variable selection can be used to analyze experiments that use such designs. In addition to sparsity and hierarchy, Bayesian variable selection naturally incorporates heredity. This prior information is used to identify the most likely combinations of active terms. We also demonstrate the method on simulated and real experiments.

  18. Gravitational lensing and ghost images in the regular Bardeen no-horizon spacetimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schee, Jan; Stuchlík, Zdeněk, E-mail: jan.schee@fpf.slu.cz, E-mail: zdenek.stuchlik@fpf.slu.cz

    We study deflection of light rays and gravitational lensing in the regular Bardeen no-horizon spacetimes. Flatness of these spacetimes in the central region implies existence of interesting optical effects related to photons crossing the gravitational field of the no-horizon spacetimes with low impact parameters. These effects occur due to existence of a critical impact parameter giving maximal deflection of light rays in the Bardeen no-horizon spacetimes. We give the critical impact parameter in dependence on the specific charge of the spacetimes, and discuss 'ghost' direct and indirect images of Keplerian discs, generated by photons with low impact parameters. The ghost direct images can occur only for large inclination angles of distant observers, while ghost indirect images can occur also for small inclination angles. We determine the range of the frequency shift of photons generating the ghost images and determine distribution of the frequency shift across these images. We compare them to those of the standard direct images of the Keplerian discs. The difference of the ranges of the frequency shift on the ghost and direct images could serve as a quantitative measure of the Bardeen no-horizon spacetimes. The regions of the Keplerian discs giving the ghost images are determined in dependence on the specific charge of the no-horizon spacetimes. For comparison we construct direct and indirect (ordinary and ghost) images of Keplerian discs around Reissner-Nordström naked singularities demonstrating a clear qualitative difference to the ghost direct images in the regular Bardeen no-horizon spacetimes. The optical effects related to the low impact parameter photons thus give a clear signature of the regular Bardeen no-horizon spacetimes, as no similar phenomena could occur in the black hole or naked singularity spacetimes. Similar direct ghost images have to occur in any regular no-horizon spacetimes having a nearly flat central region.

  19. How evolution learns to generalise: Using the principles of learning theory to understand the evolution of developmental organisation.

    PubMed

    Kouvaris, Kostas; Clune, Jeff; Kounios, Loizos; Brede, Markus; Watson, Richard A

    2017-04-01

    One of the most intriguing questions in evolution is how organisms exhibit suitable phenotypic variation to rapidly adapt in novel selective environments. Such variability is crucial for evolvability, but poorly understood. In particular, how can natural selection favour developmental organisations that facilitate adaptive evolution in previously unseen environments? Such a capacity suggests foresight that is incompatible with the short-sighted concept of natural selection. A potential resolution is provided by the idea that evolution may discover and exploit information not only about the particular phenotypes selected in the past, but their underlying structural regularities: new phenotypes, with the same underlying regularities, but novel particulars, may then be useful in new environments. If true, we still need to understand the conditions in which natural selection will discover such deep regularities rather than exploiting 'quick fixes' (i.e., fixes that provide adaptive phenotypes in the short term, but limit future evolvability). Here we argue that the ability of evolution to discover such regularities is formally analogous to learning principles, familiar in humans and machines, that enable generalisation from past experience. Conversely, natural selection that fails to enhance evolvability is directly analogous to the learning problem of over-fitting and the subsequent failure to generalise. We support the conclusion that evolving systems and learning systems are different instantiations of the same algorithmic principles by showing that existing results from the learning domain can be transferred to the evolution domain. Specifically, we show that conditions that alleviate over-fitting in learning systems successfully predict which biological conditions (e.g., environmental variation, regularity, noise or a pressure for developmental simplicity) enhance evolvability. This equivalence provides access to a well-developed theoretical framework from learning theory that enables a characterisation of the general conditions for the evolution of evolvability.

  20. Time-Optimized High-Resolution Readout-Segmented Diffusion Tensor Imaging

    PubMed Central

    Reishofer, Gernot; Koschutnig, Karl; Langkammer, Christian; Porter, David; Jehna, Margit; Enzinger, Christian; Keeling, Stephen; Ebner, Franz

    2013-01-01

    Readout-segmented echo planar imaging with 2D navigator-based reacquisition is an emerging technique enabling the sampling of high-resolution diffusion images with reduced susceptibility artifacts. However, low signal from the small voxels and long scan times hamper its clinical applicability. Therefore, we introduce a regularization algorithm based on total variation that is applied directly to the entire diffusion tensor. The spatially varying regularization parameter is determined automatically from spatial variations in the signal-to-noise ratio, thus avoiding over- or under-regularization. Information about the noise distribution in the diffusion tensor is extracted from the diffusion-weighted images by means of complex independent component analysis. Moreover, the combination of these features enables fully user-independent processing of the diffusion data. Tractography from in vivo data and from a software phantom demonstrates the advantage of the spatially varying regularization compared to un-regularized data with respect to parameters relevant for fiber tracking, such as Mean Fiber Length, Track Count, Volume and Voxel Count. Specifically, for in vivo data the findings suggest that tractography from the regularized diffusion tensor based on one measurement (16 min) generates results comparable to the un-regularized data with three averages (48 min). This significant reduction in scan time renders high-resolution (1×1×2.5 mm³) diffusion tensor imaging of the entire brain applicable in a clinical context. PMID:24019951

  1. A regularization approach to hydrofacies delineation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wohlberg, Brendt; Tartakovsky, Daniel

    2009-01-01

    We consider an inverse problem of identifying complex internal structures of composite (geological) materials from sparse measurements of system parameters and system states. Two conceptual frameworks for identifying internal boundaries between constitutive materials in a composite are considered. A sequential approach relies on support vector machines, nearest neighbor classifiers, or geostatistics to reconstruct boundaries from measurements of system parameters and then uses system states data to refine the reconstruction. A joint approach inverts the two data sets simultaneously by employing a regularization approach.

  2. Artificial neural network model for ozone concentration estimation and Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Gao, Meng; Yin, Liting; Ning, Jicai

    2018-07-01

    Air pollution in the urban atmosphere directly affects public health; it is therefore essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict the ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the architecture of the network of neurons had little effect on the predictive capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday and regular weekend) as input variables was identified, where the 7 input variables were selected following a forward selection procedure. Compared with the benchmarking ANN model with 9 meteorological and photochemical parameters as input variables, the predictive capability of the parsimonious ANN model was acceptable. Its predictive capability was also verified in terms of the warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
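
    A minimal sketch of a greedy forward-selection loop wrapped around a small neural-network regressor, in the spirit of the input-variable selection described above; the network size, scoring metric, and names are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

def forward_select(X, y, max_features=7, cv=5, seed=0):
    """Greedy forward selection of input variables for an ANN regressor.
    Columns of X would be candidate meteorological/temporal covariates."""
    remaining = list(range(X.shape[1]))
    selected = []
    while remaining and len(selected) < max_features:
        scores = []
        for j in remaining:
            cols = selected + [j]
            model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                 random_state=seed)
            # Higher (less negative) cross-validated score is better.
            score = cross_val_score(model, X[:, cols], y, cv=cv,
                                    scoring="neg_mean_squared_error").mean()
            scores.append((score, j))
        best_score, best_j = max(scores)
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```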

  3. Estimation of genetic parameters for heat stress, including dominance gene effects, on milk yield in Thai Holstein dairy cattle.

    PubMed

    Boonkum, Wuttigrai; Duangjinda, Monchai

    2015-03-01

    Heat stress in tropical regions is a major factor that strongly and negatively affects milk production in dairy cattle. Genetic selection for heat tolerance in dairy cattle is a powerful technique to improve genetic performance. Therefore, the current study aimed to estimate genetic parameters and investigate the threshold point of heat stress for milk yield. Data included 52 701 test-day milk yield records for the first parity from 6247 Thai Holstein dairy cattle, covering the period 1990 to 2007. A random regression test-day model with EM-REML was used to estimate variance components, genetic parameters and milk production loss. A decline in milk production was found when the temperature-humidity index (THI) exceeded a threshold of 74, and the decline was associated with a high percentage of Holstein genetics. All variance component estimates increased with THI. The estimate of heritability of test-day milk yield was 0.231. Dominance variance as a proportion of additive variance (0.035) indicated that non-additive effects might not be of concern for milk genetics studies in Thai Holstein cattle. Correlations between genetic and permanent environmental effects, for regular conditions and due to heat stress, were -0.223 and -0.521, respectively. The heritability and genetic correlations from this study show that simultaneous selection for milk production and heat tolerance is possible. © 2014 Japanese Society of Animal Science.

  4. Modified Denavit-Hartenberg parameters for better location of joint axis systems in robot arms

    NASA Technical Reports Server (NTRS)

    Barker, L. K.

    1986-01-01

    The Denavit-Hartenberg parameters define the relative location of successive joint axis systems in a robot arm. A recent justifiable criticism is that one of these parameters becomes extremely large when two successive joints have near-parallel rotational axes. Geometrically, this parameter then locates a joint axis system at an excessive distance from the robot arm and, computationally, leads to an ill-conditioned transformation matrix. In this paper, a simple modification (which results from constraining a transverse vector between successive joint rotational axes to be normal to one of the rotational axes, instead of both) overcomes this criticism and favorably locates the joint axis system. An example is given for near-parallel rotational axes of the elbow and shoulder joints in a robot arm. The regular and modified parameters are extracted by an algebraic method with simulated measurement data. Unlike the modified parameters, extracted values of the regular parameters are very sensitive to measurement accuracy.

  5. Unified Bayesian Estimator of EEG Reference at Infinity: rREST (Regularized Reference Electrode Standardization Technique)

    PubMed Central

    Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A.

    2018-01-01

    The choice of reference for the electroencephalogram (EEG) is a long-standing unsolved issue resulting in inconsistent usage and endless debate. Currently, the average reference (AR) and the reference electrode standardization technique (REST) are the two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity, and (b) determination of the reference, as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are very particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes the prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop the regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter that is the noise-to-signal variance ratio. Traditional and new estimators are evaluated with this framework, by both simulations and analysis of real resting EEGs. Toward this end, we leverage the MRI and EEG data from 89 subjects who participated in the Cuban Human Brain Mapping Project. Simulated EEGs with a known ground truth show that the relative error in estimating the EEG potentials at infinity is lowest for rREST. They also reveal that realistic volume conductor models improve the performances of REST and rREST. Importantly, for practical applications, it is shown that an average lead field gives results comparable to the individual lead field. Finally, it is shown that the selection of the regularization parameter with Generalized Cross-Validation (GCV) is close to the “oracle” choice based on the ground truth. When evaluated with the real 89 resting state EEGs, rREST consistently yields the lowest GCV. This study provides a novel perspective on the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance. PMID:29780302
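
    A minimal sketch of generalized cross-validation (GCV) for picking the regularization parameter of a Tikhonov-type linear inverse problem, the selection scheme the abstract reports to be close to the oracle choice; the matrix formulation and names are illustrative, not the rREST code.

```python
import numpy as np

def gcv_score(K, y, lam):
    """GCV score for a Tikhonov-regularized linear problem y ≈ K x with
    x_lam = (K'K + lam*I)^{-1} K' y:
        GCV(lam) = n * ||y - H_lam y||^2 / (n - trace(H_lam))^2,
    where H_lam is the influence (hat) matrix."""
    n, p = K.shape
    H = K @ np.linalg.solve(K.T @ K + lam * np.eye(p), K.T)
    resid = y - H @ y
    return n * np.sum(resid**2) / (n - np.trace(H))**2

def select_lambda_gcv(K, y, lambdas):
    """Pick the candidate regularization parameter with the smallest GCV score."""
    scores = [gcv_score(K, y, lam) for lam in lambdas]
    return lambdas[int(np.argmin(scores))]

# Example usage over a logarithmic grid of candidate parameters.
# lam_best = select_lambda_gcv(K, y, np.logspace(-6, 2, 50))
```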

  6. Fem Simulation of Triple Diffusive Natural Convection Along Inclined Plate in Porous Medium: Prescribed Surface Heat, Solute and Nanoparticles Flux

    NASA Astrophysics Data System (ADS)

    Goyal, M.; Goyal, R.; Bhargava, R.

    2017-12-01

    In this paper, triple diffusive natural convection under Darcy flow over an inclined plate embedded in a porous medium saturated with a binary base fluid containing nanoparticles and two salts is studied. The model used for the nanofluid incorporates the effects of Brownian motion and thermophoresis. In addition, the thermal energy equations include regular diffusion and cross-diffusion terms. The vertical surface has the heat, mass and nanoparticle fluxes each prescribed as a power law function of the distance along the wall. The boundary layer equations are transformed into a set of ordinary differential equations with the help of group theory transformations. A wide range of parameter values is chosen to bring out the effects of the buoyancy ratio, the regular Lewis number and the modified Dufour parameters of both salts, and of the nanofluid parameters, at varying angles of inclination. The effects of these parameters on the velocity, temperature, solutal and nanoparticle volume fraction profiles, as well as on the important parameters of heat and mass transfer, i.e., the reduced Nusselt, regular and nanofluid Sherwood numbers, are discussed. Such problems find application in the extrusion of metals, polymers and ceramics, the production of plastic films, the insulation of wires and liquid packaging.

  7. Differences in anthropometric and ultrasonographic parameters between adolescent girls with regular and irregular menstrual cycles: a case-study of 835 cases.

    PubMed

    Radivojevic, Ubavka D; Lazovic, Gordana B; Kravic-Stevovic, Tamara K; Puzigaca, Zarko D; Canovic, Fadil M; Nikolic, Rajko R; Milicevic, Srboljub M

    2014-08-01

    To explore the relation between age, time since menarche, anthropometric parameters and the growth of the uterus and ovaries in postmenarcheal girls. Cross-sectional study. Department of Human Reproduction at a tertiary pediatric referral center. Eight hundred thirty-five adolescent girls. Postmenarcheal girls were classified into 2 groups according to the regularity of their menstrual cycles (regular and irregular cycles) and compared. Anthropometric measurements and ultrasonographic examination of the pelvis were conducted for all participants. Anthropometric and ultrasonographic parameters were evaluated. The results of our study showed that girls with regular and irregular cycles differed in height, weight, body mass index, percentage of body fat and ovarian volumes. With advancing age, the size of the ovaries decreases in the group of girls with regular cycles (r = 0.14; P < .005), while it increases in girls with irregular cycles (r = 0.15; P < .001). Uterine volume in all patients increases gradually with age, reaching consistent values at 16 years (r = 0.5; P < .001). Age at menarche, the time elapsed since menarche, and the height, weight, body mass index and percentage of body fat of patients correlated with uterine volume. Ovarian volume correlated with patients' weight, BMI and percentage of fat. The uterus continues to grow in postmenarcheal years, with increasing height and weight of girls, regardless of the regularity of cycles. Postmenarcheal girls with irregular cycles were found to have heavier figures and larger ovaries. Copyright © 2014 North American Society for Pediatric and Adolescent Gynecology. Published by Elsevier Inc. All rights reserved.

  8. Multiplicative Multitask Feature Learning

    PubMed Central

    Wang, Xin; Bi, Jinbo; Yu, Shipeng; Sun, Jiangwen; Song, Minghu

    2016-01-01

    We investigate a general framework of multiplicative multitask feature learning which decomposes individual task’s model parameters into a multiplication of two components. One of the components is used across all tasks and the other component is task-specific. Several previous methods can be proved to be special cases of our framework. We study the theoretical properties of this framework when different regularization conditions are applied to the two decomposed components. We prove that this framework is mathematically equivalent to the widely used multitask feature learning methods that are based on a joint regularization of all model parameters, but with a more general form of regularizers. Further, an analytical formula is derived for the across-task component as related to the task-specific component for all these regularizers, leading to a better understanding of the shrinkage effects of different regularizers. Study of this framework motivates new multitask learning algorithms. We propose two new learning formulations by varying the parameters in the proposed framework. An efficient blockwise coordinate descent algorithm is developed suitable for solving the entire family of formulations with rigorous convergence analysis. Simulation studies have identified the statistical properties of data that would be in favor of the new formulations. Extensive empirical studies on various classification and regression benchmark data sets have revealed the relative advantages of the two new formulations by comparing with the state of the art, which provides instructive insights into the feature learning problem with multiple tasks. PMID:28428735
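
    A minimal sketch of the multiplicative decomposition, in which each task's weight vector is an elementwise product of a shared component and a task-specific component, fitted by blockwise alternating updates as described above. The particular penalties (L1 on the shared component, L2 on the task-specific components) and all names are illustrative choices, not the paper's exact formulations.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

def multiplicative_mtl(Xs, ys, lam_c=0.01, lam_v=0.1, n_outer=20):
    """Multiplicative multitask feature learning sketch: w_t = c * v_t
    (elementwise), with shared c and task-specific v_t, fit by blockwise
    coordinate-style alternating updates."""
    d = Xs[0].shape[1]
    c = np.ones(d)
    vs = [np.zeros(d) for _ in Xs]
    for _ in range(n_outer):
        # Block 1: update each task-specific v_t with c fixed
        # (ridge regression on the rescaled design X_t * diag(c)).
        for t, (X, y) in enumerate(zip(Xs, ys)):
            ridge = Ridge(alpha=lam_v, fit_intercept=False)
            ridge.fit(X * c, y)
            vs[t] = ridge.coef_
        # Block 2: update the shared c with all v_t fixed
        # (sparse fit on the stacked designs X_t * diag(v_t)).
        X_stack = np.vstack([X * v for X, v in zip(Xs, vs)])
        y_stack = np.concatenate(ys)
        lasso = Lasso(alpha=lam_c, fit_intercept=False, max_iter=5000)
        lasso.fit(X_stack, y_stack)
        c = lasso.coef_
    return c, vs
```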

  9. Phillips-Tikhonov regularization with a priori information for neutron emission tomographic reconstruction on Joint European Torus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bielecki, J.; Scholz, M.; Drozdowicz, K.

    A method of tomographic reconstruction of the neutron emissivity in the poloidal cross section of the Joint European Torus (JET, Culham, UK) tokamak was developed. Due to the very limited data set (two projection angles, 19 lines of sight only) provided by the neutron emission profile monitor (KN3 neutron camera), the reconstruction is an ill-posed inverse problem. The aim of this work consists in making a contribution to the development of reliable plasma tomography reconstruction methods that could be routinely used at the JET tokamak. The proposed method is based on Phillips-Tikhonov regularization and incorporates a priori knowledge of the shape of the normalized neutron emissivity profile. For the purpose of the optimal selection of the regularization parameters, the shape of the normalized neutron emissivity profile is approximated by the shape of the normalized electron density profile measured by the LIDAR or high resolution Thomson scattering JET diagnostics. In contrast with some previously developed methods for the ill-posed plasma tomography reconstruction problem, the developed algorithms do not include any post-processing of the obtained solution, and the physical constraints on the solution are imposed during the regularization process. The accuracy of the method is first evaluated by several tests with synthetic data based on various plasma neutron emissivity models (phantoms). Then, the method is applied to the neutron emissivity reconstruction for JET D plasma discharge #85100. It is demonstrated that this method shows good performance and reliability and it can be routinely used for plasma neutron emissivity reconstruction on JET.
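
    A minimal sketch of a Phillips-Tikhonov reconstruction with a second-difference smoothness penalty, plus a simplified parameter-selection loop that matches the normalized reconstruction to an a priori profile shape, loosely following the idea described above; the operators, the selection criterion, and all names are illustrative assumptions.

```python
import numpy as np

def second_difference(n):
    """Discrete second-difference (Phillips) regularization matrix L."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

def phillips_tikhonov(G, y, lam):
    """Solve x = argmin ||G x - y||^2 + lam * ||L x||^2 for the emissivity x."""
    n = G.shape[1]
    L = second_difference(n)
    return np.linalg.solve(G.T @ G + lam * (L.T @ L), G.T @ y)

def select_lambda_by_prior_shape(G, y, prior_shape, lambdas):
    """Pick the lam whose normalized reconstruction best matches an a priori
    normalized profile shape (e.g. one derived from electron-density
    diagnostics); a simplified stand-in for the selection scheme above."""
    best = None
    for lam in lambdas:
        x = phillips_tikhonov(G, y, lam)
        x_norm = x / max(np.abs(x).max(), 1e-12)
        mismatch = np.linalg.norm(x_norm - prior_shape)
        if best is None or mismatch < best[0]:
            best = (mismatch, lam)
    return best[1]
```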

  10. Intraventricular vector flow mapping—a Doppler-based regularized problem with automatic model selection

    NASA Astrophysics Data System (ADS)

    Assi, Kondo Claude; Gay, Etienne; Chnafa, Christophe; Mendez, Simon; Nicoud, Franck; Abascal, Juan F. P. J.; Lantelme, Pierre; Tournoux, François; Garcia, Damien

    2017-09-01

    We propose a regularized least-squares method for reconstructing 2D velocity vector fields within the left ventricular cavity from single-view color Doppler echocardiographic images. Vector flow mapping is formulated as a quadratic optimization problem based on an ℓ2-norm minimization of a cost function composed of a Doppler data-fidelity term and a regularizer. The latter contains three physically interpretable expressions related to 2D mass conservation, Dirichlet boundary conditions, and smoothness. A finite difference discretization of the continuous problem was adopted in a polar coordinate system, leading to a sparse symmetric positive-definite system. The three regularization parameters were determined automatically by analyzing the L-hypersurface, a generalization of the L-curve. The performance of the proposed method was numerically evaluated using (1) a synthetic flow composed of a mixture of divergence-free and curl-free flow fields and (2) simulated flow data from a patient-specific CFD (computational fluid dynamics) model of a human left heart. The numerical evaluations showed that the vector flow fields reconstructed from the Doppler components were in good agreement with the original velocities, with a relative error less than 20%. It was also demonstrated that a perturbation of the domain contour has little effect on the rebuilt velocity fields. The capability of our intraventricular vector flow mapping (iVFM) algorithm was finally illustrated on in vivo echocardiographic color Doppler data acquired in patients. The vortex that forms during the rapid filling was clearly deciphered. This improved iVFM algorithm is expected to have a significant clinical impact in the assessment of diastolic function.

  11. Using Tranformation Group Priors and Maximum Relative Entropy for Bayesian Glaciological Inversions

    NASA Astrophysics Data System (ADS)

    Arthern, R. J.; Hindmarsh, R. C. A.; Williams, C. R.

    2014-12-01

    One of the key advances that has allowed better simulations of the large ice sheets of Greenland and Antarctica has been the use of inverse methods. These have allowed poorly known parameters such as the basal drag coefficient and ice viscosity to be constrained using a wide variety of satellite observations. Inverse methods used by glaciologists have broadly followed one of two related approaches. The first is minimization of a cost function that describes the misfit to the observations, often accompanied by some kind of explicit or implicit regularization that promotes smallness or smoothness in the inverted parameters. The second approach is a probabilistic framework that makes use of Bayes' theorem to update prior assumptions about the probability of parameters, making use of data with known error estimates. Both approaches have much in common and questions of regularization often map onto implicit choices of prior probabilities that are made explicit in the Bayesian framework. In both approaches questions can arise that seem to demand subjective input. What should the functional form of the cost function be if there are alternatives? What kind of regularization should be applied, and how much? How should the prior probability distribution for a parameter such as basal slipperiness be specified when we know so little about the details of the subglacial environment? Here we consider some approaches that have been used to address these questions and discuss ways that probabilistic prior information used for regularizing glaciological inversions might be specified with greater objectivity.

  12. Sensitivity regularization of the Cramér-Rao lower bound to minimize B1 nonuniformity effects in quantitative magnetization transfer imaging.

    PubMed

    Boudreau, Mathieu; Pike, G Bruce

    2018-05-07

    To develop and validate a regularization approach for optimizing B1 insensitivity of the quantitative magnetization transfer (qMT) pool-size ratio (F). An expression describing the impact of B1 inaccuracies on qMT fitting parameters was derived using a sensitivity analysis. To simultaneously optimize for robustness against noise and B1 inaccuracies, the optimization condition was defined as the Cramér-Rao lower bound (CRLB) regularized by the B1-sensitivity expression for the parameter of interest (F). The qMT protocols were iteratively optimized from an initial search space, with and without B1 regularization. Three 10-point qMT protocols (Uniform, CRLB, CRLB+B1 regularization) were compared using Monte Carlo simulations for a wide range of conditions (e.g., SNR, B1 inaccuracies, tissues). The B1-regularized CRLB optimization protocol resulted in the best robustness of F against B1 errors, for a wide range of SNR and for both white matter and gray matter tissues. For SNR = 100, this protocol resulted in errors of less than 1% in mean F values for B1 errors ranging between -10 and 20%, the range of B1 values typically observed in vivo in the human head at field strengths of 3 T and less. Both CRLB-optimized protocols resulted in the lowest σF values for all SNRs, and these did not increase in the presence of B1 inaccuracies. This work demonstrates a regularized optimization approach for reducing the sensitivity of qMT parameters, particularly the pool-size ratio (F), to auxiliary measurements such as B1. Given the substantially lower B1 sensitivity predicted for protocols optimized with this method, B1 mapping could even be omitted for qMT studies primarily interested in F. © 2018 International Society for Magnetic Resonance in Medicine.

  13. On the modeling and nonlinear dynamics of autonomous Silva-Young type chaotic oscillators with flat power spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kengne, Jacques; Kenmogne, Fabien

    2014-12-15

    The nonlinear dynamics of fourth-order Silva-Young type chaotic oscillators with flat power spectrum recently introduced by Tamaseviciute and collaborators is considered. In this type of oscillator, a pair of semiconductor diodes in an anti-parallel connection acts as the nonlinear component necessary for generating chaotic oscillations. Based on the Shockley diode equation and an appropriate selection of the state variables, a smooth mathematical model (involving hyperbolic sine and cosine functions) is derived for a better description of both the regular and chaotic dynamics of the system. The complex behavior of the oscillator is characterized in terms of its parameters by using time series, bifurcation diagrams, Lyapunov exponents' plots, Poincaré sections, and frequency spectra. It is shown that the onset of chaos is achieved via the classical period-doubling and symmetry restoring crisis scenarios. Some PSPICE simulations of the nonlinear dynamics of the oscillator are presented in order to confirm the ability of the proposed mathematical model to accurately describe/predict both the regular and chaotic behaviors of the oscillator.

  14. Spectral X-ray Radiography for Safeguards at Nuclear Fuel Fabrication Facilities: A Feasibility Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, Andrew J.; McDonald, Benjamin S.; Smith, Leon E.

    The methods currently used by the International Atomic Energy Agency to account for nuclear materials at fuel fabrication facilities are time consuming and require in-field chemistry and operation by experts. Spectral X-ray radiography, along with advanced inverse algorithms, is an alternative inspection that could be completed noninvasively, without any in-field chemistry, with inspections of tens of seconds. The proposed inspection system and algorithms are presented here. The inverse algorithm uses total variation regularization and adaptive regularization parameter selection with the unbiased predictive risk estimator. Performance of the system is quantified with simulated X-ray inspection data, and sensitivity of the output is tested against various inspection system instabilities. Material quantification from a fully-characterized inspection system is shown to be very accurate, with biases on nuclear material estimations of < 0.02%. It is shown that the results are sensitive to variations in the fuel powder sample density and detector pixel gain, which increase biases to 1%. Options to mitigate these inaccuracies are discussed.

  15. Fast Spatial Resolution Analysis of Quadratic Penalized Least-Squares Image Reconstruction With Separate Real and Imaginary Roughness Penalty: Application to fMRI.

    PubMed

    Olafsson, Valur T; Noll, Douglas C; Fessler, Jeffrey A

    2018-02-01

    Penalized least-squares iterative image reconstruction algorithms used for spatial resolution-limited imaging, such as functional magnetic resonance imaging (fMRI), commonly use a quadratic roughness penalty to regularize the reconstructed images. When used for complex-valued images, the conventional roughness penalty regularizes the real and imaginary parts equally. However, these imaging methods sometimes benefit from separate penalties for each part. The spatial smoothness from the roughness penalty on the reconstructed image is dictated by the regularization parameter(s). One method to set the parameter to a desired smoothness level is to evaluate the full width at half maximum of the reconstruction method's local impulse response. Previous work has shown that when using the conventional quadratic roughness penalty, one can approximate the local impulse response using an FFT-based calculation. However, that acceleration method cannot be applied directly for separate real and imaginary regularization. This paper proposes a fast and stable calculation for this case that also uses FFT-based calculations to approximate the local impulse responses of the real and imaginary parts. This approach is demonstrated with a quadratic image reconstruction of fMRI data that uses separate roughness penalties for the real and imaginary parts.

  16. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    PubMed

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
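
    A minimal sketch of a regularized t-like statistic in which per-variable variances are shrunk toward a locally pooled estimate. The grouping of variables by their overall mean and the fixed shrinkage weight are crude stand-ins for the similarity-statistic clustering and joint moment regularization of the MVR procedure, used purely for illustration.

```python
import numpy as np

def regularized_t(x, y, n_bins=10):
    """Regularized two-sample t-like statistic per variable.
    x, y: arrays of shape (n_variables, n_samples_per_group)."""
    mx, my = x.mean(axis=1), y.mean(axis=1)
    vx, vy = x.var(axis=1, ddof=1), y.var(axis=1, ddof=1)
    nx, ny = x.shape[1], y.shape[1]
    pooled = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    # Group variables with similar overall means and pool variances per group.
    overall_mean = (nx * mx + ny * my) / (nx + ny)
    edges = np.quantile(overall_mean, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(overall_mean, edges)
    local_pool = np.array([pooled[bins == b].mean() for b in bins])
    # Fixed 50/50 shrinkage toward the local pooled variance (illustrative).
    var_shrunk = 0.5 * pooled + 0.5 * local_pool
    se = np.sqrt(var_shrunk * (1.0 / nx + 1.0 / ny))
    return (mx - my) / se
```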

  17. Parameter identification in ODE models with oscillatory dynamics: a Fourier regularization approach

    NASA Astrophysics Data System (ADS)

    Chiara D'Autilia, Maria; Sgura, Ivonne; Bozzini, Benedetto

    2017-12-01

    In this paper we consider a parameter identification problem (PIP) for data oscillating in time that can be described in terms of the dynamics of some ordinary differential equation (ODE) model, resulting in an optimization problem constrained by the ODEs. In problems with this type of data structure, simple application of the direct method of control theory (discretize-then-optimize) yields a least-squares cost function exhibiting multiple ‘low’ minima. Since in this situation any optimization algorithm is liable to fail in the approximation of a good solution, here we propose a Fourier regularization approach that is able to identify an iso-frequency manifold S of codimension one in the parameter space …

  18. Space-dependent perfusion coefficient estimation in a 2D bioheat transfer problem

    NASA Astrophysics Data System (ADS)

    Bazán, Fermín S. V.; Bedin, Luciano; Borges, Leonardo S.

    2017-05-01

    In this work, a method for estimating the space-dependent perfusion coefficient parameter in a 2D bioheat transfer model is presented. In the method, the bioheat transfer model is transformed into a time-dependent semidiscrete system of ordinary differential equations involving perfusion coefficient values as parameters, and the estimation problem is solved through a nonlinear least squares technique. In particular, the bioheat problem is solved by the method of lines based on a highly accurate pseudospectral approach, and perfusion coefficient values are estimated by the regularized Gauss-Newton method coupled with a proper regularization parameter. The performance of the method on several test problems is illustrated numerically.

  19. Semantic Drift in Espresso-style Bootstrapping: Graph-theoretic Analysis and Evaluation in Word Sense Disambiguation

    NASA Astrophysics Data System (ADS)

    Komachi, Mamoru; Kudo, Taku; Shimbo, Masashi; Matsumoto, Yuji

    Bootstrapping has a tendency, called semantic drift, to select instances unrelated to the seed instances as the iteration proceeds. We demonstrate that the semantic drift of Espresso-style bootstrapping has the same root as the topic drift of Kleinberg's HITS, using a simplified graph-based reformulation of bootstrapping. We confirm that two graph-based algorithms, the von Neumann kernels and the regularized Laplacian, can reduce the effect of semantic drift in the task of word sense disambiguation (WSD) on the Senseval-3 English Lexical Sample Task. The proposed algorithms achieve superior performance to Espresso and previous graph-based WSD methods, even though the proposed algorithms have fewer parameters and are easy to calibrate.

  20. Iterative Nonlocal Total Variation Regularization Method for Image Restoration

    PubMed Central

    Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen

    2013-01-01

    In this paper, a Bregman iteration based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experimental results show that the proposed algorithms outperform some other regularization methods. PMID:23776560

  1. Dynamic Cross-Entropy.

    PubMed

    Aur, Dorian; Vila-Rodriguez, Fidel

    2017-01-01

    Complexity measures for time series have been used in many applications to quantify the regularity of one-dimensional time series; however, many dynamical systems are spatially distributed multidimensional systems. We introduce Dynamic Cross-Entropy (DCE), a novel multidimensional complexity measure that quantifies the degree of regularity of EEG signals in selected frequency bands. Time series generated by discrete logistic equations with varying control parameter r are used to test DCE measures. Sliding-window DCE analyses are able to reveal specific period-doubling bifurcations that lead to chaos. A similar behavior can be observed in seizures triggered by electroconvulsive therapy (ECT). Sample entropy data show the level of signal complexity in different phases of the ictal ECT. The transition to irregular activity is preceded by the occurrence of cyclic regular behavior. A significant increase of DCE values in successive order from high frequencies in the gamma band to low frequencies in the delta band reveals several phase transitions into less ordered states, possible chaos in the human brain. To our knowledge there are no reliable techniques able to reveal the transition to chaos in the case of multidimensional time series. In addition, DCE based on sample entropy appears to be robust to EEG artifacts compared to DCE based on Shannon entropy. The applied technique may offer new approaches to better understand nonlinear brain activity. Copyright © 2016 Elsevier B.V. All rights reserved.
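
    A minimal sketch of the sample entropy computation that the abstract uses as the building block of DCE (applied there per frequency band); the tolerance convention r = 0.2·std is a common default assumed here, not necessarily the study's setting.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1-D series: -ln(A/B), where B counts pairs of
    length-m templates within tolerance r (Chebyshev distance) and A counts
    the same for templates of length m+1."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            # Compare template i against all later templates (no self-matches).
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```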

  2. SU-E-J-67: Evaluation of Breathing Patterns for Respiratory-Gated Radiation Therapy Using Respiration Regularity Index

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheong, K; Lee, M; Kang, S

    2014-06-01

    Purpose: Despite the importance of accurately estimating the respiration regularity of a patient in motion compensation treatment, an effective and simply applicable method has rarely been reported. The authors propose a simple respiration regularity index based on parameters derived from a correspondingly simplified respiration model. Methods: In order to simplify a patient's breathing pattern while preserving the data's intrinsic properties, we defined a respiration model as a power of cosine form with a baseline drift. According to this respiration formula, breathing-pattern fluctuation could be explained using four factors: the sample standard deviation of the respiration period, the sample standard deviation of the amplitude, and the results of a simple regression of the baseline drift (slope and standard deviation of residuals) of a respiration signal. Overall irregularity (δ) was defined as the Euclidean norm of a newly derived variable obtained using principal component analysis (PCA) of the four fluctuation parameters. Finally, the proposed respiration regularity index was defined as ρ = ln(1 + 1/δ)/2, a higher ρ indicating a more regular breathing pattern. Subsequently, we applied it to simulated and clinical respiration signals from real-time position management (RPM; Varian Medical Systems, Palo Alto, CA) and investigated respiration regularity. Moreover, correlations between the regularity of the first session and the remaining fractions were investigated using Pearson's correlation coefficient. Results: The respiration regularity was determined based on ρ; patients with ρ<0.3 showed worse regularity than the others, whereas ρ>0.7 was suitable for respiratory-gated radiation therapy (RGRT). Fluctuations in breathing cycle and amplitude were especially determinative of ρ. If the respiration regularity of a patient's first session was known, it could be estimated through subsequent sessions. Conclusions: Respiration regularity could be objectively determined using a respiration regularity index, ρ. Such single-index testing of respiration regularity can facilitate determination of RGRT availability in clinical settings, especially for free-breathing cases. This work was supported by a Korea Science and Engineering Foundation (KOSEF) grant funded by the Korean Ministry of Science, ICT and Future Planning (No. 2013043498).
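
    A minimal sketch of computing the four fluctuation parameters and the index ρ = ln(1 + 1/δ)/2 from a breathing trace. The peak detection settings and the user-supplied scaling factors (standing in for the PCA step performed across patients in the paper) are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import linregress

def regularity_index(signal, fs, scales):
    """Respiration regularity index rho = ln(1 + 1/delta)/2 from a 1-D
    breathing trace sampled at fs Hz. `scales` holds four scaling factors,
    one per fluctuation parameter, used here instead of the PCA projection."""
    t = np.arange(len(signal)) / fs
    # Assume at most one breath peak per second (illustrative constraint).
    peaks, _ = find_peaks(signal, distance=int(fs))
    periods = np.diff(t[peaks])                   # breath-to-breath periods
    amplitudes = signal[peaks]
    fit = linregress(t, signal)                   # baseline drift regression
    residuals = signal - (fit.slope * t + fit.intercept)
    params = np.array([periods.std(ddof=1),       # period fluctuation
                       amplitudes.std(ddof=1),    # amplitude fluctuation
                       fit.slope,                 # baseline drift slope
                       residuals.std(ddof=1)])    # drift residual spread
    delta = np.linalg.norm(params / scales)       # overall irregularity
    return np.log(1.0 + 1.0 / delta) / 2.0
```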

  3. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionality of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
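
    A minimal sketch of the two-stage regularized fit described above, using L1 penalties in both stages; the penalty levels and names are illustrative assumptions, and the concave-penalty variants discussed in the paper are not sketched here.

```python
import numpy as np
from sklearn.linear_model import Lasso

def two_stage_regularized_iv(Z, X, y, alpha1=0.01, alpha2=0.01):
    """Two-stage regularized instrumental-variables sketch.
    Stage 1: regress each covariate on the instruments Z with an L1 penalty
    to build (sparse) optimal-instrument predictions.
    Stage 2: regress y on the predicted covariates with an L1 penalty."""
    n, p = X.shape
    X_hat = np.empty_like(X, dtype=float)
    for j in range(p):                               # stage 1: X_j ~ Z
        stage1 = Lasso(alpha=alpha1, max_iter=5000)
        stage1.fit(Z, X[:, j])
        X_hat[:, j] = stage1.predict(Z)
    stage2 = Lasso(alpha=alpha2, max_iter=5000)      # stage 2: y ~ X_hat
    stage2.fit(X_hat, y)
    return stage2.coef_
```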

  4. A universal deep learning approach for modeling the flow of patients under different severities.

    PubMed

    Jiang, Shancheng; Chin, Kwai-Sang; Tsui, Kwok L

    2018-02-01

    The Accident and Emergency Department (A&ED) is the frontline for providing emergency care in hospitals. Unfortunately, relative A&ED resources have failed to keep up with continuously increasing demand in recent years, which leads to overcrowding in A&ED. Knowing the fluctuation of patient arrival volume in advance is an important prerequisite for relieving this pressure. Based on this motivation, the objective of this study is to explore an integrated framework with high accuracy for predicting A&ED patient flow under different triage levels, by combining a novel feature selection process with deep neural networks. Administrative data is collected from an actual A&ED and categorized into five groups based on different triage levels. A genetic algorithm (GA)-based feature selection algorithm is improved and implemented as a pre-processing step for this time-series prediction problem, in order to explore key features affecting patient flow. In our improved GA, a fitness-based crossover is proposed to maintain the joint information of multiple features during the iterative process, instead of the traditional point-based crossover. Deep neural networks (DNNs) are employed as the prediction model to utilize their universal adaptability and high flexibility. In the model-training process, the learning algorithm is well-configured based on a parallel stochastic gradient descent algorithm. Two effective regularization strategies are integrated in one DNN framework to avoid overfitting. All introduced hyper-parameters are optimized efficiently by grid-search in one pass. As for feature selection, our improved GA-based feature selection algorithm has outperformed a typical GA and four state-of-the-art feature selection algorithms (mRMR, SAFS, VIFR, and CFR). As for the prediction accuracy of the proposed integrated framework, compared with other frequently used statistical models (GLM, seasonal-ARIMA, ARIMAX, and ANN) and modern machine learning models (SVM-RBF, SVM-linear, RF, and R-LASSO), the proposed integrated "DNN-I-GA" framework achieves higher prediction accuracy on both MAPE and RMSE metrics in pairwise comparisons. The contribution of our study is two-fold. Theoretically, the traditional GA-based feature selection process is improved to have fewer hyper-parameters and higher efficiency, and the joint information of multiple features is maintained by the fitness-based crossover operator. The universal property of DNN is further enhanced by merging different regularization strategies. Practically, features selected by our improved GA can be used to acquire an underlying relationship between patient flows and input features. Predictive values are significant indicators of patients' demand and can be used by A&ED managers for resource planning and allocation. High accuracy achieved by the present framework in different cases enhances the reliability of downstream decision making. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
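
    A minimal sketch of the iteratively regularized Gauss-Newton iteration with a geometrically decreasing regularization sequence. The noise-level-free stopping score used below (a Hanke-Raus-type ratio of residual norm to the square root of the regularization parameter) is an illustrative stand-in and is not claimed to be the specific heuristic rule analyzed in the paper.

```python
import numpy as np

def irgn_heuristic(F, J, y, x0, alpha0=1.0, q=0.7, n_iter=25):
    """Iteratively regularized Gauss-Newton sketch for F(x) ≈ y:
        x_{k+1} = x_k + (J'J + a_k I)^{-1} (J'(y - F(x_k)) + a_k (x0 - x_k)),
    with a_k = alpha0 * q**k. The returned iterate minimizes a heuristic,
    noise-level-free score over all computed iterates."""
    x = x0.copy()
    iterates, scores = [], []
    for k in range(n_iter):
        a_k = alpha0 * q**k
        Jk = J(x)                                    # Jacobian at current iterate
        rhs = Jk.T @ (y - F(x)) + a_k * (x0 - x)
        step = np.linalg.solve(Jk.T @ Jk + a_k * np.eye(len(x)), rhs)
        x = x + step
        iterates.append(x.copy())
        scores.append(np.linalg.norm(y - F(x)) / np.sqrt(a_k))
    return iterates[int(np.argmin(scores))]
```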

  6. A regularity result for fixed points, with applications to linear response

    NASA Astrophysics Data System (ADS)

    Sedro, Julien

    2018-04-01

    In this paper, we show a series of abstract results on fixed point regularity with respect to a parameter. They are based on a Taylor development taking into account a loss of regularity phenomenon, typically occurring for composition operators acting on spaces of functions with finite regularity. We generalize this approach to higher order differentiability, through the notion of an n-graded family. We then give applications to the fixed point of a nonlinear map, and to linear response in the context of (uniformly) expanding dynamics (theorem 3 and corollary 2), in the spirit of Gouëzel-Liverani.

  7. Controlled wavelet domain sparsity for x-ray tomography

    NASA Astrophysics Data System (ADS)

    Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli

    2018-01-01

    Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. The primal-dual fixed point algorithm showed that the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter …
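
    A minimal sketch of an iterative soft-thresholding loop for wavelet-domain sparsity, using a single-level orthonormal Haar transform purely for illustration; the paper's primal-dual fixed point algorithm, wavelet choice, and soft-thresholding parameter selection are not reproduced here.

```python
import numpy as np

def haar_forward(x):
    """Single-level orthonormal Haar transform of an even-length 1-D signal."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return np.concatenate([a, d])

def haar_inverse(c):
    """Inverse of haar_forward."""
    n = len(c) // 2
    a, d = c[:n], c[n:]
    x = np.empty(2 * n)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def ista_wavelet(A, b, mu, n_iter=300):
    """ISTA sketch for min_x 0.5*||A x - b||^2 + mu*||W x||_1 with an
    orthonormal wavelet transform W (here: single-level Haar, so the signal
    length A.shape[1] is assumed even)."""
    L = np.linalg.norm(A, 2)**2              # Lipschitz constant of the data term
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        z = haar_forward(x - grad / L)
        z = np.sign(z) * np.maximum(np.abs(z) - mu / L, 0.0)   # soft-thresholding
        x = haar_inverse(z)
    return x
```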

  8. Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow

    NASA Astrophysics Data System (ADS)

    Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar

    2014-09-01

    We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. Adjoint (linearized) Stokes equations, which are characterized by a 4th-order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton’s method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable. Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.

  9. Computing sensitivity and selectivity in parallel factor analysis and related multiway techniques: the need for further developments in net analyte signal theory.

    PubMed

    Olivieri, Alejandro C

    2005-08-01

    Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparing the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte signal theory. The results have strong implications for the planning of multiway analytical experiments.

  10. Selecting focal species as surrogates for imperiled species using relative sensitivities derived from occupancy analysis

    USGS Publications Warehouse

    Silvano, Amy; Guyer, Craig; Steury, Todd; Grand, James B.

    2017-01-01

    Most imperiled species are rare or elusive and difficult to detect, which makes gathering data to estimate their response to habitat restoration a challenge. We used a repeatable, systematic method for selecting focal species using relative sensitivities derived from occupancy analysis. Our objective was to select suites of focal species that would be useful as surrogates when predicting effects of restoration of habitat characteristics preferred by imperiled species. We developed 27 habitat profiles that represent general habitat relationships for 118 imperiled species. We identified 23 regularly encountered species that were sensitive to important aspects of those profiles. We validated our approach by examining the correlation between estimated probabilities of occupancy for species of concern and focal species selected using our method. Occupancy rates of focal species were more related to occupancy rates of imperiled species when they were sensitive to more of the parameters appearing in profiles of imperiled species. We suggest that this approach can be an effective means of predicting responses by imperiled species to proposed management actions. However, adequate monitoring will be required to determine the effectiveness of using focal species to guide management actions.

  11. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data

    PubMed Central

    Dazard, Jean-Eudes; Rao, J. Sunil

    2012-01-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput “omics” data, namely parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel “similarity statistic”-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, than regular common-value shrinkage estimators, or than when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called ‘MVR’ (‘Mean-Variance Regularization’), downloadable from the CRAN website. PMID:22711950
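
    As a rough illustration of the joint regularization idea (and not the MVR package's actual algorithm), the sketch below shrinks variable-wise means and variances toward values pooled within hypothetical variable clusters; the clustering labels, the shrinkage weight, and the simulated data are all assumptions.

    ```python
    # Toy sketch of cluster-pooled mean-variance shrinkage for "omics"-style data
    # (many variables, few samples); settings are illustrative assumptions.
    import numpy as np

    def shrink_by_cluster(x, labels, lam=0.5):
        """x: (n_samples, n_variables) data; labels: cluster id per variable."""
        means = x.mean(axis=0)
        variances = x.var(axis=0, ddof=1)
        shrunk_mean, shrunk_var = means.copy(), variances.copy()
        for c in np.unique(labels):
            idx = labels == c
            shrunk_mean[idx] = (1 - lam) * means[idx] + lam * means[idx].mean()
            shrunk_var[idx] = (1 - lam) * variances[idx] + lam * variances[idx].mean()
        return shrunk_mean, shrunk_var

    rng = np.random.default_rng(0)
    data = rng.standard_normal((10, 200))          # 10 samples, 200 variables
    labels = rng.integers(0, 5, size=200)          # hypothetical variable clusters
    m, v = shrink_by_cluster(data, labels)
    print(m.shape, v.shape)
    ```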

  12. Implementing the Regular Education Initiative in Secondary Schools: A Different Ball Game.

    ERIC Educational Resources Information Center

    Schumaker, Jean B.; Deshler, Donald D.

    1988-01-01

    The article reviews potential barriers to implementing the Regular Education Initiative (REI) in secondary schools and then discusses a set of factors central to developing a workable partnership, one that is compatible with the goals of the REI but that also responds to the unique parameters of secondary schools. (Author/DB)

  13. An interior-point method for total variation regularized positron emission tomography image reconstruction

    NASA Astrophysics Data System (ADS)

    Bai, Bing

    2012-03-01

    There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Much of it uses gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization to Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. We then use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region is found by solving a sequence of subproblems characterized by an increasing positive parameter. We use a preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by a bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges quickly and that the convergence is insensitive to the values of the regularization and reconstruction parameters.

  14. Feasibility of inverse problem solution for determination of city emission function from night sky radiance measurements

    NASA Astrophysics Data System (ADS)

    Petržala, Jaromír

    2018-07-01

    The knowledge of the emission function of a city is crucial for the simulation of sky glow in its vicinity. Indirect methods to retrieve this function from radiances measured over a part of the sky have recently been developed. In principle, such methods represent an ill-posed inverse problem. This paper presents a theoretical feasibility study of various approaches to solving this inverse problem. In particular, it tests the suitability of various stabilizing functionals within Tikhonov regularization. Further, the L-curve and generalized cross-validation methods were investigated as indicators of an optimal regularization parameter. First, we created a theoretical model that expresses the sky spectral radiance as a functional of the emission spectral radiance. Then, all of these approaches were examined in numerical experiments with synthetic data generated for a fictitious city and contaminated with random errors. The results demonstrate that the second-order Tikhonov regularization method, together with the choice of the regularization parameter by the L-curve maximum-curvature criterion, provides solutions that are in good agreement with the assumed model emission functions.
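
    The L-curve maximum-curvature criterion mentioned above can be illustrated with a small dense Tikhonov example. The sketch below uses a second-order difference operator as the stabilizing functional and picks the regularization parameter at the point of maximum discrete curvature of the (log residual norm, log seminorm) curve; the forward operator, noise level, and lambda grid are illustrative assumptions rather than the emission-function model of the paper.

    ```python
    # Hedged sketch: Tikhonov regularization with the L-curve corner located by
    # maximum discrete curvature; the test problem below is synthetic.
    import numpy as np

    def tikhonov(A, b, L, lam):
        return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)

    def l_curve_corner(A, b, L, lambdas):
        xs = [tikhonov(A, b, L, lam) for lam in lambdas]
        rho = np.log([np.linalg.norm(A @ x - b) for x in xs])   # log residual norms
        eta = np.log([np.linalg.norm(L @ x) for x in xs])       # log seminorms
        d1r, d1e = np.gradient(rho), np.gradient(eta)
        d2r, d2e = np.gradient(d1r), np.gradient(d1e)
        kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5  # discrete curvature
        return lambdas[np.argmax(kappa)]

    rng = np.random.default_rng(0)
    n = 50
    A = np.tril(np.ones((n, n))) / n                 # smoothing (integration-like) operator
    x_true = np.sin(np.linspace(0, np.pi, n))
    b = A @ x_true + 1e-3 * rng.standard_normal(n)
    L = np.diff(np.eye(n), 2, axis=0)                # second-order difference operator
    print(l_curve_corner(A, b, L, np.logspace(-6, 1, 40)))
    ```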

  15. Propulsion and trapping of microparticles by active cilia arrays.

    PubMed

    Bhattacharya, Amitabh; Buxton, Gavin A; Usta, O Berk; Balazs, Anna C

    2012-02-14

    We model the transport of a microscopic particle via a regular array of beating elastic cilia, whose tips experience an adhesive interaction with the particle's surface. At optimal adhesion strength, the average particle velocity is maximized. Using simulations spanning a range of cilia stiffness and cilia-particle adhesion strength, we explore the parameter space over which the particle can be "released", "propelled", or "trapped" by the cilia. We use a lower-order model to predict parameters for which the cilia are able to "propel" the particle. This is the first study that shows how both stiffness and adhesion strength are crucial for manipulation of particles by active cilia arrays. These results can facilitate the design of synthetic cilia that integrate adhesive and hydrodynamic interactions to selectively repel or trap particulates. Surfaces that are effective at repelling particulates are valuable for antifouling applications, while surfaces that can trap and, thus, remove particulates from the solution are useful for efficient filtration systems.

  16. Preference mapping of lemon lime carbonated beverages with regular and diet beverage consumers.

    PubMed

    Leksrisompong, P P; Lopetcharat, K; Guthrie, B; Drake, M A

    2013-02-01

    The drivers of liking of lemon-lime carbonated beverages were investigated with regular and diet beverage consumers. Ten beverages were selected from a category survey of commercial beverages using a D-optimal procedure. Beverages were subjected to consumer testing (n = 101 regular beverage consumers, n = 100 diet beverage consumers). Segmentation of consumers was performed on overall liking scores followed by external preference mapping of selected samples. Diet beverage consumers liked 2 diet beverages more than regular beverage consumers. There were no differences in the overall liking scores between diet and regular beverage consumers for other products except for a sparkling beverage sweetened with juice which was more liked by regular beverage consumers. Three subtle but distinct consumer preference clusters were identified. Two segments had evenly distributed diet and regular beverage consumers but one segment had a greater percentage of regular beverage consumers (P < 0.05). The 3 preference segments were named: cluster 1 (C1) sweet taste and carbonation mouthfeel lovers, cluster 2 (C2) carbonation mouthfeel lovers, sweet and bitter taste acceptors, and cluster 3 (C3) bitter taste avoiders, mouthfeel and sweet taste lovers. User status (diet or regular beverage consumers) did not have a large impact on carbonated beverage liking. Instead, mouthfeel attributes were major drivers of liking when these beverages were tested in a blind tasting. Preference mapping of lemon-lime carbonated beverage with diet and regular beverage consumers allowed the determination of drivers of liking of both populations. The understanding of how mouthfeel attributes, aromatics, and basic tastes impact liking or disliking of products was achieved. Preference drivers established in this study provide product developers of carbonated lemon-lime beverages with additional information to develop beverages that may be suitable for different groups of consumers. © 2013 Institute of Food Technologists®

  17. Deflection of light by rotating regular black holes using the Gauss-Bonnet theorem

    NASA Astrophysics Data System (ADS)

    Jusufi, Kimet; Övgün, Ali; Saavedra, Joel; Vásquez, Yerko; González, P. A.

    2018-06-01

    In this paper, we study weak gravitational lensing in the spacetime of rotating regular black hole geometries such as the Ayon-Beato-García (ABG), Bardeen, and Hayward black holes. We calculate the deflection angle of light using the Gauss-Bonnet theorem (GBT) and show that the deflection of light can be viewed as a partially topological effect, in which the deflection angle can be calculated by applying the GBT to a domain outside the light ray in the black hole optical geometries. We then also derive the deflection angle via the geodesic formalism for these black holes, to verify our results and to explore the differences with the Kerr solution. These black holes have, in addition to the total mass and rotation parameter, different parameters of electric charge, magnetic charge, and deviation parameter. We find that the deflection of light has correction terms coming from these parameters, which generalizes the Kerr deflection angle.

  18. Ultrasonographic anatomy of the healthy southern tigrina ( Leopardus guttulus) abdomen: comparison with domestic cat references.

    PubMed

    Müller, Thiago R; Marcelino, Raquel S; de Souza, Livia P; Teixeira, Carlos R; Mamprim, Maria J

    2017-02-01

    Objectives The aim of the study was to describe the normal abdominal echoanatomy of the tigrina and to compare it with the abdominal echoanatomy of the domestic cat. Reference intervals for the normal abdominal ultrasonographic anatomy of individual species are important for accurate diagnoses and interpretation of routine health examinations. The hypothesis was that the echoanatomy of the tigrina was similar to that of the domestic cat. Methods Eighteen clinically healthy tigrina were selected for abdominal ultrasound examination, in order to obtain normal parameters of the bladder, spleen, adrenal gland, kidney, gastrointestinal tract, liver and gall bladder, and Doppler parameters of liver and kidney vessels. Results The splenic parenchyma was consistently hyperechoic to the kidneys and liver. The liver, kidneys and spleen had similar echotexture, shape and dimensions when compared with the domestic cat. The gall bladder was lobulated and surrounded by a clearly visualized thin, smooth, regular echogenic wall. The adrenal glands had a bilobulated shape. The urinary bladder had a thin echogenic wall. The Doppler parameters of the portal vein and renal artery were similar to the domestic cat. Conclusions and relevance The results support the hypothesis that the ultrasonographic parameters of the abdominal viscera of the southern tigrina are similar to those of the domestic cat.

  19. Ill-posed problem and regularization in reconstruction of radiobiological parameters from serial tumor imaging data

    NASA Astrophysics Data System (ADS)

    Chvetsov, Alevei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh

    2015-11-01

    The main objective of this article is to improve the stability of reconstruction algorithms for the estimation of radiobiological parameters using serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth, and cell loss. Accurate assessment of treatment response would require separation of these processes because they define the radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters using imaging data can be considered an inverse ill-posed problem, because a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of reconstruction of radiobiological parameters presents a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for surviving fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, where only cell surviving fractions were reconstructed. We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables the development of more advanced parameter reconstruction algorithms.

  20. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    NASA Astrophysics Data System (ADS)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

    The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing in more depth the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
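
    To give a concrete sense of the singular value decomposition filtering step, the sketch below compresses a synthetic 2D relaxation data set by projecting it onto the leading singular subspaces of the two kernels before any Tikhonov inversion takes place; the kernel shapes, grids, and truncation ranks are assumptions and do not reproduce the I2DUPEN settings.

    ```python
    # Illustrative SVD-based compression of 2D NMR-style data prior to inversion.
    import numpy as np

    def svd_compress(K1, K2, S, rank1, rank2):
        U1, s1, Vt1 = np.linalg.svd(K1, full_matrices=False)
        U2, s2, Vt2 = np.linalg.svd(K2, full_matrices=False)
        K1c = np.diag(s1[:rank1]) @ Vt1[:rank1]          # compressed kernels
        K2c = np.diag(s2[:rank2]) @ Vt2[:rank2]
        Sc = U1[:, :rank1].T @ S @ U2[:, :rank2]         # compressed data
        return K1c, K2c, Sc

    # Toy kernels: exponential relaxation over assumed T1/T2 grids.
    t1, t2 = np.linspace(0.01, 1, 64), np.linspace(0.01, 1, 64)
    T1, T2 = np.linspace(0.05, 2, 40), np.linspace(0.05, 2, 40)
    K1 = 1 - 2 * np.exp(-np.outer(t1, 1 / T1))           # inversion-recovery-like kernel
    K2 = np.exp(-np.outer(t2, 1 / T2))                   # CPMG-decay-like kernel
    S = K1 @ np.ones((40, 40)) @ K2.T                    # synthetic noiseless data
    K1c, K2c, Sc = svd_compress(K1, K2, S, rank1=8, rank2=8)
    print(K1c.shape, K2c.shape, Sc.shape)
    ```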

  1. Slow dynamics and regularization phenomena in ensembles of chaotic neurons

    NASA Astrophysics Data System (ADS)

    Rabinovich, M. I.; Varona, P.; Torres, J. J.; Huerta, R.; Abarbanel, H. D. I.

    1999-02-01

    We have explored the role of calcium concentration dynamics in the generation of chaos and in the regularization of bursting oscillations using a minimal neural circuit of two coupled model neurons. In regions of the control parameter space where the slowest component, namely the calcium concentration in the endoplasmic reticulum, depends only weakly on the other variables, this model is analogous to three-dimensional systems such as those found in [1] or [2]. These are minimal models that describe the fundamental characteristics of the chaotic spiking-bursting behavior observed in real neurons. We have investigated different regimes of cooperative behavior in large assemblies of such units using a lattice of non-identical Hindmarsh-Rose neurons, electrically coupled, with parameters chosen randomly inside the chaotic region. We study the regularization mechanisms in large assemblies and the development of several spatio-temporal patterns as a function of the interconnectivity among nearest neighbors.

  2. Dimensional regularization of the IR divergences in the Fokker action of point-particle binaries at the fourth post-Newtonian order

    NASA Astrophysics Data System (ADS)

    Bernard, Laura; Blanchet, Luc; Bohé, Alejandro; Faye, Guillaume; Marsat, Sylvain

    2017-11-01

    The Fokker action of point-particle binaries at the fourth post-Newtonian (4PN) approximation of general relativity has been determined previously. However, two ambiguity parameters associated with infrared (IR) divergences of spatial integrals had to be introduced. These two parameters were fixed by comparison with gravitational self-force (GSF) calculations of the conserved energy and periastron advance for circular orbits in the test-mass limit. In the present paper, together with a companion paper, we determine both of these ambiguities from first principles by means of dimensional regularization. Our computation is thus entirely defined within the dimensional regularization scheme, treating at once the IR and ultraviolet (UV) divergences. In particular, we obtain crucial contributions coming from the Einstein-Hilbert part of the action and from the nonlocal tail term in arbitrary dimensions, which resolve the ambiguities.

  3. Circular geodesic of Bardeen and Ayon-Beato-Garcia regular black-hole and no-horizon spacetimes

    NASA Astrophysics Data System (ADS)

    Stuchlík, Zdeněk; Schee, Jan

    2015-12-01

    In this paper, we study the circular geodesic motion of test particles and photons in the Bardeen and Ayon-Beato-Garcia (ABG) geometries describing spherically symmetric regular black-hole or no-horizon spacetimes. While the Bardeen geometry is not an exact solution of Einstein's equations, the ABG spacetime is related to self-gravitating charged sources governed by Einstein's gravity and nonlinear electrodynamics. Both are characterized by the mass parameter m and the charge parameter g. We demonstrate that, in similarity to the Reissner-Nordstrom (RN) naked singularity spacetimes, an antigravity static sphere should exist in all the no-horizon Bardeen and ABG solutions, and it can be surrounded by a Keplerian accretion disc. However, contrary to the RN naked singularity spacetimes, the ABG no-horizon spacetimes with parameter g/m > 2 can also contain an additional inner Keplerian disc hidden under the static antigravity sphere. Properties of the geodesic structure are reflected by simple observationally relevant optical phenomena. We give the silhouettes of the regular black-hole and no-horizon spacetimes, and the profiled spectral lines generated by Keplerian rings radiating at a fixed frequency and located in the strong gravity region at or near the marginally stable circular geodesics. We demonstrate that the profiled spectral lines related to the regular black holes are qualitatively similar to those of the Schwarzschild black holes, giving only small quantitative differences. On the other hand, the regular no-horizon spacetimes give clear qualitative signatures of their presence when compared to the Schwarzschild spacetimes. Moreover, it is possible to distinguish the Bardeen and ABG no-horizon spacetimes if the inclination angle to the observer is known.

  4. Invariant models in the inversion of gravity and magnetic fields and their derivatives

    NASA Astrophysics Data System (ADS)

    Ialongo, Simone; Fedi, Maurizio; Florio, Giovanni

    2014-11-01

    In potential field inversion problems we usually solve underdetermined systems, and realistic solutions may be obtained by introducing a depth-weighting function in the objective function. The choice of the exponent of such a power law is crucial. It has been suggested to determine it from the field decay due to a single source block; alternatively, it has been defined as the structural index of the investigated source distribution. In both cases, when k-order derivatives of the potential field are considered, the depth-weighting exponent has to be increased by k with respect to that of the potential field itself, in order to obtain consistent source model distributions. We show instead that invariant and realistic source-distribution models are obtained using the same depth-weighting exponent for the magnetic field and for its k-order derivatives. A similar behavior also occurs in the gravity case. In practice, we found that the depth-weighting exponent is invariant for a given source model and equal to that of the corresponding magnetic field, in the magnetic case, and to that of the 1st derivative of the gravity field, in the gravity case. In the case of the regularized inverse problem, with depth weighting and general constraints, the mathematical demonstration of such invariance is difficult because of its non-linearity and of its variable form, due to the different constraints used. However, tests performed on a variety of synthetic cases seem to confirm the invariance of the depth-weighting exponent. A final consideration regards the role of the regularization parameter; we show that regularization can severely affect the depth to the source, because the estimated depth tends to increase proportionally with the size of the regularization parameter. Hence, some care is needed in handling the combined effect of the regularization parameter and depth weighting.
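
    A minimal sketch of the depth-weighting function discussed above, under the common power-law form w(z) = (z + z0)^(-beta/2); the exponent, reference depth, and depth grid are illustrative choices, not values advocated by the study.

    ```python
    # Illustrative depth weighting for potential-field inversion: cells are
    # rescaled so that deep sources are not penalized purely for their depth.
    import numpy as np

    def depth_weighting(z, beta=2.0, z0=0.0):
        return (z + z0) ** (-beta / 2.0)

    depths = np.linspace(10.0, 500.0, 50)            # assumed cell depths in metres
    W = np.diag(depth_weighting(depths, beta=3.0))   # weighting matrix for the model
    print(W.shape)
    ```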

  5. Gene selection for microarray data classification via subspace learning and manifold regularization.

    PubMed

    Tang, Chang; Cao, Lijuan; Zheng, Xiao; Wang, Minhui

    2017-12-19

    With the rapid development of DNA microarray technology, a large amount of genomic data has been generated. Classification of these microarray data is a challenging task since gene expression data often comprise thousands of genes but only a small number of samples. In this paper, an effective gene selection method is proposed to select the best subset of genes for microarray data, with the irrelevant and redundant genes removed. Compared with the original data, the selected gene subset can benefit the classification task. We formulate the gene selection task as a manifold regularized subspace learning problem. In detail, a projection matrix is used to project the original high-dimensional microarray data into a lower-dimensional subspace, with the constraint that the original genes can be well represented by the selected genes. Meanwhile, the local manifold structure of the original data is preserved by a Laplacian graph regularization term on the low-dimensional data space. The projection matrix can serve as an importance indicator for the different genes. An iterative update algorithm is developed for solving the problem. Experimental results on six publicly available microarray datasets and one clinical dataset demonstrate that the proposed method performs better than other state-of-the-art methods in terms of microarray data classification.
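
    The manifold regularization term can be illustrated by building a k-nearest-neighbour similarity graph over the samples and penalizing projections that separate neighbouring samples. The neighbourhood size, heat-kernel width, and random data below are assumptions, and the sketch shows only the regularizer, not the full iterative update of the paper.

    ```python
    # Illustrative Laplacian graph regularizer tr(W^T X^T L X W) for subspace learning.
    import numpy as np

    def knn_laplacian(X, k=5, sigma=1.0):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)      # pairwise squared distances
        S = np.zeros_like(d2)
        for i in range(X.shape[0]):
            nn = np.argsort(d2[i])[1:k + 1]                      # k nearest, excluding self
            S[i, nn] = np.exp(-d2[i, nn] / (2 * sigma**2))
        S = np.maximum(S, S.T)                                   # symmetrize the graph
        return np.diag(S.sum(1)) - S                             # L = D - S

    rng = np.random.default_rng(0)
    X = rng.standard_normal((30, 100))        # 30 samples, 100 genes (toy data)
    W = rng.standard_normal((100, 10))        # projection to a 10-dim subspace
    L = knn_laplacian(X)
    manifold_penalty = np.trace(W.T @ X.T @ L @ X @ W)
    print(manifold_penalty)
    ```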

  6. The Asymptotic Distribution of Ability Estimates: Beyond Dichotomous Items and Unidimensional IRT Models

    ERIC Educational Resources Information Center

    Sinharay, Sandip

    2015-01-01

    The maximum likelihood estimate (MLE) of the ability parameter of an item response theory model with known item parameters was proved to be asymptotically normally distributed under a set of regularity conditions for tests involving dichotomous items and a unidimensional ability parameter (Klauer, 1990; Lord, 1983). This article first considers…

  7. Parameters influencing the physical activity of patients with a history of coronary revascularization.

    PubMed

    Acar, Burak; Yayla, Cagri; Gucuk Ipek, Esra; Unal, Sefa; Ertem, Ahmet Goktug; Burak, Cengiz; Senturk, Bihter; Bayraktar, Fatih; Kara, Meryem; Demirkan, Burcu; Guray, Yesim

    2017-10-01

    Coronary artery disease is the leading cause of mortality worldwide. Regular physical activity is part of a comprehensive management strategy for these patients. We investigated the parameters that influence physical activity in patients with a history of coronary revascularization. We included outpatients with a history of coronary revascularization at least six months prior to enrollment. Data on physical activity, demographics, and clinical characteristics were collected via a questionnaire. A total of 202 consecutive outpatients (age 61.3±11.2 years, 73% male) were enrolled. One hundred and four (51%) patients had previous percutaneous coronary intervention, 67 (33%) had coronary bypass graft surgery, and 31 (15%) had both procedures. Only 46 patients (23%) engaged in regular physical activity. Patients were classified into two subgroups according to their physical activity. There were no significant differences between subgroups in terms of age, comorbid conditions or revascularization type. Multivariate regression analysis revealed that low education level (OR=3.26, 95% CI: 1.31-8.11, p=0.01), and lack of regular follow-up (OR=2.95, 95% CI: 1.01-8.61, p=0.04) were independent predictors of non-adherence to regular physical activity among study subjects. Regular exercise rates were lower in outpatients with previous coronary revascularization. Education level and regular follow-up visits were associated with adherence to physical activity in these patients. Copyright © 2017 Sociedade Portuguesa de Cardiologia. Publicado por Elsevier España, S.L.U. All rights reserved.

  8. Robust design optimization using the price of robustness, robust least squares and regularization methods

    NASA Astrophysics Data System (ADS)

    Bukhari, Hassan J.

    2017-12-01

    In this paper, a framework for the robust optimization of mechanical design problems and process systems that have parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, which means it is minimally sensitive to any perturbations in parameters. The first method uses the price of robustness approach, which assumes the uncertain parameters to be symmetric and bounded. The robustness of the design can be controlled by limiting the parameters that are allowed to perturb. The second method uses the robust least squares method to determine the optimal parameters when the data itself, rather than the parameters, is subjected to perturbations. The last method manages uncertainty by restricting the perturbation on parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems: one linear and the other non-linear. This methodology is compared with a prior method using multiple Monte Carlo simulation runs, which shows that the approach presented in this paper results in better performance.

  9. Inverse problems-based maximum likelihood estimation of ground reflectivity for selected regions of interest from stripmap SAR data [Regularized maximum likelihood estimation of ground reflectivity from stripmap SAR data]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, R. Derek; Gunther, Jacob H.; Moon, Todd K.

    In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB bound for synthetically generated data.

  10. IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.

    PubMed

    Huang, Lihan

    2017-12-04

    The objective of this work is to develop and validate a unified optimization algorithm for performing one-step global regression analysis of isothermal growth and survival curves for determination of kinetic parameters in predictive microbiology. The algorithm is incorporated with user-friendly graphical interfaces (GUIs) to develop a data analysis tool, the USDA IPMP-Global Fit. The GUIs are designed to guide the users to easily navigate through the data analysis process and properly select the initial parameters for different combinations of mathematical models. The software is developed for one-step kinetic analysis to directly construct tertiary models by minimizing the global error between the experimental observations and mathematical models. The current version of the software is specifically designed for constructing tertiary models with time and temperature as the independent model parameters in the package. The software is tested with a total of 9 different combinations of primary and secondary models for growth and survival of various microorganisms. The results of data analysis show that this software provides accurate estimates of kinetic parameters. In addition, it can be used to improve the experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving the inverse problems and developing tertiary models in predictive microbiology. Published by Elsevier B.V.

  11. Inverse problems-based maximum likelihood estimation of ground reflectivity for selected regions of interest from stripmap SAR data [Regularized maximum likelihood estimation of ground reflectivity from stripmap SAR data]

    DOE PAGES

    West, R. Derek; Gunther, Jacob H.; Moon, Todd K.

    2016-12-01

    In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB bound for synthetically generated data.

  12. A space-frequency multiplicative regularization for force reconstruction problems

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    Dynamic force reconstruction from vibration data is an ill-posed inverse problem. A standard approach to stabilize the reconstruction consists in using some prior information on the quantities to identify. This is generally done by including in the formulation of the inverse problem a regularization term as an additive or a multiplicative constraint. In the present article, a space-frequency multiplicative regularization is developed to identify mechanical forces acting on a structure. The proposed regularization strategy takes advantage of one's prior knowledge of the nature and the location of the excitation sources, as well as of their spectral contents. Furthermore, it has the merit of being free from the preliminary definition of any regularization parameter. The validity of the proposed regularization procedure is assessed numerically and experimentally. In particular, it is pointed out that properly exploiting the space-frequency characteristics of the excitation field to be identified can improve the quality of the force reconstruction.

  13. Consistent Partial Least Squares Path Modeling via Regularization.

    PubMed

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research owing to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it estimates path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc compared with its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
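
    A generic numerical illustration of the ridge-type fix for multicollinearity (not the regularized PLSc estimator itself): with two nearly collinear regressors, the ordinary least-squares solution tends to be unstable while the ridge solution is well behaved. The collinearity level, noise, and penalty value are arbitrary assumptions.

    ```python
    # Generic ridge-vs-OLS demonstration under near collinearity.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    x1 = rng.standard_normal(n)
    x2 = x1 + 0.001 * rng.standard_normal(n)       # nearly collinear with x1
    X = np.column_stack([x1, x2])
    y = 1.0 * x1 + 1.0 * x2 + 0.1 * rng.standard_normal(n)

    ols = np.linalg.solve(X.T @ X, X.T @ y)                    # unstable under collinearity
    ridge = np.linalg.solve(X.T @ X + 0.5 * np.eye(2), X.T @ y)  # ridge-stabilized
    print("OLS:", np.round(ols, 2), "ridge:", np.round(ridge, 2))
    ```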

  14. Nutrition and health in hotel staff on different shift patterns.

    PubMed

    Seibt, R; Süße, T; Spitzer, S; Hunger, B; Rudolf, M

    2015-08-01

    Limited research is available that examines the nutritional behaviour and health of hotel staff working alternating and regular shifts. To analyse the nutritional behaviour and health of employees working in alternating and regular shifts. The study used an ex post facto cross-sectional analysis to compare the nutritional behaviour and health parameters of workers with alternating shifts and regular shift workers. Nutritional behaviour was assessed with the Food Frequency Questionnaire. Body dimensions (body mass index, waist hip ratio, fat mass and active cell mass), metabolic values (glucose, triglyceride, total cholesterol and low- and high-density lipoprotein), diseases and health complaints were included as health parameters. Participants worked in alternating (n = 53) and regular shifts (n = 97). The average age of subjects was 35 ± 10 years. There was no significant difference in nutritional behaviour, most surveyed body dimensions or metabolic values between the two groups. However, alternating shift workers had significantly lower fat mass and higher active cell mass but nevertheless reported more pronounced health complaints. Sex and age were also confirmed as influencing the surveyed parameters. Shift-dependent nutritional problems were not conspicuously apparent in this sample of hotel industry workers. Health parameters did not show significantly negative attributes for alternating shift workers. Conceivably, both groups could have the same level of knowledge on the health effects of nutrition and comparable opportunities to apply this. Further studies on nutritional and health behaviour in the hotel industry are necessary in order to create validated screening programmes. © The Author 2015. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. Stochastic differential equations as a tool to regularize the parameter estimation problem for continuous time dynamical systems given discrete time measurements.

    PubMed

    Leander, Jacob; Lundh, Torbjörn; Jirstrand, Mats

    2014-05-01

    In this paper we consider the problem of estimating parameters in ordinary differential equations given discrete-time experimental data. The impact of going from an ordinary to a stochastic differential equation setting is investigated as a tool to overcome the problem of local minima in the objective function. Using two different models, it is demonstrated that by allowing noise in the underlying model itself, the objective functions to be minimized in the parameter estimation procedures are regularized in the sense that the number of local minima is reduced and better convergence is achieved. The advantage of using stochastic differential equations is that the actual states in the model are predicted from data, which allows the prediction to stay close to the data even when the parameters in the model are incorrect. The extended Kalman filter is used as a state estimator, and sensitivity equations are provided to give an accurate calculation of the gradient of the objective function. The method is illustrated using in silico data from the FitzHugh-Nagumo model for excitable media and the Lotka-Volterra predator-prey system. The proposed method performs well on the models considered and is able to regularize the objective function in both models. This leads to parameter estimation problems with fewer local minima, which can be solved by efficient gradient-based methods. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Regularities of the sorption of 1,2,3,4-tetrahydroquinoline derivatives under conditions of reversed phase HPLC

    NASA Astrophysics Data System (ADS)

    Nekrasova, N. A.; Kurbatova, S. V.; Zemtsova, M. N.

    2016-12-01

    Regularities of the sorption of 1,2,3,4-tetrahydroquinoline derivatives on octadecylsilyl silica gel and porous graphitic carbon from aqueous acetonitrile solutions were investigated. The effect that the molecular structure and physicochemical parameters of the sorbates have on their retention characteristics under conditions of reversed-phase HPLC is analyzed.

  17. Per Linguam: A Journal of Language Learning, Vol. 1-3, 1985-1987.

    ERIC Educational Resources Information Center

    van der Vyver, D. H., Ed.

    1987-01-01

    Regular issues of "Per Linguam" appear twice a year. The document consists of the six regular issues for the years 1985, 1986, and 1987. These issues contain the following 32 articles: (1) "SALT in South Africa: Needs and Parameters" (van der Vyver); (2) "An Analysis of SALT in Practice" (Botha); (3) "SALT and…

  18. Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition

    PubMed Central

    Fraley, Chris; Percival, Daniel

    2014-01-01

    Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001
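
    A rough sketch of the idea of treating the ℓ1 regularization path as a model space: each support encountered along a lasso path is refit and weighted by a BIC-based approximation to its posterior probability, and the coefficients are averaged. This deliberately replaces the Markov chain Monte Carlo model composition step of the paper with a much simpler enumeration; it assumes scikit-learn is available, and all settings and data below are illustrative.

    ```python
    # Simplified stand-in for model averaging over the l1 path (assumes scikit-learn).
    import numpy as np
    from sklearn.linear_model import LinearRegression, lasso_path

    rng = np.random.default_rng(0)
    n, p = 80, 30
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:3] = [2.0, -1.5, 1.0]
    y = X @ beta_true + 0.5 * rng.standard_normal(n)

    # Candidate models: the distinct supports visited along the lasso path.
    _, coefs, _ = lasso_path(X, y, n_alphas=25)
    supports = {tuple(np.flatnonzero(c)) for c in coefs.T if np.any(c)}

    bics, betas = [], []
    for support in supports:
        idx = list(support)
        fit = LinearRegression().fit(X[:, idx], y)
        rss = ((y - fit.predict(X[:, idx])) ** 2).sum()
        bics.append(n * np.log(rss / n) + len(idx) * np.log(n))
        b = np.zeros(p)
        b[idx] = fit.coef_
        betas.append(b)

    # BIC-based approximate posterior weights, then model-averaged coefficients.
    w = np.exp(-0.5 * (np.array(bics) - np.min(bics)))
    w /= w.sum()
    beta_bma = (w[:, None] * np.array(betas)).sum(axis=0)
    print(np.round(beta_bma[:5], 2))
    ```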

  19. Multivariate statistical techniques for the evaluation of groundwater quality of Amaravathi River Basin: South India

    NASA Astrophysics Data System (ADS)

    Loganathan, K.; Ahamed, A. Jafar

    2017-12-01

    The study of groundwater in the Amaravathi River basin of Karur District resulted in a large geochemical data set. A total of 24 water samples were collected and analyzed for physico-chemical parameters, and the abundance of cation and anion concentrations was in the following order: Na+ > Ca2+ > Mg2+ > K+ = Cl- > HCO3- > SO42-. The correlation matrix shows that the basic ionic chemistry is influenced by Na+, Ca2+, Mg2+, and Cl-, and also suggests that the samples contain Na+-Cl-, Ca2+-Cl-, and mixed Ca2+-Mg2+-Cl- types of water. The association of HCO3-, SO42-, and F- is weaker than that of the other parameters owing to the poor availability of the corresponding minerals. PCA extracted six components, which account for 81% of the total variance of the data set and allowed the selected parameters to be grouped according to common features, as well as the contribution of each group to the overall variation in water quality to be evaluated. Cluster analysis results show that groundwater quality does not vary extensively as a function of season, but shows two main clusters.

  20. On the buckling of an elastic holey column

    PubMed Central

    Hazel, A. L.; Pihler-Puzović, D.

    2017-01-01

    We report the results of a numerical and theoretical study of buckling in elastic columns containing a line of holes. Buckling is a common failure mode of elastic columns under compression, found over scales ranging from metres in buildings and aircraft to tens of nanometers in DNA. This failure usually occurs through lateral buckling, described for slender columns by Euler’s theory. When the column is perforated with a regular line of holes, a new buckling mode arises, in which adjacent holes collapse in orthogonal directions. In this paper, we firstly elucidate how this alternate hole buckling mode coexists and interacts with classical Euler buckling modes, using finite-element numerical calculations with bifurcation tracking. We show how the preferred buckling mode is selected by the geometry, and discuss the roles of localized (hole-scale) and global (column-scale) buckling. Secondly, we develop a novel predictive model for the buckling of columns perforated with large holes. This model is derived without arbitrary fitting parameters, and quantitatively predicts the critical strain for buckling. We extend the model to sheets perforated with a regular array of circular holes and use it to provide quantitative predictions of their buckling. PMID:29225498

  1. [Meals consumption among thirteen years olds and selected family socio-economic correlates].

    PubMed

    Korzycka-Stalmach, Magdalena; Mikiel-Kostyra, Krystyna; Oblacińska, Anna; Jodkowska, Maria; Wojdan-Godek, Elzbieta

    2010-01-01

    To analyse the influence of selected family socioeconomic factors on the regularity of meal consumption among 13-year-old adolescents. A group of 605 13-year-olds identified in a prospective cohort study in 2008 was analysed. Data were gathered using postal questionnaires. On the basis of information given by the children, the regularity (4-5 times a week) of meal consumption on school days and of eating meals with parents was correlated with the parents' educational level, occupational status, and perceived family wealth. The study also distinguished between urban and rural residents. Most questionnaires were filled out by mothers (95%), only 5% by fathers. In urban areas, the mother's occupational status and the perceived family wealth correlate with children's meal consumption and with eating meals with parents. Children whose mothers have a job eat breakfast 1.5 times and supper 3 times less regularly than children whose mothers do not work. Children from poor families eat breakfast 14 times less regularly than children from rich families, and eat supper 3 times less regularly than children from families of average wealth. In rural areas, the regularity of meal consumption is significantly influenced by the mother's education. Children whose mothers have a secondary education, compared with children of mothers with basic education, are 4 times more likely to eat dinner and supper regularly. Family socioeconomic factors significantly correlate with the regularity of 13-year-olds' meal consumption and with the regularity of family meals. The place of residence is associated with different factors influencing meal consumption habits. It was also shown that children and fathers were only minimally engaged in family life, including the preparation and consumption of family meals.

  2. Validation of Fourier analysis of videokeratographic data.

    PubMed

    Sideroudi, Haris; Labiris, Georgios; Ditzel, Fienke; Tsaragli, Efi; Georgatzoglou, Kimonas; Siganos, Haralampos; Kozobolis, Vassilios

    2017-06-15

    The aim was to assess the repeatability of Fourier transform analysis of videokeratographic data using the Pentacam in normal (CG), keratoconic (KC) and post-CXL (CXL) corneas. This was a prospective, clinic-based, observational study. One randomly selected eye from each study participant was included in the analysis: 62 normal eyes (CG group), 33 keratoconus eyes (KC group), while 34 eyes that had already received CXL treatment formed the CXL group. Fourier analysis of keratometric data was obtained using the Pentacam, by two different operators within each of two sessions. Precision, repeatability and the Intraclass Correlation Coefficient (ICC) were calculated for evaluating intrasession and intersession repeatability for the following parameters: Spherical Component (SphRmin, SphEcc), Maximum Decentration (Max Dec), Regular Astigmatism, and Irregularity (Irr). Bland-Altman analysis was used for assessing interobserver repeatability. All parameters were found to be repeatable, reliable and reproducible in all groups. The best intrasession and intersession repeatability and reliability were detected for the SphRmin, SphEcc and Max Dec parameters for both operators using the ICC (intrasession: ICC > 98%, intersession: ICC > 94.7%) and the within-subject standard deviation. The best precision and lowest range of agreement were found for the SphRmin parameter (CG: 0.05, KC: 0.16, and CXL: 0.2) in all groups, while the lowest repeatability, reliability and reproducibility were detected for the Irr parameter. The Pentacam system provides accurate measurements of Fourier transform keratometric data. A single Pentacam scan will be sufficient for most clinical applications.

  3. Alpha models for rotating Navier-Stokes equations in geophysics with nonlinear dispersive regularization

    NASA Astrophysics Data System (ADS)

    Kim, Bong-Sik

    Three-dimensional (3D) Navier-Stokes-alpha equations are considered for uniformly rotating geophysical fluid flows (large Coriolis parameter f = 2Ω). The Navier-Stokes-alpha equations are a nonlinear dispersive regularization of the usual Navier-Stokes equations obtained by Lagrangian averaging. The focus is on the existence and global regularity of solutions of the 3D rotating Navier-Stokes-alpha equations and the uniform convergence of these solutions to those of the original 3D rotating Navier-Stokes equations for large Coriolis parameters f as alpha → 0. Methods are based on fast singular oscillating limits, and results are obtained for periodic boundary conditions for all domain aspect ratios, including the case of three-wave resonances which yields nonlinear "2½-dimensional" limit resonant equations for f → 0. The existence and global regularity of solutions of the limit resonant equations is established, uniformly in alpha. Bootstrapping from the global regularity of the limit equations, the existence of a regular solution of the full 3D rotating Navier-Stokes-alpha equations for large f for an infinite time is established. Then, the uniform convergence of a regular solution of the 3D rotating Navier-Stokes-alpha equations (alpha ≠ 0) to the one of the original 3D rotating Navier-Stokes equations (alpha = 0) for f large but fixed as alpha → 0 follows; this implies "shadowing" of trajectories of the limit dynamical systems by those of the perturbed alpha-dynamical systems. All the estimates are uniform in alpha, in contrast with previous estimates in the literature which blow up as alpha → 0. Finally, the existence of global attractors as well as exponential attractors is established for large f, and the estimates are uniform in alpha.

  4. Research on regularized mean-variance portfolio selection strategy with modified Roy safety-first principle.

    PubMed

    Atta Mills, Ebenezer Fiifi Emire; Yan, Dawen; Yu, Bo; Wei, Xinyuan

    2016-01-01

    We propose a consolidated risk measure based on variance and the safety-first principle in a mean-risk portfolio optimization framework. The safety-first principle for financial portfolio selection strategy is modified and improved. Our proposed models are subjected to norm regularization to seek near-optimal, stable, and sparse portfolios. We compare the cumulative wealth of our preferred proposed model to a benchmark, the S&P 500 index, over the same period. Our proposed portfolio strategies have better out-of-sample performance than selected alternative portfolio rules from the literature and control the downside risk of the portfolio returns.
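
    As a toy illustration of norm-regularized mean-variance selection (not the authors' modified safety-first model, whose downside-risk constraint is omitted here for brevity), the sketch below adds a squared-norm penalty to a standard mean-variance objective to push the optimizer toward a stable, diversified allocation; the penalty weight, risk-aversion parameter, and simulated returns are assumptions, and SciPy is assumed to be available.

    ```python
    # Toy norm-regularized mean-variance allocation (assumes SciPy).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    returns = rng.normal(0.0005, 0.01, size=(500, 8))   # 500 days, 8 hypothetical assets
    mu, Sigma = returns.mean(axis=0), np.cov(returns.T)

    def objective(w, tau=1.0, gamma=0.1):
        # variance - tau * expected return + gamma * squared-norm regularizer
        return w @ Sigma @ w - tau * (mu @ w) + gamma * (w @ w)

    constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)  # fully invested
    result = minimize(objective, x0=np.full(8, 1.0 / 8), constraints=constraints)
    print(np.round(result.x, 3))
    ```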

  5. Iterative image reconstruction for PROPELLER-MRI using the nonuniform fast fourier transform.

    PubMed

    Tamhane, Ashish A; Anastasio, Mark A; Gui, Minzhi; Arfanakis, Konstantinos

    2010-07-01

    To investigate an iterative image reconstruction algorithm using the nonuniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) MRI. Numerical simulations, as well as experiments on a phantom and a healthy human subject, were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER and compare it with that of conventional gridding. The trade-off between spatial resolution, signal-to-noise ratio, and image artifacts was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased signal-to-noise ratio and reduced artifacts, for similar spatial resolution, compared with gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared with conventional gridding. (c) 2010 Wiley-Liss, Inc.

  6. Iterative Image Reconstruction for PROPELLER-MRI using the NonUniform Fast Fourier Transform

    PubMed Central

    Tamhane, Ashish A.; Anastasio, Mark A.; Gui, Minzhi; Arfanakis, Konstantinos

    2013-01-01

    Purpose To investigate an iterative image reconstruction algorithm using the non-uniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping parallEL Lines with Enhanced Reconstruction) MRI. Materials and Methods Numerical simulations, as well as experiments on a phantom and a healthy human subject, were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER, and compare it to that of conventional gridding. The trade-off between spatial resolution, signal-to-noise ratio, and image artifacts was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. Results It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased SNR and reduced artifacts, at similar spatial resolution, compared to gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. Conclusion An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter the new reconstruction technique may provide PROPELLER images with improved image quality compared to conventional gridding. PMID:20578028

  7. Joint image and motion reconstruction for PET using a B-spline motion model.

    PubMed

    Blume, Moritz; Navab, Nassir; Rafecas, Magdalena

    2012-12-21

    We present a novel joint image and motion reconstruction method for PET. The method is based on gated data and reconstructs an image together with a motion function. The motion function can be used to transform the reconstructed image to any of the input gates. All available events (from all gates) are used in the reconstruction. The presented method uses a B-spline motion model, together with a novel motion regularization procedure that does not need a regularization parameter (which is usually extremely difficult to adjust). Several image and motion grid levels are used in order to reduce the reconstruction time. In a simulation study, the presented method is compared to a recently proposed joint reconstruction method. While the presented method provides comparable reconstruction quality, it is much easier to use since no regularization parameter has to be chosen. Furthermore, since the B-spline discretization of the motion function depends on fewer parameters than a displacement field, the presented method is considerably faster and consumes less memory than its counterpart. The method is also applied to clinical data, for which a novel purely data-driven gating approach is presented.

  8. An adaptive surface filter for airborne laser scanning point clouds by means of regularization and bending energy

    NASA Astrophysics Data System (ADS)

    Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng

    2014-06-01

    The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering processes are difficult because of the complex configuration of the terrain features. The classical filtering algorithms rely on the cautious tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset and to reduce the sensitivity to parameter settings, in this study, we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance with the terrain smoothness, the ASF embeds bending energy, which quantitatively describes the local terrain structure, to adapt the filter threshold automatically. The ASF employs a step factor to control the data pyramid scheme in which the processing window sizes are reduced progressively, and the ASF gradually interpolates thin plate spline surfaces toward the ground with regularization to handle noise. Using the progressive densification strategy, regularization and self-adaptation, both performance improvement and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by ISPRS, the ASF performed best among all compared filtering methods, yielding an average total error of 2.85% when optimized and 3.67% when using the same parameter set.

  9. Dynamic 2D self-phase-map Nyquist ghost correction for simultaneous multi-slice echo planar imaging.

    PubMed

    Yarach, Uten; Tung, Yi-Hang; Setsompop, Kawin; In, Myung-Ho; Chatnuntawech, Itthi; Yakupov, Renat; Godenschweger, Frank; Speck, Oliver

    2018-02-09

    To develop a reconstruction pipeline that intrinsically accounts for both simultaneous multislice echo planar imaging (SMS-EPI) reconstruction and dynamic slice-specific Nyquist ghosting correction in time-series data. After 1D slice-group average phase correction, the separate polarity (i.e., even and odd echoes) SMS-EPI data were unaliased by slice GeneRalized Autocalibrating Partial Parallel Acquisition. Both the slice-unaliased even and odd echoes were jointly reconstructed using a model-based framework, extended for SMS-EPI reconstruction, that estimates a 2D self-phase map, corrects dynamic slice-specific phase errors, and combines data from all coils and echoes to obtain the final images. The percentage ghost-to-signal ratios (%GSRs) and their temporal variations for MB3Ry2 with a field of view/4 shift in a human brain obtained by the proposed dynamic 2D and standard 1D phase corrections were 1.37 ± 0.11 and 2.66 ± 0.16, respectively. Even with a large regularization parameter λ applied in the proposed reconstruction, the smoothing effect in fMRI activation maps was comparable to a very small Gaussian kernel of size 1 × 1 × 1 mm³. The proposed reconstruction pipeline reduced slice-specific phase errors in SMS-EPI, resulting in a reduction of the GSR. It is applicable to functional MRI studies because the smoothing effect caused by the regularization parameter selection can be minimal in a blood-oxygen-level-dependent activation map. © 2018 International Society for Magnetic Resonance in Medicine.

  10. Regularity of a renewal process estimated from binary data.

    PubMed

    Rice, John D; Strawderman, Robert L; Johnson, Brent A

    2017-10-09

    Assessment of the regularity of a sequence of events over time is important for clinical decision-making as well as informing public health policy. Our motivating example involves determining the effect of an intervention on the regularity of HIV self-testing behavior among high-risk individuals when exact self-testing times are not recorded. Assuming that these unobserved testing times follow a renewal process, the goals of this work are to develop suitable methods for estimating its distributional parameters when only the presence or absence of at least one event per subject in each of several observation windows is recorded. We propose two approaches to estimation and inference: a likelihood-based discrete survival model using only time to first event; and a potentially more efficient quasi-likelihood approach based on the forward recurrence time distribution using all available data. Regularity is quantified and estimated by the coefficient of variation (CV) of the interevent time distribution. Focusing on the gamma renewal process, where the shape parameter of the corresponding interevent time distribution has a monotone relationship with its CV, we conduct simulation studies to evaluate the performance of the proposed methods. We then apply them to our motivating example, concluding that the use of text message reminders significantly improves the regularity of self-testing, but not its frequency. A discussion on interesting directions for further research is provided. © 2017, The International Biometric Society.

  11. Towards the mechanical characterization of abdominal wall by inverse analysis.

    PubMed

    Simón-Allué, R; Calvo, B; Oberai, A A; Barbone, P E

    2017-02-01

    The aim of this study is to characterize the passive mechanical behaviour of the abdominal wall in vivo in an animal model using only external cameras and numerical analysis. The main objective lies in defining a methodology that provides in vivo information for a specific patient without altering the mechanical properties. It is demonstrated in the mechanical study of the abdomen for hernia purposes. The mechanical tests consisted of pneumoperitoneum tests performed on New Zealand rabbits, where the inner pressure was varied from 0 mmHg to 12 mmHg. Changes in the external abdominal surface were recorded and several points were tracked. Based on their coordinates, we reconstructed a 3D finite element model of the abdominal wall, considering an incompressible hyperelastic material model defined by two parameters. The spatial distributions of these parameters (shear modulus and nonlinear parameter) were calculated by inverse analysis, using two different types of regularization: Total Variation Diminishing (TVD) and Tikhonov (H1). After solving the inverse problem, the distributions of the material parameters were obtained along the abdominal surface. Accuracy of the results was evaluated for the last level of pressure. Results revealed a higher value of the shear modulus in a wide stripe along the cranio-caudal direction, associated with the presence of the linea alba in conjunction with fascias and rectus abdominis. The nonlinear parameter distribution was smoother, and the location of higher values varied with the regularization type. Both regularizations proved to yield an accurate predicted displacement field, but H1 obtained a smoother material parameter distribution while TVD included some discontinuities. The methodology presented here was able to characterize in vivo the passive nonlinear mechanical response of the abdominal wall. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Predicting Response to Neoadjuvant Chemoradiotherapy in Esophageal Cancer with Textural Features Derived from Pretreatment 18F-FDG PET/CT Imaging.

    PubMed

    Beukinga, Roelof J; Hulshoff, Jan B; van Dijk, Lisanne V; Muijs, Christina T; Burgerhof, Johannes G M; Kats-Ugurlu, Gursah; Slart, Riemer H J A; Slump, Cornelis H; Mul, Véronique E M; Plukker, John Th M

    2017-05-01

    Adequate prediction of tumor response to neoadjuvant chemoradiotherapy (nCRT) in esophageal cancer (EC) patients is important for more personalized treatment. The current best clinical method to predict pathologic complete response is SUVmax in 18F-FDG PET/CT imaging. To improve the prediction of response, we constructed a model to predict complete response to nCRT in EC based on pretreatment clinical parameters and 18F-FDG PET/CT-derived textural features. Methods: From a prospectively maintained single-institution database, we reviewed 97 consecutive patients with locally advanced EC and a pretreatment 18F-FDG PET/CT scan between 2009 and 2015. All patients were treated with nCRT (carboplatin/paclitaxel/41.4 Gy) followed by esophagectomy. We analyzed clinical, geometric, and pretreatment textural features extracted from both 18F-FDG PET and CT. The current most accurate prediction model with SUVmax as a predictor variable was compared with 6 different response prediction models constructed using least absolute shrinkage and selection operator regularized logistic regression. Internal validation was performed to estimate the models' performance. Pathologic response was defined as complete versus incomplete response (Mandard tumor regression grade system 1 vs. 2-5). Results: Pathologic examination revealed 19 (19.6%) complete and 78 (80.4%) incomplete responders. Least absolute shrinkage and selection operator regularization selected the clinical parameters histologic type and clinical T stage, the 18F-FDG PET-derived textural feature long run low gray level emphasis, and the CT-derived textural feature run percentage. Introducing these variables into a logistic regression analysis showed areas under the receiver-operating-characteristic curve (AUCs) of 0.78 compared with 0.58 in the SUVmax model. The discrimination slopes were 0.17 compared with 0.01, respectively. After internal validation, the AUCs decreased to 0.74 and 0.54, respectively. Conclusion: The predictive values of the constructed models were superior to the standard method (SUVmax). These results can be considered as an initial step in predicting tumor response to nCRT in locally advanced EC. Further research in refining the predictive value of these models is needed to justify omission of surgery. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
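
    A minimal sketch of LASSO-regularized logistic regression with a cross-validated penalty, in the spirit of the modeling above, is given below; the synthetic features and all settings are illustrative placeholders, not the paper's clinical and textural variables.

```python
# Sketch: l1 (LASSO) regularized logistic regression with CV-chosen penalty.
# Synthetic stand-in data; feature names and sizes are illustrative only.
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.standard_normal((97, 40))            # 97 "patients", 40 candidate features
y = (X[:, 0] - 0.8 * X[:, 1] + rng.standard_normal(97) > 0).astype(int)

# LogisticRegressionCV selects the l1 penalty strength by internal cross-validation.
model = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(Cs=20, penalty="l1", solver="liblinear",
                         cv=5, scoring="roc_auc"),
)

# An outer cross-validation loop gives a less optimistic AUC estimate,
# loosely analogous to the internal validation reported in the abstract.
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC: %.2f +/- %.2f" % (auc.mean(), auc.std()))

model.fit(X, y)
coef = model.named_steps["logisticregressioncv"].coef_.ravel()
print("features retained by the l1 penalty:", np.flatnonzero(coef).tolist())
```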

  13. Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.

    PubMed

    Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack

    2017-06-01

    In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to unite the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
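
    A minimal sketch of the two ingredients that PROMISE combines is shown below with the lasso; the subsample fraction, repeat count and selection-probability threshold are illustrative choices, not the authors' settings.

```python
# Sketch: cross-validated penalty choice followed by stability selection.
# Illustrative settings only, not the PROMISE defaults. Requires numpy, scikit-learn.
import numpy as np
from sklearn.linear_model import Lasso, LassoCV

rng = np.random.default_rng(0)
n, p = 120, 200
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:5] = 1.0                 # 5 true markers
y = X @ beta + rng.standard_normal(n)

# Step 1: prediction-oriented choice of the penalty via cross-validation.
alpha_cv = LassoCV(cv=5).fit(X, y).alpha_

# Step 2: stability selection -- refit on random subsamples at that penalty
# and count how often each variable is selected.
n_rep, frac = 100, 0.5
counts = np.zeros(p)
for _ in range(n_rep):
    idx = rng.choice(n, size=int(frac * n), replace=False)
    fit = Lasso(alpha=alpha_cv, max_iter=5000).fit(X[idx], y[idx])
    counts += (fit.coef_ != 0)

stable = np.flatnonzero(counts / n_rep >= 0.6)     # selection-probability threshold
print("stably selected variables:", stable.tolist())
```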

  14. Clustering by soft-constraint affinity propagation: applications to gene-expression data.

    PubMed

    Leone, Michele; Sumedha; Weigt, Martin

    2007-10-15

    Similarity-measure-based clustering is a crucial problem appearing throughout scientific data analysis. Recently, a powerful new algorithm called Affinity Propagation (AP) based on message-passing techniques was proposed by Frey and Dueck (2007a). In AP, each cluster is identified by a common exemplar to which all other data points of the same cluster refer, and exemplars have to refer to themselves. Despite its proven power, AP in its present form suffers from a number of drawbacks. The hard constraint of having exactly one exemplar per cluster restricts AP to classes of regularly shaped clusters, and leads to suboptimal performance, e.g. in analyzing gene expression data. This limitation can be overcome by relaxing the AP hard constraints. A new parameter controls the importance of the constraints compared to the aim of maximizing the overall similarity, and allows interpolation between the simple case where each data point selects its closest neighbor as an exemplar and the original AP. The resulting soft-constraint affinity propagation (SCAP) becomes more informative and accurate, and leads to more stable clustering. Even though a new a priori free parameter is introduced, the overall dependence of the algorithm on external tuning is reduced, as robustness is increased and an optimal strategy for parameter selection emerges more naturally. SCAP is tested on biological benchmark data, including in particular microarray data related to various cancer types. We show that the algorithm efficiently unveils the hierarchical cluster structure present in the data sets. Furthermore, it allows the extraction of sparse gene expression signatures for each cluster.
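
    For context, the sketch below runs the original hard-constraint affinity propagation (which scikit-learn implements) on toy data; SCAP itself is not shown. It only illustrates the role of the self-similarity "preference", the knob whose effect SCAP's constraint-softening parameter generalizes. All values are illustrative.

```python
# Sketch: original (hard-constraint) affinity propagation on toy data.
# SCAP is not implemented in scikit-learn; this only shows how the preference
# (self-similarity) controls the number of exemplars. Requires numpy, scikit-learn.
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, cluster_std=1.0, random_state=0)

# Lower (more negative) preference values -> fewer exemplars / clusters.
for pref in (-50, -200, -800):
    ap = AffinityPropagation(preference=pref, damping=0.9, random_state=0).fit(X)
    print("preference=%5d -> %d clusters" % (pref, len(ap.cluster_centers_indices_)))
```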

  15. Continuous time limits of the utterance selection model

    NASA Astrophysics Data System (ADS)

    Michaud, Jérôme

    2017-02-01

    In this paper we derive alternative continuous time limits of the utterance selection model (USM) for language change [G. J. Baxter et al., Phys. Rev. E 73, 046118 (2006), 10.1103/PhysRevE.73.046118]. This is motivated by the fact that the Fokker-Planck continuous time limit derived in the original version of the USM is only valid for a small range of parameters. We investigate the consequences of relaxing these constraints on parameters. Using the normal approximation of the multinomial distribution, we derive a continuous time limit of the USM in the form of a weak-noise stochastic differential equation. We argue that this weak noise, not captured by the Kramers-Moyal expansion, cannot be neglected. We then propose a coarse-graining procedure, which takes the form of a stochastic version of the heterogeneous mean field approximation. This approximation groups the behavior of nodes of the same degree, reducing the complexity of the problem. With the help of this approximation, we study in detail two simple families of networks: the regular networks and the star-shaped networks. The analysis reveals and quantifies a finite-size effect of the dynamics. If we increase the size of the network while keeping all the other parameters constant, we transition from a state where conventions emerge to a state where no convention emerges. Furthermore, we show that the degree of a node acts as a time scale. For heterogeneous networks such as star-shaped networks, the time scale difference can become very large, leading to a noisier behavior of highly connected nodes.

  16. Real-Time Gait Cycle Parameter Recognition Using a Wearable Accelerometry System

    PubMed Central

    Yang, Che-Chang; Hsu, Yeh-Liang; Shih, Kao-Shang; Lu, Jun-Ming

    2011-01-01

    This paper presents the development of a wearable accelerometry system for real-time gait cycle parameter recognition. Using a tri-axial accelerometer, the wearable motion detector is a single waist-mounted device that measures trunk accelerations during walking. Several gait cycle parameters, including cadence, step regularity, stride regularity and step symmetry, can be estimated in real time by using an autocorrelation procedure. For validation purposes, five Parkinson’s disease (PD) patients and five young healthy adults were recruited in an experiment. The gait cycle parameters of the two subject groups of different mobility can be quantified and distinguished by the system. Practical considerations and limitations for implementing the autocorrelation procedure in such a real-time system are also discussed. This study can be extended to future attempts at real-time detection of disabling gaits, such as festinating or freezing of gait in PD patients. Ambulatory rehabilitation, gait assessment and personal telecare for people with gait disorders are also possible applications. PMID:22164019
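
    A minimal sketch of the autocorrelation procedure for step and stride regularity from a waist-mounted accelerometer is given below; the sampling rate, the synthetic signal and the peak-picking details are illustrative assumptions, not the authors' implementation.

```python
# Sketch: autocorrelation-based gait cycle parameters from trunk acceleration.
# Synthetic signal; thresholds and sampling rate are illustrative assumptions.
# Requires numpy and scipy.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                   # Hz, assumed sampling rate
t = np.arange(0, 20, 1 / fs)
# Toy vertical trunk acceleration: ~2 steps/s plus a 1 Hz asymmetry and noise.
acc = (np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.sin(2 * np.pi * 1.0 * t)
       + 0.1 * np.random.default_rng(0).standard_normal(t.size))

x = acc - acc.mean()
ac = np.correlate(x, x, mode="full")[x.size - 1:]   # one-sided autocorrelation
ac /= ac[0]                                         # normalize so ac[0] == 1

# First prominent peak ~ step period, second ~ stride period.
peaks, _ = find_peaks(ac, height=0.2, distance=int(0.3 * fs))
step_lag, stride_lag = peaks[0], peaks[1]

cadence = 60.0 * fs / step_lag                      # steps per minute
step_regularity = ac[step_lag]                      # autocorrelation at step lag
stride_regularity = ac[stride_lag]                  # autocorrelation at stride lag
step_symmetry = step_regularity / stride_regularity

print("cadence %.1f steps/min, step reg %.2f, stride reg %.2f, symmetry %.2f"
      % (cadence, step_regularity, stride_regularity, step_symmetry))
```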

  17. Fractional-order TV-L2 model for image denoising

    NASA Astrophysics Data System (ADS)

    Chen, Dali; Sun, Shenshen; Zhang, Congrong; Chen, YangQuan; Xue, Dingyu

    2013-10-01

    This paper proposes a new fractional-order total variation (TV) denoising method, which provides a more elegant and effective way of treating the problems of algorithm implementation, ill-posedness of the inverse problem, regularization parameter selection and the blocky effect. Two fractional-order TV-L2 models are constructed for image denoising. The majorization-minimization (MM) algorithm is used to decompose these two complex fractional TV optimization problems into a set of linear optimization problems which can be solved by the conjugate gradient algorithm. The final adaptive numerical procedure is given. Finally, we report experimental results which show that the proposed methodology avoids the blocky effect and achieves state-of-the-art performance. In addition, two medical image processing experiments are presented to demonstrate the validity of the proposed methodology.

  18. URS DataBase: universe of RNA structures and their motifs.

    PubMed

    Baulin, Eugene; Yacovlev, Victor; Khachko, Denis; Spirin, Sergei; Roytberg, Mikhail

    2016-01-01

    The Universe of RNA Structures DataBase (URSDB) stores information obtained from all RNA-containing PDB entries (2935 entries in October 2015). The content of the database is updated regularly. The database consists of 51 tables containing indexed data on various elements of the RNA structures. The database provides a web interface allowing the user to select a subset of structures with desired features and to obtain various statistical data for a selected subset of structures or for all structures. In particular, one can easily obtain statistics on geometric parameters of base pairs, on structural motifs (stems, loops, etc.) or on different types of pseudoknots. The user can also view and get information on an individual structure or its selected parts, e.g. RNA-protein hydrogen bonds. URSDB employs a new original definition of loops in RNA structures. That definition fits both pseudoknot-free and pseudoknotted secondary structures and coincides with the classical definition in the case of pseudoknot-free structures. To our knowledge, URSDB is the first database supporting searches based on topological classification of pseudoknots and on extended loop classification. Database URL: http://server3.lpm.org.ru/urs/. © The Author(s) 2016. Published by Oxford University Press.

  19. URS DataBase: universe of RNA structures and their motifs

    PubMed Central

    Baulin, Eugene; Yacovlev, Victor; Khachko, Denis; Spirin, Sergei; Roytberg, Mikhail

    2016-01-01

    The Universe of RNA Structures DataBase (URSDB) stores information obtained from all RNA-containing PDB entries (2935 entries in October 2015). The content of the database is updated regularly. The database consists of 51 tables containing indexed data on various elements of the RNA structures. The database provides a web interface allowing the user to select a subset of structures with desired features and to obtain various statistical data for a selected subset of structures or for all structures. In particular, one can easily obtain statistics on geometric parameters of base pairs, on structural motifs (stems, loops, etc.) or on different types of pseudoknots. The user can also view and get information on an individual structure or its selected parts, e.g. RNA–protein hydrogen bonds. URSDB employs a new original definition of loops in RNA structures. That definition fits both pseudoknot-free and pseudoknotted secondary structures and coincides with the classical definition in the case of pseudoknot-free structures. To our knowledge, URSDB is the first database supporting searches based on topological classification of pseudoknots and on extended loop classification. Database URL: http://server3.lpm.org.ru/urs/ PMID:27242032

  20. Analysis of the Tikhonov regularization to retrieve thermal conductivity depth-profiles from infrared thermography data

    NASA Astrophysics Data System (ADS)

    Apiñaniz, Estibaliz; Mendioroz, Arantza; Salazar, Agustín; Celorrio, Ricardo

    2010-09-01

    We analyze the ability of the Tikhonov regularization to retrieve different shapes of in-depth thermal conductivity profiles, usually encountered in hardened materials, from surface temperature data. Exponential, oscillating, and sigmoidal profiles are studied. By performing theoretical experiments with added white noise, the influence of the order of the Tikhonov functional and of the parameters that need to be tuned to carry out the inversion is investigated. The analysis shows that the Tikhonov regularization is very well suited to reconstructing smooth profiles but fails when the conductivity exhibits steep slopes. We check a natural alternative regularization, the total variation functional, which gives much better results for sigmoidal profiles. Accordingly, a strategy to deal with real data is proposed in which we introduce this total variation regularization. This regularization is applied to the inversion of real data corresponding to a case-hardened AISI1018 steel plate, giving much better anticorrelation of the retrieved conductivity with microindentation test data than the Tikhonov regularization. The results suggest that this is a promising way to improve the reliability of local inversion methods.

  1. Consistent Partial Least Squares Path Modeling via Regularization

    PubMed Central

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present. PMID:29515491

  2. The evolution of pattern camouflage strategies in waterfowl and game birds.

    PubMed

    Marshall, Kate L A; Gluckman, Thanh-Lan

    2015-05-01

    Visual patterns are common in animals. A broad survey of the literature has revealed that different patterns have distinct functions. Irregular patterns (e.g., stipples) typically function in static camouflage, whereas regular patterns (e.g., stripes) have a dual function in both motion camouflage and communication. Moreover, irregular and regular patterns located on different body regions ("bimodal" patterning) can provide an effective compromise between camouflage and communication and/or enhanced concealment via both static and motion camouflage. Here, we compared the frequency of these three pattern types and traced their evolutionary history using Bayesian comparative modeling in aquatic waterfowl (Anseriformes: 118 spp.), which typically escape predators by flight, and terrestrial game birds (Galliformes: 170 spp.), which mainly use a "sit and hide" strategy to avoid predation. Given these life histories, we predicted that selection would favor regular patterning in Anseriformes and irregular or bimodal patterning in Galliformes and that pattern function complexity should increase over the course of evolution. Regular patterns were predominant in Anseriformes whereas regular and bimodal patterns were most frequent in Galliformes, suggesting that patterns with multiple functions are broadly favored by selection over patterns with a single function in static camouflage. We found that the first patterns to evolve were either regular or bimodal in Anseriformes and either irregular or regular in Galliformes. In both orders, irregular patterns could evolve into regular patterns but not the reverse. Our hypothesis of increasing complexity in pattern camouflage function was supported in Galliformes but not in Anseriformes. These results reveal a trajectory of pattern evolution linked to increasing function complexity in Galliformes although not in Anseriformes, suggesting that both ecology and function complexity can have a profound influence on pattern evolution.

  3. The evolution of pattern camouflage strategies in waterfowl and game birds

    PubMed Central

    Marshall, Kate L A; Gluckman, Thanh-Lan

    2015-01-01

    Visual patterns are common in animals. A broad survey of the literature has revealed that different patterns have distinct functions. Irregular patterns (e.g., stipples) typically function in static camouflage, whereas regular patterns (e.g., stripes) have a dual function in both motion camouflage and communication. Moreover, irregular and regular patterns located on different body regions (“bimodal” patterning) can provide an effective compromise between camouflage and communication and/or enhanced concealment via both static and motion camouflage. Here, we compared the frequency of these three pattern types and traced their evolutionary history using Bayesian comparative modeling in aquatic waterfowl (Anseriformes: 118 spp.), which typically escape predators by flight, and terrestrial game birds (Galliformes: 170 spp.), which mainly use a “sit and hide” strategy to avoid predation. Given these life histories, we predicted that selection would favor regular patterning in Anseriformes and irregular or bimodal patterning in Galliformes and that pattern function complexity should increase over the course of evolution. Regular patterns were predominant in Anseriformes whereas regular and bimodal patterns were most frequent in Galliformes, suggesting that patterns with multiple functions are broadly favored by selection over patterns with a single function in static camouflage. We found that the first patterns to evolve were either regular or bimodal in Anseriformes and either irregular or regular in Galliformes. In both orders, irregular patterns could evolve into regular patterns but not the reverse. Our hypothesis of increasing complexity in pattern camouflage function was supported in Galliformes but not in Anseriformes. These results reveal a trajectory of pattern evolution linked to increasing function complexity in Galliformes although not in Anseriformes, suggesting that both ecology and function complexity can have a profound influence on pattern evolution. PMID:26045950

  4. Oligomeric complexes of some heteroaromatic ligands and aromatic diamines with rhodium and molybdenum tetracarboxylates: 13C and 15N CPMAS NMR and density functional theory studies.

    PubMed

    Leniak, Arkadiusz; Kamieński, Bohdan; Jaźwiński, Jarosław

    2015-05-01

    Seven new oligomeric complexes of 4,4'-bipyridine; 3,3'-bipyridine; benzene-1,4-diamine; benzene-1,3-diamine; benzene-1,2-diamine; and benzidine with rhodium tetraacetate, as well as 4,4'-bipyridine with molybdenum tetraacetate, have been obtained and investigated by elemental analysis and solid-state nuclear magnetic resonance spectroscopy, (13)C and (15)N CPMAS NMR. The known complexes of pyrazine with rhodium tetrabenzoate, benzoquinone with rhodium tetrapivalate, 4,4'-bipyridine with molybdenum tetrakistrifluoroacetate and the 1 : 1 complex of 2,2'-bipyridine with rhodium tetraacetate exhibiting axial-equatorial ligation mode have been obtained as well for comparison purposes. Elemental analysis revealed 1 : 1 complex stoichiometry of all complexes. The (15)N CPMAS NMR spectra of all new complexes consist of one narrow signal, indicating regular uniform structures. Benzidine forms a heterogeneous material, probably containing linear oligomers and products of further reactions. The complexes were characterized by the parameter complexation shift Δδ (Δδ = δ_complex - δ_ligand). This parameter ranged from around -40 to -90 ppm in the case of heteroaromatic ligands, from around -12 to -22 ppm for diamines and from -16 to -31 ppm for the complexes of molybdenum tetracarboxylates with 4,4'-bipyridine. The experimental results have been supported by a density functional theory computation of (15)N NMR chemical shifts and complexation shifts at the non-relativistic Becke, three-parameter, Perdew-Wang 91/[6-311++G(2d,p), Stuttgart] and GGA-PBE/QZ4P levels of theory and at the relativistic scalar and spin-orbit zeroth order regular approximation/GGA-PBE/QZ4P level of theory. Nucleus-independent chemical shifts have been calculated for the selected compounds. Copyright © 2015 John Wiley & Sons, Ltd.

  5. Regularities in Spearman's Law of Diminishing Returns.

    ERIC Educational Resources Information Center

    Jensen, Arthur R.

    2003-01-01

    Examined the assumption that Spearman's law acts unsystematically and approximately uniformly for various subtests of cognitive ability in an IQ test battery when high- and low-ability IQ groups are selected. Data from national standardization samples for Wechsler adult and child IQ tests affirm regularities in Spearman's "Law of Diminishing…

  6. Examination of a social problem-solving intervention to treat selective mutism.

    PubMed

    O'Reilly, Mark; McNally, Deirdre; Sigafoos, Jeff; Lancioni, Giulio E; Green, Vanessa; Edrisinha, Chaturi; Machalicek, Wendy; Sorrells, Audrey; Lang, Russell; Didden, Robert

    2008-03-01

    The authors examined the use of a social problem-solving intervention to treat selective mutism with 2 sisters in an elementary school setting. Both girls were taught to answer teacher questions in front of their classroom peers during regular classroom instruction. Each girl received individualized instruction from a therapist and was taught to discriminate salient social cues, select an appropriate social response, perform the response, and evaluate her performance. The girls generalized the skills to their respective regular classrooms and maintained the skills for up to 3 months after the removal of the intervention. Experimental control was demonstrated using a multiple baseline design across participants. Limitations of this study and issues for future research are discussed.

  7. Soliton solutions to the fifth-order Korteweg-de Vries equation and their applications to surface and internal water waves

    NASA Astrophysics Data System (ADS)

    Khusnutdinova, K. R.; Stepanyants, Y. A.; Tranter, M. R.

    2018-02-01

    We study solitary wave solutions of the fifth-order Korteweg-de Vries equation which contains, besides the traditional quadratic nonlinearity and third-order dispersion, additional terms including cubic nonlinearity and fifth order linear dispersion, as well as two nonlinear dispersive terms. An exact solitary wave solution to this equation is derived, and the dependence of its amplitude, width, and speed on the parameters of the governing equation is studied. It is shown that the derived solution can represent either an embedded or regular soliton depending on the equation parameters. The nonlinear dispersive terms can drastically influence the existence of solitary waves, their nature (regular or embedded), profile, polarity, and stability with respect to small perturbations. We show, in particular, that in some cases embedded solitons can be stable even with respect to interactions with regular solitons. The results obtained are applicable to surface and internal waves in fluids, as well as to waves in other media (plasma, solid waveguides, elastic media with microstructure, etc.).
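
    For reference, a schematic equation containing the term types listed in the abstract is written below; the coefficients are generic placeholders and the exact normalization and coefficient values are those of the paper.

```latex
% Schematic fifth-order KdV-type equation with the terms named in the abstract:
% quadratic and cubic nonlinearity, third- and fifth-order linear dispersion,
% and two nonlinear dispersive terms. Coefficients are generic placeholders.
u_t + \alpha\, u u_x + \alpha_1\, u^{2} u_x + \beta\, u_{xxx}
    + \beta_1\, u_{xxxxx} + \gamma_1\, u u_{xxx} + \gamma_2\, u_x u_{xx} = 0
```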

  8. Non-Steroidal Anti-Inflammatory Drugs and Cardiovascular Outcomes in Women: Results from the Women’s Health Initiative

    PubMed Central

    Bavry, Anthony A.; Thomas, Fridtjof; Allison, Matthew; Johnson, Karen C.; Howard, Barbara V.; Hlatky, Mark; Manson, JoAnn E.; Limacher, Marian C.

    2014-01-01

    Background Conclusive data regarding cardiovascular (CV) toxicity of non-steroidal anti-inflammatory drugs (NSAIDs) are sparse. We hypothesized that regular NSAID use is associated with increased risk for CV events in post-menopausal women, and that this association is stronger with greater cyclooxygenase (cox)-2 compared with cox-1 inhibition. Methods and Results Post-menopausal women enrolled in the Women’s Health Initiative (WHI) were classified as regular users or non-users of non-aspirin NSAIDs. Cox regression examined NSAID use as a time-varying covariate and its association with the primary outcome of total CV disease defined as CV death, nonfatal myocardial infarction, or nonfatal stroke. Secondary analyses considered the association of selective cox-2 inhibitors (e.g., celecoxib), non-selective agents with cox-2>cox-1 inhibition (e.g., naproxen), and non-selective agents with cox-1>cox-2 inhibition (e.g., ibuprofen) with the primary outcome. Overall, 160,801 participants were available for analysis (mean follow-up 11.2 years). Regular NSAID use at some point in time was reported by 53,142 participants. Regular NSAID use was associated with an increased hazard for CV events versus no NSAID use (HR=1.10[95% CI 1.06–1.15], P<0.001). Selective cox-2 inhibitors were associated with a modest increased hazard for CV events (HR=1.13[1.04–1.23], P=0.004; celecoxib only HR=1.13[1.01–1.27], P=0.031). Among aspirin users, concomitant selective cox-2 inhibitor use was no longer associated with increased hazard for CV events. There was an increased risk for agents with cox-2>cox-1 inhibition (HR=1.17[1.10–1.24], P<0.001; naproxen only HR=1.22[1.12–1.34], P<0.001). This harmful association remained among concomitant aspirin users. We did not observe a risk elevation for agents with cox-1>cox-2 inhibition (HR=1.01[0.95–1.07], P=0.884; ibuprofen only HR=1.00[0.93–1.07], P=0.996). Conclusions Regular use of selective cox-2 inhibitors and non-selective NSAIDs with cox-2>cox-1 inhibition showed a modestly increased hazard for CV events. Non-selective agents with cox-1>cox-2 inhibition were not associated with increased CV risk. Clinical Trial Registration www.clinicaltrials.gov NCT00000611 PMID:25006185

  9. Analytic continuation of quantum Monte Carlo data by stochastic analytical inference.

    PubMed

    Fuchs, Sebastian; Pruschke, Thomas; Jarrell, Mark

    2010-05-01

    We present an algorithm for the analytic continuation of imaginary-time quantum Monte Carlo data which is strictly based on principles of Bayesian statistical inference. Within this framework we are able to obtain an explicit expression for the calculation of a weighted average over possible energy spectra, which can be evaluated by standard Monte Carlo simulations, yielding as a by-product also the distribution function as a function of the regularization parameter. Our algorithm thus avoids the usual ad hoc assumptions introduced in similar algorithms to fix the regularization parameter. We apply the algorithm to imaginary-time quantum Monte Carlo data and compare the resulting energy spectra with those from a standard maximum-entropy calculation.

  10. Water Residence Time estimation by 1D deconvolution in the form of an l2-regularized inverse problem with smoothness, positivity and causality constraints

    NASA Astrophysics Data System (ADS)

    Meresescu, Alina G.; Kowalski, Matthieu; Schmidt, Frédéric; Landais, François

    2018-06-01

    The Water Residence Time distribution is the equivalent of the impulse response of a linear system allowing the propagation of water through a medium, e.g. the propagation of rain water from the top of the mountain towards the aquifers. We consider the output aquifer levels as the convolution between the input rain levels and the Water Residence Time, starting with an initial aquifer base level. The estimation of the Water Residence Time is important for a better understanding of hydro-bio-geochemical processes and mixing properties of wetlands used as filters in ecological applications, as well as for protecting fresh water sources for wells from pollutants. Common methods of estimating the Water Residence Time focus on cross-correlation, parameter fitting and non-parametric deconvolution. Here we propose a 1D full-deconvolution, regularized, non-parametric inverse-problem algorithm that enforces smoothness and uses constraints of causality and positivity to estimate the Water Residence Time curve. Compared to Bayesian non-parametric deconvolution approaches, it has a fast runtime per test case; compared to the popular and fast cross-correlation method, it produces a more precise Water Residence Time curve even in the case of noisy measurements. The algorithm needs only one regularization parameter to balance smoothness of the Water Residence Time against accuracy of the reconstruction. We propose an approach for automatically finding a suitable value of the regularization parameter from the input data alone. Tests on real data illustrate the potential of this method to analyze hydrological datasets.
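
    A minimal sketch of a smoothness-regularized, non-negative 1D deconvolution of this kind is given below; the discretization, the second-difference smoothness operator and the fixed λ value are illustrative assumptions, not the authors' algorithm or their automatic parameter choice.

```python
# Sketch: l2-regularized, non-negative 1D deconvolution. The aquifer level is
# modeled as the convolution of rain input with a causal, non-negative impulse
# response (the Water Residence Time curve); smoothness comes from a
# second-difference penalty. Illustrative only. Requires numpy and scipy.
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n, m = 200, 60                       # rain samples, length of the WRT curve
rain = rng.exponential(1.0, n)
t = np.arange(m)
wrt_true = t * np.exp(-t / 8.0); wrt_true /= wrt_true.sum()   # toy ground truth

# Causal convolution matrix: level[k] = sum_j rain[k-j] * wrt[j]
col = np.r_[rain, np.zeros(m - 1)]
A = toeplitz(col, np.r_[rain[0], np.zeros(m - 1)])[:n, :]
level = A @ wrt_true + 0.01 * rng.standard_normal(n)          # noisy observations

# Second-difference operator enforcing smoothness of the WRT curve.
D = np.diff(np.eye(m), n=2, axis=0)

lam = 5.0                            # regularization parameter (would be tuned)
A_aug = np.vstack([A, np.sqrt(lam) * D])
b_aug = np.r_[level, np.zeros(D.shape[0])]

# Non-negative least squares on the augmented system enforces positivity;
# causality is built into the convolution model itself.
wrt_est, _ = nnls(A_aug, b_aug)
print("recovered mass: %.3f (true 1.000)" % wrt_est.sum())
```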

  11. Geometry characteristics modeling and process optimization in coaxial laser inside wire cladding

    NASA Astrophysics Data System (ADS)

    Shi, Jianjun; Zhu, Ping; Fu, Geyan; Shi, Shihong

    2018-05-01

    The coaxial laser inside wire cladding method is very promising, as it has a very high efficiency and a consistent interaction between the laser and the wire. In this paper, the energy and mass conservation laws and a regression algorithm are used together to establish mathematical models of the relationship between the layer geometry characteristics (width, height and cross section area) and the process parameters (laser power, scanning velocity and wire feeding speed). Over the selected parameter ranges, the predicted values from the models are compared with the experimentally measured results; minor errors exist, but the predictions follow the same trends. From the models, it is seen that the width of the cladding layer is proportional to both the laser power and the wire feeding speed, while it first increases and then decreases with increasing scanning velocity. The height of the cladding layer is proportional to the scanning velocity and feeding speed and inversely proportional to the laser power. The cross section area increases with increasing feeding speed and decreasing scanning velocity. By using the mathematical models, the geometry characteristics of the cladding layer can be predicted from the known process parameters. Conversely, the process parameters can be calculated from the targeted geometry characteristics. The models are also suitable for multi-layer forming processes. By using the optimized process parameters calculated from the models, a 45 mm-high thin-wall part was formed with smooth side surfaces.

  12. 5 CFR 532.215 - Establishments included in regular appropriated fund surveys.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... in surveys shall be selected under standard probability sample selection procedures. In areas with... establishment list drawn under statistical sampling procedures. [55 FR 46142, Nov. 1, 1990] ...

  13. Estimation of actual evapotranspiration in the Nagqu river basin of the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Zou, Mijun; Zhong, Lei; Ma, Yaoming; Hu, Yuanyuan; Feng, Lu

    2018-05-01

    As a critical component of the energy and water cycle, terrestrial actual evapotranspiration (ET) can be influenced by many factors. This study was mainly devoted to providing accurate and continuous estimations of actual ET for the Tibetan Plateau (TP) and analyzing the effects of its impact factors. In this study, summer observational data from the Coordinated Enhanced Observing Period (CEOP) Asia-Australia Monsoon Project (CAMP) on the Tibetan Plateau (CAMP/Tibet) for 2003 to 2004 were selected to determine actual ET and investigate its relationship with energy, hydrological, and dynamical parameters. Multiple-layer air temperature, relative humidity, net radiation flux, wind speed, precipitation, and soil moisture were used to estimate actual ET. The regression model simulation results were validated with independent data retrieved using the combinatory method. The results suggested that significant correlations exist between actual ET and hydro-meteorological parameters in the surface layer of the Nagqu river basin, among which the most important factors are energy-related elements (net radiation flux and air temperature). The results also suggested that the overall effect of precipitation and of the two-layer wind speed difference on ET depends on whether their positive or negative feedback processes play the more important role. The multivariate linear regression method provided reliable estimations of actual ET; thus, 6-parameter simplified schemes and 14-parameter regular schemes were established.

  14. Rank-Optimized Logistic Matrix Regression toward Improved Matrix Data Classification.

    PubMed

    Zhang, Jianguang; Jiang, Jianmin

    2018-02-01

    While existing logistic regression suffers from overfitting and often fails to consider structural information, we propose a novel matrix-based logistic regression to overcome these weaknesses. In the proposed method, 2D matrices are directly used to learn two groups of parameter vectors along each dimension without vectorization, which allows the proposed method to fully exploit the underlying structural information embedded inside the 2D matrices. Further, we add a joint [Formula: see text]-norm on two parameter matrices, which are organized by aligning each group of parameter vectors in columns. This added co-regularization term has two roles: enhancing the effect of regularization and optimizing the rank during the learning process. With our proposed fast iterative solution, we carried out extensive experiments. The results show that in comparison to both the traditional tensor-based methods and the vector-based regression methods, our proposed solution achieves better performance for matrix data classifications.

  15. Modeling of polychromatic attenuation using computed tomography reconstructed images

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.

    1999-01-01

    This paper presents a procedure for estimating an accurate model of the CT imaging process including spectral effects. As raw projection data are typically unavailable to the end-user, we adopt a post-processing approach that utilizes the reconstructed images themselves. This approach includes errors from x-ray scatter and the nonidealities of the built-in soft tissue correction into the beam characteristics, which is crucial to beam hardening correction algorithms that are designed to be applied directly to CT reconstructed images. We formulate this approach as a quadratic programming problem and propose two different methods, dimension reduction and regularization, to overcome ill conditioning in the model. For the regularization method we use a statistical procedure, Cross Validation, to select the regularization parameter. We have constructed step-wedge phantoms to estimate the effective beam spectrum of a GE CT-I scanner. Using the derived spectrum, we computed the attenuation ratios for the wedge phantoms and found that the worst case modeling error is less than 3% of the corresponding attenuation ratio. We have also built two test (hybrid) phantoms to evaluate the effective spectrum. Based on these test phantoms, we have shown that the effective beam spectrum provides an accurate model for the CT imaging process. Last, we used a simple beam hardening correction experiment to demonstrate the effectiveness of the estimated beam profile for removing beam hardening artifacts. We hope that this estimation procedure will encourage more independent research on beam hardening corrections and will lead to the development of application-specific beam hardening correction algorithms.

  16. 5 CFR 302.401 - Selection and appointment.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... reemployment, reemployment, or regular list on which candidates have not received numerical scores, an agency... candidates have received numerical scores, the agency must make its selection for each vacancy from not more...

  17. 5 CFR 302.401 - Selection and appointment.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... reemployment, reemployment, or regular list on which candidates have not received numerical scores, an agency... candidates have received numerical scores, the agency must make its selection for each vacancy from not more...

  18. 5 CFR 302.401 - Selection and appointment.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... reemployment, reemployment, or regular list on which candidates have not received numerical scores, an agency... candidates have received numerical scores, the agency must make its selection for each vacancy from not more...

  19. [Clinical and morphological variants of diverticular disease in colon].

    PubMed

    Levchenko, S V; Lazebnik, L B; Potapova, V B; Rogozina, V A

    2013-01-01

    The results of a two-stage study are presented in this article. The first stage was a retrospective analysis of 3682 X-ray examinations of the large bowel conducted in 2002-2004 to define the structure of colon disease and to determine gender differences. The second stage was a prospective study conducted from 2003 to 2012, in which 486 patients with diverticular disease were regularly observed. The following parameters were estimated: dynamics of complaints, quality of life, and clinical symptoms. Multiple X-ray and endoscopic examinations were performed, with estimation of the number and size of diverticula and changes of the colon mucosa, and comparison of the X-ray and endoscopic methods in the prognosis of complications. Two basic clinical-morphological variants of diverticular disease (DD) of the colon are distinguished as a result of this research: an IBD-like variant and DD with an ischemic component. The variants differ in pain characteristics, presence of accompanying diseases, quality-of-life parameters, and the morphological findings in the colon mucosa. We suppose that different etiopathogenetic factors in the development of the two variants influence the disease prognosis and the selection of treatment.

  20. Parameter selection with the Hotelling observer in linear iterative image reconstruction for breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Rose, Sean D.; Roth, Jacob; Zimmerman, Cole; Reiser, Ingrid; Sidky, Emil Y.; Pan, Xiaochuan

    2018-03-01

    In this work we investigate an efficient implementation of a region-of-interest (ROI) based Hotelling observer (HO) in the context of parameter optimization for detection of a rod signal at two orientations in linear iterative image reconstruction for DBT. Our preliminary results suggest that ROI-HO performance trends may be efficiently estimated by modeling only the 2D plane perpendicular to the detector and containing the X-ray source trajectory. In addition, the ROI-HO is seen to exhibit orientation dependent trends in detectability as a function of the regularization strength employed in reconstruction. To further investigate the ROI-HO performance in larger 3D system models, we present and validate an iterative methodology for calculating the ROI-HO. Lastly, we present a real data study investigating the correspondence between ROI-HO performance trends and signal conspicuity. Conspicuity of signals in real data reconstructions is seen to track well with trends in ROI-HO detectability. In particular, we observe orientation dependent conspicuity matching the orientation dependent detectability of the ROI-HO.
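
    A minimal numerical sketch of a Hotelling-observer detectability calculation on ROI samples is given below; the synthetic data and the direct covariance inversion are illustrative stand-ins for the iterative ROI-HO methodology described in the abstract.

```python
# Sketch: Hotelling observer (HO) detectability on region-of-interest samples.
# Synthetic signal-present / signal-absent ROIs stand in for reconstructed DBT
# ROIs; direct covariance inversion stands in for the iterative ROI-HO
# estimation discussed in the abstract. Requires numpy only.
import numpy as np

rng = np.random.default_rng(0)
roi_pixels, n_samples = 16 * 16, 2000

# Toy ensembles: pixel noise plus a faint rod-like signal in one class.
background = rng.standard_normal((n_samples, roi_pixels))
signal = np.zeros(roi_pixels)
signal.reshape(16, 16)[8, 4:12] = 0.15          # horizontal rod
absent = background
present = background + signal

delta_mean = present.mean(axis=0) - absent.mean(axis=0)
# Pooled covariance of the two classes.
cov = 0.5 * (np.cov(present, rowvar=False) + np.cov(absent, rowvar=False))

# Hotelling template and observer SNR (detectability index).
template = np.linalg.solve(cov, delta_mean)
snr_ho = np.sqrt(delta_mean @ template)
print("Hotelling observer SNR: %.2f" % snr_ho)
```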

  1. Modelling topographic potential for erosion and deposition using GIS

    Treesearch

    Helena Mitasova; Louis R. Iverson

    1996-01-01

    Modelling of erosion and deposition in complex terrain within a geographical information system (GIS) requires a high resolution digital elevation model (DEM), reliable estimation of topographic parameters, and formulation of erosion models adequate for digital representation of spatially distributed parameters. Regularized spline with tension was integrated within a...

  2. Size-distribution analysis of macromolecules by sedimentation velocity ultracentrifugation and lamm equation modeling.

    PubMed

    Schuck, P

    2000-03-01

    A new method for the size-distribution analysis of polymers by sedimentation velocity analytical ultracentrifugation is described. It exploits the ability of Lamm equation modeling to discriminate between the spreading of the sedimentation boundary arising from sample heterogeneity and from diffusion. Finite element solutions of the Lamm equation for a large number of discrete noninteracting species are combined with maximum entropy regularization to represent a continuous size-distribution. As in the program CONTIN, the parameter governing the regularization constraint is adjusted by variance analysis to a predefined confidence level. Estimates of the partial specific volume and the frictional ratio of the macromolecules are used to calculate the diffusion coefficients, resulting in relatively high-resolution sedimentation coefficient distributions c(s) or molar mass distributions c(M). It can be applied to interference optical data that exhibit systematic noise components, and it does not require solution or solvent plateaus to be established. More details on the size-distribution can be obtained than from van Holde-Weischet analysis. The sensitivity to the values of the regularization parameter and to the shape parameters is explored with the help of simulated sedimentation data of discrete and continuous model size distributions, and by applications to experimental data of continuous and discrete protein mixtures.

  3. Hypocaloric diet and regular moderate aerobic exercise is an effective strategy to reduce anthropometric parameters and oxidative stress in obese patients.

    PubMed

    Gutierrez-Lopez, Liliana; Garcia-Sanchez, Jose Ruben; Rincon-Viquez, Maria de Jesus; Lara-Padilla, Eleazar; Sierra-Vargas, Martha P; Olivares-Corichi, Ivonne M

    2012-01-01

    Studies show that diet and exercise are important in the treatment of obesity. The aim of this study was to determine whether additional regular moderate aerobic exercise during treatment with a hypocaloric diet has a beneficial effect on oxidative stress and molecular damage in the obese patient. Oxidative stress in 16 normal-weight (NW) and 32 class 1 obese (O1) subjects (BMI 30-34.9 kg/m(2)) was assessed by biomarkers of oxidative stress in plasma. Recombinant human insulin was incubated with blood from NW or O1 subjects, and the molecular damage to the hormone was analyzed. Two treatment groups, hypocaloric diet (HD) and hypocaloric diet plus regular moderate aerobic exercise (HDMAE), were formed, and their effects in obese subjects were analyzed. The data showed the presence of oxidative stress in O1 subjects. Molecular damage and polymerization of insulin were observed more frequently in the blood from O1 subjects. The treatment of O1 subjects with HD decreased the anthropometric parameters as well as oxidative stress and molecular damage, which were more effectively prevented by the treatment with HDMAE. HD and HDMAE treatments decreased anthropometric parameters, oxidative stress, and molecular damage in O1 subjects. Copyright © 2012 S. Karger GmbH, Freiburg.

  4. Investigation of Image Reconstruction Parameters of the Mediso nanoScan PC Small-Animal PET/CT Scanner for Two Different Positron Emitters Under NEMA NU 4-2008 Standards.

    PubMed

    Gaitanis, Anastasios; Kastis, George A; Vlastou, Elena; Bouziotis, Penelope; Verginis, Panayotis; Anagnostopoulos, Constantinos D

    2017-08-01

    The Tera-Tomo 3D image reconstruction algorithm (a version of OSEM), provided with the Mediso nanoScan® PC (PET8/2) small-animal positron emission tomograph (PET)/x-ray computed tomography (CT) scanner, has various parameter options such as total level of regularization, subsets, and iterations. Also, the acquisition time in PET plays an important role. This study aims to assess the performance of this new small-animal PET/CT scanner for different acquisition times and reconstruction parameters, for 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) and Ga-68, under the NEMA NU 4-2008 standards. Various image quality metrics were calculated for different realizations of [18F]FDG- and Ga-68-filled image quality (IQ) phantoms. [18F]FDG imaging produced improved images over Ga-68. The best compromise for the optimization of all image quality factors is achieved for at least 30 min acquisition and image reconstruction with 52 iteration updates combined with a high regularization level. A high regularization level at 52 iteration updates and 30 min acquisition time were found to optimize most of the figures of merit investigated.

  5. Integrative analysis of gene expression and copy number alterations using canonical correlation analysis.

    PubMed

    Soneson, Charlotte; Lilljebjörn, Henrik; Fioretos, Thoas; Fontes, Magnus

    2010-04-15

    With the rapid development of new genetic measurement methods, several types of genetic alterations can be quantified in a high-throughput manner. While the initial focus has been on investigating each data set separately, there is an increasing interest in studying the correlation structure between two or more data sets. Multivariate methods based on Canonical Correlation Analysis (CCA) have been proposed for integrating paired genetic data sets. The high dimensionality of microarray data imposes computational difficulties, which have been addressed for instance by studying the covariance structure of the data, or by reducing the number of variables prior to applying the CCA. In this work, we propose a new method for analyzing high-dimensional paired genetic data sets, which mainly emphasizes the correlation structure and still permits efficient application to very large data sets. The method is implemented by translating a regularized CCA to its dual form, where the computational complexity depends mainly on the number of samples instead of the number of variables. The optimal regularization parameters are chosen by cross-validation. We apply the regularized dual CCA, as well as a classical CCA preceded by a dimension-reducing Principal Components Analysis (PCA), to a paired data set of gene expression changes and copy number alterations in leukemia. Using the correlation-maximizing methods, regularized dual CCA and PCA+CCA, we show that without pre-selection of known disease-relevant genes, and without using information about clinical class membership, an exploratory analysis singles out two patient groups, corresponding to well-known leukemia subtypes. Furthermore, the variables showing the highest relevance to the extracted features agree with previous biological knowledge concerning copy number alterations and gene expression changes in these subtypes. Finally, the correlation-maximizing methods are shown to yield results which are more biologically interpretable than those resulting from a covariance-maximizing method, and provide different insight compared to when each variable set is studied separately using PCA. We conclude that regularized dual CCA as well as PCA+CCA are useful methods for exploratory analysis of paired genetic data sets, and can be efficiently implemented also when the number of variables is very large.
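
    As a rough illustration of why the dual form scales with the number of samples rather than the number of variables, the sketch below sets up a small regularized dual CCA on synthetic paired data. It is not the authors' implementation: the regularization parameters are fixed rather than chosen by cross-validation, the data are random stand-ins for expression and copy-number matrices, and only the first canonical pair is extracted.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)

# Hypothetical paired data: many variables, few samples -- the regime where the dual form pays off.
n, p, q = 60, 2000, 500
z = rng.standard_normal(n)                         # shared latent signal across both data sets
X = np.outer(z, rng.standard_normal(p)) + rng.standard_normal((n, p))
Y = np.outer(z, rng.standard_normal(q)) + rng.standard_normal((n, q))
X -= X.mean(0); Y -= Y.mean(0)

def dual_cca(X, Y, rx=0.5, ry=0.5):
    """Regularized CCA in dual form: every matrix involved is n x n, not p x p."""
    Kx, Ky = X @ X.T, Y @ Y.T                      # Gram matrices over samples
    n = Kx.shape[0]
    cx = rx * np.trace(Kx) / n                     # scale the ridge to the kernel magnitude
    cy = ry * np.trace(Ky) / n
    zero = np.zeros((n, n))
    A = np.block([[zero, Kx @ Ky], [Ky @ Kx, zero]])
    B = np.block([[Kx @ Kx + cx * Kx + 1e-8 * np.eye(n), zero],
                  [zero, Ky @ Ky + cy * Ky + 1e-8 * np.eye(n)]])
    w, V = eigh(A, B)                              # generalized symmetric eigenproblem
    alpha, beta = V[:n, -1], V[n:, -1]             # leading pair of dual weight vectors
    u, v = Kx @ alpha, Ky @ beta                   # canonical variates
    return np.corrcoef(u, v)[0, 1]

print("first canonical correlation (in-sample):", round(dual_cca(X, Y), 3))
```

    The printed correlation is evaluated on the training samples and is therefore optimistic; the cross-validated choice of regularization parameters described in the abstract is not reproduced here.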

  6. More academics in regular schools? The effect of regular versus special school placement on academic skills in Dutch primary school students with Down syndrome.

    PubMed

    de Graaf, G; van Hove, G; Haveman, M

    2013-01-01

    Studies from the UK have shown that children with Down syndrome acquire more academic skills in regular education. Does this likewise hold true for the Dutch situation, even after the effect of selective placement has been taken into account? In 2006, an extensive questionnaire was sent to 160 parents of (specially and regularly placed) children with Down syndrome (born 1993-2000) in primary education in the Netherlands with a response rate of 76%. Questions were related to the child's school history, academic and non-academic skills, intelligence quotient, parental educational level, the extent to which parents worked on academics with their child at home, and the amount of academic instructional time at school. Academic skills were predicted with the other variables as independent variables. For the children in regular schools, much more time proved to be spent on academics. Academic performance appeared to be predicted reasonably well on the basis of age, non-academic skills, parental educational level and the extent to which parents worked at home on academics. However, more variance could be predicted when the total number of years that the child spent in regular education was added, especially regarding reading and to a lesser extent regarding writing and math. In addition, we could prove that this finding could not be accounted for by endogeneity. Regularly placed children with Down syndrome learn more academics. However, this is not a direct consequence of inclusive placement and age alone, but is also determined by factors such as cognitive functioning, non-academic skills, parental educational level and the extent to which parents worked at home on academics. Nevertheless, it could be proven that the more advanced academic skills of the regularly placed children are not only due to selective placement. The positive effect of regular school on academics appeared to be most pronounced for reading skills. © 2011 The Authors. Journal of Intellectual Disability Research © 2011 Blackwell Publishing Ltd.

  7. Identification of Cell Type-Specific Differences in Erythropoietin Receptor Signaling in Primary Erythroid and Lung Cancer Cells

    PubMed Central

    Salopiata, Florian; Depner, Sofia; Wäsch, Marvin; Böhm, Martin E.; Mücke, Oliver; Plass, Christoph; Lehmann, Wolf D.; Kreutz, Clemens; Timmer, Jens; Klingmüller, Ursula

    2016-01-01

    Lung cancer, with its most prevalent form non-small-cell lung carcinoma (NSCLC), is one of the leading causes of cancer-related deaths worldwide, and is commonly treated with chemotherapeutic drugs such as cisplatin. Lung cancer patients frequently suffer from chemotherapy-induced anemia, which can be treated with erythropoietin (EPO). However, studies have indicated that EPO not only promotes erythropoiesis in hematopoietic cells, but may also enhance survival of NSCLC cells. Here, we verified that the NSCLC cell line H838 expresses functional erythropoietin receptors (EPOR) and that treatment with EPO reduces cisplatin-induced apoptosis. To pinpoint differences in EPO-induced survival signaling in erythroid progenitor cells (CFU-E, colony forming unit-erythroid) and H838 cells, we combined mathematical modeling with a method for feature selection, the L1 regularization. Utilizing an example model and simulated data, we demonstrated that this approach enables the accurate identification and quantification of cell type-specific parameters. We applied our strategy to quantitative time-resolved data of EPO-induced JAK/STAT signaling generated by quantitative immunoblotting, mass spectrometry and quantitative real-time PCR (qRT-PCR) in CFU-E and H838 cells as well as H838 cells overexpressing human EPOR (H838-HA-hEPOR). The established parsimonious mathematical model was able to simultaneously describe the data sets of CFU-E, H838 and H838-HA-hEPOR cells. Seven cell type-specific parameters were identified that included for example parameters for nuclear translocation of STAT5 and target gene induction. Cell type-specific differences in target gene induction were experimentally validated by qRT-PCR experiments. The systematic identification of pathway differences and sensitivities of EPOR signaling in CFU-E and H838 cells revealed potential targets for intervention to selectively inhibit EPO-induced signaling in the tumor cells but leave the responses in erythroid progenitor cells unaffected. Thus, the proposed modeling strategy can be employed as a general procedure to identify cell type-specific parameters and to recommend treatment strategies for the selective targeting of specific cell types. PMID:27494133

  8. Improvement in Running Economy after 6 Weeks of Plyometric Training.

    ERIC Educational Resources Information Center

    Turner, Amanda M.; Owings, Matt; Schwane, James A.

    2003-01-01

    Investigated whether a 6-week regimen of plyometric training would improve running economy. Data were collected on 18 regular but not highly trained distance runners who participated in either regular running training or plyometric training. Results indicated that 6 weeks of plyometric training improved running economy at selected speeds in this…

  9. Weighted low-rank sparse model via nuclear norm minimization for bearing fault detection

    NASA Astrophysics Data System (ADS)

    Du, Zhaohui; Chen, Xuefeng; Zhang, Han; Yang, Boyuan; Zhai, Zhi; Yan, Ruqiang

    2017-07-01

    It is a fundamental task in the machine fault diagnosis community to detect impulsive signatures generated by the localized faults of bearings. The main goal of this paper is to exploit the low-rank physical structure of periodic impulsive features and further establish a weighted low-rank sparse model for bearing fault detection. The proposed model mainly consists of three basic components: an adaptive partition window, a nuclear norm regularization and a weighted sequence. Firstly, due to the periodic repetition mechanism of the impulsive feature, an adaptive partition window can be designed to transform the impulsive feature into a data matrix. The highlight of the partition window is that it accumulates all local feature information and aligns it. As a result, all columns of the data matrix share similar waveforms and a core physical phenomenon arises, i.e., the singular values of the data matrix demonstrate a sparse distribution pattern. Therefore, a nuclear norm regularization is enforced to capture that sparse prior. However, the nuclear norm regularization treats all singular values equally and thus ignores the basic fact that larger singular values carry more information about the impulsive features and should be preserved as much as possible. Therefore, a weighted sequence with adaptively tuned weights, inversely proportional to the singular amplitudes, is adopted to preserve the contribution of the large singular values. On the other hand, the proposed model is difficult to solve due to its non-convexity, and thus a new algorithm is developed to find a satisfactory stationary solution by alternately applying a proximal operator and least-squares fitting. Moreover, the sensitivity analysis and selection principles of the algorithmic parameters are comprehensively investigated through a set of numerical experiments, which shows that the proposed method is robust and has only a few adjustable parameters. Lastly, the proposed model is applied to wind turbine (WT) bearing fault detection and its effectiveness is sufficiently verified. Compared with the currently popular bearing fault diagnosis techniques, wavelet analysis and spectral kurtosis, our model achieves a higher diagnostic accuracy.
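
    The core mechanism (fold the signal by its fault period so the impulses align, then shrink the singular values with thresholds inversely proportional to their amplitudes) can be sketched as below. This is only a hedged toy version: the paper's adaptive partition window, proximal/least-squares alternation and parameter-selection rules are replaced by a fixed period and a single weighted singular-value thresholding step, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical bearing signal: a periodic impulse train (fault period T samples) in noise.
T, n_periods = 128, 20
x = np.zeros(T * n_periods)
x[::T] = 5.0
y = x + rng.standard_normal(x.size)

# Partition window: fold the signal so each column holds one period, aligning the
# impulses and making the underlying (noise-free) matrix low-rank.
Y = y.reshape(n_periods, T).T                      # shape (T, n_periods)

# Weighted singular-value thresholding: thresholds inversely proportional to the
# singular amplitudes, so the large (informative) singular values are preserved.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
scale = np.median(s[s.size // 2:])                 # crude scale of the noise singular values
s_thr = np.maximum(s - 3.0 * scale * (scale / (s + 1e-12)), 0.0)
x_hat = ((U * s_thr) @ Vt).T.ravel()

snr_gain = 10 * np.log10(np.sum((y - x) ** 2) / np.sum((x_hat - x) ** 2))
print(f"SNR gain from weighted thresholding: {snr_gain:.1f} dB")
```

    The printed gain is only indicative for this synthetic example; the abstract's claims about real wind-turbine data are not reproduced by this sketch.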

  10. Disparities in Regular Source of Dental Care among Mothers of Medicaid-Enrolled Preschool Children

    PubMed Central

    Grembowski, David; Spiekerman, Charles; Milgrom, Peter

    2008-01-01

    For mothers of Medicaid children aged 3 to 6 years, we examined whether mothers’ characteristics and local supply of dentists and public dental clinics are associated with having a regular source of dental care. Disproportionate stratified sampling by racial/ethnic group selected 11,305 children aged 3 to 6 in Medicaid in Washington state. Mothers (N=4,373) completed a mixed-mode survey that was combined with dental supply measures. Results reveal 38% of mothers had a regular dental place and 27% had a regular dentist. Dental insurance, greater education, income, and length of residence and better mental health were associated with having a regular place or dentist for Black, Hispanic and White mothers, along with increased supply of private dentists and safety net clinics for White and Hispanic mothers. Mothers lacking a regular source of dental care may increase oral health disparities in their children. PMID:17982208

  11. Acoustic and elastic waveform inversion best practices

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan T.

    Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence, one or two test cases are not enough to reliably inform such decisions. We identify best practices instead using two global, one regional and four near-surface acoustic test problems. To obtain meaningful quantitative comparisons, we carry out hundreds of acoustic inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that L-BFGS provides computational savings over nonlinear conjugate gradient methods in a wide variety of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization, and total variation regularization are effective in different contexts. Besides these issues, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details have a strong effect on computational cost, regardless of the chosen material parameterization or nonlinear optimization algorithm. Building on the acoustic inversion results, we carry out elastic experiments with four test problems, three objective functions, and four material parameterizations. The choice of parameterization for isotropic elastic media is found to be more complicated than previous studies suggest, with "wavespeed-like" parameters performing well with phase-based objective functions and Lamé parameters performing well with amplitude-based objective functions. Reliability and efficiency can be even harder to achieve in transversely isotropic elastic inversions because rotation angle parameters describing fast-axis direction are difficult to recover. Using Voigt or Chen-Tromp parameters avoids the need to include rotation angles explicitly and provides an effective strategy for anisotropic inversion. The need for flexible and portable workflow management tools for seismic inversion also poses a major challenge. In a final chapter, the software used to carry out the above experiments is described and instructions for reproducing experimental results are given.

  12. The cost of uniqueness in groundwater model calibration

    NASA Astrophysics Data System (ADS)

    Moore, Catherine; Doherty, John

    2006-04-01

    Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The "cost of uniqueness" is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly "well calibrated". Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for an hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, this possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process to be gained. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization.

  13. Adaptively Tuned Iterative Low Dose CT Image Denoising

    PubMed Central

    Hashemi, SayedMasoud; Paul, Narinder S.; Beheshti, Soosan; Cobbold, Richard S. C.

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972
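
    The parameter-update idea (compare the statistics of the denoising residual with those expected from the additive noise, then adjust the regularization strength) can be mimicked on a toy 1-D problem. The sketch below is not NCRE or BM3D: it uses a simple quadratic smoothing penalty and a bisection on the regularization parameter until the residual variance matches an assumed noise variance.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical 1-D "low-dose" profile: a smooth signal plus noise of known variance.
n, sigma = 400, 0.2
t = np.linspace(0, 1, n)
clean = np.sin(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)
noisy = clean + sigma * rng.standard_normal(n)

D = np.diff(np.eye(n), n=2, axis=0)                # second-difference (smoothing) operator

def denoise(lam):
    """Quadratic smoothing: argmin ||u - noisy||^2 + lam * ||D u||^2."""
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, noisy)

# Residual-statistics check: bisect on log10(lambda) until the residual variance
# matches the assumed noise variance (a stand-in for comparing residuals with the
# statistics of the additive noise).
lo, hi = -4.0, 8.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if (noisy - denoise(10.0 ** mid)).var() < sigma ** 2:
        lo = mid                                   # residuals too small: smooth more
    else:
        hi = mid
lam = 10.0 ** (0.5 * (lo + hi))
print(f"selected lambda ~ {lam:.2f}, residual std {np.std(noisy - denoise(lam)):.3f} (target {sigma})")
```

    The bisection works here because the residual norm grows monotonically with the regularization weight; the published method applies the analogous check iteratively inside a block-matching 3D filtering loop.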

  14. Tridimensional assessment of adductor spasmodic dysphonia pre- and post-treatment with Botulinum toxin.

    PubMed

    Dejonckere, P H; Neumann, K J; Moerman, M B J; Martens, J P; Giordano, A; Manfredi, C

    2012-04-01

    Spasmodic dysphonia voices form, in the same way as substitution voices, a particular category of dysphonia that seems not well suited to a standardized basic multidimensional assessment protocol, like the one proposed by the European Laryngological Society. Thirty-three exhaustive analyses were performed on voices of 19 patients diagnosed with adductor spasmodic dysphonia (SD), before and after treatment with Botulinum toxin. The speech material consisted of 40 short sentences phonetically selected for constant voicing. Seven perceptual parameters (traditional and dedicated) were blindly rated by a panel of experienced clinicians. Nine acoustic measures (mainly based on voicing evidence and periodicity) were obtained by a special analysis program suited for strongly irregular signals and validated with synthesized deviant voices. Patients also filled in a VHI questionnaire. Significant improvement is shown by all three approaches. The traditional GRB perceptual parameters appear to be adequate for these patients. Conversely, the special acoustic analysis program is successful in objectively quantifying the improved regularity of vocal fold vibration: the basic jitter remains the most valuable parameter, when reliably quantified. The VHI is well suited for assessing voice-related quality of life. Nevertheless, when considering pre-therapy and post-therapy changes, the current study illustrates a complete lack of correlation between the perceptual, acoustic, and self-assessment dimensions. Assessment of SD-voices needs to be tridimensional.

  15. Semi-automated brain tumor segmentation on multi-parametric MRI using regularized non-negative matrix factorization.

    PubMed

    Sauwen, Nicolas; Acou, Marjan; Sima, Diana M; Veraart, Jelle; Maes, Frederik; Himmelreich, Uwe; Achten, Eric; Huffel, Sabine Van

    2017-05-04

    Segmentation of gliomas in multi-parametric (MP-)MR images is challenging due to their heterogeneous nature in terms of size, appearance and location. Manual tumor segmentation is a time-consuming task and clinical practice would benefit from (semi-) automated segmentation of the different tumor compartments. We present a semi-automated framework for brain tumor segmentation based on non-negative matrix factorization (NMF) that does not require prior training of the method. L1-regularization is incorporated into the NMF objective function to promote spatial consistency and sparseness of the tissue abundance maps. The pathological sources are initialized through user-defined voxel selection. Knowledge about the spatial location of the selected voxels is combined with tissue adjacency constraints in a post-processing step to enhance segmentation quality. The method is applied to an MP-MRI dataset of 21 high-grade glioma patients, including conventional, perfusion-weighted and diffusion-weighted MRI. To assess the effect of using MP-MRI data and the L1-regularization term, analyses are also run using only conventional MRI and without L1-regularization. Robustness against user input variability is verified by considering the statistical distribution of the segmentation results when repeatedly analyzing each patient's dataset with a different set of random seeding points. Using L1-regularized semi-automated NMF segmentation, mean Dice-scores of 65%, 74%, and 80% are found for active tumor, the tumor core and the whole tumor region. Mean Hausdorff distances of 6.1 mm, 7.4 mm and 8.2 mm are found for active tumor, the tumor core and the whole tumor region. Lower Dice-scores and higher Hausdorff distances are found without L1-regularization and when only considering conventional MRI data. Based on the mean Dice-scores and Hausdorff distances, segmentation results are competitive with state-of-the-art in literature. Robust results were found for most patients, although careful voxel selection is mandatory to avoid sub-optimal segmentation.
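
    A minimal sketch of the underlying building block, NMF with an L1 penalty on the abundance matrix, is given below. It assumes synthetic stand-in data rather than MP-MRI, uses plain multiplicative updates, and omits the user-defined voxel seeding and tissue-adjacency post-processing described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: each column is a voxel's multi-parametric feature vector, written
# (approximately) as a non-negative mixture of a few tissue "sources".
n_features, n_voxels, n_sources = 12, 5000, 4
W_true = rng.random((n_features, n_sources))
H_true = rng.random((n_sources, n_voxels)) ** 3          # sparse-ish abundances
V = W_true @ H_true + 0.01 * rng.random((n_features, n_voxels))

def l1_nmf(V, k, lam=0.5, n_iter=300, eps=1e-9):
    """Multiplicative-update NMF with an L1 penalty (weight lam) on the abundances H."""
    W = rng.random((V.shape[0], k))
    H = rng.random((k, V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)       # the L1 term adds lam to the denominator
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = l1_nmf(V, n_sources)
print("relative fit error:", round(float(np.linalg.norm(V - W @ H) / np.linalg.norm(V)), 3))
print("fraction of near-zero abundances:", round(float(np.mean(H < 1e-3)), 3))
```

    Increasing the assumed weight lam drives more abundance entries to (near) zero, which is the sparseness effect the abstract attributes to the L1 term.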

  16. Bypassing the Limits of L1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    NASA Astrophysics Data System (ADS)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be 'modern least-squares'. The use of the ℓ1 norm as a sparsity-inducing regularizer leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindle detection methods.
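
    The central idea of the first part (a parameterized non-convex penalty whose non-convexity is limited so the overall cost stays convex) can be shown in the scalar case. The sketch below uses a minimax-concave (MC) type penalty, whose minimizer is the firm threshold; the exact penalties and parameter ranges used in the thesis may differ, and the numbers are purely illustrative.

```python
import numpy as np

def soft(y, lam):
    """Soft threshold: the prox of the convex l1 penalty (shrinks every surviving value by lam)."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def firm(y, lam, a):
    """Minimizer of 0.5*(y - x)**2 + lam*phi(x; a) for an MC-type penalty phi.
    The scalar cost stays strictly convex as long as 0 < a < 1/lam."""
    assert 0.0 < a < 1.0 / lam
    return np.where(np.abs(y) <= lam, 0.0,
                    np.where(np.abs(y) >= 1.0 / a, y,
                             np.sign(y) * (np.abs(y) - lam) / (1.0 - lam * a)))

y = np.array([-4.0, -0.5, 0.2, 1.5, 6.0])   # noisy observations of a sparse signal
lam, a = 1.0, 0.5                            # a < 1/lam keeps the cost convex
print("soft:", soft(y, lam))                 # large values biased toward zero by lam
print("firm:", firm(y, lam, a))              # values with |y| >= 1/a = 2 left unshrunk
```

    The contrast in the printed outputs is the point of the construction: both estimators kill the small entries, but only the non-convex (yet convexity-preserving) penalty leaves the large entries unbiased.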

  17. The Assessment of Selectivity in Different Quadrupole-Orbitrap Mass Spectrometry Acquisition Modes

    NASA Astrophysics Data System (ADS)

    Berendsen, Bjorn J. A.; Wegh, Robin S.; Meijer, Thijs; Nielen, Michel W. F.

    2015-02-01

    Selectivity of the confirmation of identity in liquid chromatography (tandem) mass spectrometry using Q-Orbitrap instrumentation was assessed using different acquisition modes based on a representative experimental data set constructed from 108 samples, including six different matrix extracts and containing over 100 analytes each. Single stage full scan, all ion fragmentation, and product ion scanning were applied. By generating reconstructed ion chromatograms using a unit mass window in targeted MS2, selected reaction monitoring (SRM), as regularly applied on triple-quadrupole instruments, was mimicked. This facilitated the comparison of single stage full scan, all ion fragmentation, (mimicked) SRM, and product ion scanning applying a mass window down to 1 ppm. Single-factor Analysis of Variance was carried out on the variance (s²) of the mass error to determine which factors and interactions are significant parameters with respect to selectivity. We conclude that selectivity is related to the target compound (mainly the mass defect), the matrix, sample clean-up, concentration, and mass resolution. Selectivity of the different instrumental configurations was quantified by counting the number of interfering peaks observed in the chromatograms. We conclude that precursor ion selection significantly contributes to selectivity: monitoring a single product ion at high mass accuracy with a 1 Da precursor ion window proved to be as selective as, or more selective than, monitoring two transition products in mimicked SRM. In contrast, monitoring a single fragment in all ion fragmentation mode results in significantly lower selectivity than mimicked SRM. After a thorough inter-laboratory evaluation study, the results of this study can be used for a critical reassessment of the current identification points system and contribute to the next generation of evidence-based and robust performance criteria in residue analysis and sports doping.

  18. Using Predictive Uncertainty Analysis to Assess Hydrologic Model Performance for a Watershed in Oregon

    NASA Astrophysics Data System (ADS)

    Brannan, K. M.; Somor, A.

    2016-12-01

    A variety of statistics are used to assess watershed model performance, but these statistics do not directly answer the question: what is the uncertainty of my prediction? Understanding predictive uncertainty is important when using a watershed model to develop a Total Maximum Daily Load (TMDL). TMDLs are a key component of the US Clean Water Act and specify the amount of a pollutant that can enter a waterbody when the waterbody meets water quality criteria. TMDL developers use watershed models to estimate pollutant loads from nonpoint sources of pollution. We are developing a TMDL for bacteria impairments in a watershed in the Coastal Range of Oregon. We set up an HSPF model of the watershed and used the calibration software PEST to estimate HSPF hydrologic parameters and then perform predictive uncertainty analysis of stream flow. We used Monte-Carlo simulation to run the model with 1,000 different parameter sets and assess predictive uncertainty. In order to reduce the chance of specious parameter sets, we accounted for the relationships among parameter values by using mathematically-based regularization techniques and an estimate of the parameter covariance when generating random parameter sets. We used a novel approach to select flow data for predictive uncertainty analysis. We set aside flow data that occurred on days that bacteria samples were collected. We did not use these flows in the estimation of the model parameters. We calculated a percent uncertainty for each flow observation based on 1,000 model runs. We also used several methods to visualize results with an emphasis on making the data accessible to both technical and general audiences. We will use the predictive uncertainty estimates in the next phase of our work, simulating bacteria fate and transport in the watershed.
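
    The covariance-aware sampling step can be sketched as follows. The parameter names, means and covariance below are hypothetical stand-ins for PEST output; the point is only that drawing parameter sets through a Cholesky factor of the estimated covariance preserves the relationships among parameters, unlike independent sampling.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical calibrated means, standard deviations and correlation structure for
# five HSPF-like hydrologic parameters (stand-ins for PEST's uncertainty output).
mean = np.array([0.30, 1.20, 0.05, 4.00, 0.80])
sd   = np.array([0.05, 0.20, 0.01, 0.50, 0.10])
R    = 0.6 ** np.abs(np.subtract.outer(np.arange(5), np.arange(5)))  # valid correlation matrix
cov  = np.outer(sd, sd) * R

# Draw 1,000 parameter sets that respect the estimated covariance, rather than
# sampling each parameter independently (which would ignore their relationships).
L = np.linalg.cholesky(cov)
samples = mean + rng.standard_normal((1000, 5)) @ L.T

print("sample corr(p0, p1):", round(np.corrcoef(samples[:, 0], samples[:, 1])[0, 1], 2))
print("target corr(p0, p1):", round(R[0, 1], 2))
```

    Each sampled row would then be written back into the HSPF input files and the model re-run, with the spread of the resulting simulated flows giving the per-observation percent uncertainty.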

  19. The Effects of 3-Month Skill-Based and Plyometric Conditioning on Fitness Parameters in Junior Female Volleyball Players.

    PubMed

    Idrizovic, Kemal; Gjinovci, Bahri; Sekulic, Damir; Uljevic, Ognjen; João, Paulo Vicente; Spasic, Miodrag; Sattler, Tine

    2018-02-24

    This study compared the effects of skill-based and plyometric conditioning (both performed in addition to regular volleyball training twice a week for 12 wk) on fitness parameters in female junior volleyball players. The participants [n = 47; age: 16.6 (0.6) y; mass: 59.4 (8.1) kg; height: 175.1 (3.0) cm] were randomized into plyometric (n = 13), skill-based (n = 17), and control (n = 17) groups. The variables included body height, body mass, calf girth, calf skinfold, corrected calf girth, countermovement jump, 20-m sprint, medicine ball toss, and sit-and-reach test. Two-way analysis of variance (time × group) effects for time were significant (P < .05) for all variables except body mass. Significant group × time interactions were observed for calf skinfold [η² = .14; medium effect size (ES)], 20-m sprint (η² = .09; small ES), countermovement jump (η² = .29; large ES), and medicine ball toss (η² = .58; large ES), with greater gains (reduction of skinfold) for the plyometric group, and for sit-and-reach (η² = .35; large ES), with greater gains in the plyometric and skill-based groups. The magnitude-based inference indicated positive changes in 1) medicine ball toss and countermovement jump for all groups; 2) sit-and-reach for the plyometric and skill-based groups; and 3) 20-m sprint, calf girth, calf skinfold, and corrected calf girth for the plyometric group only. Selected variables can be improved by adding 2 plyometric training sessions per week throughout the 12-week period. Additional skill-based conditioning did not contribute to improvement in the studied variables compared with regular volleyball training.

  20. Hypothesis of Lithocoding: Origin of the Genetic Code as a "Double Jigsaw Puzzle" of Nucleobase-Containing Molecules and Amino Acids Assembled by Sequential Filling of Apatite Mineral Cellules.

    PubMed

    Skoblikow, Nikolai E; Zimin, Andrei A

    2016-05-01

    The hypothesis of direct coding, assuming the direct contact of pairs of coding molecules with amino acid side chains in hollow unit cells (cellules) of a regular crystal-structure mineral is proposed. The coding nucleobase-containing molecules in each cellule (named "lithocodon") partially shield each other; the remaining free space determines the stereochemical character of the filling side chain. Apatite-group minerals are considered as the most preferable for this type of coding (named "lithocoding"). A scheme of the cellule with certain stereometric parameters, providing for the isomeric selection of contacting molecules is proposed. We modelled the filling of cellules with molecules involved in direct coding, with the possibility of coding by their single combination for a group of stereochemically similar amino acids. The regular ordered arrangement of cellules enables the polymerization of amino acids and nucleobase-containing molecules in the same direction (named "lithotranslation") preventing the shift of coding. A table of the presumed "LithoCode" (possible and optimal lithocodon assignments for abiogenically synthesized α-amino acids involved in lithocoding and lithotranslation) is proposed. The magmatic nature of the mineral, abiogenic synthesis of organic molecules and polymerization events are considered within the framework of the proposed "volcanic scenario".

  1. Strength training for children and adolescents.

    PubMed

    Faigenbaum, A D

    2000-10-01

    The potential benefits of youth strength training extend beyond an increase in muscular strength and may include favorable changes in selected health- and fitness-related measures. If appropriate training guidelines are followed, regular participation in a youth strength-training program has the potential to increase bone mineral density, improve motor performance skills, enhance sports performance, and better prepare our young athletes for the demands of practice and competition. Despite earlier concerns regarding the safety and efficacy of youth strength training, current public health objectives now aim to increase the number of boys and girls aged 6 and older who regularly participate in physical activities that enhance and maintain muscular fitness. Parents, teachers, coaches, and healthcare providers should realize that youth strength training is a specialized method of conditioning that can offer enormous benefit but at the same time can result in serious injury if established guidelines are not followed. With qualified instruction, competent supervision, and an appropriate progression of the volume and intensity of training, children and adolescents can not only learn advanced strength training exercises but can also feel good about their performances and have fun. Additional clinical trials involving children and adolescents are needed to further explore the acute and chronic effects of strength training on a variety of anatomical, physiological, and psychological parameters.

  2. On regularizing the MCTDH equations of motion

    NASA Astrophysics Data System (ADS)

    Meyer, Hans-Dieter; Wang, Haobin

    2018-03-01

    The Multiconfiguration Time-Dependent Hartree (MCTDH) approach leads to equations of motion (EOM) which become singular when there are unoccupied so-called single-particle functions (SPFs). Starting from a Hartree product, all SPFs, except the first one, are unoccupied initially. To solve the MCTDH-EOMs numerically, one therefore has to remove the singularity by a regularization procedure. Usually the inverse of a density matrix is regularized. Here we argue and show that regularizing the coefficient tensor, which in turn regularizes the density matrix as well, leads to an improved performance of the EOMs. The initially unoccupied SPFs are rotated faster into their "correct direction" in Hilbert space and the final results are less sensitive to the choice of the value of the regularization parameter. For a particular example (a spin-boson system studied with a transformed Hamiltonian), we could even show that only with the new regularization scheme could one obtain correct results. Finally, in Appendix A, a new integration scheme for the MCTDH-EOMs developed by Lubich and co-workers is discussed. It is argued that this scheme does not solve the problem of the unoccupied natural orbitals because this scheme ignores the latter and does not propagate them at all.

  3. Stark width regularities within spectral series of the lithium isoelectronic sequence

    NASA Astrophysics Data System (ADS)

    Tapalaga, Irinel; Trklja, Nora; Dojčinović, Ivan P.; Purić, Jagoš

    2018-03-01

    Stark width regularities within spectral series of the lithium isoelectronic sequence have been studied in an approach that includes both neutrals and ions. The influence of environmental conditions and certain atomic parameters on the Stark widths of spectral lines has been investigated. This study gives a simple model for the calculation of Stark broadening data for spectral lines within the lithium isoelectronic sequence. The proposed model requires fewer parameters than any other model. The obtained relations were used for predictions of Stark widths for transitions that have not yet been measured or calculated. In the framework of the present research, three algorithms for fast data processing have been made and they enable quality control and provide verification of the theoretically calculated results.

  4. On a full Bayesian inference for force reconstruction problems

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    In a previous paper, the authors introduced a flexible methodology for reconstructing mechanical sources in the frequency domain from prior local information on both their nature and location over a linear and time invariant structure. The proposed approach was derived from Bayesian statistics, because of its ability in mathematically accounting for experimenter's prior knowledge. However, since only the Maximum a Posteriori estimate was computed, the posterior uncertainty about the regularized solution given the measured vibration field, the mechanical model and the regularization parameter was not assessed. To answer this legitimate question, this paper fully exploits the Bayesian framework to provide, from a Markov Chain Monte Carlo algorithm, credible intervals and other statistical measures (mean, median, mode) for all the parameters of the force reconstruction problem.
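
    A minimal sketch of the sampling idea (replace a single MAP estimate with posterior samples from which credible intervals and summary statistics are read off) is given below for a tiny linear force-identification problem. It is not the authors' model: the transfer matrix, noise level and priors are invented, and a plain random-walk Metropolis sampler stands in for their MCMC scheme.

```python
import numpy as np

rng = np.random.default_rng(10)

# Hypothetical linear vibration model: measured responses y = G @ f + noise, with two
# unknown force amplitudes f (the matrix, noise level and prior are invented).
G = np.array([[1.0, 0.4], [0.3, 1.2], [0.8, 0.7], [0.2, 0.9]])
f_true = np.array([2.0, -1.0])
sigma = 0.05
y = G @ f_true + sigma * rng.standard_normal(4)

def log_post(f, tau=10.0):
    """Gaussian likelihood plus a zero-mean Gaussian prior of standard deviation tau."""
    return (-0.5 * np.sum((y - G @ f) ** 2) / sigma ** 2
            - 0.5 * np.sum(f ** 2) / tau ** 2)

# Random-walk Metropolis: draw posterior samples instead of keeping only a MAP point.
samples, f, lp = [], np.zeros(2), log_post(np.zeros(2))
for _ in range(20000):
    prop = f + 0.05 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        f, lp = prop, lp_prop
    samples.append(f)
samples = np.array(samples[5000:])                        # discard burn-in

lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)
print("posterior mean:", samples.mean(axis=0).round(3))
print("95% credible intervals:", list(zip(lo.round(3), hi.round(3))))
```

    Medians, modes and other summaries quoted in the abstract can be read from the same sample array; for frequency-domain force reconstruction the unknowns would be complex-valued and far more numerous, which is where more sophisticated MCMC schemes become necessary.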

  5. Optimizing phonon space in the phonon-coupling model

    NASA Astrophysics Data System (ADS)

    Tselyaev, V.; Lyutorovich, N.; Speth, J.; Reinhard, P.-G.

    2017-08-01

    We present a new scheme to select the most relevant phonons in the phonon-coupling model, named here the time-blocking approximation (TBA). The new criterion, based on the phonon-nucleon coupling strengths rather than on B(EL) values, is more selective and thus produces much smaller phonon spaces in the TBA. This is beneficial in two respects: first, it curbs the computational cost, and second, it reduces the danger of double counting in the expansion basis of the TBA. We use here the TBA in a form where the coupling strength is regularized to keep the given Hartree-Fock ground state stable. The scheme is implemented in a random-phase approximation and TBA code based on the Skyrme energy functional. We first explore carefully the cutoff dependence with the new criterion and can work out a natural (optimal) cutoff parameter. Then we use the freshly developed and tested scheme for a survey of giant resonances and low-lying collective states in six doubly magic nuclei, looking also at the dependence of the results when varying the Skyrme parametrization.

  6. Can Ebola virus evolve to be less virulent in humans?

    PubMed

    Sofonea, M T; Aldakak, L; Boullosa, L F V V; Alizon, S

    2018-03-01

    Understanding Ebola virus (EBOV) virulence evolution not only is timely but also raises specific questions because it causes one of the most virulent human infections and it is capable of transmission after the death of its host. Using a compartmental epidemiological model that captures three transmission routes (by regular contact, via dead bodies and by sexual contact), we infer the evolutionary dynamics of case fatality ratio on the scale of an outbreak and in the long term. Our major finding is that the virus's specific life cycle imposes selection for high levels of virulence and that this pattern is robust to parameter variations in biological ranges. In addition to shedding a new light on the ultimate causes of EBOV's high virulence, these results generate testable predictions and contribute to informing public health policies. In particular, burial management stands out as the most appropriate intervention since it decreases the R0 of the epidemics, while imposing selection for less virulent strains. © 2017 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2017 European Society For Evolutionary Biology.

  7. Correlation between Glutathione Peroxidase Activity and Anthropometrical Parameters in Adolescents with Down Syndrome

    ERIC Educational Resources Information Center

    Ordonez, F. J.; Rosety-Rodriguez, M.

    2007-01-01

    Since we have recently found that regular exercise increased erythrocyte antioxidant enzyme activities such as glutathione peroxidase (GPX) in adolescents with Down syndrome, these programs may be recommended. This study was designed to assess the role of anthropometrical parameters as easy, economic and non-invasive biomarkers of GPX. Thirty-one…

  8. Study of the method of water-injected meat identifying based on low-field nuclear magnetic resonance

    NASA Astrophysics Data System (ADS)

    Xu, Jianmei; Lin, Qing; Yang, Fang; Zheng, Zheng; Ai, Zhujun

    2018-01-01

    The aim of this study was to apply the low-field nuclear magnetic resonance technique to study the regular variation of the transverse relaxation spectral parameters of water-injected meat with the proportion of injected water. On this basis, one-way ANOVA and discriminant analysis were used to analyse how well these parameters distinguish the water-injected proportion, and a model for identifying water-injected meat was established. The results show that, except for T21b, T22e and T23b, the other parameters of the T2 relaxation spectrum changed regularly with the change of water-injected proportion. The ability of different parameters to distinguish the water-injected proportion differed. With S, P22 and T23m as prediction variables, Fisher and Bayes discriminant models were established, allowing qualitative and quantitative classification of water-injected meat. The correct discrimination rate was 88% for both validation and cross-validation, and the model was stable.

  9. Fluorescence molecular tomography reconstruction via discrete cosine transform-based regularization

    NASA Astrophysics Data System (ADS)

    Shi, Junwei; Liu, Fei; Zhang, Jiulou; Luo, Jianwen; Bai, Jing

    2015-05-01

    Fluorescence molecular tomography (FMT) as a noninvasive imaging modality has been widely used for biomedical preclinical applications. However, FMT reconstruction suffers from severe ill-posedness, especially when a limited number of projections are used. In order to improve the quality of FMT reconstruction results, a discrete cosine transform (DCT) based reweighted L1-norm regularization algorithm is proposed. In each iteration of the reconstruction process, different reweighted regularization parameters are adaptively assigned according to the values of DCT coefficients to suppress the reconstruction noise. In addition, the permission region of the reconstructed fluorophores is adaptively constructed to increase the convergence speed. In order to evaluate the performance of the proposed algorithm, physical phantom and in vivo mouse experiments with a limited number of projections are carried out. For comparison, different L1-norm regularization strategies are employed. By quantifying the signal-to-noise ratio (SNR) of the reconstruction results in the phantom and in vivo mouse experiments with four projections, the proposed DCT-based reweighted L1-norm regularization shows higher SNR than other L1-norm regularizations employed in this work.
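
    The reweighting mechanism (assign each DCT coefficient its own penalty weight, updated from the current coefficient magnitudes so that strong coefficients are penalized less) can be sketched on a generic sparse-recovery toy problem. The forward matrix below is a random stand-in for the FMT sensitivity matrix, ISTA replaces the paper's solver, the adaptive permission region is omitted, and all parameter values are illustrative.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(5)

# Hypothetical few-projection problem: m random measurements of a signal that is
# sparse in the DCT domain (a crude stand-in for the FMT forward model).
n, m, k = 256, 96, 8
coef_true = np.zeros(n)
coef_true[rng.choice(n, k, replace=False)] = rng.normal(0.0, 5.0, k)
x_true = idct(coef_true, norm="ortho")
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)

def reweighted_l1_dct(A, y, lam=0.5, outer=4, inner=300):
    """ISTA on DCT coefficients; after each outer pass the per-coefficient weights are
    updated from the current magnitudes, so large coefficients are penalized less."""
    n = A.shape[1]
    c = np.zeros(n)
    w = np.ones(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(outer):
        for _ in range(inner):
            grad = dct(A.T @ (A @ idct(c, norm="ortho") - y), norm="ortho")
            c = c - step * grad
            c = np.sign(c) * np.maximum(np.abs(c) - step * lam * w, 0.0)
        w = 1.0 / (np.abs(c) + 1e-3)              # reweighting from the DCT coefficients
    return idct(c, norm="ortho")

x_hat = reweighted_l1_dct(A, y)
print("relative reconstruction error:",
      round(float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)), 3))
```

    The recovery quality printed here is only indicative for this synthetic setup; the abstract's SNR comparisons refer to phantom and in vivo data with a physical diffusion forward model.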

  10. FOREWORD: Tackling inverse problems in a Banach space environment: from theory to applications Tackling inverse problems in a Banach space environment: from theory to applications

    NASA Astrophysics Data System (ADS)

    Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara

    2012-10-01

    Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high-quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise.
A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov-type and also other regularization methods in Banach spaces is provided by assumptions of the type of variational inequalities that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider an application of Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity-promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1. Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumption on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a practically and highly relevant issue.
It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically and highly relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results. Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.

  11. Origin of Short-Perihelion Comets

    NASA Technical Reports Server (NTRS)

    Guliyev, A. S.

    2011-01-01

    New regularities for short-perihelion comets are found. Distant nodes of cometary orbits of Kreutz family are concentrated in a plane with ascending node 76 and inclination 267 at the distance from 2 up to 3 a.u. and in a very narrow interval of longitudes. There is a correlation dependence between q and cos I concerning the found plane (coefficient of correlation 0.41). Similar results are received regarding to cometary families of Meyer, Kracht and Marsden. Distant nodes of these comets are concentrated close three planes (their parameters are discussed in the article) and at distances 1.4; 0.5; 6 a.u. accordingly. It is concluded that these comet groups were formed as a result of collision of parent bodies with meteoric streams. One more group, consisting of 7 comets is identified. 5 comet pairs are selected among sungrazers.

  12. Fourier analysis algorithm for the posterior corneal keratometric data: clinical usefulness in keratoconus.

    PubMed

    Sideroudi, Haris; Labiris, Georgios; Georgantzoglou, Kimon; Ntonti, Panagiota; Siganos, Charalambos; Kozobolis, Vassilios

    2017-07-01

    To develop an algorithm for the Fourier analysis of posterior corneal videokeratographic data and to evaluate the derived parameters in the diagnosis of Subclinical Keratoconus (SKC) and Keratoconus (KC). This was a cross-sectional, observational study that took place in the Eye Institute of Thrace, Democritus University, Greece. Eighty eyes formed the KC group, 55 eyes formed the SKC group, while 50 normal eyes populated the control group. A self-developed algorithm in Visual Basic for Microsoft Excel performed a Fourier series harmonic analysis on the posterior corneal sagittal curvature data. The algorithm decomposed the obtained curvatures into a spherical component, regular astigmatism, asymmetry and higher order irregularities for the averaged central 4 mm and for each individual ring separately (1, 2, 3 and 4 mm). The obtained values were evaluated for their diagnostic capacity using receiver operating characteristic (ROC) curves. Logistic regression was attempted for the identification of a combined diagnostic model. Significant differences were detected in regular astigmatism, asymmetry and higher order irregularities among groups. For the SKC group, the parameters with high diagnostic ability (AUC > 90%) were the higher order irregularities, the asymmetry and the regular astigmatism, mainly in the corneal periphery. Higher predictive accuracy was identified using diagnostic models that combined the asymmetry, regular astigmatism and higher order irregularities in the averaged 3 and 4 mm areas (AUC: 98.4%, sensitivity: 91.7%, specificity: 100%). Fourier decomposition of posterior keratometric data provides parameters with high accuracy in differentiating SKC from normal corneas and should be included in the prompt diagnosis of KC. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
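
    The harmonic decomposition itself is a standard Fourier operation on the curvature values sampled around each ring: the mean gives the spherical component, the first harmonic the asymmetry, the second harmonic the regular astigmatism, and the remaining harmonics the higher-order irregularities. The sketch below assumes synthetic curvature samples and may define the summary magnitudes differently from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical posterior sagittal curvatures (in dioptres) sampled every degree on one ring.
theta = np.deg2rad(np.arange(360))
curv = (6.2                                   # spherical component
        + 0.15 * np.cos(theta - 0.4)          # 1st harmonic: asymmetry / decentration
        + 0.35 * np.cos(2 * (theta - 1.1))    # 2nd harmonic: regular astigmatism
        + 0.02 * rng.standard_normal(360))    # residual higher-order irregularity

c = np.fft.rfft(curv) / curv.size
spherical    = c[0].real                      # mean curvature of the ring
asymmetry    = 2 * np.abs(c[1])               # amplitude of the 1st harmonic
astigmatism  = 2 * np.abs(c[2])               # amplitude of the 2nd harmonic
higher_order = 2 * np.sqrt(np.sum(np.abs(c[3:]) ** 2))  # pooled magnitude of harmonics >= 3

print(f"spherical {spherical:.2f} D, asymmetry {asymmetry:.2f} D, "
      f"regular astigmatism {astigmatism:.2f} D, higher-order {higher_order:.3f} D")
```

    Repeating this per ring (1, 2, 3 and 4 mm) and averaging over the central zone yields the parameter set whose diagnostic performance the abstract reports.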

  13. Effects of a regular exercise program on biochemical parameters of type 2 diabetes mellitus patients.

    PubMed

    Dinçer, Şensu; Altan, Mehmet; Terzioğlu, Duygu; Uslu, Ezel; Karşidağ, Kubilay; Batu, Şule; Metin, Gökhan

    2016-11-01

    We aimed to investigate the effects of a regular exercise program on exercise capacity, blood biochemical profiles, and certain antioxidant and oxidative stress parameters of type 2 diabetes mellitus (DM) patients. Thirty-one type 2 DM patients (ages ranging from 42 to 65 years) with hemoglobin A1c (HbA1c) levels ≥7.5% and ≤9.5% were included in the study and performed two cardiopulmonary exercise tests (CPET) before and after the exercise program. Subjects performed aerobic exercise training for 90 minutes a day, 3 days a week, for 12 weeks. Blood samples were collected to analyze certain oxidant and antioxidant parameters (advanced oxidation protein products [AOPP], ferric reducing ability of plasma [FRAP], malondialdehyde [MDA], and sialic acid [SA]), blood lipid profile, fasting blood glucose (FBG) and HbA1c. At the end of the program, HbA1c and FBG, triglyceride (TG) and very-low-density lipoprotein (VLDL) levels decreased and high-density lipoprotein (HDL) increased significantly (P=0.000, P=0.001, P=0.008, P=0.001 and P=0.02, respectively). AOPP, FRAP and SA levels of the patients increased significantly following the first CPET (P=0.000, P=0.049, P=0.014, respectively). At the end of the exercise program, the AOPP level increased significantly following the last CPET. The baseline SA level increased significantly following the exercise program (P=0.002). We suggest that poor glycemic control, which plays a major role in the pathogenesis of DM and its complications, can be improved by 12 weeks of a regular exercise program. Whereas acute exercise induces protein oxidation, regular aerobic training may enhance the antioxidant status of type 2 DM patients.

  14. Comparison of clinical parameters and environmental noise levels between regular surgery and piezosurgery for extraction of impacted third molars.

    PubMed

    Chang, Hao-Hueng; Lee, Ming-Shu; Hsu, You-Chyun; Tsai, Shang-Jye; Lin, Chun-Pin

    2015-10-01

    Impacted third molars can be extracted by regular surgery or piezosurgery. The aim of this study was to compare clinical parameters and device-produced noise levels between regular surgery and piezosurgery for the extraction of impacted third molars. Twenty patients (18 women and 2 men, 17-29 years of age) with bilateral symmetrical impacted mandibular or maxillary third molars of the same level were included in this randomized crossover clinical trial. The 40 impacted third molars were divided into a control group (n = 20), in which the third molar was extracted by regular surgery using a high-speed handpiece and an elevator, and an experimental group (n = 20), in which the third molar was extracted by piezosurgery using a high-speed handpiece and a piezotome. The clinical parameters were evaluated by a self-reported questionnaire. The noise levels produced by the high-speed handpiece and piezotome were measured and compared between the experimental and control groups. Patients in the experimental group had a better feeling about tooth extraction and force delivery during extraction and less facial swelling than patients in the control group. However, there were no significant differences between the control and experimental groups in noise-related disturbance, extraction period, degree of facial swelling, pain score, pain duration, or any of the noise levels produced by the devices under different circumstances during tooth extraction. The piezosurgery device produced noise levels similar to or lower than those of the high-speed drilling device. However, piezosurgery provides the advantage of increased patient comfort during extraction of impacted third molars. Copyright © 2014. Published by Elsevier B.V.

  15. Blind estimation of blur in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Zhang, Mo; Vozel, Benoit; Chehdi, Kacem; Uss, Mykhail; Abramov, Sergey; Lukin, Vladimir

    2017-10-01

    Hyperspectral images acquired by remote sensing systems are generally degraded by noise and can sometimes be more severely degraded by blur. When no knowledge is available about the degradations present in the original image, only blind restoration methods can be considered. By blind, we mean that no knowledge is available of the blur point spread function (PSF), the original latent channel, or the noise level. In this study, we address the blind restoration of the degraded channels component-wise, according to a sequential scheme. For each degraded channel, the sequential scheme estimates the blur PSF in a first stage and deconvolves the degraded channel in a second and final stage by means of the previously estimated PSF. We propose a new component-wise blind method for effectively and accurately estimating the blur PSF. This method follows recent approaches suggesting the detection, selection and use of sufficiently salient edges in the currently processed channel for supporting the regularized blur PSF estimation. Several modifications are beneficially introduced in our work. A new selection of salient edges is introduced, based on adequately thresholding the cumulative distribution of their corresponding gradient magnitudes. In addition, quasi-automatic and spatially adaptive tuning of the involved regularization parameters is considered. To prove the applicability and higher efficiency of the proposed method, we compare it against the method it originates from and four representative edge-sparsifying regularized methods from the literature already assessed in a previous work. Our attention is mainly paid to the objective analysis (via the l1-norm) of the blur PSF estimation accuracy. The tests are performed on a synthetic hyperspectral image. This synthetic hyperspectral image has been built from various samples from classified areas of a real-life hyperspectral image, in order to benefit from a realistic spatial distribution of reference spectral signatures to recover after synthetic degradation. The synthetic hyperspectral image has been successively degraded with eight real blurs taken from the literature, each of a different support size. Conclusions, practical recommendations and perspectives are drawn from the results experimentally obtained.
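
    The salient-edge selection step mentioned above can be illustrated with a brief sketch. The Python fragment below (a generic illustration of the idea of thresholding the cumulative distribution of gradient magnitudes, not the authors' exact selection rule; the keep fraction and names are assumptions) retains only the pixels whose gradient magnitude falls in the upper tail of that distribution.

        import numpy as np

        def salient_edge_mask(channel, keep_fraction=0.05):
            """Return a boolean mask of the most salient edges of a 2-D channel.

            Gradient magnitudes are computed with centred differences, and the
            threshold is the (1 - keep_fraction) quantile of their cumulative
            distribution, so only the strongest edges are kept.
            """
            gy, gx = np.gradient(np.asarray(channel, dtype=float))
            gmag = np.hypot(gx, gy)
            threshold = np.quantile(gmag, 1.0 - keep_fraction)
            return gmag >= threshold

        # Example: keep roughly the top 5% strongest gradients of a test channel.
        rng = np.random.default_rng(0)
        channel = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)
        mask = salient_edge_mask(channel, keep_fraction=0.05)
        print(mask.mean())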

  16. Compliance with guidelines and predictors of mortality in hemodialysis. Learning from Serbia patients.

    PubMed

    Djukanović, Ljubica; Dimković, Nada; Marinković, Jelena; Andrić, Branislav; Bogdanović, Jasmina; Budošan, Ivana; Cvetičanin, Anica; Djordjev, Kosta; Djordjević, Verica; Djurić, Živka; Lilić, Branimir Haviža; Jovanović, Nasta; Jelačić, Rosa; Knežević, Violeta; Kostić, Svetislav; Lazarević, Tatjana; Ljubenović, Stanimir; Marić, Ivko; Marković, Rodoljub; Milenković, Srboljub; Milićević, Olivera; Mitić, Igor; Mićunović, Vesna; Mišković, Milena; Pilipović, Dragana; Plješa, Steva; Radaković, Miroslava; Stanojević, Marina Stojanović; Janković, Biserka Tirmenštajn; Vojinović, Goran; Šefer, Kornelija

    2015-01-01

    The aims of the study were to determine the percentage of patients on regular hemodialysis (HD) in Serbia failing to meet KDOQI guideline targets, and to find out which factors are associated with the risk of time to death and the association between guideline adherence and patient outcome. A cohort of 2153 patients on regular HD in 24 centers (55.7% of the overall HD population) in Serbia was followed from January 2010 to December 2012. The percentage of patients failing to meet the KDOQI guideline targets of dialysis dose (Kt/V>1.2), hemoglobin (>110g/L), serum phosphorus (1.1-1.8mmol/L), calcium (2.1-2.4mmol/L) and iPTH (150-300pg/mL) was determined. Cox proportional hazards analysis was used to select variables significantly associated with the risk of time to death. The patients were on regular HD for 5.3±5.3 years and were dialyzed 11.8±1.9h/week. Kt/V<1.2 was found in 42.4% of patients, hemoglobin <110g/L in 66.1%, s-phosphorus <1.1mmol/L in 21.7% and >1.8mmol/L in 28.6%, s-calcium <2.1mmol/L in 11.7% and >2.4mmol/L in 25.3%, and iPTH <150pg/mL in 40% and >300pg/mL in 39.7% of patients. Using the Cox model (with adjustment for patient age, gender and duration of HD treatment), age, duration of HD treatment, hemoglobin, iPTH and diabetic nephropathy were selected as significant independent predictors of time to death. When the targets for the five examined parameters were included in the Cox model, the targets for Kt/V, hemoglobin and iPTH were found to be significant independent predictors of time to death. A substantial proportion of the patients examined failed to meet the KDOQI guideline targets. The relative risk of time to death was associated with being outside the targets for Kt/V, hemoglobin and iPTH. Copyright © 2015 The Authors. Published by Elsevier España, S.L.U. All rights reserved.

  17. Formation of Large-Amplitude Wave Groups in an Experimental Model Basin

    DTIC Science & Technology

    2008-08-01

    Wave generation was studied with varying parameters, including amplitude, frequency, and signal duration; superposition of these finite regular waves produced repeatable wave groups. [Report contents include sections on regular waves, irregular waves, Senix ultrasonic wave gages, GLRP, and instrumentation calibration and uncertainty; a figure shows the signal output from sine wave superposition: two sine waves combined, x1 + x2 (top) and x3 + x4 (middle), and all four waves, x1 + x2 + x3 + x4 (bottom).]
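
    Although the record above is only a fragment, the underlying construction, forming a repeatable wave group by superposing a few finite regular (sine) wave components, is simple to sketch. In the Python fragment below the amplitudes and frequencies are purely illustrative assumptions, not values taken from the report.

        import numpy as np

        # Superpose four regular sine-wave components into a wave group.
        t = np.linspace(0.0, 60.0, 6000)                    # time axis, s
        amps = [0.10, 0.12, 0.12, 0.10]                     # illustrative amplitudes, m
        freqs = [0.90, 0.95, 1.00, 1.05]                    # illustrative frequencies, Hz
        waves = [a * np.sin(2.0 * np.pi * f * t) for a, f in zip(amps, freqs)]
        x12 = waves[0] + waves[1]                           # x1 + x2
        x34 = waves[2] + waves[3]                           # x3 + x4
        group = x12 + x34                                   # all four waves combined
        print(group.max())                                  # peak elevation of the wave group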

  18. Minimum mean squared error (MSE) adjustment and the optimal Tykhonov-Phillips regularization parameter via reproducing best invariant quadratic uniformly unbiased estimates (repro-BIQUUE)

    NASA Astrophysics Data System (ADS)

    Schaffrin, Burkhard

    2008-02-01

    In a linear Gauss-Markov model, the parameter estimates from BLUUE (Best Linear Uniformly Unbiased Estimate) are not robust against possible outliers in the observations. Moreover, by giving up the unbiasedness constraint, the mean squared error (MSE) risk may be further reduced, in particular when the problem is ill-posed. In this paper, the α-weighted S-homBLE (Best homogeneously Linear Estimate) is derived via formulas originally used for variance component estimation on the basis of the repro-BIQUUE (reproducing Best Invariant Quadratic Uniformly Unbiased Estimate) principle in a model with stochastic prior information. In the present model, however, such prior information is not included, which allows the comparison of the stochastic approach (α-weighted S-homBLE) with the well-established algebraic approach of Tykhonov-Phillips regularization, also known as R-HAPS (Hybrid APproximation Solution), whenever the inverse of the “substitute matrix” S exists and is chosen as the R matrix that defines the relative impact of the regularizing term on the final result.
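
    As a point of reference for the algebraic side of the comparison, a minimal numerical sketch of a Tykhonov-Phillips type estimate is given below. The design matrix, data and choice of the R matrix are illustrative assumptions, and the repro-BIQUUE machinery of the paper is not reproduced here.

        import numpy as np

        def tykhonov_phillips(A, y, R, lam):
            """Regularized estimate: argmin_x ||A x - y||^2 + lam * x^T R x."""
            return np.linalg.solve(A.T @ A + lam * R, A.T @ y)

        # Mildly ill-posed toy problem with two nearly collinear columns.
        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 1.0, 50)
        A = np.column_stack([np.ones(50), t, t + 1e-3 * rng.normal(size=50)])
        x_true = np.array([1.0, 2.0, -1.0])
        y = A @ x_true + 0.01 * rng.normal(size=50)
        R = np.eye(3)   # simplest choice; the paper relates R to the inverse of the substitute matrix S
        for lam in (1e-6, 1e-3, 1e-1):
            print(lam, tykhonov_phillips(A, y, R, lam))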

  19. Image degradation characteristics and restoration based on regularization for diffractive imaging

    NASA Astrophysics Data System (ADS)

    Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun

    2017-11-01

    The diffractive membrane optical imaging system is an important development trend for ultra-large-aperture, lightweight space cameras. However, physics-based diffractive imaging degradation characteristics and the corresponding image restoration methods have been less studied. In this paper, the model of image quality degradation for the diffractive imaging system is first deduced mathematically based on diffraction theory, and the degradation characteristics are then analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. After that, a solution approach for the resulting equation, in which multiple norms coexist and multiple regularization parameters (the priors' parameters) appear, is presented. Subsequently, a space-variant PSF image restoration method for the large-aperture diffractive imaging system is proposed, combined with a block-wise treatment of isoplanatic regions. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression as well as detail preservation, and produces satisfactory visual quality. This can provide a scientific basis for applications and possesses potential application prospects for future space applications of diffractive membrane imaging technology.
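
    The paper's multi-prior, space-variant model is not reproduced here, but the basic role of a regularization parameter in restoring a blurred image can be illustrated with the simplest single-prior, space-invariant case: Tikhonov-regularized deconvolution in the Fourier domain. The PSF, test image and parameter value in the sketch below are illustrative assumptions.

        import numpy as np

        def tikhonov_deconvolution(blurred, psf, lam):
            """Frequency-domain deconvolution with a quadratic (Tikhonov) penalty.

            Solves min_x ||h * x - y||^2 + lam * ||x||^2; in the Fourier domain the
            minimizer is X = conj(H) Y / (|H|^2 + lam).
            """
            H = np.fft.fft2(psf, s=blurred.shape)
            Y = np.fft.fft2(blurred)
            X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
            return np.real(np.fft.ifft2(X))

        # Toy example: blur a random image with a 5x5 box PSF, add noise, restore.
        rng = np.random.default_rng(2)
        img = rng.random((64, 64))
        psf = np.ones((5, 5)) / 25.0
        blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
        noisy = blurred + 0.01 * rng.normal(size=img.shape)
        restored = tikhonov_deconvolution(noisy, psf, lam=1e-2)
        print(np.mean((restored - img) ** 2))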

  20. Evaluation of haematological, hepatic and renal functions of petroleum tanker drivers in Lagos, Nigeria.

    PubMed

    Awodele, Olufunsho; Sulayman, Ademola A; Akintonwa, Alade

    2014-03-01

    Hydrocarbons, which are among the major components of petroleum products, are considered toxic and have been implicated in a number of human diseases. Tanker drivers are continuously exposed to hydrocarbons by inhalation, and most of these drivers do not use protective devices to prevent inhalation of petroleum products, nor do they visit the hospital regularly for routine check-ups. In view of this occupational hazard, we investigated the haematological, renal and hepatic functions of petroleum tanker drivers in Lagos, Nigeria. Twenty-five tanker drivers and fifteen control subjects were randomly selected based on the selection criteria of not smoking and of having worked for a minimum of 5 years as a petroleum tanker driver. The liver, renal and haematological parameters were analyzed using automated clinical and haematological analyzers, while the lipid peroxidation and antioxidant level tests were assayed using standard methods. There were significant (p ≤ 0.05) increases in the levels of serum alanine amino transferase (31.14±13.72; 22.38±9.89), albumin (42.50±4.69; 45.36±1.74) and alkaline phosphatase (84.04±21.89; 62.04±23.33) of petroleum tanker drivers compared with the controls. A significant (p≤0.05) increase in the levels of creatinine, urea and white blood cells of the tanker drivers, compared with the controls, was also obtained. The results indicate that continuous exposure to petroleum products has serious health implications, reflected in the hepatic and renal damage of petroleum tanker drivers. Therefore, there is a need for this group of workers to be sensitized to the importance of protective devices, regular medical check-ups and management.

  1. Gait parameters are differently affected by concurrent smartphone-based activities with scaled levels of cognitive effort.

    PubMed

    Caramia, Carlotta; Bernabucci, Ivan; D'Anna, Carmen; De Marchis, Cristiano; Schmid, Maurizio

    2017-01-01

    The widespread and pervasive use of smartphones for sending messages, calling, and entertainment purposes, mainly among young adults, is often accompanied by the concurrent execution of other tasks. Recent studies have analyzed how texting, reading or calling while walking-in some specific conditions-might significantly influence gait parameters. The aim of this study is to examine the effect of different smartphone activities on walking, evaluating the variations of several gait parameters. 10 young healthy students (all smartphone proficient users) were instructed to text chat (with two different levels of cognitive load), call, surf on a social network or play with a math game while walking in a real-life outdoor setting. Each of these activities is characterized by a different cognitive load. Using an inertial measurement unit on the lower trunk, spatio-temporal gait parameters, together with regularity, symmetry and smoothness parameters, were extracted and grouped for comparison among normal walking and different dual task demands. An overall significant effect of task type on the aforementioned parameters group was observed. The alterations in gait parameters vary as a function of cognitive effort. In particular, stride frequency, step length and gait speed show a decrement, while step time increases as a function of cognitive effort. Smoothness, regularity and symmetry parameters are significantly altered for specific dual task conditions, mainly along the mediolateral direction. These results may lead to a better understanding of the possible risks related to walking and concurrent smartphone use.

  2. Gait parameters are differently affected by concurrent smartphone-based activities with scaled levels of cognitive effort

    PubMed Central

    Bernabucci, Ivan; D'Anna, Carmen; De Marchis, Cristiano; Schmid, Maurizio

    2017-01-01

    The widespread and pervasive use of smartphones for sending messages, calling, and entertainment purposes, mainly among young adults, is often accompanied by the concurrent execution of other tasks. Recent studies have analyzed how texting, reading or calling while walking–in some specific conditions–might significantly influence gait parameters. The aim of this study is to examine the effect of different smartphone activities on walking, evaluating the variations of several gait parameters. 10 young healthy students (all smartphone proficient users) were instructed to text chat (with two different levels of cognitive load), call, surf on a social network or play with a math game while walking in a real-life outdoor setting. Each of these activities is characterized by a different cognitive load. Using an inertial measurement unit on the lower trunk, spatio-temporal gait parameters, together with regularity, symmetry and smoothness parameters, were extracted and grouped for comparison among normal walking and different dual task demands. An overall significant effect of task type on the aforementioned parameters group was observed. The alterations in gait parameters vary as a function of cognitive effort. In particular, stride frequency, step length and gait speed show a decrement, while step time increases as a function of cognitive effort. Smoothness, regularity and symmetry parameters are significantly altered for specific dual task conditions, mainly along the mediolateral direction. These results may lead to a better understanding of the possible risks related to walking and concurrent smartphone use. PMID:29023456

  3. Chemical interactions and thermodynamic studies in aluminum alloy/molten salt systems

    NASA Astrophysics Data System (ADS)

    Narayanan, Ramesh

    The recycling of aluminum and aluminum alloys such as Used Beverage Containers (UBC) is done under a cover of molten salt flux based on NaCl-KCl + fluorides. The reactions of aluminum alloys with molten salt fluxes have been investigated. Thermodynamic calculations are performed for the alloy/salt flux systems, which allow quantitative predictions of the equilibrium compositions. There is preferential reaction of Mg in the Al-Mg alloy with molten salt fluxes, especially those containing fluorides like NaF. An exchange reaction between the Al-Mg alloy and the molten salt flux has been demonstrated. Mg from the Al-Mg alloy transfers into the salt flux while Na from the salt flux transfers into the metal. Thermodynamic calculations indicated that the amount of Na in the metal increases as the Mg content in the alloy and/or the NaF content in the reacting flux increases. This is an important point because small amounts of Na have a detrimental effect on the mechanical properties of the Al-Mg alloy. The reactions of Al alloys with molten salt fluxes result in the formation of bluish purple colored "streamers". It was established that the streamer is liquid alkali metal (Na and K in the case of NaCl-KCl-NaF systems) dissipating into the melt. The melts in which such streamers were observed are identified. The metal losses occurring due to reactions have been quantified, both by thermodynamic calculations and experimentally. A computer program has been developed to calculate ternary phase diagrams in molten salt systems from the constituent binary phase diagrams, based on a regular solution model. The extent of deviation of the binary systems from regular solution behavior has been quantified. The systems investigated in which good agreement was found between the calculated and experimental phase diagrams included NaF-KF-LiF, NaCl-NaF-NaI and KNO3-TlNO3-LiNO3. Furthermore, an insight has been provided into the interrelationship between the regular solution parameters and the topology of the phase diagram. The isotherms are flat (i.e. no skewness) when the regular solution parameters are zero. When the regular solution parameters are non-zero, the isotherms are skewed. A regular solution model is not adequate to accurately model the molten salt systems used in recycling, like NaCl-KCl-LiF and NaCl-KCl-NaF.
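
    The regular solution model referred to above has a compact form that is easy to sketch. The following Python fragment evaluates the molar Gibbs energy of mixing of a ternary salt mixture from binary interaction parameters; the interaction values used are purely illustrative assumptions, not the parameters fitted in the study.

        import numpy as np

        R_GAS = 8.314  # J/(mol K)

        def gibbs_mixing_regular(x, omega, T):
            """Molar Gibbs energy of mixing for a ternary regular solution.

            x     : mole fractions (x1, x2, x3) summing to 1
            omega : binary interaction parameters in J/mol, e.g. {(0, 1): w12, ...}
            T     : temperature in K
            G_mix = R T sum_i x_i ln x_i  +  sum_{i<j} w_ij x_i x_j
            """
            x = np.asarray(x, dtype=float)
            ideal = R_GAS * T * np.sum(x * np.log(x))
            excess = sum(w * x[i] * x[j] for (i, j), w in omega.items())
            return ideal + excess

        # Illustrative (assumed) interaction parameters for an NaCl-KCl-NaF-like melt.
        omega = {(0, 1): -2000.0, (0, 2): 1500.0, (1, 2): -500.0}
        print(gibbs_mixing_regular([0.4, 0.4, 0.2], omega, T=1000.0))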

  4. TIMSS 2011 Student and Teacher Predictors for Mathematics Achievement Explored and Identified via Elastic Net.

    PubMed

    Yoo, Jin Eun

    2018-01-01

    A substantial body of research has been conducted on variables relating to students' mathematics achievement with TIMSS. However, most studies have employed conventional statistical methods, and have focused on selected few indicators instead of utilizing hundreds of variables TIMSS provides. This study aimed to find a prediction model for students' mathematics achievement using as many TIMSS student and teacher variables as possible. Elastic net, the selected machine learning technique in this study, takes advantage of both LASSO and ridge in terms of variable selection and multicollinearity, respectively. A logistic regression model was also employed to predict TIMSS 2011 Korean 4th graders' mathematics achievement. Ten-fold cross-validation with mean squared error was employed to determine the elastic net regularization parameter. Among 162 TIMSS variables explored, 12 student and 5 teacher variables were selected in the elastic net model, and the prediction accuracy, sensitivity, and specificity were 76.06, 70.23, and 80.34%, respectively. This study showed that the elastic net method can be successfully applied to educational large-scale data by selecting a subset of variables with reasonable prediction accuracy and finding new variables to predict students' mathematics achievement. Newly found variables via machine learning can shed light on the existing theories from a totally different perspective, which in turn propagates creation of a new theory or complement of existing ones. This study also examined the current scale development convention from a machine learning perspective.
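
    The parameter search described above, ten-fold cross-validation with a mean squared error criterion over the elastic net regularization path, can be reproduced with standard tooling. The scikit-learn sketch below uses synthetic data as a stand-in for the TIMSS files and illustrates only the procedure, not the authors' code.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.linear_model import ElasticNetCV

        # Synthetic stand-in for the TIMSS predictors: 162 variables, few of them informative.
        X, y = make_regression(n_samples=500, n_features=162, n_informative=17,
                               noise=10.0, random_state=0)

        # Ten-fold cross-validation (MSE criterion) over the regularization path,
        # mixing L1 and L2 penalties as in the elastic net.
        model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=10, random_state=0)
        model.fit(X, y)

        selected = np.flatnonzero(model.coef_)
        print("chosen alpha:", model.alpha_, "chosen l1_ratio:", model.l1_ratio_)
        print("number of selected variables:", selected.size)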

  5. TIMSS 2011 Student and Teacher Predictors for Mathematics Achievement Explored and Identified via Elastic Net

    PubMed Central

    Yoo, Jin Eun

    2018-01-01

    A substantial body of research has been conducted on variables relating to students' mathematics achievement with TIMSS. However, most studies have employed conventional statistical methods, and have focused on selected few indicators instead of utilizing hundreds of variables TIMSS provides. This study aimed to find a prediction model for students' mathematics achievement using as many TIMSS student and teacher variables as possible. Elastic net, the selected machine learning technique in this study, takes advantage of both LASSO and ridge in terms of variable selection and multicollinearity, respectively. A logistic regression model was also employed to predict TIMSS 2011 Korean 4th graders' mathematics achievement. Ten-fold cross-validation with mean squared error was employed to determine the elastic net regularization parameter. Among 162 TIMSS variables explored, 12 student and 5 teacher variables were selected in the elastic net model, and the prediction accuracy, sensitivity, and specificity were 76.06, 70.23, and 80.34%, respectively. This study showed that the elastic net method can be successfully applied to educational large-scale data by selecting a subset of variables with reasonable prediction accuracy and finding new variables to predict students' mathematics achievement. Newly found variables via machine learning can shed light on the existing theories from a totally different perspective, which in turn propagates creation of a new theory or complement of existing ones. This study also examined the current scale development convention from a machine learning perspective. PMID:29599736

  6. Classification of mislabelled microarrays using robust sparse logistic regression.

    PubMed

    Bootkrajang, Jakramate; Kabán, Ata

    2013-04-01

    Previous studies reported that labelling errors are not uncommon in microarray datasets. In such cases, the training set may become misleading, and the ability of classifiers to make reliable inferences from the data is compromised. Yet, few methods are currently available in the bioinformatics literature to deal with this problem. The few existing methods focus on data cleansing alone, without reference to classification, and their performance crucially depends on some tuning parameters. In this article, we develop a new method to detect mislabelled arrays simultaneously with learning a sparse logistic regression classifier. Our method may be seen as a label-noise robust extension of the well-known and successful Bayesian logistic regression classifier. To account for possible mislabelling, we formulate a label-flipping process as part of the classifier. The regularization parameter is automatically set using Bayesian regularization, which not only saves the computation time that cross-validation would take, but also eliminates any unwanted effects of label noise when setting the regularization parameter. Extensive experiments with both synthetic data and real microarray datasets demonstrate that our approach is able to counter the bad effects of labelling errors in terms of predictive performance, it is effective at identifying marker genes and simultaneously it detects mislabelled arrays to high accuracy. The code is available from http://cs.bham.ac.uk/∼jxb008. Supplementary data are available at Bioinformatics online.
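
    A stripped-down version of the label-flipping idea can be written in a few lines. The sketch below fixes the flip probabilities instead of learning them, uses a plain L2 penalty rather than the Bayesian regularization of the paper, and all variable names and values are assumptions for illustration only.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit

        def robust_nll(w, X, y, gamma01, gamma10, lam):
            """Negative log-likelihood of a label-flipping logistic model plus an L2 penalty.

            gamma01 = P(observed 1 | true 0), gamma10 = P(observed 0 | true 1).
            """
            p_true = expit(X @ w)                                   # P(true label = 1 | x)
            p_obs1 = (1.0 - gamma10) * p_true + gamma01 * (1.0 - p_true)
            eps = 1e-12
            ll = np.sum(y * np.log(p_obs1 + eps) + (1 - y) * np.log(1.0 - p_obs1 + eps))
            return -ll + lam * np.dot(w, w)

        # Toy data with 10% of the labels flipped at random.
        rng = np.random.default_rng(3)
        X = rng.normal(size=(200, 5))
        w_true = np.array([2.0, -1.5, 0.0, 0.0, 1.0])
        y = (expit(X @ w_true) > rng.random(200)).astype(float)
        flip = rng.random(200) < 0.10
        y[flip] = 1.0 - y[flip]

        res = minimize(robust_nll, np.zeros(5), args=(X, y, 0.10, 0.10, 1.0), method="L-BFGS-B")
        print(res.x)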

  7. Insights into new-onset atrial fibrillation following open heart surgery and implications for type II atrial flutter.

    PubMed

    Sadrpour, Shervin A; Srinivasan, Deepa; Bhimani, Ashish A; Lee, Seungyup; Ryu, Kyungmoo; Cakulev, Ivan; Khrestian, Celeen M; Markowitz, Alan H; Waldo, Albert L; Sahadevan, Jayakumar

    2015-12-01

    Postoperative atrial fibrillation (POAF), new-onset AF after open heart surgery (OHS), is thought to be related to pericarditis. Based on AF studies in the canine sterile pericarditis model, we hypothesized that POAF in patients after OHS may be associated with a rapid, regular rhythm in the left atrium (LA), suggestive of an LA driver maintaining AF. The aim of this study was to test the hypothesis that in patients with POAF, atrial electrograms (AEGs) recorded from at least one of the two carefully selected LA sites would manifest a rapid, regular rhythm with AEGs of short cycle length (CL) and constant morphology, but a selected right atrial (RA) site would manifest AEGs with irregular CLs and variable morphology. In 44 patients undergoing OHS, AEGs recorded from the epicardial surface of the RA, the LA portion of Bachmann's bundle, and the posterior LA during sustained AF were analysed for regularity of CL and morphology. Sustained AF occurred in 15 of 44 patients. Atrial electrograms were recorded in 11 of 15 patients; 8 of 11 had rapid, regular activation with constant morphology recorded from at least one LA site; no regular AEG sites were present in 3 of 11 patients. Atrial electrograms recorded during sustained POAF frequently demonstrated rapid, regular activation in at least one LA site, consistent with a driver maintaining AF. Published by Oxford University Press on behalf of the European Society of Cardiology 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  8. Manifold Regularized Multitask Feature Learning for Multimodality Disease Classification

    PubMed Central

    Jie, Biao; Zhang, Daoqiang; Cheng, Bo; Shen, Dinggang

    2015-01-01

    Multimodality based methods have shown great advantages in classification of Alzheimer's disease (AD) and its prodromal stage, that is, mild cognitive impairment (MCI). Recently, multitask feature selection methods have typically been used for joint selection of common features across multiple modalities. However, one disadvantage of existing multimodality based methods is that they ignore the useful data distribution information in each modality, which is essential for subsequent classification. Accordingly, in this paper we propose a manifold regularized multitask feature learning method to preserve both the intrinsic relatedness among multiple modalities of data and the data distribution information in each modality. Specifically, we denote the feature learning on each modality as a single task, and use a group-sparsity regularizer to capture the intrinsic relatedness among multiple tasks (i.e., modalities) and jointly select the common features from multiple tasks. Furthermore, we introduce a new manifold-based Laplacian regularizer to preserve the data distribution information from each task. Finally, we use the multikernel support vector machine method to fuse multimodality data for eventual classification. Moreover, we also extend our method to the semisupervised setting, where only partial data are labeled. We evaluate our method using the baseline magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET), and cerebrospinal fluid (CSF) data of subjects from the AD Neuroimaging Initiative database. The experimental results demonstrate that our proposed method can not only achieve improved classification performance, but also help to discover the disease-related brain regions useful for disease diagnosis. PMID:25277605
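
    Only the structure of the objective is sketched below: a squared loss per modality, a group-sparsity (l2,1) term coupling the modalities, and a Laplacian manifold term per modality. The optimizer, kernel fusion and data handling of the paper are omitted, the graph Laplacians are replaced by identity matrices for brevity, and every symbol is an illustrative assumption.

        import numpy as np

        def l21_norm(W):
            """Group-sparsity term: sum over features of the l2 norm across tasks."""
            return np.sum(np.linalg.norm(W, axis=1))

        def manifold_multitask_objective(Xs, y, W, Ls, lam1, lam2):
            """Objective of a manifold-regularized multitask feature-learning model.

            Xs : list of (n x d) modality matrices, one per task/modality
            y  : shared (n,) target vector
            W  : (d x M) weight matrix, one column per modality
            Ls : list of (n x n) graph Laplacians, one per modality
            """
            loss = sum(np.sum((y - X @ W[:, m]) ** 2) for m, X in enumerate(Xs))
            manifold = sum(W[:, m] @ X.T @ L @ X @ W[:, m]
                           for m, (X, L) in enumerate(zip(Xs, Ls)))
            return loss + lam1 * l21_norm(W) + lam2 * manifold

        # Tiny illustrative example with two modalities.
        rng = np.random.default_rng(4)
        n, d = 30, 8
        Xs = [rng.normal(size=(n, d)) for _ in range(2)]
        y = rng.normal(size=n)
        W = 0.1 * rng.normal(size=(d, 2))
        Ls = [np.eye(n) for _ in range(2)]   # stand-ins for real graph Laplacians
        print(manifold_multitask_objective(Xs, y, W, Ls, lam1=0.5, lam2=0.1))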

  9. [Relationship between cyberbullying and the suicide related psychological behavior among middle and high school students in Anhui Province].

    PubMed

    Wang, Gengfu; Fang, Yu; Jiang, Liu; Zhou, Guiyang; Yuan, Shanshan; Wang, Xiuxiu; Su, Puyu

    2015-11-01

    To examine the prevalence rate of cyberbullying in middle and high school students in Anhui Province and to explore the relationship between cyberbullying and suicide related psychological behavior. A total of 5726 middle and high school students from the 7th to the 12th grades in three regular middle schools and three regular high schools were recruited from three cities in Anhui Province (Tongling, Chuzhou, and Fuyang). Tongling, Chuzhou, and Fuyang are in the south, middle and north of Anhui, respectively. In each city, one regular middle school and one regular high school were selected, and 8 classes were selected from each grade in each school. A stratified cluster random sampling method was used to randomly select the 5726 participants among the six schools. Self-reports on cyberbullying and suicide related psychological behavior were collected. Among these 5726 adolescents, 46.8% were involved in cyberbullying. Among them, 3.2% were bullies, 23.8% were victims, and 19.8% were both. Prevalence rates of suicide idea, suicide plan, suicide preparation and suicide implementation were 19.3%, 6.9%, 4.7% and 1.8%, respectively. Cyberbullying involvement, as victims, bullies or bully-victims, increased the risk of all four kinds of suicide related psychological behavior (suicide idea, suicide plan, suicide preparation, suicide implementation) (P < 0.05). Cyberbullying has become a common occurrence among middle and high school students. Additionally, cyberbullying is closely related to suicide related psychological behavior among middle and high school students.

  10. Seismic data enhancement and regularization using finite offset Common Diffraction Surface (CDS) stack

    NASA Astrophysics Data System (ADS)

    Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter

    2017-01-01

    The Common Reflection Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors used one-step data-driven strategies based on a global optimization for estimating simultaneously the five parameters from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common Diffraction Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits and preserves at the same time a high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations for simulating common-offset sections on noisy pre-stack data.

  11. Impact of auditory training for perceptual assessment of voice executed by undergraduate students in Speech-Language Pathology.

    PubMed

    Silva, Regiane Serafim Abreu; Simões-Zenari, Marcia; Nemr, Nair Kátia

    2012-01-01

    To analyze the impact of auditory training on the auditory-perceptual assessment carried out by Speech-Language Pathology undergraduate students. During two semesters, 17 undergraduate students enrolled in theoretical subjects regarding phonation (Phonation/Phonation Disorders) analyzed samples of altered and unaltered voices (selected for this purpose), using the GRBAS scale. All subjects received auditory training during nine 15-minute meetings. In each meeting, a different parameter was presented using the different voice samples, with predominance of the trained aspect in each session. Assessment of the sample using the scale was carried out before and after training, and on four other occasions throughout the meetings. The students' assessments were compared to an assessment carried out by three voice-expert speech-language pathologists, who acted as judges. To verify training effectiveness, the Friedman test and the Kappa index were used. The rate of correct answers before training was considered between fair and good. Maintenance of the number of correct answers throughout the assessments was observed for most of the scale parameters. After training, the students showed improvements in the analysis of asthenia, a parameter that was emphasized during training after the students reported difficulties analyzing it. There was a decrease in the number of correct answers for the roughness parameter after it was approached as segmented into hoarseness and harshness, and observed in association with different diagnoses and acoustic parameters. Auditory training enhances students' initial abilities to perform the evaluation, besides guiding adjustments in the dynamics of the university course.

  12. Word Length and Word Frequency Affect Eye Movements in Dyslexic Children Reading in a Regular (German) Orthography

    ERIC Educational Resources Information Center

    Durrwachter, Ute; Sokolov, Alexander N.; Reinhard, Jens; Klosinski, Gunther; Trauzettel-Klosinski, Susanne

    2010-01-01

    We combined independently the word length and word frequency to examine if the difficulty of reading material affects eye movements in readers of German, which has high orthographic regularity, comparing the outcome with previous findings available in other languages. Sixteen carefully selected German-speaking dyslexic children (mean age, 9.5…

  13. Costs in Serving Handicapped Children in Head Start: An Analysis of Methods and Cost Estimates. Final Report.

    ERIC Educational Resources Information Center

    Syracuse Univ., NY. Div. of Special Education and Rehabilitation.

    An evaluation of the costs of serving handicapped children in Head Start was based on information collected in conjunction with on-site visits to regular Head Start programs, experimental programs, and specially selected model preschool programs, and from questionnaires completed by 1,353 grantees and delegate agencies of regular Head Start…

  14. Elastic and failure response of imperfect three-dimensional metallic lattices: the role of geometric defects induced by Selective Laser Melting

    NASA Astrophysics Data System (ADS)

    Liu, Lu; Kamm, Paul; García-Moreno, Francisco; Banhart, John; Pasini, Damiano

    2017-10-01

    This paper examines three-dimensional metallic lattices with regular octet and rhombicuboctahedron units fabricated with geometric imperfections via Selective Laser Sintering. We use X-ray computed tomography to capture morphology, location, and distribution of process-induced defects with the aim of studying their role in the elastic response, damage initiation, and failure evolution under quasi-static compression. Testing results from in-situ compression tomography show that each lattice exhibits a distinct failure mechanism that is governed not only by cell topology but also by geometric defects induced by additive manufacturing. Extracted from X-ray tomography images, the statistical distributions of three sets of defects, namely strut waviness, strut thickness variation, and strut oversizing, are used to develop numerical models of statistically representative lattices with imperfect geometry. Elastic and failure responses are predicted within 10% agreement from the experimental data. In addition, a computational study is presented to shed light into the relationship between the amplitude of selected defects and the reduction of elastic properties compared to their nominal values. The evolution of failure mechanisms is also explained with respect to strut oversizing, a parameter that can critically cause failure mode transitions that are not visible in defect-free lattices.

  15. Effects of high-frequency damping on iterative convergence of implicit viscous solver

    NASA Astrophysics Data System (ADS)

    Nishikawa, Hiroaki; Nakashima, Yoshitaka; Watanabe, Norihiko

    2017-11-01

    This paper discusses effects of high-frequency damping on iterative convergence of an implicit defect-correction solver for viscous problems. The study targets a finite-volume discretization with a one parameter family of damped viscous schemes. The parameter α controls high-frequency damping: zero damping with α = 0, and larger damping for larger α (> 0). Convergence rates are predicted for a model diffusion equation by a Fourier analysis over a practical range of α. It is shown that the convergence rate attains its minimum at α = 1 on regular quadrilateral grids, and deteriorates for larger values of α. A similar behavior is observed for regular triangular grids. In both quadrilateral and triangular grids, the solver is predicted to diverge for α smaller than approximately 0.5. Numerical results are shown for the diffusion equation and the Navier-Stokes equations on regular and irregular grids. The study suggests that α = 1 and 4/3 are suitable values for robust and efficient computations, and α = 4 / 3 is recommended for the diffusion equation, which achieves higher-order accuracy on regular quadrilateral grids. Finally, a Jacobian-Free Newton-Krylov solver with the implicit solver (a low-order Jacobian approximately inverted by a multi-color Gauss-Seidel relaxation scheme) used as a variable preconditioner is recommended for practical computations, which provides robust and efficient convergence for a wide range of α.

  16. Cortical dipole imaging using truncated total least squares considering transfer matrix error.

    PubMed

    Hori, Junichi; Takeuchi, Kosuke

    2013-01-01

    Cortical dipole imaging has been proposed as a method to visualize the electroencephalogram with high spatial resolution. We investigated an inverse technique for cortical dipole imaging using truncated total least squares (TTLS). The TTLS is a regularization technique that reduces the influence of both the measurement noise and the transfer matrix error caused by head model distortion. The estimation of the regularization parameter, based on the L-curve, was also investigated. The computer simulation suggested that the estimation accuracy was improved by the TTLS compared with Tikhonov regularization. The proposed method was applied to human experimental data of visual evoked potentials. We confirmed that the TTLS provided high spatial resolution in cortical dipole imaging.
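
    For readers who want to experiment with the inverse step, a minimal truncated total least squares solver can be written from the SVD of the augmented matrix [A b]. The transfer matrix and data below are synthetic, the truncation level is chosen by hand rather than by the L-curve, and the sketch illustrates the general TTLS technique rather than the authors' implementation.

        import numpy as np

        def ttls_solve(A, b, k):
            """Truncated total least squares solution of A x ~ b.

            From the SVD of the augmented matrix [A b], the right singular vectors
            with index >= k span the discarded subspace; the solution is
            x = -V12 pinv(V22), with V12/V22 the corresponding blocks of V.
            """
            n = A.shape[1]
            C = np.column_stack([A, b])
            _, _, Vt = np.linalg.svd(C, full_matrices=False)
            V = Vt.T
            V12 = V[:n, k:]          # upper-right block, n x (n + 1 - k)
            V22 = V[n:, k:]          # lower-right block, 1 x (n + 1 - k)
            return (-V12 @ np.linalg.pinv(V22)).ravel()

        # Ill-conditioned toy problem with noise in both the matrix and the data.
        rng = np.random.default_rng(5)
        U = np.linalg.qr(rng.normal(size=(40, 6)))[0]
        W = np.linalg.qr(rng.normal(size=(6, 6)))[0]
        A = U @ np.diag([1.0, 0.5, 0.1, 1e-2, 1e-3, 1e-4]) @ W
        x_true = rng.normal(size=6)
        b = A @ x_true
        x_ttls = ttls_solve(A + 1e-4 * rng.normal(size=A.shape),
                            b + 1e-4 * rng.normal(size=b.shape), k=3)
        print(np.linalg.norm(x_ttls - x_true))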

  17. Regularized non-stationary morphological reconstruction algorithm for weak signal detection in microseismic monitoring: methodology

    NASA Astrophysics Data System (ADS)

    Huang, Weilin; Wang, Runqiu; Chen, Yangkang

    2018-05-01

    Microseismic signals are typically weak compared with the strong background noise. In order to effectively detect weak signals in microseismic data, we propose a mathematical morphology based approach. We decompose the initial data into several morphological multiscale components. For the detection of weak signals, a non-stationary weighting operator is proposed and introduced into the reconstruction of the data from the morphological multiscale components. The non-stationary weighting operator can be obtained by solving an inversion problem. The regularized non-stationary method can be understood as a non-stationary matching filtering method, where the matching filter has the same size as the data to be filtered. In this paper, we provide detailed algorithmic descriptions and analysis. The detailed algorithm framework, parameter selection and computational issues for the regularized non-stationary morphological reconstruction (RNMR) method are presented. We validate the presented method through a comprehensive analysis of different data examples. We first test the proposed technique using a synthetic data set. Then the proposed technique is applied to a field project, where the signals induced by hydraulic fracturing are recorded by 12 three-component geophones in a monitoring well. The result demonstrates that RNMR can improve the detectability of weak microseismic signals. Using the processed data, the short-term-average over long-term-average picking algorithm and Geiger's method are applied to obtain new locations of microseismic events. In addition, we show that the proposed RNMR method can be used not only on microseismic data but also on reflection seismic data to detect weak signals. We also discuss the extension of RNMR from 1-D to 2-D or higher dimensional versions.

  18. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    PubMed Central

    Pereira, N F; Sitek, A

    2011-01-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies leads to superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can outperform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496
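
    The maximum likelihood expectation maximization (MLEM) update used for both grid types is compact enough to sketch. In the Python fragment below the system matrix is a random stand-in for a true projector, so the example illustrates only the iteration itself, not the point-cloud meshing or the tetrahedral basis of the paper.

        import numpy as np

        def mlem(A, y, n_iter=100):
            """MLEM iterations for Poisson data y ~ Poisson(A x), x >= 0.

            Update: x_j <- x_j / (sum_i A_ij) * sum_i A_ij * y_i / (A x)_i
            """
            x = np.ones(A.shape[1])
            sens = A.sum(axis=0)                       # sensitivity, sum_i A_ij
            for _ in range(n_iter):
                proj = A @ x
                ratio = y / np.maximum(proj, 1e-12)    # guard against division by zero
                x = x / np.maximum(sens, 1e-12) * (A.T @ ratio)
            return x

        # Toy example: random nonnegative system matrix and Poisson-distributed data.
        rng = np.random.default_rng(6)
        A = rng.random((200, 40))
        x_true = 10.0 * rng.random(40)
        y = rng.poisson(A @ x_true).astype(float)
        x_rec = mlem(A, y)
        print(np.corrcoef(x_rec, x_true)[0, 1])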

  19. A ℓ2, 1 norm regularized multi-kernel learning for false positive reduction in Lung nodule CAD.

    PubMed

    Cao, Peng; Liu, Xiaoli; Zhang, Jian; Li, Wei; Zhao, Dazhe; Huang, Min; Zaiane, Osmar

    2017-03-01

    The aim of this paper is to describe a novel algorithm for false positive reduction in lung nodule Computer-Aided Detection (CAD). In this paper, we describe a new CT lung CAD method that aims to detect solid nodules. Specifically, we propose a multi-kernel classifier with an ℓ2,1 norm regularizer for heterogeneous feature fusion and selection at the feature subset level, and design two efficient strategies to optimize the kernel weight parameters in the non-smooth ℓ2,1 regularized multiple kernel learning algorithm. The first optimization algorithm adapts a proximal gradient method for handling the ℓ2,1 norm of the kernel weights, and uses an accelerated method based on FISTA; the second one employs an iterative scheme based on an approximate gradient descent method. The results demonstrate that the FISTA-style accelerated proximal descent method is efficient for the ℓ2,1 norm formulation of multiple kernel learning, with a theoretical guarantee on the convergence rate. Moreover, the experimental results demonstrate the effectiveness of the proposed methods in terms of the geometric mean (G-mean) and the area under the ROC curve (AUC), and the proposed methods significantly outperform the competing methods. The proposed approach exhibits some remarkable advantages in both the heterogeneous feature subset fusion and classification phases. Compared with fusion strategies at the feature level and decision level, the proposed ℓ2,1 norm multi-kernel learning algorithm is able to accurately fuse the complementary and heterogeneous feature sets, and automatically prune the irrelevant and redundant feature subsets to form a more discriminative feature set, leading to promising classification performance. Moreover, the proposed algorithm consistently outperforms the comparable classification approaches in the literature. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
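
    At the heart of both optimization strategies is the proximal operator of the ℓ2,1 norm, which has a closed form: group-wise soft thresholding. The short Python sketch below shows that operator on a small weight matrix; it illustrates the general tool, not the paper's full multi-kernel learner, and the matrix values are arbitrary.

        import numpy as np

        def prox_l21(W, tau):
            """Proximal operator of tau * ||W||_{2,1}, applied row-wise (one row per group).

            Each row w_g is shrunk towards zero: w_g <- max(0, 1 - tau / ||w_g||_2) * w_g,
            so rows (groups) with small norm are zeroed out entirely.
            """
            W = np.asarray(W, dtype=float)
            norms = np.linalg.norm(W, axis=1, keepdims=True)
            scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
            return scale * W

        # Small example: the middle row has small norm and is removed (group selection).
        W = np.array([[0.9, -0.4],
                      [0.05, 0.02],
                      [1.5, 1.0]])
        print(prox_l21(W, tau=0.2))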

  20. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    NASA Astrophysics Data System (ADS)

    Pereira, N. F.; Sitek, A.

    2010-09-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies leads to superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can outperform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.

  1. Effect of compliance during periodontal maintenance therapy on levels of bacteria associated with periodontitis: A 6-year prospective study.

    PubMed

    Costa, Fernando Oliveira; Vieira, Thaís Riberal; Cortelli, Sheila Cavalca; Cota, Luís Otávio Miranda; Costa, José Eustáquio; Aguiar, Maria Cássia Ferreira; Cortelli, José Roberto

    2018-05-01

    It is well established that regular compliance during periodontal maintenance therapy (PMT) maintains the stability of periodontal clinical parameters obtained after active periodontal therapy (APT). However, compliance during PMT has not yet been related to subgingival bacterial levels. Thus, this study followed individuals in PMT over 6 years and longitudinally evaluated the effects of compliance on periodontitis-associated bacterial levels and its relation to periodontal status. From a 6-year prospective cohort study with 212 individuals in PMT, 91 were determined to be eligible. From this total, 28 regular compliers (RC) were randomly selected and matched for age and sex with 28 irregular compliers (IC). Complete periodontal examination and microbiological samples were obtained 5 times: T1 (prior to APT), T2 (after APT), T3 (2 years), T4 (4 years), and T5 (6 years). Total bacteria counts and levels of Actinomyces naeslundii, Porphyromonas gingivalis, Tannerella forsythia, and Treponema denticola were evaluated through quantitative polymerase chain reaction. RC had less tooth loss and better clinical and microbiological conditions over time when compared with IC. IC had higher total bacterial counts and higher levels of T. denticola. Moreover, among IC, total bacterial counts were positively associated with plaque index and bleeding on probing, while levels of A. naeslundii, T. forsythia, and T. denticola were negatively associated with clinical attachment loss (4 to 5 mm) among RC. Compliance positively influenced subgingival microbiota and contributed to stability of periodontal clinical status. Regular visits during PMT sustained microbiological benefits provided by APT over a 6-year period. © 2018 American Academy of Periodontology.

  2. Geostatistical Characteristic of Space -Time Variation in Underground Water Selected Quality Parameters in Klodzko Water Intake Area (SW Part of Poland)

    NASA Astrophysics Data System (ADS)

    Namysłowska-Wilczyńska, Barbara

    2016-04-01

    This paper presents selected results of research connected with the development of a 3D geostatistical hydrogeochemical model of the Klodzko Drainage Basin, dedicated to the spatial and temporal variation in selected quality parameters of underground water in the Klodzko water intake area (SW part of Poland). The research covers the period 2011÷2012. Spatial analyses of the variation in various quality parameters, i.e., the contents of ammonium ion [gNH4+/m3], nitrate ion NO3- [gNO3/m3], phosphate ion PO4-3 [gPO4-3/m3] and total organic carbon (TOC) [gC/m3], as well as pH, redox potential and temperature [°C], were carried out on the basis of chemical determinations of the quality parameters of underground water samples taken from the wells in the water intake area. Spatial and temporal variation in the quality parameters was analyzed on the basis of archival data (period 1977÷1999) for 22 (pump and siphon) wells with depths ranging from 9.5 to 38.0 m b.g.l., and of later data obtained (November 2011) from tests of water taken from 14 existing wells. The wells were built in the years 1954÷1998. The water abstraction depth (the difference between the terrain elevation and the dynamic water table level) ranges from 276÷286 m a.s.l., with an average of 282.05 m a.s.l. The dynamic water table level lies between 6.22 m÷16.44 m b.g.l., with a mean value of 9.64 m b.g.l. The latest data (January 2012) were acquired from 3 new piezometers, with depths of 9÷10 m, installed at other locations in the relevant area. Thematic databases were created, containing original data on coordinates X, Y (latitude, longitude) and Z (terrain elevation and time in years) and on the regionalized variables, i.e. the underground water quality parameters in the Klodzko water intake area determined for different analytical configurations (22 wells, 14 wells, 14 wells + 3 piezometers). Both the archival data (acquired in the years 1977÷1999) and the latest data (collected in 2011÷2012) were analyzed. These data were subjected to spatial analyses using statistical and geostatistical methods. The evaluation of the basic statistics of the investigated quality parameters, including their histograms of distributions, scatter diagrams between these parameters and also correlation coefficients r, is presented in this article. The directional semivariogram function and the ordinary (block) kriging procedure were used to build the 3D geostatistical model. The geostatistical parameters of the theoretical models of the directional semivariograms of the studied water quality parameters, calculated along the time interval and along the well depth (taking into account the terrain elevation), were used in the ordinary (block) kriging estimation. The obtained estimation results, i.e. block diagrams, allowed the levels of increased values Z* of the studied underground water quality parameters to be determined. The analysis of the variability in the selected quality parameters of underground water for the analyzed area of the Klodzko water intake was enriched by referring to the results of geostatistical studies carried out for underground water quality parameters and also for treated water in the Klodzko water supply system (iron Fe, manganese Mn and ammonium ion NH4+ contents), discussed in earlier works. Spatial and temporal variation in the latter parameters was analysed on the basis of data from 2007÷2011 and 2008÷2011. Generally, the behaviour of the underground water quality parameters has been found to vary in space and time. Thanks to the spatial analyses of the variation in the quality parameters in the Kłodzko underground water intake area, some regularities (trends) in the variation in water quality have been identified.
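
    The estimation step referred to above can be condensed into a short sketch. The following Python fragment performs ordinary kriging of a single quality parameter at one target location using an assumed exponential semivariogram model; the nugget, sill and range values as well as the sample coordinates and values are synthetic illustrations, not the Klodzko intake data or the fitted directional models of the study.

        import numpy as np

        def exponential_semivariogram(h, nugget, sill, rng_param):
            """gamma(h) = nugget + (sill - nugget) * (1 - exp(-3 h / range))."""
            return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng_param))

        def ordinary_kriging(coords, values, target, nugget=0.0, sill=1.0, rng_param=500.0):
            """Ordinary kriging estimate at one target location (2-D coordinates)."""
            n = len(values)
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
            gamma = exponential_semivariogram(d, nugget, sill, rng_param)
            # Kriging system with a Lagrange multiplier enforcing unit-sum weights.
            K = np.ones((n + 1, n + 1))
            K[:n, :n] = gamma
            K[n, n] = 0.0
            d0 = np.linalg.norm(coords - target, axis=1)
            rhs = np.append(exponential_semivariogram(d0, nugget, sill, rng_param), 1.0)
            weights = np.linalg.solve(K, rhs)[:n]
            return float(weights @ values)

        # Synthetic wells: coordinates in metres and one water-quality value per well.
        rng = np.random.default_rng(7)
        coords = rng.uniform(0.0, 1000.0, size=(14, 2))
        values = 2.0 + 0.002 * coords[:, 0] + 0.1 * rng.normal(size=14)
        print(ordinary_kriging(coords, values, target=np.array([500.0, 500.0])))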

  3. The unsaturated flow in porous media with dynamic capillary pressure

    NASA Astrophysics Data System (ADS)

    Milišić, Josipa-Pina

    2018-05-01

    In this paper we consider a degenerate pseudoparabolic equation for the wetting saturation of an unsaturated two-phase flow in porous media with a dynamic capillary pressure-saturation relationship in which the relaxation parameter depends on the saturation. Following the approach given in [13], the existence of a weak solution is proved using Galerkin approximation and regularization techniques. A priori estimates needed for passing to the limit when the regularization parameter goes to zero are obtained by using appropriate test functions, motivated by the fact that the considered PDE allows a natural generalization of the classical Kullback entropy. Finally, special care was taken in obtaining an estimate of the mixed-derivative term by combining the information from the capillary pressure with the obtained a priori estimates on the saturation.
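
    For orientation, the dynamic capillary pressure-saturation relationship alluded to above is commonly written in the form below (the standard Hassanizadeh-Gray form, supplied here as an assumption since the abstract itself does not display the equation):

        p_c = p_n - p_w = p_c^{\mathrm{eq}}(S_w) - \tau(S_w)\, \frac{\partial S_w}{\partial t},

    where p_c^{eq} is the equilibrium capillary pressure curve, S_w the wetting saturation, and \tau(S_w) >= 0 the saturation-dependent relaxation parameter.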

  4. Regularities in Low-Temperature Phosphatization of Silicates

    NASA Astrophysics Data System (ADS)

    Savenko, A. V.

    2018-01-01

    The regularities in the low-temperature phosphatization of silicates are defined from long-term experiments on the interaction between different silicate minerals and phosphate-bearing solutions in a wide range of medium acidity. It is shown that the parameters of the phosphatization reaction of hornblende, orthoclase, and labradorite have the same values as those for clay minerals (kaolinite and montmorillonite). This effect may appear if phosphatization proceeds not on silicate minerals with different structures and compositions, but on a secondary silicate phase that forms upon the interaction between silicates and water and is stable in a certain pH range. The variation in the parameters of the phosphatization reaction at pH ≈ 1.8 is due to the stability of a silicate phase different from that at higher pH values.

  5. A unified framework for approximation in inverse problems for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1988-01-01

    A theoretical framework is presented that can be used to treat approximation techniques for very general classes of parameter estimation problems involving distributed systems that are either first or second order in time. Using the approach developed, one can obtain both convergence and stability (continuous dependence of parameter estimates with respect to the observations) under very weak regularity and compactness assumptions on the set of admissible parameters. This unified theory can be used for many problems found in the recent literature and in many cases offers significant improvements to existing results.

  6. Structural characterization of the packings of granular regular polygons.

    PubMed

    Wang, Chuncheng; Dong, Kejun; Yu, Aibing

    2015-12-01

    By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to 11 edges under gravity. The effects of shape and friction on the packing structures are investigated by various structural parameters, including packing fraction, the radial distribution function, coordination number, Voronoi tessellation, and bond-orientational order. We find that packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by the increase of edge number and decrease of friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture for the packings of regular polygons.

  7. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    PubMed Central

    Zhang, Jie; Fan, Shangang; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki

    2017-01-01

    Both L1/2 and L2/3 are typical non-convex regularizations of Lp (0 < p < 1).

  8. Nonsmooth, nonconvex regularizers applied to linear electromagnetic inverse problems

    NASA Astrophysics Data System (ADS)

    Hidalgo-Silva, H.; Gomez-Trevino, E.

    2017-12-01

    Tikhonov's regularization method is the standard technique applied to obtain models of the subsurface conductivity distribution from electric or electromagnetic measurements by minimizing U_T(m) = ||F(m) - d||² + λ P(m). The second term corresponds to the stabilizing functional, with P(m) = ||∇m||² being the usual choice, and λ the regularization parameter. Because of this roughness penalizer, the model developed by Tikhonov's algorithm tends to smear discontinuities, a feature that may be undesirable. An important requirement for the regularizer is to allow the recovery of edges while smoothing the homogeneous parts. As is well known, Total Variation (TV) is now the standard approach to meet this requirement. Recently, Wang et al. proved convergence of the alternating direction method of multipliers for nonconvex, nonsmooth optimization. In this work we present a study of several algorithms for model recovery from geosounding data based on infimal convolution, as well as on hybrid, TV, second-order TV, and nonsmooth, nonconvex regularizers, observing their performance on synthetic and real data. The algorithms are based on Bregman iteration and the split Bregman method, and the geosounding method is the low-induction-number magnetic dipole technique. Nonsmooth regularizers are handled using the Legendre-Fenchel transform.
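    The contrast between the quadratic roughness penalty and a total-variation penalty can be illustrated with a small numerical sketch. The Python fragment below is an illustration only, not the authors' code: the forward operator F, data d, and the lambda values are invented, and a simple lagged-diffusivity iteration stands in for the Bregman/ADMM solvers discussed in the abstract. It solves a toy 1-D linear problem once with the quadratic penalty and once with a TV-like penalty, showing how the former smears the edges of a blocky model while the latter preserves them.

    import numpy as np

    n = 100
    x = np.linspace(0, 1, n)
    m_true = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)   # blocky "conductivity" model

    # Smoothing forward operator standing in for F (assumed, diffusive-kernel-like).
    F = np.exp(-((x[:, None] - x[None, :]) / 0.05) ** 2)
    F /= F.sum(axis=1, keepdims=True)
    rng = np.random.default_rng(0)
    d = F @ m_true + 0.01 * rng.standard_normal(n)

    D = np.diff(np.eye(n), axis=0)                        # first-difference operator

    # Quadratic (Tikhonov) case: U(m) = ||F m - d||^2 + lam ||D m||^2, closed form.
    lam = 1e-2
    m_tik = np.linalg.solve(F.T @ F + lam * D.T @ D, F.T @ d)

    # TV-like case via lagged-diffusivity IRLS: penalty ~ lam_tv * sum sqrt((D m)_i^2 + eps).
    lam_tv, eps = 1e-3, 1e-6
    m_tv = m_tik.copy()
    for _ in range(50):
        w = 1.0 / np.sqrt((D @ m_tv) ** 2 + eps)
        m_tv = np.linalg.solve(F.T @ F + lam_tv * D.T @ (w[:, None] * D), F.T @ d)

    print("largest jump, Tikhonov model:", np.abs(np.diff(m_tik)).max())
    print("largest jump, TV model      :", np.abs(np.diff(m_tv)).max())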

  9. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    PubMed

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    Both L 1/2 and L 2/3 are typical non-convex regularizations of L p (0 < p < 1).

  10. Prevalence of Autism Spectrum Disorders in Ecuador: A Pilot Study in Quito

    ERIC Educational Resources Information Center

    Dekkers, Laura M.; Groot, Norbert A.; Díaz Mosquera, Elena N.; Andrade Zúñiga, Ivonne P.; Delfos, Martine F.

    2015-01-01

    This research presents the results of the first phase of the study on the prevalence of pupils with Autism Spectrum Disorder (ASD) in regular education in Quito, Ecuador. One-hundred-and-sixty-one regular schools in Quito were selected with a total of 51,453 pupils. Prevalence of ASD was assessed by an interview with the rector of the school or…

  11. The Effects of Pupil-Corrected Tests and Written Teacher Comments on Learning to Spell in the Upper Elementary Grades.

    ERIC Educational Resources Information Center

    Lesner, Julius

    To determine the effects of teacher comments on spelling test papers, 32 randomly selected fourth- and sixth-grade teachers from low and high socioeconomic area Los Angeles elementary schools used 965 pupils in their regular classes as subjects. The teachers gave the regular weekly spelling test, and one of four evaluation treatments was randomly…

  12. Dimensionally regularized Tsallis' statistical mechanics and two-body Newton's gravitation

    NASA Astrophysics Data System (ADS)

    Zamora, J. D.; Rocca, M. C.; Plastino, A.; Ferri, G. L.

    2018-05-01

    Typical quantifiers of Tsallis' statistical mechanics, namely the partition function Z and the mean energy 〈U〉, exhibit poles. The poles appear for distinctive values of Tsallis' characteristic real parameter q, at a numerable set of rational numbers on the q-line. These poles are dealt with using dimensional regularization. The physical effects of these poles on the specific heats are studied here for the two-body classical gravitation potential.

  13. Nonminimal Wu-Yang wormhole

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balakin, A. B.; Zayats, A. E.; Sushkov, S. V.

    2007-04-15

    We discuss exact solutions of a three-parameter nonminimal Einstein-Yang-Mills model, which describe the wormholes of a new type. These wormholes are considered to be supported by the SU(2)-symmetric Yang-Mills field, nonminimally coupled to gravity, the Wu-Yang ansatz for the gauge field being used. We distinguish between regular solutions, describing traversable nonminimal Wu-Yang wormholes, and black wormholes possessing one or two event horizons. The relation between the asymptotic mass of the regular traversable Wu-Yang wormhole and its throat radius is analyzed.

  14. The second law of thermodynamics is the first law of psychology: evolutionary developmental psychology and the theory of tandem, coordinated inheritances: comment on Lickliter and Honeycutt (2003).

    PubMed

    Tooby, John; Cosmides, Leda; Barrett, H Clark

    2003-11-01

    Organisms inherit a set of environmental regularities as well as genes, and these two inheritances repeatedly encounter each other across generations. This repetition drives natural selection to coordinate the interplay of stably replicated genes with stably persisting environmental regularities, so that this web of interactions produces the reliable development of a functionally organized design. Selection is the only known counterweight to the tendency of physical systems to lose rather than grow functional organization. This means that the individually unique and unpredictable factors in the web of developmental interactions are a disordering threat to normal development. Selection built anti-entropic mechanisms into organisms to orchestrate transactions with environments so that they have some chance of being organization-building and reproduction-enhancing rather than disordering.

  15. Higher order sensitivity of solutions to convex programming problems without strict complementarity

    NASA Technical Reports Server (NTRS)

    Malanowski, Kazimierz

    1988-01-01

    Consideration is given to a family of convex programming problems which depend on a vector parameter. It is shown that the solutions of the problems and the associated Lagrange multipliers are arbitrarily many times directionally differentiable functions of the parameter, provided that the data of the problems are sufficiently regular. The characterizations of the respective derivatives are given.

  16. Regularized estimation of Euler pole parameters

    NASA Astrophysics Data System (ADS)

    Aktuğ, Bahadir; Yildirim, Ömer

    2013-07-01

    Euler vectors provide a unified framework to quantify the relative or absolute motions of tectonic plates through various geodetic and geophysical observations. With the advent of space geodesy, Euler parameters of several relatively small plates have been determined through the velocities derived from the space geodesy observations. However, the available data are usually insufficient in number and quality to estimate both the Euler vector components and the Euler pole parameters reliably. Since Euler vectors are defined globally in an Earth-centered Cartesian frame, estimation with the limited geographic coverage of the local/regional geodetic networks usually results in highly correlated vector components. In the case of estimating the Euler pole parameters directly, the situation is even worse, and the position of the Euler pole is nearly collinear with the magnitude of the rotation rate. In this study, a new method, which consists of an analytical derivation of the covariance matrix of the Euler vector in an ideal network configuration, is introduced and a regularized estimation method specifically tailored for estimating the Euler vector is presented. The results show that the proposed method outperforms the least squares estimation in terms of the mean squared error.
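    The regularized estimation advocated here can be sketched, under simplifying assumptions, as a ridge-type damped least-squares fit of the Euler vector to site velocities v = ω × r. In the fragment below the network geometry, noise level, and damping value alpha are invented and the analytically derived covariance matrix proposed in the paper is not used; the sketch only illustrates why damping helps when the geographic coverage is small.

    import numpy as np

    def cross_matrix(r):
        """Return A such that A @ omega equals omega x r."""
        x, y, z = r
        return np.array([[0.0,   z,  -y],
                         [-z,  0.0,   x],
                         [ y,   -x, 0.0]])

    R = 6371.0e3                                   # Earth radius [m]
    lats = np.radians([35.0, 36.5, 38.0])          # small regional network (assumed)
    lons = np.radians([32.0, 33.0, 34.5])
    sites = R * np.column_stack([np.cos(lats) * np.cos(lons),
                                 np.cos(lats) * np.sin(lons),
                                 np.sin(lats)])

    omega_true = np.array([1.0, -2.0, 3.0]) * 1e-9     # rad/yr (assumed)
    A = np.vstack([cross_matrix(r) for r in sites])
    v = A @ omega_true + 1e-4 * np.random.default_rng(1).standard_normal(A.shape[0])

    omega_ls = np.linalg.lstsq(A, v, rcond=None)[0]    # plain least squares
    alpha = 1e-3 * np.trace(A.T @ A) / 3.0             # simple damping heuristic (assumed)
    omega_reg = np.linalg.solve(A.T @ A + alpha * np.eye(3), A.T @ v)

    print("least-squares estimate:", omega_ls)
    print("regularized estimate  :", omega_reg)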

  17. A hybrid Pade-Galerkin technique for differential equations

    NASA Technical Reports Server (NTRS)

    Geer, James F.; Andersen, Carl M.

    1993-01-01

    A three-step hybrid analysis technique, which successively uses the regular perturbation expansion method, the Pade expansion method, and then a Galerkin approximation, is presented and applied to some model boundary value problems. In the first step of the method, the regular perturbation method is used to construct an approximation to the solution in the form of a finite power series in a small parameter epsilon associated with the problem. In the second step of the method, the series approximation obtained in step one is used to construct a Pade approximation in the form of a rational function in the parameter epsilon. In the third step, the various powers of epsilon which appear in the Pade approximation are replaced by new (unknown) parameters (delta(sub j)). These new parameters are determined by requiring that the residual formed by substituting the new approximation into the governing differential equation is orthogonal to each of the perturbation coordinate functions used in step one. The technique is applied to model problems involving ordinary or partial differential equations. In general, the technique appears to provide good approximations to the solution even when the perturbation and Pade approximations fail to do so. The method is discussed and topics for future investigations are indicated.
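    Steps one and two of the technique can be sketched numerically: take the coefficients of a truncated perturbation series and convert them to a Padé approximant by solving a small linear system for the denominator coefficients. The fragment below is a minimal illustration on a series whose sum is known (the exponential function); the Galerkin refinement of step three, which replaces the powers of epsilon by the parameters delta(sub j), is only indicated by a comment, and none of this code is taken from the report itself.

    import numpy as np

    def pade_2_2(c):
        """[2/2] Pade approximant from the first five series coefficients c0..c4."""
        # Denominator 1 + q1*eps + q2*eps^2 from the linear "Pade equations".
        M = np.array([[c[2], c[1]],
                      [c[3], c[2]]])
        q1, q2 = np.linalg.solve(M, -np.array([c[3], c[4]]))
        p0 = c[0]
        p1 = c[1] + c[0] * q1
        p2 = c[2] + c[1] * q1 + c[0] * q2
        num = np.poly1d([p2, p1, p0])       # highest power first
        den = np.poly1d([q2, q1, 1.0])
        return lambda eps: num(eps) / den(eps)

    # Step-one stand-in: first five Taylor coefficients of exp(eps).
    c = np.array([1.0, 1.0, 1 / 2, 1 / 6, 1 / 24])
    approx = pade_2_2(c)

    for eps in (0.5, 1.0, 2.0):
        series = sum(ck * eps ** k for k, ck in enumerate(c))
        print(f"eps={eps}: series={series:.4f}  pade={approx(eps):.4f}  exact={np.exp(eps):.4f}")
    # Step three would replace the powers of eps by Galerkin-determined parameters delta_j.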

  18. On-Orbit Performance of the Helioseismic and Magnetic Imager Instrument onboard the Solar Dynamics Observatory

    NASA Astrophysics Data System (ADS)

    Hoeksema, J. T.; Baldner, C. S.; Bush, R. I.; Schou, J.; Scherrer, P. H.

    2018-03-01

    The Helioseismic and Magnetic Imager (HMI) instrument is a major component of NASA's Solar Dynamics Observatory (SDO) spacecraft. Since commencement of full regular science operations on 1 May 2010, HMI has operated with remarkable continuity, e.g. during the more than five years of the SDO prime mission that ended 30 September 2015, HMI collected 98.4% of all possible 45-second velocity maps; minimizing gaps in these full-disk Dopplergrams is crucial for helioseismology. HMI velocity, intensity, and magnetic-field measurements are used in numerous investigations, so understanding the quality of the data is important. This article describes the calibration measurements used to track the performance of the HMI instrument, and it details trends in important instrument parameters during the prime mission. Regular calibration sequences provide information used to improve and update the calibration of HMI data. The set-point temperature of the instrument front window and optical bench is adjusted regularly to maintain instrument focus, and changes in the temperature-control scheme have been made to improve stability in the observable quantities. The exposure time has been changed to compensate for a 20% decrease in instrument throughput. Measurements of the performance of the shutter and tuning mechanisms show that they are aging as expected and continue to perform according to specification. Parameters of the tunable optical-filter elements are regularly adjusted to account for drifts in the central wavelength. Frequent measurements of changing CCD-camera characteristics, such as gain and flat field, are used to calibrate the observations. Infrequent expected events such as eclipses, transits, and spacecraft off-points interrupt regular instrument operations and provide the opportunity to perform additional calibration. Onboard instrument anomalies are rare and seem to occur quite uniformly in time. The instrument continues to perform very well.

  19. Comparison of Two Methods of Noise Power Spectrum Determinations of Medical Radiography Systems

    NASA Astrophysics Data System (ADS)

    Hassan, Wan Muhamad Saridan Wan; Ahmed Darwish, Zeki

    2011-03-01

    Noise in medical images is recognized as an important factor that determines image quality. Image noise is characterized by the noise power spectrum (NPS). We compared two methods of NPS determination, namely the methods of Wagner and Dobbins, on a Lanex Regular TMG screen-film system and a Hologic Lorad Selenia full-field digital mammography system, with the aim of choosing the better method to use. The methods differ in various parametric choices and algorithm implementations. These parameters include the low-pass filtering, low-frequency filtering, windowing, smoothing, aperture correction, overlapping of regions of interest (ROI), length of the fast Fourier transform, ROI size, method of ROI normalization, and slice selection of the NPS. Overall, the two methods agreed over the practical noise power spectrum range of 10⁻³ to 10⁻⁶ mm² across the spatial frequency range 0–10 mm⁻¹.
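    Both methods are variants of the ROI-based estimator of the two-dimensional NPS from a uniform-exposure image, so a generic sketch of that core is given below: detrend each region of interest, Fourier transform it, average the squared magnitudes, and scale by the pixel area. The ROI size, pixel pitch, and synthetic flat-field image are assumptions, and the specific filtering, windowing, smoothing, and slice-selection choices that distinguish the Wagner and Dobbins implementations are deliberately not reproduced.

    import numpy as np

    def nps_2d(flat_image, roi=128, pixel_mm=0.1):
        """Average |FFT|^2 over detrended, non-overlapping ROIs of a uniform image."""
        ny, nx = flat_image.shape
        spectra = []
        for iy in range(0, ny - roi + 1, roi):
            for ix in range(0, nx - roi + 1, roi):
                patch = flat_image[iy:iy + roi, ix:ix + roi].astype(float)
                patch -= patch.mean()                       # simple detrend (assumed)
                spectra.append(np.abs(np.fft.fft2(patch)) ** 2)
        nps = np.mean(spectra, axis=0) * (pixel_mm ** 2) / (roi * roi)   # units: mm^2
        freqs = np.fft.fftfreq(roi, d=pixel_mm)                          # cycles/mm
        return np.fft.fftshift(nps), np.fft.fftshift(freqs)

    # Synthetic white-noise flat field: the NPS should be flat at sigma^2 * pixel area.
    rng = np.random.default_rng(0)
    img = 1000.0 + 5.0 * rng.standard_normal((512, 512))
    nps, f = nps_2d(img)
    print("mean NPS:", nps.mean(), " expected ~", 25.0 * 0.1 * 0.1)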

  20. Pd (II) complexes of bidentate chalcone ligands: Synthesis, spectral, thermal, antitumor, antioxidant, antimicrobial, DFT and SAR studies

    NASA Astrophysics Data System (ADS)

    Gaber, Mohamed; Awad, Mohamed K.; Atlam, Faten M.

    2018-05-01

    The ligation behavior of two chalcone ligands, namely (E)-3-(4-chlorophenyl)-1-(pyridin-2-yl)prop-2-en-1-one (L1) and (E)-3-(4-methoxyphenyl)-1-(pyridin-2-yl)prop-2-en-1-one (L2), towards the Pd(II) ion is determined. The structures of the complexes are elucidated by elemental analysis, spectral methods (IR, electronic and NMR spectra) as well as conductance measurements and thermal analysis. The metal complexes exhibit a square planar geometrical arrangement. The kinetic and thermodynamic parameters for some selected decomposition steps have been calculated. The antimicrobial, antioxidant and anticancer activities of the chalcones and their Pd(II) complexes have been evaluated. Molecular orbital computations are performed using DFT at the B3LYP level with 6-31+G(d) and LANL2DZ basis sets to obtain results consistent with the experimental values. The calculations are performed to obtain the optimized molecular geometry, charge density distribution, and the extent of distortion from regular geometry. Thermodynamic parameters for the investigated compounds are also studied. The calculations confirm that the investigated complexes have square planar geometry, in good agreement with the experimental observations.

  1. Fractal morphometry of cell complexity.

    PubMed

    Losa, Gabriele A

    2002-01-01

    Irregularity and self-similarity under scale changes are the main attributes of the morphological complexity of both normal and abnormal cells and tissues. In other words, the shape of a self-similar object does not change when the scale of measurement changes, because each part of it looks similar to the original object. However, the size and geometrical parameters of an irregular object do differ when it is examined at increasing resolution, which reveals more details. Significant progress has been made over the past three decades in understanding how irregular shapes and structures in the physical and biological sciences can be analysed. Dominant influences have been the discovery of a new practical geometry of Nature, now known as fractal geometry, and the continuous improvements in computation capabilities. Unlike conventional Euclidean geometry, which was developed to describe regular and ideal geometrical shapes which are practically unknown in nature, fractal geometry can be used to measure the fractal dimension, contour length, surface area and other dimension parameters of almost all irregular and complex biological tissues. We have used selected examples to illustrate the application of the fractal principle to measuring irregular and complex membrane ultrastructures of cells at specific functional and pathological stages.

  2. [Study of ocular surface electromyography signal analysis].

    PubMed

    Zhu, Bei; Qi, Li-Ping

    2009-11-01

    To test ocular surface electromyography signal waves and characteristic parameters in order to provide effective data for the diagnosis and treatment of ocular myopathy. Surface electromyography tests were performed in 140 normal volunteers and 30 patients with ophthalmoplegia. Surface electrodes were attached to the medial canthi, lateral canthi, and the middle of the frontal bone. Alternately flashing red lamps were then installed on the perimeter to reduce eyeball movement. Computer hardware, software, and a 12-bit A/D adapter were used. The sampling frequency could be selected up to 40 kHz, the amplifier frequency was 2 kHz, and the input short-circuit noise was less than 3 microV. For normal volunteers, the ocular surface electromyography signals were regular, and the electric waves were similar between different sex and age groups. For patients with ophthalmoplegia, the wave amplitude of the ocular surface electromyography signals was reduced or absent in the direction of dyskinesia. The wave amplitude was related to the degree of the pathological process. The characteristic parameters of patients with ophthalmoplegia were higher than those of normal volunteers. The ocular surface electromyograms obtained from normal volunteers differed markedly from those of patients with ophthalmoplegia. This test can provide reliable quantitative data for the diagnosis and treatment of ocular myopathy.

  3. Computerized morphometry as an aid in distinguishing recurrent versus nonrecurrent meningiomas.

    PubMed

    Noy, Shawna; Vlodavsky, Euvgeni; Klorin, Geula; Drumea, Karen; Ben Izhak, Ofer; Shor, Eli; Sabo, Edmond

    2011-06-01

    To use novel digital and morphometric methods to identify variables able to better predict the recurrence of intracranial meningiomas. Histologic images from 30 previously diagnosed meningioma tumors that recurred over 10 years of follow-up were consecutively selected from the Rambam Pathology Archives. Images were captured and morphometrically analyzed. Novel algorithms of digital pattern recognition using Fourier transformation and fractal and nuclear texture analyses were applied to evaluate the overall growth pattern complexity of the tumors, as well as the chromatin texture of individual tumor nuclei. The extracted parameters were then correlated with patient prognosis. Kaplan-Meier analyses revealed statistically significant associations between tumor morphometric parameters and recurrence times. Tumors with less nuclear orientation, more nuclear density, higher fractal dimension, and less regular chromatin textures tended to recur faster than those with a higher degree of nuclear order, less pattern complexity, lower density, and more homogeneous chromatin nuclear textures (p < 0.01). To our knowledge, these digital morphometric methods were used for the first time to accurately predict tumor recurrence in patients with intracranial meningiomas. The use of these methods may bring additional valuable information to the clinician regarding the optimal management of these patients.

  4. Carbon dioxide diffuse emission from the soil: ten years of observations at Vesuvio and Campi Flegrei (Pozzuoli), and linkages with volcanic activity

    NASA Astrophysics Data System (ADS)

    Granieri, D.; Avino, R.; Chiodini, G.

    2010-01-01

    Carbon dioxide flux from the soil is regularly monitored in selected areas of Vesuvio and Solfatara (Campi Flegrei, Pozzuoli) with the twofold aim of i) monitoring spatial and temporal variations of the degassing process and ii) investigating if the surface phenomena could provide information about the processes occurring at depth. At present, the surveyed areas include 15 fixed points around the rim of Vesuvio and 71 fixed points in the floor of Solfatara crater. Soil CO2 flux has been measured since 1998, at least once a month, in both areas. In addition, two automatic permanent stations, located at Vesuvio and Solfatara, measure the CO2 flux and some environmental parameters that can potentially influence the CO2 diffuse degassing. Series acquired by continuous stations are characterized by an annual periodicity that is related to the typical periodicities of some meteorological parameters. Conversely, series of CO2 flux data arising from periodic measurements over the arrays of Vesuvio and Solfatara are less dependent on external factors such as meteorological parameters, local soil properties (porosity, hydraulic conductivity) and topographic effects (high or low ground). Therefore we argue that the long-term trend of this signal contains the “best” possible representation of the endogenous signal related to the upflow of deep hydrothermal fluids.

  5. Prospective regularization design in prior-image-based reconstruction

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.

    2015-12-01

    Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in phantoms where the optimal parameters vary spatially by an order of magnitude or more. In a series of studies designed to explore potential unknowns associated with accurate PIBR, optimal prior image strength was found to vary with attenuation differences associated with anatomical change but exhibited only small variations as a function of the shape and size of the change. The results suggest that, given a target change attenuation, prospective patient-, change-, and data-specific customization of the prior image strength can be performed to ensure reliable reconstruction of specific anatomical changes.

  6. Regular Topographic Patterning of Karst Depressions Suggests Landscape Self-Organization

    NASA Astrophysics Data System (ADS)

    Quintero, C.; Cohen, M. J.

    2017-12-01

    Thousands of wetland depressions that are commonly host to cypress domes dot the sub-tropical limestone landscape of South Florida. The origin of these depression features has been the topic of debate. Here we build upon the work of previous surveyors of this landscape to analyze the morphology and spatial distribution of depressions on the Big Cypress landscape. We took advantage of the emergence and availability of high-resolution Light Detection and Ranging (LiDAR) technology and ArcMap GIS software to analyze the structure and regularity of landscape features with methods unavailable to past surveyors. Six 2.25 km² LiDAR plots within the preserve were selected for remote analysis and one depression feature within each plot was selected for more intensive sediment and water depth surveying. Depression features on the Big Cypress landscape were found to show strong evidence of regular spatial patterning. Periodicity, a feature of regularly patterned landscapes, is apparent in both variograms and radial spectrum analyses. Size class distributions of the identified features indicate constrained feature sizes, while Average Nearest Neighbor analyses support the inference of dispersed features with non-random spacing. The presence of regular patterning on this landscape strongly implies biotic reinforcement of spatial structure by way of scale-dependent feedback. In characterizing the structure of this wetland landscape we add to the growing body of work dedicated to documenting how water, life and geology may interact to shape the natural landscapes we see today.

  7. SU-E-T-398: Evaluation of Radiobiological Parameters Using Serial Tumor Imaging During Radiotherapy as An Inverse Ill-Posed Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chvetsov, A; Sandison, G; Schwartz, J

    Purpose: Combination of serial tumor imaging with radiobiological modeling can provide more accurate information on the nature of treatment response and what underlies resistance. The purpose of this article is to improve the algorithms related to imaging-based radiobiological modeling of tumor response. Methods: Serial imaging of tumor response to radiation therapy represents a sum of tumor cell sensitivity, tumor growth rates, and the rate of cell loss, which are not separated explicitly. Accurate treatment response assessment would require separation of these radiobiological determinants of treatment response because they define tumor control probability. We show that the problem of reconstruction of radiobiological parameters from serial imaging data can be considered an inverse ill-posed problem described by the Fredholm integral equation of the first kind, because it is governed by a sum of several exponential processes. Therefore, the parameter reconstruction can be solved using regularization methods. Results: To study the reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and a two-level cell population model of tumor response which separates the entire tumor cell population into two subpopulations of viable and lethally damaged cells. The reconstruction was done using a least-squares objective function and a simulated annealing algorithm. Using in vitro data for radiobiological parameters as reference data, we showed that the reconstructed values of cell surviving fractions and potential doubling time exhibit non-physical fluctuations if no stabilization algorithms are applied. The variational regularization allowed us to obtain statistical distributions for cell surviving fractions and cell number doubling times comparable to in vitro data. Conclusion: Our results indicate that using variational regularization can increase the number of free parameters in the model and open the way to development of more advanced algorithms which take into account tumor heterogeneity, for example, related to hypoxia.

  8. School Psychologists' Continuing Professional Development Preferences and Practices

    ERIC Educational Resources Information Center

    Armistead, Leigh D.; Castillo, Jose M.; Curtis, Michael J.; Chappel, Ashley; Cunningham, Jennifer

    2013-01-01

    This study investigated school psychologists' continuing professional development (CPD) activities, topics, needs, motivations, financial expenditures, and opinions, as well as relationships between select demographic characteristics and certain CPD practices and preferences. A survey was mailed to 1,000 randomly selected Regular Members of…

  9. A Tikhonov Regularization Scheme for Focus Rotations with Focused Ultrasound Phased Arrays

    PubMed Central

    Hughes, Alec; Hynynen, Kullervo

    2016-01-01

    Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually-driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations. PMID:27913323

  10. A Tikhonov Regularization Scheme for Focus Rotations With Focused Ultrasound-Phased Arrays.

    PubMed

    Hughes, Alec; Hynynen, Kullervo

    2016-12-01

    Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound-phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations.
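    The role of the Tikhonov parameter described in these two records can be sketched for the simpler, non-rotated focusing problem: given a matrix H mapping element drives to complex pressures at control points, the regularized drives are u = (H^H H + lambda I)^-1 H^H p_d, and lambda trades focal quality against drive amplitude (and hence array efficiency). The fragment below is illustrative only; the monopole propagation model, line-array geometry, and lambda values are assumptions, and the focus-rotation parameterization of the papers is not implemented.

    import numpy as np

    k = 2 * np.pi * 1.0e6 / 1500.0                     # wavenumber at 1 MHz in water
    elems = np.column_stack([np.linspace(-0.02, 0.02, 64),
                             np.zeros(64), np.zeros(64)])      # 64-element line array
    targets = np.array([[0.000, 0.0, 0.05],                    # desired focus
                        [0.002, 0.0, 0.05]])                   # nearby control point

    # Propagation matrix: pressure at target j from a unit drive on element i.
    r = np.linalg.norm(targets[:, None, :] - elems[None, :, :], axis=2)
    H = np.exp(1j * k * r) / r

    p_d = np.array([1.0 + 0j, 0.0 + 0j])               # focus here, suppress the neighbour

    def tikhonov_drive(H, p_d, lam):
        A = H.conj().T @ H + lam * np.eye(H.shape[1])
        return np.linalg.solve(A, H.conj().T @ p_d)

    scale = np.linalg.norm(H, 2) ** 2                  # spectral scale of H^H H
    for frac in (1e-6, 1e-2, 1.0):                     # small lambda: sharper focus, larger drives
        u = tikhonov_drive(H, p_d, frac * scale)
        print(f"lam={frac * scale:.2e}  max|u|={np.abs(u).max():.3e}  "
              f"focus |p|={np.abs(H @ u)[0]:.3f}")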

  11. The Cluster AgeS Experiment (CASE). Detecting Aperiodic Photometric Variability with the Friends of Friends Algorithm

    NASA Astrophysics Data System (ADS)

    Rozyczka, M.; Narloch, W.; Pietrukowicz, P.; Thompson, I. B.; Pych, W.; Poleski, R.

    2018-03-01

    We adapt the friends of friends algorithm to the analysis of light curves, and show that it can be successfully applied to searches for transient phenomena in large photometric databases. As a test case we search OGLE-III light curves for known dwarf novae. A single combination of control parameters allows us to narrow the search to 1% of the data while reaching a ≈90% detection efficiency. A search involving ≈2% of the data and three combinations of control parameters can be significantly more effective; in our case a 100% efficiency is reached. The method can also quite efficiently detect semi-regular variability. In particular, 28 new semi-regular variables have been found in the field of the globular cluster M22, which was examined earlier with the help of periodicity-searching algorithms.
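    A minimal illustration of applying a friends-of-friends grouping to a single light curve is sketched below: points that brighten above the quiescent level are linked whenever consecutive epochs are closer than a chosen linking length, and sufficiently populated groups are flagged as outburst candidates. The linking length, brightening threshold, and synthetic light curve are invented for the example and are not the control-parameter combinations tuned in the paper.

    import numpy as np

    def friends_of_friends(times, link):
        """Group sorted epochs: two points are 'friends' if separated by less than link."""
        if len(times) == 0:
            return []
        groups, current = [], [0]
        for i in range(1, len(times)):
            if times[i] - times[i - 1] < link:
                current.append(i)
            else:
                groups.append(current)
                current = [i]
        groups.append(current)
        return groups

    rng = np.random.default_rng(2)
    t = np.sort(rng.uniform(0, 1000, 800))             # observation epochs [days]
    mag = 18.0 + 0.05 * rng.standard_normal(t.size)    # quiescent magnitudes
    mag[(t > 400) & (t < 410)] -= 2.5                  # 10-day dwarf-nova-like outburst

    bright = np.where(mag < 18.0 - 5 * 0.05)[0]        # points well above quiescence
    groups = friends_of_friends(t[bright], link=5.0)
    candidates = [g for g in groups if len(g) >= 3]    # require at least 3 linked friends
    print("transient candidate groups:", len(candidates))
    for g in candidates:
        print("  outburst between t =", round(t[bright][g[0]], 1),
              "and t =", round(t[bright][g[-1]], 1))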

  12. Sinc-Galerkin estimation of diffusivity in parabolic problems

    NASA Technical Reports Server (NTRS)

    Smith, Ralph C.; Bowers, Kenneth L.

    1991-01-01

    A fully Sinc-Galerkin method for the numerical recovery of spatially varying diffusion coefficients in linear partial differential equations is presented. Because the parameter recovery problems are inherently ill-posed, an output error criterion in conjunction with Tikhonov regularization is used to formulate them as infinite-dimensional minimization problems. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which displays an exponential convergence rate and is valid on the infinite time interval. The minimization problems are then solved via a quasi-Newton/trust region algorithm. The L-curve technique for determining an approximate value of the regularization parameter is briefly discussed, and numerical examples are given which show the applicability of the method both for problems with noise-free data as well as for those whose data contains white noise.
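    The L-curve technique mentioned above can be sketched independently of the Sinc-Galerkin machinery: for a grid of regularization parameters, record the (log residual norm, log solution norm) pairs of the Tikhonov solutions and take the point of maximum curvature as the corner. The toy ill-posed problem, noise level, and lambda grid in the fragment below are illustrative assumptions, not the parameter-estimation problem treated in the report.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 80
    U, s, Vt = np.linalg.svd(rng.standard_normal((n, n)))
    s = 10.0 ** np.linspace(0, -8, n)                  # impose a rapidly decaying spectrum
    A = U @ np.diag(s) @ Vt
    x_true = np.sin(np.linspace(0, 3 * np.pi, n))
    b = A @ x_true + 1e-4 * rng.standard_normal(n)

    lams = 10.0 ** np.linspace(-10, 1, 60)
    res, sol = [], []
    for lam in lams:
        # SVD form of the Tikhonov solution: x = V diag(s/(s^2+lam)) U^T b
        x = Vt.T @ ((s / (s ** 2 + lam)) * (U.T @ b))
        res.append(np.log(np.linalg.norm(A @ x - b)))
        sol.append(np.log(np.linalg.norm(x)))

    # Discrete curvature of the (residual, solution) curve; corner = maximum curvature.
    r, e = np.array(res), np.array(sol)
    dr, de = np.gradient(r), np.gradient(e)
    d2r, d2e = np.gradient(dr), np.gradient(de)
    kappa = np.abs(dr * d2e - de * d2r) / (dr ** 2 + de ** 2) ** 1.5
    print("L-curve corner at lambda ~", lams[np.argmax(kappa)])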

  13. An experimental comparison of various methods of nearfield acoustic holography

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    2017-05-19

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two-dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than the generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed parameter regularization was comparable to that of the L-curve method.

  14. An experimental comparison of various methods of nearfield acoustic holography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two-dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than the generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed parameter regularization was comparable to that of the L-curve method.
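    Generalized cross-validation, one of the parameter-choice rules compared in the two records above, can likewise be sketched for a generic Tikhonov problem: GCV(lambda) = ||(I - A_lambda) b||^2 / [trace(I - A_lambda)]^2, with the influence matrix A_lambda evaluated cheaply through the SVD of the forward operator. The smoothing kernel, noise level, and lambda grid below are illustrative assumptions unrelated to the holography measurements.

    import numpy as np

    rng = np.random.default_rng(4)
    m = 100
    idx = np.arange(m)
    K = np.exp(-((idx[:, None] - idx[None, :]) / 10.0) ** 2)   # ill-conditioned kernel (assumed)
    x_true = np.cos(np.linspace(0, 2 * np.pi, m))
    b = K @ x_true + 1e-3 * rng.standard_normal(m)

    U, s, Vt = np.linalg.svd(K)
    beta = U.T @ b

    def gcv(lam):
        filt = s ** 2 / (s ** 2 + lam)                 # diagonal of the influence matrix
        resid2 = np.sum(((1.0 - filt) * beta) ** 2)
        return resid2 / (m - filt.sum()) ** 2

    lams = 10.0 ** np.linspace(-12, 2, 100)
    lam_opt = lams[np.argmin([gcv(l) for l in lams])]
    x_gcv = Vt.T @ ((s / (s ** 2 + lam_opt)) * beta)   # Tikhonov solution at the GCV minimum
    print("GCV-selected lambda:", lam_opt)
    print("relative reconstruction error:",
          np.linalg.norm(x_gcv - x_true) / np.linalg.norm(x_true))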

  15. Simulations of the impacts of building height layout on air quality in natural-ventilated rooms around street canyons.

    PubMed

    Yang, Fang; Zhong, Ke; Chen, Yonghang; Kang, Yanming

    2017-10-01

    Numerical simulations were conducted to investigate the effects of the building height ratio (HR, the height ratio of the upstream building to the downstream building) on the air quality in buildings beside street canyons; both regular and staggered canyons were considered in the simulations. The results show that the building height ratio affects not only the ventilation fluxes of the rooms in the downstream building but also the pollutant concentrations around the building. A parameter, the outdoor effective source intensity of a room, is then proposed to calculate the amount of vehicular pollutants that enters building rooms. A smaller value of this parameter indicates that less pollutant enters the room. The numerical results reveal that HRs from 2/7 to 7/2 are the favorable height ratios for regular canyons, as they yield smaller values than the other cases, while HR values of 5/7, 7/7, and 7/5 are appropriate for staggered canyons. In addition, in terms of improving indoor air quality by natural ventilation, staggered canyons with favorable HR are better than regular canyons.

  16. Design of 4D x-ray tomography experiments for reconstruction using regularized iterative algorithms

    NASA Astrophysics Data System (ADS)

    Mohan, K. Aditya

    2017-10-01

    4D X-ray computed tomography (4D-XCT) is widely used to perform non-destructive characterization of time varying physical processes in various materials. The conventional approach to improving temporal resolution in 4D-XCT involves the development of expensive and complex instrumentation that acquire data faster with reduced noise. It is customary to acquire data with many tomographic views at a high signal to noise ratio. Instead, temporal resolution can be improved using regularized iterative algorithms that are less sensitive to noise and limited views. These algorithms benefit from optimization of other parameters such as the view sampling strategy while improving temporal resolution by reducing the total number of views or the detector exposure time. This paper presents the design principles of 4D-XCT experiments when using regularized iterative algorithms derived using the framework of model-based reconstruction. A strategy for performing 4D-XCT experiments is presented that allows for improving the temporal resolution by progressively reducing the number of views or the detector exposure time. Theoretical analysis of the effect of the data acquisition parameters on the detector signal to noise ratio, spatial reconstruction resolution, and temporal reconstruction resolution is also presented in this paper.

  17. Exercise Training positively modulates the Ectonucleotidase Enzymes in Lymphocytes of Metabolic Syndrome Patients.

    PubMed

    Martins, C C; Bagatini, M D; Cardoso, A M; Zanini, D; Abdalla, F H; Baldissarelli, J; Dalenogare, D P; Dos Santos, D L; Schetinger, M R C; Morsch, V M M

    2016-11-01

    In this study, we investigated the cardiovascular risk factors as well as ectonucleotidase activities in lymphocytes of metabolic syndrome (MetS) patients before and after an exercise intervention. 20 MetS patients, who performed regular concurrent exercise training for 30 weeks, 3 times/week, were studied. Anthropometric, biochemical, inflammatory and hepatic parameters and hydrolysis of adenine nucleotides and nucleoside in lymphocytes were collected from patients before and after 15 and 30 weeks of the exercise intervention as well as from participants of the control group. An increase in the hydrolysis of ATP and ADP, and a decrease in adenosine deamination in lymphocytes of MetS patients before the exercise intervention were observed (P<0.001). However, these alterations were reversed by exercise training after 30 weeks of intervention. Additionally, exercise training reduced the inflammatory and hepatic markers to baseline levels after 30 weeks of exercise. Our results clearly indicated alteration in ectonucleotidase enzymes in lymphocytes in the MetS, whereas regular exercise training had a protective effect on the enzymatic alterations and on inflammatory and hepatic parameters, especially if it is performed regularly and for a long period. © Georg Thieme Verlag KG Stuttgart · New York.

  18. The impact of Nordic walking training on the gait of the elderly.

    PubMed

    Ben Mansour, Khaireddine; Gorce, Philippe; Rezzoug, Nasser

    2018-03-27

    The purpose of the current study was to define the impact of regular practice of Nordic walking on the gait of the elderly. Specifically, we aimed to determine whether the gait characteristics of active elderly persons practicing Nordic walking are more similar to those of healthy adults than to those of the sedentary elderly. Comparison was made based on parameters computed from three inertial sensors during walking at a freely chosen velocity. Results showed differences in gait pattern in terms of the amplitude computed from acceleration and angular velocity at the lumbar region (root mean square), the distribution (skewness) quantified from the vertical and Euclidean norm of the lumbar acceleration, the complexity (sample entropy) of the mediolateral component of lumbar angular velocity and the Euclidean norm of the shank acceleration and angular velocity, the regularity of the lower limbs, the spatiotemporal parameters, and the variability (standard deviation) of stance and stride durations. These findings reveal that the gait pattern of the active elderly differs significantly from that of the sedentary elderly of the same age, while similarity was observed between the active elderly and healthy adults. These results suggest that regular physical activity such as Nordic walking may counteract the deterioration of gait quality that occurs with aging.

  19. Estimation of High-Dimensional Graphical Models Using Regularized Score Matching

    PubMed Central

    Lin, Lina; Drton, Mathias; Shojaie, Ali

    2017-01-01

    Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498

  20. Predictive cues for auditory stream formation in humans and monkeys.

    PubMed

    Aggelopoulos, Nikolaos C; Deike, Susann; Selezneva, Elena; Scheich, Henning; Brechmann, André; Brosch, Michael

    2017-12-18

    Auditory perception is improved when stimuli are predictable, and this effect is evident in a modulation of the activity of neurons in the auditory cortex as shown previously. Human listeners can better predict the presence of duration deviants embedded in stimulus streams with fixed interonset interval (isochrony) and repeated duration pattern (regularity), and neurons in the auditory cortex of macaque monkeys have stronger sustained responses in the 60-140 ms post-stimulus time window under these conditions. Subsequently, the question has arisen whether isochrony or regularity in the sensory input contributed to the enhancement of the neuronal and behavioural responses. Therefore, we varied the two factors isochrony and regularity independently and measured the ability of human subjects to detect deviants embedded in these sequences as well as measuring the responses of neurons the primary auditory cortex of macaque monkeys during presentations of the sequences. The performance of humans in detecting deviants was significantly increased by regularity. Isochrony enhanced detection only in the presence of the regularity cue. In monkeys, regularity increased the sustained component of neuronal tone responses in auditory cortex while isochrony had no consistent effect. Although both regularity and isochrony can be considered as parameters that would make a sequence of sounds more predictable, our results from the human and monkey experiments converge in that regularity has a greater influence on behavioural performance and neuronal responses. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  1. A Description of Similarity of Personality between Selected Groups of Television Viewers and Certain Television Roles Regularly Viewed by Them.

    ERIC Educational Resources Information Center

    Perrow, Maxwell Vermilyea

    The aim of this study was to provide quantitative measurement of the degree of identification, or lack of identification, that takes place between the viewers of television and the television roles they like and view regularly and those they dislike and view infrequently. Participants in the study were asked to keep a Television Viewing Diary and…

  2. Benefits of regular aerobic exercise for executive functioning in healthy populations.

    PubMed

    Guiney, Hayley; Machado, Liana

    2013-02-01

    Research suggests that regular aerobic exercise has the potential to improve executive functioning, even in healthy populations. The purpose of this review is to elucidate which components of executive functioning benefit from such exercise in healthy populations. In light of the developmental time course of executive functions, we consider separately children, young adults, and older adults. Data to date from studies of aging provide strong evidence of exercise-linked benefits related to task switching, selective attention, inhibition of prepotent responses, and working memory capacity; furthermore, cross-sectional fitness data suggest that working memory updating could potentially benefit as well. In young adults, working memory updating is the main executive function shown to benefit from regular exercise, but cross-sectional data further suggest that task-switching and post error performance may also benefit. In children, working memory capacity has been shown to benefit, and cross-sectional data suggest potential benefits for selective attention and inhibitory control. Although more research investigating exercise-related benefits for specific components of executive functioning is clearly needed in young adults and children, when considered across the age groups, ample evidence indicates that regular engagement in aerobic exercise can provide a simple means for healthy people to optimize a range of executive functions.

  3. 5 CFR 302.401 - Selection and appointment.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... reemployment, reemployment, or regular list on which candidates have not received numerical scores, an agency... candidates have received numerical scores, the agency must make its selection for each vacancy from not more... method, an agency is not required to— (1) Accord an applicant on its priority reemployment or...

  4. Grouped gene selection and multi-classification of acute leukemia via new regularized multinomial regression.

    PubMed

    Li, Juntao; Wang, Yanyan; Jiang, Tao; Xiao, Huimin; Song, Xuekun

    2018-05-09

    Diagnosing acute leukemia is the necessary prerequisite to treating it. Multi-classification of acute leukemia gene expression data, which comprise B-cell acute lymphoblastic leukemia (BALL), T-cell acute lymphoblastic leukemia (TALL), and acute myeloid leukemia (AML), is helpful for diagnosis. However, selecting cancer-causing genes is a challenging problem in performing multi-classification. In this paper, weighted gene co-expression networks are employed to divide the genes into groups. Based on these groups, a new regularized multinomial regression with an overlapping group lasso penalty (MROGL) is presented to simultaneously perform multi-classification and select gene groups. By applying this method to three-class acute leukemia data, the grouped genes which work synergistically are identified, and the overlapped genes shared by different groups are also highlighted. Moreover, MROGL outperforms five other methods in multi-classification accuracy. Copyright © 2017. Published by Elsevier B.V.

  5. [Ecological security of wastewater treatment processes: a review].

    PubMed

    Yang, Sai; Hua, Tao

    2013-05-01

    Though the regular indicators of wastewater after treatment can meet the discharge requirements and reuse standards, it doesn't mean the effluent is harmless. From the sustainable point of view, to ensure the ecological and human security, comprehensive toxicity should be considered when discharge standards are set up. In order to improve the ecological security of wastewater treatment processes, toxicity reduction should be considered when selecting and optimizing the treatment processes. This paper reviewed the researches on the ecological security of wastewater treatment processes, with the focus on the purposes of various treatment processes, including the processes for special wastewater treatment, wastewater reuse, and for the safety of receiving waters. Conventional biological treatment combined with advanced oxidation technologies can enhance the toxicity reduction on the base of pollutants removal, which is worthy of further study. For the process aimed at wastewater reuse, the integration of different process units can complement the advantages of both conventional pollutants removal and toxicity reduction. For the process aimed at ecological security of receiving waters, the emphasis should be put on the toxicity reduction optimization of process parameters and process unit selection. Some suggestions for the problems in the current research and future research directions were put forward.

  6. High Dietary Fructose Intake on Cardiovascular Disease Related Parameters in Growing Rats.

    PubMed

    Yoo, SooYeon; Ahn, Hyejin; Park, Yoo Kyoung

    2016-12-26

    The objective of this study was to determine the effects of a high-fructose diet on cardiovascular disease (CVD)-related parameters in growing rats. Three-week-old female Sprague Dawley rats were randomly assigned to four experimental groups: a regular diet group (RD: fed regular diet based on AIN-93G, n = 8), a high-fructose diet group (30Frc: fed regular diet with 30% fructose, n = 8), a high-fat diet group (45Fat: fed regular diet with 45 kcal% fat, n = 8), or a high-fructose plus high-fat diet group (30Frc + 45Fat: fed diet with 30% fructose and 45 kcal% fat, n = 8). After an eight-week treatment period, the body weight, total-fat weight, serum glucose, insulin, lipid profiles and pro-inflammatory cytokines, abdominal aortic wall thickness, and expressions of eNOS and ET-1 mRNA were analyzed. The results showed that total-fat weight was higher in the 30Frc, 45Fat, and 30Frc + 45Fat groups compared to the RD group (p < 0.05). Serum triglyceride (TG) levels were higher in the 30Frc group than in the other groups (p < 0.05). The abdominal aorta of the 30Frc, 45Fat, and 30Frc + 45Fat groups had higher wall thickness than that of the RD group (p < 0.05). Abdominal aortic eNOS mRNA level was decreased in the 30Frc, 45Fat, and 30Frc + 45Fat groups compared to the RD group (p < 0.05), and the 45Fat and 30Frc + 45Fat groups also had decreased mRNA expression of eNOS compared to the 30Frc group (p < 0.05). ET-1 mRNA level was higher in the 30Frc, 45Fat, and 30Frc + 45Fat groups than in the RD group (p < 0.05). Both high fructose consumption and high fat consumption in growing rats had similar negative effects on CVD-related parameters.

  7. Selected Occupational Topics

    ERIC Educational Resources Information Center

    Business Education Forum, 1973

    1973-01-01

    Research studies are classified in this regular section as marketing and distribution, typewriting, basic business and economics, shorthand and transcription, data processing, and the beginning teacher. (MU)

  8. Regularized inversion of controlled source audio-frequency magnetotelluric data in horizontally layered transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Zhou, Jianmei; Wang, Jianxun; Shang, Qinglong; Wang, Hongnian; Yin, Changchun

    2014-04-01

    We present an algorithm for inverting controlled source audio-frequency magnetotelluric (CSAMT) data in horizontally layered transversely isotropic (TI) media. The popular inversion method parameterizes the medium into a large number of fixed-thickness layers and reconstructs only the conductivities (e.g., Occam's inversion), which does not enable the recovery of sharp interfaces between layers. In this paper, we simultaneously reconstruct all the model parameters, including both the horizontal and vertical conductivities and the layer depths. Applying the perturbation principle and the dyadic Green's function in TI media, we derive analytic expressions of the Fréchet derivatives of the CSAMT responses with respect to all the model parameters in the form of Sommerfeld integrals. A regularized iterative inversion method is established to simultaneously reconstruct all the model parameters. Numerical results show that including the depths of the layer interfaces in the inversion can significantly improve the results. It can not only reconstruct the sharp interfaces between layers but also obtain conductivities close to the true values.
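    The iterative scheme underlying such an inversion can be sketched generically as a regularized (damped) Gauss-Newton loop, m_(k+1) = m_k + (J^T J + lambda I)^-1 [J^T (d - F(m_k)) - lambda (m_k - m_ref)]. In the fragment below a toy nonlinear forward model stands in for the layered-TI CSAMT response, a finite-difference Jacobian replaces the analytic Fréchet derivatives derived in the paper, and the model, data, and lambda are all invented, so the loop illustrates only the mechanics rather than the paper's algorithm.

    import numpy as np

    weights = np.array([1.0, 2.0, 3.0, 4.0])           # fixed layer weights (assumed)

    def forward(m, freqs):
        """Toy smooth 'response' of a few layer parameters (assumed stand-in)."""
        return np.array([np.sum(weights / (1.0 + (f * m) ** 2)) for f in freqs])

    def jacobian_fd(m, freqs, h=1e-6):
        """Finite-difference Jacobian (replaces analytic Frechet derivatives)."""
        J = np.zeros((len(freqs), len(m)))
        f0 = forward(m, freqs)
        for j in range(len(m)):
            mp = m.copy()
            mp[j] += h
            J[:, j] = (forward(mp, freqs) - f0) / h
        return J

    freqs = np.logspace(-1, 1, 20)
    m_true = np.array([0.2, 0.7, 1.5, 3.0])
    rng = np.random.default_rng(5)
    d_obs = forward(m_true, freqs) + 1e-4 * rng.standard_normal(freqs.size)

    m_ref = np.full(4, 0.5)                            # reference/starting model (assumed)
    m, lam = m_ref.copy(), 1e-2
    for it in range(25):
        r = d_obs - forward(m, freqs)
        J = jacobian_fd(m, freqs)
        dm = np.linalg.solve(J.T @ J + lam * np.eye(m.size),
                             J.T @ r - lam * (m - m_ref))
        m = m + np.clip(dm, -0.5, 0.5)                 # crude step control (assumed)
    print("recovered model:", m)
    print("final data misfit:", np.linalg.norm(d_obs - forward(m, freqs)))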

  9. Combined sphere-spheroid particle model for the retrieval of the microphysical aerosol parameters via regularized inversion of lidar data

    NASA Astrophysics Data System (ADS)

    Samaras, Stefanos; Böckmann, Christine; Nicolae, Doina

    2016-06-01

    In this work we propose a two-step advancement of the Mie spherical-particle model accounting for particle non-sphericity. First, a naturally two-dimensional (2D) generalized model (GM) is made, which further triggers analogous 2D re-definitions of microphysical parameters. We consider a spheroidal-particle approach where the size distribution is additionally dependent on aspect ratio. Second, we incorporate the notion of a sphere-spheroid particle mixture (PM) weighted by a non-sphericity percentage. The efficiency of these two models is investigated running synthetic data retrievals with two different regularization methods to account for the inherent instability of the inversion procedure. Our preliminary studies show that a retrieval with the PM model improves the fitting errors and the microphysical parameter retrieval and it has at least the same efficiency as the GM. While the general trend of the initial size distributions is captured in our numerical experiments, the reconstructions are subject to artifacts. Finally, our approach is applied to a measurement case yielding acceptable results.

  10. Hierarchical Bayesian modeling of ionospheric TEC disturbances as non-stationary processes

    NASA Astrophysics Data System (ADS)

    Seid, Abdu Mohammed; Berhane, Tesfahun; Roininen, Lassi; Nigussie, Melessew

    2018-03-01

    We model regular and irregular variation of ionospheric total electron content as stationary and non-stationary processes, respectively. We apply the developed method to the SCINDA GPS data set observed at Bahir Dar, Ethiopia (11.6°N, 37.4°E). We use hierarchical Bayesian inversion with Gaussian Markov random process priors, and we model the prior parameters in the hyperprior. We use Matérn priors via stochastic partial differential equations, and scaled Inv-χ² hyperpriors for the hyperparameters. For drawing posterior estimates, we use Markov chain Monte Carlo methods: Gibbs sampling and Metropolis-within-Gibbs for parameter and hyperparameter estimation, respectively. This also allows us to quantify the uncertainties of the model parameter estimates. We demonstrate the applicability of the proposed method using a synthetic test case. Finally, we apply the method to a real GPS data set, which we decompose into regular and irregular variation components. The results show that the approach can be used as an accurate ionospheric disturbance characterization technique that quantifies the total electron content variability with corresponding error uncertainties.
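    The hierarchical sampling strategy can be illustrated with a much-reduced toy model: the observations are a smooth (regular) component plus noise (irregular component), the smooth component carries a first-order Gaussian Markov random-field prior, and the noise and smoothness precisions are given conjugate Gamma hyperpriors so that every conditional distribution is available in closed form for Gibbs sampling. This is only a sketch: the paper's Matérn/SPDE priors, scaled Inv-χ² hyperpriors, and Metropolis-within-Gibbs updates are not reproduced, and the synthetic series and hyperparameter values below are invented.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 100
    t = np.linspace(0, 1, n)
    signal = 10 + 2 * np.sin(2 * np.pi * t)                 # regular (smooth) variation
    y = signal + 0.3 * rng.standard_normal(n)               # plus irregular variation/noise

    D = np.diff(np.eye(n), axis=0)                          # first-difference operator
    DtD = D.T @ D
    a = b = c = d = 1e-3                                    # vague Gamma hyperpriors (assumed)

    tau, lam = 1.0, 1.0                                     # noise / smoothness precisions
    x_samples, tau_samples = [], []
    for it in range(1500):
        # x | tau, lam : Gaussian with precision Q = tau*I + lam*D^T D
        Q = tau * np.eye(n) + lam * DtD
        L = np.linalg.cholesky(Q)
        mean = np.linalg.solve(Q, tau * y)
        x = mean + np.linalg.solve(L.T, rng.standard_normal(n))
        # tau | x  and  lam | x : conjugate Gamma updates
        tau = rng.gamma(a + n / 2, 1.0 / (b + 0.5 * np.sum((y - x) ** 2)))
        lam = rng.gamma(c + (n - 1) / 2, 1.0 / (d + 0.5 * np.sum((D @ x) ** 2)))
        if it >= 500:                                       # discard burn-in
            x_samples.append(x)
            tau_samples.append(tau)

    post_mean = np.mean(x_samples, axis=0)
    print("posterior noise std ~", 1.0 / np.sqrt(np.mean(tau_samples)), "(true 0.3)")
    print("rms error of the smooth estimate:",
          np.sqrt(np.mean((post_mean - signal) ** 2)))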

  11. Substance P modulation of TRPC3/7 channels improves respiratory rhythm regularity and ICAN-dependent pacemaker activity

    PubMed Central

    Ben-Mabrouk, Faiza; Tryba, Andrew Kieran

    2011-01-01

    Neuromodulators, such as Substance P (SubP), play an important role in modulating many rhythmic activities driven by central pattern generators (e.g., locomotion, respiration). However, the mechanism by which SubP enhances breathing regularity has not been determined. Here, we used mouse brainstem slices containing the pre-Bötzinger Complex (Pre-BötC) to demonstrate, for the first time, that SubP activates transient receptor protein canonical (TRPC) channels to enhance respiratory rhythm regularity. Moreover, SubP enhancement of network regularity is accomplished via selective enhancement of ICAN-dependent intrinsic bursting properties. In contrast to INaP-dependent pacemakers, ICAN-dependent pacemaker bursting activity is TRPC dependent. Western blots reveal TRPC3 and TRPC7 channels are expressed in rhythmically active ventral respiratory group (VRG) island preparations. Taken together, these data suggest that SubP-mediated activation of TRPC3/7 channels underlies rhythmic ICAN-dependent pacemaker activity and enhances the regularity of respiratory rhythm activity. PMID:20345918

  12. Substance P modulation of TRPC3/7 channels improves respiratory rhythm regularity and ICAN-dependent pacemaker activity.

    PubMed

    Ben-Mabrouk, Faiza; Tryba, Andrew K

    2010-04-01

    Neuromodulators, such as substance P (SubP), play an important role in modulating many rhythmic activities driven by central pattern generators (e.g. locomotion, respiration). However, the mechanism by which SubP enhances breathing regularity has not been determined. Here, we used mouse brainstem slices containing the pre-Bötzinger complex to demonstrate, for the first time, that SubP activates transient receptor protein canonical (TRPC) channels to enhance respiratory rhythm regularity. Moreover, SubP enhancement of network regularity is accomplished via selective enhancement of ICAN (inward non-specific cation current)-dependent intrinsic bursting properties. In contrast to INaP (persistent sodium current)-dependent pacemakers, ICAN-dependent pacemaker bursting activity is TRPC-dependent. Western Blots reveal TRPC3 and TRPC7 channels are expressed in rhythmically active ventral respiratory group island preparations. Taken together, these data suggest that SubP-mediated activation of TRPC3/7 channels underlies rhythmic ICAN-dependent pacemaker activity and enhances the regularity of respiratory rhythm activity.

  13. Comparison of three-dimensional optical coherence tomography and combining a rotating Scheimpflug camera with a Placido topography system for forme fruste keratoconus diagnosis.

    PubMed

    Fukuda, Shinichi; Beheregaray, Simone; Hoshi, Sujin; Yamanari, Masahiro; Lim, Yiheng; Hiraoka, Takahiro; Yasuno, Yoshiaki; Oshika, Tetsuro

    2013-12-01

    To evaluate the ability of parameters measured by three-dimensional (3D) corneal and anterior segment optical coherence tomography (CAS-OCT) and a rotating Scheimpflug camera combined with a Placido topography system (Scheimpflug camera with topography) to discriminate between normal eyes and forme fruste keratoconus. Forty-eight eyes of 48 patients with keratoconus, 25 eyes of 25 patients with forme fruste keratoconus and 128 eyes of 128 normal subjects were evaluated. Anterior and posterior keratometric parameters (steep K, flat K, average K), elevation, topographic parameters, regular and irregular astigmatism (spherical, asymmetry, regular and higher-order astigmatism) and five pachymetric parameters (minimum, minimum-median, inferior-superior, inferotemporal-superonasal, vertical thinnest location of the cornea) were measured using 3D CAS-OCT and a Scheimpflug camera with topography. The area under the receiver operating characteristic curve (AUROC) was calculated to assess the discrimination ability. Compatibility and repeatability of both devices were evaluated. Posterior surface elevation showed higher AUROC values in discrimination analysis of forme fruste keratoconus using both devices. Both instruments showed significant linear correlations (p<0.05, Pearson's correlation coefficient) and good repeatability (ICCs: 0.885-0.999) for normal and forme fruste keratoconus. Posterior elevation was the best discrimination parameter for forme fruste keratoconus. Both instruments presented good correlation and repeatability for this condition.

  14. Fluctuating survival selection explains variation in avian group size

    PubMed Central

    Brown, Charles R.; Brown, Mary Bomberger; Roche, Erin A.; O’Brien, Valerie A.; Page, Catherine E.

    2016-01-01

    Most animal groups vary extensively in size. Because individuals in certain sizes of groups often have higher apparent fitness than those in other groups, why wide group size variation persists in most populations remains unexplained. We used a 30-y mark–recapture study of colonially breeding cliff swallows (Petrochelidon pyrrhonota) to show that the survival advantages of different colony sizes fluctuated among years. Colony size was under both stabilizing and directional selection in different years, and reversals in the sign of directional selection regularly occurred. Directional selection was predicted in part by drought conditions: birds in larger colonies tended to be favored in cooler and wetter years, and birds in smaller colonies in hotter and drier years. Oscillating selection on colony size likely reflected annual differences in food availability and the consequent importance of information transfer, and/or the level of ectoparasitism, with the net benefit of sociality varying under these different conditions. Averaged across years, there was no net directional change in selection on colony size. The wide range in cliff swallow group size is probably maintained by fluctuating survival selection and represents the first case, to our knowledge, in which fitness advantages of different group sizes regularly oscillate over time in a natural vertebrate population. PMID:27091998

  15. Fluctuating survival selection explains variation in avian group size.

    PubMed

    Brown, Charles R; Brown, Mary Bomberger; Roche, Erin A; O'Brien, Valerie A; Page, Catherine E

    2016-05-03

    Most animal groups vary extensively in size. Because individuals in certain sizes of groups often have higher apparent fitness than those in other groups, why wide group size variation persists in most populations remains unexplained. We used a 30-y mark-recapture study of colonially breeding cliff swallows (Petrochelidon pyrrhonota) to show that the survival advantages of different colony sizes fluctuated among years. Colony size was under both stabilizing and directional selection in different years, and reversals in the sign of directional selection regularly occurred. Directional selection was predicted in part by drought conditions: birds in larger colonies tended to be favored in cooler and wetter years, and birds in smaller colonies in hotter and drier years. Oscillating selection on colony size likely reflected annual differences in food availability and the consequent importance of information transfer, and/or the level of ectoparasitism, with the net benefit of sociality varying under these different conditions. Averaged across years, there was no net directional change in selection on colony size. The wide range in cliff swallow group size is probably maintained by fluctuating survival selection and represents the first case, to our knowledge, in which fitness advantages of different group sizes regularly oscillate over time in a natural vertebrate population.

  16. Fluctuating survival selection explains variation in avian group size

    USGS Publications Warehouse

    Brown, Charles B.; Brown, Mary Bomberger; Roche, Erin A.; O'brien, Valerie A; Page, Catherine E.

    2016-01-01

    Most animal groups vary extensively in size. Because individuals in certain sizes of groups often have higher apparent fitness than those in other groups, why wide group size variation persists in most populations remains unexplained. We used a 30-y mark–recapture study of colonially breeding cliff swallows (Petrochelidon pyrrhonota) to show that the survival advantages of different colony sizes fluctuated among years. Colony size was under both stabilizing and directional selection in different years, and reversals in the sign of directional selection regularly occurred. Directional selection was predicted in part by drought conditions: birds in larger colonies tended to be favored in cooler and wetter years, and birds in smaller colonies in hotter and drier years. Oscillating selection on colony size likely reflected annual differences in food availability and the consequent importance of information transfer, and/or the level of ectoparasitism, with the net benefit of sociality varying under these different conditions. Averaged across years, there was no net directional change in selection on colony size. The wide range in cliff swallow group size is probably maintained by fluctuating survival selection and represents the first case, to our knowledge, in which fitness advantages of different group sizes regularly oscillate over time in a natural vertebrate population.

  17. Selecting the Best Furniture for Your Classroom.

    ERIC Educational Resources Information Center

    Troup, Wilson

    2002-01-01

    Offers advice on furnishing a technology classroom, asserting that the overriding selection criterion must be quality, defined as furniture that functions smoothly and looks attractive with regular maintenance for up to two decades. Addresses eye appeal, versatility versus performance, and durability. A sidebar also discusses ergonomics and…

  18. Characterizing Reinforcement Learning Methods through Parameterized Learning Problems

    DTIC Science & Technology

    2011-06-03

    …extraneous. The agent could potentially adapt these representational aspects by applying methods from feature selection (Kolter and Ng, 2009; Petrik et al.). Cited: Kolter, J. Z. and Ng, A. Y. (2009). Regularization and feature selection in least-squares temporal difference learning. AAAI Press, pp. 611–616.

  19. Effects of 12-week supervised treadmill training on spatio-temporal gait parameters in patients with claudication.

    PubMed

    Konik, Anita; Kuklewicz, Stanisław; Rosłoniec, Ewelina; Zając, Marcin; Spannbauer, Anna; Nowobilski, Roman; Mika, Piotr

    2016-01-01

    The purpose of the study was to evaluate selected temporal and spatial gait parameters in patients with intermittent claudication after completion of a 12-week supervised treadmill walking training programme. The study included 36 patients (26 males and 10 females) with intermittent claudication, aged 64 years on average (SD 7.7). All patients were tested on a treadmill (Gait Trainer, Biodex). Before the programme and after its completion, the following gait parameters were measured: step length (cm), step cycle (cycles/s), leg support time (%), and coefficient of step variation (%), together with pain-free walking time (PFWT) and maximal walking time (MWT). Training was conducted in accordance with the current TASC II guidelines. After 12 weeks of training, patients showed a significant change in gait biomechanics, consisting of a decreased step-cycle frequency (p < 0.05) and an extended step length (p < 0.05). PFWT increased by 96% (p < 0.05) and MWT increased by 100% (p < 0.05). After completing the training, patients' gait was more regular, as expressed by a statistically significant decrease of the coefficient of variation (p < 0.05) for both legs. No statistically significant relation was observed between the post-training improvement of PFWT and MWT and the increased step length or decreased step-cycle frequency (p > 0.05). A twelve-week treadmill walking training programme may lead to significant improvement of temporal and spatial gait parameters, as well as of pain-free and maximum walking times, in patients with intermittent claudication.

  20. A PET reconstruction formulation that enforces non-negativity in projection space for bias reduction in Y-90 imaging

    NASA Astrophysics Data System (ADS)

    Lim, Hongki; Dewaraja, Yuni K.; Fessler, Jeffrey A.

    2018-02-01

    Most existing PET image reconstruction methods impose a nonnegativity constraint in the image domain that is natural physically, but can lead to biased reconstructions. This bias is particularly problematic for Y-90 PET because of the low probability of positron production and the high random coincidence fraction. This paper investigates a new PET reconstruction formulation that enforces nonnegativity of the projections instead of the voxel values. This formulation allows some negative voxel values, thereby potentially reducing bias. Unlike the previously reported NEG-ML approach, which modifies the Poisson log-likelihood to allow negative values, the new formulation retains the classical Poisson statistical model. To relax the non-negativity constraint embedded in the standard methods for PET reconstruction, we used an alternating direction method of multipliers (ADMM). Because the choice of ADMM parameters can greatly influence the convergence rate, we applied an automatic parameter selection method to improve the convergence speed. We investigated the methods using lung-to-liver slices of the XCAT phantom. We simulated low true-coincidence count rates with high random fractions, corresponding to typical values from patient imaging in Y-90 microsphere radioembolization. We compared our new method with standard reconstruction algorithms, with NEG-ML, and with a regularized version thereof. Both our new method and NEG-ML allow more accurate quantification in all volumes of interest while yielding lower noise than the standard method. The performance of NEG-ML can degrade when its user-defined parameter is tuned poorly, while the proposed algorithm is robust to any count level without requiring parameter tuning.

  1. Headgroup interactions and ion flotation efficiency in mixtures of a chelating surfactant, different foaming agents, and divalent metal ions.

    PubMed

    Svanedal, Ida; Boija, Susanne; Norgren, Magnus; Edlund, Håkan

    2014-06-10

    The correlation between interaction parameters and ion flotation efficiency in mixtures of chelating surfactant metal complexes and different foaming agents was investigated. We have recently shown that chelating surfactant 2-dodecyldiethylenetriaminepentaacetic acid (4-C12-DTPA) forms strong coordination complexes with divalent metal ions, and this can be utilized in ion flotation. Interaction parameters for mixed micelles and mixed monolayer formation for Mg(2+) and Ni(2+) complexes with the chelating surfactant 4-C12-DTPA and different foaming agents were calculated by Rubingh's regular solution theory. Parameters for the calculations were extracted from surface tension measurements and NMR diffusometry. The effects of metal ion coordination on the interactions between 4-C12-DTPA and the foaming agents could be linked to a previously established difference in coordination chemistry between the examined metal ions. As can be expected from mixtures of amphoteric surfactants, the interactions were strongly pH-dependent. Strong correlation was found between interaction parameter β(σ) for mixed monolayer formation and the phase-transfer efficiency of Ni(2+) complexes with 4-C12-DTPA during flotation in a customized flotation cell. In a mixture of Cu(2+) and Zn(2+), the significant difference in conditional stability constants (log K) between the metal complexes was utilized to selectively recover the metal complex with the highest log K (Cu(2+)) by ion flotation. Flotation experiments in an excess concentration of metal ions confirmed the coordination of more than one metal ion to the headgroup of 4-C12-DTPA.

  2. A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2009-01-01

    A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
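
    The abstract describes the selection strategy only conceptually. As a hedged illustration of the underlying idea, choosing the sensor subset that minimizes estimation error when there are more unknowns than measurements, the sketch below scores each candidate subset by the trace of the Bayesian posterior covariance of the health parameters in a linear-Gaussian model; all matrices and dimensions are invented, and this is not the NASA methodology itself.

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(1)

    n_health, n_sensors, n_select = 8, 6, 4           # more unknowns than selected sensors
    H = rng.standard_normal((n_sensors, n_health))    # sensitivity of each sensor to the health parameters
    P0 = np.eye(n_health)                             # prior covariance of the health parameters
    R = 0.05 * np.eye(n_sensors)                      # sensor noise covariance

    def posterior_trace(subset):
        """Trace of the posterior covariance when only the given sensors are used."""
        Hs = H[list(subset), :]
        Rs = R[np.ix_(subset, subset)]
        post_cov = np.linalg.inv(np.linalg.inv(P0) + Hs.T @ np.linalg.solve(Rs, Hs))
        return np.trace(post_cov)

    best = min(combinations(range(n_sensors), n_select), key=posterior_trace)
    print("best sensor subset:", best, "score:", round(posterior_trace(best), 3))
    ```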

  3. [Evaluation of formal elements of Spanish pediatrics journals].

    PubMed

    Aleixandre-Benavent, R; González de Dios, J; Valderrama-Zurián, F J; Bolaños Pizarro, M; Valderrama-Zurián, J C

    2007-03-01

    Standardization of scientific journals is indispensable for accurate transmission of knowledge, since it guarantees the universality and reproducibility of research. The objective of this study was to evaluate the formal elements of Spanish pediatrics journals. In 2005, we studied the characteristics of Spanish biomedical journals with special emphasis on Spanish pediatrics journals. The form used for the selection of journals for inclusion in the database Indice Médico Español (IME) was employed to evaluate 65 distinct characteristics in each journal. The parameters were grouped in the following five categories: journal presentation, presentation of the articles, scientific and editorial committees, content characteristics, and dissemination parameters. The journals with the highest overall scores were Anales de Pediatría (63 points out of a maximum of 82), followed by Pediatría de Atención Primaria (53 points), Acta Pediátrica Española and Cirugía Pediátrica (55 points each), Pediatrika (53 points), and Revista Española de Pediatría (48 points). The score obtained by Anales de Pediatría places this journal in the top 10 Spanish journals included in IME. Spanish pediatrics journals meet most of the formal elements required of biomedical journals, although some aspects could be improved, such as deficiencies in the frequency and regularity of publication, mention of the dates of manuscript receipt and acceptance, the lack of a clear description of the editorial process of manuscript selection and peer review, the absence of committee members' institutional affiliations, and the absence of articles by non-Spanish authors.

  4. Nonminimal coupling for the gravitational and electromagnetic fields: Black hole solutions and solitons

    NASA Astrophysics Data System (ADS)

    Balakin, Alexander B.; Bochkarev, Vladimir V.; Lemos, José P. S.

    2008-04-01

    Using a Lagrangian formalism, a three-parameter nonminimal Einstein-Maxwell theory is established. The three parameters q1, q2, and q3 characterize the cross-terms in the Lagrangian, between the Maxwell field and terms linear in the Ricci scalar, Ricci tensor, and Riemann tensor, respectively. Static spherically symmetric equations are set up, and the three parameters are interrelated and chosen so that the system effectively reduces to one parameter only, q. Specific black hole and other types of one-parameter solutions are studied. First, as a preparation, the Reissner-Nordström solution, with q1=q2=q3=0, is displayed. Then, we search for solutions in which the electric field is regular everywhere as well as asymptotically Coulombian, and the metric potentials are regular at the center as well as asymptotically flat. In this context, the one-parameter model with q1≡-q, q2=2q, q3=-q, called the Gauss-Bonnet model, is analyzed in detail. The study is done through the solution of the Abel equation (the key equation), and the dynamical system associated with the model. There is extra focus on an exact solution of the model and its critical properties. Finally, an exactly integrable one-parameter model, with q1≡-q, q2=q, q3=0, is also considered in detail. A special submodel of this one-parameter model, in which the Fibonacci number appears naturally, is shown, and the corresponding exact solution is presented. Interestingly enough, it is a soliton of the theory, the Fibonacci soliton, without horizons and with a mild conical singularity at the center.

  5. Stark broadening parameter regularities and interpolation and critical evaluation of data for CP star atmospheres research: Stark line shifts

    NASA Astrophysics Data System (ADS)

    Dimitrijevic, M. S.; Tankosic, D.

    1998-04-01

    In order to find out if regularities and systematic trends found to be apparent among experimental Stark line shifts allow the accurate interpolation of new data and critical evaluation of experimental results, the exceptions to the established regularities are analysed on the basis of critical reviews of experimental data, and reasons for such exceptions are discussed. We found that such exceptions are mostly due to the situations when: (i) the energy gap between atomic energy levels within a supermultiplet is equal or comparable to the energy gap to the nearest perturbing levels; (ii) the most important perturbing level is embedded between the energy levels of the supermultiplet; (iii) the forbidden transitions have influence on Stark line shifts.

  6. Nonclassical states of light with a smooth P function

    NASA Astrophysics Data System (ADS)

    Damanet, François; Kübler, Jonas; Martin, John; Braun, Daniel

    2018-02-01

    There is a common understanding in quantum optics that nonclassical states of light are states that do not have a positive semidefinite and sufficiently regular Glauber-Sudarshan P function. Almost all known nonclassical states have P functions that are highly irregular, which makes working with them difficult and direct experimental reconstruction impossible. Here we introduce classes of nonclassical states with regular, non-positive-definite P functions. They are constructed by "puncturing" regular smooth positive P functions with negative Dirac-δ peaks or other sufficiently narrow smooth negative functions. We determine the parameter ranges for which such punctures are possible without losing the positivity of the state, the regimes yielding antibunching of light, and the expressions of the Wigner functions for all investigated punctured states. Finally, we propose some possible experimental realizations of such states.
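
    A compact restatement of the construction just described (written here from the verbal description, not quoted from the paper): a regular positive P function P₀ is punctured with negative δ peaks,

    ```latex
    P_{\mathrm{punct}}(\alpha) \;=\; \mathcal{N}\!\left[\,P_0(\alpha) \;-\; \sum_{i} \lambda_i\, \delta^{(2)}(\alpha-\alpha_i)\right],
    \qquad \lambda_i > 0,
    ```

    where the constant 𝒩 normalizes the state and the puncture weights λᵢ must remain small enough that ∫ P_punct(α) |α⟩⟨α| d²α is still a positive semidefinite density operator, which is the parameter-range question the paper addresses.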

  7. Propagation of spiking regularity and double coherence resonance in feedforward networks.

    PubMed

    Men, Cong; Wang, Jiang; Qin, Ying-Mei; Deng, Bin; Tsang, Kai-Ming; Chan, Wai-Lok

    2012-03-01

    We systematically investigate the propagation of spiking regularity in noisy feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. It is found that noise can modulate the transmission of firing rate and spiking regularity. Noise-induced synchronization and synfire-enhanced coherence resonance are also observed when signals propagate in noisy multilayer networks. Interestingly, double coherence resonance (DCR), with respect to the combination of synaptic input correlation and noise intensity, is eventually attained after layer-by-layer processing in FFNs. Furthermore, inhibitory connections also play essential roles in shaping DCR phenomena. Several properties of the neuronal network, such as noise intensity, correlation of synaptic inputs, and inhibitory connections, can serve as control parameters in modulating both rate coding and the order of temporal coding.

  8. Regular black holes in Einstein-Gauss-Bonnet gravity

    NASA Astrophysics Data System (ADS)

    Ghosh, Sushant G.; Singh, Dharm Veer; Maharaj, Sunil D.

    2018-05-01

    Einstein-Gauss-Bonnet theory, a natural generalization of general relativity to a higher dimension, admits a static spherically symmetric black hole which was obtained by Boulware and Deser. This black hole is similar to its general relativity counterpart with a curvature singularity at r = 0. We present an exact 5D regular black hole metric, with parameter k > 0, that interpolates between the Boulware-Deser black hole (k = 0) and the Wiltshire charged black hole (r ≫ k). Owing to the appearance of the exponential correction factor e^(-k/r²), responsible for regularizing the metric, the thermodynamical quantities are modified, and it is demonstrated that the Hawking-Page phase transition is achievable. The heat capacity diverges at a critical radius r = r_C, where incidentally the temperature is maximum. Thus, we have a regular black hole with Cauchy and event horizons, and evaporation leads to a thermodynamically stable double-horizon black hole remnant with vanishing temperature. The entropy does not satisfy the usual exact horizon area result of general relativity.

  9. Applicability of regular particle shapes in light scattering calculations for atmospheric ice particles.

    PubMed

    Macke, A; Mishchenko, M I

    1996-07-20

    We ascertain the usefulness of simple ice particle geometries for modeling the intensity distribution of light scattering by atmospheric ice particles. To this end, similarities and differences in light scattering by axis-equivalent, regular and distorted hexagonal cylindric, ellipsoidal, and circular cylindric ice particles are reported. All the results pertain to particles with sizes much larger than a wavelength and are based on a geometrical optics approximation. At a nonabsorbing wavelength of 0.55 µm, ellipsoids (circular cylinders) have a much (slightly) larger asymmetry parameter g than regular hexagonal cylinders. However, our computations show that only random distortion of the crystal shape leads to a closer agreement with g values as small as 0.7 as derived from some remote-sensing data analysis. This may suggest that scattering by regular particle shapes is not necessarily representative of real atmospheric ice crystals at nonabsorbing wavelengths. On the other hand, if real ice particles happen to be hexagonal, they may be approximated by circular cylinders at absorbing wavelengths.

  10. Patch-based image reconstruction for PET using prior-image derived dictionaries

    NASA Astrophysics Data System (ADS)

    Tahaei, Marzieh S.; Reader, Andrew J.

    2016-09-01

    In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
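
    The coefficient estimation step is only summarized above. The snippet below is a generic sketch, with invented dimensions and not the authors' implementation, of an MLEM-style multiplicative update for nonnegative coefficients c when the image is re-parameterized as x = B c, with patch-derived basis vectors B and a system matrix A.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    n_vox, n_basis, n_bins = 400, 50, 300
    A = rng.random((n_bins, n_vox)) * 0.01      # toy system (projection) matrix
    B = rng.random((n_vox, n_basis))            # toy patch-based basis vectors (nonnegative)
    c_true = rng.random(n_basis)
    y = rng.poisson(A @ (B @ c_true) + 0.1)     # noisy counts with a small known background

    c = np.ones(n_basis)
    AB = A @ B                                  # combined operator acting on the coefficients
    sens = AB.sum(axis=0)                       # sensitivity term (A B)^T 1
    for _ in range(100):
        ybar = AB @ c + 0.1                     # expected counts (background assumed known)
        c *= (AB.T @ (y / ybar)) / sens         # multiplicative MLEM update keeps c >= 0

    x = B @ c                                   # reconstructed image in voxel space
    print("relative coefficient error:", np.linalg.norm(c - c_true) / np.linalg.norm(c_true))
    ```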

  11. An analytical method for the inverse Cauchy problem of Lame equation in a rectangle

    NASA Astrophysics Data System (ADS)

    Grigor’ev, Yu

    2018-04-01

    In this paper, we present an analytical computational method for the inverse Cauchy problem of the Lamé equation in elasticity theory. A rectangular domain is frequently used in engineering structures, and we consider only the analytical solution in a two-dimensional rectangle, wherein a missing boundary condition is recovered from the full measurement of stresses and displacements on an accessible boundary. The essence of the method consists in solving three independent Cauchy problems for the Laplace and Poisson equations. For each of them, the Fourier series is used to formulate a first-kind Fredholm integral equation for the unknown function of data. Then, we use a Lavrentiev regularization method, and the termwise separable property of the kernel function allows us to obtain a closed-form regularized solution. As a result, for the displacement components, we obtain solutions in the form of a sum of series with three regularization parameters. The uniform convergence and error estimation of the regularized solutions are proved.
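
    For readers unfamiliar with the regularization named above, the standard textbook form (not a formula quoted from the paper) replaces the ill-posed first-kind Fredholm equation ∫ K(s,t) f(t) dt = g(s) by the second-kind equation

    ```latex
    \alpha f_\alpha(s) \;+\; \int_a^b K(s,t)\, f_\alpha(t)\, \mathrm{d}t \;=\; g(s), \qquad \alpha > 0,
    ```

    whose solution f_α depends continuously on the data and, under suitable conditions on K, tends to the exact solution as α → 0; the three regularization parameters mentioned above play the role of α for the three sub-problems.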

  12. On constraining pilot point calibration with regularization in PEST

    USGS Publications Warehouse

    Fienen, M.N.; Muffels, C.T.; Hunt, R.J.

    2009-01-01

    Ground water model calibration has made great advances in recent years with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.

  13. Quantitative evaluation of first-order retardation corrections to the quarkonium spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brambilla, N.; Prosperi, G.M.

    1992-08-01

    We evaluate numerically first-order retardation corrections for some charmonium and bottomonium masses under the usual assumption of a Bethe-Salpeter purely scalar confinement kernel. The result depends strictly on the use of an additional effective potential to express the corrections (rather than to resort to Kato perturbation theory) and on an appropriate regularization prescription. The kernel has been chosen in order to reproduce in the instantaneous approximation a semirelativistic potential suggested by the Wilson loop method. The calculations are performed for two sets of parameters determined by fits in potential theory. The corrections turn out to be typically of the order of a few hundred MeV and depend on an additional scale parameter introduced in the regularization. A conjecture existing in the literature on the origin of the constant term in the potential is also discussed.

  14. Retrieving cloudy atmosphere parameters from RPG-HATPRO radiometer data

    NASA Astrophysics Data System (ADS)

    Kostsov, V. S.

    2015-03-01

    An algorithm for simultaneously determining both tropospheric temperature and humidity profiles and cloud liquid water content from ground-based measurements of microwave radiation is presented. A special feature of this algorithm is that it combines different types of measurements and different a priori information on the sought parameters. The features of its use in processing RPG-HATPRO radiometer data obtained in the course of atmospheric remote sensing experiments carried out by specialists from the Faculty of Physics of St. Petersburg State University are discussed. The results of a comparison of both temperature and humidity profiles obtained using a ground-based microwave remote sensing method with those obtained from radiosonde data are analyzed. It is shown that this combined algorithm is comparable (in accuracy) to the classical method of statistical regularization in determining temperature profiles; however, this algorithm demonstrates better accuracy (when compared to the method of statistical regularization) in determining humidity profiles.

  15. Regular dorsal dimples and damaged mites of Varroa destructor in some Iranian honey bees (Apis mellifera).

    PubMed

    Ardestani, Masoud M; Ebadi, Rahim; Tahmasbi, Gholamhossein

    2011-07-01

    The frequency of damaged Varroa destructor Anderson and Trueman (Mesostigmata: Varroidae) found on the bottom board of hives of the honey bee, Apis mellifera L. (Hymenoptera: Apidae) has been used as an indicator of the degree of tolerance or resistance of honey bee colonies against mites. However, it is not clear that this measure is adequate. These injuries should be separated from regular dorsal dimples that have a developmental origin. To investigate damage to Varroa mites and regular dorsal dimples, 32 honey bee (A. mellifera) colonies were selected from four Iranian provinces: Isfahan, Markazi, Qazvin, and Tehran. These colonies were part of the National Honey bee Breeding Program that resulted in province-specific races. In April, Varroa mites were collected from heavily infested colonies and used to infest the 32 experimental colonies. In August, 20 of these colonies were selected (five colonies from each province). Adult bees from these colonies were placed in cages and, after introducing mites, damaged mites were collected from each cage every day. The average percentage of injured mites ranged from 0.6 to 3.0% in the four provinces. The results did not show any statistical differences between the colonies within provinces for injuries to mites, but there were some differences among province-specific lines. Two kinds of injuries to the mites were observed: injuries to legs and pedipalps, and injuries to other parts of the body. There were also some regular dorsal dimples on the dorsal idiosoma of the mites that were placed in categories separate from mites damaged by bees. This type of classification helps to identify damage to mites and to distinguish it from symptoms of developmental origin, and may provide criteria for selecting bees tolerant or resistant to this mite.

  16. Parrondo's games based on complex networks and the paradoxical effect.

    PubMed

    Ye, Ye; Wang, Lu; Xie, Nenggang

    2013-01-01

    Parrondo's games were first constructed using a simple coin-tossing scenario, which demonstrates the following paradoxical situation: in sequences of games, a winning expectation may be obtained by playing the games in a random order, although each game (game A or game B) in the sequence may result in losing when played individually. Existing Parrondo's games based on the spatial niche (the neighboring environment) have been applied to regular networks. The neighbors of each node are identical in regular graphs, whereas they differ in complex networks. Here, Parrondo's model based on complex networks is proposed, and a structure of game B applicable to arbitrary topologies is constructed. The results confirm that Parrondo's paradox occurs. Moreover, the size of the region of the parameter space that elicits Parrondo's paradox depends on the heterogeneity of the degree distributions of the networks: higher heterogeneity yields a larger region of the parameter space in which the strong paradox occurs. In addition, we use scale-free networks to show that the network size has no significant influence on the region of the parameter space where the strong or weak Parrondo's paradox occurs. The region of the parameter space where the strong Parrondo's paradox occurs shrinks slightly as the average degree of the network increases.
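
    The record studies a network-based variant; the classic capital-dependent construction of Harmer and Abbott (not the network model of this paper) already exhibits the paradox and can be checked in a few lines:

    ```python
    import random

    random.seed(42)
    EPS = 0.005  # small losing bias applied to both games

    def play_A(capital):
        # fair coin biased slightly toward losing
        return 1 if random.random() < 0.5 - EPS else -1

    def play_B(capital):
        # branch probability depends on whether the capital is a multiple of 3
        p = (0.10 - EPS) if capital % 3 == 0 else (0.75 - EPS)
        return 1 if random.random() < p else -1

    def average_gain(strategy, rounds=100_000):
        capital = 0
        for _ in range(rounds):
            capital += strategy(capital)
        return capital / rounds

    random_mix = lambda c: play_A(c) if random.random() < 0.5 else play_B(c)
    print("A alone    :", average_gain(play_A))      # slightly losing
    print("B alone    :", average_gain(play_B))      # slightly losing
    print("random A/B :", average_gain(random_mix))  # typically winning -- the paradox
    ```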

  17. [Research and Design of a System for Detecting Automated External Defbrillator Performance Parameters].

    PubMed

    Wang, Kewu; Xiao, Shengxiang; Jiang, Lina; Hu, Jingkai

    2017-09-30

    To regularly verify the performance parameters of an automated external defibrillator (AED) and to make sure the instrument is safe before use, a system for detecting AED performance parameters was researched and designed. Based on a study of the characteristics of these performance parameters, and combining the stability and high speed of the STM32 with PWM modulation control, the system produces a variety of normal and abnormal ECG signals through digital sampling methods. The hardware and software design was completed and a prototype was built. This system can accurately detect an automated external defibrillator's discharge energy, synchronous defibrillation time, charging time, and other key performance parameters.

  18. Automatic parameter selection for feature-based multi-sensor image registration

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan

    2006-05-01

    Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
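
    A schematic sketch of the sweep-and-score idea described above: generate one feature map per parameter combination, estimate ground truth from the ensemble, compute one ROC point per combination, and keep the combination closest to the ideal corner. The detector, parameter grid, and majority-vote truth estimator below are placeholders, not the authors' implementation of the Yitzhaky-Peli procedure.

    ```python
    import numpy as np
    from itertools import product

    def roc_point(detected, truth):
        """True- and false-positive rates of a binary feature map against (estimated) truth."""
        tp = np.logical_and(detected, truth).sum()
        fp = np.logical_and(detected, ~truth).sum()
        tpr = tp / max(truth.sum(), 1)
        fpr = fp / max((~truth).sum(), 1)
        return fpr, tpr

    def select_parameters(image, detector, param_grid, estimate_truth):
        """Sweep parameter combinations and score each ROC point by distance to (0, 1)."""
        maps = {p: detector(image, *p) for p in param_grid}
        truth = estimate_truth(list(maps.values()))      # e.g. pixel-wise majority vote
        best, best_dist = None, np.inf
        for p, m in maps.items():
            fpr, tpr = roc_point(m, truth)
            dist = np.hypot(fpr, 1.0 - tpr)              # distance to the ideal ROC corner
            if dist < best_dist:
                best, best_dist = p, dist
        return best

    # toy usage with a threshold-based "feature detector" on a random image
    rng = np.random.default_rng(3)
    img = rng.random((64, 64))
    detector = lambda im, t: im > t
    grid = list(product(np.linspace(0.3, 0.8, 6)))
    majority = lambda ms: np.mean(ms, axis=0) > 0.5
    print("selected parameters:", select_parameters(img, detector, grid, majority))
    ```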

  19. Gap probability - Measurements and models of a pecan orchard

    NASA Technical Reports Server (NTRS)

    Strahler, Alan H.; Li, Xiaowen; Moody, Aaron; Liu, YI

    1992-01-01

    Measurements and models are compared for gap probability in a pecan orchard. Measurements are based on panoramic photographs with a 50° by 135° view angle, made under the canopy looking upwards at regular positions along transects between orchard trees. The gap probability model is driven by geometric parameters at two levels: crown and leaf. Crown-level parameters include the shape of the crown envelope and spacing of crowns; leaf-level parameters include leaf size and shape, leaf area index, and leaf angle, all as functions of canopy position.
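
    For context, the standard leaf-level relation that such models build on (a textbook expression, not the two-level geometric-optical model of the paper) gives the within-crown gap probability at view zenith angle θ as

    ```latex
    P_{\mathrm{gap}}(\theta) \;=\; \exp\!\left(-\,\frac{G(\theta)\,\mathrm{LAI}}{\cos\theta}\right),
    ```

    where G(θ) is the mean projection of unit leaf area set by the leaf angle distribution and LAI is the leaf area index; the crown-level parameters listed above then control how many crowns, and how much crown path length, a view ray intersects.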

  20. Measurement of resistant starch by enzymatic digestion in starch and selected plant materials: collaborative study.

    PubMed

    McCleary, Barry V; McNally, Marian; Rossiter, Patricia

    2002-01-01

    Interlaboratory performance statistics were determined for a method developed to measure the resistant starch (RS) content of selected plant food products and a range of commercial starch samples. Food materials examined contained RS (cooked kidney beans, green banana, and corn flakes) and commercial starches, most of which naturally contain, or were processed to yield, elevated RS levels. The method evaluated was optimized to yield RS values in agreement with those reported for in vivo studies. Thirty-seven laboratories tested 8 pairs of blind duplicate starch or plant material samples with RS values between 0.6% (regular maize starch) and 64% (fresh weight basis). For matrixes excluding regular maize starch, repeatability relative standard deviation (RSDr) values ranged from 1.97 to 4.2%, and reproducibility relative standard deviation (RSDR) values ranged from 4.58 to 10.9%. The range of applicability of the test is 2-64% RS. The method is not suitable for products with <1% RS (e.g., regular maize starch; 0.6% RS). For such products, RSDr and RSDR values are unacceptably high.

  1. Quadratic semiparametric Von Mises calculus

    PubMed Central

    Robins, James; Li, Lingling; Tchetgen, Eric

    2009-01-01

    We discuss a new method of estimation of parameters in semiparametric and nonparametric models. The method is based on U-statistics constructed from quadratic influence functions. The latter extend ordinary linear influence functions of the parameter of interest as defined in semiparametric theory, and represent second order derivatives of this parameter. For parameters for which the matching cannot be perfect, the method leads to a bias-variance trade-off, and results in estimators that converge at a slower than n^(-1/2) rate. In a number of examples the resulting rate can be shown to be optimal. We are particularly interested in estimating parameters in models with a nuisance parameter of high dimension or low regularity, where the parameter of interest cannot be estimated at the n^(-1/2) rate. PMID:23087487

  2. Process Parameters on the Crystallization and Morphology of Hydroxyapatite Powders Prepared by a Hydrolysis Method

    NASA Astrophysics Data System (ADS)

    Wang, Moo-Chin; Hon, Min-Hsiung; Chen, Hui-Ting; Yen, Feng-Lin; Hung, I.-Ming; Ko, Horng-Huey; Shih, Wei-Jen

    2013-07-01

    The effects of process parameters on the crystallization and morphology of hydroxyapatite (Ca10(PO4)6(OH)2, HA) powders synthesized from dicalcium phosphate dihydrate (CaHPO4·2H2O, DCPD) using a hydrolysis method have been investigated. X-ray diffraction (XRD), Fourier-transform infrared (FT-IR) spectra, scanning electron microscopy (SEM), transmission electron microscopy (TEM), and selected area electron diffraction (SAED) were used to characterize the synthesized powders. When DCPD underwent hydrolysis in 2.5 M NaOH solution (NaOH(aq)) at 303 K to 348 K (30 °C to 75 °C) for 1 hour, the XRD results revealed that HA was obtained for all the as-dried samples. The SEM morphology of the HA powders produced by DCPD hydrolysis at 348 K (75 °C) shows regular alignment and a short rod shape with a size of 200 nm in length and 50 nm in width. With DCPD hydrolysis in 2.5 M NaOH(aq) held at 348 K (75 °C) for 1 to 24 hours, XRD results demonstrated that all samples were HA and no other phases could be detected. Moreover, the XRD results also show that all the as-dried powders still maintained the HA structure when DCPD underwent hydrolysis in 0.1 to 5 M NaOH(aq) at 348 K (75 °C) for 1 hour. However, full transformation from HA to octa-calcium phosphate (OCP, Ca8H2(PO4)6·5H2O) occurred when hydrolysis was carried out in 10 M NaOH(aq). FT-IR spectra analysis revealed that some carbonated HA (Ca10(PO4)6(CO3), CHA) had formed. The SEM morphology results show that uniformly long rods 60 to 65 nm in width with regular alignment formed in the HA powder aggregates when DCPD underwent hydrolysis in 2.5 M NaOH(aq) at 348 K (75 °C) for 1 hour.

  3. Spatio Temporal EEG Source Imaging with the Hierarchical Bayesian Elastic Net and Elitist Lasso Models

    PubMed Central

    Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A.; Valdés-Hernández, Pedro A.; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A.

    2017-01-01

    The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of (L1 norm-based) and (L2 norm-based) constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but uses computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical. The ELASSO, in contrast, has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm based on combining the Empirical Bayes and the iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately, and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios, than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines, are freely available on a public website. PMID:29200994

  4. Spatio Temporal EEG Source Imaging with the Hierarchical Bayesian Elastic Net and Elitist Lasso Models.

    PubMed

    Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A; Valdés-Hernández, Pedro A; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A

    2017-01-01

    The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of (L1 norm-based) and (L2 norm-based) constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but uses computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical. The ELASSO, in contrast, has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm based on combining the Empirical Bayes and the iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately, and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios, than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines, are freely available on a public website.
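
    Both records describe the ENET penalty without spelling it out. As a generic illustration (not the Structured Sparse Bayesian Learning algorithm proposed in the paper, and with all dimensions invented), the proximal-gradient sketch below solves the classical penalized-regression form min_x ½‖Lx − y‖² + λ₁‖x‖₁ + ½λ₂‖x‖² that the Bayesian formulation is meant to replace.

    ```python
    import numpy as np

    def enet_prox_grad(L, y, lam1, lam2, n_iter=500):
        """Proximal gradient (ISTA-style) for the elastic-net-penalized linear inverse problem."""
        x = np.zeros(L.shape[1])
        step = 1.0 / (np.linalg.norm(L, 2) ** 2 + lam2)     # safe step from the Lipschitz constant
        for _ in range(n_iter):
            grad = L.T @ (L @ x - y) + lam2 * x             # gradient of the smooth part (data + L2)
            v = x - step * grad
            x = np.sign(v) * np.maximum(np.abs(v) - step * lam1, 0.0)   # soft-thresholding (L1 prox)
        return x

    # toy "lead-field" problem: few sensors, many sources, sparse ground truth
    rng = np.random.default_rng(4)
    L = rng.standard_normal((32, 200))
    x_true = np.zeros(200)
    x_true[[10, 50, 120]] = [2.0, -1.5, 1.0]
    y = L @ x_true + 0.05 * rng.standard_normal(32)
    x_hat = enet_prox_grad(L, y, lam1=0.1, lam2=0.05)
    print("indices of large recovered coefficients:", np.flatnonzero(np.abs(x_hat) > 0.2))
    ```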

  5. An oscillating wave energy converter with nonlinear snap-through Power-Take-Off systems in regular waves

    NASA Astrophysics Data System (ADS)

    Zhang, Xian-tao; Yang, Jian-min; Xiao, Long-fei

    2016-07-01

    Floating oscillating bodies constitute a large class of wave energy converters, especially for offshore deployment. Usually the Power-Take-Off (PTO) system is a direct-drive linear electric generator or a hydraulic motor that drives an electric generator, and the PTO system is simplified as a linear spring and a linear damper. However, the conversion is less effective at wave periods off resonance. Thus, a nonlinear snap-through mechanism with two symmetrically oblique springs and a linear damper is applied in the PTO system. The nonlinear snap-through mechanism is characterized by negative stiffness and a double-well potential. An important nonlinear parameter γ is defined as the ratio of half of the horizontal distance between the two springs to the original length of both springs. A time-domain method is applied to the dynamics of the wave energy converter in regular waves, and a state-space model is used to replace the convolution terms in the time-domain equation. The results show that the energy harvested by the nonlinear PTO system is larger than that by the linear system for low-frequency input, while the power captured by nonlinear converters is slightly smaller than that by linear converters for high-frequency input. The wave amplitude, the damping coefficient of the PTO system, and the nonlinear parameter γ affect the power capture performance of nonlinear converters. The oscillation of nonlinear wave energy converters may be confined to one potential well or be periodically inter-well for certain values of the incident wave frequency and the nonlinear parameter γ, which differs from the sinusoidal response characteristic of linear converters in regular waves.
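
    For the oblique-spring mechanism described above, elementary spring geometry (with notation assumed here rather than taken from the paper) gives the restoring force along the direction of motion x, for spring stiffness k, natural length l₀, and anchor half-separation γ l₀, as

    ```latex
    F(x) \;=\; -\,2k\,x\left(1 - \frac{l_0}{\sqrt{x^2 + \gamma^2 l_0^2}}\right),
    ```

    which has negative stiffness around x = 0 and two stable equilibria at x = ±l₀√(1 − γ²) whenever γ < 1, i.e. the double-well potential referred to in the abstract.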

  6. Determination of heat transfer parameters by use of finite integral transform and experimental data for regular geometric shapes

    NASA Astrophysics Data System (ADS)

    Talaghat, Mohammad Reza; Jokar, Seyyed Mohammad

    2017-12-01

    This article offers a study on the estimation of heat transfer parameters (heat transfer coefficient and thermal diffusivity) using analytical solutions and experimental data for regular geometric shapes (infinite slab, infinite cylinder, and sphere). Analytical solutions are broadly used for experimentally determining these parameters. Here, the method of Finite Integral Transform (FIT) was used to solve the governing differential equations. The temperature change at the centerline location of the regular shapes was recorded to determine both the thermal diffusivity and the heat transfer coefficient. Aluminum and brass were used for testing. Experiments were performed for different conditions, such as in a highly agitated water medium (T = 52 °C) and in an air medium (T = 25 °C). Then, with the known slope of the temperature ratio vs. time curve and the thickness of the slab or the radius of the cylindrical or spherical samples, the thermal diffusivity and heat transfer coefficient may be determined. According to the method presented in this study, the estimated thermal diffusivities of aluminum and brass are 8.395 × 10⁻⁵ and 3.42 × 10⁻⁵ m²/s for a slab, 8.367 × 10⁻⁵ and 3.41 × 10⁻⁵ m²/s for a cylindrical rod, and 8.385 × 10⁻⁵ and 3.40 × 10⁻⁵ m²/s for a spherical shape, respectively. The results show close agreement between the values estimated here and those already published in the literature. The TAAD% is 0.42 and 0.39 for the thermal diffusivity of aluminum and brass, respectively.
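
    The slope-based estimate mentioned above can be made explicit with the standard one-term transient-conduction series (a textbook relation, stated here under the assumption of a large Biot number for the agitated-water runs, not a formula copied from the article): for a slab of half-thickness L, the centerline temperature ratio behaves as

    ```latex
    \theta^{*} \;=\; \frac{T_c - T_\infty}{T_i - T_\infty} \;\approx\; C_1 \exp\!\left(-\lambda_1^2\,\frac{\alpha t}{L^2}\right)
    \qquad\Rightarrow\qquad
    \alpha \;=\; -\,\frac{m\,L^2}{\lambda_1^2},
    ```

    where m is the slope of ln θ* versus t and λ₁ is the first series eigenvalue (λ₁ → π/2 for a slab as Bi → ∞); analogous eigenvalues apply for the cylinder and the sphere.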

  7. Regular treatment with formoterol versus regular treatment with salmeterol for chronic asthma: serious adverse events

    PubMed Central

    Cates, Christopher J; Lasserson, Toby J

    2014-01-01

    Background: An increase in serious adverse events with both regular formoterol and regular salmeterol in chronic asthma has been demonstrated in previous Cochrane reviews.
    Objectives: We set out to compare the risks of mortality and non-fatal serious adverse events in trials which have randomised patients with chronic asthma to regular formoterol versus regular salmeterol.
    Search methods: We identified trials using the Cochrane Airways Group Specialised Register of trials. We checked manufacturers’ websites of clinical trial registers for unpublished trial data and also checked Food and Drug Administration (FDA) submissions in relation to formoterol and salmeterol. The date of the most recent search was January 2012.
    Selection criteria: We included controlled, parallel-design clinical trials on patients of any age and with any severity of asthma if they randomised patients to treatment with regular formoterol versus regular salmeterol (without randomised inhaled corticosteroids), and were of at least 12 weeks’ duration.
    Data collection and analysis: Two authors independently selected trials for inclusion in the review and extracted outcome data. We sought unpublished data on mortality and serious adverse events from the sponsors and authors.
    Main results: The review included four studies (involving 1116 adults and 156 children). All studies were open label and recruited patients who were already taking inhaled corticosteroids for their asthma, and all studies contributed data on serious adverse events. All studies compared formoterol 12 μg versus salmeterol 50 μg twice daily. The adult studies were all comparing Foradil Aerolizer with Serevent Diskus, and the children’s study compared Oxis Turbohaler to Serevent Accuhaler. There was only one death in an adult (which was unrelated to asthma) and none in children, and there were no significant differences in non-fatal serious adverse events comparing formoterol to salmeterol in adults (Peto odds ratio (OR) 0.77; 95% confidence interval (CI) 0.46 to 1.28), or children (Peto OR 0.95; 95% CI 0.06 to 15.33). Over a six-month period, in studies involving adults that contributed to this analysis, the percentages with serious adverse events were 5.1% for formoterol and 6.4% for salmeterol; and over a three-month period the percentages of children with serious adverse events were 1.3% for formoterol and 1.3% for salmeterol.
    Authors’ conclusions: We identified four studies comparing regular formoterol to regular salmeterol (without randomised inhaled corticosteroids, but all participants were on regular background inhaled corticosteroids). The events were infrequent and consequently too few patients have been studied to allow any firm conclusions to be drawn about the relative safety of formoterol and salmeterol. Asthma-related serious adverse events were rare and there were no reported asthma-related deaths. PMID:22419326

  8. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank 40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the art techniques.

  9. Spectral Regularization Algorithms for Learning Large Incomplete Matrices

    PubMed Central

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-01-01

We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465

  10. A fractional-order accumulative regularization filter for force reconstruction

    NASA Astrophysics Data System (ADS)

    Wensong, Jiang; Zhongyu, Wang; Jing, Lv

    2018-02-01

The ill-posed inverse problem of force reconstruction stems from the influence of noise on the measured responses and results in an inaccurate or non-unique solution. To overcome this ill-posedness, in this paper the transfer function of the reconstruction model is redefined by a Fractional-order Accumulative Regularization Filter (FARF). First, the noisy measured responses are refined by a fractional-order accumulation filter based on a dynamic data refresh strategy. Second, a transfer function, generated from the filtered responses, is manipulated by an iterative Tikhonov regularization with a series of iterative Landweber filter factors. Third, the regularization parameter is optimized by Generalized Cross-Validation (GCV) to mitigate the ill-posedness of the force reconstruction model. A Dynamic Force Measurement System (DFMS) for force reconstruction is designed to illustrate the practical advantages of the suggested FARF method. The experimental results show that the FARF method with r = 0.1 and α = 20 achieves a PRE of 0.36% and an RE of 2.45%, outperforming the other FARF configurations and the traditional regularization methods in dynamic force reconstruction.
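The record above uses Generalized Cross-Validation to choose the regularization parameter. The following is a minimal sketch of GCV-based selection for a standard-form Tikhonov problem, not the FARF pipeline itself; the matrix `A`, data `b`, and the lambda grid are illustrative assumptions.

```python
import numpy as np

def gcv_tikhonov(A, b, lambdas):
    """Return the lambda in `lambdas` minimizing the GCV score for min ||Ax-b||^2 + lam^2 ||x||^2."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    m = A.shape[0]
    best_lam, best_gcv = None, np.inf
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)                        # Tikhonov filter factors
        resid2 = np.sum(((1.0 - f) * beta) ** 2) + (b @ b - beta @ beta)
        trace = m - np.sum(f)                             # effective residual degrees of freedom
        gcv = resid2 / trace**2
        if gcv < best_gcv:
            best_lam, best_gcv = lam, gcv
    return best_lam

# Example: pick lambda on a logarithmic grid for a random ill-conditioned system.
rng = np.random.default_rng(1)
A = rng.standard_normal((80, 40)) @ np.diag(np.logspace(0, -6, 40))
x_true = rng.standard_normal(40)
b = A @ x_true + 1e-3 * rng.standard_normal(80)
lam_opt = gcv_tikhonov(A, b, np.logspace(-8, 1, 50))
```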

  11. Three-dimensional Gravity Inversion with a New Gradient Scheme on Unstructured Grids

    NASA Astrophysics Data System (ADS)

    Sun, S.; Yin, C.; Gao, X.; Liu, Y.; Zhang, B.

    2017-12-01

Stabilized gradient-based methods have proved to be efficient for inverse problems. In these methods, driving the gradient close to zero effectively minimizes the objective function, so the gradient of the objective function determines the inversion results. By analyzing the cause of poor depth resolution in gradient-based gravity inversion methods, we find that imposing a depth-weighting functional on the conventional gradient can improve the depth resolution to some extent. However, the improvement depends on the regularization parameter, and the effect of the regularization term becomes smaller with increasing depth (shown in Figure 1(a)). In this paper, we propose a new gradient scheme for gravity inversion by introducing a weighted model vector. The new gradient improves the depth resolution more efficiently, is independent of the regularization parameter, and the effect of the regularization term is not weakened as depth increases. Besides, the fuzzy c-means clustering method and a smoothing operator are both used as regularization terms to yield an internally consecutive inverse model with sharp boundaries (Sun and Li, 2015). We have tested our new gradient scheme with unstructured grids on synthetic data to illustrate the effectiveness of the algorithm. Gravity forward modeling with unstructured grids is based on the algorithm proposed by Okabe (1979). We use a linear conjugate gradient inversion scheme to solve the inversion problem. The numerical experiments show a great improvement in depth resolution compared with the regular gradient scheme, and the inverse model is compact at all depths (shown in Figure 1(b)). Acknowledgements: This research is supported by Key Program of National Natural Science Foundation of China (41530320), China Natural Science Foundation for Young Scientists (41404093), and Key National Research Project of China (2016YFC0303100, 2017YFC0601900). References: Sun J, Li Y. 2015. Multidomain petrophysically constrained inversion and geology differentiation using guided fuzzy c-means clustering. Geophysics, 80(4): ID1-ID18. Okabe M. 1979. Analytical expressions for gravity anomalies due to homogeneous polyhedral bodies and translations into magnetic anomalies. Geophysics, 44(4), 730-741.

  12. 4D-tomographic reconstruction of water vapor using the hybrid regularization technique with application to the North West of Iran

    NASA Astrophysics Data System (ADS)

    Adavi, Zohre; Mashhadi-Hossainali, Masoud

    2015-04-01

Water vapor is considered one of the most important weather parameters in meteorology. Its non-uniform distribution, which is due to the atmospheric phenomena above the surface of the earth, depends both on space and time. Due to the limited spatial and temporal coverage of observations, estimating water vapor is still a challenge in meteorology and related fields such as positioning and geodetic techniques. Tomography is a method for modeling the spatio-temporal variations of this parameter. By analyzing the impact of the troposphere on Global Navigation Satellite System (GNSS) signals, inversion techniques are used in this approach to model the water vapor. Non-uniqueness and instability of the solution are the two characteristic features of this problem. Horizontal and/or vertical constraints are usually used to compute a unique solution. Here, a hybrid regularization method is used for computing a regularized solution. The adopted method is based on the Least-Squares QR (LSQR) and Tikhonov regularization techniques. This method benefits from the advantages of both the iterative and direct techniques. Moreover, it is independent of initial values. Based on this property and using an appropriate resolution for the model, first the number of model elements that are not constrained by GPS measurements is minimized, and then water vapor density is estimated only at the voxels that are constrained by these measurements. In other words, no constraint is added to solve the problem. Reconstructed profiles of water vapor are validated using radiosonde measurements.
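A minimal sketch of the kind of damped LSQR step that underlies LSQR/Tikhonov hybrids like the one described above, assuming a SciPy environment: LSQR is applied to the damped least-squares problem min ||Ax - b||^2 + damp^2 ||x||^2. The matrix `A`, the data `b`, and the value of `damp` are illustrative placeholders, not the tomography system used in the study.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 120))          # stand-in for the voxel design matrix
x_true = rng.standard_normal(120)
b = A @ x_true + 0.01 * rng.standard_normal(200)

# `damp` plays the role of the Tikhonov regularization parameter.
x_reg = lsqr(A, b, damp=0.1, atol=1e-8, btol=1e-8)[0]
```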

  13. The Changes of Pulmonary Function in COPD During Four-Year Period

    PubMed Central

    Cukic, Vesna; Lovre, Vladimir; Ustamujic, Aida

    2013-01-01

Conflict of interest: none declared. Introduction COPD (chronic obstructive pulmonary disease) is characterized by airflow limitation that is not fully reversible. Objective: to show the changes of pulmonary function in COPD during a 4-year evolution of the illness. Material and Methods The research was done on patients suffering from COPD treated at the Clinic “Podhrastovi” during 2006 and 2007. The tested parameters were examined from the date of the patient's admission to hospital treatment in 2006 or 2007 and then followed prospectively until 2010 or 2011 (the follow-up period was 4 years). A total of 199 treated patients were chosen at random and regularly attended the control examinations. The study was conducted on adult patients of both sexes and different age groups. For each patient the duration of illness was recorded, as were sex, age, smoking habits, information about the regularity of taking bronchodilator therapy during remissions and about the treatment of disease exacerbations, and the results of pulmonary function tests: FVC (forced vital capacity), FEV1 (forced expiratory volume in one second) and bronchodilator reversibility testing. All these parameters were measured at the beginning and at the end of each hospital treatment on the apparatus of the Clinic “Podhrastovi”. We analysed the data obtained at the beginning of the first hospitalization and at the end of the last hospitalization, or at the last outpatient control when the patient was in a stable state. Patients were divided into three groups according to the number of exacerbations per year. Results Airflow limitation in COPD is progressive; both FVC and FEV1 show a statistically significant decrease during the 4-year follow-up period (p = 0.05 for both parameters). However, in patients regularly treated during phases of remission and exacerbation, the course of illness is slower: the fall of FVC and FEV1 is statistically significantly smaller in those who received regular treatment during remissions and exacerbations (p = 0.01 for both parameters). The number of patients responding properly to bronchodilators decreased statistically significantly during the follow-up period (p = 0.05). Conclusion COPD is characterized by airflow limitation which is progressive in the course of illness, but that course may be slowed using appropriate treatment during remissions and exacerbations. PMID:24082829

  14. A Comparison of the Career Maturity, Self Concept and Academic Achievement of Female Cooperative Vocational Office Training Students, Intensive Business Training Students, and Regular Business Education Students in Selected High Schools in Mississippi.

    ERIC Educational Resources Information Center

    Seaward, Marty Robertson

    The purpose of this study was to compare the career maturity, self concept, and academic achievement of female students enrolled in intensive business training (IBT), cooperative vocational office training (CVOT), and regular business education programs. A sample of 240 students, equalized into three groups on the basis of IQ scores, were given…

  15. Superintendent Selection: Lessons from Political Science.

    ERIC Educational Resources Information Center

    Brunner, C. Cryss

    Research has shown that women are underrepresented in positions of educational authority. This paper presents findings of a study that asked the following question: What is it about the regularities in discourse and practice in relationship to power in a particular community that would allow a woman to be selected for the superintendency, when…

  16. Embedded Incremental Feature Selection for Reinforcement Learning

    DTIC Science & Technology

    2012-05-01

Prior to this work, feature selection for reinforcement learning has focused on linear value function approximation (Kolter and Ng, 2009; Parr et al. …). In Proceedings of the 23rd International Conference on Machine Learning, pages 449–456. Kolter, J. Z. and Ng, A. Y. (2009). Regularization and feature…

  17. Two-level structural sparsity regularization for identifying lattices and defects in noisy images

    DOE PAGES

    Li, Xin; Belianinov, Alex; Dyck, Ondrej E.; ...

    2018-03-09

Here, this paper presents a regularized regression model with a two-level structural sparsity penalty applied to locate individual atoms in a noisy scanning transmission electron microscopy (STEM) image. In crystals, the locations of atoms are symmetric, condensed into a few lattice groups. Therefore, by identifying the underlying lattice in a given image, individual atoms can be accurately located. We propose to formulate the identification of the lattice groups as a sparse group selection problem. Furthermore, real atomic-scale images contain defects and vacancies, so atomic identification based solely on a lattice group may result in false positives and false negatives. To minimize error, the model includes an individual sparsity regularization in addition to the group sparsity for within-group selection, which results in a regression model with a two-level sparsity regularization. We propose a modification of the group orthogonal matching pursuit (gOMP) algorithm with a thresholding step to solve the atom-finding problem. The convergence and statistical analyses of the proposed algorithm are presented. The proposed algorithm is also evaluated through numerical experiments with simulated images. The applicability of the algorithm to the determination of atomic structures and the identification of imaging distortions and atomic defects was demonstrated using three real STEM images. In conclusion, we believe this is an important step toward automatic phase identification and assignment with the advent of genomic databases for materials.
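For orientation, the following is a minimal sketch of plain group orthogonal matching pursuit, the algorithm the paper modifies with a thresholding step; it is illustrative only and does not reproduce the authors' two-level penalty. The dictionary `D`, the `groups` mapping from lattice group to column indices, and the group budget are assumptions.

```python
import numpy as np

def group_omp(D, y, groups, n_groups):
    """Greedy selection of `n_groups` column groups of D that best explain y."""
    residual = y.copy()
    selected = []
    for _ in range(n_groups):
        # pick the group whose columns best correlate with the current residual
        scores = {g: np.linalg.norm(D[:, idx].T @ residual)
                  for g, idx in groups.items() if g not in selected}
        g_best = max(scores, key=scores.get)
        selected.append(g_best)
        cols = np.concatenate([groups[g] for g in selected])
        coef, *_ = np.linalg.lstsq(D[:, cols], y, rcond=None)   # refit on all chosen groups
        residual = y - D[:, cols] @ coef
    x = np.zeros(D.shape[1])
    x[cols] = coef
    return x, selected
```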

  18. Two-level structural sparsity regularization for identifying lattices and defects in noisy images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xin; Belianinov, Alex; Dyck, Ondrej E.

Here, this paper presents a regularized regression model with a two-level structural sparsity penalty applied to locate individual atoms in a noisy scanning transmission electron microscopy (STEM) image. In crystals, the locations of atoms are symmetric, condensed into a few lattice groups. Therefore, by identifying the underlying lattice in a given image, individual atoms can be accurately located. We propose to formulate the identification of the lattice groups as a sparse group selection problem. Furthermore, real atomic-scale images contain defects and vacancies, so atomic identification based solely on a lattice group may result in false positives and false negatives. To minimize error, the model includes an individual sparsity regularization in addition to the group sparsity for within-group selection, which results in a regression model with a two-level sparsity regularization. We propose a modification of the group orthogonal matching pursuit (gOMP) algorithm with a thresholding step to solve the atom-finding problem. The convergence and statistical analyses of the proposed algorithm are presented. The proposed algorithm is also evaluated through numerical experiments with simulated images. The applicability of the algorithm to the determination of atomic structures and the identification of imaging distortions and atomic defects was demonstrated using three real STEM images. In conclusion, we believe this is an important step toward automatic phase identification and assignment with the advent of genomic databases for materials.

  19. Effects of dance movement therapy on selected cardiovascular parameters and estimated maximum oxygen consumption in hypertensive patients.

    PubMed

    Aweto, H A; Owoeye, O B A; Akinbo, S R A; Onabajo, A A

    2012-01-01

Objective: Arterial hypertension is a medical condition associated with increased risks of death, cardiovascular mortality and cardiovascular morbidity including stroke, coronary heart disease, atrial fibrillation and renal insufficiency. Regular physical exercise is considered to be an important part of the non-pharmacologic treatment of hypertension. The purpose of this study was to investigate the effects of dance movement therapy (DMT) on selected cardiovascular parameters and estimated maximum oxygen consumption in hypertensive patients. Fifty (50) subjects with hypertension participated in the study. They were randomly assigned to 2 equal groups: A (DMT group) and B (control group). Group A carried out dance movement therapy 2 times a week for 4 weeks while group B underwent educational sessions 2 times a week for the same duration. All the subjects were on anti-hypertensive drugs. Thirty-eight subjects completed the study, with the DMT group having a total of 23 subjects (10 males and 13 females) and the control group 15 subjects (6 males and 9 females). Descriptive statistics (mean, standard deviation) and inferential statistics (paired and independent t-tests) were used for data analysis. Following four weeks of dance movement therapy, paired t-test analysis showed a statistically significant difference in resting systolic blood pressure (RSBP) (p < 0.001), resting diastolic blood pressure (RDBP) (p < 0.001), resting heart rate (RHR) (p = 0.024), maximum heart rate (MHR) (p = 0.002) and estimated oxygen consumption (VO2max) (p = 0.023) in subjects in group A (p < 0.05), while there was no significant difference observed in the outcome variables of subjects in group B (p > 0.05). Independent t-test analysis of the differences between the pre- and post-intervention scores of groups A and B also showed statistically significant differences in all the outcome variables (p < 0.05). DMT was effective in improving cardiovascular parameters and estimated maximum oxygen consumption in hypertensive patients.

  20. Healthy late preterm infants and supplementary artificial milk feeds: effects on breast feeding and associated clinical parameters.

    PubMed

    Mattsson, Elisabet; Funkquist, Eva-Lotta; Wickström, Maria; Nyqvist, Kerstin H; Volgsten, Helena

    2015-04-01

To compare the influence of supplementary artificial milk feeds on breast feeding and certain clinical parameters among healthy late preterm infants given regular supplementary artificial milk feeds versus being exclusively breast fed from birth. A comparative study using quantitative methods. Data were collected via a parental diary and medical records. Parents of 77 late preterm infants (34 5/7-36 6/7 weeks), whose mothers intended to breast feed, completed a diary during the infants' hospital stay. Infants who received regular supplementary artificial milk feeds experienced a longer delay before initiation of breast feeding, were breast fed less frequently and had longer hospital stays than infants exclusively breast fed from birth. Exclusively breast-fed infants had a greater weight loss than infants with regular artificial milk supplementation. A majority of the mothers (65%) with an infant prescribed artificial milk never expressed their milk and among the mothers who used a breast-pump, milk expression commenced late (10-84 hours after birth). At discharge, all infants were breast fed to some extent, 43% were exclusively breast fed. Clinical practice and routines influence the initiation of breast feeding among late preterm infants and may act as barriers to the mothers' establishment of exclusive breast feeding. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Joint image registration and fusion method with a gradient strength regularization

    NASA Astrophysics Data System (ADS)

    Lidong, Huang; Wei, Zhao; Jun, Wang

    2015-05-01

Image registration is an essential process for image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion instead of treating them as two independent processes in the conventional way. To improve the visual quality of a fused image, a gradient strength (GS) regularization is introduced in the cost function of ML. The GS of the fused image is controllable by setting the target GS value in the regularization term. This is useful because a larger target GS brings a clearer fused image and a smaller target GS makes the fused image smoother and thus restrains noise. Hence, the subjective quality of the fused image can be improved whether the source images are polluted by noise or not. We can obtain the fused image and registration parameters successively by minimizing the cost function using an iterative optimization method. Experimental results show that our method is effective with translation, rotation, and scale parameters in the range of [-2.0, 2.0] pixels, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and noise variances smaller than 300. It is also demonstrated that our method yields a more visually pleasing fused image and higher registration accuracy compared with a state-of-the-art algorithm.
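A minimal, hedged sketch of a gradient-strength penalty of the kind described above: the average gradient magnitude of the fused image is pulled toward a chosen target value. The simple averaging fidelity term and the weight `mu` are assumptions for illustration, not the paper's ML cost function.

```python
import numpy as np

def gradient_strength(img):
    """Mean gradient magnitude of a 2-D image."""
    gy, gx = np.gradient(img.astype(float))
    return np.mean(np.hypot(gx, gy))

def fusion_cost(fused, src1, src2, target_gs, mu=1.0):
    """Toy cost: data fidelity to both sources plus a penalty toward the target GS."""
    data_fidelity = np.mean((fused - src1) ** 2) + np.mean((fused - src2) ** 2)
    gs_penalty = (gradient_strength(fused) - target_gs) ** 2
    return data_fidelity + mu * gs_penalty
```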

  2. Regularized solution of a nonlinear problem in electromagnetic sounding

    NASA Astrophysics Data System (ADS)

    Piero Deidda, Gian; Fenu, Caterina; Rodriguez, Giuseppe

    2014-12-01

    Non destructive investigation of soil properties is crucial when trying to identify inhomogeneities in the ground or the presence of conductive substances. This kind of survey can be addressed with the aid of electromagnetic induction measurements taken with a ground conductivity meter. In this paper, starting from electromagnetic data collected by this device, we reconstruct the electrical conductivity of the soil with respect to depth, with the aid of a regularized damped Gauss-Newton method. We propose an inversion method based on the low-rank approximation of the Jacobian of the function to be inverted, for which we develop exact analytical formulae. The algorithm chooses a relaxation parameter in order to ensure the positivity of the solution and implements various methods for the automatic estimation of the regularization parameter. This leads to a fast and reliable algorithm, which is tested on numerical experiments both on synthetic data sets and on field data. The results show that the algorithm produces reasonable solutions in the case of synthetic data sets, even in the presence of a noise level consistent with real applications, and yields results that are compatible with those obtained by electrical resistivity tomography in the case of field data. Research supported in part by Regione Sardegna grant CRP2_686.
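A minimal sketch of a damped, Tikhonov-regularized Gauss-Newton update of the general kind described above (not the authors' algorithm). The forward model `fwd`, its Jacobian `jac`, the regularization parameter `alpha`, and the step-shrinking positivity rule are illustrative assumptions.

```python
import numpy as np

def regularized_gauss_newton(fwd, jac, d_obs, m0, alpha=1e-2, step=1.0, n_iters=20):
    """Iteratively refine model m to fit d_obs with a Tikhonov-damped Gauss-Newton step."""
    m = m0.copy()
    for _ in range(n_iters):
        r = fwd(m) - d_obs
        J = jac(m)
        # solve (J^T J + alpha I) dm = -J^T r
        dm = np.linalg.solve(J.T @ J + alpha * np.eye(m.size), -J.T @ r)
        s = step
        while np.any(m + s * dm <= 0) and s > 1e-6:   # shrink the relaxation to keep m positive
            s *= 0.5
        m = m + s * dm
    return m
```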

  3. A Cash-back Rebate Program for Healthy Food Purchases in South Africa: Selection and Program Effects in Self-reported Diet Patterns.

    PubMed

    An, Ruopeng; Sturm, Roland

    2017-03-01

A South African insurer launched a rebate program for healthy food purchases for its members, but only available in program-designated supermarkets. To eliminate selection bias in program enrollment, we estimated the impact of subsidies in nudging the population towards a healthier diet using an instrumental variable approach. Data came from a health behavior questionnaire administered among members in the health promotion program. Individual and supermarket addresses were geocoded and differential distances from home to program-designated supermarkets versus competing supermarkets were calculated. Bivariate probit and linear instrumental variable models were performed to control for likely unobserved selection biases, employing differential distances as a predictor of program enrollment. For regular fast-food, processed meat, and salty food consumption, approximately two-thirds of the difference between participants and nonparticipants was attributable to the intervention and one-third to selection effects. For fruit/vegetable and fried food consumption, merely one-eighth of the difference was selection. The rebate reduced regular consumption of fast food by 15% and foods high in salt/sugar and fried foods by 22%-26%, and increased fruit/vegetable consumption by 21% (0.66 serving/day). Large population interventions are an essential complement to laboratory experiments, but selection biases require explicit attention in evaluation studies conducted in naturalistic settings.

  4. Evolutionary graph theory: breaking the symmetry between interaction and replacement

    PubMed Central

    Ohtsuki, Hisashi; Pacheco, Jorge M.; Nowak, Martin A.

    2008-01-01

    We study evolutionary dynamics in a population whose structure is given by two graphs: the interaction graph determines who plays with whom in an evolutionary game; the replacement graph specifies the geometry of evolutionary competition and updating. First, we calculate the fixation probabilities of frequency dependent selection between two strategies or phenotypes. We consider three different update mechanisms: birth-death, death-birth and imitation. Then, as a particular example, we explore the evolution of cooperation. Suppose the interaction graph is a regular graph of degree h, the replacement graph is a regular graph of degree g and the overlap between the two graphs is a regular graph of degree l. We show that cooperation is favored by natural selection if b/c > hg/l. Here, b and c denote the benefit and cost of the altruistic act. This result holds for death-birth updating, weak selection and large population size. Note that the optimum population structure for cooperators is given by maximum overlap between the interaction and the replacement graph (g = h = l), which means that the two graphs are identical. We also prove that a modified replicator equation can describe how the expected values of the frequencies of an arbitrary number of strategies change on replacement and interaction graphs: the two graphs induce a transformation of the payoff matrix. PMID:17350049

  5. Multisensor satellite data for water quality analysis and water pollution risk assessment: decision making under deep uncertainty with fuzzy algorithm in framework of multimodel approach

    NASA Astrophysics Data System (ADS)

    Kostyuchenko, Yuriy V.; Sztoyka, Yulia; Kopachevsky, Ivan; Artemenko, Igor; Yuschenko, Maxim

    2017-10-01

A multi-model approach for remote sensing data processing and interpretation is described. The problem of satellite data utilization in a multi-modeling approach for socio-ecological risk assessment is formally defined. A method for utilizing observation, measurement and modeling data in the framework of the multi-model approach is described. The methodology and models of risk assessment in the framework of a decision support approach are defined and described. A method of water quality assessment using satellite observation data is described. The method is based on analysis of the spectral reflectance of aquifers. Spectral signatures of freshwater bodies and offshore waters are analyzed. Correlations between spectral reflectance, pollution and selected water quality parameters are analyzed and quantified. Data from the MODIS, MISR, AIRS and Landsat sensors received in 2002-2014 have been utilized and verified by in-field spectrometry and lab measurements. A fuzzy-logic-based approach for decision support in the field of water quality degradation risk is discussed. The decision on the water quality category is made with a fuzzy algorithm using a limited set of uncertain parameters. Data from satellite observations, field measurements and modeling are utilized in the framework of the proposed approach. It is shown that this algorithm allows estimating the water quality degradation rate and pollution risks. Problems of constructing spatial and temporal distributions of the calculated parameters, as well as the problem of data regularization, are discussed. Using the proposed approach, maps of surface water pollution risk from point and diffuse sources are calculated and discussed.

  6. An evaluation of alternative selection indexes for a non-linear profit trait approaching its economic optimum.

    PubMed

    Martin-Collado, D; Byrne, T J; Visser, B; Amer, P R

    2016-12-01

    This study used simulation to evaluate the performance of alternative selection index configurations in the context of a breeding programme where a trait with a non-linear economic value is approaching an economic optimum. The simulation used a simple population structure that approximately mimics selection in dual purpose sheep flocks in New Zealand (NZ). In the NZ dual purpose sheep population, number of lambs born is a genetic trait that is approaching an economic optimum, while genetically correlated growth traits have linear economic values and are not approaching any optimum. The predominant view among theoretical livestock geneticists is that the optimal approach to select for nonlinear profit traits is to use a linear selection index and to update it regularly. However, there are some nonlinear index approaches that have not been evaluated. This study assessed the efficiency of the following four alternative selection index approaches in terms of genetic progress relative to each other: (i) a linear index, (ii) a linear index updated regularly, (iii) a nonlinear (quadratic) index, and (iv) a NLF index (nonlinear index below the optimum and then flat). The NLF approach does not reward or penalize animals for additional genetic merit beyond the trait optimum. It was found to be at least comparable in efficiency to the approach of regularly updating the linear index with short (15 year) and long (30 year) time frames. The relative efficiency of this approach was slightly reduced when the current average value of the nonlinear trait was close to the optimum. Finally, practical issues of industry application of indexes are considered and some potential practical benefits of efficient deployment of a NLF index in highly heterogeneous industries (breeds, flocks and production environments) such as in the NZ dual purpose sheep population are discussed. © 2016 Blackwell Verlag GmbH.

  7. Persistent frequent attenders in primary care: costs, reasons for attendance, organisation of care and potential for cognitive behavioural therapeutic intervention.

    PubMed

    Morriss, Richard; Kai, Joe; Atha, Christopher; Avery, Anthony; Bayes, Sara; Franklin, Matthew; George, Tracey; James, Marilyn; Malins, Samuel; McDonald, Ruth; Patel, Shireen; Stubley, Michelle; Yang, Min

    2012-07-06

    The top 3% of frequent attendance in primary care is associated with 15% of all appointments in primary care, a fivefold increase in hospital expenditure, and more mental disorder and functional somatic symptoms compared to normal attendance. Although often temporary if these rates of attendance last more than two years, they may become persistent (persistent frequent or regular attendance). However, there is no long-term study of the economic impact or clinical characteristics of regular attendance in primary care. Cognitive behaviour formulation and treatment (CBT) for regular attendance as a motivated behaviour may offer an understanding of the development, maintenance and treatment of regular attendance in the context of their health problems, cognitive processes and social context. A case control design will compare the clinical characteristics, patterns of health care use and economic costs over the last 10 years of 100 regular attenders (≥30 appointments with general practitioner [GP] over 2 years) with 100 normal attenders (6-22 appointments with GP over 2 years), from purposefully selected primary care practices with differing organisation of care and patient demographics. Qualitative interviews with regular attending patients and practice staff will explore patient barriers, drivers and experiences of consultation, and organisation of care by practices with its challenges. Cognitive behaviour formulation analysed thematically will explore the development, maintenance and therapeutic opportunities for management in regular attenders. The feasibility, acceptability and utility of CBT for regular attendance will be examined. The health care costs, clinical needs, patient motivation for consultation and organisation of care for persistent frequent or regular attendance in primary care will be explored to develop training and policies for service providers. CBT for regular attendance will be piloted with a view to developing this approach as part of a multifaceted intervention.

  8. Mental health status of Sri Lanka Navy personnel three years after end of combat operations: a follow up study.

    PubMed

    Hanwella, Raveen; Jayasekera, Nicholas E L W; de Silva, Varuni A

    2014-01-01

    The main aim of this study was to assess the mental health status of the Navy Special Forces and regular forces three and a half years after the end of combat operations in mid 2009, and compare it with the findings in 2009. This cross sectional study was carried out in the Sri Lanka Navy (SLN), three and a half years after the end of combat operations. Representative samples of SLN Special Forces and regular forces deployed in combat areas were selected using simple random sampling. Only personnel who had served continuously in combat areas during the one year period prior to the end of combat operations were included in the study. The sample consisted of 220 Special Forces and 275 regular forces personnel. Compared to regular forces a significantly higher number of Special Forces personnel had experienced potentially traumatic events. Compared to the period immediately after end of combat operations, in the Special Forces, prevalence of psychological distress and fatigue showed a marginal increase while hazardous drinking and multiple physical symptoms showed a marginal decrease. In the regular forces, the prevalence of psychological distress, fatigue and multiple somatic symptoms declined and prevalence of hazardous drinking increased from 16.5% to 25.7%. During the same period prevalence of smoking doubled in both Special Forces and regular forces. Prevalence of PTSD reduced from 1.9% in Special Forces to 0.9% and in the regular forces from 2.07% to 1.1%. Three and a half years after the end of combat operations mental health problems have declined among SLN regular forces while there was no significant change among Special Forces. Hazardous drinking among regular forces and smoking among both Special Forces and regular forces have increased.

  9. Hadamard States for the Klein-Gordon Equation on Lorentzian Manifolds of Bounded Geometry

    NASA Astrophysics Data System (ADS)

    Gérard, Christian; Oulghazi, Omar; Wrochna, Michał

    2017-06-01

    We consider the Klein-Gordon equation on a class of Lorentzian manifolds with Cauchy surface of bounded geometry, which is shown to include examples such as exterior Kerr, Kerr-de Sitter spacetime and the maximal globally hyperbolic extension of the Kerr outer region. In this setup, we give an approximate diagonalization and a microlocal decomposition of the Cauchy evolution using a time-dependent version of the pseudodifferential calculus on Riemannian manifolds of bounded geometry. We apply this result to construct all pure regular Hadamard states (and associated Feynman inverses), where regular refers to the state's two-point function having Cauchy data given by pseudodifferential operators. This allows us to conclude that there is a one-parameter family of elliptic pseudodifferential operators that encodes both the choice of (pure, regular) Hadamard state and the underlying spacetime metric.

  10. Constraints for transonic black hole accretion

    NASA Technical Reports Server (NTRS)

    Abramowicz, Marek A.; Kato, Shoji

    1989-01-01

Regularity conditions and global topological constraints leave some forbidden regions in the parameter space of transonic accretion of isothermal, rotating matter onto black holes. Unstable flows occupy regions touching the boundaries of the forbidden regions. The astrophysical consequences of these results are discussed.

  11. Use of time series and harmonic constituents of tidal propagation to enhance estimation of coastal aquifer heterogeneity

    USGS Publications Warehouse

    Hughes, Joseph D.; White, Jeremy T.; Langevin, Christian D.

    2010-01-01

A synthetic two-dimensional model of a horizontally and vertically heterogeneous confined coastal aquifer system, based on the Upper Floridan aquifer in south Florida, USA, subjected to constant recharge and a complex tidal signal was used to generate 15-minute water-level data at select locations over a 7-day simulation period. “Observed” water-level data were generated by adding noise, representative of typical barometric pressure variations and measurement errors, to 15-minute data from the synthetic model. Permeability was calibrated using a non-linear gradient-based parameter inversion approach with preferred-value Tikhonov regularization and 1) “observed” water-level data, 2) harmonic constituent data, or 3) a combination of “observed” water-level and harmonic constituent data. In all cases, high-frequency data used in the parameter inversion process were able to characterize broad-scale heterogeneities; the ability to discern fine-scale heterogeneity was greater when harmonic constituent data were used. These results suggest that the combined use of highly parameterized-inversion techniques and high-frequency time and/or processed-harmonic constituent water-level data could be a useful approach to better characterize aquifer heterogeneities in coastal aquifers influenced by ocean tides.
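Preferred-value Tikhonov regularization, as used in the calibration above, penalizes departures of each parameter from a preferred value. The following is a minimal sketch of the linear(ized) case, not the study's PEST setup; the design matrix `G`, data `d`, preferred values `p_pref`, and weight `beta` are illustrative.

```python
import numpy as np

def preferred_value_solve(G, d, p_pref, beta):
    """Minimize ||G p - d||^2 + beta^2 ||p - p_pref||^2 via an augmented least-squares system."""
    n = G.shape[1]
    A = np.vstack([G, beta * np.eye(n)])
    rhs = np.concatenate([d, beta * p_pref])
    p, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return p
```

Larger `beta` pulls the estimate toward the preferred values; smaller `beta` lets the data dominate, mirroring the trade-off the regularization parameter controls in the record above.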

  12. Good reasons to implement quality assurance in nationwide breast cancer screening programs in Croatia and Serbia: results from a pilot study.

    PubMed

    Ciraj-Bjelac, Olivera; Faj, Dario; Stimac, Damir; Kosutic, Dusko; Arandjic, Danijela; Brkic, Hrvoje

    2011-04-01

    The purpose of this study is to investigate the need for and the possible achievements of a comprehensive QA programme and to look at effects of simple corrective actions on image quality in Croatia and in Serbia. The paper focuses on activities related to the technical and radiological aspects of QA. The methodology consisted of two phases. The aim of the first phase was the initial assessment of mammography practice in terms of image quality, patient dose and equipment performance in selected number of mammography units in Croatia and Serbia. Subsequently, corrective actions were suggested and implemented. Then the same parameters were re-assessed. Most of the suggested corrective actions were simple, low-cost and possible to implement immediately, as these were related to working habits in mammography units, such as film processing and darkroom conditions. It has been demonstrated how simple quantitative assessment of image quality can be used for optimisation purposes. Analysis of image quality parameters as OD, gradient and contrast demonstrated general similarities between mammography practices in Croatia and Serbia. The applied methodology should be expanded to larger number of hospitals and applied on a regular basis. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.

  13. Classifying orbits in galaxy models with a prolate or an oblate dark matter halo component

    NASA Astrophysics Data System (ADS)

    Zotos, Euaggelos E.

    2014-03-01

    Aims: The distinction between regular and chaotic motion in galaxies is undoubtedly an issue of paramount importance. We explore the nature of orbits of stars moving in the meridional plane (R,z) of an axially symmetric galactic model with a disk, a spherical nucleus, and a flat biaxial dark matter halo component. In particular, we study the influence of all the involved parameters of the dynamical system by computing both the percentage of chaotic orbits and the percentages of orbits of the main regular resonant families in each case. Methods: To distinguish between ordered and chaotic motion, we use the smaller alignment index (SALI) method to extensive samples of orbits by numerically integrating the equations of motion as well as the variational equations. Moreover, a method based on the concept of spectral dynamics that utilizes the Fourier transform of the time series of each coordinate is used to identify the various families of regular orbits and also to recognize the secondary resonances that bifurcate from them. Two cases are studied for every parameter: (i) the case where the halo component is prolate and (ii) the case where an oblate dark halo is present. Results: Our numerical investigation indicates that all the dynamical quantities affect, more or less, the overall orbital structure. It was observed that the mass of the nucleus, the halo flattening parameter, the scale length of the halo, the angular momentum, and the orbital energy are the most influential quantities, while the effect of all the other parameters is much weaker. It was also found that all the parameters corresponding to the disk only have a minor influence on the nature of orbits. Furthermore, some other quantities, such as the minimum distance to the origin, the horizontal, and the vertical force, were tested as potential chaos detectors. Our analysis revealed that only general information can be obtained from these quantities. We also compared our results with early related work. Appendix A is available in electronic form at http://www.aanda.org

  14. Nonlinear PP and PS joint inversion based on the exact Zoeppritz equations: a two-stage procedure

    NASA Astrophysics Data System (ADS)

    Zhi, Lixia; Chen, Shuangquan; Song, Baoshan; Li, Xiang-yang

    2018-04-01

S-velocity and density are very important parameters in distinguishing lithology and estimating other petrophysical properties. A reliable estimate of S-velocity and density is very difficult to obtain, even from long-offset gather data. Joint inversion of PP and PS data provides a promising strategy for stabilizing and improving the results of inversion in estimating elastic parameters and density. For 2D or 3D inversion, the trace-by-trace strategy is still the most widely used method because of its high efficiency, which is due to parallel computing, although it often suffers from a lack of clarity in the results. This paper describes a two-stage inversion method for nonlinear PP and PS joint inversion based on the exact Zoeppritz equations. Our proposed method has several advantages: (1) Thanks to the exact Zoeppritz equation, our joint inversion method is applicable for wide-angle amplitude-versus-angle inversion; (2) The use of both P- and S-wave information can further enhance the stability and accuracy of parameter estimation, especially for the S-velocity and density; (3) The two-stage inversion procedure proposed in this paper can achieve a good compromise between efficiency and precision. On the one hand, the trace-by-trace strategy used in the first stage can be processed in parallel so that it has high computational efficiency. On the other hand, to deal with the indistinctness of and undesired disturbances to the inversion results obtained from the first stage, we apply the second stage, total variation (TV) regularization. By enforcing spatial and temporal constraints, the TV regularization stage deblurs the inversion results and leads to parameter estimation with greater precision. Notably, the computational cost of the TV regularization stage can be ignored compared to the first stage because it is solved using fast split Bregman iterations. Numerical examples using a well log and the Marmousi II model show that the proposed joint inversion is a reliable method capable of accurately estimating the density parameter as well as P-wave velocity and S-wave velocity, even when the seismic data are noisy, with a signal-to-noise ratio of 5.
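As a generic illustration of the split-Bregman-solved TV regularization referenced above (not the authors' 2D/3D implementation), the following sketch denoises a 1-D trace; the parameters `lam` and `mu` and the piecewise-constant test signal are assumptions.

```python
import numpy as np

def tv_denoise_1d(f, lam=0.5, mu=1.0, n_iters=100):
    """Split Bregman iterations for min_u 0.5*||u - f||^2 + lam*||D u||_1 on a 1-D signal."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)               # (n-1) x n forward-difference operator
    A = np.eye(n) + mu * D.T @ D                  # system matrix for the u-update
    u = f.copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    shrink = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
    for _ in range(n_iters):
        u = np.linalg.solve(A, f + mu * D.T @ (d - b))   # quadratic subproblem
        Du = D @ u
        d = shrink(Du + b, lam / mu)                      # soft-threshold the gradient variable
        b = b + Du - d                                    # Bregman update
    return u

# Example: denoise a noisy piecewise-constant profile.
rng = np.random.default_rng(3)
signal = np.concatenate([np.zeros(50), np.ones(50), 0.3 * np.ones(50)])
u_hat = tv_denoise_1d(signal + 0.1 * rng.standard_normal(150), lam=0.5, mu=1.0)
```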

  15. Isotope pattern deconvolution for peptide mass spectrometry by non-negative least squares/least absolute deviation template matching

    PubMed Central

    2012-01-01

    Background The robust identification of isotope patterns originating from peptides being analyzed through mass spectrometry (MS) is often significantly hampered by noise artifacts and the interference of overlapping patterns arising e.g. from post-translational modifications. As the classification of the recorded data points into either ‘noise’ or ‘signal’ lies at the very root of essentially every proteomic application, the quality of the automated processing of mass spectra can significantly influence the way the data might be interpreted within a given biological context. Results We propose non-negative least squares/non-negative least absolute deviation regression to fit a raw spectrum by templates imitating isotope patterns. In a carefully designed validation scheme, we show that the method exhibits excellent performance in pattern picking. It is demonstrated that the method is able to disentangle complicated overlaps of patterns. Conclusions We find that regularization is not necessary to prevent overfitting and that thresholding is an effective and user-friendly way to perform feature selection. The proposed method avoids problems inherent in regularization-based approaches, comes with a set of well-interpretable parameters whose default configuration is shown to generalize well without the need for fine-tuning, and is applicable to spectra of different platforms. The R package IPPD implements the method and is available from the Bioconductor platform (http://bioconductor.fhcrc.org/help/bioc-views/devel/bioc/html/IPPD.html). PMID:23137144
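A minimal sketch, assuming a SciPy environment, of non-negative least squares template matching of the kind described above: each column of `T` is an isotope-pattern template and the non-negative coefficients indicate which patterns are present. The Gaussian templates, peak spacing, and the simple threshold are purely illustrative, not the IPPD package's templates.

```python
import numpy as np
from scipy.optimize import nnls

mz = np.linspace(0, 10, 500)

def template(center):
    # crude stand-in for an isotope pattern: three peaks spaced 1 Th apart with decaying intensity
    return sum(w * np.exp(-0.5 * ((mz - center - k) / 0.05) ** 2)
               for k, w in enumerate([1.0, 0.6, 0.3]))

T = np.column_stack([template(c) for c in np.arange(1.0, 8.0, 0.5)])
spectrum = (2.0 * template(2.0) + 0.8 * template(5.5)
            + 0.02 * np.random.default_rng(4).standard_normal(mz.size))

coef, resid = nnls(T, spectrum)                 # non-negative fit of the spectrum by templates
picked = np.flatnonzero(coef > 0.1)             # simple thresholding as feature selection
```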

  16. Arbitrary Symbolism in Natural Language Revisited: When Word Forms Carry Meaning

    PubMed Central

    Reilly, Jamie; Westbury, Chris; Kean, Jacob; Peelle, Jonathan E.

    2012-01-01

    Cognitive science has a rich history of interest in the ways that languages represent abstract and concrete concepts (e.g., idea vs. dog). Until recently, this focus has centered largely on aspects of word meaning and semantic representation. However, recent corpora analyses have demonstrated that abstract and concrete words are also marked by phonological, orthographic, and morphological differences. These regularities in sound-meaning correspondence potentially allow listeners to infer certain aspects of semantics directly from word form. We investigated this relationship between form and meaning in a series of four experiments. In Experiments 1–2 we examined the role of metalinguistic knowledge in semantic decision by asking participants to make semantic judgments for aurally presented nonwords selectively varied by specific acoustic and phonetic parameters. Participants consistently associated increased word length and diminished wordlikeness with abstract concepts. In Experiment 3, participants completed a semantic decision task (i.e., abstract or concrete) for real words varied by length and concreteness. Participants were more likely to misclassify longer, inflected words (e.g., “apartment”) as abstract and shorter uninflected abstract words (e.g., “fate”) as concrete. In Experiment 4, we used a multiple regression to predict trial level naming data from a large corpus of nouns which revealed significant interaction effects between concreteness and word form. Together these results provide converging evidence for the hypothesis that listeners map sound to meaning through a non-arbitrary process using prior knowledge about statistical regularities in the surface forms of words. PMID:22879931

  17. Arbitrary symbolism in natural language revisited: when word forms carry meaning.

    PubMed

    Reilly, Jamie; Westbury, Chris; Kean, Jacob; Peelle, Jonathan E

    2012-01-01

    Cognitive science has a rich history of interest in the ways that languages represent abstract and concrete concepts (e.g., idea vs. dog). Until recently, this focus has centered largely on aspects of word meaning and semantic representation. However, recent corpora analyses have demonstrated that abstract and concrete words are also marked by phonological, orthographic, and morphological differences. These regularities in sound-meaning correspondence potentially allow listeners to infer certain aspects of semantics directly from word form. We investigated this relationship between form and meaning in a series of four experiments. In Experiments 1-2 we examined the role of metalinguistic knowledge in semantic decision by asking participants to make semantic judgments for aurally presented nonwords selectively varied by specific acoustic and phonetic parameters. Participants consistently associated increased word length and diminished wordlikeness with abstract concepts. In Experiment 3, participants completed a semantic decision task (i.e., abstract or concrete) for real words varied by length and concreteness. Participants were more likely to misclassify longer, inflected words (e.g., "apartment") as abstract and shorter uninflected abstract words (e.g., "fate") as concrete. In Experiment 4, we used a multiple regression to predict trial level naming data from a large corpus of nouns which revealed significant interaction effects between concreteness and word form. Together these results provide converging evidence for the hypothesis that listeners map sound to meaning through a non-arbitrary process using prior knowledge about statistical regularities in the surface forms of words.

  18. Nonlocal sparse model with adaptive structural clustering for feature extraction of aero-engine bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang

    2016-04-01

Fault information of aero-engine bearings presents two particular phenomena, i.e., waveform distortion and impulsive feature frequency band dispersion, which lead to a challenging problem for current techniques of bearing fault diagnosis. Moreover, although much progress in sparse representation theory has been made in feature extraction of fault information, the theory also confronts inevitable performance degradation due to the fact that relatively weak fault information does not have sufficiently prominent and sparse representations. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithmic framework are proposed in this paper, which go beyond simple sparsity by introducing more intrinsic structures of feature information. This work adequately exploits the underlying prior information that feature information exhibits nonlocal self-similarity, by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term and a sparse optimization problem, which can be solved through the block coordinate descent (BCD) method, is formulated. Additionally, the adaptive structural clustering sparse dictionary learning technique, which utilizes k-Nearest-Neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further enable sufficient sparsity of feature information. Moreover, the selection rule for the regularization parameter and the computational complexity are described in detail. The performance of the proposed framework is evaluated through numerical experiments, and its superiority with respect to the state-of-the-art method in the field is demonstrated using the vibration signals of an experimental rig of aircraft engine bearings.
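A minimal, hedged sketch of the adaptive structural clustering idea summarized above: similar signal fragments are grouped by nearest-neighbour search, a PCA dictionary is formed per group, and the coefficients are soft-thresholded to enforce sparsity. The patch length, `k`, and threshold are assumptions, not the paper's settings, and the simple averaging of overlaps stands in for the full NLSM optimization.

```python
import numpy as np

def nonlocal_sparse_denoise(x, patch_len=32, k=8, thresh=0.1):
    x = np.asarray(x, dtype=float)
    starts = np.arange(0, x.size - patch_len + 1, patch_len // 2)
    patches = np.stack([x[i:i + patch_len] for i in starts])
    out = np.zeros_like(x)
    counts = np.zeros_like(x)
    for p, start in enumerate(starts):
        # the k most similar fragments (Euclidean distance) form a nonlocal group
        dists = np.linalg.norm(patches - patches[p], axis=1)
        nbrs = np.argsort(dists)[:k]
        group = patches[nbrs]
        mean = group.mean(axis=0)
        # PCA dictionary of the group via SVD; coefficients are soft-thresholded
        _, _, Vt = np.linalg.svd(group - mean, full_matrices=False)
        coef = (group - mean) @ Vt.T
        coef = np.sign(coef) * np.maximum(np.abs(coef) - thresh, 0.0)
        rec = coef @ Vt + mean
        out[start:start + patch_len] += rec[0]     # rec[0] reconstructs the reference patch itself
        counts[start:start + patch_len] += 1.0
    return out / np.maximum(counts, 1.0)
```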

  19. Urban remote sensing in areas of conflict: TerraSAR-X and Sentinel-1 change detection in the Middle East

    NASA Astrophysics Data System (ADS)

    Tapete, Deodato; Cigna, Francesca

    2016-08-01

Timely availability of images of suitable spatial resolution, temporal frequency and coverage is currently one of the major technical constraints on the application of satellite SAR remote sensing for the conservation of heritage assets in urban environments that are impacted by human-induced transformation. TerraSAR-X and Sentinel-1A, in this regard, are two different models of SAR data provision: very high resolution on-demand imagery with end user-selected acquisition parameters, on one side, and freely accessible GIS-ready products with intended regular temporal coverage, on the other. What this means for change detection analyses in urban areas is demonstrated in this paper via the experiment over Homs, the third largest city of Syria, with a history of settlement since 2300 BCE, where the impacts of the recent civil war combine with pre- and post-conflict urban transformation. The potential performance of Sentinel-1A StripMap scenes acquired in an emergency context is simulated via the matching StripMap beam mode offered by TerraSAR-X. Benefits and limitations of the different radar frequency band, spatial resolution and single/multi-channel polarization are discussed, as a proof-of-concept of regular monitoring currently achievable with space-borne SAR in historic urban settings. Urban transformation observed across Homs in 2009, 2014 and 2015 shows the impact of the Syrian conflict on the cityscape and proves that operator-driven interpretation is required to understand the complexity of multiple and overlapping urban changes.

  20. A Course in Polymer Processing.

    ERIC Educational Resources Information Center

    Soong, David S.

    1985-01-01

    A special-topics course in polymer processing has acquired regular course status. Course goals, content (including such new topics as polymer applications in microelectronics), and selected term projects are described. (JN)

  1. Applications of exact traveling wave solutions of Modified Liouville and the Symmetric Regularized Long Wave equations via two new techniques

    NASA Astrophysics Data System (ADS)

    Lu, Dianchen; Seadawy, Aly R.; Ali, Asghar

    2018-06-01

In this current work, we employ novel methods to find the exact travelling wave solutions of the Modified Liouville equation and the Symmetric Regularized Long Wave equation, namely the extended simple equation and exp(-Ψ(ξ))-expansion methods. By assigning different values to the parameters, different types of solitary wave solutions are derived from the exact traveling wave solutions, which shows the efficiency and precision of our methods. Some solutions are represented graphically. The obtained results have several applications in physical science.

  2. Existence, uniqueness and regularity of a time-periodic probability density distribution arising in a sedimentation-diffusion problem

    NASA Technical Reports Server (NTRS)

    Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard

    1988-01-01

    The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in vertical fluid-filled cylinder of finite length which is instantaneously inverted at regular intervals are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.

  3. Supplementary data of “Impacts of mesic and xeric urban vegetation on outdoor thermal comfort and microclimate in Phoenix, AZ”

    PubMed Central

    Song, Jiyun; Wang, Zhi-Hua

    2015-01-01

An advanced Markov-Chain Monte Carlo approach called Subset Simulation, described in Au and Beck (2001) [1], was used to quantify parameter uncertainty and model sensitivity of the urban land-atmospheric framework, viz. the coupled urban canopy model-single column model (UCM-SCM). The results show that the atmospheric dynamics are sensitive to land surface conditions. The most sensitive parameters are dimensional parameters, i.e. roof width, aspect ratio, and roughness length of heat and momentum, since these parameters control the magnitude of the sensible heat flux. The relatively insensitive parameters are the hydrological parameters, since the lawns or green roofs in urban areas are regularly irrigated so that the water availability for evaporation is never constrained. PMID:26702421

  4. Long-Term Low-Dose Aspirin Use Reduces Gastric Cancer Incidence: A Nationwide Cohort Study.

    PubMed

    Kim, Young-Il; Kim, So Young; Kim, Ji Hyun; Lee, Jun Ho; Kim, Young-Woo; Ryu, Keun Won; Park, Jong-Hyock; Choi, Il Ju

    2016-04-01

The aim of this study was to investigate whether aspirin use can reduce the incidence of gastric cancer in patients with hypertension or type 2 diabetes. A total of 200,000 patients with hypertension or type 2 diabetes were randomly selected from the Korean National Health Insurance claim database. Of these, 3,907 patients who used 100 mg of aspirin regularly (regular aspirin users) and 7,808 patients who did not use aspirin regularly (aspirin non-users) were selected at a frequency of 1:2, matched by age, sex, comorbid illnesses (type 2 diabetes and hypertension), and observation periods. The incidence of gastric cancer in this cohort was then assessed during the observation period of 2004 to 2010. In the matched cohort, the incidence rates of gastric cancer were 0.8% (31/3,907) for regular aspirin users and 1.1% (86/7,808) for aspirin non-users, but the cumulative incidence rates were not significantly different between groups (p=0.116, log-rank test). However, in multivariate analysis, regular aspirin users had a reduced risk of gastric cancer (adjusted hazard ratio [aHR], 0.71; 95% confidence interval [CI], 0.47 to 1.08; p=0.107). Duration of aspirin use showed significant association with reduction of gastric cancer risk (aHR for each year of aspirin use, 0.85; 95% CI, 0.73 to 0.99; p=0.044), particularly in patients who used aspirin for more than 3 years (aHR, 0.40; 95% CI, 0.16 to 0.98; p=0.045). Long-term low-dose aspirin use was associated with reduced gastric cancer risk in patients with hypertension or type 2 diabetes.

  5. Long-Term Low-Dose Aspirin Use Reduces Gastric Cancer Incidence: A Nationwide Cohort Study

    PubMed Central

    Kim, Young-Il; Kim, So Young; Kim, Ji Hyun; Lee, Jun Ho; Kim, Young-Woo; Ryu, Keun Won; Park, Jong-Hyock; Choi, Il Ju

    2016-01-01

    Purpose The aim of this study was to investigate whether aspirin use can reduce the incidence of gastric cancer in patients with hypertension or type 2 diabetes. Materials and Methods A total of 200,000 patients with hypertension or type 2 diabetes were randomly selected from the Korean National Health Insurance claim database. Of these, 3,907 patients who used 100 mg of aspirin regularly (regular aspirin users) and 7,808 patients who did not use aspirin regularly (aspirin non-users) were selected at a frequency of 1:2, matched by age, sex, comorbid illnesses (type 2 diabetes and hypertension), and observation periods. The incidence of gastric cancer in this cohort was then assessed during the observation period of 2004 to 2010. Results In the matched cohort, the incidence rates of gastric cancer were 0.8% (31/3,907) for regular aspirin users and 1.1% (86/7,808) for aspirin non-users, but the cumulative incidence rates were not significantly different between groups (p=0.116, log-rank test). However, in multivariate analysis, regular aspirin users had a reduced risk of gastric cancer (adjusted hazard ratio [aHR], 0.71; 95% confidence interval [CI], 0.47 to 1.08; p=0.107). Duration of aspirin use showed significant association with reduction of gastric cancer risk (aHR for each year of aspirin use, 0.85; 95% CI, 0.73 to 0.99; p=0.044), particularly in patients who used aspirin for more than 3 years (aHR, 0.40; 95% CI, 0.16 to 0.98; p=0.045). Conclusion Long-term low-dose aspirin use was associated with reduced gastric cancer risk in patients with hypertension or type 2 diabetes. PMID:26194372

  6. Calibration process of highly parameterized semi-distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

    Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, calibration is a complex process that has not yet been researched enough. Calibration is the procedure of determining those parameters of a model that are not known well enough. Input and output variables and the mathematical model expressions are known, while some parameters remain unknown and are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that leave the modeller little possibility to manage the process, and the results are often not the best. We develop a procedure for an expert-driven calibration process, using the HBV-light-CLI hydrological model, which has a command line interface, and coupling it with PEST. PEST is a parameter estimation tool that is widely used in groundwater modelling and can also be applied to surface waters. A calibration process managed directly by an expert affects the outcome of the inversion procedure in proportion to the expert's knowledge, and achieves better results than if the procedure were left entirely to the selected optimization algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological features such as karstic, alluvial and forest areas. This step requires geological, meteorological, hydraulic and hydrological knowledge of the modeller. The second step is to set initial parameter values at their preferred values based on expert knowledge; in this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events, and each sub-catchment in the model has its own observation group. The third step is to set appropriate bounds on the parameters within their range of realistic values. The fourth step is to use singular value decomposition (SVD), which ensures that PEST maintains numerical stability regardless of how ill-posed the inverse problem is. The fifth step is to run PWTADJ1, which creates a new PEST control file in which weights are adjusted such that the contribution made to the total objective function by each observation group is the same; this prevents the information content of any group from being invisible to the inversion process. The sixth step is to add Tikhonov regularization to the PEST control file by running the ADDREG1 utility (Doherty, J., 2013). In adding regularization to the PEST control file, ADDREG1 automatically provides a prior information equation for each parameter in which the preferred value of that parameter is equated to its initial value. The last step is to run PEST. We run BeoPEST, a parallel version of PEST that can be run on multiple computers simultaneously over TCP communications, which speeds up the calibration process. A case study with the results of calibration and validation of the model will be presented.
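
    As a loose, generic illustration of the Tikhonov "preferred value" idea used in the regularization step above (this is not PEST, ADDREG1 or HBV itself), the sketch below appends prior-information residuals that pull each parameter toward an expert-chosen preferred value during a weighted least-squares calibration; the toy rainfall-runoff response, weights and regularization weight are assumptions.

        # Minimal sketch (not PEST): Tikhonov "preferred value" regularization added to a
        # generic weighted least-squares calibration. Model, data, weights and lambda are
        # hypothetical placeholders chosen so the example runs.
        import numpy as np
        from scipy.optimize import least_squares

        def simulate(params, forcing):
            # Hypothetical stand-in for a rainfall-runoff model run; a toy saturating response.
            k, s_max = params
            return s_max * (1.0 - np.exp(-k * forcing))

        def residuals(params, forcing, observed, weights, preferred, lam):
            # Weighted data-misfit residuals plus Tikhonov residuals that pull each
            # parameter toward its expert-chosen preferred (initial) value.
            data_res = weights * (simulate(params, forcing) - observed)
            reg_res = np.sqrt(lam) * (params - preferred)
            return np.concatenate([data_res, reg_res])

        rng = np.random.default_rng(0)
        forcing = np.linspace(0.1, 10.0, 50)
        true_params = np.array([0.4, 12.0])
        observed = simulate(true_params, forcing) + rng.normal(0, 0.3, forcing.size)
        weights = np.ones_like(observed)        # would differ per observation group
        preferred = np.array([0.5, 10.0])       # expert-chosen initial/preferred values
        lam = 0.1                               # regularization weight

        fit = least_squares(residuals, x0=preferred,
                            bounds=([0.01, 1.0], [2.0, 50.0]),
                            args=(forcing, observed, weights, preferred, lam))
        print("calibrated parameters:", fit.x)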

  7. Learning SAS’s Perl Regular Expression Matching the Easy Way: By Doing

    DTIC Science & Technology

    2015-01-12

    The regex_learning_tool allows both beginner and expert to efficiently practice PRX matching by selecting and processing only the match records that the user is interested in, for a given Perl regular expression and/or source string.

  8. Terminal attractors for addressable memory in neural networks

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1988-01-01

    A new type of attractor - the terminal attractor - for addressable memory in neural networks operating in continuous time is introduced. These attractors represent singular solutions of the dynamical system. They intersect (or envelop) the families of regular solutions, while each regular solution approaches the terminal attractor in a finite time period. It is shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the weight matrix.

  9. Global optimization for motion estimation with applications to ultrasound videos of carotid artery plaques

    NASA Astrophysics Data System (ADS)

    Murillo, Sergio; Pattichis, Marios; Soliz, Peter; Barriga, Simon; Loizou, C. P.; Pattichis, C. S.

    2010-03-01

    Motion estimation from digital video is an ill-posed problem that requires a regularization approach. Regularization introduces a smoothness constraint that can reduce the resolution of the velocity estimates. The problem is further complicated for ultrasound videos (US), where speckle noise levels can be significant. Motion estimation using optical flow models requires the modification of several parameters to satisfy the optical flow constraint as well as the level of imposed smoothness. Furthermore, except in simulations or mostly unrealistic cases, there is no ground truth to use for validating the velocity estimates. This problem is present in all real video sequences that are used as input to motion estimation algorithms. It is also an open problem in biomedical applications like motion analysis of US of carotid artery (CA) plaques. In this paper, we study the problem of obtaining reliable ultrasound video motion estimates for atherosclerotic plaques for use in clinical diagnosis. A global optimization framework for motion parameter optimization is presented. This framework uses actual carotid artery motions to provide optimal parameter values for a variety of motions and is tested on ten different US videos using two different motion estimation techniques.
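
    As a compact illustration of the kind of regularized optical-flow estimator discussed above (not the authors' optimization framework), the sketch below implements the classical Horn-Schunck scheme, in which the smoothness weight alpha is the parameter that trades the optical flow constraint against the imposed smoothness; the frames, alpha and iteration count are toy choices.

        # Hedged sketch of Horn-Schunck optical flow; alpha plays the role of the
        # regularization (smoothness) parameter. All inputs below are synthetic.
        import numpy as np
        from scipy.ndimage import convolve

        def horn_schunck(frame1, frame2, alpha=1.0, n_iter=100):
            f1 = frame1.astype(float)
            f2 = frame2.astype(float)
            # Simple spatio-temporal derivative estimates.
            Iy, Ix = np.gradient((f1 + f2) / 2.0)
            It = f2 - f1
            # Neighbourhood-averaging kernel used in the classical iterative scheme.
            avg = np.array([[1/12, 1/6, 1/12],
                            [1/6,  0.0, 1/6 ],
                            [1/12, 1/6, 1/12]])
            u = np.zeros_like(f1)
            v = np.zeros_like(f1)
            for _ in range(n_iter):
                u_bar = convolve(u, avg)
                v_bar = convolve(v, avg)
                common = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
                u = u_bar - Ix * common
                v = v_bar - Iy * common
            return u, v

        # Toy usage: a bright square shifted by one pixel between frames.
        a = np.zeros((64, 64)); a[20:30, 20:30] = 1.0
        b = np.roll(a, shift=1, axis=1)
        u, v = horn_schunck(a, b, alpha=1.0, n_iter=200)
        print("mean horizontal flow inside the square region:", u[20:30, 20:30].mean())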

  10. On decoupling of volatility smile and term structure in inverse option pricing

    NASA Astrophysics Data System (ADS)

    Egger, Herbert; Hein, Torsten; Hofmann, Bernd

    2006-08-01

    Correct pricing of options and other financial derivatives is of great importance to financial markets and one of the key subjects of mathematical finance. Usually, parameters specifying the underlying stochastic model are not directly observable, but have to be determined indirectly from observable quantities. The identification of local volatility surfaces from market data of European vanilla options is one very important example of this type. As with many other parameter identification problems, the reconstruction of local volatility surfaces is ill-posed, and reasonable results can only be achieved via regularization methods. Moreover, due to the sparsity of data, the local volatility is not uniquely determined, but depends strongly on the kind of regularization norm used and a good a priori guess for the parameter. By assuming a multiplicative structure for the local volatility, which is motivated by the specific data situation, the inverse problem can be decomposed into two separate sub-problems. This removes part of the non-uniqueness and allows us to establish convergence and convergence rates under weak assumptions. Additionally, a numerical solution of the two sub-problems is much cheaper than that of the overall identification problem. The theoretical results are illustrated by numerical tests.

  11. Bayesian Inference for Generalized Linear Models for Spiking Neurons

    PubMed Central

    Gerwinn, Sebastian; Macke, Jakob H.; Bethge, Matthias

    2010-01-01

    Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated data as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate. PMID:20577627
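
    As a simple illustration of prior-based regularization for a GLM, the sketch below computes a MAP estimate for a Poisson GLM under a Gaussian (ridge-like) prior; it is not the Expectation Propagation scheme of the paper, and the stimulus, weights and prior variance are made-up values.

        # Hedged sketch: MAP fit of a Poisson GLM with a Gaussian prior on the weights.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        n_samples, n_features = 500, 20
        X = rng.normal(size=(n_samples, n_features))       # stimulus design matrix
        w_true = rng.normal(scale=0.3, size=n_features)
        y = rng.poisson(np.exp(X @ w_true))                 # synthetic spike counts

        def neg_log_posterior(w, X, y, prior_var):
            eta = X @ w
            # Poisson negative log-likelihood (up to a constant) plus Gaussian prior term.
            nll = np.sum(np.exp(eta) - y * eta)
            return nll + 0.5 * np.dot(w, w) / prior_var

        res = minimize(neg_log_posterior, x0=np.zeros(n_features),
                       args=(X, y, 1.0), method="L-BFGS-B")
        print("MAP weights (first five):", np.round(res.x[:5], 3))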

  12. SAR data for river ice monitoring. How to meet requirements?

    NASA Astrophysics Data System (ADS)

    Łoś, Helena; Osińska-Skotak, Katarzyna; Pluto-Kossakowska, Joanna

    2017-04-01

    Although river ice is a natural element of a river's regime, it can lead to severe problems such as winter floods or damage to bridges and bank revetments. Services that monitor river ice conditions are still often based on field observation. For several years, however, Earth observation data have been of great interest, especially SAR images, which allow ice and river conditions to be observed independently of clouds and sunlight. One of the requirements of an effective monitoring system is frequent and regular data acquisition. To help meet this requirement, we assessed the impact of selected SAR data parameters on automatic ice-type identification. The presented work consists of two parts. The first focuses on a comparison of C-band and X-band data in terms of main ice-type detection. The second part contains an analysis of polarisation reduction from quad-pol to dual-pol data. As the main element of data processing we chose supervised classification with the maximum likelihood algorithm adapted to the Wishart distribution. The classification was preceded by a statistical analysis of the radar signal obtained for selected ice types, including separability measures. Two rivers were selected as areas of interest: the Peace River in Canada and the Vistula in Poland. The results show that data registered in both bands yield similar classification accuracy for the main ice types; differences appear in the details, e.g. thin initial ice. Classification results obtained from quad-pol and dual-pol data were similar when four classes were selected. With six classes, however, differences between polarisation types were noticed.

  13. Total variation superiorized conjugate gradient method for image reconstruction

    NASA Astrophysics Data System (ADS)

    Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.

    2018-03-01

    The conjugate gradient (CG) method is commonly used for the relatively-rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, then it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ɛ. It is proved that, for any given ɛ that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ɛ. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ɛ of the half-squared residual.
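
    As a minimal illustration of the least-squares solver that the superiorization scheme builds on, the sketch below applies plain conjugate gradient to the normal equations A^T A x = A^T b; it contains no TV superiorization, and the operator, data and tolerance are toy values.

        # Hedged sketch: conjugate gradient on the least-squares normal equations (CGLS-style).
        import numpy as np

        def cg_least_squares(A, b, n_iter=50, tol=1e-10):
            x = np.zeros(A.shape[1])
            r = A.T @ (b - A @ x)        # residual of the normal equations
            p = r.copy()
            rs_old = r @ r
            for _ in range(n_iter):
                Ap = A.T @ (A @ p)
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

        rng = np.random.default_rng(2)
        A = rng.normal(size=(100, 30))
        b = A @ rng.normal(size=30) + 0.01 * rng.normal(size=100)
        x = cg_least_squares(A, b)
        print("residual norm:", np.linalg.norm(A @ x - b))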

  14. Extremely Selective Attention: Eye-Tracking Studies of the Dynamic Allocation of Attention to Stimulus Features in Categorization

    ERIC Educational Resources Information Center

    Blair, Mark R.; Watson, Marcus R.; Walshe, R. Calen; Maj, Fillip

    2009-01-01

    Humans have an extremely flexible ability to categorize regularities in their environment, in part because of attentional systems that allow them to focus on important perceptual information. In formal theories of categorization, attention is typically modeled with weights that selectively bias the processing of stimulus features. These theories…

  15. Examination of a Social Problem-Solving Intervention to Treat Selective Mutism

    ERIC Educational Resources Information Center

    O'Reilly, Mark; McNally, Deirdre; Sigafoos, Jeff; Lancioni, Giulio E.; Green, Vanessa; Edrisinha, Chaturi; Machalicek, Wendy; Sorrells, Audrey; Lang, Russell; Didden, Robert

    2008-01-01

    The authors examined the use of a social problem-solving intervention to treat selective mutism with 2 sisters in an elementary school setting. Both girls were taught to answer teacher questions in front of their classroom peers during regular classroom instruction. Each girl received individualized instruction from a therapist and was taught to…

  16. Instructional Practices in Fifth-Through Eighth-Grade Science Classrooms of a Selected Seventh-Day Adventist Conference

    ERIC Educational Resources Information Center

    Burton, Larry D.; Nino, Ruth J.; Hollingsead, Candice C.

    2004-01-01

    This investigation focused on instructional practices within fifth- through eighth-grade science classes of selected Seventh-day Adventist schools. Teachers reported regular use of discussion, student projects, and tests or quizzes. Most respondents said they did not feel prepared or had "never heard of" inquiry, the learning cycle, or…

  17. 77 FR 37082 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-20

    ... Exchange for certain regular orders in 25 securities traded on the Exchange (``Special Non-Select Penny Pilot Symbols'').\\3\\ For trading in the Special Non-Select Penny Pilot Symbols, the Exchange currently... per contract for Non-ISE Market Maker \\5\\ orders. ISE Market Maker orders \\6\\ in these symbols are...

  18. School achievement of children with intellectual disability: the role of socioeconomic status, placement, and parents' engagement.

    PubMed

    Szumski, Grzegorz; Karwowski, Maciej

    2012-01-01

    The objective of this study was to describe the selected conditions for school achievement of students with mild intellectual disabilities from Polish elementary schools. Participants were 605 students with mild disabilities from integrative, regular, and special schools, and their parents (N=429). It was found that socioeconomic status (SES) was positively associated with child placement in integrative and regular schools rather than special schools, as well as with higher parental engagement in their children's studies. Parental engagement mediated the positive effects of SES and placement in regular and integrative schools on school achievement. The results are discussed in the context of inclusive education theory. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Damage identification using inverse methods.

    PubMed

    Friswell, Michael I

    2007-02-15

    This paper gives an overview of the use of inverse methods in damage detection and location, using measured vibration data. Inverse problems require the use of a model and the identification of uncertain parameters of this model. Damage is often local in nature and although the effect of the loss of stiffness may require only a small number of parameters, the lack of knowledge of the location means that a large number of candidate parameters must be included. This paper discusses a number of problems that exist with this approach to health monitoring, including modelling error, environmental effects, damage localization and regularization.

  20. Magnetic field effects on peristaltic flow of blood in a non-uniform channel

    NASA Astrophysics Data System (ADS)

    Latha, R.; Rushi Kumar, B.

    2017-11-01

    The objective of this paper is to explore the effect of MHD on the peristaltic transport of blood in a non-uniform channel under the long wavelength approximation with low (zero) Reynolds number. Blood is modelled as an incompressible, viscous and electrically conducting fluid. Explicit expressions for the axial velocity and axial pressure gradient are derived using the long wavelength assumption with slip and regularity conditions. It is found that the pressure gradient diminishes as the couple stress parameter increases, and it also decreases as the magnetic parameter increases. The embedded parameters are further examined through graphs.

  1. [Formation of individual somatotype parameters and features of constitutional organization in 7-12-year-old boys (longitudinal)].

    PubMed

    Kornienko, I A; Panasiuk, T V; Tambovtseva, R V

    1997-01-01

    Individual somatotype parameters and peculiarities of constitution in 7-12-year-old boys were evaluated in the present investigation. The age range studied was shown to be divided into 3 stages. Regular growth processes with a prevalence of "infantile" proportions occur at the age of 7-9, when signs of a definite constitutional type are still insufficiently expressed. The age of 10-11 is transitional, which shows in a delay of muscle growth. At the age of 11-12, prepubescence sets in, during which the features of constitutional types and the corresponding somatotype parameters are distinctly manifested.

  2. The effect of mydriasis from phenylephrine on corneal shape.

    PubMed

    Huang, Ronnie Y C; Lam, Andrew K C

    2007-01-01

    A previous study reported that pharmacologically-dilated pupils changed the corneal shape. Researchers used mydriatic agents with significant cycloplegic effect. The current study investigates the effect of mydriasis on corneal shape using phenylephrine alone, which has minimal effect on the accommodative system, and examines whether corneal topography can be performed after pupil dilation. Forty-four young healthy subjects with one eye randomly selected for mydriasis were used in this study. Twenty-two received one drop of 2.5% phenylephrine (group 1); the other 22 subjects had one drop of 0.4% benoxinate instilled prior to the application of 2.5% phenylephrine (group 2). They were matched for age and refractive error. Anterior chamber depth, pupil size and corneal parameters were compared before and after mydriasis. The corneal parameters included best-fit sphere (BFS), surface asymmetry index (SAI), surface regularity index (SRI) and the axial and tangential powers in the form of flattest and steepest powers, and in the form of M, J(0), and J(45) vector presentation. Group 1 and group 2 subjects had similar pre-mydriatic baseline ocular parameters. The mean (+/- SD) pupil dilation was 1.24 +/- 0.59 mm for group 1 and 1.80 +/- 0.95 mm for group 2. The dilation was significantly larger in group 2 (unpaired t-tests: t = 2.36, p = 0.02). There were no significant changes in corneal parameters from mydriasis in either group. Previous investigations used mydriatic agents, which affected not only the pupil size but also accommodation. The current study found that mydriasis from phenylephrine, with minimal effect on accommodation, did not result in significant corneal alteration, and corneal topography can be measured after pupil dilation with phenylephrine.

  3. Phosphatidylserine exposure on stored red blood cells as a parameter for donor-dependent variation in product quality.

    PubMed

    Dinkla, Sip; Peppelman, Malou; Van Der Raadt, Jori; Atsma, Femke; Novotný, Vera M J; Van Kraaij, Marian G J; Joosten, Irma; Bosman, Giel J C G M

    2014-04-01

    Exposure of phosphatidylserine on the outside of red blood cells contributes to recognition and removal of old and damaged cells. The fraction of phosphatidylserine-exposing red blood cells varies between donors, and increases in red blood cell concentrates during storage. The susceptibility of red blood cells to stress-induced phosphatidylserine exposure increases with storage. Phosphatidylserine exposure may, therefore, constitute a link between donor variation and the quality of red blood cell concentrates. In order to examine the relationship between storage parameters and donor characteristics, the percentage of phosphatidylserine-exposing red blood cells was measured in red blood cell concentrates during storage and in fresh red blood cells from blood bank donors. The percentage of phosphatidylserine-exposing red blood cells was compared with red blood cell susceptibility to osmotic stress-induced phosphatidylserine exposure in vitro, with the regular red blood cell concentrate quality parameters, and with the donor characteristics age, body mass index, haemoglobin level, gender and blood group. Phosphatidylserine exposure varies between donors, both on red blood cells freshly isolated from the blood, and on red blood cells in red blood cell concentrates. Phosphatidylserine exposure increases with storage time, and is correlated with stress-induced phosphatidylserine exposure. Increased phosphatidylserine exposure during storage was found to be associated with haemolysis and vesicle concentration in red blood cell concentrates. The percentage of phosphatidylserine-exposing red blood cells showed a positive correlation with the plasma haemoglobin concentration of the donor. The fraction of phosphatidylserine-exposing red blood cells is a parameter of red blood cell integrity in red blood cell concentrates and may be an indicator of red blood cell survival after transfusion. Measurement of phosphatidylserine exposure may be useful in the selection of donors and red blood cell concentrates for specific groups of patients.

  4. Predicting Near-Term Water Quality from Satellite Observations of Watershed Conditions

    NASA Astrophysics Data System (ADS)

    Weiss, W. J.; Wang, L.; Hoffman, K.; West, D.; Mehta, A. V.; Lee, C.

    2017-12-01

    Despite the strong influence of watershed conditions on source water quality, most water utilities and water resource agencies do not currently have the capability to monitor watershed sources of contamination with great temporal or spatial detail. Typically, knowledge of source water quality is limited to periodic grab sampling; automated monitoring of a limited number of parameters at a few select locations; and/or monitoring relevant constituents at a treatment plant intake. While important, such observations are not sufficient to inform proactive watershed or source water management at a monthly or seasonal scale. Satellite remote sensing data on the other hand can provide a snapshot of an entire watershed at regular, sub-monthly intervals, helping analysts characterize watershed conditions and identify trends that could signal changes in source water quality. Accordingly, the authors are investigating correlations between satellite remote sensing observations of watersheds and source water quality, at a variety of spatial and temporal scales and lags. While correlations between remote sensing observations and direct in situ measurements of water quality have been well described in the literature, there are few studies that link remote sensing observations across a watershed with near-term predictions of water quality. In this presentation, the authors will describe results of statistical analyses and discuss how these results are being used to inform development of a desktop decision support tool to support predictive application of remote sensing data. Predictor variables under evaluation include parameters that describe vegetative conditions; parameters that describe climate/weather conditions; and non-remote sensing, in situ measurements. Water quality parameters under investigation include nitrogen, phosphorus, organic carbon, chlorophyll-a, and turbidity.

  5. Semi-supervised vibration-based classification and condition monitoring of compressors

    NASA Astrophysics Data System (ADS)

    Potočnik, Primož; Govekar, Edvard

    2017-09-01

    Semi-supervised vibration-based classification and condition monitoring of the reciprocating compressors installed in refrigeration appliances is proposed in this paper. The method addresses the problem of industrial condition monitoring where prior class definitions are often not available or difficult to obtain from local experts. The proposed method combines feature extraction, principal component analysis, and statistical analysis for the extraction of initial class representatives, and compares the capability of various classification methods, including discriminant analysis (DA), neural networks (NN), support vector machines (SVM), and extreme learning machines (ELM). The use of the method is demonstrated on a case study which was based on industrially acquired vibration measurements of reciprocating compressors during the production of refrigeration appliances. The paper presents a comparative qualitative analysis of the applied classifiers, confirming the good performance of several nonlinear classifiers. If the model parameters are properly selected, then very good classification performance can be obtained from NN trained by Bayesian regularization, SVM and ELM classifiers. The method can be effectively applied for the industrial condition monitoring of compressors.

  6. Multi-frame super-resolution with quality self-assessment for retinal fundus videos.

    PubMed

    Köhler, Thomas; Brost, Alexander; Mogalle, Katja; Zhang, Qianyi; Köhler, Christiane; Michelson, Georg; Hornegger, Joachim; Tornow, Ralf P

    2014-01-01

    This paper proposes a novel super-resolution framework to reconstruct high-resolution fundus images from multiple low-resolution video frames in retinal fundus imaging. Natural eye movements during an examination are used as a cue for super-resolution in a robust maximum a-posteriori scheme. In order to compensate for heterogeneous illumination on the fundus, we integrate retrospective illumination correction for photometric registration into the underlying imaging model. Our method utilizes quality self-assessment to provide objective quality scores for reconstructed images as well as to select regularization parameters automatically. In our evaluation on real data acquired from six human subjects with a low-cost video camera, the proposed method achieved considerable enhancements of low-resolution frames and improved noise and sharpness characteristics by 74%. In terms of image analysis, we demonstrate the importance of our method for the improvement of automatic blood vessel segmentation as an example application, where the sensitivity was increased by 13% using super-resolution reconstruction.

  7. [Knowledge and gaps on the role of nutrition and physical activity on the onset of childhood obesity].

    PubMed

    Bautista-Castaño, Inmaculada; Sangil-Monroy, Marta; Serra-Majem, Lluís

    2004-12-04

    Childhood and adolescent obesity has increased at alarming rates over the last few years, due to the concurrence of a variety of genetic and environmental factors. The aim of this study was to conduct a review of studies published in the past ten years evaluating the development of childhood obesity in relation to energy and macronutrient intake, their distribution throughout the day and physical activity patterns. 31 articles dealing with this subject were selected. The results obtained appear to indicate that reducing dietary fat and increasing dietary carbohydrate intakes, along with consuming an adequate breakfast and carrying out leisure-time physical activity on a regular basis, act as determining factors to prevent childhood and adolescent obesity, even though the strength of the evidence from these studies is low. It should be a priority to conduct follow-up studies with comparable methodologies in Mediterranean countries, in order to establish parameters for the prevention and control of childhood and adolescent obesity.

  8. Robust estimation for partially linear models with large-dimensional covariates

    PubMed Central

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2014-01-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes that the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures. PMID:24955087
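
    As a loose illustration of robust estimation combined with a sparsity-inducing penalty for covariate selection, the sketch below pairs a Huber loss with an L1 penalty (a convex surrogate, not the nonconcave penalty analysed in the paper) on synthetic heavy-tailed data.

        # Hedged sketch: robust (Huber) loss + L1 penalty for selecting linear-part covariates.
        import numpy as np
        from sklearn.linear_model import SGDRegressor

        rng = np.random.default_rng(3)
        n, p = 400, 50
        X = rng.normal(size=(n, p))
        beta = np.zeros(p); beta[:5] = [2.0, -1.5, 1.0, 0.8, -0.6]   # only 5 active covariates
        y = X @ beta + rng.standard_t(df=2, size=n)                  # heavy-tailed noise

        model = SGDRegressor(loss="huber", epsilon=1.35, penalty="l1",
                             alpha=0.01, max_iter=5000, random_state=0)
        model.fit(X, y)
        selected = np.flatnonzero(np.abs(model.coef_) > 1e-3)
        print("covariates kept by the L1 penalty:", selected)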

  9. Automated processing of the single-lead electrocardiogram for the detection of obstructive sleep apnoea.

    PubMed

    de Chazal, Philip; Heneghan, Conor; Sheridan, Elaine; Reilly, Richard; Nolan, Philip; O'Malley, Mark

    2003-06-01

    A method for the automatic processing of the electrocardiogram (ECG) for the detection of obstructive apnoea is presented. The method screens nighttime single-lead ECG recordings for the presence of major sleep apnoea and provides a minute-by-minute analysis of disordered breathing. A large independently validated database of 70 ECG recordings acquired from normal subjects and subjects with obstructive and mixed sleep apnoea, each of approximately eight hours in duration, was used throughout the study. Thirty-five of these recordings were used for training and 35 retained for independent testing. A wide variety of features based on heartbeat intervals and an ECG-derived respiratory signal were considered. Classifiers based on linear and quadratic discriminants were compared. Feature selection and regularization of classifier parameters were used to optimize classifier performance. Results show that the normal recordings could be separated from the apnoea recordings with a 100% success rate and a minute-by-minute classification accuracy of over 90% is achievable.
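
    For illustration, the sketch below trains regularized linear and quadratic discriminant classifiers on synthetic per-minute feature vectors, in the spirit of the classifier comparison described above; the shrinkage and reg_param choices and the data are illustrative assumptions, not the study's configuration.

        # Hedged sketch: LDA with shrinkage and QDA with covariance regularization.
        import numpy as np
        from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                                    QuadraticDiscriminantAnalysis)
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(4)
        n_minutes, n_features = 1000, 30
        X = rng.normal(size=(n_minutes, n_features))
        w = rng.normal(size=n_features)
        y = (X @ w + rng.normal(size=n_minutes) > 0).astype(int)   # apnoea vs normal minute

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

        lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X_tr, y_tr)
        qda = QuadraticDiscriminantAnalysis(reg_param=0.1).fit(X_tr, y_tr)
        print("LDA minute-by-minute accuracy:", lda.score(X_te, y_te))
        print("QDA minute-by-minute accuracy:", qda.score(X_te, y_te))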

  10. Self-assembly of triangular particles via capillary interactions

    NASA Astrophysics Data System (ADS)

    Bedi, Deshpreet; Zhou, Shangnan; Ferrar, Joseph; Solomon, Michael; Mao, Xiaoming

    Colloidal particles adsorbed to a fluid interface deform the interface around them, resulting in either attractive or repulsive forces mediated by the interface. In particular, particle shape and surface roughness can produce an undulating contact line, such that the particles will assume energetically-favorable relative orientations and inter-particle distances to minimize the excess interfacial surface area. By expediently selecting specific particle shapes and associated design parameters, capillary interactions can be utilized to promote self-assembly of these particles into extended regular open structures, such as the kagome lattice, which have novel mechanical properties. We present the results of numerical simulations of equilateral triangle microprisms at an interface, both individually and in pairs. We show how particle bowing can yield two distinct binding events and connect it to theory in terms of a capillary multipole expansion and also to experiment, as presented in an accompanying talk. We also discuss and suggest design principles that can be used to create desirable open structures.

  11. Hypervelocity Impact Test Facility: A gun for hire

    NASA Technical Reports Server (NTRS)

    Johnson, Calvin R.; Rose, M. F.; Hill, D. C.; Best, S.; Chaloupka, T.; Crawford, G.; Crumpler, M.; Stephens, B.

    1994-01-01

    An affordable technique has been developed to duplicate the types of impacts observed on spacecraft, including the Shuttle, by use of a certified Hypervelocity Impact Facility (HIF) which propels particulates using capacitor driven electric gun techniques. The fully operational facility provides a flux of particles in the 10-100 micron diameter range with a velocity distribution covering the space debris and interplanetary dust particle environment. HIF measurements of particle size, composition, impact angle and velocity distribution indicate that such parameters can be controlled in a specified, tailored test designed for or by the user. Unique diagnostics enable researchers to fully describe the impact for evaluating the 'targets' under full power or load. Users regularly evaluate space hardware, including solar cells, coatings, and materials, exposing selected portions of space-qualified items to a wide range of impact events and environmental conditions. Benefits include corroboration of data obtained from impact events, flight simulation of designs, accelerated aging of systems, and development of manufacturing techniques.

  12. [Characteristics of wheat powdery mildew growth along and across the longitudinal axis of a leaf under the action of exogenous zeatin].

    PubMed

    Riabchenko, A S; Avetisian, T V; Babosha, A V

    2009-01-01

    Scanning electron microscopy was used to investigate the regularities of the growth direction of infectious structures and colonies of the wheat powdery mildew agent Erysiphe graminis f. sp. tritici. The growth of appressoria with normal morphology on wheat leaves occurs predominantly along the long axis of the cell. Most anomalous appressoria grow perpendicularly. Treatment with zeatin changes the ratio of the growth directions of normal appressoria and of the hyphae of the colonies. The dependence of these parameters, and of the surface density of colonies, on the concentration of the phytohormone is monophasic. The hypothesis is suggested that the strategy of selecting the growth direction of infectious structures on leaves with an anisotropic surface depends on the most probable position of the receptor cell and on the action of cytokinins through their participation in the redistribution of nutrients between the infected and noninfected cells of the host plant.

  13. Robust estimation for partially linear models with large-dimensional covariates.

    PubMed

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2013-10-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of [Formula: see text], where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes that the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures.

  14. 32 CFR 901.14 - Regular airmen category.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Personnel Office (CBPO) to hold any reassignment action of the airman pending selection for an appointment... statement of the applicant's character, ability, and motivation to become a career officer. Statements in...

  15. The degree of cycle irregularity correlates with the grade of endocrine and metabolic disorders in PCOS patients.

    PubMed

    Strowitzki, Thomas; Capp, Edison; von Eye Corleta, Helena

    2010-04-01

    PCOS (polycystic ovarian syndrome) is a clinically heterogeneous endocrine disorder which affects up to 4-10% of women of reproductive age. A standardized definition is still difficult because of a huge variety of different phenotypes. The aim of this study was to evaluate possible correlations between the degree of cycle irregularity and the grade of endocrine and metabolic abnormalities. A cross-sectional study was carried out. Hyperandrogenic and/or hirsute women with regular menstrual cycles and polycystic ovaries on ultrasound (PCOS eumenorr, n=45), PCOS patients with oligomenorrhea (PCOS oligo, n=42) and PCOS patients with amenorrhea (PCOS amenorr, n=31) were recruited from the Department of Gynecological Endocrinology and Reproductive Medicine of the Women's University Hospital Heidelberg (Heidelberg, Germany). Normocyclic patients demonstrated significantly better metabolic parameters (BMI, fasting insulin, HOMA-IR) than patients with oligo/amenorrhea. Hormonal parameters (LH, FSH, FAI and testosterone) were significantly different between patients with different menstrual patterns and patients with regular cycles. Determining the degree of cycle irregularity as a simple clinical parameter might be a valuable instrument to estimate the degree of metabolic and endocrine disorders. Emphasis should be given to those parameters as a first step to characterize PCOS patients with a risk of endocrine and metabolic disorders leading to consequent detailed examination. Copyright (c) 2010 Elsevier Ireland Ltd. All rights reserved.

  16. Decaffeinated coffee improves insulin sensitivity in healthy men.

    PubMed

    Reis, Caio E G; Paiva, Cicília L R Dos S; Amato, Angélica A; Lofrano-Porto, Adriana; Wassell, Sara; Bluck, Leslie J C; Dórea, José G; da Costa, Teresa H M

    2018-05-01

    Epidemiological studies have found coffee consumption is associated with a lower risk for type 2 diabetes mellitus, but the underlying mechanisms remain unclear. Thus, the aim of this randomised, cross-over single-blind study was to investigate the effects of regular coffee, regular coffee with sugar and decaffeinated coffee consumption on glucose metabolism and incretin hormones. Seventeen healthy men participated in five trials each, during which they consumed coffee (decaffeinated, regular (containing caffeine) or regular with sugar) or water (with or without sugar). After 1 h of each intervention, they received an oral glucose tolerance test with one intravenous dose of [1-13C]glucose. The Oral Dose Intravenous Label Experiment was applied and glucose and insulin levels were interpreted using a stable isotope two-compartment minimal model. A mixed-model procedure (PROC MIXED), with subject as random effect and time as repeated measure, was used to compare the effects of the beverages on glucose metabolism and incretin parameters (glucose-dependent insulinotropic peptide (GIP)) and glucagon-like peptide-1 (GLP-1)). Insulin sensitivity was higher with decaffeinated coffee than with water (P<0·05). Regular coffee with sugar did not significantly affect glucose, insulin, C-peptide and incretin hormones, compared with water with sugar. Glucose, insulin, C-peptide, GLP-1 and GIP levels were not statistically different after regular and decaffeinated coffee compared with water. Our findings demonstrated that the consumption of decaffeinated coffee improves insulin sensitivity without changing incretin hormones levels. There was no short-term adverse effect on glucose homoeostasis, after an oral glucose challenge, attributable to the consumption of regular coffee with sugar.

  17. Improving HybrID: How to best combine indirect and direct encoding in evolutionary algorithms.

    PubMed

    Helms, Lucas; Clune, Jeff

    2017-01-01

    Many challenging engineering problems are regular, meaning solutions to one part of a problem can be reused to solve other parts. Evolutionary algorithms with indirect encoding perform better on regular problems because they reuse genomic information to create regular phenotypes. However, on problems that are mostly regular, but contain some irregularities, which describes most real-world problems, indirect encodings struggle to handle the irregularities, hurting performance. Direct encodings are better at producing irregular phenotypes, but cannot exploit regularity. An algorithm called HybrID combines the best of both: it first evolves with indirect encoding to exploit problem regularity, then switches to direct encoding to handle problem irregularity. While HybrID has been shown to outperform both indirect and direct encoding, its initial implementation required the manual specification of when to switch from indirect to direct encoding. In this paper, we test two new methods to improve HybrID by eliminating the need to manually specify this parameter. Auto-Switch-HybrID automatically switches from indirect to direct encoding when fitness stagnates. Offset-HybrID simultaneously evolves an indirect encoding with directly encoded offsets, eliminating the need to switch. We compare the original HybrID to these alternatives on three different problems with adjustable regularity. The results show that both Auto-Switch-HybrID and Offset-HybrID outperform the original HybrID on different types of problems, and thus offer more tools for researchers to solve challenging problems. The Offset-HybrID algorithm is particularly interesting because it suggests a path forward for automatically and simultaneously combining the best traits of indirect and direct encoding.

  18. A computational investigation of the finite-time blow-up of the 3D incompressible Euler equations based on the Voigt regularization

    DOE PAGES

    Larios, Adam; Petersen, Mark R.; Titi, Edriss S.; ...

    2017-04-29

    We report the results of a computational investigation of two blow-up criteria for the 3D incompressible Euler equations. One criterion was proven in a previous work, and a related criterion is proved here. These criteria are based on an inviscid regularization of the Euler equations known as the 3D Euler-Voigt equations, which are known to be globally well-posed. Moreover, simulations of the 3D Euler-Voigt equations also require less resolution than simulations of the 3D Euler equations for fixed values of the regularization parameter α > 0. Therefore, the new blow-up criteria allow one to gain information about possible singularity formation in the 3D Euler equations indirectly; namely, by simulating the better-behaved 3D Euler-Voigt equations. The new criteria are only known to be sufficient for blow-up. Therefore, to test the robustness of the inviscid-regularization approach, we also investigate analogous criteria for blow-up of the 1D Burgers equation, where blow-up is well-known to occur.

  19. A computational investigation of the finite-time blow-up of the 3D incompressible Euler equations based on the Voigt regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larios, Adam; Petersen, Mark R.; Titi, Edriss S.

    We report the results of a computational investigation of two blow-up criteria for the 3D incompressible Euler equations. One criterion was proven in a previous work, and a related criterion is proved here. These criteria are based on an inviscid regularization of the Euler equations known as the 3D Euler-Voigt equations, which are known to be globally well-posed. Moreover, simulations of the 3D Euler-Voigt equations also require less resolution than simulations of the 3D Euler equations for fixed values of the regularization parameter α > 0. Therefore, the new blow-up criteria allow one to gain information about possible singularity formation in the 3D Euler equations indirectly; namely, by simulating the better-behaved 3D Euler-Voigt equations. The new criteria are only known to be sufficient for blow-up. Therefore, to test the robustness of the inviscid-regularization approach, we also investigate analogous criteria for blow-up of the 1D Burgers equation, where blow-up is well-known to occur.

  20. Tool-assisted rhythmic drumming in palm cockatoos shares key elements of human instrumental music

    PubMed Central

    Heinsohn, Robert; Zdenek, Christina N.; Cunningham, Ross B.; Endler, John A.; Langmore, Naomi E.

    2017-01-01

    All human societies have music with a rhythmic “beat,” typically produced with percussive instruments such as drums. The set of capacities that allows humans to produce and perceive music appears to be deeply rooted in human biology, but an understanding of its evolutionary origins requires cross-taxa comparisons. We show that drumming by palm cockatoos (Probosciger aterrimus) shares the key rudiments of human instrumental music, including manufacture of a sound tool, performance in a consistent context, regular beat production, repeated components, and individual styles. Over 131 drumming sequences produced by 18 males, the beats occurred at nonrandom, regular intervals, yet individual males differed significantly in the shape parameters describing the distribution of their beat patterns, indicating individual drumming styles. Autocorrelation analyses of the longest drumming sequences further showed that they were highly regular and predictable like human music. These discoveries provide a rare comparative perspective on the evolution of rhythmicity and instrumental music in our own species, and show that a preference for a regular beat can have other origins before being co-opted into group-based music and dance. PMID:28782005

  1. Minimum entropy deconvolution optimized sinusoidal synthesis and its application to vibration based fault detection

    NASA Astrophysics Data System (ADS)

    Li, Gang; Zhao, Qing

    2017-03-01

    In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits the frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model has demonstrated superior performance compared to the regular SS method and it also shows comparable or better performance with much less computational intensity than the ARMED method.

  2. Comparison of least squares and exponential sine sweep methods for Parallel Hammerstein Models estimation

    NASA Astrophysics Data System (ADS)

    Rebillat, Marc; Schoukens, Maarten

    2018-05-01

    Linearity is a common assumption for many real-life systems, but in many cases the nonlinear behavior of systems cannot be ignored and must be modeled and estimated. Among the various existing classes of nonlinear models, Parallel Hammerstein Models (PHM) are interesting as they are at the same time easy to interpret as well as to estimate. One way to estimate PHM relies on the fact that the estimation problem is linear in the parameters and thus that classical least squares (LS) estimation algorithms can be used. In that area, this article introduces a regularized LS estimation algorithm inspired by some of the recently developed regularized impulse response estimation techniques. Another means of estimating PHM consists in using parametric or non-parametric exponential sine sweep (ESS) based methods. These methods (LS and ESS) are founded on radically different mathematical backgrounds but are expected to tackle the same issue. A methodology is proposed here to compare them with respect to (i) their accuracy, (ii) their computational cost, and (iii) their robustness to noise. Tests are performed on simulated systems for several values of the methods' respective parameters and of the signal-to-noise ratio. Results show that, for a given set of data points, the ESS method is less demanding in computational resources than the LS method but that it is also less accurate. Furthermore, the LS method needs parameters to be set in advance whereas the ESS method is not subject to conditioning issues and can be fully non-parametric. In summary, for a given set of data points, the ESS method can provide a first, automatic, and quick overview of a nonlinear system that can guide more computationally demanding and precise methods, such as the regularized LS one proposed here.
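
    The sketch below illustrates the linear-in-the-parameters idea for PHM estimation: the regressors are lagged powers of the input, and a ridge (Tikhonov-type) regularized least-squares fit recovers the branch impulse responses. The toy system, lag count, polynomial degree and ridge weight are illustrative assumptions, not the article's setup.

        # Hedged sketch: ridge-regularized LS estimation of a Parallel Hammerstein Model.
        import numpy as np

        rng = np.random.default_rng(5)
        N, n_lags, max_degree = 2000, 10, 3
        u = rng.normal(size=N)

        # Toy "true" system: one FIR filter per polynomial branch u, u**2, u**3.
        true_fir = [np.exp(-0.5 * np.arange(n_lags)) * c for c in (1.0, 0.4, -0.2)]
        y = sum(np.convolve(u**(d + 1), h)[:N] for d, h in enumerate(true_fir))
        y += 0.05 * rng.normal(size=N)

        # Regressor matrix: columns are u(t-l)**d for each branch degree d and lag l.
        cols = []
        for d in range(1, max_degree + 1):
            for l in range(n_lags):
                shifted = np.zeros(N)
                shifted[l:] = (u ** d)[:N - l]
                cols.append(shifted)
        Phi = np.column_stack(cols)

        lam = 1.0                               # ridge (Tikhonov) regularization weight
        theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
        print("relative output error:",
              np.linalg.norm(Phi @ theta - y) / np.linalg.norm(y))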

  3. W-phase estimation of first-order rupture distribution for megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2014-05-01

    Estimating the rupture pattern for large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite fault inversion for megathrust earthquakes which rapidly generates good first-order estimates and uncertainties of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude and a long-period character, the W phase is regularly used to estimate point source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of 3-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating the matrix based on the residual errors of consecutive inversions. Then, a covariance matrix for the parameters is computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of their slip distributions. Also, reliable solutions are generally obtained with data in a 30-minute window following the origin time, suggesting that a real-time system could obtain solutions in less than one hour following the origin time.
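
    A minimal sketch of choosing a Tikhonov regularization parameter by grid search under the discrepancy principle is given below: the largest parameter whose residual norm still matches the assumed noise level is retained. The forward operator, smoothing matrix and noise level are synthetic placeholders, not the W-phase inversion itself.

        # Hedged sketch: Tikhonov regularization with the discrepancy principle by grid search.
        import numpy as np

        rng = np.random.default_rng(6)
        n_data, n_params = 120, 60
        G = rng.normal(size=(n_data, n_params))
        m_true = np.sin(np.linspace(0, 3 * np.pi, n_params))
        sigma = 0.5
        d = G @ m_true + sigma * rng.normal(size=n_data)

        # First-difference operator as a simple smoothing (regularization) matrix L.
        L = np.diff(np.eye(n_params), axis=0)
        target = sigma * np.sqrt(n_data)          # expected residual norm of pure noise

        def solve_tikhonov(lam):
            A = G.T @ G + lam**2 * (L.T @ L)
            return np.linalg.solve(A, G.T @ d)

        best_lam, best_model = None, None
        for lam in np.logspace(-3, 2, 60):        # grid over the regularization parameter
            m = solve_tikhonov(lam)
            if np.linalg.norm(G @ m - d) <= target:   # discrepancy principle check
                best_lam, best_model = lam, m         # keep the largest lambda that still fits
        print("selected regularization parameter:", best_lam)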

  4. Exceptionality in vowel harmony

    NASA Astrophysics Data System (ADS)

    Szeredi, Daniel

    Vowel harmony has been of great interest in phonological research. It has been widely accepted that vowel harmony is a phonetically natural phenomenon, which means that it is a common pattern because it provides advantages to the speaker in articulation and to the listener in perception. Exceptional patterns proved to be a challenge to the phonetically grounded analysis as they, by their nature, introduce phonetically disadvantageous sequences to the surface form, consisting of harmonically different vowels. Such forms are found, for example, in the Finnish stem tuoli 'chair' or in the Hungarian suffixed form hi:d-hoz 'to the bridge', both word forms containing a mix of front and back vowels. There has recently been evidence that there might be a phonetic-level explanation for some exceptional patterns, namely the possibility that some vowels participating in irregular stems (like the vowel [i] in the Hungarian stem hi:d 'bridge' above) differ in some small phonetic detail from vowels in regular stems. The main question has not been raised, though: does this phonetic detail matter for speakers? Would they use these minor differences when they have to categorize a new word as regular or irregular? A different recent trend explains morphophonological exceptionality by looking at the phonotactic regularities characteristic of classes of stems based on their morphological behavior. Studies have shown that speakers are aware of these regularities, and use them as cues when they have to decide what class a novel stem belongs to. These sublexical phonotactic regularities have already been shown to be present in some exceptional vowel harmony patterns, but many questions remain open: how is learning the static generalization linked to learning the allomorph selection facet of vowel harmony? How much does the effect of consonants on vowel harmony matter, when compared to the effect of vowel-to-vowel correspondences? This dissertation aims to test these two ideas -- that speakers use phonetic cues and/or that they use sublexical phonotactic regularities in categorizing stems as regular or irregular -- and attempts to answer the more detailed questions, like the effect of consonantal patterns on exceptional patterns or the link between allomorph selection and static phonotactic generalizations. The phonetic hypothesis is tested on the Hungarian antiharmonicity pattern (stems with front vowels consistently selecting back suffixes, like in the example hi:d-hoz 'to the bridge' above), and the results indicate that while there may be some small phonetic differences between vowels in regular and irregular stems, speakers do not use these, or even enhanced, differences when they have to categorize stems. The sublexical hypothesis is tested and confirmed by looking at the disharmonicity pattern in Finnish. In Finnish, stems that contain both back and certain front vowels are frequent and perfectly grammatical, like in the example tuoli 'chair' above, while the mixing of back and some other front vowels is very rare and mostly confined to loanwords like amatoori 'amateur'. It will be seen that speakers do use sublexical phonotactic regularities to decide on the acceptability of novel stems, but certain patterns that are phonetically or phonologically more natural (vowel-to-vowel correspondences) seem to matter much more than other effects (like consonantal effects). 
Finally, a computational account will be given on how exceptionality might be learned by speakers by using maximum entropy grammars available in the literature to simulate the acquisition of the Finnish disharmonicity pattern. It will be shown that in order to clearly model the overall behavior on the exact pattern, the learner has to have access not only to the lexicon, but also to the allomorph selection patterns in the language.

  5. Comparison of hemodynamic and nutritional parameters between older persons practicing regular physical activity, nonsmokers and ex-smokers.

    PubMed

    Francisco, Cristina O; Ricci, Natalia A; Rebelatto, Marcelo N; Rebelatto, José R

    2010-11-01

    A sedentary lifestyle combined with smoking contributes to the development of a set of chronic diseases and accelerates the course of aging. The aim of the study was to compare hemodynamic and nutritional parameters between elderly persons practicing regular physical activity, nonsmokers and ex-smokers. The sample comprised 40 elderly people who had practiced regular physical activity for 12 months, divided into a Nonsmoker Group and an Ex-smoker Group. Over the course of a year, four quarterly evaluations were performed, in which hemodynamic (blood pressure, heart rate [HR] and VO2) and nutritional status (measured by body mass index) data were collected. The paired t-test and the t-test for independent samples were applied in the intragroup and intergroup analyses, respectively. The mean age of the groups was 68.35 years, with the majority of individuals in the Nonsmoker Group being women (n = 15) and the Ex-smoker Group composed of men (n = 11). In both groups the variables studied were within the limits of normality for the age. Between the first and last evaluation, HR was lower in the Nonsmoker Group than in the Ex-smoker Group (p = 0.045). In the intragroup analysis it was verified that after one year of exercise there was a significant reduction in HR in the Nonsmoker Group (p = 0.002) and a significant increase in VO2 in the Ex-smoker Group (p = 0.010). There were no significant differences in hemodynamic and nutritional conditions between the two groups. In elderly persons practicing regular physical activity, the studied variables were maintained over the course of a year, and there was no association with smoking history, except for HR and VO2.

  6. Fast nonlinear gravity inversion in spherical coordinates with application to the South American Moho

    NASA Astrophysics Data System (ADS)

    Uieda, Leonardo; Barbosa, Valéria C. F.

    2017-01-01

    Estimating the relief of the Moho from gravity data is a computationally intensive nonlinear inverse problem. What is more, the modelling must take the Earth's curvature into account when the study area is of regional scale or greater. We present a regularized nonlinear gravity inversion method that has a low computational footprint and employs a spherical Earth approximation. To achieve this, we combine the highly efficient Bott's method with smoothness regularization and a discretization of the anomalous Moho into tesseroids (spherical prisms). The computational efficiency of our method is attained by harnessing the fact that all matrices involved are sparse. The inversion results are controlled by three hyperparameters: the regularization parameter, the anomalous Moho density contrast, and the reference Moho depth. We estimate the regularization parameter using the method of hold-out cross-validation. Additionally, we estimate the density contrast and the reference depth using knowledge of the Moho depth at certain points. We apply the proposed method to estimate the Moho depth for the South American continent using satellite gravity data and seismological data. The final Moho model is in accordance with previous gravity-derived models and seismological data. The misfit to the gravity and seismological data is worst in the Andes and best in oceanic areas, central Brazil and Patagonia, and along the Atlantic coast. Similarly to previous results, the model suggests a thinner crust of 30-35 km under the Andean foreland basins. Discrepancies with the seismological data are greatest in the Guyana Shield, the central Solimões and Amazonas Basins, the Paraná Basin, and the Borborema province. These differences suggest the existence of crustal or mantle density anomalies that were unaccounted for during gravity data processing.
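    As a rough illustration of hold-out cross-validation for the regularization parameter, the sketch below splits a toy linear problem into training and held-out data and keeps the parameter with the smallest held-out misfit; the random operator, smoothness matrix, and parameter grid are assumptions for illustration and do not reproduce Bott's method or the tesseroid modelling.

        import numpy as np

        def holdout_cv_lambda(G, d, L, lambdas, test_frac=0.3, seed=0):
            # Fit on a training subset and pick the regularization parameter
            # with the smallest misfit on the held-out subset.
            rng = np.random.default_rng(seed)
            idx = rng.permutation(len(d))
            n_test = int(test_frac * len(d))
            test, train = idx[:n_test], idx[n_test:]
            scores = []
            for lam in lambdas:
                A = G[train].T @ G[train] + lam * (L.T @ L)
                m = np.linalg.solve(A, G[train].T @ d[train])
                scores.append(np.mean((G[test] @ m - d[test]) ** 2))
            return lambdas[int(np.argmin(scores))]

        # Toy stand-in for the gravity problem: linear operator, smooth model.
        rng = np.random.default_rng(1)
        G = rng.normal(size=(300, 60))
        m_true = np.cos(np.linspace(0, 2 * np.pi, 60))
        d = G @ m_true + rng.normal(scale=1.0, size=300)
        L = np.eye(60) - np.eye(60, k=1)          # smoothness operator
        best = holdout_cv_lambda(G, d, L, np.logspace(-4, 3, 40))
        print("hold-out choice of regularization parameter:", best)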

  7. Confirming the validity of the CONUT system for early detection and monitoring of clinical undernutrition: comparison with two logistic regression models developed using SGA as the gold standard.

    PubMed

    González-Madroño, A; Mancha, A; Rodríguez, F J; Culebras, J; de Ulibarri, J I

    2012-01-01

    The aim was to confirm previous validations of the CONUT nutritional screening tool by developing two probabilistic models using the parameters included in the CONUT, to see whether the CONUT's effectiveness could be improved. This is a two-step prospective study. In step 1, 101 patients were randomly selected and SGA and CONUT assessments were performed. With the data obtained, an unconditional logistic regression model was developed and two variants of CONUT were constructed: model 1 was based on the logistic regression itself, and model 2 was built by dividing the probabilities of undernutrition obtained in model 1 into seven regular intervals. In step 2, 60 patients were selected and assessed with the SGA, the original CONUT, and the newly developed models. The diagnostic efficacy of the original CONUT and the new models was tested by means of ROC curves. Samples 1 and 2 were then pooled to measure the degree of agreement between the original CONUT and SGA, and diagnostic efficacy parameters were calculated. No statistically significant differences were found between samples 1 and 2 regarding age, sex, or medical/surgical distribution, and undernutrition rates were similar (over 40%). The AUCs of the ROC curves were 0.862 for the original CONUT, and 0.839 and 0.874 for models 1 and 2, respectively. The kappa index for the CONUT and SGA was 0.680. The CONUT, with the original scores assigned by the authors, performs as well as the mathematical models and is thus a valuable, highly useful, and efficient tool for clinical undernutrition screening.
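    A hedged sketch of the two model variants described above, using synthetic stand-ins for the three CONUT laboratory parameters (albumin, cholesterol, lymphocyte count) and an SGA-style label; the data, coefficients, and cut points are invented for illustration only and carry no clinical meaning.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        # Synthetic stand-ins for the CONUT inputs and an undernutrition label.
        rng = np.random.default_rng(0)
        n = 161
        albumin     = rng.normal(3.6, 0.6, n)       # g/dL
        cholesterol = rng.normal(170, 40, n)        # mg/dL
        lymphocytes = rng.normal(1800, 600, n)      # cells/mm^3
        risk = 4 - albumin + (160 - cholesterol) / 80 + (1600 - lymphocytes) / 800
        label = (risk + rng.normal(0, 1, n) > 0).astype(int)

        X = np.column_stack([albumin, cholesterol, lymphocytes])
        X = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize for the solver

        # Model 1: unconditional logistic regression on the three parameters.
        model = LogisticRegression().fit(X, label)
        prob = model.predict_proba(X)[:, 1]

        # Model 2: probabilities binned into seven regular intervals as a score.
        score7 = np.digitize(prob, np.linspace(0, 1, 8)[1:-1])

        print("AUC, continuous probabilities:", round(roc_auc_score(label, prob), 3))
        print("AUC, seven-interval score    :", round(roc_auc_score(label, score7), 3))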

  8. Sparsely sampling the sky: Regular vs. random sampling

    NASA Astrophysics Data System (ADS)

    Paykari, P.; Pires, S.; Starck, J.-L.; Jaffe, A. H.

    2015-09-01

    Aims: The next generation of galaxy surveys, aiming to observe millions of galaxies, are expensive both in time and money. This raises questions regarding the optimal investment of this time and money for future surveys. In a previous work, we have shown that a sparse sampling strategy could be a powerful substitute for the - usually favoured - contiguous observation of the sky. In our previous paper, regular sparse sampling was investigated, where the sparse observed patches were regularly distributed on the sky. The regularity of the mask introduces a periodic pattern in the window function, which induces periodic correlations at specific scales. Methods: In this paper, we use a Bayesian experimental design to investigate a "random" sparse sampling approach, where the observed patches are randomly distributed over the total sparsely sampled area. Results: We find that in this setting, the induced correlation is evenly distributed amongst all scales as there is no preferred scale in the window function. Conclusions: This is desirable when we are interested in any specific scale in the galaxy power spectrum, such as the matter-radiation equality scale. As the figure of merit shows, however, there is no preference between regular and random sampling to constrain the overall galaxy power spectrum and the cosmological parameters.

  9. Applications of random forest feature selection for fine-scale genetic population assignment.

    PubMed

    Sylvester, Emma V A; Bentzen, Paul; Bradbury, Ian R; Clément, Marie; Pearce, Jon; Horne, John; Beiko, Robert G

    2018-02-01

    Genetic population assignment used to inform wildlife management and conservation efforts requires panels of highly informative genetic markers and sensitive assignment tests. We explored the utility of machine-learning algorithms (random forest, regularized random forest and guided regularized random forest) compared with FST ranking for the selection of single nucleotide polymorphisms (SNPs) for fine-scale population assignment. We applied these methods to an unpublished SNP data set for Atlantic salmon (Salmo salar) and a published SNP data set for Alaskan Chinook salmon (Oncorhynchus tshawytscha). In each species, we identified the minimum panel size required to obtain a self-assignment accuracy of at least 90%, using each method to create panels of 50-700 markers. Panels of SNPs identified using random forest-based methods performed up to 7.8 and 11.2 percentage points better than FST-selected panels of similar size for the Atlantic salmon and Chinook salmon data, respectively. Self-assignment accuracy ≥90% was obtained with panels of 670 and 384 SNPs for each data set, respectively, a level of accuracy never reached for these species using FST-selected panels. Our results demonstrate a role for machine-learning approaches in marker selection across large genomic data sets to improve assignment for management and conservation of exploited populations.
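    A simplified sketch of the marker-ranking idea, assuming a synthetic genotype matrix and ordinary random-forest importances from scikit-learn (the regularized and guided regularized variants used in the study exist in the R package RRF and are not reproduced here); allele frequencies, sample sizes, and panel sizes are illustrative.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        # Synthetic SNP matrix (individuals x loci, genotypes 0/1/2), two populations.
        rng = np.random.default_rng(0)
        n_ind, n_snp, n_informative = 400, 2000, 50
        pop = np.repeat([0, 1], n_ind // 2)
        freq = np.full((2, n_snp), 0.5)
        freq[1, :n_informative] = 0.8               # allele-frequency shift in pop 1
        genotypes = rng.binomial(2, freq[pop])

        # Rank SNPs by random-forest importance.  (Ranking on the full data and
        # then cross-validating is optimistic; in practice the ranking should be
        # repeated within training folds.)
        rf = RandomForestClassifier(n_estimators=500, random_state=0, n_jobs=-1)
        rf.fit(genotypes, pop)
        ranked = np.argsort(rf.feature_importances_)[::-1]

        for panel_size in (50, 100, 200):
            panel = ranked[:panel_size]
            acc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                                  genotypes[:, panel], pop, cv=5).mean()
            print(f"panel of {panel_size} SNPs: self-assignment accuracy {acc:.2f}")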

  10. Minimum Fisher regularization of image reconstruction for infrared imaging bolometer on HL-2A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, J. M.; Liu, Y.; Li, W.

    2013-09-15

    An infrared imaging bolometer diagnostic has been developed recently for the HL-2A tokamak to measure the temporal and spatial distribution of plasma radiation. The three-dimensional tomography, reduced to a two-dimensional problem by the assumption of toroidal symmetry of the plasma radiation, has been performed. A three-dimensional geometry matrix is calculated with the one-dimensional pencil-beam approximation. The solid angles viewed by the detector elements are taken into account in defining the chord brightness. The local plasma emission is then obtained by inverting the measured brightness with the minimum Fisher regularization method. A typical HL-2A plasma radiation model was chosen to optimize the regularization parameter based on the criterion of generalized cross-validation. Finally, this method was applied to HL-2A experiments, demonstrating the plasma radiated power density distribution in limiter and divertor discharges.
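    The generalized cross-validation criterion mentioned above can be sketched as follows for a plain Tikhonov smoothness penalty, used here as a stand-in for the iteratively reweighted minimum Fisher functional; the geometry matrix, emission profile, and noise level are illustrative assumptions.

        import numpy as np

        def gcv_score(G, d, L, lam):
            # Generalized cross-validation score for a Tikhonov-type inversion.
            n = len(d)
            A = G @ np.linalg.solve(G.T @ G + lam * (L.T @ L), G.T)   # influence matrix
            resid = (np.eye(n) - A) @ d
            return n * np.sum(resid**2) / np.trace(np.eye(n) - A) ** 2

        # Toy tomography-like problem: chord sums of a smooth emission profile.
        rng = np.random.default_rng(0)
        n_chords, n_cells = 80, 40
        G = rng.random((n_chords, n_cells))          # geometry matrix stand-in
        emission = np.exp(-np.linspace(-2, 2, n_cells) ** 2)
        brightness = G @ emission + rng.normal(scale=0.2, size=n_chords)
        L = np.eye(n_cells) - np.eye(n_cells, k=1)   # smoothness operator

        lambdas = np.logspace(-4, 2, 40)
        scores = [gcv_score(G, brightness, L, lam) for lam in lambdas]
        best = lambdas[int(np.argmin(scores))]
        print("GCV-selected regularization parameter:", best)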

  11. Statistical regularities in the rank-citation profile of scientists

    PubMed Central

    Petersen, Alexander M.; Stanley, H. Eugene; Succi, Sauro

    2011-01-01

    Recent science of science research shows that scientific impact measures for journals and individual articles have quantifiable regularities across both time and discipline. However, little is known about the scientific impact distribution at the scale of an individual scientist. We analyze the aggregate production and impact using the rank-citation profile ci(r) of 200 distinguished professors and 100 assistant professors. For the entire range of paper rank r, we fit each ci(r) to a common distribution function. Since two scientists with equivalent Hirsch h-index can have significantly different ci(r) profiles, our results demonstrate the utility of the βi scaling parameter in conjunction with hi for quantifying individual publication impact. We show that the total number of citations Ci tallied from a scientist's Ni papers scales as Ci ~ hi^(1+βi). Such statistical regularities in the input-output patterns of scientists can be used as benchmarks for theoretical models of career progress. PMID:22355696

  12. Least square regularized regression in sum space.

    PubMed

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For the sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and the regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
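    In its simplest realization, the sum-space estimator can be sketched by summing a large-scale and a small-scale Gaussian kernel and solving a single regularized least-squares system; the kernel widths, regularization constant, and test function below are assumptions chosen only to illustrate the low-/high-frequency split, not the paper's exact formulation.

        import numpy as np

        def gaussian_kernel(X1, X2, sigma):
            d2 = (X1[:, None] - X2[None, :]) ** 2
            return np.exp(-d2 / (2 * sigma**2))

        # Nonflat target: a slow trend plus a localized high-frequency wiggle.
        rng = np.random.default_rng(0)
        x = np.sort(rng.uniform(0, 10, 200))
        y = np.sin(0.5 * x) + 0.5 * np.sin(8 * x) * (x > 6) + rng.normal(0, 0.1, 200)

        # Sum-space kernel: large-scale kernel for the low-frequency component,
        # small-scale kernel for the high-frequency component.
        K = gaussian_kernel(x, x, sigma=2.0) + gaussian_kernel(x, x, sigma=0.2)
        lam = 1e-2
        alpha = np.linalg.solve(K + lam * len(x) * np.eye(len(x)), y)

        x_new = np.linspace(0, 10, 500)
        K_new = gaussian_kernel(x_new, x, 2.0) + gaussian_kernel(x_new, x, 0.2)
        y_hat = K_new @ alpha
        print("training RMSE:", np.sqrt(np.mean((K @ alpha - y) ** 2)))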

  13. Microscopic Spin Model for the STOCK Market with Attractor Bubbling on Regular and Small-World Lattices

    NASA Astrophysics Data System (ADS)

    Krawiecki, A.

    A multi-agent spin model for changes of prices in the stock market, based on an Ising-like cellular automaton with interactions between traders randomly varying in time, is investigated by means of Monte Carlo simulations. The structure of interactions has the topology of a small-world network obtained from regular two-dimensional square lattices with various coordination numbers by randomly cutting and rewiring edges. Simulations of the model on regular lattices do not yield time series of logarithmic price returns with statistical properties comparable to the empirical ones. In contrast, in the case of networks with a certain degree of randomness, for a wide range of parameters the time series of the logarithmic price returns exhibit intermittent bursting typical of volatility clustering. The tails of the return distributions also obey a power scaling law with exponents comparable to those obtained from empirical data.
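    A minimal sketch of the lattice rewiring step and of an Ising-like synchronous update with time-varying couplings, assuming networkx for the network construction; the update rule, coupling statistics, and return definition are simplified stand-ins, not a faithful reimplementation of the published model.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)

        # Regular 2D square lattice (periodic), then rewire a fraction of edges
        # at random to obtain a small-world topology.
        side, p_rewire = 20, 0.1
        G = nx.grid_2d_graph(side, side, periodic=True)
        G = nx.convert_node_labels_to_integers(G)
        for u, v in list(G.edges()):
            if rng.random() < p_rewire:
                w = int(rng.integers(G.number_of_nodes()))
                if w not in (u, v) and not G.has_edge(u, w):
                    G.remove_edge(u, v)
                    G.add_edge(u, w)

        # Ising-like traders: synchronous updates with couplings that vary
        # randomly in time; the log-return is taken proportional to the change
        # in magnetization (a simplification).
        N = G.number_of_nodes()
        neighbors = [list(G.neighbors(i)) for i in range(N)]
        spins = rng.choice([-1, 1], size=N)
        returns = []
        for _ in range(1000):
            J = rng.normal(0.0, 1.0, size=N)          # time-varying couplings
            new = np.empty(N, dtype=int)
            for i in range(N):
                field = J[i] * sum(spins[j] for j in neighbors[i]) + rng.normal(0, 1.0)
                new[i] = 1 if field > 0 else -1
            returns.append(new.mean() - spins.mean())
            spins = new

        r = np.array(returns)
        print("excess kurtosis of returns:", float(np.mean((r - r.mean())**4) / r.var()**2 - 3))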

  14. Techno-economic assessment of pellets produced from steam pretreated biomass feedstock

    DOE PAGES

    Shahrukh, Hassan; Oyedun, Adetoyese Olajire; Kumar, Amit; ...

    2016-03-10

    Minimum production cost and optimum plant size are determined for pellet plants for three types of biomass feedstock: forest residue, agricultural residue, and energy crops. The life cycle cost from harvesting to the delivery of the pellets to the co-firing facility is evaluated. The cost varies from 95 to 105 t⁻¹ for regular pellets and 146-156 t⁻¹ for steam pretreated pellets. The difference in the cost of producing regular and steam pretreated pellets per unit energy is in the range of 2-3 GJ⁻¹. The economic optimum plant size (i.e., the size at which pellet production cost is minimum) is found to be 190 kt for regular pellet production and 250 kt for steam pretreated pellets. Furthermore, sensitivity and uncertainty analyses were carried out to identify sensitive parameters and the effects of model error.

  15. The extended Fourier transform for 2D spectral estimation.

    PubMed

    Armstrong, G S; Mandelshtam, V A

    2001-11-01

    We present a linear algebraic method, named the eXtended Fourier Transform (XFT), for spectral estimation from truncated time signals. The method is a hybrid of the discrete Fourier transform (DFT) and the regularized resolvent transform (RRT) (J. Chen et al., J. Magn. Reson. 147, 129-137 (2000)). Namely, it estimates the remainder of a finite DFT by RRT. The RRT estimation corresponds to the solution of an ill-conditioned problem, which requires regularization. The regularization depends on a parameter, q, that essentially controls the resolution. By varying q from 0 to infinity one can "tune" the spectrum between a high-resolution spectral estimate and the finite DFT. The optimal value of q is chosen according to how well the data fit the form of a sum of complex sinusoids and, in particular, the signal-to-noise ratio. Both 1D and 2D XFT are presented with applications to experimental NMR signals. Copyright 2001 Academic Press.

  16. Novel cooperative neural fusion algorithms for image restoration and image fusion.

    PubMed

    Xia, Youshen; Kamel, Mohamed S

    2007-02-01

    To deal with the problem of restoring degraded images with non-Gaussian noise, this paper proposes a novel cooperative neural fusion regularization (CNFR) algorithm for image restoration. Compared with conventional regularization algorithms for image restoration, the proposed CNFR algorithm relaxes the need to estimate the optimal regularization parameter. Furthermore, to enhance the quality of restored images, this paper presents a cooperative neural fusion (CNF) algorithm for image fusion. Compared with existing signal-level image fusion algorithms, the proposed CNF algorithm can greatly reduce the loss of contrast information under blind Gaussian noise environments. The performance analysis shows that the two proposed neural fusion algorithms converge globally to the robust and optimal image estimate. Simulation results confirm that in different noise environments, the two proposed neural fusion algorithms can obtain a better image estimate than several well-known image restoration and image fusion methods.

  17. Accuracy of AFM force distance curves via direct solution of the Euler-Bernoulli equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eppell, Steven J., E-mail: steven.eppell@case.edu; Liu, Yehe; Zypman, Fredy R.

    2016-03-15

    In an effort to improve the accuracy of force-separation curves obtained from atomic force microscope data, we compare force-separation curves computed using two methods to solve the Euler-Bernoulli equation. A recently introduced method using a direct sequential forward solution, Causal Time-Domain Analysis, is compared against a previously introduced Tikhonov regularization method. Using the direct solution as a benchmark, it is found that the regularization technique is unable to reproduce accurate curve shapes. Using L-curve analysis and adjusting the regularization parameter, λ, to match either the depth or the full width at half maximum of the force curves, the two techniques are contrasted. Matched depths result in full widths at half maximum that are off by an average of 27%, and matched full widths at half maximum produce depths that are off by an average of 109%.
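    The L-curve analysis referred to above can be illustrated with a small Tikhonov example: compute residual and solution norms over a grid of λ and pick the corner of the log-log curve by maximum curvature; the blurring operator and force-like pulse below are generic stand-ins, not the cantilever model of the paper.

        import numpy as np

        def l_curve(G, d, lambdas):
            # Residual and solution norms of Tikhonov solutions over a lambda grid.
            rho, eta = [], []
            for lam in lambdas:
                m = np.linalg.solve(G.T @ G + lam**2 * np.eye(G.shape[1]), G.T @ d)
                rho.append(np.linalg.norm(G @ m - d))
                eta.append(np.linalg.norm(m))
            return np.array(rho), np.array(eta)

        def corner_by_curvature(lambdas, rho, eta):
            # Lambda at the point of maximum curvature of the log-log L-curve.
            x, y = np.log(rho), np.log(eta)
            dx, dy = np.gradient(x), np.gradient(y)
            ddx, ddy = np.gradient(dx), np.gradient(dy)
            kappa = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2 + 1e-12) ** 1.5
            return lambdas[int(np.argmax(kappa))]

        # Toy ill-conditioned deconvolution standing in for the cantilever problem.
        rng = np.random.default_rng(0)
        n = 100
        t = np.linspace(0, 1, n)
        G = np.exp(-80 * (t[:, None] - t[None, :]) ** 2)   # smoothing (blurring) operator
        f_true = np.exp(-((t - 0.5) / 0.05) ** 2)          # narrow force-like pulse
        d = G @ f_true + rng.normal(scale=1e-3, size=n)
        lambdas = np.logspace(-8, 0, 60)
        rho, eta = l_curve(G, d, lambdas)
        print("L-curve corner lambda:", corner_by_curvature(lambdas, rho, eta))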

  18. Effects of loading modes on densification efficiency of spark plasma sintering: sample study of zirconium carbide consolidation

    NASA Astrophysics Data System (ADS)

    Wei, Xialu; Maximenko, Andrey L.; Back, Christina; Izhvanov, Oleg; Olevsky, Eugene A.

    2017-07-01

    Theoretical studies of the densification kinetics of the new spark plasma sinter-forging (SPS-forging) consolidation technique and of regular SPS have been carried out based on the continuum theory of sintering. Both the modelling and the verifying experimental results indicate that the loading mode plays an important role in the densification efficiency of SPS of porous ZrC specimens. Compared to regular SPS, SPS-forging is shown to enhance densification more significantly during the later sintering stages. The derived analytical constitutive equations are utilised to evaluate the high-temperature creep parameters of ZrC under SPS conditions. SPS-forging and regular SPS setups are combined to form a new hybrid SPS loading mode with the purpose of reducing shape irregularity in the SPS-forged specimens. Loading control is imposed to secure the geometry as well as the densification of ZrC specimens during the hybrid SPS process.

  19. Choosing the Allometric Exponent in Covariate Model Building.

    PubMed

    Sinha, Jaydeep; Al-Sallami, Hesham S; Duffull, Stephen B

    2018-04-27

    Allometric scaling is often used to describe the covariate model linking total body weight (WT) to clearance (CL); however, there is no consensus on how to select the value of its exponent. The aims of this study were to assess the influence of between-subject variability (BSV) and study design on (1) the power to correctly select the exponent from a priori choices, and (2) the power to obtain unbiased exponent estimates. The influence of the WT distribution range (randomly sampled from the Third National Health and Nutrition Examination Survey, 1988-1994 [NHANES III] database), sample size (N = 10, 20, 50, 100, 200, 500, 1000 subjects), and BSV on CL (low 20%, normal 40%, high 60%) was assessed using stochastic simulation-estimation. A priori exponent values used for the simulations were 0.67, 0.75, and 1. For normal- to high-BSV drugs, it is almost impossible to correctly select the exponent from an a priori set of exponents, i.e. 1 vs. 0.75, 1 vs. 0.67, or 0.75 vs. 0.67, in regular studies involving < 200 adult participants. On the other hand, such regular study designs are sufficient to appropriately estimate the exponent. However, regular studies with < 100 patients risk potential bias in estimating the exponent. Study designs with limited sample size and a narrow range of WT (e.g. < 100 adult participants) risk either selection of a false value or a biased estimate of the allometric exponent; however, such bias is only relevant when extrapolating the value of CL outside the studied population, e.g. an analysis of a study of adults that is used to extrapolate to children.
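    A much-simplified illustration of the simulation-estimation idea, replacing the pharmacometric model with a log-log regression of simulated clearances on weight; the population values, weight distribution, and BSV magnitudes are assumptions, and the NONMEM-style estimation of the original study is not reproduced.

        import numpy as np

        def simulate_exponent_estimates(n_subjects, true_exp, bsv_cv, n_trials=500, seed=0):
            # Simulate CL = theta * (WT/70)**exp * exp(eta) and re-estimate the
            # exponent by log-log regression; return the estimates across trials.
            rng = np.random.default_rng(seed)
            estimates = []
            for _ in range(n_trials):
                wt = rng.lognormal(mean=np.log(75), sigma=0.25, size=n_subjects)  # adult-like weights
                eta = rng.normal(0, bsv_cv, size=n_subjects)    # between-subject variability
                cl = 5.0 * (wt / 70) ** true_exp * np.exp(eta)
                slope, _ = np.polyfit(np.log(wt / 70), np.log(cl), 1)
                estimates.append(slope)
            return np.array(estimates)

        for n in (20, 100, 500):
            est = simulate_exponent_estimates(n, true_exp=0.75, bsv_cv=0.4)
            print(f"N={n:4d}: mean exponent {est.mean():.2f}, SD {est.std():.2f}")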

  20. Effects of Individual's Self-Examination on Cooperation in Prisoner's Dilemma Game

    NASA Astrophysics Data System (ADS)

    Guan, Jian-Yue; Sun, Jin-Tu; Wang, Ying-Hai

    We study a spatial evolutionary prisoner's dilemma game on regular networks: a one-dimensional regular ring and a two-dimensional square lattice. The individuals located on the sites of the networks can either cooperate with their neighbors or defect. The effects of individuals' self-examination are introduced. Using Monte Carlo simulations and the pair approximation method, we investigate the average density of cooperators in the stationary state for various values of the payoff parameter b and the time interval Δt. The effects on cooperation of the fraction p of players in the system who use self-examination are also discussed. It is shown that, compared with the case of no self-examination, the persistence of cooperation is inhibited when the payoff parameter b is small, and at certain Δt (Δt > 0) or p (p > 0) cooperation is most strongly inhibited, while when b is large, the emergence of cooperation can be remarkably enhanced, most strongly at Δt = 0 or p = 1.
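    A baseline sketch of a spatial prisoner's dilemma on a one-dimensional ring with an imitate-the-better-neighbor update; the paper's self-examination mechanism and pair-approximation analysis are not reproduced, and the payoff values, ring size, and update rule are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        N, b, steps = 200, 1.3, 200          # ring size, temptation payoff, MC steps
        coop = rng.random(N) < 0.5           # True = cooperator

        def payoff(me, other, b):
            # Weak prisoner's dilemma payoffs: R = 1, T = b, S = 0, P = 0.
            if me and other:
                return 1.0
            if (not me) and other:
                return b
            return 0.0

        for _ in range(steps):
            left, right = np.roll(coop, 1), np.roll(coop, -1)
            score = np.array([payoff(coop[i], left[i], b) + payoff(coop[i], right[i], b)
                              for i in range(N)])
            new = coop.copy()
            for i in range(N):
                neigh = [(i - 1) % N, (i + 1) % N]
                best = max(neigh, key=lambda j: score[j])
                if score[best] > score[i]:
                    new[i] = coop[best]      # imitate the better-scoring neighbor
            coop = new

        print("stationary cooperator density:", coop.mean())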
