Sample records for weight function method

  1. Two-dimensional analytic weighting functions for limb scattering

    NASA Astrophysics Data System (ADS)

    Zawada, D. J.; Bourassa, A. E.; Degenstein, D. A.

    2017-10-01

    Through the inversion of limb scatter measurements it is possible to obtain vertical profiles of trace species in the atmosphere. Many of these inversion methods require what is often referred to as weighting functions, or derivatives of the radiance with respect to concentrations of trace species in the atmosphere. Several radiative transfer models have implemented analytic methods to calculate weighting functions, alleviating the computational burden of traditional numerical perturbation methods. Here we describe the implementation of analytic two-dimensional weighting functions, where derivatives are calculated relative to atmospheric constituents in a two-dimensional grid of altitude and angle along the line of sight direction, in the SASKTRAN-HR radiative transfer model. Two-dimensional weighting functions are required for two-dimensional inversions of limb scatter measurements. Examples are presented where the analytic two-dimensional weighting functions are calculated with an underlying one-dimensional atmosphere. It is shown that the analytic weighting functions are more accurate than ones calculated with a single scatter approximation, and are orders of magnitude faster than a typical perturbation method. Evidence is presented that weighting functions for stratospheric aerosols calculated under a single scatter approximation may not be suitable for use in retrieval algorithms under solar backscatter conditions.
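
    The perturbation-versus-analytic distinction above can be illustrated with a toy forward model (a one-line Beer-Lambert attenuation, not SASKTRAN-HR; the geometry matrix is purely hypothetical): the numeric weighting functions need one forward-model call per grid element, while the analytic ones come from a single expression.

```python
import numpy as np

# Toy Beer-Lambert "limb" model (a sketch, not SASKTRAN-HR): tangent ray i
# traverses atmospheric layers j <= i with unit path length.
NLAYERS = 5
SIGMA = 1e-3                                   # absorption cross-section (arbitrary units)
L = np.tril(np.ones((NLAYERS, NLAYERS)))       # hypothetical path-length matrix

def radiance(n):
    """I_i = exp(-SIGMA * sum_j L_ij n_j) for a layer-density vector n."""
    return np.exp(-SIGMA * (L @ n))

def weighting_functions_numeric(n, delta=1e-4):
    """Perturbation method: W_ij = dI_i/dn_j by forward differences,
    one forward-model call per perturbed layer."""
    I0 = radiance(n)
    W = np.empty((NLAYERS, NLAYERS))
    for j in range(NLAYERS):
        n_pert = n.copy()
        n_pert[j] += delta
        W[:, j] = (radiance(n_pert) - I0) / delta
    return W

def weighting_functions_analytic(n):
    """Analytic derivative of the same toy model: dI_i/dn_j = -SIGMA * L_ij * I_i,
    obtained in a single pass."""
    return -SIGMA * L * radiance(n)[:, None]

n = np.full(NLAYERS, 100.0)
assert np.allclose(weighting_functions_numeric(n),
                   weighting_functions_analytic(n), rtol=1e-3)
```

    The speed gap reported in the abstract follows from the call counts: the perturbation loop scales with the number of grid elements, while the analytic form reuses one radiance evaluation.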

  2. Method for determining the weight of functional objectives on manufacturing system.

    PubMed

    Zhang, Qingshan; Xu, Wei; Zhang, Jiekun

    2014-01-01

    We propose a three-dimensional integrated weight determination method for the functional objectives of a manufacturing system: consumer weights are determined by triangular fuzzy numbers, subjective weights by the expert scoring method, and objective weights by the entropy method, with competitive advantage taken into account. Based on the integration of the three methods into a comprehensive weight, we provide some suggestions for the manufacturing system. A numerical example is analyzed to illustrate the feasibility of the method.
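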

  3. Method for Determining the Weight of Functional Objectives on Manufacturing System

    PubMed Central

    Zhang, Qingshan; Xu, Wei; Zhang, Jiekun

    2014-01-01

    We propose a three-dimensional integrated weight determination method for the functional objectives of a manufacturing system: consumer weights are determined by triangular fuzzy numbers, subjective weights by the expert scoring method, and objective weights by the entropy method, with competitive advantage taken into account. Based on the integration of the three methods into a comprehensive weight, we provide some suggestions for the manufacturing system. A numerical example is analyzed to illustrate the feasibility of the method. PMID:25243203

  4. Dynamic Mesh Adaptation for Front Evolution Using Discontinuous Galerkin Based Weighted Condition Number Mesh Relaxation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2016-06-21

    A new mesh smoothing method designed to cluster mesh cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function being computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well for the weight function as the actual level set. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases for moving interfaces are presented to demonstrate the method's potential usefulness to arbitrary Lagrangian Eulerian (ALE) methods.
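
    The role of the weight function can be sketched without any discontinuous Galerkin machinery: any function of the level set that peaks at the zero contour will attract cells toward the interface during relaxation. The Gaussian form and its parameters below are illustrative, not the paper's projected weight.

```python
import numpy as np

def condition_weight(phi, amplitude=9.0, width=0.1):
    """Interface-peaked weight for condition-number relaxation: largest where
    the level set phi vanishes, decaying to 1 away from the interface.
    (A sketch; the paper builds its weight via a DG projection instead.)"""
    return 1.0 + amplitude * np.exp(-(phi / width) ** 2)

x = np.linspace(0.0, 1.0, 101)
phi = x - 0.5                      # signed distance to an interface at x = 0.5
w = condition_weight(phi)
assert w[50] == w.max()            # weight peaks on the interface
assert abs(w[50] - 10.0) < 1e-12
assert w[0] < 1.01                 # far from the interface the weight tends to 1
```

    Cells weighted this way are drawn toward the zero contour when the weighted condition number objective is minimized, which is the clustering effect the abstract describes.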

  5. An improved adaptive weighting function method for State Estimation in Power Systems with VSC-MTDC

    NASA Astrophysics Data System (ADS)

    Zhao, Kun; Yang, Xiaonan; Lang, Yansheng; Song, Xuri; Wang, Minkun; Luo, Yadi; Wu, Lingyun; Liu, Peng

    2017-04-01

    This paper presents an effective approach for state estimation in power systems that include multi-terminal voltage source converter based high voltage direct current (VSC-MTDC) systems, called the improved adaptive weighting function method. The proposed approach is simplified in that the VSC-MTDC system is solved first, followed by the AC system, because the new state estimation method only changes the weights and keeps the matrix dimension unchanged. Accurate and fast convergence for the AC/DC system can be realized by the adaptive weighting function method, which also provides technical support for simulation analysis and accurate regulation of AC/DC systems. Both theoretical analysis and numerical tests verify the practicability, validity and convergence of the new method.
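
    Weighted least-squares state estimation with an adaptive weight update can be sketched as follows. The down-weighting rule here (suppress measurements with outlying residuals) is our illustration of the idea, not the paper's exact scheme, and the toy measurement matrix is hypothetical.

```python
import numpy as np

def wls_state_estimate(H, z, w):
    """One weighted least-squares solve: minimize (z - H x)^T diag(w) (z - H x)."""
    HW = H.T * w                      # H^T diag(w)
    return np.linalg.solve(HW @ H, HW @ z)

def adaptive_weights(H, z, w, n_iter=5, k=2.0):
    """Sketch of an adaptive weighting loop: repeatedly down-weight
    measurements whose residuals look like outliers, then re-solve.
    Note the matrix dimensions never change; only the weights do."""
    for _ in range(n_iter):
        x = wls_state_estimate(H, z, w)
        r = z - H @ x
        w = np.where(np.abs(r) > k * np.std(r), w * 0.1, w)
    return wls_state_estimate(H, z, w), w

# Toy network: 6 measurements of a 2-dimensional state, one gross error.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0],
              [1.0, -1.0], [2.0, 1.0], [1.0, 2.0]])
x_true = np.array([2.0, 3.0])
z = H @ x_true
z[2] += 6.0                                    # bad telemetry on measurement 3
x_plain = wls_state_estimate(H, z, np.ones(6))
x_adapt, w = adaptive_weights(H, z, np.ones(6))
assert np.linalg.norm(x_adapt - x_true) < np.linalg.norm(x_plain - x_true)
assert w[2] < 1e-3                             # the corrupted measurement was suppressed
```

    Keeping the gain-matrix dimensions fixed while only the diagonal weights change is what makes this kind of scheme cheap to iterate, which matches the convergence claim in the abstract.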

  6. Fracture mechanics analysis of cracked structures using weight function and neural network method

    NASA Astrophysics Data System (ADS)

    Chen, J. G.; Zang, F. G.; Yang, Y.; Shi, K. K.; Fu, X. L.

    2018-06-01

    Stress intensity factors (SIFs) due to thermal-mechanical loads have been established by using the weight function method. Two reference stress states were used to determine the coefficients in the weight function. Results were evaluated against data from the literature and show good agreement. The SIFs can therefore be determined quickly using the obtained weight function when cracks are subjected to arbitrary loads, and the presented method can be used for probabilistic fracture mechanics analysis. A probabilistic methodology combining Monte-Carlo simulation with a neural network (MCNN) has been developed. The results indicate that an accurate probabilistic characteristic of the KI can be obtained by using the developed method. The probability of failure increases with increasing loads, and the relationship between them is nonlinear.
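
    The weight function method computes K = ∫ σ(x) m(x, a) dx over the crack face once the weight function m is known. As a sanity check, the classical weight function for a through crack of half-length a in an infinite plate recovers K = σ√(πa) for uniform stress; this is an illustration of the technique, not the paper's thermal-mechanical geometry or its two-reference-state weight function.

```python
import numpy as np

# m(x, a) = sqrt((a + x) / (a - x)) / sqrt(pi * a),  -a < x < a
# (classical through-crack weight function, used here only as a check case).

def stress_intensity_factor(sigma_fn, a, n=200_001):
    """Integrate sigma(x) * m(x, a) with the substitution x = a sin(theta),
    which removes the inverse-square-root singularity at the crack tip."""
    theta = np.linspace(-np.pi / 2, np.pi / 2, n)[1:-1]   # open interval
    x = a * np.sin(theta)
    m = np.sqrt((a + x) / (a - x)) / np.sqrt(np.pi * a)
    y = sigma_fn(x) * m * a * np.cos(theta)
    # trapezoidal rule (written out to avoid numpy version differences)
    return float(np.sum((y[:-1] + y[1:]) * np.diff(theta)) / 2.0)

a = 0.01                                   # crack half-length (m)
sigma0 = 100e6                             # uniform stress (Pa)
K = stress_intensity_factor(lambda x: np.full_like(x, sigma0), a)
K_exact = sigma0 * np.sqrt(np.pi * a)      # closed form for uniform loading
assert abs(K - K_exact) / K_exact < 1e-3
```

    The speed advantage claimed in the abstract comes from exactly this structure: once m(x, a) is fitted from reference solutions, any load case σ(x) costs only one quadrature.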

  7. Weighted Feature Gaussian Kernel SVM for Emotion Recognition

    PubMed Central

    Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expression is a challenging research topic that has attracted great attention in the past few years. This paper presents a novel method that utilizes subregion recognition rates to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rates and weights. Then, we obtain a weighted-feature Gaussian kernel function and construct a classifier based on the Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted-feature Gaussian kernel function performs well on recognition accuracy. Experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods. PMID:27807443
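
    A weighted-feature Gaussian kernel has a simple form: per-feature weights inside the squared-distance sum. The weights below are illustrative numbers; in the paper they would come from per-subregion recognition rates.

```python
import numpy as np

def weighted_gaussian_kernel(x, y, w, gamma=1.0):
    """Feature-weighted Gaussian (RBF) kernel:
    K(x, y) = exp(-gamma * sum_i w_i * (x_i - y_i)^2)."""
    d = x - y
    return np.exp(-gamma * np.sum(w * d * d))

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 0.0, 3.0])
w_uniform = np.array([1.0, 1.0, 1.0])
w_focus = np.array([1.0, 0.1, 1.0])   # down-weight an unreliable subregion
k_u = weighted_gaussian_kernel(x, y, w_uniform)
k_f = weighted_gaussian_kernel(x, y, w_focus)
assert k_f > k_u                      # down-weighting the differing feature raises similarity
assert weighted_gaussian_kernel(x, x, w_focus) == 1.0
```

    Such a kernel can be passed to an SVM as a precomputed Gram matrix; the weighting lets reliable facial subregions dominate the similarity measure.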

  8. Bag-of-features based medical image retrieval via multiple assignment and visual words weighting.

    PubMed

    Wang, Jingyan; Li, Yongping; Zhang, Ying; Wang, Chao; Xie, Honglan; Chen, Guoling; Gao, Xin

    2011-11-01

    Bag-of-features based approaches have become prominent for image retrieval and image classification tasks in the past decade. Such methods represent an image as a collection of local features, such as image patches and key points with scale invariant feature transform (SIFT) descriptors. To improve the bag-of-features methods, we first model the assignments of local descriptors as contribution functions, and then propose a novel multiple assignment strategy. Assuming the local features can be reconstructed by their neighboring visual words in a vocabulary, reconstruction weights can be solved by quadratic programming. The weights are then used to build contribution functions, resulting in a novel assignment method, called quadratic programming (QP) assignment. We further propose a novel visual word weighting method. The discriminative power of each visual word is analyzed by the sub-similarity function in the bin that corresponds to the visual word. Each sub-similarity function is then treated as a weak classifier. A strong classifier is learned by boosting methods that combine those weak classifiers. The weighting factors of the visual words are learned accordingly. We evaluate the proposed methods on medical image retrieval tasks. The methods are tested on three well-known data sets, i.e., the ImageCLEFmed data set, the 304 CT Set, and the basal-cell carcinoma image set. Experimental results demonstrate that the proposed QP assignment outperforms the traditional nearest neighbor assignment, the multiple assignment, and the soft assignment, whereas the proposed boosting based weighting strategy outperforms the state-of-the-art weighting methods, such as the term frequency weights and the term frequency-inverse document frequency weights.
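
    The term frequency-inverse document frequency baseline mentioned at the end of the abstract is easy to state for bag-of-visual-words histograms; a minimal sketch:

```python
import numpy as np

def tfidf_weights(histograms):
    """Term frequency - inverse document frequency weighting for
    bag-of-visual-words histograms (rows = images, columns = visual words).
    This is the classical baseline the paper's boosted weighting is compared to."""
    H = np.asarray(histograms, dtype=float)
    tf = H / np.maximum(H.sum(axis=1, keepdims=True), 1e-12)   # word frequency per image
    df = np.count_nonzero(H, axis=0)                           # images containing each word
    idf = np.log(H.shape[0] / np.maximum(df, 1))               # rarity bonus
    return tf * idf

H = np.array([[4, 0, 1],
              [3, 0, 2],
              [0, 5, 0]])
W = tfidf_weights(H)
assert W[0, 1] == 0.0            # a word absent from an image contributes nothing
assert W[2, 1] > W[0, 0]         # a word unique to one image outweighs a common word
```

    The boosting-based weighting proposed in the paper replaces the fixed idf factor with learned per-word discriminative weights.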

  9. Dynamic mesh adaptation for front evolution using discontinuous Galerkin based weighted condition number relaxation

    DOE PAGES

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2017-01-27

    A new mesh smoothing method designed to cluster cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fractionmore » or index function, is provided. Results show that the low-order level set works equally well as the actual level set for mesh smoothing. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Lastly, dynamic cases with moving interfaces show the new method is capable of maintaining a desired resolution near the interface with an acceptable number of relaxation iterations per time step, which demonstrates the method's potential to be used as a mesh relaxer for arbitrary Lagrangian Eulerian (ALE) methods.« less

  10. Application of Entropy Method in River Health Evaluation Based on Aquatic Ecological Function Regionalization

    NASA Astrophysics Data System (ADS)

    Shi, Yan-ting; Liu, Jie; Wang, Peng; Zhang, Xu-nuo; Wang, Jun-qiang; Guo, Liang

    2017-05-01

    With the implementation of water environment management in key basins in China, the monitoring and evaluation systems of those basins are in urgent need of innovation and upgrading. In view of the heavy workload of existing evaluation methods and the cumbersome calculation of multi-factor weighting methods, the idea of using the entropy method to assess river health based on aquatic ecological function regionalization was put forward. According to monitoring data for the Songhua River over the years 2011-2015, the entropy weight method was used to calculate the weights of 9 evaluation factors across 29 monitoring sections, and a river health assessment was carried out. In the study area, the river health status of the biodiversity conservation function area (4.111 points) was good, while the water conservation function area (3.371 points), the habitat maintenance function area (3.262 points), the agricultural production maintenance function area (3.695 points) and the urban supporting function area (3.399 points) were lightly polluted.
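
    The entropy weight step has a standard form: normalize each factor across sections, compute its information entropy, and assign weights proportional to one minus that entropy, so more divergent factors earn larger weights. A sketch with a hypothetical monitoring matrix (not the Songhua River data):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: rows = monitoring sections, columns = factors.
    e_j = -(1/ln n) * sum_i p_ij ln p_ij, then w_j proportional to 1 - e_j."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    P = X / X.sum(axis=0, keepdims=True)
    logP = np.where(P > 0, np.log(P), 0.0)         # convention: 0 * ln 0 = 0
    e = -(P * logP).sum(axis=0) / np.log(n)        # entropy of each factor
    d = 1.0 - e                                    # degree of divergence
    return d / d.sum()

# Hypothetical monitoring matrix: 4 sections x 3 evaluation factors.
X = np.array([[1.0, 5.0, 2.0],
              [1.0, 1.0, 2.1],
              [1.0, 9.0, 1.9],
              [1.0, 2.0, 2.0]])
w = entropy_weights(X)
assert abs(w.sum() - 1.0) < 1e-12
assert w[1] > w[2] > w[0]   # the factor varying most across sections dominates
assert w[0] < 1e-12         # a constant factor carries no information
```

    This objectivity (weights derived from the data's own dispersion rather than expert judgment) is what reduces the workload the abstract complains about in multi-factor weighting methods.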

  11. A method for determining customer requirement weights based on TFMF and TLR

    NASA Astrophysics Data System (ADS)

    Ai, Qingsong; Shu, Ting; Liu, Quan; Zhou, Zude; Xiao, Zheng

    2013-11-01

    'Customer requirements' (CRs) management plays an important role in enterprise systems (ESs) by processing customer-focused information. Quality function deployment (QFD) is one of the main CRs analysis methods. Because CR weights are crucial for the input of QFD, we developed a method for determining CR weights based on trapezoidal fuzzy membership function (TFMF) and 2-tuple linguistic representation (TLR). To improve the accuracy of CR weights, we propose to apply TFMF to describe CR weights so that they can be appropriately represented. Because the fuzzy logic is not capable of aggregating information without loss, TLR model is adopted as well. We first describe the basic concepts of TFMF and TLR and then introduce an approach to compute CR weights. Finally, an example is provided to explain and verify the proposed method.
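
    The trapezoidal fuzzy membership function itself is simple to state. A minimal sketch, where the breakpoints are an illustrative linguistic term rather than the paper's actual scale:

```python
def tfmf(x, a, b, c, d):
    """Trapezoidal fuzzy membership function with a <= b <= c <= d:
    rises linearly on [a, b], equals 1 on [b, c], falls linearly on [c, d],
    and is 0 elsewhere."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# A hypothetical "high importance" term on a 0-10 customer-rating scale.
assert tfmf(8.0, 6, 7, 9, 10) == 1.0
assert tfmf(6.5, 6, 7, 9, 10) == 0.5
assert tfmf(5.0, 6, 7, 9, 10) == 0.0
assert tfmf(9.5, 6, 7, 9, 10) == 0.5
```

    In the paper's pipeline, CR weights expressed this way are then aggregated with the 2-tuple linguistic representation so that no information is lost in the fuzzy arithmetic.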

  12. Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kookjin; Carlberg, Kevin; Elman, Howard C.

    Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov--Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted $\ell^2$-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted $\ell^2$-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
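
    The core minimization (a weighted l2-norm of the residual over a fixed subspace) reduces, for a diagonal weight, to an ordinary least-squares problem on scaled data. The sketch below uses random matrices rather than a stochastic Galerkin discretization, so it only illustrates the optimality property, not the paper's full method.

```python
import numpy as np

def weighted_lspg_solve(A, b, basis, w):
    """Find the coefficients y minimizing || diag(sqrt(w)) (A @ basis @ y - b) ||_2,
    i.e. the best element of span(basis) under a weighted l2 residual norm."""
    sw = np.sqrt(w)[:, None]
    y, *_ = np.linalg.lstsq(sw * (A @ basis), np.sqrt(w) * b, rcond=None)
    return basis @ y

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8)) + 8 * np.eye(8)   # well-conditioned toy system
b = rng.standard_normal(8)
basis = rng.standard_normal((8, 3))               # a 3-dimensional trial subspace
x = weighted_lspg_solve(A, b, basis, np.ones(8))

# Optimality check: no other subspace element has a smaller residual.
for _ in range(100):
    x_other = basis @ rng.standard_normal(3)
    assert np.linalg.norm(A @ x - b) <= np.linalg.norm(A @ x_other - b) + 1e-9
```

    Changing `w` retargets which components of the residual the solver cares about, which is the mechanism the abstract uses to minimize different weighted norms and goal-oriented seminorms.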

  13. A modified weighted function method for parameter estimation of Pearson type three distribution

    NASA Astrophysics Data System (ADS)

    Liang, Zhongmin; Hu, Yiming; Li, Binquan; Yu, Zhongbo

    2014-04-01

    In this paper, an unconventional method called Modified Weighted Function (MWF) is presented for the conventional moment estimation of a probability distribution function. The aim of MWF is to reduce the estimation of the coefficient of variation (CV) and coefficient of skewness (CS) from the original higher-moment computations to first-order moment calculations. The estimators for CV and CS of the Pearson type three distribution function (PE3) were derived by weighting the moments of the distribution with two weight functions, which were constructed by combining two negative exponential-type functions. The selection of these weight functions was based on two considerations: (1) to relate the weight functions to sample size in order to reflect the relationship between the quantity of sample information and the role of the weight function and (2) to allocate more weight to data close to medium-tail positions in a sample series ranked in ascending order. A Monte-Carlo experiment was conducted to simulate a large number of samples upon which the statistical properties of MWF were investigated. For the PE3 parent distribution, results of MWF were compared to those of the original Weighted Function (WF) and Linear Moments (L-M). The results indicate that MWF was superior to WF and slightly better than L-M, in terms of statistical unbiasedness and effectiveness. In addition, the robustness of MWF, WF, and L-M was compared by designing a Monte-Carlo experiment in which samples are obtained from the Log-Pearson type three distribution (LPE3), the three-parameter Log-Normal distribution (LN3), and the Generalized Extreme Value distribution (GEV), respectively, but all treated as samples from the PE3 distribution. The results show that in terms of statistical unbiasedness, no single method possesses an overwhelming advantage among MWF, WF, and L-M, while in terms of statistical effectiveness, MWF is superior to WF and L-M.

  14. Weighted functional linear regression models for gene-based association analysis.

    PubMed

    Belonogova, Nadezhda M; Svishcheva, Gulnara R; Wilson, James F; Campbell, Harry; Axenovich, Tatiana I

    2018-01-01

    Functional linear regression models are effectively used in gene-based association analysis of complex traits. These models combine information about individual genetic variants, taking into account their positions and reducing the influence of noise and/or observation errors. To increase the power of methods, where several differently informative components are combined, weights are introduced to give the advantage to more informative components. Allele-specific weights have been introduced to collapsing and kernel-based approaches to gene-based association analysis. Here we have for the first time introduced weights to functional linear regression models adapted for both independent and family samples. Using data simulated on the basis of GAW17 genotypes and weights defined by allele frequencies via the beta distribution, we demonstrated that type I errors correspond to declared values and that increasing the weights of causal variants allows the power of functional linear models to be increased. We applied the new method to real data on blood pressure from the ORCADES sample. Five of the six known genes with P < 0.1 in at least one analysis had lower P values with weighted models. Moreover, we found an association between diastolic blood pressure and the VMP1 gene (P = 8.18×10^-6), when we used a weighted functional model. For this gene, the unweighted functional and weighted kernel-based models had P = 0.004 and 0.006, respectively. The new method has been implemented in the program package FREGAT, which is freely available at https://cran.r-project.org/web/packages/FREGAT/index.html.
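
    Weights "defined by allele frequencies via the beta distribution" can be sketched directly; Beta(1, 25) is the common rare-variant choice in kernel-based gene tests, but the abstract does not state the parameters used, so treat them as illustrative.

```python
import numpy as np
from math import gamma as G

def beta_pdf(x, a, b):
    """Beta density Beta(x; a, b). With a = 1, b = 25 it decreases steeply in x,
    so rarer variants (smaller minor allele frequency) receive larger weights."""
    return G(a + b) / (G(a) * G(b)) * x ** (a - 1) * (1 - x) ** (b - 1)

maf = np.array([0.001, 0.01, 0.05, 0.20])        # hypothetical allele frequencies
w = np.array([beta_pdf(f, 1.0, 25.0) for f in maf])
assert w[0] > w[1] > w[2] > w[3]                 # rarer variants get larger weights
assert abs(beta_pdf(1e-12, 1, 25) - 25.0) < 1e-6 # density approaches 25 at MAF -> 0
```

    In the weighted functional model these values multiply each variant's contribution, which is how up-weighting rare (often causal) variants raises power.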

  15. Stochastic weighted particle methods for population balance equations with coagulation, fragmentation and spatial inhomogeneity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kok Foong; Patterson, Robert I.A.; Wagner, Wolfgang

    2015-12-15

    Highlights: • Problems concerning multi-compartment population balance equations are studied. • A class of fragmentation weight transfer functions is presented. • Three stochastic weighted algorithms are compared against the direct simulation algorithm. • The numerical errors of the stochastic solutions are assessed as a function of fragmentation rate. • The algorithms are applied to a multi-dimensional granulation model. -- Abstract: This paper introduces stochastic weighted particle algorithms for the solution of multi-compartment population balance equations. In particular, it presents a class of fragmentation weight transfer functions which are constructed such that the number of computational particles stays constant during fragmentation events. The weight transfer functions are constructed based on systems of weighted computational particles, and each of them leads to a stochastic particle algorithm for the numerical treatment of population balance equations. Besides fragmentation, the algorithms also consider physical processes such as coagulation and the exchange of mass with the surroundings. The numerical properties of the algorithms are compared to the direct simulation algorithm and an existing method for the fragmentation of weighted particles. It is found that the new algorithms show better numerical performance than the two existing methods, especially for systems with a significant amount of large particles and high fragmentation rates.

  16. The invariant of the stiffness filter function with the weight filter function of the power function form

    NASA Astrophysics Data System (ADS)

    Shang, Zhen; Sui, Yun-Kang

    2012-12-01

    Based on the independent, continuous and mapping (ICM) method and the homogenization method, a research model is constructed to propose and deduce a theorem and corollary concerning the invariant between the weight filter function and the corresponding stiffness filter function of power-function form. The efficiency of searching for the optimum solution can be raised via the choice of rational filter functions, so the above results are important to the further study of structural topology optimization.

  17. Bayesian extraction of the parton distribution amplitude from the Bethe-Salpeter wave function

    NASA Astrophysics Data System (ADS)

    Gao, Fei; Chang, Lei; Liu, Yu-xin

    2017-07-01

    We propose a new numerical method to compute the parton distribution amplitude (PDA) from the Euclidean Bethe-Salpeter wave function. The essential step is to extract the weight function in the Nakanishi representation of the Bethe-Salpeter wave function in Euclidean space, an ill-posed inversion problem, via the maximum entropy method (MEM). The Nakanishi weight function, as well as the corresponding light-front parton distribution amplitude, can be well determined. We confirm prior work on PDA computations that was based on different methods.

  18. Analysis of corner cracks at hole by a 3-D weight function method with stresses from finite element method

    NASA Technical Reports Server (NTRS)

    Zhao, W.; Newman, J. C., Jr.; Sutton, M. A.; Wu, X. R.; Shivakumar, K. N.

    1995-01-01

    Stress intensity factors for quarter-elliptical corner cracks emanating from a circular hole are determined using a 3-D weight function method combined with a 3-D finite element method. The 3-D finite element method is used to analyze the uncracked configuration and provide the stress distribution in the region where the crack is to occur. Using this stress distribution as input, the 3-D weight function method is used to determine stress intensity factors. Three different loading conditions, i.e. remote tension, remote bending and wedge loading, are considered for a wide range of geometrical parameters. The significance of using the 3-D uncracked stress distribution and the difference between single and double corner cracks are studied. Typical crack opening displacements are also provided. Comparisons are made with solutions available in the literature.

  19. A New Computational Method to Fit the Weighted Euclidean Distance Model.

    ERIC Educational Resources Information Center

    De Leeuw, Jan; Pruzansky, Sandra

    1978-01-01

    A computational method for weighted euclidean distance scaling (a method of multidimensional scaling) which combines aspects of an "analytic" solution with an approach using loss functions is presented. (Author/JKS)
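
    The weighted Euclidean distance at the core of the model (subject-specific dimension weights, as in INDSCAL-type multidimensional scaling) can be sketched as:

```python
import numpy as np

def weighted_euclidean(x, y, w):
    """Weighted Euclidean distance d_w(x, y) = sqrt(sum_k w_k (x_k - y_k)^2).
    Each subject in the scaling model has its own weight vector w, stretching
    or shrinking the shared dimensions."""
    d = np.asarray(x) - np.asarray(y)
    return float(np.sqrt(np.sum(w * d * d)))

x, y = np.array([0.0, 0.0]), np.array([3.0, 4.0])
assert weighted_euclidean(x, y, np.array([1.0, 1.0])) == 5.0   # plain Euclidean case
assert weighted_euclidean(x, y, np.array([1.0, 0.0])) == 3.0   # second dimension ignored
```

    Fitting the model means choosing the shared configuration and the per-subject weights so these distances match each subject's observed dissimilarities, which is the optimization the abstract's loss-function approach addresses.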

  20. [Spectral scatter correction of coal samples based on quasi-linear local weighted method].

    PubMed

    Lei, Meng; Li, Ming; Ma, Xiao-Ping; Miao, Yan-Zi; Wang, Jian-Sheng

    2014-07-01

    The present paper puts forth a new spectral correction method based on quasi-linear expressions and local weighted functions. The first stage of the method is to select 3 quasi-linear expressions to replace the original linear expression in the MSC method, namely quadratic, cubic and growth curve expressions. Then the local weighted function is constructed by introducing 4 kernel functions: the Gaussian, Epanechnikov, Biweight and Triweight kernel functions. After adding the function to the basic estimation equation, the dependency between the original and ideal spectra is described more accurately and meticulously at each wavelength point. Furthermore, two analytical models were established, based respectively on PLS and a PCA-BP neural network, which can be used for estimating the accuracy of the corrected spectra. Finally, the optimal correction mode was determined from the analytical results for different combinations of quasi-linear expression and local weighted function. Spectra of the same coal sample have different noise ratios when the sample is prepared at different particle sizes. To validate the effectiveness of this method, the experiment analyzed the correction results of 3 spectral data sets with particle sizes of 0.2, 1 and 3 mm. The results show that the proposed method can eliminate the scattering influence and enhance the information in the spectral peaks. This provides a more efficient way to significantly enhance the correlation between corrected spectra and coal qualities, and to substantially improve the accuracy and stability of the analytical model.
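
    The four kernels named in the abstract have standard textbook forms. A quick check of their basic properties, assuming the usual normalizations (the paper may scale them differently inside its local weighting):

```python
import numpy as np

# Standard kernel weight functions; the compact ones are supported on |u| <= 1.
def gaussian(u):     return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
def epanechnikov(u): return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)
def biweight(u):     return np.where(np.abs(u) <= 1, 15 / 16 * (1 - u**2)**2, 0.0)
def triweight(u):    return np.where(np.abs(u) <= 1, 35 / 32 * (1 - u**2)**3, 0.0)

u = np.linspace(-6, 6, 120001)
du = u[1] - u[0]
for k in (gaussian, epanechnikov, biweight, triweight):
    assert abs(np.sum(k(u)) * du - 1.0) < 1e-3          # each integrates to ~1
    assert k(np.array([0.5])) == k(np.array([-0.5]))    # each is symmetric
```

    In a local weighted estimation, u is a scaled distance between wavelength points, so these kernels decide how strongly neighboring wavelengths influence the correction at each point.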

  1. The general 2-D moments via integral transform method for acoustic radiation and scattering

    NASA Astrophysics Data System (ADS)

    Smith, Jerry R.; Mirotznik, Mark S.

    2004-05-01

    The moments via integral transform method (MITM) is a technique to analytically reduce the 2-D method of moments (MoM) impedance double integrals into single integrals. By using a special integral representation of the Green's function, the impedance integral can be analytically simplified to a single integral in terms of transformed shape and weight functions. The reduced expression requires fewer computations and reduces the fill times of the MoM impedance matrix. Furthermore, the resulting integral is analytic for nearly arbitrary shape and weight function sets. The MITM technique is developed for mixed boundary conditions and predictions with basic shape and weight function sets are presented. Comparisons of accuracy and speed between MITM and brute force are presented. [Work sponsored by ONR and NSWCCD ILIR Board.]

  2. Methods of Constructing a Blended Performance Function Suitable for Formation Flight

    NASA Technical Reports Server (NTRS)

    Ryan, John J.

    2017-01-01

    This paper presents two methods for constructing an approximate performance function of a desired parameter using correlated parameters. The methods are useful when real-time measurements of a desired performance function are not available to applications such as extremum-seeking control systems. The first method approximates an a priori measured or estimated desired performance function by combining real-time measurements of readily available correlated parameters. The parameters are combined using a weighting vector determined from a minimum-squares optimization to form a blended performance function. The blended performance function better matches the desired performance function minimum than single-measurement performance functions. The second method expands upon the first by replacing the a priori data with near-real-time measurements of the desired performance function. The resulting blended performance function weighting vector is updated when measurements of the desired performance function are available. Both methods are applied to data collected during formation-flight-for-drag-reduction flight experiments.
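
    The first method's weighting vector can be sketched as a least-squares fit of correlated-parameter histories to the desired performance history. All signals below are synthetic stand-ins, not flight data:

```python
import numpy as np

def blend_weights(M, j):
    """Minimum-squares weighting vector: find w so that M @ w best matches the
    desired performance history j, where the columns of M are measured
    correlated parameters."""
    w, *_ = np.linalg.lstsq(M, j, rcond=None)
    return w

# Synthetic example: the desired quantity is an unknown mix of two
# measurable parameters plus sensor noise.
rng = np.random.default_rng(7)
t = np.linspace(0, 1, 200)
p1, p2 = np.cos(2 * np.pi * t), t ** 2
j = 0.7 * p1 + 0.3 * p2 + 0.01 * rng.standard_normal(t.size)
M = np.column_stack([p1, p2])
w = blend_weights(M, j)
blended = M @ w                              # the blended performance function
assert np.allclose(w, [0.7, 0.3], atol=0.05)
assert np.sqrt(np.mean((blended - j) ** 2)) < 0.02
```

    Once w is fixed, the blended function can be evaluated from real-time measurements of p1 and p2 alone, which is what makes it usable by an extremum-seeking controller when j itself cannot be measured live.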

  3. A comparison of functional brain changes associated with surgical versus behavioral weight loss

    PubMed Central

    Bruce, Amanda S.; Bruce, Jared M.; Ness, Abigail R.; Lepping, Rebecca J.; Malley, Stephen; Hancock, Laura; Powell, Josh; Patrician, Trisha M.; Breslin, Florence J.; Martin, Laura E.; Donnelly, Joseph E.; Brooks, William M.; Savage, Cary R.

    2013-01-01

    Objective Few studies have examined brain changes in response to effective weight loss; none have compared different methods of weight-loss intervention. We compared functional brain changes associated with a behavioral weight loss intervention to those associated with bariatric surgery. Methods 15 obese participants were recruited prior to adjustable gastric banding surgery and 16 obese participants were recruited prior to a behavioral diet intervention. Groups were matched for demographics and amount of weight lost. fMRI scans (visual food motivation paradigm while hungry and following a meal) were conducted before and 12 weeks after the surgery/behavioral intervention. Results When compared to bariatric patients in the pre-meal analyses, behavioral dieters showed increased activation to food images in right medial PFC and left precuneus following weight loss. When compared to behavioral dieters, bariatric patients showed increased activation in bilateral temporal cortex following weight loss. Conclusions Behavioral dieters showed increased responses to food cues in medial PFC, a region associated with valuation and processing of self-referent information, when compared to bariatric patients. Bariatric patients showed increased responses to food cues in brain regions associated with higher-level perception when compared to behavioral dieters. The method of weight loss determines unique changes in brain function. PMID:24115765

  4. Weighted comparison of two cumulative incidence functions with R-CIFsmry package.

    PubMed

    Li, Jianing; Le-Rademacher, Jennifer; Zhang, Mei-Jie

    2014-10-01

    In this paper we propose a class of flexible weight functions for use in comparison of two cumulative incidence functions. The proposed weights allow the users to focus their comparison on an early or a late time period post treatment or to treat all time points with equal emphasis. These weight functions can be used to compare two cumulative incidence functions via their risk difference, their relative risk, or their odds ratio. The proposed method has been implemented in the R-CIFsmry package which is readily available for download and is easy to use as illustrated in the example. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. SU-F-BRD-01: A Logistic Regression Model to Predict Objective Function Weights in Prostate Cancer IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boutilier, J; Chan, T; Lee, T

    2014-06-15

    Purpose: To develop a statistical model that predicts optimization objective function weights from patient geometry for intensity-modulation radiotherapy (IMRT) of prostate cancer. Methods: A previously developed inverse optimization method (IOM) is applied retrospectively to determine optimal weights for 51 treated patients. We use an overlap volume ratio (OVR) of bladder and rectum for different PTV expansions in order to quantify patient geometry in explanatory variables. Using the optimal weights as ground truth, we develop and train a logistic regression (LR) model to predict the rectum weight and thus the bladder weight. Post hoc, we fix the weights of the left femoral head, right femoral head, and an artificial structure that encourages conformity to the population average while normalizing the bladder and rectum weights accordingly. The population average of objective function weights is used for comparison. Results: The OVR at 0.7cm was found to be the most predictive of the rectum weights. The LR model performance is statistically significant when compared to the population average over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and mean voxel dose to the bladder, rectum, CTV, and PTV. On average, the LR model predicted bladder and rectum weights that are both 63% closer to the optimal weights compared to the population average. The treatment plans resulting from the LR weights have, on average, a rectum V70Gy that is 35% closer to the clinical plan and a bladder V70Gy that is 43% closer. Similar results are seen for bladder V54Gy and rectum V54Gy. Conclusion: Statistical modelling from patient anatomy can be used to determine objective function weights in IMRT for prostate cancer. Our method allows the treatment planners to begin the personalization process from an informed starting point, which may lead to more consistent clinical plans and reduce overall planning time.

  6. Spectral data compression using weighted principal component analysis with consideration of human visual system and light sources

    NASA Astrophysics Data System (ADS)

    Cao, Qian; Wan, Xiaoxia; Li, Junfeng; Liu, Qiang; Liang, Jingxing; Li, Chan

    2016-10-01

    This paper proposes two weight functions based on principal component analysis (PCA) to preserve more colorimetric information in the spectral data compression process. One weight function consists of the CIE XYZ color-matching functions, representing the characteristics of the human visual system, while the other combines the CIE XYZ color-matching functions with the relative spectral power distribution of the CIE standard illuminant D65. The two proposed methods were tested by compressing and reconstructing the reflectance spectra of 1600 glossy Munsell color chips and 1950 Natural Color System color chips, as well as six multispectral images. Performance was evaluated by the mean color difference under the CIE 1931 standard colorimetric observer and the CIE standard illuminants D65 and A. The mean root mean square errors between the original and reconstructed spectra were also calculated. The experimental results show that the two proposed methods significantly outperform standard PCA and two other weighted PCA methods in colorimetric reconstruction accuracy, with only very slight degradation in spectral reconstruction accuracy. In addition, the weight function incorporating the CIE standard illuminant D65 improves colorimetric reconstruction accuracy compared to the weight function without it.
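The weighting idea above can be sketched in a few lines: scale each wavelength band by a positive weight before PCA, then undo the scaling after reconstruction. This is a minimal sketch with an invented weight curve and random reflectances, not the paper's CIE-based weights:

```python
import numpy as np

rng = np.random.default_rng(5)
n_samples, n_bands = 100, 31          # reflectances sampled 400-700 nm at 10 nm
R = rng.uniform(0.0, 1.0, size=(n_samples, n_bands))

# Assumed weight: any positive per-band curve (in the paper, built from the
# CIE colour-matching functions and the illuminant power distribution).
w = 0.2 + np.exp(-0.5 * ((np.arange(n_bands) - 15) / 6.0) ** 2)

Rw = R * w                             # apply weights before PCA
mean = Rw.mean(axis=0)
U, S, Vt = np.linalg.svd(Rw - mean, full_matrices=False)

k = 3                                  # number of retained principal components
coeff = (Rw - mean) @ Vt[:k].T         # compressed representation
R_rec = ((coeff @ Vt[:k]) + mean) / w  # reconstruct, then undo the weighting
```

The division by `w` at the end is what makes this a weighted PCA rather than PCA on distorted data: the basis is fitted in the weighted space, but the reconstruction lives in the original reflectance space.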

  7. The proper weighting function for retrieving temperatures from satellite measured radiances

    NASA Technical Reports Server (NTRS)

    Arking, A.

    1976-01-01

    One class of methods for converting satellite-measured radiances into atmospheric temperature profiles involves a linearization of the radiative transfer equation: ΔR = Σ W_i ΔT_i (i = 1, ..., s), where ΔT_i is the deviation of the temperature in layer i from that of a reference atmosphere, ΔR is the difference of the radiance at satellite altitude from the corresponding radiance for the reference atmosphere, and W_i is the discrete (or vector) form of the T-weighting (i.e., temperature weighting) function W(P), where P is pressure. The top layer of the atmosphere corresponds to i = 1, the bottom layer to i = s - 1, and i = s refers to the surface. Linearization in temperature (or some function of temperature) is at the heart of all linear or matrix methods. The weighting function that should be used is developed.
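The linearized relation ΔR = Σ W_i ΔT_i can be inverted numerically once the weighting functions are known. A minimal noise-free sketch with synthetic weighting functions (all numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
s = 5                    # atmospheric layers (i = s is the surface)
channels = 8             # number of measured radiances

# Rows of W are (made-up) channel weighting functions over the s layers.
W = rng.uniform(0.0, 1.0, size=(channels, s))
delta_T_true = np.array([1.0, -0.5, 0.2, 0.8, -1.0])  # deviations from reference

delta_R = W @ delta_T_true                # simulated radiance differences

# Retrieve the temperature deviations by least squares (overdetermined system).
delta_T_est, *_ = np.linalg.lstsq(W, delta_R, rcond=None)
```

With noise-free radiances and more channels than layers, the retrieval recovers the temperature deviations exactly; real retrievals add regularization to handle noise and ill-conditioning.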

  8. A Christoffel function weighted least squares algorithm for collocation approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.

  9. A Christoffel function weighted least squares algorithm for collocation approximations

    DOE PAGES

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    2016-11-28

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
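A rough sketch of the sampling-plus-weighting recipe for a one-dimensional domain, where the equilibrium measure is the arcsine (Chebyshev) density and the weights come from the Christoffel function of the Legendre basis (degree, sample count, and target function are arbitrary illustrative choices):

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)
deg, m = 6, 200
# Sample from the equilibrium (arcsine) measure on [-1, 1].
x = np.cos(np.pi * rng.uniform(size=m))

def onb(pts, deg):
    # Legendre polynomials, orthonormal w.r.t. the uniform measure dx/2 on [-1, 1]
    return legendre.legvander(pts, deg) * np.sqrt(2 * np.arange(deg + 1) + 1)

V = onb(x, deg)
# Christoffel-function weights: w(x) = (deg + 1) / sum_k p_k(x)^2.
w = (deg + 1) / np.sum(V**2, axis=1)

f = np.exp(x)                                   # target function to approximate
c, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * V, np.sqrt(w) * f, rcond=None)

xt = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(onb(xt, deg) @ c - np.exp(xt)))
```

The `sqrt(w)` scaling of both the design matrix and the data is the standard way to turn a weighted least-squares problem into an ordinary one.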

  10. Statistically generated weighted curve fit of residual functions for modal analysis of structures

    NASA Technical Reports Server (NTRS)

    Bookout, P. S.

    1995-01-01

    A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing large structures in an external constraint-free environment to measure the effects of higher order modes and interface stiffness. This test method is applicable to structures with distinct degree-of-freedom interfaces to other system components. A theoretical residual function in the displacement/force domain has the characteristics of a relatively flat line in the lower frequencies and a slight upward curvature in the higher frequency range. In the test residual function, the above-mentioned characteristics can be seen in the data, but due to the present limitations in the modal parameter evaluation (natural frequencies and mode shapes) of test data, the residual function has regions of ragged data. A second order polynomial curve fit is required to obtain the residual flexibility term. A weighting function of the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure. The residual flexibility modal test method is applied to a straight beam with a trunnion appendage and a space shuttle payload pallet simulator.
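The variance-based weighting described above can be imitated with NumPy's weighted polynomial fit, where each point's weight is the inverse of a local scatter estimate built from differences between neighboring points (the data model and all numbers below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
freq = np.linspace(1.0, 100.0, 80)
true = 1e-4 + 2e-8 * freq**2                 # flat line with slight upward curvature
noise_sd = np.where(freq > 60, 5e-5, 5e-6)   # "ragged" high-frequency region
data = true + rng.normal(0.0, noise_sd)

# Estimate local scatter from neighboring-point differences (5-point average),
# then weight each point by its inverse (np.polyfit expects w ~ 1/sigma).
local_sd = np.convolve(np.abs(np.diff(data, prepend=data[0])),
                       np.ones(5) / 5, mode="same")
w = 1.0 / np.maximum(local_sd, 1e-12)

coef = np.polyfit(freq, data, deg=2, w=w)
residual_flexibility = np.polyval(coef, 0.0)  # intercept ~ residual flexibility term
```

Down-weighting the ragged region keeps the low-frequency plateau, which carries the residual flexibility information, from being dragged around by noisy high-frequency points.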

  11. CHOW PARAMETERS IN THRESHOLD LOGIC,

    DTIC Science & Technology

    respect to threshold functions, they provide the optimal test-synthesis method for completely specified 7-argument (or less) functions, reflect the...signs and relative magnitudes of realizing weights and threshold , and can be used themselves as approximating weights. Results are reproved in a

  12. A partition function-based weighting scheme in force field parameter development using ab initio calculation results in global configurational space.

    PubMed

    Wu, Yao; Dai, Xiaodong; Huang, Niu; Zhao, Lifeng

    2013-06-05

    In force field parameter development using ab initio potential energy surfaces (PES) as target data, an important but often neglected matter is the lack of a weighting scheme with optimal discrimination power to fit the target data. Here, we developed a novel partition function-based weighting scheme, which not only weights the target potential energies exponentially, like the general Boltzmann weighting method, but also reduces the effect of fitting errors that lead to overfitting. The van der Waals (vdW) parameters of benzene and propane were reparameterized by using the new weighting scheme to fit high-level ab initio PESs probed by a water molecule in global configurational space. The molecular simulation results indicate that the newly derived parameters are capable of reproducing experimental properties over a broader range of temperatures, which supports the partition function-based weighting scheme. Our simulation results also suggest that structural properties are more sensitive to vdW parameters than to partial atomic charge parameters in these systems, although the electrostatic interactions are still important for energetic properties. As no prerequisite conditions are required, the partition function-based weighting method may be applied in developing any type of force field parameter. Copyright © 2013 Wiley Periodicals, Inc.
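The Boltzmann-style core of such a scheme is easy to illustrate (the full method in the paper additionally damps the influence of fitting errors; the energies below are invented):

```python
import numpy as np

# Configurational energies relative to the minimum (kcal/mol, made up).
energies = np.array([0.0, 0.5, 1.0, 2.0, 5.0])
kT = 0.596               # ~RT at 300 K in kcal/mol

# Exponential (Boltzmann) weights: low-energy configurations dominate the fit.
w = np.exp(-energies / kT)
w /= w.sum()             # normalize: weights form a partition-function distribution
```

Normalizing by the sum (the partition function) makes the weights a probability distribution over configurations, so the fitting objective emphasizes thermally accessible geometries.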

  13. Fitting a function to time-dependent ensemble averaged data.

    PubMed

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function-fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only generally available function-fitting methods, the correlated chi-square method and the weighted least-squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least-squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion, and continuous-time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.
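The idea of fitting with diagonal weighted least squares while estimating the parameter error from the full covariance matrix can be sketched for Brownian-motion data. This is only a schematic reconstruction of the approach (sandwich-form error estimate), not the authors' WLS-ICE code; all simulation parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(1, 21, dtype=float)   # lag times (unit steps)
D_true = 0.5
n_traj = 400

# Simulate 1D Brownian trajectories; ensemble-average the squared displacement.
steps = rng.normal(0.0, np.sqrt(2 * D_true), size=(n_traj, t.size))
msd_each = np.cumsum(steps, axis=1) ** 2
msd = msd_each.mean(axis=0)
C = np.cov(msd_each, rowvar=False) / n_traj   # covariance of the averaged curve

A = (2 * t)[:, None]                # linear model: msd = 2 * D * t
W = np.diag(1.0 / np.diag(C))       # weighted LS that ignores correlations

AtWA_inv = np.linalg.inv(A.T @ W @ A)
D_hat = (AtWA_inv @ A.T @ W @ msd)[0]

# Sandwich (full-covariance) error estimate in the spirit of WLS-ICE:
# Var(D) = (A'WA)^-1 A'W C W A (A'WA)^-1.
var_D = (AtWA_inv @ A.T @ W @ C @ W @ A @ AtWA_inv)[0, 0]
```

The fit itself uses only the diagonal of C (robust), while the error bar uses the full C (accurate), which is the division of labor the abstract describes.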

  14. Infrared target tracking via weighted correlation filter

    NASA Astrophysics Data System (ADS)

    He, Yu-Jie; Li, Min; Zhang, JinLi; Yao, Jun-Ping

    2015-11-01

    Design of an effective target tracker is an important and challenging task for many applications due to multiple factors which can cause disturbance in infrared video sequences. In this paper, an infrared target tracking method under tracking by detection framework based on a weighted correlation filter is presented. This method consists of two parts: detection and filtering. For the detection stage, we propose a sequential detection method for the infrared target based on low-rank representation. For the filtering stage, a new multi-feature weighted function which fuses different target features is proposed, which takes the importance of the different regions into consideration. The weighted function is then incorporated into a correlation filter to compute a confidence map more accurately, in order to indicate the best target location based on the detection results obtained from the first stage. Extensive experimental results on different video sequences demonstrate that the proposed method performs favorably for detection and tracking compared with baseline methods in terms of efficiency and accuracy.

  15. Acute weight gain, gender, and therapeutic response to antipsychotics in the treatment of patients with schizophrenia

    PubMed Central

    Ascher-Svanum, Haya; Stensland, Michael; Zhao, Zhongyun; Kinon, Bruce J

    2005-01-01

    Background Previous research indicated that women are more vulnerable than men to adverse psychological consequences of weight gain. Other research has suggested that weight gain experienced during antipsychotic therapy may also psychologically impact women more negatively. This study assessed the impact of acute treatment-emergent weight gain on clinical and functional outcomes of patients with schizophrenia by patient gender and antipsychotic treatment (olanzapine or haloperidol). Methods Data were drawn from the acute phase (first 6-weeks) of a double-blind randomized clinical trial of olanzapine versus haloperidol in the treatment of 1296 men and 700 women with schizophrenia-spectrum disorders. The associations between weight change and change in core schizophrenia symptoms, depressive symptoms, and functional status were examined post-hoc for men and women and for each medication group. Core schizophrenia symptoms (positive and negative) were measured with the Brief Psychiatric Rating Scale (BPRS), depressive symptoms with the BPRS Anxiety/Depression Scale and the Montgomery-Asberg Depression Rating Scale, and functional status with the mental and physical component scores on the Medical Outcome Survey-Short Form 36. Statistical analysis included methods that controlled for treatment duration. Results Weight gain during 6-week treatment with olanzapine and haloperidol was significantly associated with improvements in core schizophrenia symptoms, depressive symptoms, mental functioning, and physical functioning for men and women alike. The conditional probability of clinical response (20% reduction in core schizophrenia symptom), given a clinically significant weight gain (at least 7% of baseline weight), showed that about half of the patients who lost weight responded to treatment, whereas three-quarters of the patients who had a clinically significant weight gain responded to treatment. 
The positive associations between therapeutic response and weight gain were similar for the olanzapine and haloperidol treatment groups. Improved outcomes were, however, more pronounced for the olanzapine-treated patients, and more olanzapine-treated patients gained weight. Conclusions The findings of significant relationships between treatment-emergent weight gain and improvements in clinical and functional status at 6-weeks suggest that patients who have greater treatment-emergent weight gain are more likely to benefit from treatment with olanzapine or haloperidol regardless of gender. PMID:15649317

  16. Geographically weighted regression based methods for merging satellite and gauge precipitation

    NASA Astrophysics Data System (ADS)

    Chao, Lijun; Zhang, Ke; Li, Zhijia; Zhu, Yuelong; Wang, Jingfeng; Yu, Zhongbo

    2018-03-01

    Real-time precipitation data with high spatiotemporal resolutions are crucial for accurate hydrological forecasting. To improve the spatial resolution and quality of satellite precipitation, a three-step satellite and gauge precipitation merging method was formulated in this study: (1) bilinear interpolation is first applied to downscale coarser satellite precipitation to a finer resolution (PS); (2) the (mixed) geographically weighted regression methods coupled with a weighting function are then used to estimate biases of PS as functions of gauge observations (PO) and PS; and (3) biases of PS are finally corrected to produce a merged precipitation product. Based on the above framework, eight algorithms, a combination of two geographically weighted regression methods and four weighting functions, are developed to merge CMORPH (CPC MORPHing technique) precipitation with station observations on a daily scale in the Ziwuhe Basin of China. The geographical variables (elevation, slope, aspect, surface roughness, and distance to the coastline) and a meteorological variable (wind speed) were used for merging precipitation to avoid the artificial spatial autocorrelation resulting from traditional interpolation methods. The results show that the combination of the MGWR and BI-square function (MGWR-BI) has the best performance (R = 0.863 and RMSE = 7.273 mm/day) among the eight algorithms. The MGWR-BI algorithm was then applied to produce hourly merged precipitation product. Compared to the original CMORPH product (R = 0.208 and RMSE = 1.208 mm/hr), the quality of the merged data is significantly higher (R = 0.724 and RMSE = 0.706 mm/hr). The developed merging method not only improves the spatial resolution and quality of the satellite product but also is easy to implement, which is valuable for hydrological modeling and other applications.
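The bi-square weighting function used in step (2) of the merging framework can be written directly; the bandwidth and gauge distances below are arbitrary illustrative values:

```python
import numpy as np

def bisquare(d, bandwidth):
    """Bi-square kernel used as a geographic weighting function:
    w = (1 - (d/b)^2)^2 for d < b, and 0 beyond the bandwidth."""
    w = np.zeros_like(d, dtype=float)
    inside = d < bandwidth
    w[inside] = (1.0 - (d[inside] / bandwidth) ** 2) ** 2
    return w

# Weight gauges by distance (km) from the regression point.
d = np.array([0.0, 10.0, 25.0, 60.0])
w = bisquare(d, bandwidth=50.0)
```

Gauges beyond the bandwidth receive zero weight, which is what makes the regression "geographically weighted": each local fit sees only nearby observations, smoothly down-weighted with distance.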

  17. An RBF-based compression method for image-based relighting.

    PubMed

    Leung, Chi-Sing; Wong, Tien-Tsin; Lam, Ping-Man; Choy, Kwok-Hung

    2006-04-01

    In image-based relighting, a pixel is associated with a number of sampled radiance values. This paper presents a two-level compression method. In the first level, the plenoptic property of a pixel is approximated by a spherical radial basis function (SRBF) network. That means that the spherical plenoptic function of each pixel is represented by a number of SRBF weights. In the second level, we apply a wavelet-based method to compress these SRBF weights. To reduce the visual artifact due to quantization noise, we develop a constrained method for estimating the SRBF weights. Our proposed approach is superior to JPEG, JPEG2000, and MPEG. Compared with the spherical harmonics approach, our approach has a lower complexity, while the visual quality is comparable. The real-time rendering method for our SRBF representation is also discussed.

  18. New quantitative method for evaluation of motor functions applicable to spinal muscular atrophy.

    PubMed

    Matsumaru, Naoki; Hattori, Ryo; Ichinomiya, Takashi; Tsukamoto, Katsura; Kato, Zenichiro

    2018-03-01

    The aim of this study was to develop and introduce a new method to quantify motor functions of the upper extremity. The movement was recorded using a three-dimensional motion capture system, and the movement trajectory was analyzed using two newly developed indices, which measure precise repeatability and directional smoothness. Our target task was shoulder flexion repeated ten times. We applied our method to a healthy adult without and with a weight, simulating muscle impairment. We also applied our method to assess the efficacy of a drug therapy for amelioration of motor functions in a non-ambulatory patient with spinal muscular atrophy. Movement trajectories before and after thyrotropin-releasing hormone therapy were analyzed. In the healthy adult, we found that the values of both indices increased significantly when holding a weight, successfully detecting the weight-induced deterioration in motor function. In the efficacy assessment of drug therapy in the patient, the directional smoothness index successfully detected improvements in motor function, which were also clinically observed by the patient's doctors. We have developed a new quantitative evaluation method for motor functions of the upper extremity. Clinical usability of this method is also greatly enhanced by reducing the required number of body-attached markers to only one. This simple but universal approach to quantifying motor functions will provide additional insights into the clinical phenotypes of various neuromuscular diseases and developmental disorders. Copyright © 2017 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  19. Action-angle formulation of generalized, orbit-based, fast-ion diagnostic weight functions

    NASA Astrophysics Data System (ADS)

    Stagner, L.; Heidbrink, W. W.

    2017-09-01

    Due to the usually complicated and anisotropic nature of the fast-ion distribution function, diagnostic velocity-space weight functions, which indicate the sensitivity of a diagnostic to different fast-ion velocities, are used to facilitate the analysis of experimental data. Additionally, when velocity-space weight functions are discretized, a linear equation relating the fast-ion density and the expected diagnostic signal is formed. In a technique known as velocity-space tomography, many measurements can be combined to create an ill-conditioned system of linear equations that can be solved using various computational methods. However, when velocity-space weight functions (which by definition ignore spatial dependencies) are used, velocity-space tomography is restricted, both by the accuracy of its forward model and also by the availability of spatially overlapping diagnostic measurements. In this work, we extend velocity-space weight functions to a full 6D generalized coordinate system and then show how to reduce them to a 3D orbit-space without loss of generality using an action-angle formulation. Furthermore, we show how diagnostic orbit-weight functions can be used to infer the full fast-ion distribution function, i.e., orbit tomography. In-depth derivations of orbit weight functions for the neutron, neutral particle analyzer, and fast-ion D-α diagnostics are also shown.

  20. Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models

    PubMed Central

    Rice, John D.; Taylor, Jeremy M. G.

    2016-01-01

    One common use of binary response regression methods is classification based on an arbitrary probability threshold dictated by the particular application. Since this threshold is given a priori, it is sensible to incorporate it into the estimation procedure. Specifically, for the linear logistic model, we solve a set of locally weighted score equations, using a kernel-like weight function centered at the threshold. The bandwidth for the weight function is selected by cross-validation of a novel hybrid loss function that combines classification error and a continuous measure of divergence between observed and fitted values; other possible cross-validation functions based on more common binary classification metrics are also examined. This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups. Simulation results are given showing the reduction in error rates that can be obtained with this method when compared with maximum likelihood estimation, especially under certain forms of model misspecification. Analysis of a melanoma data set is presented to illustrate the use of the method in practice. PMID:28018492
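A schematic version of the locally weighted score equations for logistic regression, solved by a Newton-type iteration that treats the kernel weights as fixed at each step. The data, threshold, and bandwidth are all invented; this is a sketch of the idea, not the authors' estimator:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
x = rng.normal(size=n)
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))
y = rng.binomial(1, p_true)

X = np.column_stack([np.ones(n), x])
c, h = 0.3, 0.25          # classification threshold and kernel bandwidth (assumed)

beta = np.zeros(2)
for _ in range(50):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    k = np.exp(-0.5 * ((p - c) / h) ** 2)   # kernel weight centered at threshold
    # Weighted score: sum_i k_i * x_i * (y_i - p_i) = 0.
    grad = X.T @ (k * (y - p))
    hess = (X * (k * p * (1 - p))[:, None]).T @ X
    step = np.clip(np.linalg.solve(hess, grad), -5.0, 5.0)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-8:
        break
```

Observations whose fitted probability sits near the threshold dominate the score equations, so the fit is tuned for classification at that operating point rather than for global likelihood.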

  1. Best quadrature formula on Sobolev class with Chebyshev weight

    NASA Astrophysics Data System (ADS)

    Xie, Congcong

    2008-05-01

    Using the best interpolation function based on given function information, we present a best quadrature rule for functions in the Sobolev class KW^r[-1,1] with Chebyshev weight. The given function information means that the values of a function f ∈ KW^r[-1,1] and its derivatives up to order r-1 at a set of nodes x are given. Error bounds are obtained, and the method is illustrated by some examples.
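For context, the classical Gauss–Chebyshev rule, which integrates against the same Chebyshev weight 1/√(1-x²) and is exact for polynomials up to degree 2n-1, is only a few lines:

```python
import numpy as np

def gauss_chebyshev(f, n):
    """n-point Gauss-Chebyshev rule for the integral of f(x) / sqrt(1 - x^2)
    over [-1, 1]: nodes are Chebyshev points, all weights equal pi/n."""
    k = np.arange(1, n + 1)
    nodes = np.cos((2 * k - 1) * np.pi / (2 * n))
    return np.pi / n * np.sum(f(nodes))

# The integral of x^2 / sqrt(1 - x^2) over [-1, 1] equals pi/2.
approx = gauss_chebyshev(lambda x: x**2, 8)
```

With n = 8 the rule is exact for degree-2 integrands up to floating-point error, since exactness holds through degree 15.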

  2. Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion

    PubMed Central

    Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce in detail the process of feature extraction and representation based on the scale-invariant feature transform (SIFT). Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to efficiently obtain the different subregions' weights, and the weighted matching scores of the subregions are then fused to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity. PMID:24683317

  3. Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce in detail the process of feature extraction and representation based on the scale-invariant feature transform (SIFT). Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to efficiently obtain the different subregions' weights, and the weighted matching scores of the subregions are then fused to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity.

  4. Dimensional feature weighting utilizing multiple kernel learning for single-channel talker location discrimination using the acoustic transfer function.

    PubMed

    Takashima, Ryoichi; Takiguchi, Tetsuya; Ariki, Yasuo

    2013-02-01

    This paper presents a method for discriminating the location of the sound source (talker) using only a single microphone. In a previous work, the single-channel approach for discriminating the location of the sound source was discussed, where the acoustic transfer function from a user's position is estimated by using a hidden Markov model of clean speech in the cepstral domain. In this paper, each cepstral dimension of the acoustic transfer function is newly weighted, in order to obtain the cepstral dimensions having information that is useful for classifying the user's position. Then, this paper proposes a feature-weighting method for the cepstral parameter using multiple kernel learning, defining the base kernels for each cepstral dimension of the acoustic transfer function. The user's position is trained and classified by support vector machine. The effectiveness of this method has been confirmed by sound source (talker) localization experiments performed in different room environments.

  5. On the Fast Evaluation Method of Temperature and Gas Mixing Ratio Weighting Functions for Remote Sensing of Planetary Atmospheres in Thermal IR and Microwave

    NASA Technical Reports Server (NTRS)

    Ustinov, E. A.

    1999-01-01

    Evaluation of weighting functions in atmospheric remote sensing is usually the most computer-intensive part of inversion algorithms. We present an analytic approach to computing temperature and mixing ratio weighting functions that is based on our previous results, but in which the resulting expressions use intermediate variables generated in the computation of the observable radiances themselves. Upwelling radiances at a given level in the atmosphere and atmospheric transmittances from space to that level are combined with local values of the total absorption coefficient and its components due to absorption by the atmospheric constituents under study. This makes it possible to evaluate the temperature and mixing ratio weighting functions in parallel with the evaluation of radiances, which substantially decreases the computer time required. Implications for the nadir and limb viewing geometries are discussed.

  6. Functional Brain Networks: Does the Choice of Dependency Estimator and Binarization Method Matter?

    NASA Astrophysics Data System (ADS)

    Jalili, Mahdi

    2016-07-01

    The human brain can be modelled as a complex networked structure with brain regions as individual nodes and their anatomical/functional links as edges. Functional brain networks are constructed by first extracting weighted connectivity matrices, and then binarizing them to minimize the noise level. Different methods have been used to estimate the dependency values between the nodes and to obtain a binary network from a weighted connectivity matrix. In this work we study topological properties of EEG-based functional networks in Alzheimer’s Disease (AD). To estimate the connectivity strength between two time series, we use Pearson correlation, coherence, phase order parameter and synchronization likelihood. In order to binarize the weighted connectivity matrices, we use Minimum Spanning Tree (MST), Minimum Connected Component (MCC), uniform threshold and density-preserving methods. We find that the detected AD-related abnormalities highly depend on the methods used for dependency estimation and binarization. Topological properties of networks constructed using coherence method and MCC binarization show more significant differences between AD and healthy subjects than the other methods. These results might explain contradictory results reported in the literature for network properties specific to AD symptoms. The analysis method should be seriously taken into account in the interpretation of network-based analysis of brain signals.
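MST binarization, one of the schemes compared above, can be sketched with SciPy: keep the n-1 strongest edges that form a spanning tree by taking the minimum spanning tree of the complement 1 - C (synthetic signals here, not EEG):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(6)
n = 10
# Synthetic weighted connectivity matrix: absolute correlations of 10 signals.
C = np.abs(np.corrcoef(rng.normal(size=(n, 200))))

# Maximum-weight spanning tree of C = minimum spanning tree of its complement.
D = 1.0 - C
np.fill_diagonal(D, 0.0)            # zero entries are treated as absent edges
mst = minimum_spanning_tree(D).toarray()

# Binarize: an edge is kept iff it belongs to the spanning tree (n - 1 edges).
binary = ((mst > 0) | (mst.T > 0)).astype(int)
```

Because every MST-binarized network has exactly n - 1 edges, group comparisons are not confounded by differences in network density, which is one reason this scheme is popular in clinical studies.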

  7. Ranking of physiotherapeutic evaluation methods as outcome measures of stifle functionality in dogs.

    PubMed

    Hyytiäinen, Heli K; Mölsä, Sari H; Junnila, Jouni T; Laitinen-Vapaavuori, Outi M; Hielm-Björkman, Anna K

    2013-04-08

    Various physiotherapeutic evaluation methods are used to assess the functionality of dogs with stifle problems. Neither validity nor sensitivity of these methods has been investigated. This study aimed to determine the most valid and sensitive physiotherapeutic evaluation methods for assessing functional capacity in hind limbs of dogs with stifle problems and to serve as a basis for developing an indexed test for these dogs. A group of 43 dogs with unilateral surgically treated cranial cruciate ligament deficiency and osteoarthritic findings was used to test different physiotherapeutic evaluation methods. Twenty-one healthy dogs served as the control group and were used to determine normal variation in static weight bearing and range of motion. The protocol consisted of 14 different evaluation methods: visual evaluation of lameness, visual evaluation of diagonal movement, visual evaluation of functional active range of motion and difference in thrust of hind limbs via functional tests (sit-to-move and lie-to-move), movement in stairs, evaluation of hind limb muscle atrophy, manual evaluation of hind limb static weight bearing, quantitative measurement of static weight bearing of hind limbs with bathroom scales, and passive range of motion of hind limb stifle (flexion and extension) and tarsal (flexion and extension) joints using a universal goniometer. The results were compared with those from an orthopaedic examination, force plate analysis, radiographic evaluation, and a conclusive assessment. Congruity of the methods was assessed with a combination of three statistical approaches (Fisher's exact test and two differently calculated proportions of agreeing observations), and the components were ranked from best to worst. Sensitivities of all of the physiotherapeutic evaluation methods against each standard were calculated. 
Evaluation of asymmetry in a sitting and lying position, assessment of muscle atrophy, manual and measured static weight bearing, and measurement of stifle passive range of motion were the most valid and sensitive physiotherapeutic evaluation methods. Ranking of the various physiotherapeutic evaluation methods was accomplished. Several of these methods can be considered valid and sensitive when examining the functionality of dogs with stifle problems.

  9. Joint Inversion of Gravity and Gravity Tensor Data Using the Structural Index as Weighting Function Rate Decay

    NASA Astrophysics Data System (ADS)

    Ialongo, S.; Cella, F.; Fedi, M.; Florio, G.

    2011-12-01

    Most geophysical inversion problems are characterized by a number of data considerably higher than the number of unknown parameters. This corresponds to solving highly underdetermined systems. To obtain a unique solution, a priori information must therefore be introduced. We here analyze the inversion of the gravity gradient tensor (GGT). Previous approaches to inverting several gradient components, jointly or independently, are those of Li (2001), who proposed an algorithm using a depth weighting function, and Zhdanov et al. (2004), who provided a well-focused inversion of gradient data. Both methods give a much-improved solution compared with the minimum length solution, which is invariably shallow and not representative of the true source distribution. For very underdetermined problems, this feature is due to the role of the depth weighting matrices used by both methods. Recently, however, Cella and Fedi (2011) showed that for magnetic and gravity data the depth weighting function has to be defined carefully, through a preliminary application of the Euler Deconvolution or Depth from Extreme Points methods, yielding the appropriate structural index, which is then used as the rate decay of the weighting function. We therefore propose to extend this approach to inverting the GGT, jointly or component by component, using the structural index as the weighting function rate decay. In the case of a joint inversion, gravity data can be added as well. This multicomponent case is also relevant because the simultaneous use of several components and gravity increases the number of data and reduces the algebraic ambiguity compared with the inversion of a single component. The reduction of such ambiguity was shown by Fedi et al. (2005) to be decisive in obtaining improved depth resolution in inverse problems, independently of any form of depth weighting function. 
The method is demonstrated on synthetic cases and applied to real data, such as the Vredefort impact area (South Africa), which is characterized by a complex density distribution, well defining a central uplift area, ring structures and low-density sediments. REFERENCES: Cella, F., and Fedi, M., 2011, Inversion of potential field data using the structural index as weighting function rate decay: Geophysical Prospecting, doi: 10.1111/j.1365-2478.2011.00974.x. Fedi, M., Hansen, P. C., and Paoletti, V., 2005, Analysis of depth resolution in potential-field inversion: Geophysics, 70, no. 6. Li, Y., 2001, 3-D inversion of gravity gradiometry data: 71st Annual Meeting, SEG, Expanded Abstracts, 1470-1473. Zhdanov, M. S., Ellis, R. G., and Mukherjee, S., 2004, Regularized focusing inversion of 3-D gravity tensor data: Geophysics, 69, 925-937.
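
As an illustration of the core ingredient, the sketch below implements a depth weighting whose rate decay is set by the structural index N; the specific form w(z) = (z + z0)^(-N/2) and the z0 offset are assumptions for illustration, not taken verbatim from Cella and Fedi (2011):

```python
def depth_weight(z, structural_index, z0=0.0):
    """Depth weighting with decay rate set by the structural index N.

    Assumed form w(z) = (z + z0) ** (-N / 2); the exponent convention
    is illustrative, not the papers' exact definition.
    """
    return (z + z0) ** (-structural_index / 2.0)

# Deeper cells receive smaller weights, compensating the natural decay
# of the field kernel and pushing solutions toward realistic depths.
weights = [depth_weight(z, structural_index=2.0) for z in (1.0, 2.0, 4.0)]
```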

  10. Predicting objective function weights from patient anatomy in prostate IMRT treatment planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Taewoo, E-mail: taewoo.lee@utoronto.ca; Hammad, Muhannad; Chan, Timothy C. Y.

    2013-12-15

    Purpose: Intensity-modulated radiation therapy (IMRT) treatment planning typically combines multiple criteria into a single objective function by taking a weighted sum. The authors propose a statistical model that predicts objective function weights from patient anatomy for prostate IMRT treatment planning. This study provides a proof of concept for geometry-driven weight determination. Methods: A previously developed inverse optimization method (IOM) was used to generate optimal objective function weights for 24 patients using their historical treatment plans (i.e., dose distributions). These IOM weights were around 1% for each of the femoral heads, while bladder and rectum weights varied greatly between patients. A regression model was developed to predict a patient's rectum weight using the ratio of the overlap volume of the rectum and bladder with the planning target volume at a 1 cm expansion as the independent variable. The femoral head weights were fixed to 1% each and the bladder weight was calculated as one minus the rectum and femoral head weights. The model was validated using leave-one-out cross validation. Objective values and dose distributions generated through inverse planning using the predicted weights were compared to those generated using the original IOM weights, as well as an average of the IOM weights across all patients. Results: The IOM weight vectors were on average six times closer to the predicted weight vectors than to the average weight vector, using the l2 distance. Likewise, the bladder and rectum objective values achieved by the predicted weights were more similar to the objective values achieved by the IOM weights. The difference in objective value performance between the predicted and average weights was statistically significant according to a one-sided sign test. 
For all patients, the difference in rectum V54.3 Gy, rectum V70.0 Gy, bladder V54.3 Gy, and bladder V70.0 Gy values between the dose distributions generated by the predicted weights and IOM weights was less than 5 percentage points. Similarly, the difference in femoral head V54.3 Gy values between the two dose distributions was less than 5 percentage points for all but one patient. Conclusions: This study demonstrates a proof of concept that patient anatomy can be used to predict appropriate objective function weights for treatment planning. In the long term, such geometry-driven weights may serve as a starting point for iterative treatment plan design or may provide information about the most clinically relevant region of the Pareto surface to explore.
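
A minimal sketch of the geometry-driven scheme described above, assuming a plain one-variable least squares fit; the training numbers and helper names are hypothetical:

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b * x (one predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def predict_weights(overlap_ratio, a, b, femoral=0.01):
    """Rectum weight from the fitted line; bladder takes the remainder
    so that all objective weights sum to one."""
    rectum = a + b * overlap_ratio
    bladder = 1.0 - rectum - 2 * femoral
    return {"rectum": rectum, "bladder": bladder,
            "femoral_left": femoral, "femoral_right": femoral}

# Hypothetical training data: overlap-volume ratios vs. IOM rectum weights.
a, b = fit_line([0.0, 1.0, 2.0], [0.1, 0.3, 0.5])
w = predict_weights(1.0, a, b)
```

With the femoral head weights pinned at 1% each, the bladder weight is simply the remainder, so the four weights always sum to one.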

  12. Weight change and physical function in older women: findings from the Nun Study.

    PubMed

    Tully, C L; Snowdon, D A

    1995-12-01

    To investigate the association between change in weight and decline in physical function in older women. Longitudinal study of a defined population of Catholic sisters (nuns) whose weight and function were assessed twice, an average of 584 days apart. Unique life communities (convents) located throughout the United States. 475 Catholic sisters who were 75 to 99 years of age (M = 82.1, SD = 4.8) and were independent in at least one Activity of Daily Living (ADL) at the first assessment of weight and function. None. At each assessment, weight, ADLs, and cognitive function were evaluated as part of the Nun Study--a longitudinal study of aging and Alzheimer's disease. Annual percent weight change was calculated using weights from the two assessments, as well as the number of days that elapsed between assessments. Mean weight at first assessment was 140 pounds (range 78 to 232, SD = 27). The mean annual percent weight change was 0.1% (range 22% loss to 16% gain, SD = 3.8). Age- and initial weight-adjusted findings indicated that those participants with an annual percent weight loss of 3% or greater had 2.7 to 3.9 times the risk of becoming dependent in each ADL, compared to the sisters with no weight change. The elevated risk persisted in those who were mentally intact or were independent in their eating habits. Monitoring of weight may be an easy and inexpensive method of identifying older individuals at increased risk of disability.
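
The annualised measure used above can be computed directly; the 365-day scaling is the natural reading of "annual percent weight change", though the study's exact formula is not stated:

```python
def annual_percent_weight_change(w1, w2, days_between):
    """Percent change between two weight assessments, scaled to a 365-day year."""
    return (w2 - w1) / w1 * (365.0 / days_between) * 100.0

# A 2-pound loss from 140 lb over 584 days (the study's average interval):
change = annual_percent_weight_change(140.0, 138.0, 584)
```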

  13. Associations between Birth Weight and Attention-Deficit/Hyperactivity Disorder (ADHD) Symptom Severity: Indirect Effects via Primary Neuropsychological Functions

    PubMed Central

    Hatch, Burt; Healey, Dione M.; Halperin, Jeffrey M.

    2013-01-01

    Background ADHD has a range of aetiological origins which are associated with a number of disruptions in neuropsychological functioning. This study aims to examine how low birth weight, a proxy measure for a range of environmental complications during gestation, predicts ADHD symptom severity in preschool-aged children indirectly via neuropsychological functioning. Methods 197 preschool-aged children were recruited as part of a larger longitudinal study. Two neuropsychological factors were derived from NEPSY domain scores. One, referred to as ‘Primary Neuropsychological Function,’ loaded highly with Sensorimotor and Visuospatial scores. The other, termed ‘Higher-Order Function,’ loaded highly with Language and Memory domain scores. Executive functioning split evenly across the two. Analyses examined whether these neuropsychological factors allowed for an indirect association between birth weight and ADHD symptom severity. Results While both factors were associated with symptom severity, only the Primary Neuropsychological Factor was associated with birth weight. Furthermore, birth weight was indirectly associated with symptom severity via this factor. Conclusions These data indicate that birth weight is indirectly associated with ADHD severity via disruption of neuropsychological functions that are more primary in function, as opposed to functions that play a higher-order role in utilising and integrating the primary functions. PMID:24795955

  14. Polymer solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krawczyk, Gerhard Erich; Miller, Kevin Michael

    2011-07-26

    There is provided a method of making a polymer solution comprising polymerizing one or more monomer in a solvent, wherein said monomer comprises one or more ethylenically unsaturated monomer that is a multi-functional Michael donor, and wherein said solvent comprises 40% or more by weight, based on the weight of said solvent, one or more multi-functional Michael donor.

  15. Methods of Constructing a Blended Performance Function Suitable for Formation Flight

    NASA Technical Reports Server (NTRS)

    Ryan, Jack

    2017-01-01

    Two methods for constructing performance functions for formation-flight drag reduction, suitable for use with an extremum-seeking control system, are presented. The first method approximates an a priori measured or estimated drag-reduction performance function by combining real-time measurements of readily available parameters. The parameters are combined with weightings determined from a least squares optimization to form a blended performance function.
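
A sketch of the blending step, assuming "minimum squares" means ordinary least squares: the weightings w minimise ||P w - t||², where the columns of P hold the measured parameters and t is the a priori performance function sampled at matching conditions. All data here are hypothetical:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def blend_weights(P, t):
    """Least squares weightings minimising ||P w - t||^2 via the normal equations."""
    m, n = len(P), len(P[0])
    AtA = [[sum(P[i][a] * P[i][b] for i in range(m)) for b in range(n)]
           for a in range(n)]
    Atb = [sum(P[i][a] * t[i] for i in range(m)) for a in range(n)]
    return solve(AtA, Atb)

# Two hypothetical parameter channels blended to match a measured map exactly.
w = blend_weights([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [2.0, 3.0, 5.0])
```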

  16. Particle swarm optimizer for weighting factor selection in intensity-modulated radiation therapy optimization algorithms.

    PubMed

    Yang, Jie; Zhang, Pengcheng; Zhang, Liyuan; Shu, Huazhong; Li, Baosheng; Gui, Zhiguo

    2017-01-01

    In inverse treatment planning of intensity-modulated radiation therapy (IMRT), the objective function is typically the sum of the weighted sub-scores, where the weights indicate the importance of the sub-scores. To obtain a high-quality treatment plan, the planner manually adjusts the objective weights using a trial-and-error procedure until an acceptable plan is reached. In this work, a new particle swarm optimization (PSO) method which can adjust the weighting factors automatically was investigated to overcome the requirement of manual adjustment, thereby reducing the workload of the human planner and contributing to the development of a fully automated planning process. The proposed optimization method consists of three steps. (i) First, a swarm of weighting factors (i.e., particles) is initialized randomly in the search space, where each particle corresponds to a global objective function. (ii) Then, a plan optimization solver is employed to obtain the optimal solution for each particle, and the values of the evaluation functions used to determine the particle's location and the population global location for the PSO are calculated based on these results. (iii) Next, the weighting factors are updated based on the particle's location and the population global location. Step (ii) is performed alternately with step (iii) until the termination condition is reached. In this method, the evaluation function is a combination of several key points on the dose volume histograms. Furthermore, a perturbation strategy - the crossover and mutation operator hybrid approach - is employed to enhance the population diversity, and two arguments are applied to the evaluation function to improve the flexibility of the algorithm. In this study, the proposed method was used to develop IMRT treatment plans involving five unequally spaced 6MV photon beams for 10 prostate cancer cases. 
The proposed optimization algorithm yielded high-quality plans for all of the cases, without human planner intervention. A comparison with the optimized solutions obtained using a similar optimization model but with human planner intervention revealed that the proposed algorithm produced plans superior to those developed manually. The proposed algorithm can generate admissible solutions within reasonable computational times and can be used to develop fully automated IMRT treatment planning methods, thus reducing human planners' workloads during iterative processes. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
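
The three-step loop can be sketched as a standard global-best PSO; the plan-optimisation solver of step (ii) is replaced by a placeholder evaluation function, and the inertia and acceleration constants are generic textbook choices, not the paper's:

```python
import random

def pso(evaluate, dim, n_particles=12, iters=60, lo=0.0, hi=1.0, seed=1):
    """Global-best PSO over a box; `evaluate` stands in for the plan
    optimisation solver plus the DVH-based evaluation function."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # step (i): random swarm
    pbest_val = [evaluate(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = evaluate(pos[i])        # step (ii): solve and score the plan
            if val < pbest_val[i]:        # step (iii): update personal/global best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy evaluation function whose optimum weights sit at 0.3 (hypothetical):
best_w, best_val = pso(lambda w: sum((x - 0.3) ** 2 for x in w), dim=2)
```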

  17. Enhancement of partial robust M-regression (PRM) performance using Bisquare weight function

    NASA Astrophysics Data System (ADS)

    Mohamad, Mazni; Ramli, Norazan Mohamed; Ghani@Mamat, Nor Azura Md; Ahmad, Sanizah

    2014-09-01

    Partial Least Squares (PLS) regression is a popular technique for handling multicollinearity in low- and high-dimensional data, fitting a linear relationship between sets of explanatory and response variables. Several robust PLS methods have been proposed to address the sensitivity of the classical PLS algorithms to outliers. The most recent of these is partial robust M-regression (PRM). Unfortunately, the monotonic weighting function used in the PRM algorithm fails to assign weights to large outliers in proportion to their severity. Thus, in this paper, a modified partial robust M-regression is introduced to enhance the performance of the original PRM. A re-descending weight function, known as the Bisquare weight function, is recommended to replace the fair function in the PRM. A simulation study is carried out to assess the performance of the modified PRM, and its efficiency is also tested on both contaminated and uncontaminated simulated data under various percentages of outliers, sample sizes and numbers of predictors.
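
The contrast between the two weight functions is easy to see in code; the tuning constants below are the conventional 95%-efficiency values, assumed rather than quoted from the paper:

```python
def fair_weight(r, c=1.3998):
    """Fair function weight: monotone, never exactly zero, so even
    extreme outliers keep some influence on the fit."""
    return 1.0 / (1.0 + abs(r) / c)

def bisquare_weight(r, c=4.685):
    """Tukey bisquare weight: re-descending, exactly zero beyond c,
    so gross outliers are ignored entirely."""
    if abs(r) >= c:
        return 0.0
    u = r / c
    return (1.0 - u * u) ** 2
```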

  18. Model predictive controller design for boost DC-DC converter using T-S fuzzy cost function

    NASA Astrophysics Data System (ADS)

    Seo, Sang-Wha; Kim, Yong; Choi, Han Ho

    2017-11-01

    This paper proposes a Takagi-Sugeno (T-S) fuzzy method to select cost function weights of finite control set model predictive DC-DC converter control algorithms. The proposed method updates the cost function weights at every sample time by using T-S type fuzzy rules derived from the common optimal control engineering knowledge that a state or input variable with an excessively large magnitude can be penalised by increasing the weight corresponding to the variable. The best control input is determined via the online optimisation of the T-S fuzzy cost function for all the possible control input sequences. This paper implements the proposed model predictive control algorithm in real time on a Texas Instruments TMS320F28335 floating-point Digital Signal Processor (DSP). Some experimental results are given to illuminate the practicality and effectiveness of the proposed control system under several operating conditions. The results verify that our method can yield not only good transient and steady-state responses (fast recovery time, small overshoot, zero steady-state error, etc.) but also insensitiveness to abrupt load or input voltage parameter variations.
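
A one-rule sketch of the T-S idea that a variable with excessive magnitude gets a larger penalty weight; the breakpoints and weight range are hypothetical:

```python
def fuzzy_weight(x_mag, x_small=0.1, x_large=1.0, w_min=1.0, w_max=10.0):
    """Single-rule T-S style interpolation: the cost function weight grows
    linearly with the membership of |x| in the 'large' fuzzy set.
    Breakpoints x_small/x_large and the weight range are illustrative."""
    mu = min(1.0, max(0.0, (x_mag - x_small) / (x_large - x_small)))
    return (1.0 - mu) * w_min + mu * w_max
```

At every sample time the controller would recompute such weights for each penalised state or input, then pick the control input minimising the resulting cost over the finite control set.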

  19. Fluctuations, noise, and numerical methods in gyrokinetic particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas Grant

    In this thesis, the role of the "marker weight" (or "particle weight") used in gyrokinetic particle-in-cell (PIC) simulations is explored. Following a review of the foundations and major developments of gyrokinetic theory, key concepts of the Monte Carlo methods which form the basis for PIC simulations are set forth. Consistent with these methods, a Klimontovich representation for the set of simulation markers is developed in the extended phase space {R, v∥, v⊥, W, P} (with the additional coordinates representing weight fields); clear distinctions are consequently established between the marker distribution function and various physical distribution functions (arising from diverse moments of the marker distribution). Equations describing transport in the simulation are shown to be easily derivable using the formalism. The necessity of a two-weight model for nonequilibrium simulations is demonstrated, and a simple method for calculating the second (background-related) weight is presented. Procedures for arbitrary marker loading schemes in gyrokinetic PIC simulations are outlined; various initialization methods for simulations are compared. Possible effects of inadequate velocity-space resolution in gyrokinetic continuum simulations are explored. The "partial-f" simulation method is developed and its limitations indicated. A quasilinear treatment of electrostatic drift waves is shown to correctly predict nonlinear saturation amplitudes, and the relevance of the gyrokinetic fluctuation-dissipation theorem in assessing the effects of discrete-marker-induced statistical noise on the resulting marginally stable states is demonstrated.

  20. A novel beamformer design method for medical ultrasound. Part I: Theory.

    PubMed

    Ranganathan, Karthik; Walker, William F

    2003-01-01

    The design of transmit and receive aperture weightings is a critical step in the development of ultrasound imaging systems. Current design methods are generally iterative, and consequently time consuming and inexact. We describe a new and general ultrasound beamformer design method, the minimum sum squared error (MSSE) technique. The MSSE technique enables aperture design for arbitrary beam patterns (within fundamental limitations imposed by diffraction). It uses a linear algebra formulation to describe the system point spread function (psf) as a function of the aperture weightings. The sum squared error (SSE) between the system psf and the desired or goal psf is minimized, yielding the optimal aperture weightings. We present detailed analysis for continuous wave (CW) and broadband systems. We also discuss several possible applications of the technique, such as the design of aperture weightings that improve the system depth of field, generate limited diffraction transmit beams, and improve the correlation depth of field in translated aperture system geometries. Simulation results are presented in an accompanying paper.

  1. On the time-weighted quadratic sum of linear discrete systems

    NASA Technical Reports Server (NTRS)

    Jury, E. I.; Gutman, S.

    1975-01-01

    A method is proposed for obtaining the time-weighted quadratic sum for linear discrete systems. The formula for the weighted quadratic sum is obtained from a matrix z-transform formulation. In addition, it is shown that this quadratic sum can be derived in recursive form for several useful weighting functions. The discussion presented parallels that of MacFarlane (1963) for the weighted quadratic integral of linear continuous systems.
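
For the scalar system x_{k+1} = a x_k with |a| < 1, the time-weighted sum J = Σ_k k q x_k² has a closed form that any recursive construction must reproduce; the sketch below checks a direct summation against it (scalar case only, as an assumption for illustration):

```python
def time_weighted_quadratic_sum(a, q, x0, terms=500):
    """Direct evaluation of J = sum_k k * q * x_k**2 for x_{k+1} = a * x_k."""
    j, x = 0.0, x0
    for k in range(terms):
        j += k * q * x * x
        x *= a
    return j

def time_weighted_quadratic_sum_closed(a, q, x0):
    """Closed form: with S = q / (1 - a**2), J = a**2 * S / (1 - a**2) * x0**2."""
    s = q / (1.0 - a * a)
    return a * a * s / (1.0 - a * a) * x0 * x0
```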

  2. Development and evaluation of modified envelope correlation method for deep tectonic tremor

    NASA Astrophysics Data System (ADS)

    Mizuno, N.; Ide, S.

    2017-12-01

    We develop a new location method for deep tectonic tremors, as an improvement of the widely used envelope correlation method, and apply it to construct a tremor catalog for western Japan. Using the cross-correlation functions as objective functions and weighting the data components by the inverse of their error variances, the envelope cross-correlation method is redefined as a maximum likelihood method. This method is also capable of multiple-source detection, because when several events occur almost simultaneously, they appear as local maxima of the likelihood. The average of the weighted cross-correlation functions, defined as ACC, is a nonlinear function whose variable is the position of a deep tectonic tremor. The optimization has two steps. First, we fix the source depth to 30 km and use a grid search with 0.2 degree intervals to find the maxima of ACC, which are candidate event locations. Then, using each of the candidate locations as an initial value, we apply a gradient method to determine the horizontal and vertical components of the hypocenter. Sometimes, several source locations are determined within a time window of 5 minutes. We estimate the resolution, defined as the minimum distance at which two sources can be detected separately by the location method, to be about 100 km. The validity of this estimate is confirmed by a numerical test using synthetic waveforms. Applied to continuous seismograms from western Japan covering more than 10 years, the new method detected 27% more tremors than the previous method, owing to multiple-source detection and the improved accuracy afforded by the appropriate weighting scheme.
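
A toy 2-D version of the ACC maximisation: each pair of envelopes is correlated at the differential travel-time lag predicted for a trial source, and the weighted average is maximised over a grid. The inverse-error-variance weighting is reduced to fixed per-station weights, and all geometry is synthetic:

```python
import math

def predicted_lag(src, si, sj, v):
    """Differential travel time (s) between stations i and j for a trial source."""
    return (math.hypot(src[0] - si[0], src[1] - si[1])
            - math.hypot(src[0] - sj[0], src[1] - sj[1])) / v

def corr_at_lag(a, b, lag, dt):
    """Normalised correlation of envelope a with b delayed by `lag` seconds."""
    s = int(round(lag / dt))
    pairs = [(a[k], b[k - s]) for k in range(len(a)) if 0 <= k - s < len(b)]
    na = math.sqrt(sum(x * x for x, _ in pairs))
    nb = math.sqrt(sum(y * y for _, y in pairs))
    return sum(x * y for x, y in pairs) / (na * nb) if na * nb else 0.0

def locate(stations, envelopes, weights, v, dt, grid):
    """Grid search maximising the weighted average pairwise correlation (ACC-like)."""
    def acc(src):
        num = den = 0.0
        for i in range(len(stations)):
            for j in range(i + 1, len(stations)):
                w = weights[i] * weights[j]
                lag = predicted_lag(src, stations[i], stations[j], v)
                num += w * corr_at_lag(envelopes[i], envelopes[j], lag, dt)
                den += w
        return num / den
    return max(grid, key=acc)

# Synthetic demo: three stations, Gaussian envelope pulses from a source at (30, 40).
dt, v = 0.5, 4.0
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
def make_env(st, src=(30.0, 40.0)):
    t0 = math.hypot(src[0] - st[0], src[1] - st[1]) / v
    return [math.exp(-0.5 * (k * dt - t0) ** 2) for k in range(200)]
envelopes = [make_env(st) for st in stations]
grid = [(x, y) for x in range(0, 101, 10) for y in range(0, 101, 10)]
best = locate(stations, envelopes, [1.0, 1.0, 1.0], v, dt, grid)
```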

  3. Broadening of polymer chromatographic signals: Analysis, quantification and correction through effective diffusion coefficients.

    PubMed

    Suárez, Inmaculada; Coto, Baudilio

    2015-08-14

    Average molecular weights and polydispersity indexes are among the most important parameters considered in polymer characterization. Usually, gel permeation chromatography (GPC) and multi-angle light scattering (MALS) are used for this determination, but GPC values are overestimated due to the dispersion introduced by the column separation. Several procedures have been proposed to correct this effect, usually involving more complex calibration processes. In this work, a new method of calculation that includes diffusion effects has been considered. The concentration profile produced by diffusion along the GPC column was modelled as a Fickian function, and narrow polystyrene standards were used to determine effective diffusion coefficients. The molecular weight distribution of mono- and polydisperse polymers was interpreted as a sum of several Fickian functions, representing a sample formed by only a few kinds of polymer chains, each with a specific molecular weight and diffusion coefficient. The proposed model accurately fits the concentration profile over the whole elution time range, as checked by the computed standard deviation. Molecular weights obtained by this new method are similar to those obtained by MALS or traditional GPC, while polydispersity index values are intermediate between those obtained by traditional GPC combined with the universal calibration method and those from the MALS method. Pearson and Lin coefficients show improved correlation between polydispersity index values determined by GPC and MALS when the diffusion coefficients and the new method are used. Copyright © 2015 Elsevier B.V. All rights reserved.
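
The sum-of-Fickian-functions model can be sketched as a mixture of Gaussian bands whose variance grows with the effective diffusion coefficient; the broadening law σ² = 2·D_eff·t0 is the textbook Fickian form and an assumption about the paper's exact parametrisation:

```python
import math

def fickian_peak(t, t0, d_eff, area=1.0):
    """Gaussian (Fickian) elution band centred at t0 with variance 2 * d_eff * t0."""
    var = 2.0 * d_eff * t0
    return area / math.sqrt(2.0 * math.pi * var) * math.exp(-(t - t0) ** 2 / (2.0 * var))

def chromatogram(t, peaks):
    """Detector signal as a sum of Fickian bands, one per chain population."""
    return sum(fickian_peak(t, t0, d, a) for t0, d, a in peaks)

# Two hypothetical chain populations eluting at different times:
signal = [chromatogram(k * 0.01, [(10.0, 0.05, 1.0), (14.0, 0.05, 0.5)])
          for k in range(2000)]
```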

  4. [Specific weight loss in hyper- and hypothyroidism (author's transl)].

    PubMed

    Schlick, W; Schmid, P; Irsigler, K

    1975-02-07

    By means of a new method of extremely precise weight measurement (buoyancy scale) it is possible to measure the continuous weight loss of the human body. This weight loss is made up of three components, viz. the weight difference between produced CO2 and consumed O2, water loss through the lungs and transpiration through the skin. In relation to body weight it is called "specific weight loss." This parameter was measured in healthy human subjects and found to be within a relatively narrow range (16.42 plus or minus 2.55 mg/min/kp body weight). In four patients with hypothyroidism the values were very low (5.5 to 8.5 mg/min/kp). An increased specific weight loss was found in patients with hyperthyroidism (38 to 102 mg/min/kp in clinically severe cases). The applicability of this method to examination of thyroid function is discussed. It is compared to the classical method of basal metabolic rate measurement and its advantages are enumerated.
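
The parameter is a simple normalised rate, so the reported healthy-range figure is easy to reproduce; the raw inputs below are illustrative:

```python
def specific_weight_loss(loss_mg, minutes, body_weight_kp):
    """Specific weight loss in mg/min/kp: continuous loss rate per unit body weight."""
    return loss_mg / minutes / body_weight_kp

# e.g. 9852 mg lost over 10 min by a 60 kp subject -> 16.42 mg/min/kp,
# squarely inside the healthy range reported above (illustrative inputs).
value = specific_weight_loss(9852.0, 10.0, 60.0)
```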

  5. A Hybrid One-Way ANOVA Approach for the Robust and Efficient Estimation of Differential Gene Expression with Multiple Patterns

    PubMed Central

    Mollah, Mohammad Manir Hossain; Jamal, Rahman; Mokhtar, Norfilza Mohd; Harun, Roslan; Mollah, Md. Nurul Haque

    2015-01-01

    Background Identifying genes that are differentially expressed (DE) between two or more conditions with multiple patterns of expression is one of the primary objectives of gene expression data analysis. Several statistical approaches, including one-way analysis of variance (ANOVA), are used to identify DE genes. However, most of these methods provide misleading results for two or more conditions with multiple patterns of expression in the presence of outlying genes. In this paper, an attempt is made to develop a hybrid one-way ANOVA approach that unifies the robustness and efficiency of estimation using the minimum β-divergence method to overcome some problems that arise in the existing robust methods for both small- and large-sample cases with multiple patterns of expression. Results The proposed method relies on a β-weight function, which produces values between 0 and 1. The β-weight function with β = 0.2 is used as a measure of outlier detection. It assigns smaller weights (≥ 0) to outlying expressions and larger weights (≤ 1) to typical expressions. The distribution of the β-weights is used to calculate the cut-off point, which is compared to the observed β-weight of an expression to determine whether that gene expression is an outlier. This weight function plays a key role in unifying the robustness and efficiency of estimation in one-way ANOVA. Conclusion Analyses of simulated gene expression profiles revealed that all eight methods (ANOVA, SAM, LIMMA, EBarrays, eLNN, KW, robust BetaEB and proposed) perform almost identically for m = 2 conditions in the absence of outliers. However, the robust BetaEB method and the proposed method exhibited considerably better performance than the other six methods in the presence of outliers. 
In this case, the BetaEB method exhibited slightly better performance than the proposed method for the small-sample cases, but the proposed method exhibited much better performance than the BetaEB method for both the small- and large-sample cases in the presence of more than 50% outlying genes. The proposed method also exhibited better performance than the other methods for m > 2 conditions with multiple patterns of expression, to which the BetaEB method has not been extended. Therefore, the proposed approach would be more suitable and reliable on average for the identification of DE genes between two or more conditions with multiple patterns of expression. PMID:26413858
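The down-weighting idea behind the β-weight function can be sketched in a few lines. The function below is an illustration in the spirit of the minimum β-divergence method, not the authors' exact estimator; `mu` and `sigma` stand in for the fitted location and scale of an expression profile.

```python
import numpy as np

def beta_weights(x, mu, sigma, beta=0.2):
    """Illustrative beta-weight function: returns values in (0, 1],
    close to 1 for typical expressions and close to 0 for outliers."""
    z = (np.asarray(x, dtype=float) - mu) / sigma
    return np.exp(-0.5 * beta * z ** 2)

# Two typical expressions keep weights near 1; the 8-sigma outlier
# is strongly down-weighted (beta = 0.2 as in the paper).
w = beta_weights([0.1, -0.2, 8.0], mu=0.0, sigma=1.0)
```

Comparing each observed β-weight against a cut-off derived from the weight distribution then flags outlying expressions, as described above.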

  6. Using external sensors in solution of SLAM task

    NASA Astrophysics Data System (ADS)

    Provkov, V. S.; Starodubtsev, I. S.

    2018-05-01

This article describes the SLAM and PTAM spatial-orientation algorithms and their respective strengths and weaknesses. Building on the SLAM method, a method was developed that uses an RGBD camera together with additional sensors: an accelerometer, a gyroscope, and a magnetometer. Each of the investigated orientation methods has advantages when moving along a straight trajectory or when rotating a moving platform. As a result of experiments with a weighted linear combination of the positions obtained from the RGBD camera and the nine-axis sensor, it became possible to improve the accuracy of the original algorithm even when using a constant as the weight function. In the future, it is planned to develop an algorithm for the dynamic construction of a weight function, from which a further increase in accuracy is expected.
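The constant-weight fusion described above amounts to a linear blend of the two position estimates. A minimal sketch (the weight 0.7 is a hypothetical tuning constant, not a value from the article):

```python
import numpy as np

def fuse_positions(p_cam, p_imu, w=0.7):
    """Blend the RGBD-camera position estimate with the nine-axis-sensor
    estimate using a constant weight w in [0, 1]."""
    return w * np.asarray(p_cam, float) + (1.0 - w) * np.asarray(p_imu, float)

# Hypothetical 3D positions from the two sources for one time step.
fused = fuse_positions([1.0, 0.0, 2.0], [1.2, 0.1, 1.8])
```

A dynamically constructed weight function would replace the constant `w` with a value that depends on, for example, the current motion regime of the platform.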

  7. Vertical Photon Transport in Cloud Remote Sensing Problems

    NASA Technical Reports Server (NTRS)

    Platnick, S.

    1999-01-01

    Photon transport in plane-parallel, vertically inhomogeneous clouds is investigated and applied to cloud remote sensing techniques that use solar reflectance or transmittance measurements for retrieving droplet effective radius. Transport is couched in terms of weighting functions which approximate the relative contribution of individual layers to the overall retrieval. Two vertical weightings are investigated, including one based on the average number of scatterings encountered by reflected and transmitted photons in any given layer. A simpler vertical weighting based on the maximum penetration of reflected photons proves useful for solar reflectance measurements. These weighting functions are highly dependent on droplet absorption and solar/viewing geometry. A superposition technique, using adding/doubling radiative transfer procedures, is derived to accurately determine both weightings, avoiding time consuming Monte Carlo methods. Superposition calculations are made for a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Effective radius retrievals from modeled vertically inhomogeneous liquid water clouds are then made using the standard near-infrared bands, and compared with size estimates based on the proposed weighting functions. Agreement between the two methods is generally within several tenths of a micrometer, much better than expected retrieval accuracy. Though the emphasis is on photon transport in clouds, the derived weightings can be applied to any multiple scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers.

  8. [Range of Hip Joint Motion and Weight of Lower Limb Function under 3D Dynamic Marker].

    PubMed

    Xia, Q; Zhang, M; Gao, D; Xia, W T

    2017-12-01

To explore the range of the reasonable weight coefficient of the hip joint in lower limb function, the movements of the lower limbs of healthy volunteers were recorded with the LUKOtronic motion capture and analysis system, both under normal conditions and with the hip joints fixed at three different positions: functional, flexed and extension. The degree of lower limb function loss was calculated using the Fugl-Meyer lower limb function assessment form when the hip joints were fixed at the aforementioned positions. One-way analysis of variance and Tamhane's T2 method were used to perform the statistical analysis and to calculate the range of the reasonable weight coefficient of the hip joint. The degree of lower limb function loss differed significantly between the flexed and extension positions on the one hand and the functional position on the other, while the difference between the flexed and extension positions was not statistically significant. Within the 95% confidence interval, the reasonable weight coefficient of the hip joint in lower limb function was between 61.05% and 73.34%. Beyond confirming the reasonable weight coefficient, the effects of functional and non-functional positions on the degree of lower limb function loss should also be considered in the assessment of hip joint function loss. Copyright© by the Editorial Department of Journal of Forensic Medicine

  9. The effects of weight change on glomerular filtration rate

    PubMed Central

    Chang, Alex; Greene, Tom H.; Wang, Xuelei; Kendrick, Cynthia; Kramer, Holly; Wright, Jackson; Astor, Brad; Shafi, Tariq; Toto, Robert; Lewis, Julia; Appel, Lawrence J.; Grams, Morgan

    2015-01-01

    Background Little is known about the effect of weight loss/gain on kidney function. Analyses are complicated by uncertainty about optimal body surface indexing strategies for measured glomerular filtration rate (mGFR). Methods Using data from the African-American Study of Kidney Disease and Hypertension (AASK), we determined the association of change in weight with three different estimates of change in kidney function: (i) unindexed mGFR estimated by renal clearance of iodine-125-iothalamate, (ii) mGFR indexed to concurrently measured BSA and (iii) GFR estimated from serum creatinine (eGFR). All models were adjusted for baseline weight, time, randomization group and time-varying diuretic use. We also examined whether these relationships were consistent across a number of subgroups, including tertiles of baseline 24-h urine sodium excretion. Results In 1094 participants followed over an average of 3.6 years, a 5-kg weight gain was associated with a 1.10 mL/min/1.73 m2 (95% CI: 0.87 to 1.33; P < 0.001) increase in unindexed mGFR. There was no association between weight change and mGFR indexed for concurrent BSA (per 5 kg weight gain, 0.21; 95% CI: −0.02 to 0.44; P = 0.1) or between weight change and eGFR (−0.09; 95% CI: −0.32 to 0.14; P = 0.4). The effect of weight change on unindexed mGFR was less pronounced in individuals with higher baseline sodium excretion (P = 0.08 for interaction). Conclusion The association between weight change and kidney function varies depending on the method of assessment. Future clinical trials should examine the effect of intentional weight change on measured GFR or filtration markers robust to changes in muscle mass. PMID:26085555
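The indexing at issue can be made concrete. The usual convention rescales a measured GFR by body surface area to 1.73 m²; the Du Bois formula below is one standard BSA estimate (the study's exact BSA formula is not stated in this abstract, so treat it as an assumption for illustration).

```python
def du_bois_bsa(weight_kg, height_cm):
    """Du Bois & Du Bois body surface area estimate in m^2 (one common
    convention; other BSA formulas exist)."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def index_gfr(mgfr, bsa):
    """Index an unindexed mGFR (mL/min) to 1.73 m^2 of body surface area."""
    return mgfr * 1.73 / bsa

bsa = du_bois_bsa(80.0, 175.0)       # roughly 1.96 m^2
gfr_indexed = index_gfr(100.0, bsa)  # mL/min/1.73 m^2
```

Because BSA rises with body weight, a weight gain can raise the unindexed mGFR while leaving the concurrently indexed value nearly flat, which is exactly the divergence the authors report.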

  10. Orbit Tomography: A Method for Determining the Population of Individual Fast-ion Orbits from Experimental Measurements

    NASA Astrophysics Data System (ADS)

    Stagner, L.; Heidbrink, W. W.

    2017-10-01

Due to the complicated nature of the fast-ion distribution function, diagnostic velocity-space weight functions are used to analyze experimental data. In a technique known as Velocity-space Tomography (VST), velocity-space weight functions are combined with experimental measurements to create a system of linear equations that can be solved. However, VST (which by definition ignores spatial dependencies) is restricted, both by the accuracy of its forward model and also by the availability of spatially overlapping diagnostics. In this work we extend velocity-space weight functions to a full 6D generalized coordinate system and then show how to reduce them to a 3D orbit-space without loss of generality using an action-angle formulation. Furthermore, we show how diagnostic orbit-weight functions can be used to infer the full fast-ion distribution function, i.e. Orbit Tomography. Examples of orbit weight functions for different diagnostics and reconstructions of fast-ion distributions are shown for DIII-D experiments. This work was supported by the U.S. Department of Energy under DE-AC02-09CH11466 and DE-FC02-04ER54698.

  11. Advanced Parental Ages and Low Birth Weight in Autism Spectrum Disorders--Rates and Effect on Functioning

    ERIC Educational Resources Information Center

    Ben Itzchak, Esther; Lahat, Eli; Zachor, Ditza A.

    2011-01-01

    Objectives: (1) To assess the distribution of parental age and birth weight in a large cohort with autism spectrum disorder (ASD) and to compare them to Israeli national data. (2) To examine possible relationships between these risk factors and functioning. Methods: The study included 529 participants diagnosed with ASD using standardized tests:…

  12. A method for determining optimum phasing of a multiphase propulsion system for a single-stage vehicle with linearized inert weight

    NASA Technical Reports Server (NTRS)

    Martin, J. A.

    1974-01-01

    A general analytical treatment is presented of a single-stage vehicle with multiple propulsion phases. A closed-form solution for the cost and for the performance and a derivation of the optimal phasing of the propulsion are included. Linearized variations in the inert weight elements are included, and the function to be minimized can be selected. The derivation of optimal phasing results in a set of nonlinear algebraic equations for optimal fuel volumes, for which a solution method is outlined. Three specific example cases are analyzed: minimum gross lift-off weight, minimum inert weight, and a minimized general function for a two-phase vehicle. The results for the two-phase vehicle are applied to the dual-fuel rocket. Comparisons with single-fuel vehicles indicate that dual-fuel vehicles can have lower inert weight either by development of a dual-fuel engine or by parallel burning of separate engines from lift-off.

  13. Analysis of surface cracks at hole by a 3-D weight function method with stresses from finite element method

    NASA Technical Reports Server (NTRS)

    Zhao, W.; Newman, J. C., Jr.; Sutton, M. A.; Shivakumar, K. N.; Wu, X. R.

    1995-01-01

    Parallel with the work in Part-1, stress intensity factors for semi-elliptical surface cracks emanating from a circular hole are determined. The 3-D weight function method with the 3D finite element solutions for the uncracked stress distribution as in Part-1 is used for the analysis. Two different loading conditions, i.e. remote tension and wedge loading, are considered for a wide range in geometrical parameters. Both single and double surface cracks are studied and compared with other solutions available in the literature. Typical crack opening displacements are also provided.

  14. Graph characterization via Ihara coefficients.

    PubMed

    Ren, Peng; Wilson, Richard C; Hancock, Edwin R

    2011-02-01

    The novel contributions of this paper are twofold. First, we demonstrate how to characterize unweighted graphs in a permutation-invariant manner using the polynomial coefficients from the Ihara zeta function, i.e., the Ihara coefficients. Second, we generalize the definition of the Ihara coefficients to edge-weighted graphs. For an unweighted graph, the Ihara zeta function is the reciprocal of a quasi characteristic polynomial of the adjacency matrix of the associated oriented line graph. Since the Ihara zeta function has poles that give rise to infinities, the most convenient numerically stable representation is to work with the coefficients of the quasi characteristic polynomial. Moreover, the polynomial coefficients are invariant to vertex order permutations and also convey information concerning the cycle structure of the graph. To generalize the representation to edge-weighted graphs, we make use of the reduced Bartholdi zeta function. We prove that the computation of the Ihara coefficients for unweighted graphs is a special case of our proposed method for unit edge weights. We also present a spectral analysis of the Ihara coefficients and indicate their advantages over other graph spectral methods. We apply the proposed graph characterization method to capturing graph-class structure and clustering graphs. Experimental results reveal that the Ihara coefficients are more effective than methods based on Laplacian spectra.
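For unit edge weights, the construction can be sketched directly from the definitions above: build the adjacency matrix T of the oriented line graph (arcs connected without backtracking), and read the Ihara coefficients off det(I − uT). This is a small illustration of the definition, not the paper's optimized algorithm:

```python
import numpy as np

def ihara_coefficients(adj):
    """Ihara coefficients of an unweighted graph: coefficients of the
    quasi characteristic polynomial det(I - u*T), where T is the
    adjacency matrix of the oriented line graph (unit edge weights)."""
    n = adj.shape[0]
    arcs = [(u, v) for u in range(n) for v in range(n) if adj[u, v]]
    m = len(arcs)
    T = np.zeros((m, m))
    for i, (u, v) in enumerate(arcs):
        for j, (v2, w) in enumerate(arcs):
            if v2 == v and w != u:   # arc (u,v) feeds arc (v,w), no backtracking
                T[i, j] = 1.0
    # det(I - u*T) = sum_k (-1)^k e_k u^k over eigenvalues of T, and
    # np.poly returns exactly these signed elementary symmetric polynomials.
    return np.poly(T)

# Triangle C3: the Ihara zeta satisfies zeta(u)^(-1) = (1 - u^3)^2,
# so the coefficients should be 1, 0, 0, -2, 0, 0, 1.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
coeffs = ihara_coefficients(A)
```

The coefficients are invariant to vertex relabeling because the spectrum of T is, which is the permutation invariance the paper exploits.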

  15. Production of low-molecular weight soluble yeast β-glucan by an acid degradation method.

    PubMed

    Ishimoto, Yuina; Ishibashi, Ken-Ichi; Yamanaka, Daisuke; Adachi, Yoshiyuki; Kanzaki, Ken; Iwakura, Yoichiro; Ohno, Naohito

    2018-02-01

β-glucan is widely distributed in nature in water-soluble and insoluble forms. Both forms of β-glucan are utilized in several fields, especially for functional foods. Yeast β-glucan is a medically important insoluble particle. Solubilization of yeast β-glucan may be valuable for improving functional foods and in medicinal industries. In the present study, we applied an acid degradation method to solubilize yeast β-glucan and found that β-glucan was effectively solubilized to low-molecular weight β-glucans by 45% sulfuric acid treatment at 20°C. The acid-degraded soluble yeast β-glucan (ad-sBBG) was further fractionated into a higher-molecular weight fraction (ad-sBBG-high) and a lower-molecular weight fraction (ad-sBBG-low). Since ad-sBBG-high contained mannan while ad-sBBG-low contained almost none, it was possible to prepare low-molecular weight soluble β-glucan with higher purity. In addition, ad-sBBG-low bound to dectin-1, which is an innate immunity receptor of β-glucan, and showed antagonistic activity against reactive oxygen production and cytokine synthesis by macrophages. Thus, this acid degradation method is an important procedure for generating immune-modulating, low-molecular weight, soluble yeast β-glucan. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. A method for measuring quality of life through subjective weighting of functional status.

    PubMed

    Stineman, Margaret G; Wechsler, Barbara; Ross, Richard; Maislin, Greg

    2003-04-01

To apply a new tool to understand the quality of life (QOL) implications of patients' functional status. Results from the Features-Resource Trade-Off Game were used to form utility weights by ranking functional activities by the relative value of achieving independence in each activity compared with all other component activities. The utility weights were combined with patients' actual levels of performance across the same activities to produce QOL-weighted functional status scores and to form "value rulers" to order activities by perceived importance. Persons with severe disabilities living in the community and clinicians practicing in various rehabilitation disciplines. Two panels of 5 consumers with disabilities and 2 panels of 5 rehabilitation clinicians. The 4 panels played the Features-Resource Trade-Off Game by using the FIM(TM) instrument definitions. Utility weights for each of the 18 FIM items, QOL-weighted FIM scores, and value rulers. All 4 panels valued the achievement of independence in cognitive and communication activities more than independence in physical activities. Consequently, the unweighted FIM scores of patients who have severe physical disabilities but relatively intact cognitive skills will underestimate QOL, while inflating QOL in those with low levels of independence in cognition and communication but higher physical function. Independence in some activities is more valued than in others; thus, 2 people with the same numeric functional status score could experience very different QOL. QOL-weighted functional status scores translate objectively measured functional status into its subjective meaning. This new technology for measuring subjective function-related QOL has a variety of applications to clinical, educational, and research practices.

  17. Gaussian windows: A tool for exploring multivariate data

    NASA Technical Reports Server (NTRS)

    Jaeckel, Louis A.

    1990-01-01

    Presented here is a method for interactively exploring a large set of quantitative multivariate data, in order to estimate the shape of the underlying density function. It is assumed that the density function is more or less smooth, but no other specific assumptions are made concerning its structure. The local structure of the data in a given region may be examined by viewing the data through a Gaussian window, whose location and shape are chosen by the user. A Gaussian window is defined by giving each data point a weight based on a multivariate Gaussian function. The weighted sample mean and sample covariance matrix are then computed, using the weights attached to the data points. These quantities are used to compute an estimate of the shape of the density function in the window region. The local structure of the data is described by a method similar to the method of principal components. By taking many such local views of the data, we can form an idea of the structure of the data set. The method is applicable in any number of dimensions. The method can be used to find and describe simple structural features such as peaks, valleys, and saddle points in the density function, and also extended structures in higher dimensions. With some practice, we can apply our geometrical intuition to these structural features in any number of dimensions, so that we can think about and describe the structure of the data. Since the computations involved are relatively simple, the method can easily be implemented on a small computer.
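The procedure above can be sketched directly: give each data point a weight from a multivariate Gaussian window, then compute the weighted sample mean and covariance. The window center and shape below are illustrative choices made by the user, as in the method described.

```python
import numpy as np

def gaussian_window_stats(X, center, cov_win):
    """Weight each row of X by a multivariate Gaussian window located at
    `center` with shape `cov_win`, then return the weighted sample mean
    and weighted sample covariance of the data."""
    X = np.asarray(X, float)
    d = X - center
    P = np.linalg.inv(cov_win)
    # Quadratic form per point, then unnormalized Gaussian weights.
    w = np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, P, d))
    w /= w.sum()
    mean = w @ X
    dc = X - mean
    cov = (w[:, None] * dc).T @ dc
    return mean, cov

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))  # synthetic 2D data
mean, cov = gaussian_window_stats(X, center=np.zeros(2), cov_win=np.eye(2))
```

An eigendecomposition of the returned covariance then describes the local structure in the window, in the manner of principal components.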

  18. Does Choice of Multicriteria Method Matter? An Experiment in Water Resources Planning

    NASA Astrophysics Data System (ADS)

    Hobbs, Benjamin F.; Chankong, Vira; Hamadeh, Wael; Stakhiv, Eugene Z.

    1992-07-01

    Many multiple criteria decision making methods have been proposed and applied to water planning. Their purpose is to provide information on tradeoffs among objectives and to help users articulate value judgments in a systematic, coherent, and documentable manner. The wide variety of available techniques confuses potential users, causing inappropriate matching of methods with problems. Experiments in which water planners apply more than one multicriteria procedure to realistic problems can help dispel this confusion by testing method appropriateness, ease of use, and validity. We summarize one such experiment where U.S. Army Corps of Engineers personnel used several methods to screen urban water supply plans. The methods evaluated include goal programming, ELECTRE I, additive value functions, multiplicative utility functions, and three techniques for choosing weights (direct rating, indifference tradeoff, and the analytical hierarchy process). Among the conclusions we reach are the following. First, experienced planners generally prefer simpler, more transparent methods. Additive value functions are favored. Yet none of the methods are endorsed by a majority of the participants; many preferred to use no formal method at all. Second, there is strong evidence that rating, the most commonly applied weight selection method, is likely to lead to weights that fail to represent the trade-offs that users are willing to make among criteria. Finally, we show that decisions can be as or more sensitive to the method used as to which person applies it. Therefore, if who chooses is important, then so too is how a choice is made.
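The additive value function that participants favored is simple to state: each criterion i gets a weight w_i (summing to 1) and a single-criterion value v_i scaled to [0, 1], and a plan's score is the weighted sum. A minimal sketch with hypothetical criteria and weights:

```python
def additive_value(weights, values):
    """Additive multicriteria value: sum of w_i * v_i, with the weights
    summing to 1 and each single-criterion value scaled to [0, 1]."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * v for w, v in zip(weights, values))

# Hypothetical water-supply plan scored on cost, reliability, environment.
score = additive_value([0.5, 0.3, 0.2], [0.8, 0.6, 0.9])
```

The experiment's finding about rating is precisely that the weights fed into such a sum often fail to reflect the tradeoffs users would actually accept, so how the `weights` vector is elicited matters as much as the formula.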

  19. GENIE - Generation of computational geometry-grids for internal-external flow configurations

    NASA Technical Reports Server (NTRS)

    Soni, B. K.

    1988-01-01

Progress realized in the development of a master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by developing strategies to utilize Bézier curves/surfaces and splines along with a weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D-3D grids are provided to illustrate the success of these methods.

  20. [Plaque segmentation of intracoronary optical coherence tomography images based on K-means and improved random walk algorithm].

    PubMed

    Wang, Guanglei; Wang, Pengyu; Han, Yechen; Liu, Xiuling; Li, Yan; Lu, Qian

    2017-06-01

In recent years, optical coherence tomography (OCT) has developed into a popular coronary imaging technology worldwide. The segmentation of plaque regions in coronary OCT images has great significance for vulnerable plaque recognition and research. In this paper, a new algorithm based on K-means clustering and improved random walk is proposed, achieving semi-automated segmentation of calcified plaque, fibrotic plaque and lipid pools. The weight function of the random walk is also improved: the distance between pixel edges in the image and the seed points is added to the definition of the weight function, which increases weak edge weights and prevents over-segmentation. Based on the above methods, OCT images of 9 coronary atherosclerotic patients were selected for plaque segmentation. Comparison of the doctors' manual segmentation results with this method showed that the method has good robustness and accuracy. It is hoped that this method can be helpful for the clinical diagnosis of coronary heart disease.
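The abstract does not give the exact functional form of the improved weight, so the sketch below only illustrates the idea: the classical random-walk edge weight depends on the intensity difference, and a seed-distance term modulates it so that identical weak edges are treated differently near and far from the seeds. Both `beta` and `gamma` are hypothetical parameters.

```python
import numpy as np

def edge_weight(g_i, g_j, d_seed, beta=90.0, gamma=0.1):
    """Illustrative random-walk edge weight: the usual intensity term
    exp(-beta * (g_i - g_j)^2) modulated by a distance-to-seed term
    exp(-gamma * d_seed). Not the paper's exact formula."""
    return np.exp(-beta * (g_i - g_j) ** 2) * np.exp(-gamma * d_seed)

# The same weak edge (small intensity difference) gets a higher weight
# near a seed point than far from one.
w_near = edge_weight(0.50, 0.52, d_seed=1.0)
w_far = edge_weight(0.50, 0.52, d_seed=10.0)
```

In the classical random walker these edge weights define the graph Laplacian whose harmonic solution assigns each pixel to a seed label.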

  1. A Decision Analysis Framework for Evaluation of Helmet Mounted Display Alternatives for Fighter Aircraft

    DTIC Science & Technology

    2014-12-26

    additive value function, which assumes mutual preferential independence (Gregory S. Parnell, 2013). In other words, this method can be used if the... additive value function method to calculate the aggregate value of multiple objectives. Step 9 : Sensitivity Analysis Once the global values are...gravity metric, the additive method will be applied using equal weights for each axis value function. Pilot Satisfaction (Usability) As expressed

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boutilier, Justin J., E-mail: j.boutilier@mail.utoronto.ca; Lee, Taewoo; Craig, Tim

Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics.
Conclusions: The authors demonstrated that the KNN and MLR weight prediction methodologies perform comparably to the LR model and can produce clinical quality treatment plans by simultaneously predicting multiple weights that capture trade-offs associated with sparing multiple OARs.
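Of the three models, the weighted KNN is the easiest to sketch: predict a new patient's objective-function weight vector as an inverse-distance-weighted average of the optimal weights of the k most similar training geometries. The feature values and names below are illustrative, not from the study.

```python
import numpy as np

def weighted_knn_predict(X_train, Y_train, x, k=3):
    """Inverse-distance-weighted KNN regression: average the known
    objective-function weight vectors of the k nearest geometries."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-12)   # guard against exact feature matches
    w /= w.sum()
    return w @ Y_train[idx]

# Toy geometry features (e.g., an overlap-volume ratio) and known weights.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
Y = np.array([[0.0], [1.0], [2.0], [3.0]])
pred = weighted_knn_predict(X, Y, np.array([1.1]), k=2)
```

In the study's setting, each row of `X` would hold the OV and OVS features and each row of `Y` the corresponding inverse-optimized bladder and rectum weights.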

  3. Quality of life in overweight and obese young Chinese children: a mixed-method study

    PubMed Central

    2013-01-01

    Background Obesity among young children in Hong Kong has become a public health problem. This study explored associations between Chinese parent reported children’s quality of life (QoL), socio-demographics and young children’s weight status from 27 preschool settings. Methods A mixed-method approach, including quantitative and qualitative tools, was employed for this cross-sectional study. Quantitative data were collected from 336 Chinese parents of children aged 2–7 years. Paediatric Quality of Life Inventory 4.0 (PedsQL, v 4.0) and a questionnaire about parents’ socio-demographics were used. In-depth interviews with mothers, teachers and children from a larger sample were the basis of 10 case studies. Quantitative data were analysed using chi-square analysis, one-way ANOVA and logistic regression. Qualitative data were analysed according to a multi-level framework that established linkages with quantitative data. Results The children’s Body Mass Index (BMI) ranged from 11.3 to 28.0 kg/m2 and was classified into four weight groups. ANOVAs showed that the normal-weight children had significantly higher PedsQL scores in Physical Functioning than obese children (mean difference = 14.19, p < .0083) and significantly higher scores in School Functioning than overweight children (mean difference = 10.15, p < .0083). Results of logistic regression showed that relative to normal-weight children, obese children had a 2–5 times higher odds of showing problems in Physical, Social Functioning and School Performance. Overweight children had 2 times higher odds of problems in Social Functioning, and underweight children had a 2 times higher odds of problems in Physical Functioning. Children’s age (χ2 = 21.71, df = 3, p < 0.01), and housing (χ2 = 33.00, df = 9, p < 0.01) were associated with their weight. 
The case studies further supplement the quantitative data, indicating that children showed emotional problems across the different abnormal weight statuses, and that the association between children’s weight status and well-being might be affected by multiple childcare arrangements and familial immigration status. Conclusions This study is one of only a few studies that have examined parents’, teachers’ and young children’s own perceptions of the children’s quality of life across different weight statuses. The results are discussed in terms of their implications for intervention. PMID:23496917

  4. Functional annotation by sequence-weighted structure alignments: statistical analysis and case studies from the Protein 3000 structural genomics project in Japan.

    PubMed

    Standley, Daron M; Toh, Hiroyuki; Nakamura, Haruki

    2008-09-01

A method to functionally annotate structural genomics targets, based on a novel structural alignment scoring function, is proposed. In the proposed score, position-specific scoring matrices are used to weight structurally aligned residue pairs to highlight evolutionarily conserved motifs. The functional form of the score is first optimized for discriminating domains belonging to the same Pfam family from domains belonging to different families but the same CATH or SCOP superfamily. In the optimization stage, we consider four standard weighting functions as well as our own, the "maximum substitution probability," and combinations of these functions. The optimized score achieves an area of 0.87 under the receiver-operating characteristic curve with respect to identifying Pfam families within a sequence-unique benchmark set of domain pairs. Confidence measures are then derived from the benchmark distribution of true-positive scores. The alignment method is next applied to the task of functionally annotating 230 query proteins released to the public as part of the Protein 3000 structural genomics project in Japan. Of these queries, 78 were found to align to templates with the same Pfam family as the query or had sequence identities ≥ 30%. Another 49 queries were found to match more distantly related templates. Within this group, the template predicted by our method to be the closest functional relative was often not the most structurally similar. Several nontrivial cases are discussed in detail. Finally, 103 queries matched templates at the fold level, but not the family or superfamily level, and remain functionally uncharacterized. © 2008 Wiley-Liss, Inc.

  5. Size-exclusion chromatography (HPLC-SEC) technique optimization by simplex method to estimate molecular weight distribution of agave fructans.

    PubMed

    Moreno-Vilet, Lorena; Bostyn, Stéphane; Flores-Montaño, Jose-Luis; Camacho-Ruiz, Rosa-María

    2017-12-15

Agave fructans are increasingly important in food industry and nutrition sciences as a potential ingredient of functional food, thus practical analysis tools to characterize them are needed. In view of the importance of the molecular weight on the functional properties of agave fructans, this study has the purpose to optimize a method to determine their molecular weight distribution by HPLC-SEC for industrial application. The optimization was carried out using a simplex method. The optimum conditions obtained were a column temperature of 61.7°C using tri-distilled water without salt, an adjusted pH of 5.4 and a flow rate of 0.36 mL/min. The exclusion range is from a polymerization degree of 1 to 49 (180-7966 Da). This proposed method represents an accurate and fast alternative to standard methods involving multiple-detection or hydrolysis of fructans. The industrial applications of this technique might be for quality control, study of fractionation processes and determination of purity. Copyright © 2017 Elsevier Ltd. All rights reserved.
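The simplex (Nelder-Mead) search used for this kind of method optimization can be illustrated with SciPy. In the study the objective is evaluated experimentally on the chromatograph; the quadratic "separation error" below is purely a stand-in surface whose minimum is placed at the reported optimum for illustration.

```python
from scipy.optimize import minimize

def separation_error(x):
    """Hypothetical response surface over column temperature (deg C) and
    flow rate (mL/min); a stand-in for the experimental objective."""
    temp, flow = x
    return (temp - 61.7) ** 2 / 100.0 + (flow - 0.36) ** 2 * 50.0

# Nelder-Mead needs no derivatives, which suits noisy experimental objectives.
res = minimize(separation_error, x0=[50.0, 0.5], method='Nelder-Mead',
               options={'xatol': 1e-4, 'fatol': 1e-8})
```

Each Nelder-Mead iteration reflects, expands or contracts a simplex of candidate operating conditions, so only objective evaluations (here, chromatographic runs) are required.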

  6. Optimal design of a bank of spatio-temporal filters for EEG signal classification.

    PubMed

    Higashi, Hiroshi; Tanaka, Toshihisa

    2011-01-01

    The spatial weights for electrodes called common spatial pattern (CSP) are known to be effective in EEG signal classification for motor imagery based brain computer interfaces (MI-BCI). To achieve accurate classification in CSP, the frequency filter should be properly designed. To this end, several methods for designing the filter have been proposed. However, the existing methods cannot consider plural brain activities described with different frequency bands and different spatial patterns such as activities of mu and beta rhythms. In order to efficiently extract these brain activities, we propose a method to design plural filters and spatial weights which extract desired brain activity. The proposed method designs finite impulse response (FIR) filters and the associated spatial weights by optimization of an objective function which is a natural extension of CSP. Moreover, we show by a classification experiment that the bank of FIR filters which are designed by introducing an orthogonality into the objective function can extract good discriminative features. Moreover, the experiment result suggests that the proposed method can automatically detect and extract brain activities related to motor imagery.
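The baseline CSP that the proposed objective function extends can be sketched as a generalized eigenvalue problem between the two classes' spatial covariance matrices; filters at both ends of the spectrum maximize the variance ratio for one class or the other. The toy covariances below are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def csp_weights(C1, C2, n_filters=1):
    """Classical CSP: spatial weights w extremizing w'C1w / w'(C1+C2)w,
    taken from both ends of the generalized eigenvalue spectrum."""
    vals, vecs = eigh(C1, C1 + C2)              # eigenvalues in ascending order
    order = np.argsort(vals)
    picks = np.r_[order[:n_filters], order[-n_filters:]]
    return vecs[:, picks]

# Toy class covariances: channel 0 is active in class 1, channel 3 in class 2.
C1 = np.diag([4.0, 1.0, 1.0, 1.0])
C2 = np.diag([1.0, 1.0, 1.0, 4.0])
W = csp_weights(C1, C2)
```

The proposed method jointly optimizes FIR filter taps alongside such spatial weights, so that mu- and beta-band activity with different spatial patterns can each get its own filter/weight pair.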

  7. Predicting overlapping protein complexes from weighted protein interaction graphs by gradually expanding dense neighborhoods.

    PubMed

    Dimitrakopoulos, Christos; Theofilatos, Konstantinos; Pegkas, Andreas; Likothanassis, Spiros; Mavroudi, Seferina

    2016-07-01

    Proteins are vital biological molecules driving many fundamental cellular processes. They rarely act alone, but form interacting groups called protein complexes. The study of protein complexes is a key goal in systems biology. Recently, large protein-protein interaction (PPI) datasets have been published, and a plethora of computational methods providing new ideas for the prediction of protein complexes have been implemented. However, most of the methods suffer from two major limitations: first, they do not account for proteins participating in multiple functions, and second, they are unable to handle weighted PPI graphs. Moreover, the problem remains open, as existing algorithms and tools are insufficient in terms of predictive metrics. In the present paper, we propose gradually expanding neighborhoods with adjustment (GENA), a new algorithm that gradually expands neighborhoods in a graph starting from highly informative "seed" nodes. GENA treats proteins as multifunctional molecules, allowing them to participate in more than one protein complex. In addition, GENA accepts weighted PPI graphs by using a weighted evaluation function for each cluster. In experiments with Saccharomyces cerevisiae and human datasets, GENA outperformed Markov clustering, restricted neighborhood search, and clustering with overlapping neighborhood expansion, three state-of-the-art methods for computationally predicting protein complexes. Seven PPI networks and seven evaluation datasets were used in total. GENA outperformed the existing methods in 16 out of 18 experiments, achieving an average improvement of 5.5% under the maximum matching ratio metric. Our method was able to discover functionally homogeneous protein clusters and uncover important network modules in a Parkinson's disease expression dataset. On the human networks, around 47% of the detected clusters were enriched in gene ontology (GO) terms deeper than five levels in the GO hierarchy. In summary, we introduce a new method for the computational prediction of protein complexes under the realistic assumption that proteins participate in multiple protein complexes and cellular functions. Our method detects accurate and functionally homogeneous clusters. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Term frequency - function of document frequency: a new term weighting scheme for enterprise information retrieval

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Wang, Deqing; Wu, Wenjun; Hu, Hongping

    2012-11-01

    In today's business environment, enterprises are increasingly under pressure to process the vast amounts of data produced every day. One approach is to focus on business intelligence (BI) applications and increase commercial added value through business analytics activities. Term weighting, which is used to convert documents into vectors in the term space, is a vital task in enterprise information retrieval (IR), text categorisation, text analytics, etc. When determining term weights in a document, the traditional TF-IDF scheme considers only a term's occurrence frequency within the document and in the entire document set, which leaves some meaningful terms without appropriate weight. In this article, we propose a new term weighting scheme called Term Frequency - Function of Document Frequency (TF-FDF) to address this issue. Instead of using a monotonically decreasing function such as inverse document frequency, FDF is a convex function that dynamically adjusts weights according to the significance of the words in a document set. This function can be manually tuned based on the distribution of the most meaningful words that semantically represent the document set. Our experiments show that TF-FDF achieves higher Normalised Discounted Cumulative Gain in IR than TF-IDF and its variants, improving the accuracy of relevance ranking of the IR results.
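    The contrast between a monotonically decreasing IDF and a non-monotonic FDF can be illustrated as follows. The abstract does not give the exact FDF, so `tf_fdf` below, peaked at a tunable document-frequency ratio `df_opt`, is purely a hypothetical stand-in for a hand-tuned non-monotonic weighting:

```python
import math

def tf_idf(tf, df, n_docs):
    # classical scheme: weight falls monotonically as document frequency rises
    return tf * math.log(n_docs / df)

def tf_fdf(tf, df, n_docs, df_opt=0.1):
    """Illustrative non-monotonic alternative: weight peaks near a tunable
    'most meaningful' document-frequency ratio df_opt (a guess, not the
    paper's actual FDF)."""
    r = df / n_docs
    return tf * math.exp(-((math.log(r) - math.log(df_opt)) ** 2))
```

    Unlike IDF, the stand-in penalizes both extremely rare and extremely common terms.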

  9. Stochastic search, optimization and regression with energy applications

    NASA Astrophysics Data System (ADS)

    Hannah, Lauren A.

    Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools to economically evaluate those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or provide support for solution methods: R&D portfolio optimization, nonparametric regression, and stochastic search with an observable state variable. First, we consider the one-stage R&D portfolio optimization problem, which avoids the sequential decision process associated with the multi-stage problem. The one-stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values---which depend on the selected portfolio---to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet process mixtures of generalized linear models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples for when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and categorical response.
We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression models. We evaluate DP-GLM on several data sets, comparing it to modern methods of nonparametric regression such as CART, Bayesian trees, and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems where a noisy objective function value is observed after a decision is made. There are many stochastic search problems whose behavior depends on an exogenous state variable that affects the shape of the objective function. Currently, there is no general-purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods that depend on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel-based weights and, more generally, that nonparametric estimation methods provide good solutions to otherwise intractable problems.

  10. Multifunctional Graphene-Silicone Elastomer Nanocomposite, Method of Making the Same, and Uses Thereof

    NASA Technical Reports Server (NTRS)

    Aksay, Ilhan A. (Inventor); Pan, Shuyang (Inventor); Prud'Homme, Robert K. (Inventor)

    2016-01-01

    A nanocomposite composition having a silicone elastomer matrix having therein a filler loading of greater than 0.05 weight percentage, based on total nanocomposite weight, wherein the filler is functional graphene sheets (FGS) having a surface area of from 300 square meters per gram to 2630 square meters per gram; and a method for producing the nanocomposite and uses thereof.

  11. Variable Weight Fractional Collisions for Multiple Species Mixtures

    DTIC Science & Technology

    2017-08-28

    DISTRIBUTION A: APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED; PA #17517. Variable weights for dynamic range. Continuum-to-discrete representation: many particles approximate a continuous distribution. A discretized VDF yields the Vlasov equation, but the collision integral is still a problem. Particle methods reduce the VDF to a set of delta functions, with collisions between discrete velocities, but the tail is poorly resolved (and the tail is critical to inelastic collisions). Variable weights permit extra degrees of freedom in ...

  12. WGCNA: an R package for weighted correlation network analysis.

    PubMed

    Langfelder, Peter; Horvath, Steve

    2008-12-29

    Correlation networks are increasingly being used in bioinformatics applications. For example, weighted gene co-expression network analysis is a systems biology method for describing the correlation patterns among genes across microarray samples. Weighted correlation network analysis (WGCNA) can be used for finding clusters (modules) of highly correlated genes, for summarizing such clusters using the module eigengene or an intramodular hub gene, for relating modules to one another and to external sample traits (using eigengene network methodology), and for calculating module membership measures. Correlation networks facilitate network-based gene screening methods that can be used to identify candidate biomarkers or therapeutic targets. These methods have been successfully applied in various biological contexts, e.g. cancer, mouse genetics, yeast genetics, and analysis of brain imaging data. While parts of the correlation network methodology have been described in separate publications, there is a need to provide a user-friendly, comprehensive, and consistent software implementation and an accompanying tutorial. The WGCNA R software package is a comprehensive collection of R functions for performing various aspects of weighted correlation network analysis. The package includes functions for network construction, module detection, gene selection, calculations of topological properties, data simulation, visualization, and interfacing with external software. Along with the R package we also present R software tutorials. While the methods development was motivated by gene expression data, the underlying data mining approach can be applied to a variety of different settings. The WGCNA package provides R functions for weighted correlation network analysis, e.g. co-expression network analysis of gene expression data. 
The R package along with its source code and additional material are freely available at http://www.genetics.ucla.edu/labs/horvath/CoexpressionNetwork/Rpackages/WGCNA.
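    The core network-construction step of weighted correlation network analysis, the soft-thresholded correlation adjacency, can be sketched outside R as well. This numpy version is a minimal stand-in for the package's adjacency construction; the choice of power `beta = 6` is an assumed example, not a recommendation:

```python
import numpy as np

def soft_threshold_adjacency(expr, beta=6):
    """Unsigned weighted network: adjacency is the absolute gene-gene
    correlation raised to a soft-thresholding power beta.
    expr has shape (samples, genes)."""
    corr = np.corrcoef(expr, rowvar=False)
    adj = np.abs(corr) ** beta
    np.fill_diagonal(adj, 0.0)  # no self-connections
    # weighted connectivity (degree) of each gene
    k = adj.sum(axis=0)
    return adj, k
```

    Module detection would then proceed by hierarchical clustering of a dissimilarity derived from this adjacency.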

  13. WGCNA: an R package for weighted correlation network analysis

    PubMed Central

    Langfelder, Peter; Horvath, Steve

    2008-01-01

    Background Correlation networks are increasingly being used in bioinformatics applications. For example, weighted gene co-expression network analysis is a systems biology method for describing the correlation patterns among genes across microarray samples. Weighted correlation network analysis (WGCNA) can be used for finding clusters (modules) of highly correlated genes, for summarizing such clusters using the module eigengene or an intramodular hub gene, for relating modules to one another and to external sample traits (using eigengene network methodology), and for calculating module membership measures. Correlation networks facilitate network-based gene screening methods that can be used to identify candidate biomarkers or therapeutic targets. These methods have been successfully applied in various biological contexts, e.g. cancer, mouse genetics, yeast genetics, and analysis of brain imaging data. While parts of the correlation network methodology have been described in separate publications, there is a need to provide a user-friendly, comprehensive, and consistent software implementation and an accompanying tutorial. Results The WGCNA R software package is a comprehensive collection of R functions for performing various aspects of weighted correlation network analysis. The package includes functions for network construction, module detection, gene selection, calculations of topological properties, data simulation, visualization, and interfacing with external software. Along with the R package we also present R software tutorials. While the methods development was motivated by gene expression data, the underlying data mining approach can be applied to a variety of different settings. Conclusion The WGCNA package provides R functions for weighted correlation network analysis, e.g. co-expression network analysis of gene expression data. The R package along with its source code and additional material are freely available at . PMID:19114008

  14. Model Considerations for Memory-based Automatic Music Transcription

    NASA Astrophysics Data System (ADS)

    Albrecht, Štěpán; Šmídl, Václav

    2009-12-01

    The problem of automatic music transcription is considered. The recorded music is modeled as a superposition of known sounds from a library, weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning, and many methods for estimating the weights are available. These methods differ in the assumptions imposed on the weights; in the Bayesian paradigm, these assumptions are typically expressed in the form of a prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about the music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density. The validity of the model is tested in simulations using synthetic data.
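    As a simple illustration of the estimation step, one plausible assumption on sound amplitudes, nonnegativity, can be imposed with nonnegative least squares. This is a stand-in for the paper's full Bayesian prior, and the library `D` and mixture `y` below are synthetic:

```python
import numpy as np
from scipy.optimize import nnls

# library of known sound spectra (columns) and an observed mixture
rng = np.random.default_rng(1)
D = np.abs(rng.normal(size=(64, 5)))      # 5 library sounds, 64 frequency bins
w_true = np.array([0.0, 2.0, 0.0, 1.0, 0.0])
y = D @ w_true                            # observed superposition

# nonnegativity of the weights acts as a simple prior assumption
w_est, residual = nnls(D, y)
```

    With a full-column-rank library and a noiseless mixture, the true weights are recovered exactly; a Bayesian treatment would replace the constraint with an explicit prior pdf.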

  15. AN ASSESSMENT OF MCNP WEIGHT WINDOWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. S. HENDRICKS; C. N. CULBERTSON

    2000-01-01

    The weight window variance reduction method in the general-purpose Monte Carlo N-Particle radiation transport code MCNP has recently been rewritten. In particular, it is now possible to generate weight window importance functions on a superimposed mesh, eliminating the need to subdivide geometries for variance reduction purposes. Our assessment addresses the following questions: (1) Does the new MCNP4C treatment utilize weight windows as well as the former MCNP4B treatment? (2) Does the new MCNP4C weight window generator generate importance functions as well as MCNP4B? (3) How do superimposed mesh weight windows compare to cell-based weight windows? (4) What are the shortcomings of the new MCNP4C weight window generator? Our assessment was carried out with five neutron and photon shielding problems chosen for their demanding variance reduction requirements. The problems were an oil well logging problem, the Oak Ridge fusion shielding benchmark problem, a photon skyshine problem, an air-over-ground problem, and a sample problem for variance reduction.

  16. Comparison of Traditional Design Nonlinear Programming Optimization and Stochastic Methods for Structural Design

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2010-01-01

    Structural designs generated by the traditional method, the optimization method, and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of the structure becomes the merit function, constraints are imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest. Some variation was noticed in the designs calculated by the methods; it may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, while weight can be reduced to a small value for a most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.

  17. Thermodynamic properties of solvated peptides from selective integrated tempering sampling with a new weighting factor estimation algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Lin; Xie, Liangxu; Yang, Mingjun

    2017-04-01

    Conformational sampling on a rugged energy landscape is a perennial challenge in computer simulations. The recently developed integrated tempering sampling method, together with its selective variant (SITS), has emerged as a powerful tool for exploring the free energy landscape or functional motions of various systems. The estimation of weighting factors constitutes a critical step in these methods and requires accurate calculation of the partition function ratio between different thermodynamic states. In this work, we propose a new adaptive update algorithm to compute the weighting factors based on the weighted histogram analysis method (WHAM). The adaptive-WHAM algorithm with SITS is then applied to study the thermodynamic properties of several representative peptide systems solvated in an explicit water box. The performance of the new algorithm is validated in simulations of these solvated peptide systems. We anticipate more applications of this coupled optimization and production algorithm to other complicated systems such as biochemical reactions in solution.

  18. Grey Language Hesitant Fuzzy Group Decision Making Method Based on Kernel and Grey Scale

    PubMed Central

    Diao, Yuzhu; Hu, Aqin

    2018-01-01

    Based on grey language multi-attribute group decision making, a kernel and grey scale scoring function is put forward according to the definition of grey language and the meaning of the kernel and grey scale. The function introduces grey scale into the decision-making method to avoid information distortion. This method is applied to grey language hesitant fuzzy group decision making, and the grey correlation degree is used to rank the schemes. The effectiveness and practicability of the decision-making method are further verified by an example evaluating the sustainable development ability of a circular-economy industry chain. Moreover, its simplicity and feasibility are verified by comparing it with the traditional grey language decision-making method and the grey language hesitant fuzzy weighted arithmetic averaging (GLHWAA) operator integration method after determining the index weights based on grey correlation. PMID:29498699

  19. Grey Language Hesitant Fuzzy Group Decision Making Method Based on Kernel and Grey Scale.

    PubMed

    Li, Qingsheng; Diao, Yuzhu; Gong, Zaiwu; Hu, Aqin

    2018-03-02

    Based on grey language multi-attribute group decision making, a kernel and grey scale scoring function is put forward according to the definition of grey language and the meaning of the kernel and grey scale. The function introduces grey scale into the decision-making method to avoid information distortion. This method is applied to grey language hesitant fuzzy group decision making, and the grey correlation degree is used to rank the schemes. The effectiveness and practicability of the decision-making method are further verified by an example evaluating the sustainable development ability of a circular-economy industry chain. Moreover, its simplicity and feasibility are verified by comparing it with the traditional grey language decision-making method and the grey language hesitant fuzzy weighted arithmetic averaging (GLHWAA) operator integration method after determining the index weights based on grey correlation.
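    The objective part of such index-weight determination is often the entropy weight method; a minimal sketch follows, assuming a strictly positive decision matrix, and it is not the paper's full kernel-and-grey-scale procedure:

```python
import numpy as np

def entropy_weights(X):
    """Objective criterion weights from a decision matrix X
    (alternatives x criteria), all entries positive: criteria whose
    values vary more across alternatives receive more weight."""
    P = X / X.sum(axis=0)                          # column-wise proportions
    m = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)   # normalized entropy in [0, 1]
    d = 1.0 - E                                    # degree of divergence
    return d / d.sum()
```

    A criterion that is constant across all alternatives has maximal entropy and therefore zero weight.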

  20. Element free Galerkin formulation of composite beam with longitudinal slip

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmad, Dzulkarnain; Mokhtaram, Mokhtazul Haizad; Badli, Mohd Iqbal

    2015-05-15

    Behaviour between two materials in a composite beam is assumed to be partial interaction when longitudinal slip at the interfacial surfaces is considered. Commonly analysed by mesh-based formulations, this study instead applies a meshless formulation, the Element Free Galerkin (EFG) method, to the numerical analysis of beam partial interaction. As a meshless formulation discretises the problem domain by nodes only, the EFG method uses the Moving Least Squares (MLS) approach to formulate shape functions, and its weak form is developed using a variational method. The essential boundary conditions are enforced by Lagrange multipliers. The proposed EFG formulation gives comparable results, verified against the analytical solution, thus signifying its applicability to partial interaction problems. Based on the numerical test results, the cubic spline and quartic spline weight functions yield better accuracy for the EFG formulation than the other weight functions considered.
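    The two weight functions singled out above have standard closed forms in the meshless literature; a sketch follows, assuming r is the node distance normalized by the support radius:

```python
import numpy as np

def cubic_spline_weight(r):
    """Standard cubic spline MLS weight; r = |x - x_i| / d_i."""
    r = np.asarray(r, dtype=float)
    w = np.where(r <= 0.5,
                 2/3 - 4*r**2 + 4*r**3,
                 4/3 - 4*r + 4*r**2 - (4/3)*r**3)
    return np.where(r <= 1.0, w, 0.0)

def quartic_spline_weight(r):
    """Standard quartic spline MLS weight with the same support."""
    r = np.asarray(r, dtype=float)
    return np.where(r <= 1.0, 1 - 6*r**2 + 8*r**3 - 3*r**4, 0.0)
```

    Both decay smoothly to zero at the edge of the support, which is what gives the MLS shape functions their compact support.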

  1. Near-Optimal Operation of Dual-Fuel Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Ardema, M. D.; Chou, H. C.; Bowles, J. V.

    1996-01-01

    A near-optimal guidance law for the ascent trajectory from the earth's surface to earth orbit of a fully reusable single-stage-to-orbit pure rocket launch vehicle is derived. Of interest are both the optimal operation of the propulsion system and the optimal flight path. A methodology is developed to investigate the optimal throttle switching of dual-fuel engines. The method is based on selecting propulsion system modes and parameters that maximize a certain performance function, derived from consideration of the energy-state model of the aircraft equations of motion. Because the density of liquid hydrogen is relatively low, the sensitivity to perturbations in volume needs to be taken into consideration as well as weight sensitivity. The cost functional is a weighted sum of fuel mass and volume; the weighting factor is chosen to minimize vehicle empty weight for a given payload mass and volume in orbit.

  2. Robust subspace clustering via joint weighted Schatten-p norm and Lq norm minimization

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Tang, Zhenmin; Liu, Qing

    2017-05-01

    Low-rank representation (LRR) has been successfully applied to subspace clustering. However, the nuclear norm in the standard LRR is not optimal for approximating the rank function in many real-world applications. Meanwhile, the L21 norm in LRR also fails to characterize various noises properly. To address the above issues, we propose an improved LRR method, which achieves low rank property via the new formulation with weighted Schatten-p norm and Lq norm (WSPQ). Specifically, the nuclear norm is generalized to be the Schatten-p norm and different weights are assigned to the singular values, and thus it can approximate the rank function more accurately. In addition, Lq norm is further incorporated into WSPQ to model different noises and improve the robustness. An efficient algorithm based on the inexact augmented Lagrange multiplier method is designed for the formulated problem. Extensive experiments on face clustering and motion segmentation clearly demonstrate the superiority of the proposed WSPQ over several state-of-the-art methods.
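    The generalized rank surrogate can be made concrete with a small helper; this is a sketch of the weighted Schatten-p quasi-norm itself (the full WSPQ optimization with the inexact augmented Lagrange multiplier method is not reproduced, and the default `p` is only an example):

```python
import numpy as np

def weighted_schatten_p(X, weights=None, p=0.5):
    """Weighted Schatten-p quasi-norm raised to the p-th power:
    sum_i w_i * sigma_i**p over singular values sigma_i.
    p = 1 with unit weights recovers the nuclear norm; smaller p
    approximates the rank function more tightly."""
    s = np.linalg.svd(X, compute_uv=False)
    if weights is None:
        weights = np.ones_like(s)
    return float(np.sum(weights * s**p))
```

    Assigning larger weights to large singular values penalizes them less, which preserves the dominant subspace structure while suppressing the rest.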

  3. Short-term prediction of chaotic time series by using RBF network with regression weights.

    PubMed

    Rojas, I; Gonzalez, J; Cañas, A; Diaz, A F; Rojas, F J; Rodriguez, M

    2000-10-01

    We propose a framework for constructing and training a radial basis function (RBF) neural network. The structure of the Gaussian functions is modified using a pseudo-Gaussian function (PG) in which two scaling parameters sigma are introduced; this eliminates the symmetry restriction and provides the neurons in the hidden layer with greater flexibility for function approximation. We propose a modified PG-BF (pseudo-Gaussian basis function) network in which regression weights replace the constant weights in the output layer. For this purpose, a sequential learning algorithm is presented to adapt the structure of the network, making it possible to create new hidden units and also to detect and remove inactive ones. A salient feature of the network is that the overall output is calculated as the weighted average of the outputs associated with each receptive field. The superior performance of the proposed PG-BF system over the standard RBF is illustrated on the problem of short-term prediction of chaotic time series.
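    The weighted average of per-receptive-field outputs can be sketched as follows; the shapes, the asymmetric-width construction, and the `pg_rbf_output` helper are assumptions for illustration, not the paper's exact PG-BF formulation:

```python
import numpy as np

def pg_rbf_output(x, centers, sig_l, sig_r, A, b):
    """Network output as the activation-weighted average of per-unit
    linear models (the 'regression weights' in the output layer).
    Assumed shapes: x (d,), centers (m, d), sig_l/sig_r (m, d),
    A (m, d), b (m,)."""
    diff = x - centers                        # (m, d)
    sig = np.where(diff < 0, sig_l, sig_r)    # pseudo-Gaussian: two widths
    phi = np.exp(-0.5 * np.sum((diff / sig) ** 2, axis=1))  # activations (m,)
    local = A @ x + b                         # per-unit linear model outputs
    return float(np.sum(phi * local) / np.sum(phi))
```

    With a single hidden unit centered at the query point, the output reduces to that unit's local linear model.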

  4. Inferring drug-disease associations based on known protein complexes.

    PubMed

    Yu, Liang; Huang, Jianbin; Ma, Zhixin; Zhang, Jing; Zou, Yapeng; Gao, Lin

    2015-01-01

    Inferring drug-disease associations is critical in unveiling disease mechanisms, as well as in discovering novel functions of available drugs, i.e., drug repositioning. Previous work is primarily based on drug-gene-disease relationships, which discards much important information, since genes execute their functions by interacting with one another. To overcome this issue, we propose a novel methodology that discovers drug-disease associations based on protein complexes. First, an integrated heterogeneous network consisting of drugs, protein complexes, and diseases is constructed, where we assign weights to drug-disease associations using probabilities. Then, from this tripartite network, we obtain the indirect weighted relationships between drugs and diseases. The larger the weight, the more reliable the correlation. We apply our method to mental disorders and hypertension, and validate the results using the Comparative Toxicogenomics Database. Our ranked results are directly reinforced by existing biomedical literature, suggesting that the proposed method achieves higher specificity and sensitivity. The proposed method offers new insight into drug-disease discovery. Our method is publicly available at http://1.complexdrug.sinaapp.com/Drug_Complex_Disease/Data_Download.html.
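    Propagating weights over the tripartite network can be illustrated with a matrix product that accumulates evidence over shared protein complexes; this toy example is a simple stand-in, not the authors' exact probabilistic weighting:

```python
import numpy as np

# toy weighted bipartite relations: drugs x complexes, complexes x diseases
W_dc = np.array([[0.9, 0.0],
                 [0.2, 0.8]])
W_cd = np.array([[0.5, 0.0],
                 [0.1, 0.7]])

# indirect drug-disease weights: each entry sums drug->complex->disease
# path weights, so larger values reflect more (and stronger) shared complexes
W_dd = W_dc @ W_cd
```

    Ranking the entries of `W_dd` then yields candidate drug-disease associations for validation.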

  5. Inferring drug-disease associations based on known protein complexes

    PubMed Central

    2015-01-01

    Inferring drug-disease associations is critical in unveiling disease mechanisms, as well as in discovering novel functions of available drugs, i.e., drug repositioning. Previous work is primarily based on drug-gene-disease relationships, which discards much important information, since genes execute their functions by interacting with one another. To overcome this issue, we propose a novel methodology that discovers drug-disease associations based on protein complexes. First, an integrated heterogeneous network consisting of drugs, protein complexes, and diseases is constructed, where we assign weights to drug-disease associations using probabilities. Then, from this tripartite network, we obtain the indirect weighted relationships between drugs and diseases. The larger the weight, the more reliable the correlation. We apply our method to mental disorders and hypertension, and validate the results using the Comparative Toxicogenomics Database. Our ranked results are directly reinforced by existing biomedical literature, suggesting that the proposed method achieves higher specificity and sensitivity. The proposed method offers new insight into drug-disease discovery. Our method is publicly available at http://1.complexdrug.sinaapp.com/Drug_Complex_Disease/Data_Download.html. PMID:26044949

  6. Meshless Local Petrov-Galerkin Euler-Bernoulli Beam Problems: A Radial Basis Function Approach

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Phillips, D. R.; Krishnamurthy, T.

    2003-01-01

    A radial basis function implementation of the meshless local Petrov-Galerkin (MLPG) method is presented to study Euler-Bernoulli beam problems. Radial basis functions, rather than generalized moving least squares (GMLS) interpolations, are used to develop the trial functions. This choice yields a computationally simpler method as fewer matrix inversions and multiplications are required than when GMLS interpolations are used. Test functions are chosen as simple weight functions as in the conventional MLPG method. Compactly and noncompactly supported radial basis functions are considered. The non-compactly supported cubic radial basis function is found to perform very well. Results obtained from the radial basis MLPG method are comparable to those obtained using the conventional MLPG method for mixed boundary value problems and problems with discontinuous loading conditions.

  7. Modeling an internal gear pump

    NASA Astrophysics Data System (ADS)

    Chen, Zongbin; Xu, Rongwu; He, Lin; Liao, Jian

    2018-05-01

    Considering the nature and characteristics of construction waste piles, this paper analyzes the factors affecting the stability of construction waste pile slopes and establishes a system of assessment indexes for slope failure risks. Based on the basic principles and methods of fuzzy mathematics, the factor set and the remark set are established. The membership grades of continuous factor indexes are determined using the "ridge row distribution" function, while those of discrete factor indexes are determined by the Delphi method. For the factor weights, the subjective weights are determined by the Analytic Hierarchy Process (AHP) and the objective weights by the entropy weight method, with a distance function introduced to determine the combination coefficient. The paper establishes a fuzzy comprehensive assessment model of the slope failure risks of construction waste piles, assessing pile slopes in the two dimensions of hazard and vulnerability; the root mean square of the hazard and vulnerability assessment results is the final assessment result. A construction waste pile slope is then used as an example: the risks of the four stages of a landfill are assessed, the assessment model is verified, and the slope's failure risks and preventive measures against a slide are analyzed.

  8. Study of the relation between body weight and functional limitations and pain in patients with knee osteoarthritis

    PubMed Central

    Alfieri, Fábio Marcon; Silva, Natália Cristina de Oliveira Vargas e; Battistella, Linamara Rizzo

    2017-01-01

    ABSTRACT Objective To assess the influence of body weight on the functional capacity and pain of adult and elderly individuals with knee osteoarthritis. Methods The sample consisted of 107 adult and elderly patients with knee osteoarthritis divided into two groups (adequate weight/adiposity and excessive weight/adiposity) according to body mass index and percentage of body fat mass, assessed by electric bioimpedance. Subjects were evaluated for functional mobility (Timed Up and Go Test), pain, stiffness and function (Western Ontario and McMaster Universities Osteoarthritis Index − WOMAC), pain intensity (Visual Analogue Scale − VAS) and pressure pain tolerance threshold (algometry in vastus medialis and vastus lateralis muscles). Data were analyzed with the Statistical Package for the Social Sciences, version 22 for Windows. Comparisons between groups were made with Student's t test, with the significance level set at 5%. Results There was a predominance of females in the sample (81.3%), and mean age was 61.8±10.1 years. When the sample was divided by both body mass index and adiposity, 89.7% of subjects had excess weight/adiposity and 59.8% were obese. There was no difference between groups regarding age, pain intensity, pressure pain tolerance threshold, functional mobility, stiffness or function. However, pain (WOMAC) was higher (p=0.05) in the group with excess weight or adiposity, and pain perception by VAS was worse in the group of obese patients (p=0.05). Conclusion Excessive weight had a negative impact on patients with osteoarthritis, increasing pain assessed by WOMAC or VAS, although no differences were observed in functionality and pressure pain tolerance. PMID:29091152

  9. SU-E-I-42: Some New Aspects of the Energy Weighting Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganezer, K; Krmar, M; Josipovic, I

    2015-06-15

    Purpose: The development in the last few years of photon-counting pixel detectors creates a significant opportunity for x-ray spectroscopy to be applied in diagnostics. The energy weighting technique was originally developed to obtain the maximum benefit from the spectroscopic information. In all previously published papers the concept of an energy weighting function was tested on relatively simple test objects having only two materials with different absorption coefficients. Methods: In this study the shape of the energy weighting function was investigated with a set of ten trabecular bone test objects, each with a similar geometry and structure but with different attenuation properties. In previous publications it was determined that the function E^-3 was a very good choice for the weighting function (wi). Results: The most important result from this study was the discovery that a single function of the form E^-b was not sufficient to explain the energy dependence of the different types of materials that might be used to describe common bodily tissues such as trabecular bone. It was also determined from the data contained in this paper that the exponent b is often significantly dependent upon the attenuation properties of the materials that were used to make the test objects. Conclusion: Studies of the attenuation properties will be useful in further studies involving energy weighting.
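    The energy weighting idea can be sketched directly: instead of counting every detected photon with weight 1, each count in energy bin E_i contributes with weight w_i = E_i^-b, with b = 3 as the reference choice cited above. The spectrum below is invented toy data, not the trabecular bone measurements of the study:

```python
import numpy as np

def energy_weighted_signal(energies_keV, counts, b=3.0):
    """Energy weighting for photon-counting detectors: each count in energy
    bin E_i contributes with weight w_i = E_i**-b instead of weight 1."""
    w = np.asarray(energies_keV, float) ** (-b)
    return float(np.dot(w, counts))

# toy spectrum: low-energy bins carry most of the contrast information
E = np.array([20.0, 40.0, 60.0, 80.0])      # bin centres, keV (invented)
n_bone = np.array([100, 400, 600, 300])     # counts behind "bone" (invented)
n_soft = np.array([300, 500, 620, 310])     # counts behind "soft tissue" (invented)

for b in (0.0, 3.0):                        # b = 0 reproduces plain photon counting
    s_bone = energy_weighted_signal(E, n_bone, b)
    s_soft = energy_weighted_signal(E, n_soft, b)
    print(f"b={b}: relative contrast = {(s_soft - s_bone) / s_soft:.3f}")
```

    With b = 3 the low-energy bins, where attenuation differences are largest, dominate the weighted sum, which is why the relative contrast increases over plain counting in this toy case.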

  10. Simulation electromagnetic scattering on bodies through integral equation and neural networks methods

    NASA Astrophysics Data System (ADS)

    Lvovich, I. Ya; Preobrazhenskiy, A. P.; Choporov, O. N.

    2018-05-01

    The paper deals with the issue of electromagnetic scattering on a perfectly conducting diffractive body of a complex shape. The scattering performance of the body is calculated through the integral equation method. A Fredholm equation of the second kind was used for calculating the electric current density. While solving the integral equation through the method of moments, the authors properly treated the singularity of the kernel. Piecewise constant functions were chosen as basis functions. Within the Kirchhoff integral approach it is possible to define the scattered electromagnetic field related to the obtained electric currents. The observation angle sector belongs to the front hemisphere of the diffractive body. To improve the characteristics of the diffractive body, the authors used a neural network. All the neurons contained a log-sigmoid activation function and weighted sums as discriminant functions. The paper presents the matrix of weighting factors of the connectionist model, as well as the optimized dimensions of the diffractive body. The paper also presents the basic steps of the calculation technique for diffractive bodies, based on the combination of the integral equation and neural network methods.

  11. Production of ELZM mirrors: performance coupled with attractive schedule, cost, and risk factors

    NASA Astrophysics Data System (ADS)

    Leys, Antoine; Hull, Tony; Westerhoff, Thomas

    2016-08-01

    Extreme light weighted ZERODUR Mirrors (ELZM) have been developed to exploit the superb thermal characteristics of ZERODUR. Coupled with up-to-date mechanical and optical fabrication methods, this becomes an attractive technical approach. Moreover, the process of making the mirror substrates has proven to be unusually rapid and especially cost-effective. ELZM is aimed at the knee of the cost-versus-light-weighting curve; ELZM mirrors are available at 88% light weighting. Together with their low-risk, low-cost production methods, this is presented as a strong option for NASA Explorer and Probe class missions.

  12. Moving force identification based on redundant concatenated dictionary and weighted l1-norm regularization

    NASA Astrophysics Data System (ADS)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin; Chen, Ze-Peng; Luo, Wen-Feng

    2018-01-01

    Moving force identification (MFI) is an important inverse problem in the field of bridge structural health monitoring (SHM). Reasonable signal structures of moving forces are rarely considered in the existing MFI methods. Interaction forces are complex because they contain both slowly-varying harmonic and impact signals, due to bridge vibration and bumps on the bridge deck, respectively. Therefore, the interaction forces are usually hard to express completely and sparsely using a single basis function set. Based on a redundant concatenated dictionary and weighted l1-norm regularization, a hybrid method is proposed for MFI in this study. The redundant dictionary consists of both trigonometric functions and rectangular functions, used for matching the harmonic and impact signal features of unknown moving forces. The weighted l1-norm regularization method is introduced for the formulation of the MFI equation, so that the signal features of moving forces can be accurately extracted. The fast iterative shrinkage-thresholding algorithm (FISTA) is used for solving the MFI problem. The optimal regularization parameter is chosen by the Bayesian information criterion (BIC). To assess the accuracy and feasibility of the proposed method, a simply-supported beam bridge subjected to a moving force is taken as an example for numerical simulations. Finally, a series of experimental studies on MFI of a steel beam are performed in the laboratory. Both numerical and experimental results show that the proposed method can accurately identify the moving forces with strong robustness, and that it performs better than the Tikhonov regularization method. Some related issues are discussed as well.
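    The ingredients named above (a redundant trigonometric-plus-rectangular dictionary, a weighted l1-norm penalty, and the FISTA solver) can be sketched on a toy 1D problem. This is an illustrative reconstruction, not the authors' MFI code: the dictionary sizes, per-atom weights, and signal are all invented, and the BIC-based parameter choice is omitted:

```python
import numpy as np

def fista_weighted_l1(A, y, lam, n_iter=500):
    """FISTA for  min_x 0.5*||A x - y||^2 + sum_i lam_i*|x_i|  (weighted l1)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        v = z - A.T @ (A @ z - y) / L      # gradient step at the momentum point
        x_new = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # weighted soft-threshold
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)                # momentum update
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
n = 120
tgrid = np.linspace(0, 1, n)
# redundant dictionary: 30 harmonic atoms concatenated with 30 rectangular atoms
harm = np.column_stack([np.sin(2 * np.pi * k * tgrid) for k in range(1, 31)])
rect = np.column_stack([(np.abs(tgrid - c) < 0.02).astype(float)
                        for c in np.linspace(0, 1, 30)])
A = np.hstack([harm, rect])
x_true = np.zeros(60)
x_true[2], x_true[45] = 1.0, 2.0           # one harmonic + one impact component
y = A @ x_true + 0.01 * rng.standard_normal(n)

lam = np.concatenate([np.full(30, 0.05), np.full(30, 0.02)])  # per-atom weights
x_hat = fista_weighted_l1(A, y, lam)
print("recovered atoms:", np.flatnonzero(np.abs(x_hat) > 0.2))
```

    The two dictionary halves let one sparse coefficient vector represent a harmonic component and an impact component simultaneously, which is the point of the concatenated dictionary.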

  13. A function space framework for structural total variation regularization with applications in inverse problems

    NASA Astrophysics Data System (ADS)

    Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas

    2018-06-01

    In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods, motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV-type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is not always available, we show that, for a rather general linear inverse problem setting, one can equivalently solve a saddle-point problem in place of the classical Tikhonov regularization problem, with no a priori knowledge of an explicit formulation of the structural TV functional needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples in which we solve the saddle-point problem for weighted TV denoising as well as for MR-guided PET image reconstruction.

  14. A Method for Whole Protein Isolation from Human Cranial Bone

    PubMed Central

    Lyon, Sarah M.; Mayampurath, Anoop; Rogers, M. Rose; Wolfgeher, Donald J.; Fisher, Sean M.; Volchenboum, Samuel L.; He, Tong-Chuan; Reid, Russell R.

    2016-01-01

    The presence of the dense hydroxyapatite matrix within human bone limits the applicability of conventional protocols for protein extraction. This has hindered the complete and accurate characterization of the human bone proteome thus far, leaving many bone-related disorders poorly understood. We sought to refine an existing method of protein extraction from mouse bone to extract whole proteins of varying molecular weights from human cranial bone. Whole protein was extracted from human cranial suture by mechanically processing samples using a method that limits protein degradation by minimizing heat introduction to proteins. The presence of whole protein was confirmed by western blotting. Mass spectrometry was used to sequence peptides and identify isolated proteins. The data have been deposited to the ProteomeXchange with identifier PXD003215. Extracted proteins were characterized as both intra- and extracellular and had molecular weights ranging from 9.4 to 629 kDa. High correlation scores among suture protein spectral counts support the reproducibility of the method. Ontology analysis revealed proteins with myriad functions, including mediators of metabolic processes and components of cell organelles. These results demonstrate a reproducible method for isolating whole protein from human cranial bone, representing a large range of molecular weights, origins and functions. PMID:27677936

  15. Effects of Linking Methods on Detection of DIF.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho; Cohen, Allan S.

    1992-01-01

    Effects of the following methods for linking metrics on detection of differential item functioning (DIF) were compared: (1) test characteristic curve method (TCC); (2) weighted mean and sigma method; and (3) minimum chi-square method. With large samples, results were essentially the same. With small samples, TCC was most accurate. (SLD)

  16. Online probabilistic learning with an ensemble of forecasts

    NASA Astrophysics Data System (ADS)

    Thorey, Jean; Mallet, Vivien; Chaussin, Christophe

    2016-04-01

    Our objective is to produce a calibrated weighted ensemble to forecast a univariate time series. In addition to a meteorological ensemble of forecasts, we rely on observations or analyses of the target variable. The celebrated Continuous Ranked Probability Score (CRPS) is used to evaluate the probabilistic forecasts. However, applying the CRPS to weighted empirical distribution functions (derived from the weighted ensemble) may introduce a bias, in which case minimizing the CRPS does not produce the optimal weights. We therefore propose an unbiased version of the CRPS which relies on clusters of members and is strictly proper. We adapt online learning methods for the minimization of the CRPS. These methods generate the weights associated with the members in the forecast empirical distribution function. The weights are updated before each forecast step using only past observations and forecasts. Our learning algorithms provide the theoretical guarantee that, in the long run, the CRPS of the weighted forecasts is at least as good as the CRPS of any weighted ensemble with weights constant in time. In particular, the performance of our forecast is better than that of any subset ensemble with uniform weights. A noteworthy advantage of our algorithm is that it does not require any assumption on the distributions of the observations and forecasts, either for the application or for the theoretical guarantee to hold. As an application example on meteorological forecasts for photovoltaic production integration, we show that our algorithm generates a calibrated probabilistic forecast, with significant performance improvements on probabilistic diagnostic tools (the CRPS, the reliability diagram and the rank histogram).
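    For a finite weighted ensemble, the CRPS has the closed energy form CRPS = sum_i w_i |x_i - y| - 0.5 sum_ij w_i w_j |x_i - x_j|. The sketch below evaluates this standard (uncorrected) form on invented numbers; it is exactly this estimator whose bias motivates the unbiased, cluster-based version proposed in the paper, which is not reproduced here:

```python
import numpy as np

def crps_weighted_ensemble(members, weights, obs):
    """CRPS of the weighted empirical distribution F(x) = sum_i w_i 1{x_i <= x},
    via the energy form  CRPS = E|X - y| - 0.5 * E|X - X'|."""
    x = np.asarray(members, float)
    w = np.asarray(weights, float)
    w = w / w.sum()
    term1 = np.sum(w * np.abs(x - obs))
    term2 = 0.5 * np.sum(w[:, None] * w[None, :] * np.abs(x[:, None] - x[None, :]))
    return float(term1 - term2)

members = np.array([1.0, 2.0, 3.0, 10.0])   # invented ensemble of forecasts
obs = 2.5                                   # invented verifying observation
print(crps_weighted_ensemble(members, np.ones(4), obs))      # uniform weights
print(crps_weighted_ensemble(members, [1, 1, 1, 0.1], obs))  # down-weight the outlying member
```

    Down-weighting the member far from the observation lowers the score (CRPS is negatively oriented), which is the behaviour the online weight-learning exploits.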

  17. Importance of curvature evaluation scale for predictive simulations of dynamic gas-liquid interfaces

    NASA Astrophysics Data System (ADS)

    Owkes, Mark; Cauble, Eric; Senecal, Jacob; Currie, Robert A.

    2018-07-01

    The effect of the scale used to compute the interfacial curvature on the prediction of dynamic gas-liquid interfaces is investigated. A new interface curvature calculation methodology, referred to herein as the Adjustable Curvature Evaluation Scale (ACES), is proposed. ACES leverages a weighted least squares regression to fit a polynomial through points computed on the volume-of-fluid representation of the gas-liquid interface. The interface curvature is evaluated from this polynomial. Varying the least squares weight with distance from the location where the curvature is being computed adjusts the scale on which the curvature is evaluated. ACES is verified using canonical static test cases and compared against second- and fourth-order height function methods. Simulations of dynamic interfaces, including a standing wave and an oscillating droplet, are performed to assess the impact of the curvature evaluation scale on predicting interface motions. ACES and the height function methods are combined with two different unsplit geometric volume-of-fluid (VoF) schemes that define the interface on meshes with different levels of refinement. We find that the results depend significantly on the curvature evaluation scale. In particular, the ACES scheme with a properly chosen weight function is accurate, but fails when the scale is too small or too large. Surprisingly, the second-order height function method is more accurate than the fourth-order variant for the dynamic tests even though the fourth-order method performs better for static interfaces. Comparing the curvature evaluation scale of the second- and fourth-order height function methods, we find the second-order method is closer to the optimum scale identified with ACES. This result suggests that the curvature scale drives the accuracy of the dynamics.
    This work highlights the importance of studying numerical methods with realistic (dynamic) test cases, and shows that the interactions of the various discretizations are as important as the accuracy of any one part of the discretization.
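    The core ACES idea, a weighted least-squares polynomial fit whose distance-decaying weight sets the curvature evaluation scale, can be sketched in one dimension. This is an illustrative reconstruction, not the authors' volume-of-fluid implementation; the Gaussian weight, quadratic fit, and circle test case are choices made here:

```python
import numpy as np

def curvature_wls(xs, ys, x0, sigma):
    """Curvature at x0 from a weighted least-squares quadratic fit y(x).
    sigma sets the evaluation scale: the Gaussian weight decays with distance
    from x0, so small sigma = very local fit, large sigma = smoothed fit."""
    sw = np.exp(-0.25 * ((xs - x0) / sigma) ** 2)   # sqrt of the Gaussian weight
    V = np.vander(xs - x0, 3, increasing=True)      # columns: 1, (x-x0), (x-x0)^2
    (a, b, c), *_ = np.linalg.lstsq(V * sw[:, None], ys * sw, rcond=None)
    return abs(2 * c) / (1 + b * b) ** 1.5          # kappa = |y''| / (1 + y'^2)^1.5

# interface points sampled from a circle of radius 2 (true curvature 0.5)
theta = np.linspace(-0.6, 0.6, 41)
xs, ys = 2 * np.sin(theta), 2 * np.cos(theta)
for sigma in (0.2, 0.5, 1.0):
    print(f"sigma={sigma}: kappa = {curvature_wls(xs, ys, 0.0, sigma):.4f}")
```

    Sweeping sigma mimics adjusting the curvature evaluation scale: too small a scale makes the fit sensitive to point placement, too large a scale smooths real curvature away.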

  18. Point-based warping with optimized weighting factors of displacement vectors

    NASA Astrophysics Data System (ADS)

    Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas

    2000-06-01

    The accurate comparison of inter-individual 3D brain image datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites, we use in this study a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the gerbil, thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark generator computes corresponding reference points simultaneously within a given number of datasets by Monte Carlo techniques. The warping function is a distance-weighted exponential function with a landmark-specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images and an enhanced point-based warping technique that optimizes the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
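    The warping function described above, a distance-weighted exponential sum of landmark displacement vectors with landmark-specific weighting factors, can be sketched as follows. The landmarks, displacements, and factors below are invented; in the paper, the per-landmark factors are the quantities tuned by the evolution strategy:

```python
import numpy as np

def warp_displacement(p, landmarks, displacements, tau):
    """Displacement at point p: normalized weighted sum of landmark displacement
    vectors, with w_k = exp(-|p - l_k| / tau_k) and a landmark-specific
    weighting factor tau_k (the quantity optimized in the paper)."""
    d = np.linalg.norm(landmarks - np.asarray(p, float), axis=1)
    w = np.exp(-d / tau)
    return (w[:, None] * displacements).sum(axis=0) / w.sum()

landmarks     = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
displacements = np.array([[0.1, 0.0], [0.0, 0.2], [-0.1, 0.1]])
tau           = np.array([0.3, 0.3, 0.6])   # per-landmark weighting factors

print(warp_displacement([0.0, 0.0], landmarks, displacements, tau))  # nearest landmark dominates
print(warp_displacement([0.5, 0.5], landmarks, displacements, tau))  # blended displacement
```

    A small tau_k makes a landmark's influence very local; a large tau_k spreads it, which is exactly the trade-off the optimization adjusts per landmark.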

  19. Nonperturbative Series Expansion of Green's Functions: The Anatomy of Resonant Inelastic X-Ray Scattering in the Doped Hubbard Model

    NASA Astrophysics Data System (ADS)

    Lu, Yi; Haverkort, Maurits W.

    2017-12-01

    We present a nonperturbative, divergence-free series expansion of Green's functions using effective operators. The method is especially suited for computing correlators of complex operators as a series of correlation functions of simpler forms. We apply the method to study low-energy excitations in resonant inelastic x-ray scattering (RIXS) in doped one- and two-dimensional single-band Hubbard models. The RIXS operator is expanded into polynomials of spin, density, and current operators weighted by fundamental x-ray spectral functions. These operators couple to different polarization channels resulting in simple selection rules. The incident photon energy dependent coefficients help to pinpoint main RIXS contributions from different degrees of freedom. We show in particular that, with parameters pertaining to cuprate superconductors, local spin excitation dominates the RIXS spectral weight over a wide doping range in the cross-polarization channel.

  20. Comparison of weighting techniques for acoustic full waveform inversion

    NASA Astrophysics Data System (ADS)

    Jeong, Gangwon; Hwang, Jongha; Min, Dong-Joo

    2017-12-01

    To reconstruct long-wavelength structures in full waveform inversion (FWI), wavefield-damping and weighting techniques have been used to synthesize and emphasize low-frequency data components in frequency-domain FWI. However, these methods have some weak points: applying the wavefield-damping method to filtered data fails to synthesize reliable low-frequency data, and the optimization formula obtained by introducing the weighting technique is not theoretically complete, because it is not directly derived from the objective function. In this study, we address these weak points and show how to overcome them. We demonstrate that source estimation in FWI using damped wavefields fails when the data used in the FWI process do not satisfy the causality condition. This phenomenon occurs when a non-causal filter is applied to the data. We overcome this limitation by designing a causal filter. We also modify the conventional weighting technique so that its optimization formula is directly derived from the objective function, retaining its original characteristic of emphasizing the low-frequency data components. Numerical results show that the newly designed causal filter makes it possible to recover long-wavelength structures using low-frequency data components synthesized by damping wavefields in frequency-domain FWI, and that the proposed weighting technique enhances the inversion results.

  1. Time-to-event continual reassessment method incorporating treatment cycle information with application to an oncology phase I trial.

    PubMed

    Huang, Bo; Kuan, Pei Fen

    2014-11-01

    Delayed dose-limiting toxicities (DLTs), i.e. those occurring beyond the first cycle of treatment, are a challenge for phase I trials. The time-to-event continual reassessment method (TITE-CRM) is a Bayesian dose-finding design that addresses the issue of long observation times and early patient drop-out. It uses a weighted binomial likelihood with weights assigned to observations by the unknown time-to-toxicity distribution, and it remains open to accrual continually. To avoid dosing at overly toxic levels while retaining accuracy and efficiency for DLT evaluation that involves multiple cycles, we propose an adaptive weight function that incorporates cyclical data of the experimental treatment, with parameters updated continually. This provides a reasonable estimate for the time-to-toxicity distribution by accounting for inter-cycle variability, and it maintains the statistical properties of consistency and coherence. A case study of a first-in-human trial in cancer for an experimental biologic is presented using the proposed design. Design calibrations for the clinical and statistical parameters are conducted to ensure good operating characteristics. Simulation results show that the proposed TITE-CRM design with adaptive weight function yields significantly shorter trial duration, does not expose patients to additional risk, is competitive against the existing weighting methods, and possesses some desirable properties. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
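    The weighted binomial likelihood at the heart of TITE-CRM can be sketched with the simple linear weight w = min(followup/T, 1) of the original method (the paper's contribution, an adaptive cycle-based weight, is not reproduced here). The empiric power model p(d) = skeleton(d)^exp(beta) and all numbers below are illustrative assumptions:

```python
import numpy as np

def tite_crm_loglik(beta, doses, dlt, followup, T_obs, skeleton):
    """Weighted binomial log-likelihood of the TITE-CRM. Patients without a
    DLT so far enter with weight w = min(followup / T_obs, 1); model
    p(d) = skeleton[d] ** exp(beta) (empiric power model)."""
    p = np.asarray(skeleton)[doses] ** np.exp(beta)
    w = np.minimum(np.asarray(followup, float) / T_obs, 1.0)
    dlt = np.asarray(dlt)
    return float(np.sum(dlt * np.log(p) + (1 - dlt) * np.log(1.0 - w * p)))

skeleton = [0.05, 0.12, 0.25, 0.40]             # prior DLT probabilities per dose level
doses    = np.array([0, 1, 1, 2, 2])            # dose level given to each patient
dlt      = np.array([0, 0, 0, 1, 0])            # DLT observed so far?
followup = np.array([6.0, 6.0, 3.0, 6.0, 1.5])  # cycles of follow-up completed
T_obs    = 6.0                                  # full observation window, in cycles

# crude grid MLE of beta (a sketch; the actual design updates a Bayesian posterior)
grid = np.linspace(-2.0, 2.0, 401)
beta_hat = grid[np.argmax([tite_crm_loglik(b, doses, dlt, followup, T_obs, skeleton)
                           for b in grid])]
print("beta_hat =", round(float(beta_hat), 3))
print("updated p(d):", np.round(np.asarray(skeleton) ** np.exp(beta_hat), 3))
```

    Partially followed, DLT-free patients contribute only fractional evidence of safety, which is what lets the trial keep accruing while late toxicities are still possible.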

  2. Improved Function With Enhanced Protein Intake per Meal: A Pilot Study of Weight Reduction in Frail, Obese Older Adults

    PubMed Central

    Pieper, Carl F.; Orenduff, Melissa C.; McDonald, Shelley R.; McClure, Luisa B.; Zhou, Run; Payne, Martha E.; Bales, Connie W.

    2016-01-01

    Abstract Background: Obesity is a significant cause of functional limitations in older adults; yet, concerns that weight reduction could diminish muscle along with fat mass have impeded progress toward an intervention. Meal-based enhancement of protein intake could protect function and/or lean mass but has not been studied during geriatric obesity reduction. Methods: In this 6-month randomized controlled trial, 67 obese (body mass index ≥30kg/m2) older (≥60 years) adults with a Short Physical Performance Battery score of 4–10 were randomly assigned to a traditional (Control) weight loss regimen or one with higher protein intake (>30g) at each meal (Protein). All participants were prescribed a hypo-caloric diet, and weighed and provided dietary guidance weekly. Physical function (Short Physical Performance Battery) and lean mass (BOD POD), along with secondary measures, were assessed at 0, 3, and 6 months. Results: At the 6-month endpoint, there was significant (p < .001) weight loss in both the Control (−7.5±6.2kg) and Protein (−8.7±7.4kg) groups. Both groups also improved function but the increase in the Protein (+2.4±1.7 units; p < .001) was greater than in the Control (+0.9±1.7 units; p < .01) group (p = .02). Conclusion: Obese, functionally limited older adults undergoing a 6-month weight loss intervention with a meal-based enhancement of protein quantity and quality lost similar amounts of weight but had greater functional improvements relative to the Control group. If confirmed, this dietary approach could have important implications for improving the functional status of this vulnerable population (ClinicalTrials.gov identifier: NCT01715753). PMID:26786203

  3. Spatially weighted mutual information image registration for image guided radiation therapy.

    PubMed

    Park, Samuel B; Rhee, Frank C; Monroe, James I; Sohn, Jason W

    2010-09-01

    To develop a new metric for image registration that incorporates the (sub)pixelwise differential importance along spatial location, and to demonstrate its application for image guided radiation therapy (IGRT). It is well known that rigid-body image registration with mutual information depends on the size and location of the image subset on which the alignment analysis is based [the designated region of interest (ROI)]. Therefore, careful review and manual adjustment of the resulting registration are frequently necessary. Although weighted mutual information (WMI) has been investigated, those efforts could not apply the differential importance to a particular spatial location, since WMI only applies the weight in the joint histogram space. The authors developed the spatially weighted mutual information (SWMI) metric by incorporating an adaptable weight function with spatial localization into mutual information. SWMI enables the user to apply the selected transform to medically "important" areas such as tumors and critical structures, so SWMI is neither dominated by, nor neglects, the neighboring structures. Since SWMI can be utilized with any weight function form, the authors present two examples of weight functions for IGRT application: a Gaussian-shaped weight function (GW) applied to a user-defined location, and a structures-of-interest (SOI) based weight function. An image registration example using a synthesized 2D image is presented to illustrate the efficacy of SWMI. The convergence and feasibility of the registration method as applied to clinical imaging are illustrated by fusing a prostate treatment planning CT with a clinical cone beam CT (CBCT) image set acquired for patient alignment. Forty-one trials were run to test the speed of convergence. The authors also applied SWMI registration using two types of weight functions to two head and neck cases and a prostate case with clinically acquired CBCT/MVCT image sets.
    The SWMI registration with a Gaussian weight function (SWMI-GW) was tested between two different imaging modalities: CT and MRI image sets. SWMI-GW converged 10% faster than registration using mutual information with an ROI. SWMI-GW, as well as SWMI with an SOI-based weight function (SWMI-SOI), showed better compensation of the target organ's deformation and of neighboring critical organs' deformation. SWMI-GW was also used to successfully fuse MRI and CT images. Rigid-body image registration using SWMI-GW and SWMI-SOI as cost functions can achieve better registration results in designated image regions as well as faster convergence. With the theoretical foundation established, the authors believe SWMI could be extended to larger clinical testing.
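    The SWMI construction, letting each pixel contribute to the joint intensity histogram in proportion to a spatial weight such as a Gaussian centred on an important structure, can be sketched as follows. This is an illustration on synthetic images, not the authors' registration code:

```python
import numpy as np

def spatially_weighted_mi(img_a, img_b, weight, bins=32):
    """Mutual information in which each pixel pair contributes to the joint
    intensity histogram with a spatial weight, instead of uniformly."""
    h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                             bins=bins, weights=weight.ravel())
    p = h / h.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / np.outer(px, py)[nz])))

rng = np.random.default_rng(1)
fixed = rng.random((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
# Gaussian spatial weight centred on a hypothetical "important" structure
w = np.exp(-((xx - 40.0) ** 2 + (yy - 40.0) ** 2) / (2 * 8.0 ** 2))

def mi_at_shift(dx):
    """SWMI between the fixed image and a noisy copy shifted by dx pixels."""
    moving = np.roll(fixed, dx, axis=1) + 0.05 * rng.random((64, 64))
    return spatially_weighted_mi(fixed, moving, w)

print([round(mi_at_shift(dx), 3) for dx in (0, 2, 8)])
```

    Because misalignment inside the weighted region destroys the intensity correspondence, the metric peaks at alignment of the "important" structure while pixels far from it contribute little.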

  4. On the optimization of a mixed speaker array in an enclosed space using the virtual-speaker weighting method

    NASA Astrophysics Data System (ADS)

    Peng, Bo; Zheng, Sifa; Liao, Xiangning; Lian, Xiaomin

    2018-03-01

    In order to achieve sound field reproduction in a wide frequency band, multiple-type speakers are used. The reproduction accuracy is not only affected by the signals sent to the speakers, but also depends on the position and the number of each type of speaker. The method of optimizing a mixed speaker array is investigated in this paper. A virtual-speaker weighting method is proposed to optimize both the position and the number of each type of speaker. In this method, a virtual-speaker model is proposed to quantify the increment of controllability of the speaker array when the speaker number increases. While optimizing a mixed speaker array, the gain of the virtual-speaker transfer function is used to determine the priority orders of the candidate speaker positions, which optimizes the position of each type of speaker. Then the relative gain of the virtual-speaker transfer function is used to determine whether the speakers are redundant, which optimizes the number of each type of speaker. Finally the virtual-speaker weighting method is verified by reproduction experiments of the interior sound field in a passenger car. The results validate that the optimum mixed speaker array can be obtained using the proposed method.

  5. Fractionation of Organosolv Lignin Using Acetone:Water and Properties of the Obtained Fractions

    DOE PAGES

    Sadeghifar, Hasan; Wells, Tyrone; Le, Rosemary Khuu; ...

    2016-11-07

    In this study, lignin fractions with different molecular weights were prepared using a simple and almost green method from switchgrass and pine organosolv lignin. Different proportions of acetone in water, ranging from 30 to 60%, were used for lignin fractionation; a higher concentration of acetone dissolved higher molecular weight fractions of the lignin. The fractionated organosolv lignins showed different molecular weights and functional groups. Higher molecular weight fractions exhibited more aliphatic and less phenolic OH than lower molecular weight fractions, and lower molecular weight fractions led to a more homogeneous structure. In conclusion, all fractions showed strong antioxidant activity.

  6. Analysis of Skeletal Muscle Metrics as Predictors of Functional Task Performance

    NASA Technical Reports Server (NTRS)

    Ryder, Jeffrey W.; Buxton, Roxanne E.; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle J.; Fiedler, James; Ploutz-Snyder, Robert J.; Bloomberg, Jacob J.; Ploutz-Snyder, Lori L.

    2010-01-01

    PURPOSE: The ability to predict task performance using physiological performance metrics is vital to ensure that astronauts can execute their jobs safely and effectively. This investigation used a weighted suit to evaluate task performance at various ratios of strength, power, and endurance to body weight. METHODS: Twenty subjects completed muscle performance tests and functional tasks representative of those that would be required of astronauts during planetary exploration (see table for specific tests/tasks). Subjects performed functional tasks while wearing a weighted suit with additional loads ranging from 0-120% of initial body weight. Performance metrics were time to completion for all tasks except hatch opening, which consisted of total work. Task performance metrics were plotted against muscle metrics normalized to "body weight" (subject weight + external load; BW) for each trial. Fractional polynomial regression was used to model the relationship between muscle and task performance. CONCLUSION: LPMIF/BW is the best predictor of performance for predominantly lower-body tasks that are ambulatory and of short duration. LPMIF/BW is a very practical predictor of occupational task performance as it is quick and relatively safe to perform. Accordingly, bench press work best predicts hatch-opening work performance.

  7. Simultaneous Estimation of Regression Functions for Marine Corps Technical Training Specialties.

    ERIC Educational Resources Information Center

    Dunbar, Stephen B.; And Others

    This paper considers the application of Bayesian techniques for simultaneous estimation to the specification of regression weights for selection tests used in various technical training courses in the Marine Corps. Results of a method for m-group regression developed by Molenaar and Lewis (1979) suggest that common weights for training courses…

  8. Algebraic grid adaptation method using non-uniform rational B-spline surface modeling

    NASA Technical Reports Server (NTRS)

    Yang, Jiann-Cherng; Soni, B. K.

    1992-01-01

    An algebraic adaptive grid system based on the equidistribution law and utilizing a Non-Uniform Rational B-Spline (NURBS) surface for redistribution is presented. A weight function utilizing a properly weighted boolean sum of various flow field characteristics is developed. Computational examples are presented to demonstrate the success of this technique.
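    The equidistribution law underlying the adaptation can be sketched in one dimension: nodes are placed so that each cell carries an equal integral of the weight function. The NURBS surface redistribution and boolean-sum weight construction of the paper are not reproduced; the weight below is an invented stand-in for a flow-field feature:

```python
import numpy as np

def equidistribute(x, w, n_cells):
    """Algebraic grid adaptation by the equidistribution law: place nodes so
    that every cell carries the same integral of the weight function w(x)."""
    W = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, W[-1], n_cells + 1)  # equal shares of total weight
    return np.interp(targets, W, x)                 # invert the cumulative weight

x = np.linspace(0.0, 1.0, 401)
w = 1.0 + 50.0 * np.exp(-((x - 0.5) / 0.05) ** 2)   # sharp feature near x = 0.5
grid = equidistribute(x, w, 20)
spacing = np.diff(grid)
print("min spacing:", round(float(spacing.min()), 4),
      "max spacing:", round(float(spacing.max()), 4))
```

    The adapted grid clusters nodes where the weight is large (near the feature) and spreads them where the solution is smooth, which is the behaviour the paper's boolean-sum weight is designed to drive.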

  9. Adolescents' Perceptions of Controllability and Its Relationship to Explicit Obesity Bias

    ERIC Educational Resources Information Center

    Rukavina, Paul B.; Li, Weidong

    2011-01-01

    Background: The purpose of the study was to assess adolescents' perceptions of controllability and its relation to weight stereotypes as a function of gender. Methods: Two hundred and thirty-one seventh and eighth graders from physical education classes completed a perception of controllability questionnaire and weight stereotype explicit scale…

  10. Aerodynamic and Nonlinear Dynamic Acoustic Analysis of Tension Asymmetry in Excised Canine Larynges

    ERIC Educational Resources Information Center

    Devine, Erin E.; Bulleit, Erin E.; Hoffman, Matthew R.; McCulloch, Timothy M.; Jiang, Jack J.

    2012-01-01

    Purpose: To model tension asymmetry caused by superior laryngeal nerve paralysis (SLNP) in excised larynges and apply perturbation, nonlinear dynamic, and aerodynamic analyses. Method: SLNP was modeled in 8 excised larynges using sutures and weights to mimic cricothyroid (CT) muscle function. Weights were removed from one side to create tension…

  11. An optimized time varying filtering based empirical mode decomposition method with grey wolf optimizer for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei

    2018-03-01

    A time varying filtering based empirical mode decomposition (TVF-EMD) method was proposed recently to solve the mode mixing problem of the EMD method. Compared with classical EMD, TVF-EMD was proven to improve the frequency separation performance and to be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed using the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal can be obtained by the GWO algorithm, using the maximum weighted kurtosis index as the objective function. Finally, fault features can be extracted by analyzing the sensitive intrinsic mode function (IMF) with the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition, and verify that the bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
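    One plausible reading of the weighted kurtosis index, the kurtosis of a candidate IMF scaled by the magnitude of its correlation with the raw signal, can be sketched as follows; the exact combination used in the paper may differ, so treat this as an assumption for illustration:

```python
import numpy as np

def weighted_kurtosis_index(imf, signal):
    """Assumed form: kurtosis of the IMF times |corr(IMF, raw signal)|."""
    x = imf - imf.mean()
    kurt = np.mean(x ** 4) / np.mean(x ** 2) ** 2
    rho = np.corrcoef(imf, signal)[0, 1]
    return float(kurt * abs(rho))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 2000)
impacts = (np.sin(2 * np.pi * 5 * t) > 0.999).astype(float)  # sparse spikes: high kurtosis
hum = np.sin(2 * np.pi * 50 * t)                             # smooth component: low kurtosis
signal = impacts + 0.3 * hum + 0.05 * rng.standard_normal(t.size)

for name, imf in [("impulsive component", impacts), ("harmonic component", hum)]:
    print(name, round(weighted_kurtosis_index(imf, signal), 2))
```

    The kurtosis factor favours impulsive (fault-like) components, while the correlation factor penalizes spurious components unrelated to the raw signal, which is the rationale stated in the abstract.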

  12. A Meshless Method Using Radial Basis Functions for Beam Bending Problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Phillips, D. R.; Krishnamurthy, T.

    2004-01-01

    A meshless local Petrov-Galerkin (MLPG) method that uses radial basis functions (RBFs) as trial functions in the study of Euler-Bernoulli beam problems is presented. RBFs, rather than generalized moving least squares (GMLS) interpolations, are used to develop the trial functions. This choice yields a computationally simpler method, as fewer matrix inversions and multiplications are required than when GMLS interpolations are used. Test functions are chosen as simple weight functions, as in the conventional MLPG method. Both compactly and noncompactly supported RBFs are considered; noncompactly supported cubic RBFs are found to be preferable. Patch tests, mixed boundary value problems, and problems with complex loading conditions are considered. Results obtained from the radial basis MLPG method are of comparable or better accuracy than those obtained with the conventional MLPG method.
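
    As a minimal illustration of the trial-function choice, the sketch below interpolates the analytic deflection of a simply supported Euler-Bernoulli beam with noncompactly supported cubic RBFs (with the usual linear polynomial augmentation). The full MLPG weak-form machinery is omitted, and the node count and loading are illustrative.

```python
import numpy as np

# Simply supported beam of length L under uniform load q, flexural
# rigidity EI; w_exact is the classical Euler-Bernoulli deflection.
L, EI, q = 1.0, 1.0, 1.0

def w_exact(x):
    return q * x * (L**3 - 2 * L * x**2 + x**3) / (24 * EI)

nodes = np.linspace(0.0, L, 9)
n = nodes.size

# Cubic RBF phi(r) = r**3, augmented with a linear polynomial so the
# interpolation system is guaranteed nonsingular for distinct nodes.
Phi = np.abs(nodes[:, None] - nodes[None, :]) ** 3
P = np.column_stack([np.ones(n), nodes])
A = np.block([[Phi, P], [P.T, np.zeros((2, 2))]])
sol = np.linalg.solve(A, np.concatenate([w_exact(nodes), np.zeros(2)]))
c, d = sol[:n], sol[n:]

# Evaluate the RBF representation on a fine grid and measure the error.
xq = np.linspace(0.0, L, 101)
w_rbf = (np.abs(xq[:, None] - nodes[None, :]) ** 3) @ c + d[0] + d[1] * xq
err = float(np.max(np.abs(w_rbf - w_exact(xq))))
```

    With only nine nodes the cubic-RBF representation reproduces the quartic deflection curve to well under a thousandth of the span, which is the kind of smooth-approximation behavior that makes these trial functions attractive in a Petrov-Galerkin setting.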

  13. A Performance Weighted Collaborative Filtering algorithm for personalized radiology education.

    PubMed

    Lin, Hongli; Yang, Xuedong; Wang, Weisheng; Luo, Jiawei

    2014-10-01

    Devising an accurate prediction algorithm that can predict the difficulty level of cases for individuals and then select suitable cases for them is essential to the development of a personalized training system. In this paper, we propose a novel approach, called Performance Weighted Collaborative Filtering (PWCF), to predict the difficulty level of each case for individuals. The main idea of PWCF is to assign an optimal weight to each rating used for predicting the difficulty level of a target case for a trainee, rather than using an equal weight for all ratings as in traditional collaborative filtering methods. The assigned weight is a function of the performance level of the trainee at the time the rating was made. The PWCF method and the traditional method are compared using two datasets, and the experimental results are evaluated by means of the MAE metric. Our experimental results show that PWCF outperforms the traditional methods by 8.12% and 17.05%, respectively, over the two datasets, in terms of prediction precision. This suggests that PWCF is a viable method for the development of personalized training systems in radiology education. Copyright © 2014. Published by Elsevier Inc.
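
    The core idea, replacing the equal weights of traditional collaborative filtering with performance-dependent weights, can be sketched in a few lines. The exponential weight function below is a hypothetical stand-in; the abstract does not specify PWCF's actual weight function.

```python
import numpy as np

def predict_difficulty(ratings, rater_levels, target_level, tau=1.0):
    """Weighted average of difficulty ratings, where each rating's weight
    decays with the gap between the rater's performance level (at rating
    time) and the target trainee's level. The exponential decay is an
    illustrative assumption, not PWCF's published weight function."""
    ratings = np.asarray(ratings, float)
    levels = np.asarray(rater_levels, float)
    w = np.exp(-np.abs(levels - target_level) / tau)
    return float(np.sum(w * ratings) / np.sum(w))

# Raters whose performance is closer to the trainee dominate the prediction:
pred = predict_difficulty([2.0, 5.0], rater_levels=[0.9, 0.1], target_level=0.9)
unweighted = float(np.mean([2.0, 5.0]))
```

    Here the rating from the similar-performance rater (2.0) pulls the prediction below the unweighted mean of 3.5, which is exactly the behavior the performance weighting is meant to produce.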

  14. Obtaining high-resolution velocity spectra using weighted semblance

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Saleh; Kahoo, Amin Roshandel; Porsani, Milton J.; Kalateh, Ali Nejati

    2017-02-01

    Velocity analysis measures coherency along a hyperbolic or non-hyperbolic trajectory within a time window to build velocity spectra. Accuracy and resolution depend directly on the method of coherency measurement. Semblance, the most common coherence measure, has poor velocity resolution, which limits one's ability to distinguish and pick distinct peaks. Increasing the resolution of the semblance velocity spectra improves the accuracy of the velocities estimated for normal moveout correction and stacking. The low resolution of semblance spectra stems from its low sensitivity to velocity changes. In this paper, we present a new weighted semblance method that yields high-resolution velocity spectra. To increase the resolution, we introduce two weighting functions into the semblance equation, one based on the ratio of the first to second singular values of the time window and one based on the position of the seismic wavelet within the window. We test the method on both synthetic and real field data to compare the resolution of the weighted and conventional semblance methods. Numerical examples with synthetic and real seismic data indicate that the proposed weighted semblance method provides higher resolution than conventional semblance and can separate reflectors that are merged in the semblance spectrum.
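
    Conventional semblance and the singular-value-based weighting idea can be sketched as follows. The exact weighting functions of the paper are not reproduced; the data and the weight `1 - s2/s1` are illustrative.

```python
import numpy as np

def semblance(window):
    """Conventional semblance of a trace window (nt x ntraces):
    stacked energy divided by total energy times the trace count."""
    num = np.sum(np.sum(window, axis=1) ** 2)
    den = window.shape[1] * np.sum(window ** 2)
    return num / den

def sv_weight(window):
    """Illustrative weight from the first-to-second singular value ratio:
    a coherent (near rank-one) window drives the weight toward 1."""
    s = np.linalg.svd(window, compute_uv=False)
    return 1.0 - s[1] / s[0]

t = np.linspace(0, 1, 50)
wavelet = np.exp(-((t - 0.5) ** 2) / 0.005)
coherent = np.tile(wavelet[:, None], (1, 12))     # flattened (aligned) event
noise = np.random.default_rng(1).standard_normal(coherent.shape)

s_coh, w_coh = semblance(coherent), sv_weight(coherent)
s_noise, w_noise = semblance(noise), sv_weight(noise)
```

    For perfectly aligned traces semblance equals 1 and the singular-value weight is near 1, while for incoherent noise both drop sharply; multiplying semblance by such a weight sharpens the spectrum's peaks, which is the spirit of the weighting described above.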

  15. Influence of Type of Frequency Weighting Function On VDV Analysis

    NASA Astrophysics Data System (ADS)

    Kowalska-Koczwara, Alicja; Stypuła, Krzysztof

    2017-10-01

    Transport vibrations are the subject of much research, mostly investigating their influence on the structural elements of buildings. Nowadays, however, especially in the centres of large cities where apartments and residential buildings sit ever closer to transport vibration sources, increasing attention is given to providing vibrational comfort to humans in buildings. Currently, two main evaluation methods are used in most countries: the root mean square (RMS) method and the vibration dose value (VDV). In this article, the VDV method is presented and the influence of the choice of weighting function on the VDV is analysed. The measurements required for the analysis were made in Krakow, on a masonry, residential, two-storey building located in the city centre. The building is subjected to two transport vibration sources: tram passages and vehicle passages on a road located very close by. Measurement points were located on the basement wall at ground level, to control the excitation, and in the middle of the floor on the highest storey (where people perceive vibration). The room chosen for measurements is the one located closest to the transport excitation sources. During the measurements, 25 vibration events were recorded and analysed. VDV values were calculated for three different weighting functions according to the standards ISO 2631-1, ISO 2631-2 and BS 6841. Differences in the VDV values are shown, and the influence of the weighting function selection on the evaluation result is presented. The VDV analysis was performed not only for individual vibration events; all-day and all-night vibration exposures were also calculated using the formulas in the annex to BS 6841. It is demonstrated that, although there are differences in the VDV values, the influence on all-day and all-night exposure is no longer as significant.
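
    The VDV itself is the fourth-power dose integral used in BS 6841 and ISO 2631, and multiple events combine by fourth-power summation. A minimal sketch (the frequency-weighting filters such as Wb or Wd, which would be applied to the acceleration first, are omitted):

```python
import numpy as np

def vdv(a_w, fs):
    """Vibration dose value of a frequency-weighted acceleration record:
    VDV = ( integral a_w(t)**4 dt )**(1/4), in m/s^1.75."""
    a_w = np.asarray(a_w, float)
    return (np.sum(a_w ** 4) / fs) ** 0.25

fs = 1000.0
t = np.arange(0, 10.0, 1.0 / fs)
a = 0.1 * np.sin(2 * np.pi * 5 * t)      # weighted acceleration, m/s^2
vdv_event = vdv(a, fs)

# BS 6841 annex-style combination: 25 identical events (as in the record
# above) combine by fourth-power summation, not linearly.
vdv_25_events = (25 * vdv_event ** 4) ** 0.25
```

    For a sinusoid of amplitude A over duration T the closed form is A*(3T/8)**0.25, so the code can be checked analytically; note that 25 repetitions raise the dose only by a factor of 25**0.25, about 2.24, which is why single-event differences between weighting functions wash out in the all-day exposure.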

  16. Valuing SF-6D Health States Using a Discrete Choice Experiment.

    PubMed

    Norman, Richard; Viney, Rosalie; Brazier, John; Burgess, Leonie; Cronin, Paula; King, Madeleine; Ratcliffe, Julie; Street, Deborah

    2014-08-01

    SF-6D utility weights are conventionally produced using a standard gamble (SG). SG-derived weights consistently demonstrate a floor effect not observed with other elicitation techniques. Recent advances in discrete choice methods have allowed estimation of utility weights. The objective was to produce Australian utility weights for the SF-6D and to explore the application of discrete choice experiment (DCE) methods in this context. We hypothesized that weights derived using this method would reflect the largely monotonic construction of the SF-6D. We designed an online DCE and administered it to an Australia-representative online panel (n = 1017). A range of specifications investigating nonlinear preferences with respect to additional life expectancy were estimated using a random-effects probit model. The preferred model was then used to estimate a preference index such that full health and death were valued at 1 and 0, respectively, to provide an algorithm for Australian cost-utility analyses. Physical functioning, pain, mental health, and vitality were the largest drivers of utility weights. Combining levels to remove illogical orderings did not lead to a poorer model fit. Relative to international SG-derived weights, the range of utility weights was larger, with 5% of health states valued below zero. DCEs can be used to investigate preferences for health profiles and to estimate utility weights for multi-attribute utility instruments. Australian cost-utility analyses can now use domestic SF-6D weights. The comparability of DCE results to those using other elicitation methods for estimating utility weights for quality-adjusted life-year calculations should be further investigated. © The Author(s) 2013.

  17. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

    This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Using the RSM in combination with Monte Carlo simulation (MCS) reduces the computational cost and makes rapid random sampling possible. The inverse uncertainty propagation is formulated as the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function through a hybrid of particle-swarm and Nelder-Mead simplex optimization, achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.
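
    The equally weighted mean-plus-covariance objective can be illustrated on a toy one-parameter surrogate. The response function, sample sizes, and the use of SciPy's Nelder-Mead alone (in place of the paper's particle-swarm/Nelder-Mead hybrid and RSM) are assumptions of this sketch; common random numbers keep the Monte Carlo objective smooth enough for the simplex search.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def response(k):
    """Stand-in surrogate model: scalar response of an uncertain stiffness."""
    return 1.0 / np.sqrt(k)

# "Test" statistics generated from the true parameter distribution.
k_test = rng.normal(4.0, 0.2, 20000)
test_mean, test_var = response(k_test).mean(), response(k_test).var()

z = rng.standard_normal(5000)    # fixed MC sample (common random numbers)

def objective(p):
    """Equally weighted sum of squared mean and variance mismatches."""
    mu, sigma = p
    k = mu + sigma * z
    if sigma <= 0 or np.any(k <= 0):
        return 1e9                       # keep the surrogate's domain valid
    sim = response(k)
    return (sim.mean() - test_mean) ** 2 + (sim.var() - test_var) ** 2

res = minimize(objective, x0=[3.0, 0.5], method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-12, "maxiter": 2000})
mu_hat, sigma_hat = res.x
```

    Minimizing the weighted objective recovers both the mean (about 4.0) and the spread (about 0.2) of the uncertain parameter from response statistics alone, which is the essence of the inverse uncertainty propagation described above.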

  18. Weighted Optimization-Based Distributed Kalman Filter for Nonlinear Target Tracking in Collaborative Sensor Networks.

    PubMed

    Chen, Jie; Li, Jiahong; Yang, Shuanghua; Deng, Fang

    2017-11-01

    The identification of nonlinearity and coupling is crucial in the nonlinear target tracking problem in collaborative sensor networks. In the adaptive Kalman filtering (KF) method, the nonlinearity and coupling can be regarded as model noise covariance and estimated by minimizing the innovation or residual errors of the states. However, the method requires a large time window of data to achieve a reliable covariance measurement, making it impractical for rapidly changing nonlinear systems. To deal with this problem, a weighted optimization-based distributed KF algorithm (WODKF) is proposed in this paper. The algorithm enlarges the data size of each sensor with the measurements and state estimates received from its connected sensors instead of enlarging the time window. A new cost function is set as the weighted sum of the bias and oscillation of the state to find the "best" estimate of the model noise covariance. The bias and oscillation of the state of each sensor are estimated by polynomial fitting over a time window of state estimates and measurements of the sensor and its neighbors, weighted by the measurement noise covariance. The best estimate of the model noise covariance is computed by minimizing the weighted cost function using the exhaustive method. A sensor selection method is added to the algorithm to decrease the computational load of the filter and increase the scalability of the sensor network. The existence, suboptimality and stability analyses of the algorithm are given. The local probability data association method is used in the proposed algorithm for the multitarget tracking case. The algorithm is demonstrated in simulations on tracking examples for a random signal, one nonlinear target, and four nonlinear targets. Results show the feasibility and superiority of WODKF over other filtering algorithms for a large class of systems.

  19. Sensing Attribute Weights: A Novel Basic Belief Assignment Method

    PubMed Central

    Jiang, Wen; Zhuang, Miaoyan; Xie, Chunhe; Wu, Jun

    2017-01-01

    Dempster–Shafer evidence theory is widely used in many soft sensor data fusion systems on account of its good performance in handling the uncertainty information of soft sensors. However, how to determine the basic belief assignment (BBA) is still an open issue. The existing methods for determining BBA do not consider the reliability of each attribute; at the same time, they cannot effectively determine BBA in the open world. In this paper, a novel method to determine BBA based on attribute weights is proposed, not only for the closed world but also for the open world. First, a Gaussian model of each attribute is built using the training samples. Second, the similarity between the test sample and the attribute model is measured with Gaussian membership functions. Then, the attribute weights are generated using the overlap degree among the classes. Finally, the BBA is determined according to the sensed attribute weights. Several examples with small datasets show the validity of the proposed method. PMID:28358325

  20. Sensing Attribute Weights: A Novel Basic Belief Assignment Method.

    PubMed

    Jiang, Wen; Zhuang, Miaoyan; Xie, Chunhe; Wu, Jun

    2017-03-30

    Dempster-Shafer evidence theory is widely used in many soft sensor data fusion systems on account of its good performance in handling the uncertainty information of soft sensors. However, how to determine the basic belief assignment (BBA) is still an open issue. The existing methods for determining BBA do not consider the reliability of each attribute; at the same time, they cannot effectively determine BBA in the open world. In this paper, a novel method to determine BBA based on attribute weights is proposed, not only for the closed world but also for the open world. First, a Gaussian model of each attribute is built using the training samples. Second, the similarity between the test sample and the attribute model is measured with Gaussian membership functions. Then, the attribute weights are generated using the overlap degree among the classes. Finally, the BBA is determined according to the sensed attribute weights. Several examples with small datasets show the validity of the proposed method.
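
    The Gaussian-membership step described above can be sketched for a single attribute. The training values are illustrative, and normalizing the memberships into singleton masses (omitting the attribute weighting and the open-world mass) is a simplifying assumption of this sketch.

```python
import numpy as np

def gaussian_membership(x, mu, sigma):
    """Membership of value x under a class's Gaussian attribute model."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Fit a Gaussian model per class from training samples of one attribute
# (values are illustrative, loosely iris-like petal lengths).
train = {"setosa": [1.4, 1.5, 1.3, 1.6], "versicolor": [4.5, 4.1, 4.7, 4.4]}
models = {c: (np.mean(v), np.std(v, ddof=1)) for c, v in train.items()}

# Memberships of a test value under each class model, normalized into a
# BBA over the singleton classes.
x_test = 1.45
m = {c: gaussian_membership(x_test, mu, s) for c, (mu, s) in models.items()}
total = sum(m.values())
bba = {c: v / total for c, v in m.items()}
```

    A test value sitting near one class's model collects essentially all of the mass; in the full method these per-attribute masses would additionally be discounted by the sensed attribute weights before combination.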

  1. A Method for Predicting Protein Complexes from Dynamic Weighted Protein-Protein Interaction Networks.

    PubMed

    Liu, Lizhen; Sun, Xiaowu; Song, Wei; Du, Chao

    2018-06-01

    Predicting protein complexes from protein-protein interaction (PPI) networks is of great significance for recognizing the structure and function of cells. A protein may interact with different proteins under different times or conditions, yet existing approaches utilize only static PPI network data and may therefore lose much temporal biological information. First, this article proposes a novel method that combines gene expression data at different time points with a traditional static PPI network to construct dynamic subnetworks. Second, to further filter out data noise, the semantic similarity based on gene ontology is used as the network weight, together with principal component analysis, which is introduced to combine the weights computed by three traditional methods. Third, after building the dynamic PPI network, a protein complex prediction algorithm based on the "core-attachment" structural feature is applied to detect complexes in each dynamic subnetwork. Finally, the experimental results reveal that the proposed method performs well in detecting protein complexes from dynamic weighted PPI networks.

  2. Optimal Frequency-Domain System Realization with Weighting

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Maghami, Peiman G.

    1999-01-01

    Several approaches are presented to identify an experimental system model directly from frequency response data. The formulation uses a matrix-fraction description as the model structure. Frequency weighting such as exponential weighting is introduced to solve a weighted least-squares problem to obtain the coefficient matrices for the matrix-fraction description. A multi-variable state-space model can then be formed using the coefficient matrices of the matrix-fraction description. Three different approaches are introduced to fine-tune the model using nonlinear programming methods to minimize the desired cost function. The first method uses an eigenvalue assignment technique to reassign a subset of system poles to improve the identified model. The second method deals with the model in the real Schur or modal form, reassigns a subset of system poles, and adjusts the columns (rows) of the input (output) influence matrix using a nonlinear optimizer. The third method also optimizes a subset of poles, but the input and output influence matrices are refined at every optimization step through least-squares procedures.
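
    The weighted least-squares step has a compact scalar analogue: fitting a first-order matrix-fraction model G(s) = b/(s + a) to frequency response samples is linear in (a, b), since G_k(jω_k + a) = b rearranges to G_k·a − b = −jω_k·G_k. The exponential weighting below, emphasizing low frequencies, is one plausible choice and not necessarily the paper's.

```python
import numpy as np

# Synthetic frequency response of a first-order system (a=2, b=3).
a_true, b_true = 2.0, 3.0
w = np.linspace(0.1, 50, 200)                 # rad/s
G = b_true / (1j * w + a_true)

# Exponential frequency weighting (illustrative): W_k = exp(-beta * w_k).
beta = 0.05
W = np.exp(-beta * w)

# Linear system in x = [a, b]:  G_k * a - b = -j*w_k*G_k, weighted by sqrt(W).
A = np.column_stack([G, -np.ones_like(G)])
rhs = -1j * w * G
x, *_ = np.linalg.lstsq(A * np.sqrt(W)[:, None], rhs * np.sqrt(W), rcond=None)
a_hat, b_hat = x.real
```

    On noiseless data the weighted least-squares solve recovers the coefficients exactly; with noisy measurements the weighting shifts the fit's fidelity toward the emphasized band, which is the role the frequency weighting plays in the matrix-fraction identification above.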

  3. Optimal design application on the advanced aeroelastic rotor blade

    NASA Technical Reports Server (NTRS)

    Wei, F. S.; Jones, R.

    1985-01-01

    The vibration and performance optimization procedure using regression analysis was successfully applied to an advanced aeroelastic blade design study. The major advantage of this regression technique is that multiple optimizations can be performed to evaluate the effects of various objective functions and constraint functions. The databases obtained from the rotorcraft flight simulation program C81 and the Myklestad mode shape program are analytically determined as functions of each design variable. This approach has been verified for various blade radial ballast weight locations and blade planforms. The method can also be used, without any additional effort, to ascertain the effect of a particular cost function composed of several objective functions with different weighting factors for various mission requirements.

  4. Predicting critical micelle concentration and micelle molecular weight of polysorbate 80 using compendial methods.

    PubMed

    Braun, Alexandra C; Ilko, David; Merget, Benjamin; Gieseler, Henning; Germershaus, Oliver; Holzgrabe, Ulrike; Meinel, Lorenz

    2015-08-01

    This manuscript addresses the capability of compendial methods to control polysorbate 80 (PS80) functionality. Based on the analysis of sixteen batches, functionality related characteristics (FRC) including critical micelle concentration (CMC), cloud point, hydrophilic-lipophilic balance (HLB) value and micelle molecular weight were correlated to chemical composition, including fatty acids before and after hydrolysis, content of non-esterified polyethylene glycols and sorbitan polyethoxylates, sorbitan- and isosorbide polyethoxylate fatty acid mono- and diesters, polyoxyethylene diesters, and peroxide values. Batches from some suppliers had a high variability in FRC, questioning the ability of the current monograph to control these. Interestingly, the combined use of the input parameters oleic acid content and peroxide value - both of which are monographed methods - resulted in a model adequately predicting CMC. Confining the batches to those complying with specifications for peroxide value proved oleic acid content alone to be predictive for CMC. Similarly, a four-parameter model based on chemical analyses alone was instrumental in predicting the molecular weight of PS80 micelles. Improved models based on analytical outcomes from fingerprint analyses are also presented. A road map for controlling PS80 batches with respect to FRC, based on chemical analyses alone, is provided for the formulator. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Improving Functional MRI Registration Using Whole-Brain Functional Correlation Tensors.

    PubMed

    Zhou, Yujia; Yap, Pew-Thian; Zhang, Han; Zhang, Lichi; Feng, Qianjin; Shen, Dinggang

    2017-09-01

    Population studies of brain function with resting-state functional magnetic resonance imaging (rs-fMRI) largely rely on the accurate inter-subject registration of functional areas. This is typically achieved through registration of the corresponding T1-weighted MR images with more structural details. However, accumulating evidence has suggested that such strategy cannot well-align functional regions which are not necessarily confined by the anatomical boundaries defined by the T1-weighted MR images. To mitigate this problem, various registration algorithms based directly on rs-fMRI data have been developed, most of which have utilized functional connectivity (FC) as features for registration. However, most of the FC-based registration methods usually extract the functional features only from the thin and highly curved cortical grey matter (GM), posing a great challenge in accurately estimating the whole-brain deformation field. In this paper, we demonstrate that the additional useful functional features can be extracted from brain regions beyond the GM, particularly, white-matter (WM) based on rs-fMRI, for improving the overall functional registration. Specifically, we quantify the local anisotropic correlation patterns of the blood oxygenation level-dependent (BOLD) signals, modeled by functional correlation tensors (FCTs), in both GM and WM. Functional registration is then performed based on multiple components of the whole-brain FCTs using a multichannel Large Deformation Diffeomorphic Metric Mapping (mLDDMM) algorithm. Experimental results show that our proposed method achieves superior functional registration performance, compared with other conventional registration methods.

  6. Instability risk assessment of construction waste pile slope based on fuzzy entropy

    NASA Astrophysics Data System (ADS)

    Ma, Yong; Xing, Huige; Yang, Mao; Nie, Tingting

    2018-05-01

    Considering the nature and characteristics of construction waste piles, this paper analyzes the factors affecting the stability of construction waste pile slopes and establishes a system of assessment indexes for slope failure risks. Based on the basic principles and methods of fuzzy mathematics, the factor set and the remark set are established. The membership grade of continuous factor indexes is determined using the "ridge row distribution" function, while that of discrete factor indexes is determined by the Delphi method. For the factor weights, the subjective weight is determined by the Analytic Hierarchy Process (AHP) and the objective weight by the entropy weight method, and a distance function is introduced to determine the combination coefficient. The paper establishes a fuzzy comprehensive assessment model of the slope failure risks of construction waste piles and assesses pile slopes in the two dimensions of hazard and vulnerability; the root mean square of the hazard and vulnerability assessment results gives the final assessment result. A construction waste pile slope is then analyzed as an example: the risks of the four stages of a landfill are assessed, the assessment model is verified, and the slope's failure risks and preventive measures against a slide are analyzed.
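
    The entropy weight method used for the objective weights follows a standard recipe: column-normalize the decision matrix, compute each index's information entropy, and give low-entropy (high-discrimination) indexes larger weights. A minimal sketch with an illustrative decision matrix (not the paper's data):

```python
import numpy as np

def entropy_weights(X):
    """Objective index weights by the entropy weight method."""
    X = np.asarray(X, float)
    P = X / X.sum(axis=0)                       # proportions per index
    n = X.shape[0]
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    E = -(P * logP).sum(axis=0) / np.log(n)     # entropy in [0, 1]
    d = 1.0 - E                                 # degree of divergence
    return d / d.sum()

# 4 pile slopes scored on 3 indexes; the 3rd index barely discriminates
# between slopes, so it should receive a near-zero weight.
X = [[0.9, 0.2, 0.50],
     [0.1, 0.8, 0.51],
     [0.5, 0.5, 0.50],
     [0.7, 0.3, 0.49]]
w = entropy_weights(X)
```

    In the combined weighting described above, these objective weights would then be fused with the AHP-derived subjective weights via the combination coefficient from the distance function.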

  7. Active Flexion in Weight Bearing Better Correlates with Functional Outcomes of Total Knee Arthroplasty than Passive Flexion

    PubMed Central

    Song, Young Dong; Jain, Nimash; Kang, Yeon Gwi; Kim, Tae Yune

    2016-01-01

    Purpose Correlations between maximum flexion and functional outcomes in total knee arthroplasty (TKA) patients are reportedly weak. We investigated whether there are differences between passive maximum flexion in nonweight bearing and other types of maximum flexion, and whether the type of maximum flexion correlates with functional outcomes. Materials and Methods A total of 210 patients (359 knees) underwent preoperative evaluation and postoperative follow-up evaluations (6, 12, and 24 months) for the assessment of clinical outcomes including maximum knee flexion. Maximum flexion was measured under five conditions: passive nonweight bearing, passive weight bearing, active nonweight bearing, and active weight bearing with or without arm support. Relationships with passive maximum flexion in nonweight bearing were assessed by Pearson correlation analyses, and differences between measurement techniques were compared via paired t-tests. Results We observed substantial differences between passive maximum flexion in nonweight bearing and the other four maximum flexion types. At all time points, passive maximum flexion in nonweight bearing correlated poorly with active maximum flexion in weight bearing with or without arm support. Active maximum flexion in weight bearing correlated better with functional outcomes than the other maximum flexion types. Conclusions Our study suggests that active maximum flexion in weight bearing should be reported together with passive maximum flexion in nonweight bearing in research on the knee motion arc after TKA. PMID:27274468

  8. A climate index indicative of cloudiness derived from satellite infrared sounder data

    NASA Technical Reports Server (NTRS)

    Abel, M. D.; Cox, S. K.

    1981-01-01

    In many current studies conducted to enhance the usefulness of meteorological satellite radiance data, one common objective is to infer conventional weather variables. The present investigation, on the other hand, is mainly concerned with the efficient retrieval (minimization of errors) of a nonstandard atmospheric descriptor. The atmosphere's Vertical Infrared Radiative Emitting Structure (VIRES) is retrieved. VIRES is described by the broadband infrared weighting function curve. The shapes of these weighting curves are primarily a function of the three-dimensional cloud structure. The weighting curves are retrieved by a method which uses satellite spectral radiance data. The basic theory involved in the VIRES retrieval procedure parallels the technique used to retrieve temperature soundings.

  9. SU-E-T-385: Evaluation of DVH Change for PTV Due to Patient Weight Loss in Prostate VMAT Using Gaussian Error Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Viraganathan, H; Jiang, R; Chow, J

    Purpose: We propose a method to predict the change of the dose-volume histogram (DVH) for the PTV due to patient weight loss in prostate volumetric modulated arc therapy (VMAT). The method is based on a pre-calculated patient dataset and DVH curve fitting using the Gaussian error function (GEF). Methods: Pre-calculated dose-volume data from patients having weight loss in prostate VMAT were employed to predict the change of PTV coverage due to a reduced depth in the external contour. The effect of patient weight loss in treatment was described by a prostate dose-volume factor (PDVF), which was evaluated for the prostate PTV. Along with the PDVF, the GEF was used to fit the DVH curve for the PTV. To predict a new DVH due to weight loss, the parameters of the GEF describing the shape of the DVH curve were determined. Since these parameters are related to the PDVF at a specific reduced depth, we can first predict the PDVF at a reduced depth based on the prostate size from the pre-calculated dataset; the GEF parameters can then be determined from the PDVF to plot the new DVH for the PTV corresponding to the reduced depth. Results: A MATLAB program was built based on the patient dataset with different prostate sizes. Given the prostate size and reduced depth of the patient, the program calculates the PDVF and the DVH for the PTV accounting for the patient weight loss. The program was verified on different patient cases with various reduced depths. Conclusion: Our method can quickly estimate the change of the DVH for the PTV due to patient weight loss without a CT rescan and replan. This helps the radiation staff predict the change of PTV coverage when the patient's external contour is reduced in prostate VMAT.
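
    A sketch of the GEF curve-fitting step: a cumulative PTV DVH shaped like a smoothed step can be modeled as V(D) = 50·(1 − erf((D − b)/a)) and fitted with nonlinear least squares. This two-parameter form and the synthetic data are assumptions of the sketch; the paper's exact parametrization may differ.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def gef_dvh(D, a, b):
    """Cumulative DVH model: 100% volume below the dose falloff, 0% above;
    b tracks the prescribed dose, a the steepness of the falloff."""
    return 50.0 * (1.0 - erf((D - b) / a))

# Synthetic "measured" DVH with a falloff near 78 Gy plus small noise.
dose = np.linspace(0, 90, 200)                       # Gy
volume = gef_dvh(dose, 2.5, 78.0)
volume_noisy = volume + np.random.default_rng(3).normal(0, 0.3, dose.size)

(a_fit, b_fit), _ = curve_fit(gef_dvh, dose, volume_noisy, p0=[5.0, 70.0])
```

    Once (a, b) are tied to the PDVF, predicting a weight-loss DVH reduces to looking up the PDVF for the reduced depth and regenerating the curve from the fitted parameters, with no re-planning needed.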

  10. Supplier Selection Using Weighted Utility Additive Method

    NASA Astrophysics Data System (ADS)

    Karande, Prasad; Chakraborty, Shankar

    2015-10-01

    Supplier selection is a multi-criteria decision-making (MCDM) problem which mainly involves evaluating a number of available suppliers according to a set of common criteria for choosing the best one to meet the organizational needs. For any manufacturing or service organization, selecting the right upstream suppliers is a key success factor that will significantly reduce purchasing cost, increase downstream customer satisfaction and improve competitive ability. Past researchers have attempted to solve the supplier selection problem employing different MCDM techniques which involve active participation of the decision makers in the decision-making process. This paper deals with the application of the weighted utility additive (WUTA) method for solving supplier selection problems. The WUTA method, an extension of the utility additive approach, is based on ordinal regression and consists of building a piece-wise linear additive decision model from a preference structure using linear programming (LP). It adopts the preference disaggregation principle and addresses the decision-making activities through operational models which need implicit preferences in the form of a preorder of reference alternatives or a subset of these alternatives present in the process. The preferential preorder provided by the decision maker is used as a restriction of an LP problem whose objective function is the minimization of the sum of the errors associated with the ranking of each alternative. Based on a given reference ranking of alternatives, one or more additive utility functions are derived. Using these utility functions, the weighted utilities for individual criterion values are combined into an overall weighted utility for a given alternative. It is observed that the WUTA method, having a sound mathematical background, can provide an accurate ranking of the candidate suppliers and choose the best one to fulfill the organizational requirements. Two real-life examples are presented to demonstrate its applicability and appropriateness in solving supplier selection problems.

  11. Prediction of microRNAs Associated with Human Diseases Based on Weighted k Most Similar Neighbors

    PubMed Central

    Guo, Maozu; Guo, Yahong; Li, Jinbao; Ding, Jian; Liu, Yong; Dai, Qiguo; Li, Jin; Teng, Zhixia; Huang, Yufei

    2013-01-01

    Background The identification of human disease-related microRNAs (disease miRNAs) is important for further investigating their involvement in the pathogenesis of diseases. More experimentally validated miRNA-disease associations have been accumulated recently. On the basis of these associations, it is essential to predict disease miRNAs for various human diseases. It is useful in providing reliable disease miRNA candidates for subsequent experimental studies. Methodology/Principal Findings It is known that miRNAs with similar functions are often associated with similar diseases and vice versa. Therefore, the functional similarity of two miRNAs has been successfully estimated by measuring the semantic similarity of their associated diseases. To effectively predict disease miRNAs, we calculated the functional similarity by incorporating the information content of disease terms and phenotype similarity between diseases. Furthermore, the members of miRNA family or cluster are assigned higher weight since they are more probably associated with similar diseases. A new prediction method, HDMP, based on weighted k most similar neighbors is presented for predicting disease miRNAs. Experiments validated that HDMP achieved significantly higher prediction performance than existing methods. In addition, the case studies examining prostatic neoplasms, breast neoplasms, and lung neoplasms, showed that HDMP can uncover potential disease miRNA candidates. Conclusions The superior performance of HDMP can be attributed to the accurate measurement of miRNA functional similarity, the weight assignment based on miRNA family or cluster, and the effective prediction based on weighted k most similar neighbors. The online prediction and analysis tool is freely available at http://nclab.hit.edu.cn/hdmpred. PMID:23950912
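
    The weighted k-most-similar-neighbors scoring at the heart of HDMP can be sketched as follows. The similarities, labels, and the family/cluster boost factor below are illustrative assumptions; the actual HDMP similarity measure combines disease-term information content and phenotype similarity.

```python
import numpy as np

def knn_score(sims, labels, family, k=3, boost=1.5):
    """Association score of a candidate miRNA for a disease: the
    similarity-weighted vote of its k most functionally similar miRNAs,
    with known family/cluster members up-weighted by `boost`."""
    sims = np.asarray(sims, float) * np.where(family, boost, 1.0)
    labels = np.asarray(labels, float)
    top = np.argsort(sims)[-k:]              # indices of k most similar
    return float(np.sum(sims[top] * labels[top]) / np.sum(sims[top]))

sims   = [0.9, 0.8, 0.6, 0.3]   # functional similarity to the candidate
labels = [1,   1,   0,   1]     # 1 = known miRNA for the target disease
family = [False, True, False, False]  # neighbor 2 shares the miRNA family
score = knn_score(sims, labels, family)
```

    The family boost promotes neighbor 2 into the dominant position among the k = 3 nearest neighbors, so its known disease association raises the candidate's score; candidates are then ranked by this score to prioritize experimental validation.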

  12. An analytic-numerical method for the construction of the reference law of operation for a class of mechanical controlled systems

    NASA Astrophysics Data System (ADS)

    Mizhidon, A. D.; Mizhidon, K. A.

    2017-04-01

    An analytic-numerical method for the construction of a reference law of operation for a class of dynamic systems describing vibrations in controlled mechanical systems is proposed. By the reference law of operation of a system, we mean a law of the system motion that satisfies all the requirements for the quality and design features of the system under permanent external disturbances. As disturbances, we consider polyharmonic functions with known amplitudes and frequencies of the harmonics but unknown initial phases. For constructing the reference law of motion, an auxiliary optimal control problem is solved in which the cost function depends on a weighting coefficient. The choice of the weighting coefficient ensures the design of the reference law. Theoretical foundations of the proposed method are given.

  13. A unifying theoretical and algorithmic framework for least squares methods of estimation in diffusion tensor imaging.

    PubMed

    Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J

    2006-09-01

A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS), and their constrained counterparts) are established through their respective objective functions and the higher order derivatives of these objective functions, i.e., Hessian matrices. These theoretical connections provide new insights for designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel algorithms of full Newton-type for the NLS and CNLS estimations, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percent relative error in estimating the trace and a lower reduced χ2 value than the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion weighted signals is orientation dependent when the signal-to-noise ratio (SNR) is low (
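For context on the least squares estimators compared in this abstract, here is a minimal weighted linear least squares (WLLS) tensor fit, using the standard log-linearized model ln S_i = ln S0 − b_i gᵢᵀ D gᵢ with the common S_i² weighting. This is a generic textbook sketch, not the paper's algorithm.

```python
import numpy as np

def wlls_tensor_fit(bvals, bvecs, signals):
    """Weighted linear least squares (WLLS) diffusion tensor fit.
    Log-signal model: ln S_i = ln S0 - b_i * g_i^T D g_i, with each
    equation weighted by S_i^2 (the usual heuristic after taking logs).
    Returns (S0, D) where D is the symmetric 3x3 tensor.
    """
    g = np.asarray(bvecs, float)
    b = np.asarray(bvals, float)
    S = np.asarray(signals, float)
    # design matrix for the parameters [ln S0, Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
    X = np.column_stack([
        np.ones_like(b),
        -b * g[:, 0] ** 2, -b * g[:, 1] ** 2, -b * g[:, 2] ** 2,
        -2 * b * g[:, 0] * g[:, 1],
        -2 * b * g[:, 0] * g[:, 2],
        -2 * b * g[:, 1] * g[:, 2],
    ])
    W = np.diag(S ** 2)                 # WLLS weights
    y = np.log(S)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    D = np.array([[beta[1], beta[4], beta[5]],
                  [beta[4], beta[2], beta[6]],
                  [beta[5], beta[6], beta[3]]])
    return np.exp(beta[0]), D
```

With noiseless signals from a known tensor, the fit recovers S0 and D exactly; the NLS and full Newton-type methods in the paper address the noisy case, where the choice of objective and Hessian matters.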

  14. Parametric study of modern airship productivity

    NASA Technical Reports Server (NTRS)

    Ardema, M. D.; Flaig, K.

    1980-01-01

    A method for estimating the specific productivity of both hybrid and fully buoyant airships is developed. Various methods of estimating structural weight of deltoid hybrids are discussed and a derived weight estimating relationship is presented. Specific productivity is used as a figure of merit in a parametric study of fully buoyant ellipsoidal and deltoid hybrid semi-buoyant vehicles. The sensitivity of results as a function of assumptions is also determined. No airship configurations were found to have superior specific productivity to transport airplanes.

  15. Poly(ethyleneoxide) functionalization through alkylation

    DOEpatents

    Sivanandan, Kulandaivelu; Eitouni, Hany Basam; Li, Yan; Pratt, Russell Clayton

    2015-04-21

    A new and efficient method of functionalizing high molecular weight polymers through alkylation using a metal amide base is described. This novel procedure can also be used to synthesize polymer-based macro-initiators containing radical initiating groups at the chain-ends for synthesis of block copolymers.

  16. A weighted ℓ1-minimization approach for sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Ji; Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2014-06-15

This work proposes a method for sparse polynomial chaos (PC) approximation of high-dimensional stochastic functions based on non-adapted random sampling. We modify the standard ℓ1-minimization algorithm, originally proposed in the context of compressive sampling, using a priori information about the decay of the PC coefficients, when available, and refer to the resulting algorithm as weighted ℓ1-minimization. We provide conditions under which we may guarantee recovery using this weighted scheme. Numerical tests are used to compare the weighted and non-weighted methods for the recovery of solutions to two differential equations with high-dimensional random inputs: a boundary value problem with a random elliptic operator and a 2-D thermally driven cavity flow with random boundary condition.
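The weighted ℓ1 objective described here can be illustrated with a generic proximal-gradient (ISTA) solver in which each coefficient gets its own threshold λ·w_i. The solver and the weight choice below are a sketch under assumed a priori decay information, not the paper's algorithm.

```python
import numpy as np

def weighted_l1_ista(A, b, w, lam=1e-3, n_iter=500):
    """Minimize 0.5*||Ax - b||^2 + lam * sum_i w_i |x_i| by ISTA
    (proximal gradient with per-coefficient soft-thresholding).
    A generic solver sketch for weighted l1 recovery; the paper's
    actual algorithm and weighting may differ.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    t = lam * np.asarray(w, float) / L     # per-coefficient threshold
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)              # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)  # soft threshold
    return x
```

Giving smaller weights to coefficients believed (a priori) to be large penalizes them less, which is the mechanism by which the weighted scheme improves recovery over uniform ℓ1.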

  17. Improved patch-based learning for image deblurring

    NASA Astrophysics Data System (ADS)

    Dong, Bo; Jiang, Zhiguo; Zhang, Haopeng

    2015-05-01

Most recent image deblurring methods use only the information found in the input image as the clue to restore the blurred region. These methods usually suffer from insufficient prior information and relatively poor adaptiveness. The patch-based method uses not only the information in the input image itself but also prior information from sample images to improve adaptiveness. However, the cost function of this method is quite time-consuming to evaluate, and the method may also produce ringing artifacts. In this paper, we propose an improved non-blind deblurring algorithm based on learning patch likelihoods. On one hand, we consider the effect of Gaussian mixture model components with different weights and normalize the weight values, which optimizes the cost function and reduces running time. On the other hand, a post-processing method is proposed to suppress the ringing artifacts produced by the traditional patch-based method. Extensive experiments verify that our method effectively reduces execution time, suppresses ringing artifacts, and preserves the quality of the deblurred image.

  18. Metabolic Cost, Mechanical Work, and Efficiency during Normal Walking in Obese and Normal-Weight Children

    ERIC Educational Resources Information Center

    Huang, Liang; Chen, Peijie; Zhuang, Jie; Zhang, Yanxin; Walt, Sharon

    2013-01-01

    Purpose: This study aimed to investigate the influence of childhood obesity on energetic cost during normal walking and to determine if obese children choose a walking strategy optimizing their gait pattern. Method: Sixteen obese children with no functional abnormalities were matched by age and gender with 16 normal-weight children. All…

  19. Intra-rater reliability and agreement of various methods of measurement to assess dorsiflexion in the Weight Bearing Dorsiflexion Lunge Test (WBLT) among female athletes.

    PubMed

    Langarika-Rocafort, Argia; Emparanza, José Ignacio; Aramendi, José F; Castellano, Julen; Calleja-González, Julio

    2017-01-01

To examine the intra-observer reliability and agreement of five methods of measuring dorsiflexion during the Weight Bearing Dorsiflexion Lunge Test, and to assess the degree of agreement between three of the methods, in female athletes. Repeated measurements study design. Volleyball club. Twenty-five volleyball players. Dorsiflexion was evaluated using five methods: heel-wall distance, first toe-wall distance, inclinometer at the tibia, inclinometer at the Achilles tendon, and the dorsiflexion angle obtained by a simple trigonometric function. For the statistical analysis, agreement was studied using the Bland-Altman method, the Standard Error of Measurement, and the Minimum Detectable Change; reliability was assessed using the Intraclass Correlation Coefficient (ICC). The inclinometer-based methods had more than 6° of measurement error, whereas the angle calculated by the trigonometric function had 3.28° of error. The inclinometer-based methods had ICC values < 0.90; the distance-based methods and the trigonometric angle measurement had ICC values > 0.90. Concerning the agreement between methods, bias ranged from 1.93° to 14.42° and random error from 4.24° to 7.96°. To assess the dorsiflexion (DF) angle in the WBLT, the angle calculated by a trigonometric function is the most repeatable method. The methods of measurement cannot be used interchangeably. Copyright © 2016 Elsevier Ltd. All rights reserved.
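A plausible reconstruction of the "simple trigonometric function" mentioned in this abstract: treat dorsiflexion as the arctangent of the horizontal toe-to-wall distance over the height of a vertical shank landmark. Both landmarks here are assumptions; the paper's exact formula may differ.

```python
import math

def wblt_angle(toe_wall_cm, landmark_height_cm):
    """Hypothetical trigonometric dorsiflexion angle for the WBLT:
    a right triangle whose horizontal leg is the first-toe-to-wall
    distance and whose vertical leg is the height of a shank landmark
    (e.g. the tibial tuberosity -- an assumed choice). Returns degrees.
    """
    return math.degrees(math.atan2(toe_wall_cm, landmark_height_cm))
```

For example, a 10 cm toe-to-wall distance with a 35 cm landmark height gives roughly a 16° dorsiflexion angle.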

  20. Robust, Adaptive Functional Regression in Functional Mixed Model Framework.

    PubMed

    Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S

    2011-09-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. 
It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.

  1. Robust, Adaptive Functional Regression in Functional Mixed Model Framework

    PubMed Central

    Zhu, Hongxiao; Brown, Philip J.; Morris, Jeffrey S.

    2012-01-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. 
It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets. PMID:22308015

  2. Structure-based Markov random field model for representing evolutionary constraints on functional sites.

    PubMed

    Jeong, Chan-Seok; Kim, Dongsup

    2016-02-24

Elucidating the cooperative mechanism of interconnected residues is an important component of understanding the biological function of a protein. Coevolution analysis has been developed to model the coevolutionary information reflecting structural and functional constraints. Recently, several methods have been developed based on a probabilistic graphical model called the Markov random field (MRF), which have led to significant improvements in coevolution analysis; however, thus far, the performance of these models has mainly been assessed with respect to protein structure. In this study, we built an MRF model whose graphical topology is determined by residue proximity in the protein structure, and derived a novel positional coevolution estimate utilizing the node weights of the MRF model. This structure-based MRF method was evaluated on three data sets, which annotate catalytic sites, allosteric sites, and comprehensively determined functional sites, respectively. We demonstrate that the structure-based MRF architecture can encode the evolutionary information associated with biological function. Furthermore, we show that the node weight can represent positional coevolution information more accurately than the edge weight. Lastly, we demonstrate that the structure-based MRF model can be reliably built from only a few aligned sequences in linear time. The results show that adopting a structure-based architecture can be an acceptable approximation for coevolution modeling with efficient computational complexity.

  3. The assessment of weight status in children and young people attending a spina bifida outpatient clinic: a retrospective medical record review

    PubMed Central

    Swift, Judy Anne; Yung, Emily; Lyons, Julia; Church, Paige

    2013-01-01

    Purpose Children with disabilities are two to three times more likely to become overweight or obese than typically developing children. Children with spina bifida (SB) are at particular risk, yet obesity prevalence and weight management with this population are under-researched. This retrospective chart review explored how weight is assessed and discussed in a children’s SB outpatient clinic. Method Height/weight data were extracted from records of children aged 2–18 with a diagnosis of SB attending an outpatient clinic at least once between June 2009–2011. Body mass index was calculated and classified using Centers for Disease Control and Prevention cut-offs. Notes around weight, diet and physical/sedentary activities were transcribed verbatim and analysed using descriptive thematic analysis. Results Of 180 eligible patients identified, only 63 records had sufficient data to calculate BMI; 15 patients were overweight (23.81%) and 11 obese (17.46%). Weight and physical activity discussions were typically related to function (e.g. mobility, pain). Diet discussions focused on bowel and bladder function and dietary challenges. Conclusions Anthropometrics were infrequently recorded, leaving an incomplete picture of weight status in children with SB and suggesting that weight is not prioritised. Bowel/bladder function was highlighted over other benefits of a healthy body weight, indicating that health promotion opportunities are being missed. Implications for Rehabilitation It is important to assess, categorise and record anthropometric data for children and youth with spina bifida as they may be at particular risk of excess weight. Information around weight categorisation should be discussed openly and non-judgmentally with children and their families. Health promotion opportunities may be missed by focusing solely on symptom management or function. 
Healthcare professionals should emphasise the broad benefits of healthy eating and physical activity, offering strategies to enable the child to incorporate healthy lifestyle behaviours appropriate to their level of ability. PMID:23510013

  4. Insights From Google Play Store User Reviews for the Development of Weight Loss Apps: Mixed-Method Analysis

    PubMed Central

    Hartmann-Boyce, Jamie; Jebb, Susan; Albury, Charlotte; Nourse, Rebecca; Aveyard, Paul

    2017-01-01

    Background Significant weight loss takes several months to achieve, and behavioral support can enhance weight loss success. Weight loss apps could provide ongoing support and deliver innovative interventions, but to do so, developers must ensure user satisfaction. Objective The aim of this study was to conduct a review of Google Play Store apps to explore what users like and dislike about weight loss and weight-tracking apps and to examine qualitative feedback through analysis of user reviews. Methods The Google Play Store was searched and screened for weight loss apps using the search terms weight loss and weight track*, resulting in 179 mobile apps. A content analysis was conducted based on the Oxford Food and Activity Behaviors taxonomy. Correlational analyses were used to assess the association between complexity of mobile health (mHealth) apps and popularity indicators. The sample was then screened for popular apps that primarily focus on weight-tracking. For the resulting subset of 15 weight-tracking apps, 569 user reviews were sampled from the Google Play Store. Framework and thematic analysis of user reviews was conducted to assess which features users valued and how design influenced users’ responses. Results The complexity (number of components) of weight loss apps was significantly positively correlated with the rating (r=.25; P=.001), number of reviews (r=.28; P<.001), and number of downloads (r=.48; P<.001) of the app. In contrast, in the qualitative analysis of weight-tracking apps, users expressed preference for simplicity and ease of use. In addition, we found that positive reinforcement through detailed feedback fostered users’ motivation for further weight loss. Smooth functioning and reliable data storage emerged as critical prerequisites for long-term app usage. 
Conclusions Users of weight-tracking apps valued simplicity, whereas users of comprehensive weight loss apps appreciated the availability of more features, indicating that complexity demands are specific to different target populations. The provision of feedback on progress can motivate users to continue their weight loss attempts. Users value seamless functioning and reliable data storage. PMID:29273575
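The correlational part of the analysis above (app complexity vs. popularity indicators) amounts to a Pearson correlation; the sketch below uses invented numbers purely to show the computation.

```python
import numpy as np

# Illustrative Pearson correlation between app complexity (number of
# behavioral components) and a popularity indicator, as in the study's
# correlational analysis. The data here are invented for illustration only.
complexity = np.array([3, 5, 2, 8, 6, 4, 9, 1, 7, 5])
downloads = np.array([10, 40, 8, 90, 55, 30, 120, 5, 70, 45])  # thousands

r = np.corrcoef(complexity, downloads)[0, 1]  # Pearson correlation coefficient
```

A positive r, as reported in the study, indicates that more complex apps tend to have more downloads, even though the qualitative reviews favored simplicity.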

  5. Current dosing of low-molecular-weight heparins does not reflect licensed product labels: an international survey

    PubMed Central

    Barras, Michael A; Kirkpatrick, Carl M J; Green, Bruce

    2010-01-01

    AIMS Low-molecular-weight heparins (LMWHs) are used globally to treat thromboembolic diseases; however, there is much debate on how to prescribe effectively for patients who have renal impairment and/or obesity. We aimed to investigate the strategies used to dose-individualize LMWH therapy. METHODS We conducted an online survey of selected hospitals in Australia, New Zealand (NZ), United Kingdom (UK) and the United States (US). Outcome measures included: the percentage of hospitals which recommended that LMWHs were prescribed according to the product label (PL), the percentage of hospitals that dose-individualized LMWHs outside the PL based on renal function, body weight and anti-Xa activity and a summary of methods used to dose-individualize therapy. RESULTS A total of 257 surveys were suitable for analysis: 84 (33%) from Australia, 79 (31%) from the UK, 73 (28%) from the US and 21 (8%) from NZ. Formal dosing protocols were used in 207 (81%) hospitals, of which 198 (96%) did not adhere to the PL. Of these 198 hospitals, 175 (87%) preferred to dose-individualize based on renal function, 128 (62%) on body weight and 48 (23%) by monitoring anti-Xa activity. All three of these variables were used in 29 (14%) hospitals, 98 (47%) used two variables and 71 (34%) used only one variable. CONCLUSIONS Dose-individualization strategies for LMWHs, which contravene the PL, were present in 96% of surveyed hospitals. Common individualization methods included dose-capping, use of lean body size descriptors to calculate renal function and the starting dose, followed by post dose anti-Xa monitoring. PMID:20573088

  6. An improved method for functional similarity analysis of genes based on Gene Ontology.

    PubMed

    Tian, Zhen; Wang, Chunyu; Guo, Maozu; Liu, Xiaoyan; Teng, Zhixia

    2016-12-23

Measures of gene functional similarity are essential tools for gene clustering, gene function prediction, evaluation of protein-protein interactions, disease gene prioritization, and other applications. In recent years, many gene functional similarity methods have been proposed based on the semantic similarity of GO terms. However, these leading approaches may make error-prone judgments, especially when they measure the specificity of GO terms as well as the information content (IC) of a term set. Therefore, how to estimate gene functional similarity reliably is still a challenging problem. We propose WIS, an effective method to measure gene functional similarity. First, WIS computes the IC of a term by employing its depth, the number of its ancestors, and the topology of its descendants in the GO graph. Second, WIS calculates the IC of a term set by considering the weighted inherited semantics of terms. Finally, WIS estimates gene functional similarity based on the IC overlap ratio of term sets. WIS is superior to other representative measures in experiments on functional classification of genes in a biological pathway, collaborative evaluation of GO-based semantic similarity measures, protein-protein interaction prediction, and correlation with gene expression. Further analysis suggests this is because WIS fully accounts for the specificity of GO terms and the weighted inherited semantics between them. The proposed WIS method is an effective and reliable way to compare gene function. The web service of WIS is freely available at http://nclab.hit.edu.cn/WIS/ .

  7. Coherent mode decomposition using mixed Wigner functions of Hermite-Gaussian beams.

    PubMed

    Tanaka, Takashi

    2017-04-15

    A new method of coherent mode decomposition (CMD) is proposed that is based on a Wigner-function representation of Hermite-Gaussian beams. In contrast to the well-known method using the cross spectral density (CSD), it directly determines the mode functions and their weights without solving the eigenvalue problem. This facilitates the CMD of partially coherent light whose Wigner functions (and thus CSDs) are not separable, in which case the conventional CMD requires solving an eigenvalue problem with a large matrix and thus is numerically formidable. An example is shown regarding the CMD of synchrotron radiation, one of the most important applications of the proposed method.

  8. Functional Interfaces Constructed by Controlled/Living Radical Polymerization for Analytical Chemistry.

    PubMed

    Wang, Huai-Song; Song, Min; Hang, Tai-Jun

    2016-02-10

The high-value applications of functional polymers in analytical science generally require well-defined interfaces, including precisely synthesized molecular architectures and compositions. Controlled/living radical polymerization (CRP) has been developed as a versatile and powerful tool for the preparation of polymers with narrow molecular weight distributions and predetermined molecular weights. Among CRP techniques, atom transfer radical polymerization (ATRP) and reversible addition-fragmentation chain transfer (RAFT) polymerization are widely used to develop new materials for analytical science, such as surface-modified core-shell particles, monoliths, molecularly imprinted polymer (MIP) micro- or nanospheres, fluorescent nanoparticles, and multifunctional materials. In this review, we summarize the emerging functional interfaces constructed by RAFT and ATRP for applications in analytical science. Various polymers with precisely controlled architectures, including homopolymers, block copolymers, molecularly imprinted copolymers, and grafted copolymers, have been synthesized by CRP methods for molecular separation, retention, or sensing. We expect that CRP methods will become the most popular techniques for preparing functional polymers broadly applicable in analytical chemistry.

  9. Comparison of two schemes for automatic keyword extraction from MEDLINE for functional gene clustering.

    PubMed

    Liu, Ying; Ciliax, Brian J; Borges, Karin; Dasigi, Venu; Ram, Ashwin; Navathe, Shamkant B; Dingledine, Ray

    2004-01-01

One of the key challenges of microarray studies is to derive biological insights from the unprecedented quantities of data on gene-expression patterns. Clustering genes by functional keyword association can provide direct information about the nature of the functional links among genes within the derived clusters. However, the quality of the keyword lists extracted from the biomedical literature for each gene significantly affects the clustering results. We extracted keywords from MEDLINE that describe the most prominent functions of the genes, and used the resulting weights of the keywords as feature vectors for gene clustering. By analyzing the resulting cluster quality, we compared two keyword weighting schemes: normalized z-score and term frequency-inverse document frequency (TFIDF). The best combination of background comparison set, stop list, and stemming algorithm was selected based on precision and recall metrics. In a test set of four known gene groups, a hierarchical algorithm correctly assigned 25 of 26 genes to the appropriate clusters based on keywords extracted by the TFIDF weighting scheme, but only 23 of 26 with the z-score method. To evaluate the effectiveness of the weighting schemes for keyword extraction for gene clusters from microarray profiles, 44 yeast genes that are differentially expressed during the cell cycle were used as a second test set. Using established measures of cluster quality, the results produced from TFIDF-weighted keywords had higher purity, lower entropy, and higher mutual information than those produced from normalized z-score weighted keywords. The optimized algorithms should be useful for sorting genes from microarray lists into functionally discrete clusters.
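The TFIDF weighting scheme that this abstract finds superior can be sketched in a few lines; the toy keyword lists below are invented, and this minimal variant omits the smoothing and normalization options found in production implementations.

```python
import math
from collections import Counter

def tfidf(docs):
    """Term frequency-inverse document frequency weighting.
    docs: list of token lists (e.g. MEDLINE keywords extracted per gene).
    Returns one {term: weight} dict per document. Minimal sketch:
    tf = count/len(doc), idf = ln(N/df), no smoothing.
    """
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: (c / len(d)) * math.log(n / df[t])
                    for t, c in tf.items()})
    return out
```

Terms concentrated in one gene's keyword list get high weights, while terms shared by every gene get weight zero, which is exactly why TFIDF vectors discriminate functional clusters better than raw counts.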

  10. Methods for the synthesis of polysilanes

    DOEpatents

    Zeigler, John M.

    1991-01-01

    A method of controlling the yield of polysilane of a desired molecular weight and/or polydispersity prepared in a reductive condensation of corresponding silane monomers on a solid catalyst dispersed in an inert solvent for both the monomers and the growing polymer chains, comprises determining the variation of molecular weight and/or polydispersity of the polysilane as a function of the solubility of the polysilane in reaction solvent, determining thereby a chosen optimum solubility of the polysilane in solvent for obtaining a desired yield of polysilane of said desired molecular weight and/or polydispersity, and thereafter carrying out the preparation of the polysilane in a solvent in which the polysilane has said chosen optimum solubility.

  11. A Novel Weighted Kernel PCA-Based Method for Optimization and Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Thimmisetty, C.; Talbot, C.; Chen, X.; Tong, C. H.

    2016-12-01

    It has been demonstrated that machine learning methods can be successfully applied to uncertainty quantification for geophysical systems through the use of the adjoint method coupled with kernel PCA-based optimization. In addition, it has been shown through weighted linear PCA how optimization with respect to both observation weights and feature space control variables can accelerate convergence of such methods. Linear machine learning methods, however, are inherently limited in their ability to represent features of non-Gaussian stochastic random fields, as they are based on only the first two statistical moments of the original data. Nonlinear spatial relationships and multipoint statistics leading to the tortuosity characteristic of channelized media, for example, are captured only to a limited extent by linear PCA. With the aim of coupling the kernel-based and weighted methods discussed, we present a novel mathematical formulation of kernel PCA, Weighted Kernel Principal Component Analysis (WKPCA), that both captures nonlinear relationships and incorporates the attribution of significance levels to different realizations of the stochastic random field of interest. We also demonstrate how new instantiations retaining defining characteristics of the random field can be generated using Bayesian methods. In particular, we present a novel WKPCA-based optimization method that minimizes a given objective function with respect to both feature space random variables and observation weights through which optimal snapshot significance levels and optimal features are learned. We showcase how WKPCA can be applied to nonlinear optimal control problems involving channelized media, and in particular demonstrate an application of the method to learning the spatial distribution of material parameter values in the context of linear elasticity, and discuss further extensions of the method to stochastic inversion.
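One plausible way to combine observation weights with kernel PCA, as described in this abstract, is to center the Gram matrix against a weighted mean and scale it by square-root weights before eigendecomposition. This is a generic sketch; the paper's WKPCA formulation may differ in detail.

```python
import numpy as np

def weighted_kernel_pca(X, weights, n_comp=2, gamma=1.0):
    """Weighted kernel PCA sketch: RBF Gram matrix, weighted centering,
    then eigendecomposition of D^{1/2} K_c D^{1/2}, where D holds the
    normalized snapshot significance weights. With uniform weights this
    reduces to standard centered kernel PCA (up to a 1/n factor).
    """
    n = len(X)
    sq = (X ** 2).sum(axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF kernel
    w = np.asarray(weights, float)
    w = w / w.sum()
    one = np.ones(n)
    # weighted centering: K_c = K - 1 (w^T K) - (K w) 1^T + (w^T K w) 1 1^T
    Kc = (K - np.outer(one, w @ K) - np.outer(K @ w, one)
          + (w @ K @ w) * np.outer(one, one))
    Dh = np.sqrt(w)
    M = Kc * np.outer(Dh, Dh)           # D^{1/2} K_c D^{1/2}, symmetric PSD
    vals, vecs = np.linalg.eigh(M)
    idx = np.argsort(vals)[::-1][:n_comp]
    return vals[idx], vecs[:, idx]
```

Raising a snapshot's weight increases its influence on both the centering and the leading components, which is the "significance level" attribution the abstract describes.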

  12. A novel iris patterns matching algorithm of weighted polar frequency correlation

    NASA Astrophysics Data System (ADS)

    Zhao, Weijie; Jiang, Linhua

    2014-11-01

Iris recognition is recognized as one of the most accurate techniques for biometric authentication. In this paper, we present a novel correlation method, Weighted Polar Frequency Correlation (WPFC), to match and evaluate two iris images; it can in fact be used to evaluate the similarity of any two images. The WPFC method is a novel matching and evaluation method for iris images that is completely different from conventional methods. For instance, the classical John Daugman method of iris recognition uses 2D Gabor wavelets to extract features of the iris image into a compact bit stream, and then matches two bit streams by Hamming distance. Our new method is instead based on correlation in the polar coordinate system in the frequency domain with regulated weights. It is motivated by the observation that the iris pattern carrying the most information for recognition is the fine structure at high frequency, rather than the gross shape of the iris image. Therefore, we transform iris images into the frequency domain and assign different weights to frequencies, then calculate the correlation of the two iris images in the frequency domain. We evaluate the iris images by summing the discrete correlation values with regulated weights and comparing the result with a preset threshold to tell whether the two iris images were captured from the same person. Experiments are carried out on both the CASIA database and self-obtained images. The results show that our method is functional and reliable, and it provides a new prospect for iris recognition systems.
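A hedged sketch of the weighted frequency-domain correlation idea: correlate two (assumed already polar-unwrapped) iris images in the Fourier domain, with weights growing with frequency radius so that fine structure dominates. The weighting and normalization below are assumptions, not the paper's exact WPFC definition.

```python
import numpy as np

def wpfc_score(img_a, img_b, weight_fn=None):
    """Weighted frequency-domain correlation of two equal-size images
    (assumed already unwrapped into polar coordinates). Each frequency
    bin's cross-power is weighted by its frequency radius by default,
    emphasizing fine (high-frequency) structure. Normalized so that
    identical images score exactly 1.
    """
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    corr = (Fa * np.conj(Fb)).real          # per-bin cross-correlation
    fy = np.fft.fftfreq(img_a.shape[0])[:, None]
    fx = np.fft.fftfreq(img_a.shape[1])[None, :]
    radius = np.hypot(fy, fx)               # frequency radius per bin
    w = radius if weight_fn is None else weight_fn(radius)
    num = (w * corr).sum()
    den = np.sqrt((w * np.abs(Fa) ** 2).sum() * (w * np.abs(Fb) ** 2).sum())
    return num / den
```

Note the radius weight zeroes out the DC bin, so overall brightness differences do not affect the score; a threshold on the score would then separate same-eye from different-eye pairs.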

  13. Compensation for the phase-type spatial periodic modulation of the near-field beam at 1053 nm

    NASA Astrophysics Data System (ADS)

    Gao, Yaru; Liu, Dean; Yang, Aihua; Tang, Ruyu; Zhu, Jianqiang

    2017-10-01

    A phase-only spatial light modulator (SLM) is used to provide and compensate for the spatial periodic modulation (SPM) of the near-field beam at the near-infrared wavelength of 1053 nm with an improved iterative weight-based method. The transmission characteristics of the incident beam are changed by the SLM to shape the spatial intensity of the output beam. The propagation and reverse propagation of light in free space are the two key processes in the iteration; the underlying theory is the beam angular spectrum transfer formula (ASTF) together with the principle of the iterative weight-based method. We make two improvements to the originally proposed iterative weight-based method: we select the appropriate parameter by choosing the minimum value of the output beam contrast degree, and we use the MATLAB built-in angle function to acquire the corresponding phase of the light wave function. The phase required to compensate for the intensity distribution of the incident SPM beam is obtained by this iterative algorithm, which decreases the magnitude of the SPM of the intensity on the observation plane. The experimental results show that compensation of the phase-type SPM of the near-field beam is subject to certain restrictions, and we analyze some factors that make the results imperfect. The experimental results verify the applicability of this iterative weight-based method to compensating for the SPM of the near-field beam.
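    The forward/reverse free-space propagation step used in such iterations is the angular spectrum method. The sketch below implements it for a Gaussian test beam at 1053 nm; the grid size, sampling pitch, and propagation distance are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Free-space propagation by the angular spectrum method.
    field: complex N x N field sampled with pitch dx [m]; a negative z
    gives the reverse-propagation step of the iterative algorithm.
    Evanescent components are suppressed."""
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(arg, 0.0))        # longitudinal wavenumber
    H = np.exp(1j * kz * z) * (arg > 0)       # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

wavelength = 1053e-9                  # 1053 nm near-field beam
dx = 10e-6                            # 10 micron sampling (assumed)
x = (np.arange(256) - 128) * dx
X, Y = np.meshgrid(x, x)
beam = np.exp(-(X**2 + Y**2) / (0.5e-3)**2).astype(complex)
out  = angular_spectrum_propagate(beam, wavelength, dx,  0.1)  # forward 10 cm
back = angular_spectrum_propagate(out,  wavelength, dx, -0.1)  # reverse 10 cm
```

    Propagating forward and then backward over the same distance should recover the input beam (up to evanescent-wave truncation), which is a convenient self-test for the propagation kernel.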

  14. Family Functioning: Associations with Weight Status, Eating Behaviors, and Physical Activity in Adolescents

    PubMed Central

    Berge, Jerica M.; Wall, Melanie; Larson, Nicole; Loth, Katie A.; Neumark-Sztainer, Dianne

    2012-01-01

    Purpose This paper examines the relationship between family functioning (e.g., communication, closeness, problem solving, behavioral control) and adolescent weight status and relevant eating and physical activity behaviors. Methods Data are from EAT 2010 (Eating and Activity in Teens), a population-based study that assessed eating and activity among socioeconomically and racially/ethnically diverse youth (n = 2,793). Adolescents (46.8% boys, 53.2% girls) completed anthropometric assessments and surveys at school in 2009–2010. Multiple linear regression was used to test the relationship between family functioning and adolescent weight, dietary intake, family meal patterns, and physical activity. Additional regression models were fit to test for interactions by race/ethnicity. Results For adolescent girls, higher family functioning was associated with lower body mass index z-score and percent overweight, less sedentary behavior, higher intake of fruits and vegetables, and more frequent family meals and breakfast consumption. For adolescent boys, higher family functioning was associated with more physical activity, less sedentary behavior, less fast food consumption, and more frequent family meals and breakfast consumption. There was one significant interaction by race/ethnicity for family meals; the association between higher family functioning and more frequent family meals was stronger for non-white boys than for white boys. Overall, the strengths of associations tended to be small, with effect sizes ranging from -0.07 to 0.31 for statistically significant associations. Conclusions Findings suggest that family functioning may be protective for adolescent weight and weight-related health behaviors across all races/ethnicities, although assumptions regarding family functioning in the homes of overweight children should be avoided given the small effect sizes. PMID:23299010

  15. The effects of weight change on glomerular filtration rate.

    PubMed

    Chang, Alex; Greene, Tom H; Wang, Xuelei; Kendrick, Cynthia; Kramer, Holly; Wright, Jackson; Astor, Brad; Shafi, Tariq; Toto, Robert; Lewis, Julia; Appel, Lawrence J; Grams, Morgan

    2015-11-01

    Little is known about the effect of weight loss/gain on kidney function. Analyses are complicated by uncertainty about optimal body surface indexing strategies for measured glomerular filtration rate (mGFR). Using data from the African-American Study of Kidney Disease and Hypertension (AASK), we determined the association of change in weight with three different estimates of change in kidney function: (i) unindexed mGFR estimated by renal clearance of iodine-125-iothalamate, (ii) mGFR indexed to concurrently measured BSA and (iii) GFR estimated from serum creatinine (eGFR). All models were adjusted for baseline weight, time, randomization group and time-varying diuretic use. We also examined whether these relationships were consistent across a number of subgroups, including tertiles of baseline 24-h urine sodium excretion. In 1094 participants followed over an average of 3.6 years, a 5-kg weight gain was associated with a 1.10 mL/min/1.73 m(2) (95% CI: 0.87 to 1.33; P < 0.001) increase in unindexed mGFR. There was no association between weight change and mGFR indexed for concurrent BSA (per 5 kg weight gain, 0.21; 95% CI: -0.02 to 0.44; P = 0.1) or between weight change and eGFR (-0.09; 95% CI: -0.32 to 0.14; P = 0.4). The effect of weight change on unindexed mGFR was less pronounced in individuals with higher baseline sodium excretion (P = 0.08 for interaction). The association between weight change and kidney function varies depending on the method of assessment. Future clinical trials should examine the effect of intentional weight change on measured GFR or filtration markers robust to changes in muscle mass. © The Author 2015. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.

  16. Prioritizing chronic obstructive pulmonary disease (COPD) candidate genes in COPD-related networks

    PubMed Central

    Zhang, Yihua; Li, Wan; Feng, Yuyan; Guo, Shanshan; Zhao, Xilei; Wang, Yahui; He, Yuehan; He, Weiming; Chen, Lina

    2017-01-01

    Chronic obstructive pulmonary disease (COPD) is a multi-factor disease that can be caused by many factors, including disturbances of metabolism and protein-protein interactions (PPIs). In this paper, a weighted COPD-related metabolic network and a weighted COPD-related PPI network were constructed based on COPD disease genes and functional information. Candidate genes in each weighted COPD-related network were prioritized using a gene prioritization method. Literature review and functional enrichment analysis of the top 100 genes in these two networks suggested the correlation of COPD and these genes. The performance of our gene prioritization method was superior to that of ToppGene and ToppNet for genes from the COPD-related metabolic network or the COPD-related PPI network, as assessed by leave-one-out cross-validation, literature validation and functional enrichment analysis. The top-ranked genes prioritized from the COPD-related metabolic and PPI networks could promote a better understanding of the molecular mechanism of this disease from different perspectives. The top 100 genes in the COPD-related metabolic network or the COPD-related PPI network might be potential markers for the diagnosis and treatment of COPD. PMID:29262568

  17. Prioritizing chronic obstructive pulmonary disease (COPD) candidate genes in COPD-related networks.

    PubMed

    Zhang, Yihua; Li, Wan; Feng, Yuyan; Guo, Shanshan; Zhao, Xilei; Wang, Yahui; He, Yuehan; He, Weiming; Chen, Lina

    2017-11-28

    Chronic obstructive pulmonary disease (COPD) is a multi-factor disease that can be caused by many factors, including disturbances of metabolism and protein-protein interactions (PPIs). In this paper, a weighted COPD-related metabolic network and a weighted COPD-related PPI network were constructed based on COPD disease genes and functional information. Candidate genes in each weighted COPD-related network were prioritized using a gene prioritization method. Literature review and functional enrichment analysis of the top 100 genes in these two networks suggested the correlation of COPD and these genes. The performance of our gene prioritization method was superior to that of ToppGene and ToppNet for genes from the COPD-related metabolic network or the COPD-related PPI network, as assessed by leave-one-out cross-validation, literature validation and functional enrichment analysis. The top-ranked genes prioritized from the COPD-related metabolic and PPI networks could promote a better understanding of the molecular mechanism of this disease from different perspectives. The top 100 genes in the COPD-related metabolic network or the COPD-related PPI network might be potential markers for the diagnosis and treatment of COPD.

  18. Functional data analysis of sleeping energy expenditure.

    PubMed

    Lee, Jong Soo; Zakeri, Issa F; Butte, Nancy F

    2017-01-01

    Adequate sleep is crucial during childhood for metabolic health, and physical and cognitive development. Inadequate sleep can disrupt metabolic homeostasis and alter sleeping energy expenditure (SEE). Functional data analysis methods were applied to SEE data to elucidate the population structure of SEE and to discriminate SEE between obese and non-obese children. Minute-by-minute SEE in 109 children, ages 5-18, was measured in room respiration calorimeters. A smoothing spline method was applied to the calorimetric data to extract the true smooth function for each subject. Functional principal component analysis was used to capture the important modes of variation of the functional data and to identify differences in SEE patterns. Combinations of functional principal component analysis and classifier algorithms were used to classify SEE. Smoothing effectively removed instrumentation noise inherent in the room calorimeter data, providing more accurate data for analysis of the dynamics of SEE. SEE exhibited declining but subtly undulating patterns throughout the night. Mean SEE was higher in obese than non-obese children (p<0.01), as expected given their greater body mass; however, the weight-adjusted mean SEE was not statistically different (p>0.1 after post hoc testing). Functional principal component scores for the first two components explained 77.8% of the variance in SEE and also differed between groups (p = 0.037). Logistic regression, support vector machine and random forest classification methods were able to distinguish weight-adjusted SEE between obese and non-obese participants with good classification rates (62-64%). Our results implicate other factors, yet to be uncovered, that affect the weight-adjusted SEE of obese and non-obese children. Functional data analysis revealed differences in the structure of SEE between obese and non-obese children that may contribute to disruption of metabolic homeostasis.
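    The smooth-then-decompose pipeline can be sketched on synthetic curves. The data below are a stand-in for the calorimeter records, and a Savitzky-Golay filter substitutes for the paper's smoothing splines; functional PCA then reduces to ordinary PCA on the discretized, smoothed curves.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic minute-by-minute "SEE" curves: a declining trend with
# undulation plus measurement noise (illustrative stand-in data).
rng = np.random.default_rng(2)
t = np.linspace(0, 8, 480)                       # 8 h, one sample/min
curves = np.array([
    1.2 - 0.05 * t + 0.1 * np.sin(t + rng.normal())
    + rng.normal(scale=0.05, size=t.size)
    for _ in range(30)
])

# Step 1: smooth each curve to remove instrumentation noise
# (Savitzky-Golay here; the study used smoothing splines).
smooth = savgol_filter(curves, window_length=31, polyorder=3, axis=1)

# Step 2: functional PCA = PCA on the discretized smooth curves.
mean_curve = smooth.mean(axis=0)
U, s, Vt = np.linalg.svd(smooth - mean_curve, full_matrices=False)
explained = s**2 / np.sum(s**2)                  # variance explained
scores = (smooth - mean_curve) @ Vt[:2].T        # first two FPC scores
```

    The per-subject scores on the leading components are what would then be fed to a classifier (logistic regression, SVM, or random forest) to discriminate groups.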

  19. Effect of 48 h Fasting on Autonomic Function, Brain Activity, Cognition, and Mood in Amateur Weight Lifters.

    PubMed

    Solianik, Rima; Sujeta, Artūras; Terentjevienė, Asta; Skurvydas, Albertas

    2016-01-01

    Objectives. The acute fasting-induced cardiovascular autonomic response and its effect on cognition and mood remain debatable. Thus, the main purpose of this study was to estimate the effect of a 48 h, zero-calorie diet on autonomic function, brain activity, cognition, and mood in amateur weight lifters. Methods. Nine participants completed a 48 h, zero-calorie diet program. Cardiovascular autonomic function, resting frontal brain activity, cognitive performance, and mood were evaluated before and after fasting. Results. Fasting decreased ( p < 0.05) weight, heart rate, and systolic blood pressure, whereas no changes were evident regarding any of the measured heart rate variability indices. Fasting decreased ( p < 0.05) the concentration of oxygenated hemoglobin and improved ( p < 0.05) mental flexibility and shifting set, whereas no changes were observed in working memory, visuospatial discrimination, and spatial orientation ability. Fasting also increased ( p < 0.05) anger, whereas other mood states were not affected by it. Conclusions. 48 h fasting resulted in higher parasympathetic activity and decreased resting frontal brain activity, increased anger, and improved prefrontal-cortex-related cognitive functions, such as mental flexibility and set shifting, in amateur weight lifters. In contrast, hippocampus-related cognitive functions were not affected by it.

  20. Effect of 48 h Fasting on Autonomic Function, Brain Activity, Cognition, and Mood in Amateur Weight Lifters

    PubMed Central

    Skurvydas, Albertas

    2016-01-01

    Objectives. The acute fasting-induced cardiovascular autonomic response and its effect on cognition and mood remain debatable. Thus, the main purpose of this study was to estimate the effect of a 48 h, zero-calorie diet on autonomic function, brain activity, cognition, and mood in amateur weight lifters. Methods. Nine participants completed a 48 h, zero-calorie diet program. Cardiovascular autonomic function, resting frontal brain activity, cognitive performance, and mood were evaluated before and after fasting. Results. Fasting decreased (p < 0.05) weight, heart rate, and systolic blood pressure, whereas no changes were evident regarding any of the measured heart rate variability indices. Fasting decreased (p < 0.05) the concentration of oxygenated hemoglobin and improved (p < 0.05) mental flexibility and shifting set, whereas no changes were observed in working memory, visuospatial discrimination, and spatial orientation ability. Fasting also increased (p < 0.05) anger, whereas other mood states were not affected by it. Conclusions. 48 h fasting resulted in higher parasympathetic activity and decreased resting frontal brain activity, increased anger, and improved prefrontal-cortex-related cognitive functions, such as mental flexibility and set shifting, in amateur weight lifters. In contrast, hippocampus-related cognitive functions were not affected by it. PMID:28025637

  1. Filtering genetic variants and placing informative priors based on putative biological function.

    PubMed

    Friedrichs, Stefanie; Malzahn, Dörthe; Pugh, Elizabeth W; Almeida, Marcio; Liu, Xiao Qing; Bailey, Julia N

    2016-02-03

    High-density genetic marker data, especially sequence data, imply an immense multiple testing burden. This can be ameliorated by filtering genetic variants, exploiting or accounting for correlations between variants, jointly testing variants, and by incorporating informative priors. Priors can be based on biological knowledge or predicted variant function, or even be used to integrate gene expression or other omics data. Based on Genetic Analysis Workshop (GAW) 19 data, this article discusses diversity and usefulness of functional variant scores provided, for example, by PolyPhen2, SIFT, or RegulomeDB annotations. Incorporating functional scores into variant filters or weights and adjusting the significance level for correlations between variants yielded significant associations with blood pressure traits in a large family study of Mexican Americans (GAW19 data set). Marker rs218966 in gene PHF14 and rs9836027 in MAP4 significantly associated with hypertension; additionally, rare variants in SNUPN significantly associated with systolic blood pressure. Variant weights strongly influenced the power of kernel methods and burden tests. Apart from variant weights in test statistics, prior weights may also be used when combining test statistics or to informatively weight p values while controlling false discovery rate (FDR). Indeed, power improved when gene expression data for FDR-controlled informative weighting of association test p values of genes was used. Finally, approaches exploiting variant correlations included identity-by-descent mapping and the optimal strategy for joint testing rare and common variants, which was observed to depend on linkage disequilibrium structure.
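    Informative weighting of p-values under FDR control can be illustrated with a weighted Benjamini-Hochberg procedure. This is a generic sketch of the idea, not the specific GAW19 analyses: p-values are divided by prior weights (normalized to mean 1) before the usual BH step, so hypotheses with informative priors need less evidence. The p-values and weights below are made up.

```python
import numpy as np

def weighted_bh(pvals, weights, alpha=0.05):
    """Weighted Benjamini-Hochberg sketch: BH applied to p_i / w_i
    with weights normalized to mean 1. Illustrative only."""
    p = np.asarray(pvals, float)
    w = np.asarray(weights, float)
    w = w * len(w) / w.sum()            # normalize weights to mean 1
    q = p / w                           # prior-weighted p-values
    order = np.argsort(q)
    m = len(q)
    thresh = alpha * np.arange(1, m + 1) / m
    passed = q[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    rejected = np.zeros(m, bool)
    rejected[order[:k]] = True          # reject the k smallest q's
    return rejected

p = [0.001, 0.009, 0.04, 0.2, 0.5]      # hypothetical test p-values
w = [2.0, 1.0, 1.0, 0.5, 0.5]           # e.g. expression-based priors
rej = weighted_bh(p, w)
```

    Up-weighting a gene halves the evidence it needs, at the cost of raising the bar for down-weighted genes, which is why informative (rather than arbitrary) weights are what improve power.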

  2. An improved local radial point interpolation method for transient heat conduction analysis

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang

    2013-06-01

    The smoothing thin plate spline (STPS) interpolation, formulated with the penalty function method according to optimization theory, is presented to deal with transient heat conduction problems. The smoothness conditions on the shape functions and their derivatives can be satisfied so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be imposed as in the finite element method (FEM) because the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected as the time discretization scheme. Three numerical examples are presented to demonstrate the validity and accuracy of the present approach compared with traditional thin plate spline (TPS) radial basis functions.
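    The distinction between exact TPS interpolation and its smoothed (penalized) variant can be shown with SciPy's `RBFInterpolator`, whose `smoothing` parameter plays the role of the penalty weight. The scattered 2-D field below is a synthetic stand-in, not the paper's heat-conduction problem.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Scattered 2-D samples of a smooth "temperature" field (synthetic).
rng = np.random.default_rng(3)
pts = rng.random((100, 2))
vals = np.sin(2 * np.pi * pts[:, 0]) * np.cos(2 * np.pi * pts[:, 1])

# Exact thin plate spline (TPS): interpolates the data exactly.
tps = RBFInterpolator(pts, vals, kernel='thin_plate_spline')

# Smoothing TPS (STPS): smoothing > 0 acts as the penalty term,
# trading exact interpolation for a smoother, less distorted surface.
stps = RBFInterpolator(pts, vals, kernel='thin_plate_spline',
                       smoothing=1e-3)

grid = rng.random((5, 2))
exact_vals = tps(grid)
smooth_vals = stps(grid)
```

    With `smoothing=0` the shape functions reproduce the nodal values exactly (the Kronecker delta property the abstract relies on for imposing essential boundary conditions), while a positive penalty suppresses the oscillatory distortions of exact TPS fits on noisy data.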

  3. Nutrition management methods effective in increasing weight, survival time and functional status in ALS patients: a systematic review.

    PubMed

    Kellogg, Jaylin; Bottman, Lindsey; Arra, Erin J; Selkirk, Stephen M; Kozlowski, Frances

    2018-02-01

    Poor prognosis and decreased survival time correlate with the nutritional status of patients with amyotrophic lateral sclerosis (ALS). Various studies were reviewed which assessed weight, body mass index (BMI), survival time and ALS functional rating scale revised (ALSFRS-R) in order to determine the best nutrition management methods for this patient population. A systematic review was conducted using CINAHL, Medline, and PubMed, and various search terms in order to determine the most recent clinical trials and observational studies that have been conducted concerning nutrition and ALS. Four articles met criteria to be included in the review. Data were extracted from these articles and were inputted into the Data Extraction Tool (DET) provided by the Academy of Nutrition and Dietetics (AND). Results showed that nutrition supplementation does promote weight stabilisation or weight gain in individuals with ALS. Given the low risk and low cost associated with intervention, early and aggressive nutrition intervention is recommended. This systematic review shows that there is a lack of high quality evidence regarding the efficacy of any dietary interventions for promoting survival in ALS or slowing disease progression; therefore more research is necessary related to effects of nutrition interventions.

  4. Association Between Dietary Intake and Function in Amyotrophic Lateral Sclerosis.

    PubMed

    Nieves, Jeri W; Gennings, Chris; Factor-Litvak, Pam; Hupf, Jonathan; Singleton, Jessica; Sharf, Valerie; Oskarsson, Björn; Fernandes Filho, J Americo M; Sorenson, Eric J; D'Amico, Emanuele; Goetz, Ray; Mitsumoto, Hiroshi

    2016-12-01

    There is growing interest in the role of nutrition in the pathogenesis and progression of amyotrophic lateral sclerosis (ALS). To evaluate the associations between nutrients, individually and in groups, and ALS function and respiratory function at diagnosis. A cross-sectional baseline analysis of the Amyotrophic Lateral Sclerosis Multicenter Cohort Study of Oxidative Stress study was conducted from March 14, 2008, to February 27, 2013, at 16 ALS clinics throughout the United States among 302 patients with ALS symptom duration of 18 months or less. Nutrient intake, measured using a modified Block Food Frequency Questionnaire (FFQ). Amyotrophic lateral sclerosis function, measured using the ALS Functional Rating Scale-Revised (ALSFRS-R), and respiratory function, measured using percentage of predicted forced vital capacity (FVC). Baseline data were available on 302 patients with ALS (median age, 63.2 years [interquartile range, 55.5-68.0 years]; 178 men and 124 women). Regression analysis of nutrients found that higher intakes of antioxidants and carotenes from vegetables were associated with higher ALSFRS-R scores or percentage FVC. Empirically weighted indices using the weighted quantile sum regression method of "good" micronutrients and "good" food groups were positively associated with ALSFRS-R scores (β [SE], 2.7 [0.69] and 2.9 [0.9], respectively) and percentage FVC (β [SE], 12.1 [2.8] and 11.5 [3.4], respectively) (all P < .001). Positive and significant associations with ALSFRS-R scores (β [SE], 1.5 [0.61]; P = .02) and percentage FVC (β [SE], 5.2 [2.2]; P = .02) for selected vitamins were found in exploratory analyses. Antioxidants, carotenes, fruits, and vegetables were associated with higher ALS function at baseline by regression of nutrient indices and weighted quantile sum regression analysis. We also demonstrated the usefulness of the weighted quantile sum regression method in the evaluation of diet. 
Those responsible for nutritional care of the patient with ALS should consider promoting fruit and vegetable intake since they are high in antioxidants and carotenes.

  5. Mineral inversion for element capture spectroscopy logging based on optimization theory

    NASA Astrophysics Data System (ADS)

    Zhao, Jianpeng; Chen, Hui; Yin, Lu; Li, Ning

    2017-12-01

    Understanding the mineralogical composition of a formation is an essential step in the petrophysical evaluation of petroleum reservoirs. Geochemical logging tools can provide quantitative measurements of a wide range of elements. In this paper, element capture spectroscopy (ECS) was taken as an example and an optimization method was adopted to solve the mineral inversion problem for ECS. The method used the converting relationship between elements and minerals as response equations, took into account the statistical uncertainty of the element measurements, and established an optimization function for ECS. The objective function value and reconstructed elemental logs were used to check the robustness and reliability of the inversion. Finally, the inverted mineral results agreed well with X-ray diffraction laboratory data. The accurate conversion of elemental dry weights to mineral dry weights forms the foundation for subsequent applications based on ECS.
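    The element-to-mineral inversion can be sketched as a constrained least-squares problem. The response matrix below is entirely hypothetical (illustrative elemental fractions, not real ECS constants), and nonnegative least squares stands in for the paper's weighted optimization; the key idea shown is inverting the element-mineral response equations and checking the reconstructed elemental logs.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical response matrix: rows = elements (Si, Ca, Fe, Al),
# columns = minerals (quartz, calcite, pyrite, clay). Entries are
# illustrative elemental dry-weight fractions only.
A = np.array([
    [0.47, 0.00, 0.00, 0.21],   # Si
    [0.00, 0.40, 0.00, 0.01],   # Ca
    [0.00, 0.00, 0.47, 0.05],   # Fe
    [0.00, 0.00, 0.00, 0.10],   # Al
])

true_minerals = np.array([0.55, 0.25, 0.05, 0.15])
elements = A @ true_minerals        # synthetic measured dry weights

# Inversion: nonnegative least squares keeps mineral fractions >= 0.
minerals, resid = nnls(A, elements)
reconstructed = A @ minerals        # "reconstructed elemental logs"
```

    In practice each element equation would also be weighted by its measurement uncertainty (scaling the rows of `A` and `elements` by inverse standard deviations), which is the statistical weighting the abstract refers to.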

  6. Sensor Drift Compensation Algorithm based on PDF Distance Minimization

    NASA Astrophysics Data System (ADS)

    Kim, Namyong; Byun, Hyung-Gi; Persaud, Krishna C.; Huh, Jeung-Soo

    2009-05-01

    In this paper, a new unsupervised classification algorithm is introduced to compensate for sensor drift effects in an odor sensing system using a conducting polymer sensor array. The proposed method continues updating the adaptive Radial Basis Function Network (RBFN) weights in the testing phase by minimizing the Euclidean distance between two probability density functions (PDFs): one of a set of training-phase output data and one of a set of testing-phase output data. The outputs in the testing phase obtained with the fixed weights of the RBFN are significantly dispersed and shifted from their target values, due mostly to the sensor drift effect. In the experimental results, the output data produced by the proposed method are observed to concentrate much closer to their target values. This indicates that the proposed method can be effectively applied to build an improved odor sensing system equipped with the capability of sensor drift compensation.

  7. kappa-Version of Finite Element Method: A New Mathematical and Computational Framework for BVP and IVP

    DTIC Science & Technology

    2007-01-01

    differentiability, fluid-solid interaction, error estimation, re-discretization, moving meshes ...method the weight function is an independent function v ≠ 0 ∈ Vh, with v = 0 on Γ if wh = w0 on Γ1. 2. Galerkin method (GM): If wh is an approximation... This can be demonstrated by considering a simple 1-D case (like described above) in which the discretization is uniform with characteristic length

  8. Real Time Monitoring of Dissolved Organic Carbon Concentration and Disinfection By-Product Formation Potential in a Surface Water Treatment Plant with Simultaneous UV-VIS Absorbance and Fluorescence Excitation-Emission Mapping

    NASA Astrophysics Data System (ADS)

    Gilmore, A. M.

    2015-12-01

    This study describes a method based on simultaneous absorbance and fluorescence excitation-emission mapping for rapidly and accurately monitoring dissolved organic carbon concentration and disinfection by-product formation potential for surface water sourced drinking water treatment. The method enables real-time monitoring of the Dissolved Organic Carbon (DOC), absorbance at 254 nm (UVA), the Specific UV Absorbance (SUVA) as well as the Simulated Distribution System Trihalomethane (THM) Formation Potential (SDS-THMFP) for the source and treated water among other component parameters. The method primarily involves Parallel Factor Analysis (PARAFAC) decomposition of the high and lower molecular weight humic and fulvic organic component concentrations. The DOC calibration method involves calculating a single slope factor (with the intercept fixed at 0 mg/l) by linear regression for the UVA divided by the ratio of the high and low molecular weight component concentrations. This method thus corrects for the changes in the molecular weight component composition as a function of the source water composition and coagulation treatment effects. The SDS-THMFP calibration involves a multiple linear regression of the DOC, organic component ratio, chlorine residual, pH and alkalinity. Both the DOC and SDS-THMFP correlations over a period of 18 months exhibited adjusted correlation coefficients with r2 > 0.969. The parameters can be reported as a function of compliance rules associated with required % removals of DOC (as a function of alkalinity) and predicted maximum contaminant levels (MCL) of THMs. The single instrument method, which is compatible with continuous flow monitoring or grab sampling, provides a rapid (2-3 minute) and precise indicator of drinking water disinfectant treatability without the need for separate UV photometric and DOC meter measurements or independent THM determinations.
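    The single-slope, zero-intercept calibration described above is a small worked example of least squares with a fixed intercept: the best-fit slope is `sum(x*y) / sum(x*x)`. The data below are synthetic stand-ins for the UVA-to-component-ratio predictor and lab DOC values.

```python
import numpy as np

# Synthetic calibration data: x = UVA divided by the high/low
# molecular-weight component ratio, y = lab DOC in mg/L (illustrative).
rng = np.random.default_rng(4)
x = rng.uniform(0.5, 5.0, 40)
true_slope = 2.3
y = true_slope * x + rng.normal(scale=0.05, size=x.size)

# Single-slope calibration with the intercept fixed at 0 mg/L:
# minimizing sum((y - m*x)^2) over m gives m = sum(x*y) / sum(x*x).
slope = np.sum(x * y) / np.sum(x * x)
doc_predicted = slope * x
```

    Fixing the intercept at zero encodes the physical constraint that zero absorbance (after the component-ratio correction) corresponds to zero dissolved organic carbon.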

  9. LEGO: a novel method for gene set over-representation analysis by incorporating network-based gene weights

    PubMed Central

    Dong, Xinran; Hao, Yun; Wang, Xiao; Tian, Weidong

    2016-01-01

    Pathway or gene set over-representation analysis (ORA) has become a routine task in functional genomics studies. However, currently widely used ORA tools employ statistical methods such as Fisher’s exact test that reduce a pathway into a list of genes, ignoring the constitutive functional non-equivalent roles of genes and the complex gene-gene interactions. Here, we develop a novel method named LEGO (functional Link Enrichment of Gene Ontology or gene sets) that takes into consideration these two types of information by incorporating network-based gene weights in ORA analysis. In three benchmarks, LEGO achieves better performance than Fisher and three other network-based methods. To further evaluate LEGO’s usefulness, we compare LEGO with five gene expression-based and three pathway topology-based methods using a benchmark of 34 disease gene expression datasets compiled by a recent publication, and show that LEGO is among the top-ranked methods in terms of both sensitivity and prioritization for detecting target KEGG pathways. In addition, we develop a cluster-and-filter approach to reduce the redundancy among the enriched gene sets, making the results more interpretable to biologists. Finally, we apply LEGO to two lists of autism genes, and identify relevant gene sets to autism that could not be found by Fisher. PMID:26750448

  10. LEGO: a novel method for gene set over-representation analysis by incorporating network-based gene weights.

    PubMed

    Dong, Xinran; Hao, Yun; Wang, Xiao; Tian, Weidong

    2016-01-11

    Pathway or gene set over-representation analysis (ORA) has become a routine task in functional genomics studies. However, currently widely used ORA tools employ statistical methods such as Fisher's exact test that reduce a pathway into a list of genes, ignoring the constitutive functional non-equivalent roles of genes and the complex gene-gene interactions. Here, we develop a novel method named LEGO (functional Link Enrichment of Gene Ontology or gene sets) that takes into consideration these two types of information by incorporating network-based gene weights in ORA analysis. In three benchmarks, LEGO achieves better performance than Fisher and three other network-based methods. To further evaluate LEGO's usefulness, we compare LEGO with five gene expression-based and three pathway topology-based methods using a benchmark of 34 disease gene expression datasets compiled by a recent publication, and show that LEGO is among the top-ranked methods in terms of both sensitivity and prioritization for detecting target KEGG pathways. In addition, we develop a cluster-and-filter approach to reduce the redundancy among the enriched gene sets, making the results more interpretable to biologists. Finally, we apply LEGO to two lists of autism genes, and identify relevant gene sets to autism that could not be found by Fisher.
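    The Fisher's exact test baseline that LEGO is compared against can be shown concretely. This sketch is the conventional ORA 2x2 contingency test only; LEGO itself additionally incorporates network-based gene weights, which are not reproduced here. Gene names are synthetic.

```python
from scipy.stats import fisher_exact

def ora_fisher(study_genes, gene_set, background):
    """Conventional over-representation analysis: one-sided Fisher's
    exact test on the 2x2 table of study/background vs in/out of set."""
    study = set(study_genes)
    gs = set(gene_set) & set(background)
    a = len(study & gs)                        # study hits in the set
    b = len(study - gs)                        # study genes outside it
    c = len(gs - study)                        # set genes not in study
    d = len(set(background) - study - gs)      # everything else
    odds, p = fisher_exact([[a, b], [c, d]], alternative='greater')
    return odds, p

background = [f"g{i}" for i in range(1000)]            # genome background
gene_set = [f"g{i}" for i in range(50)]                # a pathway
study = [f"g{i}" for i in range(20)] + ["g500"]        # 20/21 hits
odds, p = ora_fisher(study, gene_set, background)
```

    Because this test treats every gene in the set as interchangeable, it ignores exactly the two signals LEGO exploits: non-equivalent functional roles and gene-gene interactions.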

  11. Resolving anatomical and functional structure in human brain organization: identifying mesoscale organization in weighted network representations.

    PubMed

    Lohse, Christian; Bassett, Danielle S; Lim, Kelvin O; Carlson, Jean M

    2014-10-01

    Human brain anatomy and function display a combination of modular and hierarchical organization, suggesting the importance of both cohesive structures and variable resolutions in the facilitation of healthy cognitive processes. However, tools to simultaneously probe these features of brain architecture require further development. We propose and apply a set of methods to extract cohesive structures in network representations of brain connectivity using multi-resolution techniques. We employ a combination of soft thresholding, windowed thresholding, and resolution in community detection, that enable us to identify and isolate structures associated with different weights. One such mesoscale structure is bipartivity, which quantifies the extent to which the brain is divided into two partitions with high connectivity between partitions and low connectivity within partitions. A second, complementary mesoscale structure is modularity, which quantifies the extent to which the brain is divided into multiple communities with strong connectivity within each community and weak connectivity between communities. Our methods lead to multi-resolution curves of these network diagnostics over a range of spatial, geometric, and structural scales. For statistical comparison, we contrast our results with those obtained for several benchmark null models. Our work demonstrates that multi-resolution diagnostic curves capture complex organizational profiles in weighted graphs. We apply these methods to the identification of resolution-specific characteristics of healthy weighted graph architecture and altered connectivity profiles in psychiatric disease.
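    One multi-resolution diagnostic from the abstract, weighted modularity swept over the resolution parameter, can be computed directly with NetworkX (the `resolution` keyword assumes a reasonably recent NetworkX release). The toy graph below is a stand-in for a thresholded brain connectivity matrix, with two cohesive modules and one weak bridge.

```python
import networkx as nx
from networkx.algorithms.community import modularity

# Toy weighted graph: two strongly connected modules plus a weak link.
G = nx.Graph()
module_a = [0, 1, 2, 3]
module_b = [4, 5, 6, 7]
for module in (module_a, module_b):
    for u in module:
        for v in module:
            if u < v:
                G.add_edge(u, v, weight=1.0)   # strong within-module
G.add_edge(3, 4, weight=0.1)                   # weak between-module link

partition = [set(module_a), set(module_b)]
# Sweep the resolution parameter gamma to obtain a multi-resolution
# modularity curve for this fixed partition.
curve = [modularity(G, partition, weight='weight', resolution=gamma)
         for gamma in (0.5, 1.0, 2.0)]
```

    For a fixed partition, modularity decreases monotonically in the resolution parameter; in multi-resolution community detection the partition itself is re-optimized at each gamma, producing the diagnostic curves the paper compares against null models.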

  12. Consumption of High-Polyphenol Dark Chocolate Improves Endothelial Function in Individuals with Stage 1 Hypertension and Excess Body Weight

    PubMed Central

    Nogueira, Lívia de Paula; Knibel, Marcela Paranhos; Torres, Márcia Regina Simas Gonçalves; Nogueira Neto, José Firmino; Sanjuliani, Antonio Felipe

    2012-01-01

Background. Hypertension and excess body weight are important risk factors for endothelial dysfunction. Recent evidence suggests that high-polyphenol dark chocolate improves endothelial function and lowers blood pressure. This study aimed to evaluate the association of chocolate 70% cocoa intake with metabolic profile, oxidative stress, inflammation, blood pressure, and endothelial function in stage 1 hypertensives with excess body weight. Methods. This intervention clinical trial included 22 stage 1 hypertensives without previous antihypertensive treatment, aged 18 to 60 years, with a body mass index between 25.0 and 34.9 kg/m2. All participants were instructed to consume 50 g of chocolate 70% cocoa/day (2135 mg polyphenols) for 4 weeks. Endothelial function was evaluated by peripheral artery tonometry using Endo-PAT 2000 (Itamar Medical). Results. Twenty participants (10 men) completed the study. Comparison of pre-post intervention revealed that (1) there were no significant changes in anthropometric parameters, percentage body fat, glucose metabolism, lipid profile, biomarkers of inflammation, adhesion molecules, oxidized LDL, and blood pressure; (2) the assessment of endothelial function through the reactive hyperemia index showed a significant increase: 1.94 ± 0.18 to 2.22 ± 0.08, P = 0.01. Conclusion. In individuals with stage 1 hypertension and excess body weight, high-polyphenol dark chocolate improves endothelial function. PMID:23209885

  13. Multi-Objective Programming for Lot-Sizing with Quantity Discount

    NASA Astrophysics Data System (ADS)

    Kang, He-Yau; Lee, Amy H. I.; Lai, Chun-Mei; Kang, Mei-Sung

    2011-11-01

Multi-objective programming (MOP) is a popular method for decision making in a complex environment. In a MOP, decision makers try to optimize two or more objectives simultaneously under various constraints. A complete optimal solution seldom exists, and a Pareto-optimal solution is usually sought instead. Some methods, such as the weighting method, which assigns priorities to the objectives and sets aspiration levels for them, are used to derive a compromise solution. The ɛ-constraint method is a modified weighting method: one of the objective functions is optimized while the other objective functions are treated as constraints and incorporated in the constraint part of the model. This research considers a stochastic lot-sizing problem with multiple suppliers and quantity discounts. The model is then transformed into a mixed integer programming (MIP) model based on the ɛ-constraint method. A numerical example demonstrates the practicality of the proposed model, showing that it is an effective and accurate tool for determining a manufacturer's replenishment from multiple suppliers over multiple periods.
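The ɛ-constraint idea itself can be sketched in a few lines. The toy bi-objective problem below is invented for illustration (it is not the paper's lot-sizing model): one objective is minimized while the other is moved into the constraint set, and sweeping ɛ traces out compromise solutions.

```python
# Hypothetical sketch of the epsilon-constraint method: minimize f1 subject
# to f2(x) <= eps, over a small discrete decision space. All numbers invented.

def epsilon_constraint(candidates, f1, f2, eps):
    """Return the candidate minimizing f1 among those with f2 <= eps."""
    feasible = [x for x in candidates if f2(x) <= eps]
    return min(feasible, key=f1) if feasible else None

# Order quantity x in 0..10: cost falls with x (discounts), lateness rises.
f1 = lambda x: 100 - 7 * x + 0.5 * x * x   # purchasing cost (to minimize)
f2 = lambda x: x                            # lateness proxy (constrained)
candidates = range(11)

# Sweeping eps traces out Pareto-optimal compromises.
for eps in (2, 5, 10):
    x = epsilon_constraint(candidates, f1, f2, eps)
    print(eps, x, f1(x))
```

Tightening ɛ forces a worse value of the primary objective, which is exactly the trade-off a decision maker inspects when choosing among Pareto-optimal solutions.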

  14. Convergence and Efficiency of Adaptive Importance Sampling Techniques with Partial Biasing

    NASA Astrophysics Data System (ADS)

    Fort, G.; Jourdain, B.; Lelièvre, T.; Stoltz, G.

    2018-04-01

We propose a new Monte Carlo method to efficiently sample a multimodal distribution (known up to a normalization constant). We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, which can also be seen as a generalization of well-tempered metadynamics. The dynamics is based on an adaptive importance sampling technique. The importance function relies on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly, yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to an additive constant, the free energy, and the discrete-valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and is a generalization of the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. We prove the convergence of the algorithm and analyze numerically its efficiency on a toy example.
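The core adaptive mechanism can be caricatured in a few lines. The following toy sketch is a simplification for illustration only, not the algorithm analyzed in the paper: it penalizes the log-weight of the currently occupied set after every step, so the walker is pushed out of the heavier mode and both sets end up visited.

```python
import math
import random

random.seed(0)

# Unnormalized toy target: set A (states 0-4) is 25x heavier than set B (5-9).
pi = [50.0] * 5 + [2.0] * 5
part = [0] * 5 + [1] * 5          # "collective variable": which set a state is in
theta = [0.0, 0.0]                # running log-weight estimates of the two sets
x, visits = 0, [0, 0]

for _ in range(20000):
    y = x + random.choice((-1, 1))
    if 0 <= y <= 9:
        # Metropolis ratio for the biased target pi(x) / exp(theta[part[x]])
        ratio = (pi[y] * math.exp(theta[part[x]] - theta[part[y]])) / pi[x]
        if random.random() < ratio:
            x = y
    theta[part[x]] += 0.001        # penalize the currently occupied set
    visits[part[x]] += 1

# Both sets are visited; the learnt gap approaches the true log-weight ratio.
print(visits, round(theta[0] - theta[1], 2))
```

The constant penalization step here is the crude version of the updating strategy the paper refines, and the full-strength bias corresponds to using the whole free energy rather than a fraction of it.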

  15. Explaining ethnic disparities in lung function among young adults: A pilot investigation

    PubMed Central

    Patel, Jaymini; Minelli, Cosetta; Burney, Peter G. J.

    2017-01-01

Background Ethnic disparities in lung function have been linked mainly to anthropometric factors but have not been fully explained. We conducted a cross-sectional pilot study to investigate how best to study ethnic differences in lung function in young adults and evaluate whether these could be explained by birth weight and socio-economic factors. Methods We recruited 112 university students of White and South Asian British ethnicity, measured post-bronchodilator lung function, obtained information on respiratory symptoms and socio-economic factors through questionnaires, and acquired birth weight through data linkage. We regressed lung function against ethnicity and candidate predictors defined a priori using linear regression, and used penalised regression to examine a wider range of factors. We reviewed the implications of our findings for the feasibility of a larger study. Results There was a similar parental socio-economic environment and no difference in birth weight between the two ethnic groups, but the ethnic difference in FVC adjusted for sex, age, height, demi-span, father’s occupation, birth weight, maternal educational attainment and maternal upbringing was -0.81 L (95% CI: -1.01 to -0.54 L). Differences in body proportions did not explain the ethnic difference, although parental immigration was an important predictor of FVC independent of ethnic group. Participants were comfortable with study procedures and we were able to link birth weight data to clinical measurements. Conclusion Studies of ethnic disparities in lung function among young adults are feasible. Future studies should recruit a socially more diverse sample and investigate the role of markers of acculturation in explaining such differences. PMID:28575113

  16. Optimal apodization design for medical ultrasound using constrained least squares part I: theory.

    PubMed

    Guenther, Drake A; Walker, William F

    2007-02-01

    Aperture weighting functions are critical design parameters in the development of ultrasound systems because beam characteristics affect the contrast and point resolution of the final output image. In previous work by our group, we developed a metric that quantifies a broadband imaging system's contrast resolution performance. We now use this metric to formulate a novel general ultrasound beamformer design method. In our algorithm, we use constrained least squares (CLS) techniques and a linear algebra formulation to describe the system point spread function (PSF) as a function of the aperture weightings. In one approach, we minimize the energy of the PSF outside a certain boundary and impose a linear constraint on the aperture weights. In a second approach, we minimize the energy of the PSF outside a certain boundary while imposing a quadratic constraint on the energy of the PSF inside the boundary. We present detailed analysis for an arbitrary ultrasound imaging system and discuss several possible applications of the CLS techniques, such as designing aperture weightings to maximize contrast resolution and improve the system depth of field.

  17. Correlative weighted stacking for seismic data in the wavelet domain

    USGS Publications Warehouse

Zhang, S.; Xu, Y.; Xia, J.

    2004-01-01

Horizontal stacking plays a crucial role in modern seismic data processing, for it not only suppresses random noise and multiple reflections, but also provides foundational data for subsequent migration and inversion. However, a number of examples have shown that random noise in adjacent traces exhibits correlation and coherence. Average stacking and weighted stacking based on the conventional correlation function both produce false events caused by noise. Wavelet transforms and high-order statistics are very useful tools in modern signal processing. The multiresolution analysis in wavelet theory can decompose a signal on different scales, and the high-order correlation function can suppress correlated noise, against which the conventional correlation function is of no use. Based on the theory of wavelet transforms and high-order statistics, the high-order correlative weighted stacking (HOCWS) technique is presented in this paper. Its essence is to stack common-midpoint gathers after normal moveout correction by weights that are calculated through high-order correlative statistics in the wavelet domain. Synthetic examples demonstrate its advantages in improving the signal-to-noise (S/N) ratio and suppressing correlated random noise.
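As a point of reference, the conventional correlation-weighted stack (the baseline the HOCWS method improves upon) can be sketched as follows. The traces are invented, and the paper's wavelet-domain, high-order-statistics weighting is not reproduced here.

```python
# Minimal sketch: weight each NMO-corrected trace by its zero-lag correlation
# with a pilot trace (the plain average), then stack. Toy traces only.

def corr(a, b):
    """Zero-lag normalized cross-correlation of two equal-length traces."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den if den else 0.0

def weighted_stack(traces):
    pilot = [sum(col) / len(traces) for col in zip(*traces)]   # average as pilot
    weights = [max(corr(t, pilot), 0.0) for t in traces]       # down-weight incoherent traces
    wsum = sum(weights)
    return [sum(w * t[i] for w, t in zip(weights, traces)) / wsum
            for i in range(len(pilot))]

traces = [
    [0.0, 1.0, 0.0, -1.0],
    [0.0, 0.9, 0.1, -1.1],
    [0.5, -0.2, 0.4, 0.3],   # incoherent (noisy) trace gets weight ~0
]
print([round(v, 2) for v in weighted_stack(traces)])
```

The HOCWS method replaces the simple correlation above with high-order correlative statistics computed on wavelet coefficients, precisely because this conventional weighting cannot reject noise that is itself correlated across traces.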

  18. [Comparison of medical and surgical treatment of infantile hypothalamic obesity].

    PubMed

    Bode, H H; Botstein, P M; Crawford, J D; Russel, P S

    1975-01-01

    The jejunoileal bypass is, of all the current therapeutic possibilities, the only permanent method for the successful treatment of a patient with hypothalamic obesity. Pre-operatively, it is advisable, however, to reduce the body weight by exclusive alimentation with Vivonex, in order to improve lung function and diminish the operation risks. Putting a smaller section of the bowel at rest will prevent major weight loss, as well as more severe complications. The disturbances of the calcium and potassium metabolism and of liver function, which frequently occur after jejunoileal bypass operation, were not observed, when on both sides of the immobilised bowel section a section of small bowel 23 to 38 cm long was maintained in normal function.

  19. Enhancing biological relevance of a weighted gene co-expression network for functional module identification.

    PubMed

    Prom-On, Santitham; Chanthaphan, Atthawut; Chan, Jonathan Hoyin; Meechai, Asawin

    2011-02-01

    Relationships among gene expression levels may be associated with the mechanisms of the disease. While identifying a direct association such as a difference in expression levels between case and control groups links genes to disease mechanisms, uncovering an indirect association in the form of a network structure may help reveal the underlying functional module associated with the disease under scrutiny. This paper presents a method to improve the biological relevance in functional module identification from the gene expression microarray data by enhancing the structure of a weighted gene co-expression network using minimum spanning tree. The enhanced network, which is called a backbone network, contains only the essential structural information to represent the gene co-expression network. The entire backbone network is decoupled into a number of coherent sub-networks, and then the functional modules are reconstructed from these sub-networks to ensure minimum redundancy. The method was tested with a simulated gene expression dataset and case-control expression datasets of autism spectrum disorder and colorectal cancer studies. The results indicate that the proposed method can accurately identify clusters in the simulated dataset, and the functional modules of the backbone network are more biologically relevant than those obtained from the original approach.
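The backbone-extraction step can be illustrated with a minimal sketch: build a graph whose edge costs are 1 - |correlation| and keep only a minimum spanning tree, so the strongest co-expression relationships survive. The gene names and correlations below are invented, and this is a simplification of the paper's pipeline.

```python
# Hypothetical sketch: MST "backbone" of a toy co-expression network
# via Kruskal's algorithm with a union-find structure.

def mst_kruskal(nodes, edges):
    """edges: (cost, u, v) tuples. Returns the MST edge list."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    tree = []
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                        # adding this edge creates no cycle
            parent[ru] = rv
            tree.append((u, v))
    return tree

corr = {("g1", "g2"): 0.95, ("g1", "g3"): 0.60, ("g2", "g3"): 0.58,
        ("g3", "g4"): 0.90, ("g2", "g4"): 0.30}
edges = [(1 - abs(r), u, v) for (u, v), r in corr.items()]
backbone = mst_kruskal(["g1", "g2", "g3", "g4"], edges)
print(sorted(backbone))
```

In the paper the backbone is then decoupled into coherent sub-networks before functional modules are reconstructed; that decomposition is not shown here.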

  20. An evolutionary algorithm that constructs recurrent neural networks.

    PubMed

    Angeline, P J; Saunders, G M; Pollack, J B

    1994-01-01

    Standard methods for simultaneously inducing the structure and weights of recurrent neural networks limit every task to an assumed class of architectures. Such a simplification is necessary since the interactions between network structure and function are not well understood. Evolutionary computations, which include genetic algorithms and evolutionary programming, are population-based search methods that have shown promise in many similarly complex tasks. This paper argues that genetic algorithms are inappropriate for network acquisition and describes an evolutionary program, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks. GNARL's empirical acquisition method allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods.

  1. Automatic weight determination in nonlinear model predictive control of wind turbines using swarm optimization technique

    NASA Astrophysics Data System (ADS)

    Tofighi, Elham; Mahdizadeh, Amin

    2016-09-01

This paper addresses the problem of automatic tuning of weighting coefficients for the nonlinear model predictive control (NMPC) of wind turbines. The choice of weighting coefficients in NMPC is critical due to their explicit impact on the efficiency of wind turbine control. Classically, these weights are selected based on an intuitive understanding of the system dynamics and control objectives. Such empirical methods, however, may not yield optimal solutions, especially as the number of parameters to be tuned and the nonlinearity of the system increase. In this paper, the problem of determining the weighting coefficients for the cost function of the NMPC controller is formulated as a two-level optimization process in which an upper-level PSO-based optimization computes the weighting coefficients for the lower-level NMPC controller, which generates control signals for the wind turbine. The proposed method is implemented to tune the weighting coefficients of an NMPC controller driving the NREL 5-MW wind turbine, and the results are compared with similar simulations for a manually tuned NMPC controller. The comparison verifies the improved performance of the controller when the weights are computed with the PSO-based technique.
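The upper level of such a scheme can be any derivative-free optimizer. Below is a generic particle swarm optimization loop minimizing a stand-in cost surface; in the paper's setting the cost of a candidate weight vector would come from simulating the closed-loop NMPC-controlled turbine, which is not reproduced here.

```python
import random

random.seed(1)

def pso(cost, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal PSO: inertia w, cognitive pull c1, social pull c2."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Invented stand-in for "closed-loop cost as a function of two NMPC weights".
cost = lambda q: (q[0] - 1.0) ** 2 + (q[1] - 3.0) ** 2
weights, val = pso(cost, 2)
print([round(x, 2) for x in weights])   # converges near (1.0, 3.0)
```

Each cost evaluation in the real two-level scheme is expensive (a full NMPC simulation), which is why the swarm size and iteration budget matter far more there than in this toy.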

  2. Foam composition for treating asbestos-containing materials and method of using same

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Block, J.; Krupkin, N.V.; Kuespert, D.R.

A composition for transforming a chrysotile asbestos-containing material into a non-asbestos material is disclosed. The composition comprises water, at least about 30% by weight of an acid component, at least about 0.1% by weight of a source of fluoride ions, and a stable foam forming amount of a foaming agent system having both cationic and non-ionic functionality. A method of transforming the asbestos-containing material into a non-asbestos material using the present composition in the form of a foam is also disclosed.

  3. Foam composition for treating asbestos-containing materials and method of using same

    DOEpatents

    Block, Jacob; Krupkin, Natalia Vera; Kuespert, Daniel Reid; Nishioka, Gary Masaru; Lau, John Wing-Keung; Palmer, Nigel Innes

    1998-04-28

A composition for transforming a chrysotile asbestos-containing material into a non-asbestos material is disclosed, wherein the composition comprises water, at least about 30% by weight of an acid component, at least about 0.1% by weight of a source of fluoride ions, and a stable foam forming amount of a foaming agent system having both cationic and non-ionic functionality. A method of transforming the asbestos-containing material into a non-asbestos material using the present composition in the form of a foam is also disclosed.

  4. Foam composition for treating asbestos-containing materials and method of using same

    DOEpatents

    Block, J.; Krupkin, N.V.; Kuespert, D.R.; Nishioka, G.M.; Lau, J.W.K.; Palmer, N.I.

    1998-04-28

A composition for transforming a chrysotile asbestos-containing material into a non-asbestos material is disclosed. The composition comprises water, at least about 30% by weight of an acid component, at least about 0.1% by weight of a source of fluoride ions, and a stable foam forming amount of a foaming agent system having both cationic and non-ionic functionality. A method of transforming the asbestos-containing material into a non-asbestos material using the present composition in the form of a foam is also disclosed.

  5. Method for protein structure alignment

    DOEpatents

    Blankenbecler, Richard; Ohlsson, Mattias; Peterson, Carsten; Ringner, Markus

    2005-02-22

This invention provides a method for protein structure alignment. More particularly, the present invention provides a method for the identification, classification and prediction of protein structures. The present invention involves two key ingredients. The first is an energy or cost function formulation of the problem simultaneously in terms of binary (Potts) assignment variables and real-valued atomic coordinates. The second is a minimization of the energy or cost function by an iterative method, where in each iteration (1) a mean field method is employed for the assignment variables and (2) exact rotation and/or translation of atomic coordinates is performed, weighted with the corresponding assignment variables.

  6. Comparing Weight Loss-Maintenance Outcomes of a Worksite-Based Lifestyle Program Delivered via DVD and Face-to-Face: A Randomized Trial.

    PubMed

    Ing, Claire Townsend; Miyamoto, Robin E S; Fang, Rui; Antonio, Mapuana; Paloma, Diane; Braun, Kathryn L; Kaholokula, Joseph Keawe'aimoku

    2018-03-01

    Native Hawaiians and other Pacific Islanders have high rates of overweight and obesity compared with other ethnic groups in Hawai'i. Effective weight loss and weight loss-maintenance programs are needed to address obesity and obesity-related health inequities for this group. Compare the effectiveness of a 9-month, worksite-based, weight loss-maintenance intervention delivered via DVD versus face-to-face in continued weight reduction and weight loss maintenance beyond the initial weight loss phase. We tested DVD versus face-to-face delivery of the PILI@Work Program's 9-month, weight loss-maintenance phase in Native Hawaiian-serving organizations. After completing the 3-month weight loss phase, participants ( n = 217) were randomized to receive the weight loss-maintenance phase delivered via trained peer facilitators or DVDs. Participant assessments at randomization and postintervention included weight, height, blood pressure, physical functioning, exercise frequency, and fat intake. Eighty-three face-to-face participants were retained at 12 months (74.1%) compared with 73 DVD participants (69.5%). There was no significant difference between groups in weight loss or weight loss maintenance. The number of lessons attended in Phase 1 of the intervention (β = 0.358, p = .022) and baseline systolic blood pressure (β = -0.038, p = .048) predicted percent weight loss at 12 months. Weight loss maintenance was similar across groups. This suggests that low-cost delivery methods for worksite-based interventions targeting at-risk populations can help address obesity and obesity-related disparities. Additionally, attendance during the weight loss phase and lower baseline systolic blood pressure predicted greater percent weight loss during the weight loss-maintenance phase, suggesting that early engagement and initial physical functioning improve long-term weight loss outcomes.

  7. Effectiveness of a physical activity programme based on the Pilates method in pregnancy and labour.

    PubMed

    Rodríguez-Díaz, Luciano; Ruiz-Frutos, Carlos; Vázquez-Lara, Juana María; Ramírez-Rodrigo, Jesús; Villaverde-Gutiérrez, Carmen; Torres-Luque, Gema

    To assess the effectiveness and safety of a physical activity programme based on use of the Pilates method, over eight weeks in pregnant women, on functional parameters, such as weight, blood pressure, strength, flexibility and spinal curvature, and on labour parameters, such as, type of delivery, episiotomy, analgesia and newborn weight. A randomized clinical trial was carried out on pregnant women, applying a programme of physical activity using the Pilates method, designed specifically for this population. A sample consisting of a total of 105 pregnant women was divided into two groups: intervention group (n=50) (32.87±4.46 years old) and control group (n=55) (31.52±4.95 years old). The intervention group followed a physical activity programme based on the Pilates method, for 2 weekly sessions, whereas the control group did not follow the program. Significant improvements (p<0.05) in blood pressure, hand grip strength, hamstring flexibility and spinal curvature, in addition to improvements during labour, decreasing the number of Caesareans and obstructed labour, episiotomies, analgesia and the weight of the newborns were found at the end of the intervention. A physical activity programme of 8 weeks based on the Pilates method improves functional parameters in pregnant women and benefits delivery. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Hyun-Ju; Chung, Chin-Wook, E-mail: joykang@hanyang.ac.kr; Choi, Hyeok

A modified central difference method (MCDM) is proposed to obtain the electron energy distribution functions (EEDFs) in single Langmuir probes. Numerical calculation of the EEDF with MCDM is simple and suffers less noise. This method provides the second derivative at a given point as the weighted average of second-order central difference derivatives calculated at different voltage intervals, weighting each by the square of the interval. In this paper, the EEDFs obtained from MCDM are compared to those calculated via the averaged central difference method. It is found that MCDM effectively suppresses the noise in the EEDF, while the same number of points is used to calculate the second derivative.
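The weighting rule described in the abstract is simple to sketch: estimate the second derivative at several intervals h and average the estimates with weights h². The probe characteristic below is a made-up smooth function, not real probe data.

```python
# Sketch of the MCDM weighting idea: second derivative at x as a weighted
# average of central differences at several intervals h, weighted by h**2.

def central_d2(f, x, h):
    """Standard second-order central difference estimate of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

def mcdm_d2(f, x, intervals):
    """Weighted average of central differences, each weighted by h**2."""
    num = sum(h * h * central_d2(f, x, h) for h in intervals)
    den = sum(h * h for h in intervals)
    return num / den

f = lambda v: v ** 4                 # stand-in for a smooth probe I-V curve
est = mcdm_d2(f, 1.0, [0.01, 0.02, 0.04])
print(round(est, 3))                 # close to the exact value 12 * 1.0**2 = 12
```

On noisy measured curves the larger intervals dominate the average (their weight h² is biggest), which is what suppresses noise amplification in the second derivative.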

  9. Exploiting the functional and taxonomic structure of genomic data by probabilistic topic modeling.

    PubMed

    Chen, Xin; Hu, Xiaohua; Lim, Tze Y; Shen, Xiajiong; Park, E K; Rosen, Gail L

    2012-01-01

In this paper, we present a method that enables both the homology-based approach and the composition-based approach to further study the functional core (i.e., the microbial core and the gene core, correspondingly). In the proposed method, the identification of major functionality groups is achieved by generative topic modeling, which is able to extract useful information from unlabeled data. We first show that a generative topic model can be used to model the taxon abundance information obtained by the homology-based approach and study the microbial core. The model considers each sample as a “document,” which has a mixture of functional groups, while each functional group (also known as a “latent topic”) is a weighted mixture of species. Therefore, estimating the generative topic model for taxon abundance data will uncover the distribution over latent functions (latent topics) in each sample. Second, we show that a generative topic model can also be used to study the genome-level composition of “N-mer” features (DNA subreads obtained by composition-based approaches). The model considers each genome as a mixture of latent genetic patterns (latent topics), while each pattern is a weighted mixture of the “N-mer” features; thus the existence of core genomes can be indicated by a set of common N-mer features. After studying the mutual information between latent topics and gene regions, we provide an explanation of the functional roles of the uncovered latent genetic patterns. The experimental results demonstrate the effectiveness of the proposed method.

  10. [Effect of polysaccharides in processed Sibiraea on immunologic function of immunosuppression mice].

    PubMed

    Duan, Bowen; Li, Yun; Liu, Xin; Yang, Yongjian

    2010-06-01

To study the effect of polysaccharides in processed Sibiraea on the immunologic function of immunosuppressed mice. Immunosuppression was induced in mice by cyclophosphamide. After the treatment, the organ weight index and the delayed-type hypersensitivity (DTH) of the mice were investigated. Humoral immune function was determined by serum hemolysin assay, non-specific immune function by the carbon clearance method, and cellular immune function by the spleen lymphocyte proliferation test. Two hundred Kunming mice were randomly divided into five groups: normal controls, model group, low-dose group (110 mg x kg(-1)), middle-dose group (220 mg x kg(-1)) and high-dose group (440 mg x kg(-1)). Drugs were given to the mice by oral gavage every day. The immunosuppressed mice treated with Sibiraea polysaccharide at intragastric doses of 110-440 mg x kg(-1) showed increased weight of the immune organs, an enhanced DTH response and increased serum hemolysin IgG and IgM content. Meanwhile, the rate of carbon clearance was enhanced and the proliferation of spleen lymphocytes was increased. Polysaccharides in processed Sibiraea can increase the weight of the immune organs. At the same time, non-specific immune, DTH, humoral immune and cellular immune functions were enhanced significantly.

  11. A novel method linking neural connectivity to behavioral fluctuations: Behavior-regressed connectivity.

    PubMed

    Passaro, Antony D; Vettel, Jean M; McDaniel, Jonathan; Lawhern, Vernon; Franaszczuk, Piotr J; Gordon, Stephen M

    2017-03-01

    During an experimental session, behavioral performance fluctuates, yet most neuroimaging analyses of functional connectivity derive a single connectivity pattern. These conventional connectivity approaches assume that since the underlying behavior of the task remains constant, the connectivity pattern is also constant. We introduce a novel method, behavior-regressed connectivity (BRC), to directly examine behavioral fluctuations within an experimental session and capture their relationship to changes in functional connectivity. This method employs the weighted phase lag index (WPLI) applied to a window of trials with a weighting function. Using two datasets, the BRC results are compared to conventional connectivity results during two time windows: the one second before stimulus onset to identify predictive relationships, and the one second after onset to capture task-dependent relationships. In both tasks, we replicate the expected results for the conventional connectivity analysis, and extend our understanding of the brain-behavior relationship using the BRC analysis, demonstrating subject-specific BRC maps that correspond to both positive and negative relationships with behavior. Comparison with Existing Method(s): Conventional connectivity analyses assume a consistent relationship between behaviors and functional connectivity, but the BRC method examines performance variability within an experimental session to understand dynamic connectivity and transient behavior. The BRC approach examines connectivity as it covaries with behavior to complement the knowledge of underlying neural activity derived from conventional connectivity analyses. Within this framework, BRC may be implemented for the purpose of understanding performance variability both within and between participants. Published by Elsevier B.V.
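The WPLI itself has a compact definition: the magnitude of the mean imaginary part of the cross-spectrum, normalized by the mean magnitude of the imaginary part. Below is a minimal sketch with toy cross-spectra, and without the trial-window weighting function that the BRC method adds on top.

```python
# Sketch of the weighted phase lag index over a set of trials:
# WPLI = |E[Im(S)]| / E[|Im(S)|], S = cross-spectrum of two channels
# at one frequency. Cross-spectra here are invented complex numbers.

def wpli(cross_spectra):
    imags = [s.imag for s in cross_spectra]
    denom = sum(abs(v) for v in imags)
    return abs(sum(imags)) / denom if denom else 0.0

# Consistent lead (all positive imaginary parts) -> WPLI = 1.
consistent = [1 + 0.5j, 0.2 + 1.0j, -0.3 + 0.8j]
# Mixed lead/lag across trials -> WPLI < 1.
mixed = [1 + 0.5j, 0.2 - 1.0j, -0.3 + 0.8j]
print(wpli(consistent), round(wpli(mixed), 3))
```

Because only the imaginary part enters, zero-lag (volume-conduction) contributions drop out, which is why WPLI-family measures are popular for EEG connectivity; BRC then recomputes this quantity over sliding trial windows weighted toward the behavior of interest.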

  12. The fundamentals of adaptive grid movement

    NASA Technical Reports Server (NTRS)

    Eiseman, Peter R.

    1990-01-01

    Basic grid point movement schemes are studied. The schemes are referred to as adaptive grids. Weight functions and equidistribution in one dimension are treated. The specification of coefficients in the linear weight, attraction to a given grid or a curve, and evolutionary forces are considered. Curve by curve and finite volume methods are described. The temporal coupling of partial differential equations solvers and grid generators was discussed.
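Equidistribution in one dimension can be sketched directly: place the grid points so that each cell carries an equal share of the integral of the weight function. The weight function below is an arbitrary example that clusters points near x = 0.5; this is an illustration of the principle, not code from the report.

```python
import math

def equidistribute(w, n_cells, a=0.0, b=1.0, fine=2000):
    """Place n_cells+1 grid points in [a, b] so each cell holds an equal
    share of the integral of the weight function w."""
    h = (b - a) / fine
    xs = [a + i * h for i in range(fine + 1)]
    cum = [0.0]                                   # cumulative trapezoidal integral of w
    for i in range(fine):
        cum.append(cum[-1] + 0.5 * (w(xs[i]) + w(xs[i + 1])) * h)
    total = cum[-1]
    grid, j = [a], 0
    for k in range(1, n_cells):
        target = total * k / n_cells
        while cum[j + 1] < target:                # locate the fine cell holding the target
            j += 1
        frac = (target - cum[j]) / (cum[j + 1] - cum[j])
        grid.append(xs[j] + frac * h)             # invert the cumulative by interpolation
    grid.append(b)
    return grid

# Arbitrary example weight: clusters grid points near x = 0.5.
w = lambda x: 1.0 + 20.0 * math.exp(-200.0 * (x - 0.5) ** 2)
print([round(x, 3) for x in equidistribute(w, 8)])
```

A constant weight recovers a uniform grid; a peaked weight concentrates points where the solution needs resolution, which is the adaptive-movement behavior the lecture describes.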

  13. Treadmill Training with Partial Body-Weight Support in Children with Cerebral Palsy: A Systematic Review

    ERIC Educational Resources Information Center

    Mutlu, Akmer; Krosschell, Kristin; Spira, Deborah Gaebler

    2009-01-01

Aim: The aim of this systematic review was to examine the literature on the effects of partial body-weight support treadmill training (PBWSTT) in children with cerebral palsy (CP) on functional outcomes and attainment of ambulation. Method: We searched the relevant literature from 1950 to July 2007. We found eight studies on the use of PBWSTT on…

  14. Wood density-moisture profiles in old-growth Douglas-fir and western hemlock.

    Treesearch

W.Y. Pong; Dale R. Waddell; Michael B. Lambert

    1986-01-01

    Accurate estimation of the weight of each load of logs is necessary for safe and efficient aerial logging operations. The prediction of green density (lb/ft3) as a function of height is a critical element in the accurate estimation of tree bole and log weights. Two sampling methods, disk and increment core (Bergstrom xylodensimeter), were used to measure the density-...

  15. [Effect of Codonopsis Radix maintained with sulfur fumigation on immune function in mice].

    PubMed

    Liu, Cheng-song; Wang, Yu-ping; Shi, Yan-bin; Ma, Xing-ming; Li, Hui-li; Zhang, Xiao-yun; Li, Shou-tang

    2014-11-01

To investigate the immune function of mice given the extract of Codonopsis Radix maintained with sulfur fumigation. Mice were divided into five groups. Except for the normal control group, the mice were fed the extract of Codonopsis Radix maintained with sulfur fumigation at high, medium and low doses, as well as a medium dose of Codonopsis Radix maintained with the low-temperature vacuum method, respectively. Mice were treated once a day for 10 continuous days. Weight change, organ indexes, blood cell indices, macrophage phagocytic function, and IL-2 and IFN-γ levels were measured. Compared with the normal control group, Codonopsis Radix maintained with sulfur fumigation at medium and high doses inhibited the body weight increase of mice; the white blood cell count of the high-dose group was significantly increased; significant increases in macrophage phagocytosis were observed in all treated groups; and the spleen index and IFN-γ level of the medium-dose sulfur fumigation group were increased significantly. Codonopsis Radix maintained with sulfur fumigation can promote mouse immune function to a certain degree. There was no difference in immune effect between Codonopsis Radix maintained with sulfur fumigation and the low-temperature vacuum method during the experimental period. However, taking the extract of Codonopsis Radix maintained with sulfur fumigation can exert a negative effect on appetite and body weight in mice.

  16. FW-CADIS Method for Global and Semi-Global Variance Reduction of Monte Carlo Radiation Transport Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2014-01-01

This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy-dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.

  17. New Term Weighting Formulas for the Vector Space Method in Information Retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chisholm, E.; Kolda, T.G.

    The goal in information retrieval is to enable users to automatically and accurately find data relevant to their queries. One possible approach to this problem is to use the vector space model, which models documents and queries as vectors in the term space. The components of the vectors are determined by the term weighting scheme, a function of the frequencies of the terms in the document or query as well as throughout the collection. We discuss popular term weighting schemes and present several new schemes that offer improved performance.
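
    As a concrete baseline, the classic tf-idf weighting (one of the popular schemes in this family; the toy collection below is invented) can be sketched as:

```python
import math

# Minimal tf-idf sketch for the vector space model: a vector component for
# a term is its in-document frequency times a rarity factor over the
# collection. The three "documents" are an invented toy collection.
docs = [
    ["weight", "function", "method"],
    ["vector", "space", "method"],
    ["term", "weight", "scheme"],
]
N = len(docs)

def tfidf(term, doc):
    tf = doc.count(term)               # term frequency within the document
    df = sum(term in d for d in docs)  # document frequency in the collection
    return tf * math.log(N / df)       # rare terms get higher weight

print(tfidf("vector", docs[1]))  # in 1 of 3 docs -> 1 * ln(3)
print(tfidf("method", docs[1]))  # in 2 of 3 docs -> 1 * ln(1.5)
```

Alternative schemes vary the tf part (e.g. log or binary counts) and the collection part, which is exactly the design space such papers explore.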

  18. The weighted function method: A handy tool for flood frequency analysis or just a curiosity?

    NASA Astrophysics Data System (ADS)

    Bogdanowicz, Ewa; Kochanek, Krzysztof; Strupczewski, Witold G.

    2018-04-01

    The idea of the Weighted Function (WF) method for estimation of the Pearson type 3 (Pe3) distribution, introduced by Ma in 1984, has been revised and successfully applied to the shifted inverse Gaussian (IGa3) distribution. The conditions of WF applicability to a shifted distribution have also been formulated. The accuracy of WF flood quantiles for both the Pe3 and IGa3 distributions was assessed by Monte Carlo simulations under true and false distribution assumptions versus the maximum likelihood (MLM), moment (MOM) and L-moments (LMM) methods. Three datasets of annual peak flows from Polish catchments serve as case studies to compare the performance of the WF, MOM, MLM and LMM methods on real flood data. For the hundred-year flood the WF method revealed explicit superiority only over the MLM, surpassing the MOM and especially the LMM both for true and false distributional assumptions with respect to relative bias and relative root mean square error. Generally, the WF method performs well for hydrological sample sizes and constitutes a good alternative for the estimation of upper flood quantiles.
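
    For readers unfamiliar with flood quantile estimation, here is a minimal method-of-moments (MOM) sketch of a hundred-year flood estimate. It uses a Gumbel distribution for simplicity rather than the Pe3/IGa3 distributions studied in the paper, and the annual peak-flow sample is invented.

```python
import math
import statistics

# Invented annual peak flows (m^3/s) standing in for a gauged record.
peaks = [310.0, 245.0, 402.0, 188.0, 530.0, 295.0, 350.0, 270.0, 415.0, 220.0]

mean, sd = statistics.mean(peaks), statistics.stdev(peaks)
beta = sd * math.sqrt(6) / math.pi   # Gumbel scale from the sample variance
mu = mean - 0.5772156649 * beta      # Gumbel location from the sample mean

T = 100                              # return period in years
q100 = mu - beta * math.log(-math.log(1 - 1 / T))
print(round(q100, 1))                # MOM estimate of the 100-year flood
```

The WF, MLM and LMM methods in the paper replace the moment-matching step with different estimating equations; the quantile formula then follows from the fitted distribution in the same way.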

  19. Effects of a weight loss plus exercise program on physical function in overweight, older women: a randomized controlled trial

    PubMed Central

    Anton, Stephen D; Manini, Todd M; Milsom, Vanessa A; Dubyak, Pamela; Cesari, Matteo; Cheng, Jing; Daniels, Michael J; Marsiske, Michael; Pahor, Marco; Leeuwenburgh, Christiaan; Perri, Michael G

    2011-01-01

    Background: Obesity and a sedentary lifestyle are associated with physical impairments and biologic changes in older adults. Weight loss combined with exercise may reduce inflammation and improve physical functioning in overweight, sedentary, older adults. This study tested whether a weight loss program combined with moderate exercise could improve physical function in obese, older adult women. Methods: Participants (N = 34) were generally healthy, obese, older adult women (age range 55–79 years) with mild to moderate physical impairments (ie, functional limitations). Participants were randomly assigned to one of two groups for 24 weeks: (i) weight loss plus exercise (WL+E; n = 17; mean age = 63.7 years [4.5]) or (ii) educational control (n = 17; mean age = 63.7 [6.7]). In the WL+E group, participants attended a group-based weight management session plus three supervised exercise sessions within their community each week. During exercise sessions, participants engaged in brisk walking and lower-body resistance training of moderate intensity. Participants in the educational control group attended monthly health education lectures on topics relevant to older adults. Outcomes were: (i) body weight, (ii) walking speed (assessed by 400-meter walk test), (iii) the Short Physical Performance Battery (SPPB), and (iv) knee extension isokinetic strength. Results: Participants randomized to the WL+E group lost significantly more weight than participants in the educational control group (5.95 [0.992] vs 0.23 [0.99] kg; P < 0.01). Additionally, the walking speed of participants in the WL+E group significantly increased compared with that of the control group (reduction in time on the 400-meter walk test = 44 seconds; P < 0.05). Scores on the SPPB improved in both the intervention and educational control groups from pre- to post-test (P < 0.05), with significant differences between groups (P = 0.02). Knee extension strength was maintained in both groups. 
Conclusion: Our findings suggest that a lifestyle-based weight loss program consisting of moderate caloric restriction plus moderate exercise can produce significant weight loss and improve physical function while maintaining muscle strength in obese, older adult women with mild to moderate physical impairments. PMID:21753869

  20. Effects of low-dose paroxetine 7.5 mg on weight and sexual function during treatment of vasomotor symptoms associated with menopause

    PubMed Central

    Portman, David J.; Kaunitz, Andrew M.; Kazempour, Kazem; Mekonnen, Hana; Bhaskar, Sailaja; Lippman, Joel

    2014-01-01

    Abstract Objective Two phase 3, randomized, placebo-controlled trials demonstrated that low-dose paroxetine 7.5 mg reduced the frequency and severity of vasomotor symptoms (VMS) associated with menopause and had a favorable tolerability profile. The impact of paroxetine 7.5 mg on body weight and sexual function was evaluated in a pooled analysis. Methods Postmenopausal women aged 40 years or older who had moderate to severe VMS were randomly assigned to receive paroxetine 7.5 mg or placebo once daily for 12 or 24 weeks. Assessments included changes in body mass index (BMI) and weight, Arizona Sexual Experiences Scale score, Hot Flash–Related Daily Interference Scale sexuality subscore, and adverse events related to weight or sexual dysfunction. Results Pooled efficacy and safety populations comprised 1,174 and 1,175 participants, respectively. Baseline values were similar for median weight (∼75 kg), median BMI (∼28 kg/m2), and the proportion of women with sexual dysfunction (∼58%). No clinically meaningful or statistically significant changes from baseline in weight or sexual function assessments occurred in the paroxetine 7.5 mg group. Small but statistically significant increases in weight and BMI were observed in the placebo group only on week 4. No significant difference between treatment groups was observed in the proportion of participants who had 7% or higher gain in body weight on week 4, 12, or 24. Rates of adverse events suggestive of sexual dysfunction were low and similar in both treatment groups. Conclusions Paroxetine 7.5 mg does not cause weight gain or negative changes in libido when used to treat menopause-associated VMS in postmenopausal women. PMID:24552977

  1. Achieving Body Weight Adjustments for Feeding Status and Pregnant or Non-Pregnant Condition in Beef Cows

    PubMed Central

    Gionbelli, Mateus P.; Duarte, Marcio S.; Valadares Filho, Sebastião C.; Detmann, Edenio; Chizzotti, Mario L.; Rodrigues, Felipe C.; Zanetti, Diego; Gionbelli, Tathyane R. S.; Machado, Marcelo G.

    2015-01-01

    Background The beef cow herd accounts for 70% of the total energy used in the beef production system. However, there are still limited studies on improving production efficiency in this category, mainly in developing countries and in tropical areas. One limiting factor is the difficulty of obtaining reliable estimates of weight variation in mature cows. This occurs due to the interaction of the weight of maternal tissues with specific physiological stages such as pregnancy. Moreover, variation in gastrointestinal contents due to feeding status is a major source of error in body weight measurements of ruminant animals. Objectives Develop approaches to estimate the individual proportion of weight from maternal tissues and from gestation in pregnant cows, adjusting for feeding status and stage of gestation. Methods and Findings A dataset of 49 multiparous non-lactating Nellore cows (32 pregnant and 17 non-pregnant) was used. To establish the relationships between body weight, depending on the feeding status of pregnant and non-pregnant cows, as a function of days of pregnancy, a set of general equations was tested, based on theoretical suppositions. We proposed the concept of the pregnant compound (PREG), which represents the weight that is genuinely related to pregnancy. The PREG includes the gravid uterus minus the non-pregnant uterus plus the accretion in the udder related to pregnancy. There was no accretion in udder weight up to 238 days of pregnancy. By subtracting the PREG from the live weight of a pregnant cow, we obtained estimates of the weight of only the maternal tissues in pregnant cows. Non-linear functions were adjusted to estimate the relationship between fasted, non-fasted and empty body weight, for pregnant and non-pregnant cows. Conclusions Our results allow for estimating the actual live weight of pregnant cows and their body constituents, and subsequent comparison as a function of days of gestation and feeding status. PMID:25793770

  2. Effects of weight training on cognitive functions in elderly with Alzheimer's disease

    PubMed Central

    Vital, Thays Martins; Hernández, Salma S. Soleman; Pedroso, Renata Valle; Teixeira, Camila Vieira Ligo; Garuffi, Marcelo; Stein, Angelica Miki; Costa, José Luiz Riani; Stella, Florindo

    2012-01-01

    Deterioration in cognitive functions is characteristic of Alzheimer's disease (AD) and may be associated with decline in activities of daily living, with a consequent reduction in quality of life. Objective To analyze the effects of weight training on cognitive functions in elderly with AD. Subjects 34 elderly with AD were allocated into two groups: Training Group (TG) and Social Gathering Group (SGG). Methods Global cognitive status was determined using the Mini-Mental State Exam. Specific cognitive functions were measured using the Brief Cognitive Battery, Clock Drawing Test and Verbal Fluency Test. The protocols were performed three times a week, one hour per session. The weight training protocol consisted of three sets of 20 repetitions, with two minutes of rest between sets and exercises. The activities proposed for the SGG were not systematized and aimed at promoting social interaction among patients. The statistical analyses were performed with the Mann-Whitney U and Wilcoxon tests for group comparisons. All analyses were considered statistically significant at a p-value of 0.05. Results There were no significant differences associated with the effects of weight training on cognition in AD patients. Conclusion In this study, no improvement in cognitive functions was evident in elderly with AD who followed a low intensity resistance exercise protocol. Thus, future studies could evaluate the effect of more intense exercise programs. PMID:29213805

  3. Stripe nonuniformity correction for infrared imaging system based on single image optimization

    NASA Astrophysics Data System (ADS)

    Hua, Weiping; Zhao, Jufeng; Cui, Guangmang; Gong, Xiaoli; Ge, Peng; Zhang, Jiang; Xu, Zhihai

    2018-06-01

    Infrared imaging is often disturbed by stripe nonuniformity noise. Scene-based correction method can effectively reduce the impact of stripe noise. In this paper, a stripe nonuniformity correction method based on differential constraint is proposed. Firstly, the gray distribution of stripe nonuniformity is analyzed and the penalty function is constructed by the difference of horizontal gradient and vertical gradient. With the weight function, the penalty function is optimized to obtain the corrected image. Comparing with other single-frame approaches, experiments show that the proposed method performs better in both subjective and objective analysis, and does less damage to edge and detail. Meanwhile, the proposed method runs faster. We have also discussed the differences between the proposed idea and multi-frame methods. Our method is finally well applied in hardware system.

  4. Weighted graph cuts without eigenvectors: a multilevel approach.

    PubMed

    Dhillon, Inderjit S; Guan, Yuqiang; Kulis, Brian

    2007-11-01

    A variety of clustering algorithms have recently been proposed to handle data that is not linearly separable; spectral clustering and kernel k-means are two of the main methods. In this paper, we discuss an equivalence between the objective functions used in these seemingly different methods--in particular, a general weighted kernel k-means objective is mathematically equivalent to a weighted graph clustering objective. We exploit this equivalence to develop a fast, high-quality multilevel algorithm that directly optimizes various weighted graph clustering objectives, such as the popular ratio cut, normalized cut, and ratio association criteria. This eliminates the need for any eigenvector computation for graph clustering problems, which can be prohibitive for very large graphs. Previous multilevel graph partitioning methods, such as Metis, have suffered from the restriction of equal-sized clusters; our multilevel algorithm removes this restriction by using kernel k-means to optimize weighted graph cuts. Experimental results show that our multilevel algorithm outperforms a state-of-the-art spectral clustering algorithm in terms of speed, memory usage, and quality. We demonstrate that our algorithm is applicable to large-scale clustering tasks such as image segmentation, social network analysis and gene network analysis.
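
    The weighted kernel k-means side of the equivalence can be sketched directly: the squared distance from a point to a cluster's implicit centroid is computed purely from the kernel matrix and the weights, without ever forming the centroid. The kernel and weights below are invented toy values, not from the paper.

```python
import numpy as np

# Weighted kernel k-means distance: with kernel K and weights w, the
# squared distance from point i to the centroid of cluster c is
#   K_ii - 2*sum_{j in c} w_j K_ij / s + sum_{j,l in c} w_j w_l K_jl / s^2
# where s is the total weight of c. Values here are an invented toy example.
K = np.array([[2.0, 1.0, 0.1],
              [1.0, 2.0, 0.2],
              [0.1, 0.2, 2.0]])  # kernel (similarity) matrix
w = np.array([1.0, 2.0, 1.0])    # per-point weights (e.g. degrees for normalized cut)

def dist_to_cluster(i, members):
    wc = w[members]
    s = wc.sum()
    second = 2 * (wc * K[i, members]).sum() / s
    third = (np.outer(wc, wc) * K[np.ix_(members, members)]).sum() / s**2
    return K[i, i] - second + third

# Point 2 is far closer to cluster {2} than to cluster {0, 1}.
print(dist_to_cluster(2, [2]), dist_to_cluster(2, [0, 1]))
```

Choosing the weights as vertex degrees is what makes this objective line up with the normalized cut, which is the observation the multilevel algorithm exploits.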

  5. Weighted Global Artificial Bee Colony Algorithm Makes Gas Sensor Deployment Efficient

    PubMed Central

    Jiang, Ye; He, Ziqing; Li, Yanhai; Xu, Zhengyi; Wei, Jianming

    2016-01-01

    This paper proposes an improved artificial bee colony algorithm named the Weighted Global ABC (WGABC) algorithm, which is designed to improve the convergence speed in the search stage of the solution search equation. The new method not only considers the effect of global factors on the convergence speed in the search phase, but also provides the expression for the global factor weights. Experiments on benchmark functions showed that the algorithm can greatly improve the convergence speed. We derive the gas diffusion concentration based on CFD theory and then simulate the gas diffusion model with the influence of buildings based on the algorithm. Simulations verified the effectiveness of the WGABC algorithm in improving the convergence speed in the optimal deployment scheme of gas sensors. Finally, it is verified that the optimal deployment method based on the WGABC algorithm can greatly improve the monitoring efficiency of sensors as compared with conventional deployment methods. PMID:27322262
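
    A hedged sketch of the kind of solution search equation this family of algorithms builds on: the standard ABC neighbourhood move plus a weighted global-best term. In a real run, phi and psi are fresh random draws per dimension; the weight w on the global factor is the part WGABC derives, so the constant used below is only an assumed stand-in.

```python
# Sketch of a weighted-global-best ABC update (the exact WGABC weight
# expression is in the paper; phi, psi, w are fixed here for illustration,
# whereas a real implementation draws phi and psi at random each update):
#   v_j = x_j + phi*(x_j - neighbor_j) + w*psi*(gbest_j - x_j)
def wgabc_update(x, neighbor, gbest, phi, psi, w):
    return [xj + phi * (xj - nj) + w * psi * (gj - xj)
            for xj, nj, gj in zip(x, neighbor, gbest)]

v = wgabc_update([3.0, -2.0], [2.5, -1.0], [0.0, 0.0], phi=0.4, psi=1.0, w=0.8)
print(v)  # the candidate is pulled toward the global best [0, 0]
```

Increasing w strengthens the pull toward the global best, which is the mechanism by which the global factor speeds convergence.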

  6. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    PubMed

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation times, graphics-processing-unit multithreading or an increased spacing of control points is adopted, respectively, to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
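
    The simplification at the heart of such methods, that the convolution of two continuous Gaussians is again a Gaussian whose variance is the sum of the two, can be checked numerically; the grid and widths below are invented.

```python
import math

# Numerical check that (Gaussian sigma1) * (Gaussian sigma2), convolved,
# equals a Gaussian with sigma = sqrt(sigma1^2 + sigma2^2).
def gauss(x, sigma):
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

dx = 0.01
xs = [i * dx for i in range(-1500, 1501)]  # fine grid over [-15, 15]
s1, s2 = 1.0, 2.0

x0 = 0.7  # evaluate the convolution at one sample point
conv = sum(gauss(t, s1) * gauss(x0 - t, s2) for t in xs) * dx
analytic = gauss(x0, math.sqrt(s1**2 + s2**2))
print(round(conv, 6), round(analytic, 6))  # the two agree to high accuracy
```

This is why degradation of a GRBF-represented image by a Gaussian point spread function stays inside the GRBF model: only the basis widths change, so deconvolution reduces to solving for the control-point coefficients.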

  7. Connectivity Strength-Weighted Sparse Group Representation-Based Brain Network Construction for MCI Classification

    PubMed Central

    Yu, Renping; Zhang, Han; An, Le; Chen, Xiaobo; Wei, Zhihui; Shen, Dinggang

    2017-01-01

    Brain functional network analysis has shown great potential in understanding brain functions and also in identifying biomarkers for brain diseases, such as Alzheimer's disease (AD) and its early stage, mild cognitive impairment (MCI). In these applications, accurate construction of a biologically meaningful brain network is critical. Sparse learning has been widely used for brain network construction; however, its l1-norm penalty simply penalizes each edge of a brain network equally, without considering the original connectivity strength, which is one of the most important inherent link-wise characteristics. Besides, based on the similarity of link-wise connectivity, brain networks show prominent group structure (i.e., a set of edges sharing similar attributes). In this article, we propose a novel brain functional network modeling framework with a “connectivity strength-weighted sparse group constraint.” In particular, the network modeling can be optimized by considering both raw connectivity strength and its group structure, without losing the merit of sparsity. Our proposed method is applied to MCI classification, a challenging task for early AD diagnosis. Experimental results based on resting-state functional MRI, from 50 MCI patients and 49 healthy controls, show that our proposed method is more effective (i.e., achieving a significantly higher classification accuracy, 84.8%) than other competing methods (e.g., sparse representation, accuracy = 65.6%). Post hoc inspection of the informative features further shows more biologically meaningful brain functional connectivities obtained by our proposed method. PMID:28150897

  8. An Efficient Numerical Approach for Nonlinear Fokker-Planck equations

    NASA Astrophysics Data System (ADS)

    Otten, Dustin; Vedula, Prakash

    2009-03-01

    Fokker-Planck equations that are nonlinear with respect to their probability densities, which occur in many nonequilibrium systems relevant to mean-field interaction models, plasmas, and classical fermions and bosons, can be challenging to solve numerically. To address some underlying challenges in obtaining numerical solutions, we propose a quadrature-based moment method for efficient and accurate determination of transient (and stationary) solutions of nonlinear Fokker-Planck equations. In this approach the distribution function is represented as a collection of Dirac delta functions with corresponding quadrature weights and locations, which are in turn determined from constraints based on the evolution of generalized moments. Properties of the distribution function can be obtained by solving transport equations for the quadrature weights and locations. We will apply this computational approach to study a wide range of problems, including the Desai-Zwanzig model (for nonlinear muscular contraction) and multivariate nonlinear Fokker-Planck equations describing classical fermions and bosons, and will also demonstrate good agreement with results obtained from Monte Carlo and other standard numerical methods.
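
    The delta-function representation can be made concrete: the quadrature weights and locations are chosen to reproduce the distribution's low-order moments. As a classic two-node illustration (not taken from the paper), two equally weighted nodes at ±1 reproduce the first four moments of a standard normal distribution.

```python
# Quadrature-based moment sketch: a distribution is represented as weighted
# Dirac deltas whose weights and locations reproduce its low-order moments.
# Two nodes at +/-1 with weight 1/2 match moments (1, 0, 1, 0) of a
# standard normal; evolving the moments then evolves weights/locations.
weights = [0.5, 0.5]
locations = [-1.0, 1.0]

def moment(k):
    return sum(w * x**k for w, x in zip(weights, locations))

print([moment(k) for k in range(4)])  # [1.0, 0.0, 1.0, 0.0]
```

With 2n moments one can determine n weights and n locations; the transport equations mentioned in the abstract update exactly these quantities in time.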

  9. Finite-horizon differential games for missile-target interception system using adaptive dynamic programming with input constraints

    NASA Astrophysics Data System (ADS)

    Sun, Jingliang; Liu, Chunsheng

    2018-01-01

    In this paper, the problem of intercepting a manoeuvring target within a fixed final time is posed in a non-linear constrained zero-sum differential game framework. The Nash equilibrium solution is found by solving the finite-horizon constrained differential game problem via adaptive dynamic programming technique. Besides, a suitable non-quadratic functional is utilised to encode the control constraints into a differential game problem. The single critic network with constant weights and time-varying activation functions is constructed to approximate the solution of associated time-varying Hamilton-Jacobi-Isaacs equation online. To properly satisfy the terminal constraint, an additional error term is incorporated in a novel weight-updating law such that the terminal constraint error is also minimised over time. By utilising Lyapunov's direct method, the closed-loop differential game system and the estimation weight error of the critic network are proved to be uniformly ultimately bounded. Finally, the effectiveness of the proposed method is demonstrated by using a simple non-linear system and a non-linear missile-target interception system, assuming first-order dynamics for the interceptor and target.

  10. Novel Analog For Muscle Deconditioning

    NASA Technical Reports Server (NTRS)

    Ploutz-Snyder, Lori; Ryder, Jeff; Buxton, Roxanne; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle; Fiedler, James; Bloomberg, Jacob

    2010-01-01

    Existing models of muscle deconditioning, such as bed rest, are cumbersome and expensive. We propose a new model utilizing a weighted suit to manipulate strength, power or endurance (function) relative to body weight (BW). Methods: 20 subjects performed 7 occupational astronaut tasks while wearing a suit weighted with 0-120% of BW. Models of the full relationship between muscle function/BW and task completion time were developed using fractional polynomial regression and verified by the addition of pre- and post-flight astronaut performance data using the same tasks. Spline regression was used to identify muscle function thresholds below which task performance was impaired. Results: Thresholds of performance decline were identified for each task. Seated egress & walk (most difficult task) showed thresholds of: leg press (LP) isometric peak force/BW of 18 N/kg, LP power/BW of 18 W/kg, LP work/BW of 79 J/kg, knee extension (KE) isokinetic/BW of 6 Nm/kg and KE torque/BW of 1.9 Nm/kg. Conclusions: Laboratory manipulation of strength/BW has promise as an appropriate analog for spaceflight-induced loss of muscle function for predicting occupational task performance and establishing operationally relevant exercise targets.

  11. Novel Analog For Muscle Deconditioning

    NASA Technical Reports Server (NTRS)

    Ploutz-Snyder, Lori; Ryder, Jeff; Buxton, Roxanne; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle; Fiedler, James; Ploutz-Snyder, Robert; Bloomberg, Jacob

    2011-01-01

    Existing models (such as bed rest) of muscle deconditioning are cumbersome and expensive. We propose a new model utilizing a weighted suit to manipulate strength, power, or endurance (function) relative to body weight (BW). Methods: 20 subjects performed 7 occupational astronaut tasks while wearing a suit weighted with 0-120% of BW. Models of the full relationship between muscle function/BW and task completion time were developed using fractional polynomial regression and verified by the addition of pre- and post-flight astronaut performance data for the same tasks. Spline regression was used to identify muscle function thresholds below which task performance was impaired. Results: Thresholds of performance decline were identified for each task. Seated egress & walk (most difficult task) showed thresholds of leg press (LP) isometric peak force/BW of 18 N/kg, LP power/BW of 18 W/kg, LP work/BW of 79 J/kg, isokinetic knee extension (KE)/BW of 6 Nm/kg, and KE torque/BW of 1.9 Nm/kg. Conclusions: Laboratory manipulation of relative strength has promise as an appropriate analog for spaceflight-induced loss of muscle function, for predicting occupational task performance and establishing operationally relevant strength thresholds.

  12. [Locally weighted least squares estimation of DPOAE evoked by continuously sweeping primaries].

    PubMed

    Han, Xiaoli; Fu, Xinxing; Cui, Jie; Xiao, Ling

    2013-12-01

    Distortion product otoacoustic emission (DPOAE) signals can be used for the diagnosis of hearing loss, so they have important clinical value. Continuously sweeping primaries to measure DPOAE provides an efficient tool for recording DPOAE data rapidly when DPOAE is measured over a large frequency range. In this paper, locally weighted least squares estimation (LWLSE) of the 2f1-f2 DPOAE is presented based on the least-squares-fit (LSF) algorithm, in which the DPOAE is evoked by continuously sweeping tones. In our study, we used a weighted error function as the loss function and weighting matrices in the local sense to obtain a smaller estimation variance. Firstly, an ordinary least squares estimate of the DPOAE parameters was obtained. Then the error vectors were grouped and different local weighting matrices were calculated for each group. Finally, the parameters of the DPOAE signal were estimated based on the least squares estimation principle using the local weighting matrices. The simulation results showed that the estimation variance and fluctuation errors were reduced, so the method estimates DPOAE and stimuli more accurately and stably, which facilitates the extraction of clearer DPOAE fine structure.
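
    A minimal weighted least-squares sketch of the underlying LSF idea: estimate the amplitude and phase (here, cosine/sine coefficients) of a sinusoid of known frequency, down-weighting noisier samples. The frequency, weights and noise pattern are invented, not the paper's DPOAE recording setup.

```python
import numpy as np

# Weighted least squares theta = (X' W X)^{-1} X' W y for a sinusoid of
# known frequency; the second half of the record is made noisier and is
# therefore down-weighted (weight = 1/variance). All values are invented.
rng = np.random.default_rng(0)
f, fs, n = 50.0, 1000.0, 400
t = np.arange(n) / fs
clean = 0.8 * np.cos(2 * np.pi * f * t) + 0.3 * np.sin(2 * np.pi * f * t)
noise_sd = np.where(np.arange(n) < n // 2, 0.05, 0.5)  # noisier second half
y = clean + rng.normal(0, noise_sd)

X = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
W = np.diag(1.0 / noise_sd**2)
theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(theta)  # close to the true coefficients [0.8, 0.3]
```

The "locally weighted" refinement in the paper replaces the single weighting rule with weighting matrices computed per group of error vectors, but the solve step has this same shape.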

  13. Psychophysics of the probability weighting function

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki

    2011-03-01

    A probability weighting function w(p) for an objective probability p in decision under risk plays a pivotal role in Kahneman-Tversky prospect theory. Although recent studies in econophysics and neuroeconomics have widely utilized probability weighting functions, the psychophysical foundations of the probability weighting functions have been unknown. Notably, the behavioral economist Prelec (1998) [4] axiomatically derived the probability weighting function w(p)=exp(-(-ln p)^α) (0<α<1, with w(0)=0, w(1/e)=1/e, and w(1)=1), which has extensively been studied in behavioral neuroeconomics. The present study utilizes psychophysical theory to derive Prelec's probability weighting function from psychophysical laws of perceived waiting time in probabilistic choices. Also, the relations between the parameters in the probability weighting function and the probability discounting function in behavioral psychology are derived. Future directions in the application of the psychophysical theory of the probability weighting function in econophysics and neuroeconomics are discussed.
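
    Prelec's function w(p) = exp(-(-ln p)^α) can be evaluated directly to see its signature behavior; α = 0.65 below is an arbitrary illustrative value.

```python
import math

# Prelec's probability weighting function with 0 < alpha < 1: it
# overweights small probabilities, underweights large ones, and has the
# fixed point w(1/e) = 1/e for every alpha.
def prelec(p, alpha=0.65):
    return math.exp(-((-math.log(p)) ** alpha))

print(prelec(0.01))        # > 0.01: small probabilities are overweighted
print(prelec(0.9))         # < 0.9 : large probabilities are underweighted
print(prelec(1 / math.e))  # = 1/e, independent of alpha
```

The fixed point at 1/e is the property the psychophysical derivation has to reproduce, which is why it is a useful sanity check.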

  14. Wave height data assimilation using non-stationary kriging

    NASA Astrophysics Data System (ADS)

    Tolosana-Delgado, R.; Egozcue, J. J.; Sáchez-Arcilla, A.; Gómez, J.

    2011-03-01

    Data assimilation into numerical models should be both computationally fast and physically meaningful in order to be applicable in online environmental surveillance. We present a way to improve assimilation for computationally intensive models, based on non-stationary kriging and a separable space-time covariance function. The method is illustrated with significant wave height data. The covariance function is expressed as a collection of fields: each one is obtained as the empirical covariance between the studied property (significant wave height in log-scale) at a pixel where a measurement is located (a wave buoy is available) and the same parameter at every other pixel of the field. These covariances are computed from the available history of forecasts. The method provides a set of weights that can be mapped for each measuring location and that do not vary with time. The resulting weights may be used in a weighted average of the differences between the forecast and measured parameter. In the case presented, these weights may show long-range connection patterns, such as between the Catalan coast and the eastern coast of Sardinia, associated with common prevailing meteo-oceanographic conditions. When such patterns are considered non-informative of the present situation, it is always possible to diminish their influence by relaxing the covariance maps.
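
    The assimilation step itself is simple once the weight maps exist: precomputed, time-invariant weights turn buoy-minus-forecast differences into a correction field. The grid, weights and values below are invented; "buoy_A"/"buoy_B" are hypothetical station names.

```python
# Correction of a forecast field by a weighted average of innovations
# (measured minus forecast) at the buoy locations. One weight map per
# buoy, one weight per field pixel; all numbers are invented.
weight_maps = {
    "buoy_A": [0.9, 0.5, 0.1],
    "buoy_B": [0.1, 0.4, 0.8],
}
forecast_at_buoy = {"buoy_A": 2.1, "buoy_B": 1.4}  # log-scale wave height
measured = {"buoy_A": 2.4, "buoy_B": 1.2}

field_forecast = [2.0, 1.8, 1.5]
corrected = [
    h + sum(weight_maps[b][i] * (measured[b] - forecast_at_buoy[b])
            for b in weight_maps)
    for i, h in enumerate(field_forecast)
]
print(corrected)  # pixels near each buoy move toward that buoy's reading
```

Because the weights come from historical forecast covariances and never change, the online cost per assimilation cycle is just this weighted sum, which is what makes the scheme fast enough for surveillance use.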

  15. Identifying disease-related subnetwork connectome biomarkers by sparse hypergraph learning.

    PubMed

    Zu, Chen; Gao, Yue; Munsell, Brent; Kim, Minjeong; Peng, Ziwen; Cohen, Jessica R; Zhang, Daoqiang; Wu, Guorong

    2018-06-14

    The functional brain network has gained increased attention in the neuroscience community because of its ability to reveal the underlying architecture of the human brain. In general, the majority of work on functional network connectivity is based on the correlations between discrete time-series signals that link only two different brain regions. However, these simple region-to-region connectivity models do not capture complex connectivity patterns between three or more brain regions that form a connectivity subnetwork, or subnetwork for short. To overcome this limitation, a hypergraph learning-based method is proposed to identify subnetwork differences between two different cohorts. To achieve our goal, a hypergraph is constructed, where each vertex represents a subject and each hyperedge encodes a subnetwork with similar functional connectivity patterns across subjects. Unlike previous learning-based methods, our approach is designed to jointly optimize the weights for all hyperedges such that the learned representation is in consensus with the distribution of phenotype data, i.e. clinical labels. In order to suppress spurious subnetwork biomarkers, we further enforce a sparsity constraint on the hyperedge weights, where a larger hyperedge weight indicates a subnetwork with the capability of identifying the disorder condition. We apply our hypergraph learning-based method to identify subnetwork biomarkers in Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD). A comprehensive quantitative and qualitative analysis is performed, and the results show that our approach can correctly classify ASD and ADHD subjects from normal controls with 87.65 and 65.08% accuracies, respectively.

  16. A prospective study of change in bone mass with age in postmenopausal women.

    PubMed

    Hui, S L; Wiske, P S; Norton, J A; Johnston, C C

    1982-01-01

    For the first time a model for age-related bone loss has been developed from prospective data utilizing a new weighted least squares method. Two hundred and sixty-eight Caucasian women ranging in age from 50 to 95 were studied. A quadratic function best fit the data, and correcting for body weight and bone width reduced variance. The derived equation is: bone mass = (0.6032)(bone width, cm) + (0.003059)(body weight, kg) - (0.0163)(age - 50) + (0.0002249)(age - 50)^2. Analysis of cross-sectional data on 583 Caucasian women of similar age showed a quadratic function with very similar coefficients. This quadratic function predicts an increase in bone mass after age 86; therefore, 42 women over age 70 who had been followed for at least 2.5 yr were identified to test for this effect. Of these, 13 had significantly positive regression coefficients of bone mass on age, and the rate of change in bone width was positive in 40 of 42 individuals, of which 5 were significant. Since photon absorptiometry measures net changes on all bone envelopes, the most likely explanation for the observed changes is an early exponential loss of endosteal bone which ultimately slows or perhaps stops. There is a positive balance on the periosteal envelope which only becomes apparent in later years when the endosteal loss stops. These new statistical methods allow the development of models utilizing data collected at irregular intervals. The methods used are applicable to other biological data collected prospectively.
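
    The reported equation can be evaluated directly. The example inputs below (bone width 1.2 cm, body weight 60 kg, age 65) are illustrative only, and the turning point of the quadratic age terms confirms the predicted upturn after age 86.

```python
# The study's fitted equation for bone mass as a function of bone width
# (cm), body weight (kg) and age; example inputs are invented.
def bone_mass(width_cm, weight_kg, age):
    return (0.6032 * width_cm + 0.003059 * weight_kg
            - 0.0163 * (age - 50) + 0.0002249 * (age - 50) ** 2)

# The age terms -0.0163*x + 0.0002249*x^2 (x = age - 50) reach their
# minimum at x = 0.0163 / (2 * 0.0002249), i.e. the curve turns upward:
turning_age = 50 + 0.0163 / (2 * 0.0002249)
print(round(turning_age, 1))               # ~86.2, matching the abstract
print(round(bone_mass(1.2, 60.0, 65), 4))
```

This turning point is exactly why the authors singled out women over 70 for follow-up: the quadratic fit, taken literally, predicts bone gain in late life.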

  17. Determination of the Deacetylation Degree of Chitooligosaccharides

    PubMed Central

    Fu, Chuhan; Wu, Sihui; Liu, Guihua; Guo, Jiao; Su, Zhengquan

    2017-01-01

    The methods for determination of chitosan content recommended in the Chinese Pharmacopoeia and the European Pharmacopoeia are not applicable for evaluating the extent of deacetylation (deacetylation degree, DD) in chitooligosaccharides (COS). This study explores two different methods for assessing DD in COS of relatively high and low molecular weights: an acid-base titration with a bromocresol green indicator and a first-order derivative UV spectrophotometric method. The accuracy of both methods as a function of molecular weight was also investigated and compared to results obtained using 1H NMR spectroscopy. Our study demonstrates two simple, fast, widely adaptable, highly precise, accurate, and inexpensive methods for the effective determination of DD in COS, which have the potential for widespread commercial applications in developing countries. PMID:29068401

  18. Concerning an application of the method of least squares with a variable weight matrix

    NASA Technical Reports Server (NTRS)

    Sukhanov, A. A.

    1979-01-01

    An estimate of a state vector for a physical system when the weight matrix in the method of least squares is a function of this vector is considered. An iterative procedure is proposed for calculating the desired estimate. Conditions for the existence and uniqueness of the limit of this procedure are obtained, and a domain is found which contains the limit estimate. A second method for calculating the desired estimate, which reduces to the solution of a system of algebraic equations, is proposed. The question of applying Newton's method of tangents to solving the given system of algebraic equations is considered, and conditions for the convergence of the modified Newton's method are obtained. Certain properties of the estimate obtained are presented together with an example.

  19. Using spatiotemporal source separation to identify prominent features in multichannel data without sinusoidal filters.

    PubMed

    Cohen, Michael X

    2017-09-27

    The number of simultaneously recorded electrodes in neuroscience is steadily increasing, providing new opportunities for understanding brain function, but also new challenges for appropriately dealing with the increase in dimensionality. Multivariate source separation analysis methods have been particularly effective at improving signal-to-noise ratio while reducing the dimensionality of the data and are widely used for cleaning, classifying and source-localizing multichannel neural time series data. Most source separation methods produce a spatial component (that is, a weighted combination of channels to produce one time series); here, this is extended to apply source separation to a time series, with the idea of obtaining a weighted combination of successive time points, such that the weights are optimized to satisfy some criteria. This is achieved via a two-stage source separation procedure, in which an optimal spatial filter is first constructed and then its optimal temporal basis function is computed. This second stage is achieved with a time-delay-embedding matrix, in which additional rows of a matrix are created from time-delayed versions of existing rows. The optimal spatial and temporal weights can be obtained by solving a generalized eigendecomposition of covariance matrices. The method is demonstrated in simulated data and in an empirical electroencephalogram study on theta-band activity during response conflict. Spatiotemporal source separation has several advantages, including defining empirical filters without the need to apply sinusoidal narrowband filters. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
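
    The optimization step described above reduces to a generalized eigendecomposition of two covariance matrices. A minimal sketch of the spatial-filter stage on toy multichannel data (the channel count, source mixing, and segmentation into "signal" and "reference" windows are all hypothetical):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n_ch, n_t = 8, 2000
data = rng.normal(size=(n_ch, n_t))

# Embed a shared oscillatory source into the first channels during the
# second ("signal") half of the recording only.
src = np.sin(2 * np.pi * 6 * np.linspace(0.0, 2.0, n_t // 2))
mix = np.zeros(n_ch)
mix[:4] = [1.0, 0.8, 0.6, 0.4]
data[:, n_t // 2:] += np.outer(mix, 3.0 * src)

S = np.cov(data[:, n_t // 2:])   # covariance where the feature is present
R = np.cov(data[:, : n_t // 2])  # reference covariance
evals, evecs = eigh(S, R)        # generalized eigendecomposition (ascending)
w = evecs[:, -1]                 # spatial filter maximizing the S/R power ratio
component = w @ data             # one time series: weighted channel combination
```

    The top generalized eigenvalue is the power ratio achieved by the filter; the temporal stage described in the abstract would repeat the same decomposition on a time-delay-embedded version of `component`.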

  20. Meshless Local Petrov-Galerkin Method for Bending Problems

    NASA Technical Reports Server (NTRS)

    Phillips, Dawn R.; Raju, Ivatury S.

    2002-01-01

    Recent literature shows extensive research work on meshless or element-free methods as alternatives to the versatile Finite Element Method. One such meshless method is the Meshless Local Petrov-Galerkin (MLPG) method. In this report, the method is developed for bending of beams - C1 problems. A generalized moving least squares (GMLS) interpolation is used to construct the trial functions, and spline and power weight functions are used as the test functions. The method is applied to problems for which exact solutions are available to evaluate its effectiveness. The accuracy of the method is demonstrated for problems with load discontinuities and continuous beam problems. A Petrov-Galerkin implementation of the method is shown to greatly reduce computational time and effort and is thus preferable over the previously developed Galerkin approach. The MLPG method for beam problems yields very accurate deflections and slopes and continuous moment and shear forces without the need for elaborate post-processing techniques.
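
    A common choice of weight function in moving least squares constructions like the one above is the cubic spline. A sketch of its standard textbook form follows; the exact spline and support radius used in the report may differ:

```python
import numpy as np

def cubic_spline_weight(x, xi, radius):
    """Standard cubic spline weight for moving least squares: d is the
    normalized distance from the evaluation point x to the node xi.
    Compactly supported (zero for d > 1), C1-continuous."""
    d = np.abs(np.asarray(x, dtype=float) - xi) / radius
    return np.where(
        d <= 0.5,
        2/3 - 4*d**2 + 4*d**3,
        np.where(d <= 1.0, 4/3 - 4*d + 4*d**2 - (4/3)*d**3, 0.0),
    )
```

    The weight peaks at the node (value 2/3), decays monotonically, and vanishes smoothly at the edge of the support, which keeps the MLS moment matrix well conditioned.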

  1. Evolving cell models for systems and synthetic biology.

    PubMed

    Cao, Hongqing; Romero-Campero, Francisco J; Heeb, Stephan; Cámara, Miguel; Krasnogor, Natalio

    2010-03-01

    This paper proposes a new methodology for the automated design of cell models for systems and synthetic biology. Our modelling framework is based on P systems, a discrete, stochastic and modular formal modelling language. The automated design of biological models comprising the optimization of the model structure and its stochastic kinetic constants is performed using an evolutionary algorithm. The evolutionary algorithm evolves model structures by combining different modules taken from a predefined module library and then it fine-tunes the associated stochastic kinetic constants. We investigate four alternative objective functions for the fitness calculation within the evolutionary algorithm: (1) equally weighted sum method, (2) normalization method, (3) randomly weighted sum method, and (4) equally weighted product method. The effectiveness of the methodology is tested on four case studies of increasing complexity including negative and positive autoregulation as well as two gene networks implementing a pulse generator and a bandwidth detector. We provide a systematic analysis of the evolutionary algorithm's results as well as of the resulting evolved cell models.
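
    The four fitness-aggregation schemes named above can be sketched as follows; the exact formulas used in the paper may differ, so treat these as plausible stand-ins:

```python
import numpy as np

def aggregate(objectives, method, rng=None):
    """Combine per-objective error scores into one fitness value.
    Sketch of the four schemes named in the abstract."""
    f = np.asarray(objectives, dtype=float)
    k = len(f)
    if method == "equal_sum":          # (1) equally weighted sum
        return float(f.sum() / k)
    if method == "normalized":         # (2) normalize objectives, then average
        return float((f / (f.max() + 1e-12)).sum() / k)
    if method == "random_sum":         # (3) randomly weighted sum
        w = (rng or np.random.default_rng()).dirichlet(np.ones(k))
        return float(w @ f)
    if method == "equal_product":      # (4) equally weighted (geometric) product
        return float(np.prod(f) ** (1.0 / k))
    raise ValueError(method)
```

    The product form penalizes solutions that fail badly on any single objective, while the randomly weighted sum varies the search direction between evolutionary generations.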

  2. Modern rotor balancing - Emerging technologies

    NASA Technical Reports Server (NTRS)

    Zorzi, E. S.; Von Pragenau, G. L.

    1985-01-01

    Modern balancing methods for flexible and rigid rotors are explored. Rigid rotor balancing is performed at several hundred rpm, well below the first bending mode of the shaft. High speed balancing is necessary when the nominal rotational speed is higher than the first bending mode. Both methods introduce weights which will produce rotor responses at given speeds that will be exactly out of phase with the responses of an unbalanced rotor. Modal balancing seeks to add weights which will leave other rotor modes unaffected. Also, influence coefficients can be determined by trial and error addition of weights and recording of their effects on vibration at speeds of interest. The latter method is useful for balancing rotors at other than critical speeds and for performing unified balancing beginning with the first critical speed. Finally, low-speed flexible balancing permits low-speed tests and adjustments of rotor assemblies which will not be accessible when operating in their high-speed functional configuration. The method was developed for the high pressure liquid oxygen turbopumps for the Shuttle.

  3. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions.

    PubMed

    Tao, Guohua; Miller, William H

    2011-07-14

    An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be applied generally to sample rare events efficiently while avoiding becoming trapped in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.
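
    The core idea of reweighting samples drawn from a convenient distribution can be illustrated with a simple self-normalized importance sampling estimate (a generic textbook example, not the paper's trajectory-based sampling function):

```python
import numpy as np

rng = np.random.default_rng(2)

# Target: E[x^2] under the standard normal p; sample from a broader
# proposal q = N(0, 2) and reweight by p/q (normalizing constants cancel
# in the self-normalized estimate, so only log-kernels are needed).
def logp(x):
    return -0.5 * x**2

def logq(x):
    return -0.5 * (x / 2.0) ** 2

x = rng.normal(0.0, 2.0, 200_000)
w = np.exp(logp(x) - logq(x))                 # importance weights p/q (up to a constant)
estimate = np.sum(w * x**2) / np.sum(w)       # self-normalized IS estimate of E[x^2] = 1
```

    A broader proposal keeps the weights bounded; the paper's method plays the analogous role in trajectory space, weighting each trajectory by its contribution to the correlation function.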

  4. The dimension split element-free Galerkin method for three-dimensional potential problems

    NASA Astrophysics Data System (ADS)

    Meng, Z. J.; Cheng, H.; Ma, L. D.; Cheng, Y. M.

    2018-06-01

    This paper presents the dimension split element-free Galerkin (DSEFG) method for three-dimensional potential problems, and the corresponding formulae are obtained. The main idea of the DSEFG method is that a three-dimensional potential problem can be transformed into a series of two-dimensional problems. For these two-dimensional problems, the improved moving least-squares (IMLS) approximation is applied to construct the shape function, which uses an orthogonal function system with a weight function as the basis functions. The Galerkin weak form is applied to obtain a discretized system equation, and the penalty method is employed to impose the essential boundary condition. The finite difference method is selected in the splitting direction. For the purposes of demonstration, some selected numerical examples are solved using the DSEFG method. The convergence study and error analysis of the DSEFG method are presented. The numerical examples show that the DSEFG method has greater computational precision and efficiency than the improved element-free Galerkin (IEFG) method.

  5. Fullerenic structures and such structures tethered to carbon materials

    DOEpatents

    Goel, Anish; Howard, Jack B.; Vander Sande, John B.

    2010-01-05

    The fullerenic structures include fullerenes having molecular weights less than that of C.sub.60 with the exception of C.sub.36 and fullerenes having molecular weights greater than C.sub.60. Examples include fullerenes C.sub.50, C.sub.58, C.sub.130, and C.sub.176. Fullerenic structure chemically bonded to a carbon surface is also disclosed along with a method for tethering fullerenes to a carbon material. The method includes adding functionalized fullerene to a liquid suspension containing carbon material, drying the suspension to produce a powder, and heat treating the powder.

  6. Fullerenic structures and such structures tethered to carbon materials

    DOEpatents

    Goel, Anish; Howard, Jack B.; Vander Sande, John B.

    2012-10-09

    The fullerenic structures include fullerenes having molecular weights less than that of C.sub.60 with the exception of C.sub.36 and fullerenes having molecular weights greater than C.sub.60. Examples include fullerenes C.sub.50, C.sub.58, C.sub.130, and C.sub.176. Fullerenic structure chemically bonded to a carbon surface is also disclosed along with a method for tethering fullerenes to a carbon material. The method includes adding functionalized fullerene to a liquid suspension containing carbon material, drying the suspension to produce a powder, and heat treating the powder.

  7. How to deal with the high condition number of the noise covariance matrix of gravity field functionals synthesised from a satellite-only global gravity field model?

    NASA Astrophysics Data System (ADS)

    Klees, R.; Slobbe, D. C.; Farahani, H. H.

    2018-03-01

    The posed question arises, for instance, in regional gravity field modelling using weighted least-squares techniques if the gravity field functionals are synthesised from the spherical harmonic coefficients of a satellite-only global gravity model (GGM) and are used as one of the noisy datasets. The associated noise covariance matrix appeared to be extremely ill-conditioned, with a singular value spectrum that decayed gradually to zero without any noticeable gap. We analysed three methods to deal with the ill-conditioned noise covariance matrix: Tikhonov regularisation of the noise covariance matrix in combination with the standard formula for the weighted least-squares estimator, a formula for the weighted least-squares estimator which does not involve the inverse noise covariance matrix, and an estimator based on Rao's unified theory of least-squares. Our analysis was based on a numerical experiment involving a set of height anomalies synthesised from the GGM GOCO05s, which is provided with a full noise covariance matrix. We showed that the three estimators perform similarly, provided that the two regularisation parameters involved in each method were chosen properly. As standard regularisation parameter choice rules do not apply here, we suggested a new parameter choice rule and demonstrated its performance. Using this rule, we found that the differences between the three least-squares estimates were within noise. For the standard formulation of the weighted least-squares estimator with regularised noise covariance matrix, this required an exceptionally strong regularisation, much stronger than expected from the condition number of the noise covariance matrix. The preferred method is the inversion-free formulation of the weighted least-squares estimator, because of its simplicity with respect to the choice of the two regularisation parameters.
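
    The first of the three estimators, Tikhonov regularisation of the noise covariance matrix combined with the standard weighted least-squares formula, can be sketched on synthetic data. The matrix sizes, the gradually decaying spectrum, and the regularisation parameter below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 60, 3
A = rng.normal(size=(m, n))
x_true = np.array([1.0, -2.0, 0.5])

# Build a deliberately ill-conditioned noise covariance: a spectrum that
# decays gradually to (near) zero with no noticeable gap.
U, _ = np.linalg.qr(rng.normal(size=(m, m)))
s = 10.0 ** (-np.arange(m) / 4.0)
C = (U * s) @ U.T
noise = U @ (np.sqrt(s) * rng.normal(size=m))   # noise with covariance C
y = A @ x_true + noise

alpha = 1e-6                                    # regularisation parameter (needs tuning)
Creg = C + alpha * np.eye(m)                    # Tikhonov-regularised covariance
Ci = np.linalg.inv(Creg)
x_hat = np.linalg.solve(A.T @ Ci @ A, A.T @ Ci @ y)   # standard WLS formula
```

    Adding `alpha` to the diagonal bounds the smallest eigenvalue of the covariance away from zero, making the inverse numerically stable at the cost of slightly down-weighting the most precise directions.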

  8. A semisupervised support vector regression method to estimate biophysical parameters from remotely sensed images

    NASA Astrophysics Data System (ADS)

    Castelletti, Davide; Demir, Begüm; Bruzzone, Lorenzo

    2014-10-01

    This paper presents a novel semisupervised learning (SSL) technique defined in the context of ɛ-insensitive support vector regression (SVR) to estimate biophysical parameters from remotely sensed images. The proposed SSL method aims to mitigate the problems of small-sized biased training sets without collecting any additional samples with reference measures. This is achieved on the basis of two consecutive steps. The first step is devoted to injecting additional prior information into the learning phase of the SVR in order to adapt the importance of each training sample according to the distribution of the unlabeled samples. To this end, a weight is initially associated with each training sample based on a novel strategy that assigns higher weights to samples located in high-density regions of the feature space while giving reduced weights to those that fall into low-density regions. Then, in order to exploit different weights for training samples in the learning phase of the SVR, we introduce a weighted SVR (WSVR) algorithm. The second step is devoted to jointly exploiting labeled and informative unlabeled samples to further improve the definition of the WSVR learning function. To this end, the most informative unlabeled samples, i.e., those expected to have accurate target values, are initially selected according to a novel strategy that relies on the distribution of the unlabeled samples in the feature space and on the WSVR function estimated in the first step. Then, we introduce a restructured WSVR algorithm that jointly uses labeled and unlabeled samples in the learning phase and tunes their importance by different values of the regularization parameters. Experimental results obtained for the estimation of single-tree stem volume show the effectiveness of the proposed SSL method.
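
    A weighted SVR of the kind introduced in the first step can be approximated with per-sample weights. The sketch below uses scikit-learn's `sample_weight` argument and a kernel-density estimate as a stand-in for the paper's density-based weighting strategy; all parameter values and the 1-D toy data are assumptions:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(4)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.1, 200)

# Density-based weights: samples in denser regions of the feature space
# get larger weights (one plausible realization of the described strategy).
kde = KernelDensity(bandwidth=0.5).fit(X)
w = np.exp(kde.score_samples(X))   # score_samples returns log-density
w = w / w.mean()                   # normalize so the average weight is 1

model = SVR(kernel="rbf", C=10.0, epsilon=0.05)
model.fit(X, y, sample_weight=w)   # weighted SVR via per-sample weights
```

    In scikit-learn, `sample_weight` scales each sample's contribution to the ɛ-insensitive loss, which is equivalent to giving each sample its own effective regularization parameter.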

  9. Weight-adapted iodinated contrast media administration in abdomino-pelvic CT: Can image quality be maintained?

    PubMed

    Perrin, E; Jackson, M; Grant, R; Lloyd, C; Chinaka, F; Goh, V

    2018-02-01

    In many centres, a fixed method of contrast-media administration is used for CT regardless of patient body habitus. The aim of this trial was to assess contrast enhancement of the aorta, portal vein, liver and spleen during abdomino-pelvic CT imaging using a weight-adapted contrast media protocol compared to the current fixed dose method. Thirty-nine oncology patients, who had previously undergone CT abdomino-pelvic imaging at the institution using a fixed contrast media dose, were prospectively imaged using a weight-adapted contrast media dose (1.4 ml/kg). The two sets of images were assessed for contrast enhancement levels (HU) at locations in the liver, aorta, portal vein and spleen during the portal-venous enhancement phase. The t-test was used to compare the difference in results using a non-inferiority margin of 10 HU. When the contrast dose was tailored to patient weight, contrast enhancement levels were shown to be non-inferior to the fixed dose method (liver p < 0.001; portal vein p = 0.003; aorta p = 0.001; spleen p = 0.001). As a group, patients received a total contrast dose reduction of 165 ml using the weight-adapted method compared to the fixed dose method, with a mean cost per patient of £6.81 and £7.19 respectively. Using a weight-adapted method of contrast media administration was shown to be non-inferior to a fixed dose method. Patients weighing 76 kg or less received a lower contrast dose, which may have associated cost savings. A weight-adapted contrast media protocol should be implemented for portal-venous phase abdomino-pelvic CT for oncology patients with adequate renal function (>70 ml/min/1.73 m2). Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  10. Variance approach for multi-objective linear programming with fuzzy random of objective function coefficients

    NASA Astrophysics Data System (ADS)

    Indarsih, Indrati, Ch. Rini

    2016-02-01

    In this paper, we define the variance of fuzzy random variables through alpha levels. We prove a theorem showing that the variance of a fuzzy random variable is a fuzzy number. We consider a multi-objective linear programming (MOLP) problem with fuzzy random objective function coefficients and solve it by a variance approach. The approach transforms the MOLP with fuzzy random objective function coefficients into an MOLP with fuzzy objective function coefficients. By the weighting method, we then obtain a linear programming problem with fuzzy coefficients, which we solve by the simplex method for fuzzy linear programming.
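
    The weighting step that scalarizes a multi-objective LP can be sketched with crisp (defuzzified) coefficients; handling genuinely fuzzy coefficients via the fuzzy simplex method is beyond this illustration, and the problem data below are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Two-objective LP: minimize c1 @ x and c2 @ x over the same feasible set.
c1 = np.array([1.0, 2.0])
c2 = np.array([3.0, 1.0])
A_ub = np.array([[-1.0, -1.0]])   # encodes x1 + x2 >= 4
b_ub = np.array([-4.0])

# Weighted method: a convex combination of the objectives gives a single LP
# whose solution is Pareto-optimal for the original multi-objective problem.
lam = 0.5
res = linprog(lam * c1 + (1 - lam) * c2, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)])
```

    Sweeping `lam` over [0, 1] traces out (supported) Pareto-optimal solutions; for `lam = 0.5` the scalarized objective is 2*x1 + 1.5*x2, minimized at x = (0, 4).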

  11. A rapid learning and dynamic stepwise updating algorithm for flat neural networks and the application to time-series prediction.

    PubMed

    Chen, C P; Wan, J Z

    1999-01-01

    A fast learning algorithm is proposed to find the optimal weights of flat neural networks (in particular, the functional-link network). Although flat networks are used for nonlinear function approximation, they can be formulated as linear systems. Thus, the weights of the networks can be solved easily using a linear least-squares method. This formulation makes it easier to update the weights instantly both for a newly added pattern and for a newly added enhancement node. A dynamic stepwise updating algorithm is proposed to update the weights of the system on the fly. The model is tested on several time-series data sets including an infrared laser data set, a chaotic time series, a monthly flour price data set, and a nonlinear system identification problem. The simulation results are compared to existing models in which more complex architectures and more costly training are needed. The results indicate that the proposed model is very attractive for real-time processes.
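
    Because a flat network is linear in its output weights, training reduces to one least-squares solve. A sketch with fixed random enhancement nodes follows; the node count, weight scales, and target function are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(np.pi * x[:, 0])

# Functional-link style expansion: the raw input plus fixed random
# "enhancement" nodes. Only the output weights are unknown, so the
# network is a linear system in those weights.
We = rng.normal(scale=3.0, size=(1, 20))
be = rng.normal(size=20)
H = np.hstack([x, np.tanh(x @ We + be), np.ones((len(x), 1))])

# One pseudo-inverse solve replaces iterative backpropagation training.
w = np.linalg.pinv(H) @ y
pred = H @ w
```

    Adding a new pattern appends a row to `H` (and a new enhancement node appends a column), so the solution can be updated incrementally with rank-one formulas rather than re-solving from scratch, which is the stepwise updating idea in the abstract.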

  12. An extensive analysis of disease-gene associations using network integration and fast kernel-based gene prioritization methods.

    PubMed

    Valentini, Giorgio; Paccanaro, Alberto; Caniza, Horacio; Romero, Alfonso E; Re, Matteo

    2014-06-01

    In the context of "network medicine", gene prioritization methods represent one of the main tools to discover candidate disease genes by exploiting the large amount of data covering different types of functional relationships between genes. Several works proposed to integrate multiple sources of data to improve disease gene prioritization, but to our knowledge no systematic studies focused on the quantitative evaluation of the impact of network integration on gene prioritization. In this paper, we aim at providing an extensive analysis of gene-disease associations not limited to genetic disorders, and a systematic comparison of different network integration methods for gene prioritization. We collected nine different functional networks representing different functional relationships between genes, and we combined them through both unweighted and weighted network integration methods. We then prioritized genes with respect to each of the considered 708 medical subject headings (MeSH) diseases by applying classical guilt-by-association, random walk and random walk with restart algorithms, and the recently proposed kernelized score functions. The results obtained with classical random walk algorithms and the best single network achieved an average area under the curve (AUC) across the 708 MeSH diseases of about 0.82, while kernelized score functions and network integration boosted the average AUC to about 0.89. Weighted integration, by exploiting the different "informativeness" embedded in different functional networks, outperforms unweighted integration at the 0.01 significance level, according to the Wilcoxon signed rank sum test. For each MeSH disease we provide the top-ranked unannotated candidate genes, available for further bio-medical investigation. Network integration is necessary to boost the performance of gene prioritization methods.
Moreover, the methods based on kernelized score functions can further enhance disease gene ranking results by adopting both local and global learning strategies able to exploit the overall topology of the network. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  13. Assessing the performance of the generalized propensity score for estimating the effect of quantitative or continuous exposures on survival or time-to-event outcomes.

    PubMed

    Austin, Peter C

    2018-01-01

    Propensity score methods are frequently used to estimate the effects of interventions using observational data. The propensity score was originally developed for use with binary exposures. The generalized propensity score (GPS) is an extension of the propensity score for use with quantitative or continuous exposures (e.g. pack-years of cigarettes smoked, dose of medication, or years of education). We describe how the GPS can be used to estimate the effect of continuous exposures on survival or time-to-event outcomes. To do so, we modified the concept of the dose-response function for use with time-to-event outcomes. We used Monte Carlo simulations to examine the performance of different methods of using the GPS to estimate the effect of quantitative exposures on survival or time-to-event outcomes. We examined covariate adjustment using the GPS and weighting using weights based on the inverse of the GPS. The use of methods based on the GPS was compared with the use of conventional G-computation and weighted G-computation. Conventional G-computation resulted in estimates of the dose-response function that displayed the lowest bias and the lowest variability. Of the two GPS-based methods, covariate adjustment using the GPS tended to have the better performance. We illustrate the application of these methods by estimating the effect of average neighbourhood income on the probability of survival following hospitalization for an acute myocardial infarction.
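
    Inverse-GPS weighting for a continuous exposure can be sketched as follows. The normal-model GPS and stabilized weights shown here are one standard construction, not necessarily the exact estimators compared in the paper, and the data are synthetic:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 5000
z = rng.normal(size=n)                    # confounder
dose = 0.8 * z + rng.normal(size=n)       # continuous exposure

# GPS: conditional density of the observed dose given covariates, from a
# normal linear model fitted by least squares.
beta = np.polyfit(z, dose, 1)
mu = np.polyval(beta, z)
resid = dose - mu
gps = norm.pdf(dose, loc=mu, scale=resid.std())

# Stabilized inverse-GPS weights: marginal density over conditional density.
sw = norm.pdf(dose, loc=dose.mean(), scale=dose.std()) / gps
```

    The stabilized weights have expectation one by construction; in an outcome analysis they would multiply each subject's contribution to a weighted regression of the time-to-event outcome on dose.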

  14. Inference of reactive transport model parameters using a Bayesian multivariate approach

    NASA Astrophysics Data System (ADS)

    Carniato, Luca; Schoups, Gerrit; van de Giesen, Nick

    2014-08-01

    Parameter estimation of subsurface transport models from multispecies data requires the definition of an objective function that includes different types of measurements. Common approaches are weighted least squares (WLS), where weights are specified a priori for each measurement, and weighted least squares with weight estimation (WLS(we)) where weights are estimated from the data together with the parameters. In this study, we formulate the parameter estimation task as a multivariate Bayesian inference problem. The WLS and WLS(we) methods are special cases in this framework, corresponding to specific prior assumptions about the residual covariance matrix. The Bayesian perspective allows for generalizations to cases where residual correlation is important and for efficient inference by analytically integrating out the variances (weights) and selected covariances from the joint posterior. Specifically, the WLS and WLS(we) methods are compared to a multivariate (MV) approach that accounts for specific residual correlations without the need for explicit estimation of the error parameters. When applied to inference of reactive transport model parameters from column-scale data on dissolved species concentrations, the following results were obtained: (1) accounting for residual correlation between species provides more accurate parameter estimation for high residual correlation levels whereas its influence for predictive uncertainty is negligible, (2) integrating out the (co)variances leads to an efficient estimation of the full joint posterior with a reduced computational effort compared to the WLS(we) method, and (3) in the presence of model structural errors, none of the methods is able to identify the correct parameter values.

  15. Optimal weight based on energy imbalance and utility maximization

    NASA Astrophysics Data System (ADS)

    Sun, Ruoyan

    2016-01-01

    This paper investigates the optimal weight for both males and females using energy imbalance and utility maximization. Based on the difference between energy intake and expenditure, we develop a state equation that reveals the weight gain from this energy gap. We construct an objective function considering food consumption, eating habits and survival rate to measure utility. Applying mathematical tools from optimal control methods and the qualitative theory of differential equations, we obtain some results. For both males and females, the optimal weight is larger than the physiologically optimal weight calculated by the Body Mass Index (BMI). We also study the corresponding trajectories toward the steady-state weight. Depending on the value of a few parameters, the steady state can either be a saddle point with a monotonic trajectory or a focus with dampened oscillations.

  16. Gene regulatory network identification from the yeast cell cycle based on a neuro-fuzzy system.

    PubMed

    Wang, B H; Lim, J W; Lim, J S

    2016-08-30

    Many studies exist for reconstructing gene regulatory networks (GRNs). In this paper, we propose a method based on an advanced neuro-fuzzy system, for gene regulatory network reconstruction from microarray time-series data. This approach uses a neural network with a weighted fuzzy function to model the relationships between genes. Fuzzy rules, which determine the regulators of genes, are very simplified through this method. Additionally, a regulator selection procedure is proposed, which extracts the exact dynamic relationship between genes, using the information obtained from the weighted fuzzy function. Time-series related features are extracted from the original data to employ the characteristics of temporal data that are useful for accurate GRN reconstruction. The microarray dataset of the yeast cell cycle was used for our study. We measured the mean squared prediction error for the efficiency of the proposed approach and evaluated the accuracy in terms of precision, sensitivity, and F-score. The proposed method outperformed the other existing approaches.

  17. Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing

    PubMed Central

    Yang, Changju; Kim, Hyongsuk

    2016-01-01

    A linearized programming method of memristor-based neural weights is proposed. The memristor is known as an ideal element for implementing a neural synapse due to its embedded functions of analog memory and analog multiplication. Its resistance variation with a voltage input is generally a nonlinear function of time. Linearizing the memristance variation over time is very important for the ease of memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming is proposed. The anti-serial architecture is composed of two memristors with opposite polarities. It linearizes the variation of memristance due to the complementary actions of the two memristors. To program a memristor, an additional memristor of opposite polarity is employed. The linearization effect of the weight programming of an anti-serial architecture is investigated, and the memristor bridge synapse, which is built with two sets of the anti-serial memristor architecture, is taken as an application example of the proposed method. Simulations are performed with memristors of both the linear drift model and a nonlinear model. PMID:27548186

  18. Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing.

    PubMed

    Yang, Changju; Kim, Hyongsuk

    2016-08-19

    A linearized programming method of memristor-based neural weights is proposed. The memristor is known as an ideal element for implementing a neural synapse due to its embedded functions of analog memory and analog multiplication. Its resistance variation with a voltage input is generally a nonlinear function of time. Linearizing the memristance variation over time is very important for the ease of memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming is proposed. The anti-serial architecture is composed of two memristors with opposite polarities. It linearizes the variation of memristance due to the complementary actions of the two memristors. To program a memristor, an additional memristor of opposite polarity is employed. The linearization effect of the weight programming of an anti-serial architecture is investigated, and the memristor bridge synapse, which is built with two sets of the anti-serial memristor architecture, is taken as an application example of the proposed method. Simulations are performed with memristors of both the linear drift model and a nonlinear model.

  19. Feeding method and health outcomes of children with cerebral palsy.

    PubMed

    Rogers, Brian

    2004-08-01

    Disorders of feeding and swallowing are common in children with cerebral palsy. Feeding and swallowing disorders have significant implications for development, growth and nutrition, respiratory health, gastrointestinal function, parent-child interaction, and overall family life. Assessments need to be comprehensive in scope and centered around the medical home. Oral feeding interventions for children with cerebral palsy may be effective in promoting oral motor function, but have not been shown to be effective in promoting feeding efficiency or weight gain. Feeding gastrostomy tubes are a reasonable alternative for children with severe feeding and swallowing problems who have had poor weight gain. Copyright 2004 Elsevier Inc.

  20. Long-Lasting Improvements in Liver Fat and Metabolism Despite Body Weight Regain After Dietary Weight Loss

    PubMed Central

    Haufe, Sven; Haas, Verena; Utz, Wolfgang; Birkenfeld, Andreas L.; Jeran, Stephanie; Böhnke, Jana; Mähler, Anja; Luft, Friedrich C.; Schulz-Menger, Jeanette; Boschmann, Michael; Jordan, Jens; Engeli, Stefan

    2013-01-01

    OBJECTIVE Weight loss reduces abdominal and intrahepatic fat, thereby improving metabolic and cardiovascular risk. Yet, many patients regain weight after successful diet-induced weight loss. Long-term changes in abdominal and liver fat, along with liver test results and insulin resistance, are not known. RESEARCH DESIGN AND METHODS We analyzed 50 overweight to obese subjects (46 ± 9 years of age; BMI, 32.5 ± 3.3 kg/m2; women, 77%) who had participated in a 6-month hypocaloric diet and were randomized to either reduced carbohydrates or reduced fat content. Before, directly after diet, and at an average of 24 (range, 17–36) months follow-up, we assessed body fat distribution by magnetic resonance imaging and markers of liver function and insulin resistance. RESULTS Body weight decreased with diet but had increased again at follow-up. Subjects also partially regained abdominal subcutaneous and visceral adipose tissue. In contrast, intrahepatic fat decreased with diet and remained reduced at follow-up (7.8 ± 9.8% [baseline], 4.5 ± 5.9% [6 months], and 4.7 ± 5.9% [follow-up]). Similar patterns were observed for markers of liver function, whole-body insulin sensitivity, and hepatic insulin resistance. Changes in intrahepatic fat and intrahepatic function were independent of macronutrient composition during intervention and were most effective in subjects with nonalcoholic fatty liver disease at baseline. CONCLUSIONS A 6-month hypocaloric diet induced improvements in hepatic fat, liver test results, and insulin resistance despite regaining of weight up to 2 years after the active intervention. Body weight and adiposity measurements may underestimate beneficial long-term effects of dietary interventions. PMID:23963894

  1. Description of quasiparticle and satellite properties via cumulant expansions of the retarded one-particle Green's function

    DOE PAGES

    Mayers, Matthew Z.; Hybertsen, Mark S.; Reichman, David R.

    2016-08-22

    A cumulant-based GW approximation for the retarded one-particle Green's function is proposed, motivated by an exact relation between the improper Dyson self-energy and the cumulant generating function. We explore qualitative aspects of this method within a simple one-electron independent phonon model, where it is seen that the method preserves the energy moment of the spectral weight while also reproducing the exact Green's function in the weak-coupling limit. For the three-dimensional electron gas, this method predicts multiple satellites at the bottom of the band, albeit with inaccurate peak spacing. However, its quasiparticle properties and correlation energies are more accurate than both previous cumulant methods and standard G0W0. These results point to features that may be exploited within the framework of cumulant-based methods and suggest promising directions for future exploration and improvements of cumulant-based GW approaches.

  2. Numerical solution of the nonlinear Schrodinger equation by feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Shirvany, Yazdan; Hayati, Mohsen; Moradian, Rostam

    2008-12-01

    We present a method to solve boundary value problems using artificial neural networks (ANNs). A trial solution of the differential equation is written as a feed-forward neural network containing adjustable parameters (the weights and biases). From the differential equation and its boundary conditions we construct the energy function, which is used in the back-propagation method with a momentum term to update the network parameters. We improve the energy function of the ANN, deriving it from the Schrodinger equation and the boundary conditions; with this improved energy function, an unsupervised training method can be used to solve the equation. Unsupervised training aims to minimize a non-negative energy function. We apply the ANN method to the Schrodinger equation for a few quantum systems, calculating eigenfunctions and energy eigenvalues. Our numerical results agree with the corresponding analytical solutions and show the efficiency of the ANN method for solving eigenvalue problems.
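
    The paper's network-based trial solution cannot be reproduced from the abstract alone, but the core idea (a trial wave function with adjustable parameters that satisfies the boundary conditions by construction, trained by minimizing a non-negative energy functional) can be sketched with a one-parameter polynomial trial function standing in for the network. The example below treats a particle in a box on [0, 1] in units where the exact ground-state energy is π²; the trial-function form and the grid-search minimizer are illustrative choices, not the paper's method.

```python
import math

def rayleigh_quotient(a, n=1000):
    """Energy functional E[psi] = integral(psi'^2) / integral(psi^2) for the
    trial function psi(x) = x(1-x)(1 + a*x(1-x)), which satisfies the
    boundary conditions psi(0) = psi(1) = 0 by construction."""
    h = 1.0 / n
    num = den = 0.0
    for i in range(n + 1):
        x = i * h
        t = x * (1.0 - x)
        psi = t * (1.0 + a * t)
        dpsi = (1.0 - 2.0 * x) * (1.0 + 2.0 * a * t)  # analytic derivative
        w = 0.5 if i in (0, n) else 1.0               # trapezoid-rule weight
        num += w * dpsi * dpsi * h
        den += w * psi * psi * h
    return num / den

# Minimize the energy over the single adjustable parameter by grid search
# (the paper trains network weights by back-propagation instead).
a_best = min((0.02 * i for i in range(101)), key=rayleigh_quotient)
e_min = rayleigh_quotient(a_best)
print(e_min, math.pi ** 2)  # variational estimate vs. exact ground-state energy
```

    Because the energy functional is minimized rather than matched to labeled data, the training is unsupervised in exactly the sense the abstract describes.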

  3. Association Between Dietary Intake and Function in Amyotrophic Lateral Sclerosis

    PubMed Central

    Nieves, Jeri W.; Gennings, Chris; Factor-Litvak, Pam; Hupf, Jonathan; Singleton, Jessica; Sharf, Valerie; Oskarsson, Björn; Fernandes Filho, J. Americo M.; Sorenson, Eric J.; D’Amico, Emanuele; Goetz, Ray; Mitsumoto, Hiroshi

    2017-01-01

    IMPORTANCE There is growing interest in the role of nutrition in the pathogenesis and progression of amyotrophic lateral sclerosis (ALS). OBJECTIVE To evaluate the associations between nutrients, individually and in groups, and ALS function and respiratory function at diagnosis. DESIGN, SETTING, AND PARTICIPANTS A cross-sectional baseline analysis of the Amyotrophic Lateral Sclerosis Multicenter Cohort Study of Oxidative Stress study was conducted from March 14, 2008, to February 27, 2013, at 16 ALS clinics throughout the United States among 302 patients with ALS symptom duration of 18 months or less. EXPOSURES Nutrient intake, measured using a modified Block Food Frequency Questionnaire (FFQ). MAIN OUTCOMES AND MEASURES Amyotrophic lateral sclerosis function, measured using the ALS Functional Rating Scale–Revised (ALSFRS-R), and respiratory function, measured using percentage of predicted forced vital capacity (FVC). RESULTS Baseline data were available on 302 patients with ALS (median age, 63.2 years [interquartile range, 55.5–68.0 years]; 178 men and 124 women). Regression analysis of nutrients found that higher intakes of antioxidants and carotenes from vegetables were associated with higher ALSFRS-R scores or percentage FVC. Empirically weighted indices using the weighted quantile sum regression method of “good” micronutrients and “good” food groups were positively associated with ALSFRS-R scores (β [SE], 2.7 [0.69] and 2.9 [0.9], respectively) and percentage FVC (β [SE], 12.1 [2.8] and 11.5 [3.4], respectively) (all P < .001). Positive and significant associations with ALSFRS-R scores (β [SE], 1.5 [0.61]; P = .02) and percentage FVC (β [SE], 5.2 [2.2]; P = .02) for selected vitamins were found in exploratory analyses. CONCLUSIONS AND RELEVANCE Antioxidants, carotenes, fruits, and vegetables were associated with higher ALS function at baseline by regression of nutrient indices and weighted quantile sum regression analysis. We also demonstrated the usefulness of the weighted quantile sum regression method in the evaluation of diet. Those responsible for nutritional care of patients with ALS should consider promoting fruit and vegetable intake, since these foods are high in antioxidants and carotenes. PMID:27775751

  4. Ordinary Least Squares and Quantile Regression: An Inquiry-Based Learning Approach to a Comparison of Regression Methods

    ERIC Educational Resources Information Center

    Helmreich, James E.; Krog, K. Peter

    2018-01-01

    We present a short, inquiry-based learning course on concepts and methods underlying ordinary least squares (OLS), least absolute deviation (LAD), and quantile regression (QR). Students investigate squared, absolute, and weighted absolute distance functions (metrics) as location measures. Using differential calculus and properties of convex…
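
    The distance-function viewpoint in this abstract can be made concrete numerically: minimizing total squared distance yields the mean (OLS), total absolute distance yields the median (LAD), and an asymmetrically weighted absolute distance (the quantile "check" loss) yields an arbitrary quantile (QR). A minimal sketch with an invented data set:

```python
data = [1.0, 2.0, 3.0, 4.0, 100.0]
grid = [round(i * 0.1, 1) for i in range(1101)]  # candidate locations 0.0 .. 110.0

# OLS location: minimize sum of squared distances -> the mean.
ols = min(grid, key=lambda c: sum((x - c) ** 2 for x in data))

# LAD location: minimize sum of absolute distances -> the median.
lad = min(grid, key=lambda c: sum(abs(x - c) for x in data))

# Quantile location (tau = 0.75): weight residuals above the candidate by tau
# and residuals below it by (1 - tau) -- the "check" loss of quantile regression.
tau = 0.75
def check_loss(c):
    return sum(tau * (x - c) if x >= c else (1 - tau) * (c - x) for x in data)
q75 = min(grid, key=check_loss)

print(ols, lad, q75)  # 22.0 (mean), 3.0 (median), 4.0 (75th percentile)
```

    The outlier at 100 drags the squared-distance minimizer far from the bulk of the data while leaving the absolute-distance and quantile minimizers untouched, which is the robustness contrast such a course explores.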

  5. Signal Processing for Time-Series Functions on a Graph

    DTIC Science & Technology

    2018-02-01

    as filtering to functions supported on graphs. These methods can be applied to scalar functions with a domain that can be described by a fixed...classical signal processing such as filtering to account for the graph domain. This work essentially divides into 2 basic approaches: graph Laplacian...based filtering and weighted adjacency matrix-based filtering. In Shuman et al.,11 and elaborated in Bronstein et al.,13 filtering operators are

  6. Estimating individual influences of behavioral intentions: an application of random-effects modeling to the theory of reasoned action.

    PubMed

    Hedeker, D; Flay, B R; Petraitis, J

    1996-02-01

    Methods are proposed and described for estimating the degree to which relations among variables vary at the individual level. As an example of the methods, M. Fishbein and I. Ajzen's (1975; I. Ajzen & M. Fishbein, 1980) theory of reasoned action is examined, which posits first that an individual's behavioral intentions are a function of 2 components: the individual's attitudes toward the behavior and the subjective norms as perceived by the individual. A second component of their theory is that individuals may weight these 2 components differently in assessing their behavioral intentions. This article illustrates the use of empirical Bayes methods based on a random-effects regression model to estimate these individual influences, estimating an individual's weighting of both of these components (attitudes toward the behavior and subjective norms) in relation to their behavioral intentions. This method can be used when an individual's behavioral intentions, subjective norms, and attitudes toward the behavior are all repeatedly measured. In this case, the empirical Bayes estimates are derived as a function of the data from the individual, strengthened by the overall sample data.
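
    In the simplest balanced case, the empirical Bayes idea in this abstract (individual estimates "strengthened by the overall sample data") reduces to precision-weighted shrinkage of each individual's raw estimate toward the pooled mean. A stylized sketch with invented numbers, not data from the study:

```python
# Each individual's raw "attitude weight" estimate and its sampling variance
# (invented numbers for illustration; equal variances keep the algebra simple).
raw = [0.9, 0.2, 0.5, 0.7, 0.1]
sampling_var = 0.04

pooled = sum(raw) / len(raw)

# Between-person variance by method of moments: the observed spread of the raw
# estimates is between-person variance plus sampling noise.
obs_var = sum((b - pooled) ** 2 for b in raw) / (len(raw) - 1)
between_var = max(obs_var - sampling_var, 0.0)

# Shrinkage factor: the share of each raw estimate's spread that is "real".
lam = between_var / (between_var + sampling_var)

# Empirical Bayes estimates: each individual pulled toward the pooled mean.
eb = [pooled + lam * (b - pooled) for b in raw]
print(pooled, lam, eb)
```

    Noisier individual estimates (larger sampling variance) give a smaller shrinkage factor and hence lean more on the overall sample, which is the mechanism the abstract describes.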

  7. On the log-normality of historical magnetic-storm intensity statistics: implications for extreme-event probabilities

    USGS Publications Warehouse

    Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete

    2015-01-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst≥850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst≥880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT.
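
    The maximum-likelihood step described here is short in the log-normal case: the MLE of (μ, σ) is the mean and standard deviation of the logged data, and exceedance probabilities follow from the complementary normal CDF. The sketch below uses synthetic draws from a known log-normal, not the real −Dst record, and the storms-per-year rate is an invented illustrative number:

```python
import math, random

def lognormal_mle(data):
    """Maximum-likelihood fit of a log-normal: mu and sigma are the mean and
    standard deviation of log(x)."""
    logs = [math.log(x) for x in data]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / len(logs))
    return mu, sigma

def exceedance_prob(x0, mu, sigma):
    """P(X >= x0) for a log-normal, via the complementary normal CDF."""
    z = (math.log(x0) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Synthetic "storm maxima" from a known log-normal (not real -Dst data).
random.seed(42)
sample = [random.lognormvariate(6.0, 0.5) for _ in range(5000)]
mu_hat, sigma_hat = lognormal_mle(sample)

# Expected events per century exceeding a threshold, given an assumed rate
# of observed storm maxima per year (illustrative only).
rate = 10.0
per_century = 100.0 * rate * exceedance_prob(850.0, mu_hat, sigma_hat)
print(mu_hat, sigma_hat, per_century)
```

    Bootstrap confidence limits, as in the paper, would come from refitting (μ, σ) on resampled data sets and collecting the resulting exceedance rates.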

  8. Ways to improve your correlation functions

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    1993-01-01

    This paper describes a number of ways to improve on the standard method for measuring the two-point correlation function of large scale structure in the Universe. Issues addressed are: (1) the problem of the mean density, and how to solve it; (2) how to estimate the uncertainty in a measured correlation function; (3) minimum variance pair weighting; (4) unbiased estimation of the selection function when magnitudes are discrete; and (5) analytic computation of angular integrals in background pair counts.

  9. A new EEG synchronization strength analysis method: S-estimator based normalized weighted-permutation mutual information.

    PubMed

    Cui, Dong; Pu, Weiting; Liu, Jing; Bian, Zhijie; Li, Qiuli; Wang, Lei; Gu, Guanghua

    2016-10-01

    Synchronization is an important mechanism for understanding information processing in normal or abnormal brains. In this paper, we propose a new method called normalized weighted-permutation mutual information (NWPMI) for two-variable signal synchronization analysis, and combine NWPMI with the S-estimator measure to generate a new method, S-estimator based normalized weighted-permutation mutual information (SNWPMI), for analyzing multi-channel electroencephalographic (EEG) synchronization strength. The performance of the NWPMI, including the effects of time delay, embedding dimension, coupling coefficients, signal-to-noise ratios (SNRs), and data length, is evaluated using a coupled Henon mapping model. The results show that the NWPMI is superior in describing synchronization compared with the normalized permutation mutual information (NPMI). Furthermore, the proposed SNWPMI method is applied to analyze scalp EEG data from 26 amnestic mild cognitive impairment (aMCI) subjects and 20 age-matched controls with normal cognitive function, all of whom suffer from type 2 diabetes mellitus (T2DM). The proposed NWPMI and SNWPMI methods are suggested to be effective indices for estimating synchronization strength. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Resolution Enhancement Algorithm for Spaceborn SAR Based on Hanning Function Weighted Sidelobe Suppression

    NASA Astrophysics Data System (ADS)

    Li, C.; Zhou, X.; Tang, D.; Zhu, Z.

    2018-04-01

    Resolution and sidelobe level are mutually constrained in SAR imaging: sidelobe suppression is usually achieved at the cost of reduced resolution. This paper provides a resolution-enhancement method that exploits the opposing sidelobe characteristics of the Hanning window and the SAR image, maintaining high resolution while suppressing sidelobes. Compared with the traditional method, it improves resolution by 50% at a sidelobe level of -30 dB.
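
    The trade-off this abstract describes is easy to see in a plain DFT: window weighting suppresses spectral sidelobes at the cost of a wider main lobe (poorer resolution). The sketch below is a generic windowed-DFT demonstration (record length, tone frequency, and main-lobe exclusion width are invented), not the paper's SAR algorithm:

```python
import cmath, math

N = 64
tone = 8.3  # off-bin frequency in cycles per record, so leakage is visible
signal = [math.cos(2 * math.pi * tone * n / N) for n in range(N)]
hann = [0.5 * (1 - math.cos(2 * math.pi * n / (N - 1))) for n in range(N)]

def dft_mag(x):
    """Magnitudes of the DFT over the non-negative-frequency bins."""
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) for k in range(N // 2)]

def sidelobe_ratio(mag, halfwidth=4):
    """Largest magnitude outside the main lobe, relative to the peak."""
    peak = max(range(len(mag)), key=lambda k: mag[k])
    outside = [m for k, m in enumerate(mag) if abs(k - peak) > halfwidth]
    return max(outside) / mag[peak]

rect_ratio = sidelobe_ratio(dft_mag(signal))                         # no window
hann_ratio = sidelobe_ratio(dft_mag([s * w for s, w in zip(signal, hann)]))
print(rect_ratio, hann_ratio)  # Hanning weighting lowers the sidelobes
```

    A rectangular window's first sidelobe sits near -13 dB, while the Hanning window pushes sidelobes down past -31 dB at the cost of roughly doubling the main-lobe width; the paper's contribution is recovering resolution despite that widening.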

  11. The solitary wave solution of coupled Klein-Gordon-Zakharov equations via two different numerical methods

    NASA Astrophysics Data System (ADS)

    Dehghan, Mehdi; Nikpour, Ahmad

    2013-09-01

    In this research, we propose two different methods to solve the coupled Klein-Gordon-Zakharov (KGZ) equations: the Differential Quadrature (DQ) and Globally Radial Basis Functions (GRBFs) methods. In the DQ method, the derivative value of a function with respect to a point is directly approximated by a linear combination of all functional values in the global domain. The principal work in this method is the determination of weight coefficients. We use two ways of obtaining these coefficients: cosine expansion (CDQ) and radial basis functions (RBFs-DQ); the former is a mesh-based method, while the latter belongs to the class of meshless methods. Unlike the DQ method, the GRBF method directly substitutes the expression of the function approximation by RBFs into the partial differential equation. The main problem in the GRBFs method is ill-conditioning of the interpolation matrix. To avoid this problem, we study the bases introduced in Pazouki and Schaback (2011) [44]. Some examples are presented to compare the accuracy and ease of implementation of the proposed methods. In numerical examples, we concentrate on Inverse Multiquadric (IMQ) and second-order Thin Plate Spline (TPS) radial basis functions. The variable shape parameter (exponential and random) strategies are applied in the IMQ function and the results are compared with the constant shape parameter.
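
    For the polynomial-based form of differential quadrature, the first-derivative weight coefficients the abstract calls "the principal work" follow in closed form from Lagrange interpolation: a[i][j] = P(x_i) / ((x_i - x_j) P(x_j)) for i ≠ j, with diagonal entries fixed so each row sums to zero. This is the standard textbook construction, not the paper's CDQ or RBF-DQ variants:

```python
def dq_weights(nodes):
    """First-derivative differential-quadrature weights a[i][j], such that
    f'(x_i) ~= sum_j a[i][j] * f(x_j), derived from Lagrange interpolation."""
    n = len(nodes)
    # P(x_m) = product over k != m of (x_m - x_k)
    P = [1.0] * n
    for m in range(n):
        for k in range(n):
            if k != m:
                P[m] *= nodes[m] - nodes[k]
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i][j] = P[i] / ((nodes[i] - nodes[j]) * P[j])
        # Derivative of a constant is zero, so each row must sum to zero.
        a[i][i] = -sum(a[i][j] for j in range(n) if j != i)
    return a

# Five nodes represent any cubic exactly, so d/dx of x^3 is recovered exactly.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
A = dq_weights(xs)
deriv = [sum(A[i][j] * xs[j] ** 3 for j in range(5)) for i in range(5)]
print(deriv)  # matches 3*x^2 at the nodes, up to rounding
```

    With the weights in hand, a PDE like KGZ is reduced to a system of ordinary differential equations in the nodal values, which is the essence of the DQ approach.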

  12. Chain-Growth Methods for the Synthesis of High Molecular Weight Conducting and Semiconducting Polymers

    DTIC Science & Technology

    2013-08-25

    to produce the desired polymerization in analogy to the well-known "super glue" anionic polymerization. Although there are abundant examples of...light (a) and UV light (b). 5 are further functionalized and block polymers formed with polynorbornene have elastomeric properties. The...top) and UV (bottom) light show the evolution of the band gap of the polymer with increasing molecular weight. The plot on the right shows the

  13. Assessing the performance of the generalized propensity score for estimating the effect of quantitative or continuous exposures on binary outcomes

    PubMed Central

    2018-01-01

    Propensity score methods are increasingly being used to estimate the effects of treatments and exposures when using observational data. The propensity score was initially developed for use with binary exposures. The generalized propensity score (GPS) is an extension of the propensity score for use with quantitative or continuous exposures (eg, dose or quantity of medication, income, or years of education). We used Monte Carlo simulations to examine the performance of different methods of using the GPS to estimate the effect of continuous exposures on binary outcomes. We examined covariate adjustment using the GPS and weighting using weights based on the inverse of the GPS. We examined both the use of ordinary least squares to estimate the propensity function and the use of the covariate balancing propensity score algorithm. The use of methods based on the GPS was compared with the use of G‐computation. All methods resulted in essentially unbiased estimation of the population dose‐response function. However, GPS‐based weighting tended to result in estimates that displayed greater variability and had higher mean squared error when the magnitude of confounding was strong. Of the methods based on the GPS, covariate adjustment using the GPS tended to result in estimates with lower variability and mean squared error when the magnitude of confounding was strong. We illustrate the application of these methods by estimating the effect of average neighborhood income on the probability of death within 1 year of hospitalization for an acute myocardial infarction. PMID:29508424

  14. Assessing the performance of the generalized propensity score for estimating the effect of quantitative or continuous exposures on binary outcomes.

    PubMed

    Austin, Peter C

    2018-05-20

    Propensity score methods are increasingly being used to estimate the effects of treatments and exposures when using observational data. The propensity score was initially developed for use with binary exposures. The generalized propensity score (GPS) is an extension of the propensity score for use with quantitative or continuous exposures (eg, dose or quantity of medication, income, or years of education). We used Monte Carlo simulations to examine the performance of different methods of using the GPS to estimate the effect of continuous exposures on binary outcomes. We examined covariate adjustment using the GPS and weighting using weights based on the inverse of the GPS. We examined both the use of ordinary least squares to estimate the propensity function and the use of the covariate balancing propensity score algorithm. The use of methods based on the GPS was compared with the use of G-computation. All methods resulted in essentially unbiased estimation of the population dose-response function. However, GPS-based weighting tended to result in estimates that displayed greater variability and had higher mean squared error when the magnitude of confounding was strong. Of the methods based on the GPS, covariate adjustment using the GPS tended to result in estimates with lower variability and mean squared error when the magnitude of confounding was strong. We illustrate the application of these methods by estimating the effect of average neighborhood income on the probability of death within 1 year of hospitalization for an acute myocardial infarction. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
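
    For the normal-error case this abstract describes, the generalized propensity score is the conditional density of the exposure given covariates; the propensity function is fitted by ordinary least squares, and a stabilized inverse-GPS weight divides the marginal exposure density by that conditional density. A simulation sketch along those lines, with an invented data-generating process:

```python
import math, random

def normal_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

random.seed(1)
n = 2000
X = [random.gauss(0, 1) for _ in range(n)]         # confounder
A = [0.5 * x + random.gauss(0, 1) for x in X]      # continuous exposure

# Propensity function E[A | X] fitted by ordinary least squares.
mx, ma = sum(X) / n, sum(A) / n
b1 = (sum((x - mx) * (a - ma) for x, a in zip(X, A))
      / sum((x - mx) ** 2 for x in X))
b0 = ma - b1 * mx
resid = [a - (b0 + b1 * x) for x, a in zip(X, A)]
sd_cond = math.sqrt(sum(r * r for r in resid) / n)       # conditional SD
sd_marg = math.sqrt(sum((a - ma) ** 2 for a in A) / n)   # marginal SD

# Stabilized inverse-GPS weights: marginal density over conditional density.
w = [normal_pdf(a, ma, sd_marg) / normal_pdf(a, b0 + b1 * x, sd_cond)
     for x, a in zip(X, A)]
print(sum(w) / n)  # stabilized weights average near 1
```

    The weighted sample behaves like one in which exposure is independent of X; the abstract's finding is that this weighting, while unbiased, can be noisier than GPS covariate adjustment under strong confounding.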

  15. Weighted Flow Algorithms (WFA) for stochastic particle coagulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeVille, R.E.L., E-mail: rdeville@illinois.edu; Riemer, N., E-mail: nriemer@illinois.edu; West, M., E-mail: mwest@illinois.edu

    2011-09-20

    Stochastic particle-resolved methods are a useful way to compute the time evolution of the multi-dimensional size distribution of atmospheric aerosol particles. An effective approach to improve the efficiency of such models is the use of weighted computational particles. Here we introduce particle weighting functions that are power laws in particle size to the recently-developed particle-resolved model PartMC-MOSAIC and present the mathematical formalism of these Weighted Flow Algorithms (WFA) for particle coagulation and growth. We apply this to an urban plume scenario that simulates a particle population undergoing emission of different particle types, dilution, coagulation and aerosol chemistry along a Lagrangian trajectory. We quantify the performance of the Weighted Flow Algorithm for number and mass-based quantities of relevance for atmospheric sciences applications.

  16. Weighted Flow Algorithms (WFA) for stochastic particle coagulation

    NASA Astrophysics Data System (ADS)

    DeVille, R. E. L.; Riemer, N.; West, M.

    2011-09-01

    Stochastic particle-resolved methods are a useful way to compute the time evolution of the multi-dimensional size distribution of atmospheric aerosol particles. An effective approach to improve the efficiency of such models is the use of weighted computational particles. Here we introduce particle weighting functions that are power laws in particle size to the recently-developed particle-resolved model PartMC-MOSAIC and present the mathematical formalism of these Weighted Flow Algorithms (WFA) for particle coagulation and growth. We apply this to an urban plume scenario that simulates a particle population undergoing emission of different particle types, dilution, coagulation and aerosol chemistry along a Lagrangian trajectory. We quantify the performance of the Weighted Flow Algorithm for number and mass-based quantities of relevance for atmospheric sciences applications.

  17. A method for estimating both the solubility parameters and molar volumes of liquids

    NASA Technical Reports Server (NTRS)

    Fedors, R. F.

    1974-01-01

    An indirect method is developed for estimating the solubility parameter of high-molecular-weight polymers. The proposed method, like Small's method, is based on group additive constants, but is believed to be superior to Small's method for two reasons: (1) the contributions of a much larger number of functional groups have been evaluated, and (2) the method requires only a knowledge of the structural formula of the compound.
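
    Group-additivity schemes of this kind compute the solubility parameter as δ = (Σᵢ Δeᵢ / Σᵢ Δvᵢ)^½, summing a cohesive-energy and a molar-volume contribution per functional group, which is also how the method yields the molar volume as a by-product. The sketch below uses illustrative ballpark group constants (J/mol and cm³/mol), not Fedors' published table:

```python
import math

# Illustrative group contributions (cohesive energy J/mol, molar volume cm3/mol).
# Ballpark numbers for demonstration only, not Fedors' published constants.
GROUPS = {
    "CH3": (4710.0, 33.5),
    "CH2": (4940.0, 16.1),
    "OH":  (29800.0, 10.0),
}

def solubility_parameter(formula):
    """delta = sqrt(total cohesive energy / total molar volume),
    in (J/cm3)^0.5 = MPa^0.5; also returns the estimated molar volume."""
    e = sum(GROUPS[g][0] * count for g, count in formula.items())
    v = sum(GROUPS[g][1] * count for g, count in formula.items())
    return math.sqrt(e / v), v

# 1-butanol: CH3-(CH2)3-OH
delta, molar_volume = solubility_parameter({"CH3": 1, "CH2": 3, "OH": 1})
print(delta, molar_volume)  # ~23 MPa^0.5 and ~92 cm3/mol
```

    Extending the table with more functional groups is exactly the first advantage the abstract claims over Small's method.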

  18. Current dosing of low-molecular-weight heparins does not reflect licensed product labels: an international survey.

    PubMed

    Barras, Michael A; Kirkpatrick, Carl M J; Green, Bruce

    2010-05-01

    Low-molecular-weight heparins (LMWHs) are used globally to treat thromboembolic diseases; however, there is much debate on how to prescribe effectively for patients who have renal impairment and/or obesity. We aimed to investigate the strategies used to dose-individualize LMWH therapy. We conducted an online survey of selected hospitals in Australia, New Zealand (NZ), United Kingdom (UK) and the United States (US). Outcome measures included: the percentage of hospitals which recommended that LMWHs were prescribed according to the product label (PL), the percentage of hospitals that dose-individualized LMWHs outside the PL based on renal function, body weight and anti-Xa activity and a summary of methods used to dose-individualize therapy. A total of 257 surveys were suitable for analysis: 84 (33%) from Australia, 79 (31%) from the UK, 73 (28%) from the US and 21 (8%) from NZ. Formal dosing protocols were used in 207 (81%) hospitals, of which 198 (96%) did not adhere to the PL. Of these 198 hospitals, 175 (87%) preferred to dose-individualize based on renal function, 128 (62%) on body weight and 48 (23%) by monitoring anti-Xa activity. All three of these variables were used in 29 (14%) hospitals, 98 (47%) used two variables and 71 (34%) used only one variable. Dose-individualization strategies for LMWHs, which contravene the PL, were present in 96% of surveyed hospitals. Common individualization methods included dose-capping, use of lean body size descriptors to calculate renal function and the starting dose, followed by post dose anti-Xa monitoring.

  19. Negative Social Evaluation Impairs Executive Functions in Adolescents With Excess Weight: Associations With Autonomic Responses.

    PubMed

    Padilla, María Moreno; Fernández-Serrano, María J; Verdejo García, Antonio; Reyes Del Paso, Gustavo A

    2018-06-22

    Adolescents with excess weight suffer social stress more frequently than their peers with normal weight. We aimed to examine the impact of social stress, specifically negative social evaluation, on executive functions in adolescents with excess weight, and to examine associations between subjective stress, autonomic reactivity, and executive functioning. Sixty adolescents (aged 13-18 years), classified into excess-weight and normal-weight groups, participated. We assessed executive functioning (working memory, inhibition, and shifting) and subjective stress levels before and after the Trier Social Stress Task (TSST). The TSST was divided into two phases according to the feedback of the audience: positive and negative social evaluation. Heart rate and skin conductance were recorded. Adolescents with excess weight showed poorer executive functioning after exposure to the TSST compared with adolescents with normal weight. Subjective stress and autonomic reactivity were also greater in adolescents with excess weight than in adolescents with normal weight. Negative social evaluation was associated with worse executive functioning and increased autonomic reactivity in adolescents with excess weight. The findings suggest that adolescents with excess weight are more sensitive to social stress triggered by negative evaluations. Social stress elicited deterioration of executive functioning in adolescents with excess weight, and evoked increases in subjective stress and autonomic responses predicted decreased executive function. Deficits in executive skills could reduce cognitive control abilities and lead to overeating in adolescents with excess weight. Strategies for coping with social stress to prevent executive deficits could be useful in preventing future obesity in this population.

  20. Amelioration of estrogen-deficiency-induced obesity by Ocimum gratissimum

    PubMed Central

    Chao, Pei-Yu; Chiang, Tsay-I; Chang, I-Chang; Tsai, Fang-Ling; Lee, Hsueh-Hui; Hsieh, Kuanghui; Chiu, Yung-Wei; Lai, Te-Jen; Liu, Jer-Yuh; Hsu, Li-Sung; Shih, Yang-Chia

    2017-01-01

    Objectives: Menopausal transition in women initiates with declining estrogen levels and is followed by significant changes in their physiological characteristics. These changes often lead to medical conditions, such as obesity, which is correlated with chronic low-grade/subclinical inflammation. Ocimum gratissimum L. is a food spice or traditional herb in many countries; the plant is rich in antioxidants, which possess anti-inflammatory activities and a multitude of other therapeutic functions. Methods: In this study, we evaluated effects of O. gratissimum extract (OGE) in preventing obesity by using ovariectomized (OVX) animal models to mimic menopausal women. Results: OVX rats showed increases in body weight and in adipocyte size in perigonadal adipose tissue (p < 0.05) and a decrease in uterus weight. By contrast, OGE (0.2 mg/ml) significantly reduced body weight gain and adipocyte size in OVX rats, with insignificant changes in uterus weight. Further investigation indicated that OGE exerted no influence on levels of dorsal fat, serum total cholesterol, and serum triacylglycerol, or on serum biochemical factors, calcium, phosphorus, and glucose. Conclusion: These findings suggested that OGE dietary supplements may be useful in controlling body weight of menopausal women. PMID:28824328

  1. Modeling Fetal Weight for Gestational Age: A Comparison of a Flexible Multi-level Spline-based Model with Other Approaches

    PubMed Central

    Villandré, Luc; Hutcheon, Jennifer A; Perez Trejo, Maria Esther; Abenhaim, Haim; Jacobsen, Geir; Platt, Robert W

    2011-01-01

    We present a model for longitudinal measures of fetal weight as a function of gestational age. We use a linear mixed model, with a Box-Cox transformation of fetal weight values, and restricted cubic splines, in order to flexibly but parsimoniously model median fetal weight. We systematically compare our model to other proposed approaches. All proposed methods are shown to yield similar median estimates, as evidenced by overlapping pointwise confidence bands, except after 40 completed weeks, where our method seems to produce estimates more consistent with observed data. Sex-based stratification affects the estimates of the random effects variance-covariance structure, without significantly changing sex-specific fitted median values. We illustrate the benefits of including sex-gestational age interaction terms in the model over stratification. The comparison leads to the conclusion that the selection of a model for fetal weight for gestational age can be based on the specific goals and configuration of a given study without affecting the precision or value of median estimates for most gestational ages of interest. PMID:21931571

  2. Amelioration of estrogen-deficiency-induced obesity by Ocimum gratissimum.

    PubMed

    Chao, Pei-Yu; Chiang, Tsay-I; Chang, I-Chang; Tsai, Fang-Ling; Lee, Hsueh-Hui; Hsieh, Kuanghui; Chiu, Yung-Wei; Lai, Te-Jen; Liu, Jer-Yuh; Hsu, Li-Sung; Shih, Yang-Chia

    2017-01-01

    Objectives: Menopausal transition in women initiates with declining estrogen levels and is followed by significant changes in their physiological characteristics. These changes often lead to medical conditions, such as obesity, which is correlated with chronic low-grade/subclinical inflammation. Ocimum gratissimum L. is a food spice or traditional herb in many countries; the plant is rich in antioxidants, which possess anti-inflammatory activities and a multitude of other therapeutic functions. Methods: In this study, we evaluated effects of O. gratissimum extract (OGE) in preventing obesity by using ovariectomized (OVX) animal models to mimic menopausal women. Results: OVX rats showed increases in body weight and in adipocyte size in perigonadal adipose tissue (p < 0.05) and a decrease in uterus weight. By contrast, OGE (0.2 mg/ml) significantly reduced body weight gain and adipocyte size in OVX rats, with insignificant changes in uterus weight. Further investigation indicated that OGE exerted no influence on levels of dorsal fat, serum total cholesterol, and serum triacylglycerol, or on serum biochemical factors, calcium, phosphorus, and glucose. Conclusion: These findings suggested that OGE dietary supplements may be useful in controlling body weight of menopausal women.

  3. A Truncated Nuclear Norm Regularization Method Based on Weighted Residual Error for Matrix Completion.

    PubMed

    Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin

    2016-01-01

    Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. A truncated nuclear norm has recently been proposed as a better approximation to the rank of a matrix than the nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than the nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. The ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, TNNR alternating direction method of multipliers, and TNNR accelerated proximal gradient with line search methods. Experimental results using both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than TNNR and Iteratively Reweighted Nuclear Norm (IRNN) methods.
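
    The truncated nuclear norm at the heart of TNNR is the sum of a matrix's singular values after the r largest are dropped (the r "subtracted" singular values the abstract refers to). A dependency-free sketch that computes it by power iteration with deflation; this illustrates the quantity itself, not the paper's ADMM/APG solvers:

```python
import math

def top_singular(ata, iters=500):
    """Largest eigenvalue/eigenvector of the symmetric PSD matrix ata (= X^T X)
    by power iteration; the square root of the eigenvalue is a singular value."""
    n = len(ata)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(ata[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        if norm == 0.0:
            return 0.0, v
        v = [x / norm for x in w]
        lam = norm
    return lam, v

def singular_values(X):
    n = len(X[0])
    ata = [[sum(row[i] * row[j] for row in X) for j in range(n)]
           for i in range(n)]
    svals = []
    for _ in range(n):
        lam, v = top_singular(ata)
        svals.append(math.sqrt(max(lam, 0.0)))
        # Deflate: remove the found eigencomponent lam * v v^T.
        for i in range(n):
            for j in range(n):
                ata[i][j] -= lam * v[i] * v[j]
    return svals

def truncated_nuclear_norm(X, r):
    """Sum of singular values excluding the r largest."""
    return sum(sorted(singular_values(X), reverse=True)[r:])

X = [[3.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 1.0]]
print(truncated_nuclear_norm(X, 1))  # singular values 3, 2, 1 -> 2 + 1 = 3
```

    Minimizing this quantity penalizes only the tail singular values, so the dominant low-rank structure is preserved while the residual rank is driven down; in practice, TNNR solvers use a proper SVD rather than power iteration.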

  4. Generalized effective-mass theory of subsurface scanning tunneling microscopy: Application to cleaved quantum dots

    NASA Astrophysics Data System (ADS)

    Roy, M.; Maksym, P. A.; Bruls, D.; Offermans, P.; Koenraad, P. M.

    2010-11-01

    An effective-mass theory of subsurface scanning tunneling microscopy (STM) is developed. Subsurface structures such as quantum dots embedded into a semiconductor slab are considered. States localized around subsurface structures match on to a tail that decays into the vacuum above the surface. It is shown that the lateral variation in this tail may be found from a surface envelope function provided that the effects of the slab surfaces and the subsurface structure decouple approximately. The surface envelope function is given by a weighted integral of a bulk envelope function that satisfies boundary conditions appropriate to the slab. The weight function decays into the slab inversely with distance and this slow decay explains the subsurface sensitivity of STM. These results enable STM images to be computed simply and economically from the bulk envelope function. The method is used to compute wave-function images of cleaved quantum dots and the computed images agree very well with experiment.

  5. Censored quantile regression with recursive partitioning-based weights

    PubMed Central

    Wey, Andrew; Wang, Lan; Rudser, Kyle

    2014-01-01

    Censored quantile regression provides a useful alternative to the Cox proportional hazards model for analyzing survival data. It directly models the conditional quantile of the survival time and hence is easy to interpret. Moreover, it relaxes the proportionality constraint on the hazard function associated with the popular Cox model and is natural for modeling heterogeneity of the data. Recently, Wang and Wang (2009. Locally weighted censored quantile regression. Journal of the American Statistical Association 103, 1117–1128) proposed a locally weighted censored quantile regression approach that allows for covariate-dependent censoring and is less restrictive than other censored quantile regression methods. However, their kernel smoothing-based weighting scheme requires all covariates to be continuous and encounters practical difficulty with even a moderate number of covariates. We propose a new weighting approach that uses recursive partitioning, e.g. survival trees, that offers greater flexibility in handling covariate-dependent censoring in moderately high dimensions and can incorporate both continuous and discrete covariates. We prove that this new weighting scheme leads to consistent estimation of the quantile regression coefficients and demonstrate its effectiveness via Monte Carlo simulations. We also illustrate the new method using a widely recognized data set from a clinical trial on primary biliary cirrhosis. PMID:23975800

  6. An Intuitionistic Multiplicative ORESTE Method for Patients’ Prioritization of Hospitalization

    PubMed Central

    Zhang, Cheng; Wu, Xingli; Wu, Di; Luo, Li; Herrera-Viedma, Enrique

    2018-01-01

    The pressure on sickbeds is a common and intractable issue in public hospitals in China due to the large population. Assigning the order of hospitalization of patients is difficult because of complex patient information such as disease type, degree of emergency, and severity. It is critical to rank the patients taking full account of various factors. However, most of the evaluation criteria for hospitalization are qualitative, and classical ranking methods cannot derive the detailed relations between patients based on these criteria. Motivated by this, a comprehensive multiple criteria decision making method named the intuitionistic multiplicative ORESTE (organisation, rangement et synthèse de données relationnelles, in French) is proposed to handle the problem. Both the subjective and objective weights of criteria are considered in the proposed method. First, considering the vagueness of human perceptions of the alternatives, an intuitionistic multiplicative preference relation model is applied to represent the experts’ preferences over the pairwise alternatives with respect to the predetermined criteria. Then, a correlation coefficient-based weight determining method is developed to derive the objective weights of the criteria; this method can overcome the biased results caused by highly related criteria. Afterwards, we improve the general ranking method, ORESTE, by introducing a new score function which considers both the subjective and objective weights of the criteria. The resulting intuitionistic multiplicative ORESTE method is further highlighted by a case study concerning patients’ prioritization. PMID:29673212

  7. Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy

    NASA Technical Reports Server (NTRS)

    Ford, G. E.

    1986-01-01

    To characterize and quantify the performance of the Landsat thematic mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated and the accuracy of the correction of geometric errors in TM images analyzed. Theoretical evaluations and comparisons for existing methods for the design of linear transformation for dimensionality reduction are presented. These methods include the discrete Karhunen Loeve (KL) expansion, Multiple Discriminant Analysis (MDA), Thematic Mapper (TM)-Tasseled Cap Linear Transformation and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed. They are referred to as the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Version of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of the TM data. It is shown that in terms of probability of classification error and the percentage of the cumulative eigenvalues, the six reflective bands of the TM data require only a three dimensional feature space. It is shown experimentally as well that for the proposed methods, the classes with high weights have improvements in class conditional probability of error estimates as expected.

  8. ANTONIA perfusion and stroke. A software tool for the multi-purpose analysis of MR perfusion-weighted datasets and quantitative ischemic stroke assessment.

    PubMed

    Forkert, N D; Cheng, B; Kemmling, A; Thomalla, G; Fiehler, J

    2014-01-01

    The objective of this work is to present the software tool ANTONIA, which has been developed to facilitate a quantitative analysis of perfusion-weighted MRI (PWI) datasets in general as well as the subsequent multi-parametric analysis of additional datasets for the specific purpose of acute ischemic stroke patient dataset evaluation. Three different methods for the analysis of DSC or DCE PWI datasets are currently implemented in ANTONIA, which can be case-specifically selected based on the study protocol. These methods comprise a curve fitting method as well as a deconvolution-based and deconvolution-free method integrating a previously defined arterial input function. The perfusion analysis is extended for the purpose of acute ischemic stroke analysis by additional methods that enable an automatic atlas-based selection of the arterial input function, an analysis of the perfusion-diffusion and DWI-FLAIR mismatch as well as segmentation-based volumetric analyses. For reliability evaluation, the described software tool was used by two observers for quantitative analysis of 15 datasets from acute ischemic stroke patients to extract the acute lesion core volume, FLAIR ratio, perfusion-diffusion mismatch volume with manually as well as automatically selected arterial input functions, and follow-up lesion volume. The results of this evaluation revealed that the described software tool leads to highly reproducible results for all parameters if the automatic arterial input function selection method is used. Due to the broad selection of processing methods that are available in the software tool, ANTONIA is especially helpful to support image-based perfusion and acute ischemic stroke research projects.

  9. Physical Activity and Physical Function in Individuals Post-bariatric Surgery

    PubMed Central

    Josbeno, Deborah A.; Kalarchian, Melissa; Sparto, Patrick J.; Otto, Amy D.; Jakicic, John M.

    2016-01-01

    Background A better understanding of the physical activity behavior of individuals who undergo bariatric surgery will enable the development of effective post-surgical exercise guidelines and interventions to enhance weight loss outcomes. This study characterized the physical activity profile and physical function of 40 subjects 2–5 years post-bariatric surgery and examined the association between physical activity, physical function, and weight loss after surgery. Methods Moderate-to-vigorous intensity physical activity (MVPA) was assessed with the BodyMedia SenseWear® Pro (SWPro) armband, and physical function (PF) was measured using the physical function subscale of the 36-Item Short Form Health Survey instrument (SF-36PF). Height and weight were measured. Results Percent of excess weight loss (%EWL) was associated with MVPA (r = 0.44, p = 0.01) and PF (r = 0.38, p = 0.02); MVPA was not associated with PF (r = 0.24, p = 0.14). Regression analysis demonstrated that MVPA was associated with %EWL (β = 0.38, t = 2.43, p = 0.02). Subjects who participated in ≥150 min/week of MVPA had a greater %EWL (68.2 ± 19, p = 0.01) than those who participated in <150 min/week (52.5 ± 17.4). Conclusions Results suggest that subjects are capable of performing most mobility activities. However, the lack of an association between PF and MVPA suggests that a higher level of PF does not necessarily correspond to a higher level of MVPA participation. Thus, the barriers to adoption of a more physically active lifestyle may not be fully explained by the subjects’ physical limitations. Further understanding of this relationship is needed for the development of post-surgical weight loss guidelines and interventions. PMID:21153567

  10. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    NASA Astrophysics Data System (ADS)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. To address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined using 9 averaging methods: the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC), and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe efficiency was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averages from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
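For a concrete sense of the recommended family of methods: Granger-Ramanathan variant A amounts to an unconstrained least-squares regression of observed flows on the member simulations (variants B and C add sum-to-one and non-negativity constraints). A minimal sketch with synthetic data, not the study's code:

```python
import numpy as np

def gra_weights(sims, obs):
    """Granger-Ramanathan variant A: unconstrained least-squares weights
    regressing observed flows on the member simulations
    (one column of `sims` per ensemble member)."""
    w, *_ = np.linalg.lstsq(sims, obs, rcond=None)
    return w

# Synthetic illustration: obs is an exact 0.7/0.3 blend of two members,
# so variant A should recover those blending weights.
rng = np.random.default_rng(0)
sims = rng.random((100, 2))
obs = 0.7 * sims[:, 0] + 0.3 * sims[:, 1]
w = gra_weights(sims, obs)
combined = sims @ w
```

With real flows the weights are not a perfect blend; they instead minimize the residual variance of the combined hydrograph, which is why the variants can outperform every individual member.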

  11. Dwell time-based stabilisation of switched delay systems using free-weighting matrices

    NASA Astrophysics Data System (ADS)

    Koru, Ahmet Taha; Delibaşı, Akın; Özbay, Hitay

    2018-01-01

    In this paper, we present a quasi-convex optimisation method to minimise an upper bound on the dwell time for stability of switched delay systems. Piecewise Lyapunov-Krasovskii functionals are introduced, and the upper bound on the derivative of the Lyapunov functionals is estimated by the free-weighting matrices method to investigate the non-switching stability of each candidate subsystem. Then, a sufficient condition on the dwell time is derived to guarantee the asymptotic stability of the switched delay system. Once these conditions are represented by a set of linear matrix inequalities, the dwell-time optimisation problem can be formulated as a standard quasi-convex optimisation problem. Numerical examples are given to illustrate the improvements over previously obtained dwell time bounds. Using the results obtained in the stability case, we present a nonlinear minimisation algorithm to synthesise dwell-time-minimising controllers. The algorithm solves the problem by successive linearisation of the nonlinear conditions.

  12. Foot Pain and Pronated Foot Type are Associated with Self-Reported Mobility Limitations in Older Adults: the Framingham Foot Study

    PubMed Central

    Menz, Hylton B.; Dufour, Alyssa B.; Katz, Patricia; Hannan, Marian T.

    2015-01-01

    Background The foot plays an important role in supporting the body when undertaking weight bearing activities. Aging is associated with an increased prevalence of foot pain and a lowering of the arch of the foot, both of which may impair mobility. Objective To examine the associations of foot pain, foot posture and dynamic foot function with self-reported mobility limitations in community-dwelling older adults. Methods Foot examinations were conducted on 1,860 members of the Framingham Study in 2002–2005. Foot posture was categorized as normal, planus or cavus using static pressure measurements, and foot function was categorized as normal, pronated or supinated using dynamic pressure measurements. Participants were asked whether they had foot pain and any difficulty performing a list of nine weight bearing tasks. Multivariate logistic regression and linear regression models were used to examine the associations of foot pain, posture, function and ability to perform these activities. Results After adjusting for age, sex, height and weight, foot pain was significantly associated with difficulty performing all nine weight bearing activities. Compared to those with normal foot posture and function, participants with planus foot posture were more likely to report difficulty remaining balanced (odds ratio [OR] = 1.40, 95% confidence interval [CI] 1.06 to 1.85; p=0.018) and individuals with pronated foot function were more likely to report difficulty walking across a small room (OR = 2.07, 95% CI 1.02 to 4.22; p=0.045). Foot pain and planus foot posture were associated with an overall mobility limitation score combining performances on each measure. Conclusion Foot pain, planus foot posture and pronated foot function are associated with self-reported difficulty undertaking common weight bearing tasks. Interventions to reduce foot pain and improve foot posture and function may therefore have a role in improving mobility in older adults. PMID:26645379

  13. Interpolation of orientation distribution functions in diffusion weighted imaging using multi-tensor model.

    PubMed

    Afzali, Maryam; Fatemizadeh, Emad; Soltanian-Zadeh, Hamid

    2015-09-30

    Diffusion weighted imaging (DWI) is a non-invasive method for investigating the brain white matter structure and can be used to evaluate fiber bundles. However, due to practical constraints, DWI data acquired in clinics are of low resolution. This paper proposes a method for interpolation of orientation distribution functions (ODFs). To this end, fuzzy clustering is applied to segment ODFs based on the principal diffusion directions (PDDs). Next, each cluster is modeled by a tensor so that an ODF is represented by a mixture of tensors. For interpolation, each tensor is rotated separately. The method is applied to synthetic and real DWI data of control and epileptic subjects. Both experiments illustrate the capability of the method to properly increase the spatial resolution of the data in the ODF field. The real dataset shows that the method is capable of reliably identifying differences between temporal lobe epilepsy (TLE) patients and normal subjects. The method is compared to existing methods; comparison studies show that the proposed method generates smaller angular errors than the existing methods. Another advantage of the method is that it does not require an iterative algorithm to find the tensors. The proposed method is appropriate for increasing resolution in the ODF field and can be applied to clinical data to improve evaluation of white matter fibers in the brain. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Prediction of Heterodimeric Protein Complexes from Weighted Protein-Protein Interaction Networks Using Novel Features and Kernel Functions

    PubMed Central

    Ruan, Peiying; Hayashida, Morihiro; Maruyama, Osamu; Akutsu, Tatsuya

    2013-01-01

    Since many proteins express their functional activity by interacting with other proteins and forming protein complexes, it is very useful to identify sets of proteins that form complexes. For that purpose, many methods for predicting protein complexes from protein-protein interactions have been developed, such as MCL, MCODE, RNSC, PCP, RRW, and NWE. These methods have dealt only with complexes of size greater than three because they are often based on some density of subgraphs. However, heterodimeric protein complexes, which consist of two distinct proteins, account for a large proportion of the entries in several comprehensive databases of known complexes. In this paper, we propose several feature space mappings from protein-protein interaction data, in which each interaction is weighted based on reliability. Furthermore, we make use of prior knowledge on protein domains to develop feature space mappings, a domain composition kernel, and its combination kernel with our proposed features. We perform ten-fold cross-validation computational experiments. The results suggest that our proposed kernel considerably outperforms the naive Bayes-based method, which is the best existing method for predicting heterodimeric protein complexes. PMID:23776458

  15. Obesity and Overweight Associated With Increased Carotid Diameter and Decreased Arterial Function in Young Otherwise Healthy Men

    PubMed Central

    2014-01-01

    BACKGROUND Obesity is linked to cardiovascular disease, stroke, increased mortality and vascular remodeling. Although increased arterial diameter is associated with multiple cardiovascular risk factors and obesity, it is unknown whether lumen enlargement is accompanied by unfavorable vascular changes in young and otherwise healthy obese individuals. The purpose of this study was to compare carotid and brachial artery diameter, blood pressure, arterial stiffness, and endothelial function in young, apparently healthy, normal-weight, overweight, and obese male subjects. METHODS One hundred sixty-five male subjects (27.39±0.59 years) were divided into 3 groups (normal weight, overweight, and obese) according to body mass index. Subjects underwent cardiovascular measurements to determine arterial diameter, function, and stiffness. RESULTS After adjusting for age, the obese group had significantly greater brachial, carotid, and aortic pressures, brachial pulse wave velocity, carotid intima media thickness, and carotid arterial diameter compared with both the overweight and normal-weight groups. CONCLUSIONS Obesity is associated with a much worse arterial profile, as an increased carotid lumen size was accompanied by higher blood pressure, greater arterial stiffness, and greater carotid intima media thickness in obese compared with overweight or normal-weight individuals. These data suggest that although obesity may be a factor in arterial remodeling, such remodeling is also accompanied by other hemodynamic and arterial changes consistent with reduced arterial function and increased cardiovascular risk. PMID:24048148

  16. Bovine colostrum to children with short bowel syndrome: a randomized, double-blind, crossover pilot study.

    PubMed

    Aunsholt, Lise; Jeppesen, Palle Bekker; Lund, Pernille; Sangild, Per Torp; Ifaoui, Inge Bøtker Rasmussen; Qvist, Niels; Husby, Steffen

    2014-01-01

    Management of short bowel syndrome (SBS) aims to achieve intestinal autonomy to prevent fluid, electrolyte, and nutrient deficiencies and maintain adequate development. Remnant intestinal adaptation is required to obtain autonomy. In the newborn pig, colostrum has been shown to support intestinal development and hence adaptive processes. The efficacy of bovine colostrum to improve intestinal function in children with SBS was evaluated by metabolic balance studies. Nine children with SBS were included in a randomized, double-blind, crossover study. Twenty percent of enteral fluid intake was replaced with bovine colostrum or a mixed milk diet for 4 weeks, separated by a 4-week washout period. Intestinal absorption of energy and wet weight was used to assess intestinal function and the efficacy of colostrum. Colostrum did not improve energy or wet weight absorption compared with the mixed milk diet (P = 1.00 and P = .93, respectively). Growth as measured by weight and knemometry did not differ between diets (P = .93 and P = .28). In these patients, <150% enteral energy absorption of basal metabolic rate and 50% enteral fluid absorption of basal fluid requirement suggested intestinal failure and a need for parenteral nutrition (PN). Inclusion of bovine colostrum to the diet did not improve intestinal function. Metabolic nutrient and wet weight balance studies successfully assessed intestinal function, and this method may distinguish between intestinal insufficiency (non-PN-dependent) and intestinal failure (PN-dependent) patients.

  17. Connectivity strength-weighted sparse group representation-based brain network construction for MCI classification.

    PubMed

    Yu, Renping; Zhang, Han; An, Le; Chen, Xiaobo; Wei, Zhihui; Shen, Dinggang

    2017-05-01

    Brain functional network analysis has shown great potential in understanding brain functions and in identifying biomarkers for brain diseases, such as Alzheimer's disease (AD) and its early stage, mild cognitive impairment (MCI). In these applications, accurate construction of a biologically meaningful brain network is critical. Sparse learning has been widely used for brain network construction; however, its ℓ1-norm penalty simply penalizes each edge of a brain network equally, without considering the original connectivity strength, which is one of the most important inherent link-wise characteristics. Besides, based on the similarity of the link-wise connectivity, a brain network shows prominent group structure (i.e., sets of edges sharing similar attributes). In this article, we propose a novel brain functional network modeling framework with a "connectivity strength-weighted sparse group constraint." In particular, the network modeling can be optimized by considering both raw connectivity strength and its group structure, without losing the merit of sparsity. Our proposed method is applied to MCI classification, a challenging task for early AD diagnosis. Experimental results based on resting-state functional MRI, from 50 MCI patients and 49 healthy controls, show that our proposed method is more effective (i.e., achieving a significantly higher classification accuracy, 84.8%) than other competing methods (e.g., sparse representation, accuracy = 65.6%). Post hoc inspection of the informative features further shows more biologically meaningful brain functional connectivities obtained by our proposed method. Hum Brain Mapp 38:2370-2383, 2017. © 2017 Wiley Periodicals, Inc.

  18. Do Portuguese and UK health state values differ across valuation methods?

    PubMed

    Ferreira, Lara N; Ferreira, Pedro L; Rowen, Donna; Brazier, John E

    2011-05-01

    There has been increasing interest in developing country-specific preference weights for widely used measures of health-related quality of life. The valuation of health states has usually been done using the cardinal preference elicitation techniques of standard gamble (SG) or time trade-off (TTO). Yet there is growing interest in the use of ordinal methods to elicit health state utility values as an alternative to the more conventional cardinal techniques. This raises the issue of, firstly, whether ordinal and cardinal methods of preference elicitation provide similar results and, secondly, whether this relationship is robust across different valuation studies and different populations. This study examines SG and rank preference weights for the SF-6D derived from samples of the UK and Portuguese general population. The preference weights for the Portuguese sample (n = 140) using rank data are estimated here from 810 health state valuations. The study further examines whether the use of these different preference weights has an impact when comparing the health of different age and severity groups in the Portuguese working population (n = 2,459). The rank model performed well across the majority of the goodness-of-fit measures used. The preference weights for the Portuguese sample using rank data are systematically lower than the UK weights for physical functioning and pain. Yet our results suggest greater similarity across the UK and Portuguese samples between preference weights derived using rank data than using standard gamble. Our results further suggest that the SF-6D values for a sample of the Portuguese working-age population, and differences across groups, are affected by the use of different preference weights. We suggest that a Portuguese SF-6D weighting system is preferred for studies aiming to reflect the health state preferences of the Portuguese population.

  19. FUNCTIONAL OUTCOMES OF HIP ARTHROSCOPY IN AN ACTIVE DUTY MILITARY POPULATION UTILIZING A CRITERION-BASED EARLY WEIGHT BEARING PROGRESSION

    PubMed Central

    Jacobs, Jeremy M.; Evanson, J. Richard; Pniewski, Josh; Dickston, Michelle L.; Mueller, Terry; Bojescul, John A.

    2017-01-01

    Introduction Hip arthroscopy allows surgeons to address intra-articular pathology of the hip while avoiding more invasive open surgical dislocation. However, the post-operative rehabilitation protocols have varied greatly in the literature, with many having prolonged periods of limited motion and weight bearing. Purpose The purpose of this study was to describe a criterion-based early weight bearing protocol following hip arthroscopy and investigate functional outcomes in subjects who were active duty military. Methods Active duty personnel undergoing hip arthroscopy for symptomatic femoroacetabular impingement were prospectively assessed in a controlled environment for the ability to incorporate early postoperative weight bearing with the following criteria: no increased pain complaint with weight bearing and a normalized gait pattern. The modified Harris Hip Score (HHS) and Hip Outcome Score (HOS) were administered preoperatively and at six months post-op. Participants were progressed with a standard hip arthroscopy protocol. Hip flexion was limited to not exceed 90 degrees for the first three weeks post-op, with progression back to running beginning at three months. Final discharge was dependent upon the ability to run two miles at military-specified pace and to complete a single-leg broad jump within six inches of the contralateral leg without an increase in pain. Results Eleven participants met the inclusion criteria over the study period. Crutch use was discontinued at an average of five days following surgery based on the established weight bearing criteria. Only one participant required continued crutch use at 15 days. Participants’ functional outcome was improved postoperatively, as demonstrated by significant increases in HOS and HHS. At the six-month follow-up, eight of 11 participants were able to take and complete a full Army Physical Fitness Test. 
Conclusions Following completion of the early weight bearing rehabilitation protocol, 81% of participants were able to progress to full weight bearing by four days post-operative, with normalized pain-free gait patterns. Active duty personnel utilizing an early weight bearing protocol following hip arthroscopy demonstrated significant functional improvement at six months. Level of Evidence Level 4, Case-series PMID:29181261

  20. Do Knee Bracing and Delayed Weight Bearing Affect Mid-Term Functional Outcome after Anterior Cruciate Ligament Reconstruction?

    PubMed

    Di Miceli, Riccardo; Marambio, Carlotta Bustos; Zati, Alessandro; Monesi, Roberta; Benedetti, Maria Grazia

    2017-12-01

    Purpose  The aim of this study was to assess the effect of knee bracing and timing of full weight bearing after anterior cruciate ligament reconstruction (ACLR) on functional outcomes at mid-term follow-up. Methods  We performed a retrospective study on 41 patients with ACLR. Patients were divided into two groups: the ACLR group, who received isolated ACL reconstruction, and the ACLR-OI group, who received ACL reconstruction plus adjunctive surgery. Information about age at surgery, bracing, and full or progressive weight bearing permission after surgery was collected for the two groups. Subjective IKDC score was obtained at follow-up. Statistical analysis was performed to compare the two groups for IKDC score. Subgroup analysis was performed to assess the effect of postoperative regimen (knee bracing and weight bearing) on functional outcomes. Results  The mean age of patients was 30.8 ± 10.6 years. Mean IKDC score was 87.4 ± 13.9. The mean follow-up was 3.5 ± 1.8 years. Twenty-two (53.7%) patients underwent ACLR only, while 19 (46.3%) also received other interventions, such as meniscal repair and/or collateral ligament suture. Analysis of overall data showed no differences between the groups for IKDC score. Patients in the ACLR group exhibited a significantly better IKDC score when no brace and full weight bearing at 4 weeks from surgery were prescribed, in comparison with patients who wore a brace and had delayed full weight bearing. No differences were found with respect to the use of a brace and postoperative weight bearing regimen in the ACLR-OI group. Conclusion  Brace and delayed weight bearing after ACLR have a negative influence on long-term functional outcomes. Further research is required to explore possible differences between patients who underwent isolated ACLR and those with additional interventions with respect to the use of a brace and the timing of full weight bearing, to identify optimal recovery strategies. Level of Evidence  Level III, retrospective observational study.

  1. Intra-retinal segmentation of optical coherence tomography images using active contours with a dynamic programming initialization and an adaptive weighting strategy

    NASA Astrophysics Data System (ADS)

    Gholami, Peyman; Roy, Priyanka; Kuppuswamy Parthasarathy, Mohana; Ommani, Abbas; Zelek, John; Lakshminarayanan, Vasudevan

    2018-02-01

    Retinal layer shape and thickness are among the main indicators in the diagnosis of ocular diseases. We present an active contour approach to localize the intra-retinal boundaries of eight retinal layers in OCT images. The initial locations of the active contour curves are determined using a Viterbi dynamic programming method. The main energy function is a Chan-Vese active contour model without edges. A boundary term is added to the energy function using an adaptive weighting method so that, after the curves have evolved towards the boundaries, they converge to the retinal layer edges more precisely in the final iterations. A wavelet-based denoising method is used to remove speckle from the OCT images while preserving important details and edges. The performance of the proposed method was tested on a set of healthy and diseased eye SD-OCT images. The experimental results, comparing the proposed method with manual segmentation performed by an optometrist, indicate that our method obtained averages of 95.29%, 92.78%, 95.86%, 87.93%, 82.67%, and 90.25% for accuracy, sensitivity, specificity, precision, Jaccard index, and Dice similarity coefficient, respectively, over all segmented layers. These results demonstrate the robustness of the proposed method in determining the location of the different retinal layers.

  2. Query-Adaptive Reciprocal Hash Tables for Nearest Neighbor Search.

    PubMed

    Liu, Xianglong; Deng, Cheng; Lang, Bo; Tao, Dacheng; Li, Xuelong

    2016-02-01

    Recent years have witnessed the success of binary hashing techniques in approximate nearest neighbor search. In practice, multiple hash tables are usually built to cover more desired results in the hit buckets of each table. However, little work has studied a unified approach to constructing multiple informative hash tables from arbitrary hashing algorithms. Meanwhile, multiple-table search also lacks a generic, query-adaptive, fine-grained ranking scheme that can alleviate the binary quantization loss suffered by standard hashing techniques. To solve these problems, in this paper we first regard table construction as a selection problem over a set of candidate hash functions. With the graph representation of the function set, we propose an efficient solution that sequentially applies the normalized dominant set to find the most informative and independent hash functions for each table. To further reduce the redundancy between tables, we explore reciprocal hash tables in a boosting manner, where the hash function graph is updated with high weights emphasized on the misclassified neighbor pairs of previous hash tables. To refine the ranking of the retrieved buckets within a certain Hamming radius of the query, we propose a query-adaptive bitwise weighting scheme that enables fine-grained bucket ranking in each hash table, exploiting the discriminative power of its hash functions and their complement for nearest neighbor search. Moreover, we integrate this scheme into multiple-table search using a fast yet reciprocal table lookup algorithm within the adaptive weighted Hamming radius. Both the construction method and the query-adaptive search method are general and compatible with different types of hashing algorithms using different feature spaces and/or parameter settings. 
Our extensive experiments on several large-scale benchmarks demonstrate that the proposed techniques can significantly outperform both the naive construction methods and the state-of-the-art hashing algorithms.
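
    The bitwise weighting idea can be sketched independently of any particular hashing algorithm. The following is a minimal illustration, not the paper's actual scheme: retrieved codes are ranked by a weighted Hamming distance in which each bit contributes its query-adaptive weight, so disagreement on a discriminative bit costs more than on an unreliable one. All codes and weights below are made up.

```python
import numpy as np

def weighted_hamming_rank(query, codes, bit_weights):
    """Rank database codes by query-adaptive weighted Hamming distance.

    A bit that disagrees with the query contributes its weight to the
    distance, so discriminative bits (large weight) dominate the ranking.
    """
    disagree = codes != query            # (n, b) boolean disagreement mask
    dist = disagree @ bit_weights        # weighted Hamming distance per code
    return np.argsort(dist), dist

# Toy example: 4-bit codes, bit 0 treated as most discriminative.
query = np.array([1, 0, 1, 1])
codes = np.array([[1, 0, 1, 1],   # exact match
                  [0, 0, 1, 1],   # differs on the heavily weighted bit
                  [1, 0, 0, 0]])  # differs on two lightly weighted bits
weights = np.array([2.0, 0.5, 0.5, 0.5])
order, dist = weighted_hamming_rank(query, codes, weights)
```

    Note that plain Hamming distance would rank the second code (one differing bit) ahead of the third (two differing bits); the per-bit weights reverse that order.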

  3. Computation of the Complex Probability Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trainer, Amelia Jo; Ledwith, Patrick John

    The complex probability function is important in many areas of physics, and many techniques have been developed in an attempt to compute it for some z quickly and efficiently. Most prominent are the methods that use Gauss-Hermite quadrature, which uses the roots of the nth-degree Hermite polynomial and corresponding weights to approximate the complex probability function. This document serves as an overview and discussion of the use, shortcomings, and potential improvements of the Gauss-Hermite quadrature for the complex probability function.
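
    As a concrete illustration of the quadrature described above, the sketch below approximates w(z) for Im(z) > 0 using numpy's Gauss-Hermite nodes and weights; `scipy.special.wofz` would be the usual reference implementation to compare against. The node count of 64 is an arbitrary choice.

```python
import numpy as np

def faddeeva_gh(z, n=64):
    """Approximate the complex probability (Faddeeva) function w(z),
    for Im(z) > 0, via n-point Gauss-Hermite quadrature:
        w(z) = (i/pi) * integral exp(-t^2) / (z - t) dt
             ~ (i/pi) * sum_k a_k / (z - x_k)
    """
    x, a = np.polynomial.hermite.hermgauss(n)  # nodes and weights
    return 1j / np.pi * np.sum(a / (z - x))

# Known value for checking: w(i) = e * erfc(1) ~ 0.4275836.
approx = faddeeva_gh(1j)
```

    The approximation degrades as z approaches the real axis (the pole of the integrand approaches the quadrature nodes), which is one of the shortcomings the report discusses.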

  4. Particle swarm optimization-based local entropy weighted histogram equalization for infrared image enhancement

    NASA Astrophysics Data System (ADS)

    Wan, Minjie; Gu, Guohua; Qian, Weixian; Ren, Kan; Chen, Qian; Maldague, Xavier

    2018-06-01

    Infrared image enhancement plays a significant role in intelligent urban surveillance systems for smart city applications. Unlike existing methods that only stretch the global contrast, we propose a particle swarm optimization-based local entropy weighted histogram equalization that enhances both local details and foreground-background contrast. First, a novel local entropy weighted histogram depicting the distribution of detail information is calculated based on a modified hyperbolic tangent function. Then, the histogram is divided into two parts via a threshold maximizing the inter-class variance, in order to improve the contrast of the foreground and background, respectively. To avoid over-enhancement and noise amplification, double plateau thresholds of the presented histogram are formulated by means of a particle swarm optimization algorithm. Lastly, each sub-image is equalized independently according to the constrained sub-local entropy weighted histogram. Comparative experiments on real infrared images show that our algorithm outperforms other state-of-the-art methods in terms of both visual and quantitative evaluations.
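
    The plateau idea can be sketched independently of the entropy weighting and the PSO threshold search used in the paper. The toy below, with thresholds fixed by hand rather than optimized, clips histogram counts to an upper plateau before equalizing, which limits how much any single grey level (typically background) can stretch the mapping.

```python
import numpy as np

def plateau_equalize(img, upper, lower=0):
    """Equalize an 8-bit image with a plateau-limited histogram:
    counts are clipped to `upper` (suppresses noise over-enhancement)
    and raised to `lower` (preserves sparse detail bins), then the
    clipped histogram's CDF defines the grey-level mapping."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    hist = np.clip(hist, lower, upper).astype(float)
    cdf = np.cumsum(hist)
    mapping = np.round(cdf / cdf[-1] * 255).astype(np.uint8)
    return mapping[img]

# Synthetic low-contrast "infrared" image, grey levels crowded in 90..109.
rng = np.random.default_rng(0)
img = rng.integers(90, 110, size=(32, 32), dtype=np.uint8)
out = plateau_equalize(img, upper=20)
```

    After equalization the occupied grey levels are spread over most of the 0-255 range; choosing the double thresholds automatically is exactly what the paper's particle swarm optimization step does.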

  5. Auditory Weighting Functions and TTS/PTS Exposure Functions for Marine Mammals Exposed to Underwater Noise

    DTIC Science & Technology

    2016-12-01

    weighting functions utilized the “M-weighting” functions at lower frequencies, where no TTS existed at that time. Since derivation of the Phase 2… resulting shapes of the weighting functions (left) and exposure functions (right); the arrows indicate the direction of change when the designated parameter… Species group designations for Navy Phase 3 auditory weighting functions; thresholds are in dB re 1 μPa.

  6. Use of the maximum entropy method to retrieve the vertical atmospheric ozone profile and predict atmospheric ozone content

    NASA Technical Reports Server (NTRS)

    Turner, B. Curtis

    1992-01-01

    A method is developed for prediction of ozone levels in planetary atmospheres. The method is formulated in terms of error covariance matrices associated with the direct measurements, the a priori first-guess profiles, and a weighting function matrix, and is described by the linearized equation y = Ax + η, where A is the weighting function matrix, x is the profile, y is the measurement vector, and η is noise. The problems with this approach are: (1) the A matrix is nearly singular; (2) the number of unknowns in the profile exceeds the number of data points, so the solution may not be unique; and (3) even if a unique solution exists, η may cause the solution to be ill conditioned.
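
    Problems (1)-(3) are the classic symptoms of an ill-posed linear inverse problem, and a standard remedy is to regularize the least-squares solution. The sketch below is a generic damped (Tikhonov) inversion of y = Ax + η on made-up data, not the maximum entropy retrieval developed in the paper; the regularization parameter γ is chosen arbitrarily.

```python
import numpy as np

def tikhonov_retrieve(A, y, gamma):
    """Solve y = A x + eta by damped least squares:
        x_hat = argmin ||A x - y||^2 + gamma ||x||^2
              = (A^T A + gamma I)^{-1} A^T y,
    which tames a near-singular A at the cost of some bias."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + gamma * np.eye(n), A.T @ y)

# Under-determined toy problem: 5 measurements, 20-level profile.
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 20))          # broad, overlapping weighting functions
x_true = np.exp(-0.5 * ((np.arange(20) - 10) / 4.0) ** 2)  # smooth layer
y = A @ x_true + 0.01 * rng.normal(size=5)
x_hat = tikhonov_retrieve(A, y, gamma=0.1)
```

    With more unknowns than measurements the fit cannot pin down x uniquely; the γ term selects the small-norm solution among those consistent with y, which is the same role the a priori covariance plays in the retrieval described above.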

  7. Study on multimodal transport route under low carbon background

    NASA Astrophysics Data System (ADS)

    Liu, Lele; Liu, Jie

    2018-06-01

    Low-carbon environmental protection is a worldwide focus of attention, and research on carbon emissions from both production and daily life is ongoing. However, there is little literature, domestic or international, on multimodal transport based on carbon emissions. This paper first introduces the theory of multimodal transport and analyzes multimodal transport models both with and without carbon-emission considerations. On this basis, a multi-objective 0-1 programming model minimizing both total transportation cost and total carbon emissions is proposed. Weighting is applied within the ideal-point method, transforming the multi-objective program into a single objective function. The optimal trade-off between carbon emissions and transportation cost under different weights is determined by this single objective function with variable weights. Based on the model and algorithm, an example is given and the results are analyzed.
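
    A minimal sketch of the weighted ideal-point idea (route options and numbers are invented): each objective is normalized by its range, the ideal point collects the per-objective minima, and varying the weights shifts which route minimizes the weighted distance to that point.

```python
def ideal_point_choice(routes, w_cost, w_co2):
    """Pick the route minimizing the weighted, normalized distance to
    the ideal point (the per-objective minima, usually unattainable)."""
    costs = [c for c, _ in routes.values()]
    emis = [e for _, e in routes.values()]
    c_min, c_rng = min(costs), max(costs) - min(costs)
    e_min, e_rng = min(emis), max(emis) - min(emis)

    def score(route):
        c, e = routes[route]
        return w_cost * (c - c_min) / c_rng + w_co2 * (e - e_min) / e_rng

    return min(routes, key=score)

# Hypothetical corridor options: (total cost, total CO2 emissions).
routes = {"road only": (700, 90), "rail-road": (900, 40), "water-rail": (1100, 25)}
cheap_first = ideal_point_choice(routes, w_cost=0.9, w_co2=0.1)   # cost dominates
green_first = ideal_point_choice(routes, w_cost=0.1, w_co2=0.9)   # emissions dominate
```

    Sweeping the weight pair from cost-heavy to emissions-heavy traces out the trade-off curve the paper analyzes in its numerical example.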

  8. Nontangent, Developed Contour Bulkheads for a Single-Stage Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Wu, K. Chauncey; Lepsch, Roger A., Jr.

    2000-01-01

    Dry weights for single-stage launch vehicles that incorporate nontangent, developed contour bulkheads are estimated and compared to a baseline vehicle with 1.414 aspect ratio ellipsoidal bulkheads. Weights, volumes, and heights of optimized bulkhead designs are computed using a preliminary design bulkhead analysis code. The dry weights of vehicles that incorporate the optimized bulkheads are predicted using a vehicle weights and sizing code. Two optimization approaches are employed. A structural-level method, where the vehicle's three major bulkhead regions are optimized separately and then incorporated into a model for computation of the vehicle dry weight, predicts a reduction of 4365 lb (2.2%) from the 200,679-lb baseline vehicle dry weight. In the second, vehicle-level, approach, the vehicle dry weight is the objective function for the optimization. For the vehicle-level analysis, modified bulkhead designs are analyzed and incorporated into the weights model for computation of a dry weight. The optimizer simultaneously manipulates design variables for all three bulkheads to reduce the dry weight. The vehicle-level analysis predicts a dry weight reduction of 5129 lb, a 2.6% reduction from the baseline weight. Based on these results, nontangent, developed contour bulkheads may provide substantial weight savings for single-stage vehicles.

  9. Local-learning-based neuron selection for grasping gesture prediction in motor brain machine interfaces

    NASA Astrophysics Data System (ADS)

    Xu, Kai; Wang, Yiwen; Wang, Yueming; Wang, Fang; Hao, Yaoyao; Zhang, Shaomin; Zhang, Qiaosheng; Chen, Weidong; Zheng, Xiaoxiang

    2013-04-01

    Objective. High-dimensional neural recordings bring computational challenges to movement decoding in motor brain machine interfaces (mBMI), especially for portable applications. However, not all recorded neural activities relate to the execution of a given movement task. This paper proposes a local-learning-based method to perform neuron selection for gesture prediction in a reaching and grasping task. Approach. Nonlinear neural activities are decomposed into a set of linear ones in a weighted feature space. A margin is defined to measure the distance between inter-class and intra-class neural patterns. The weights, reflecting the importance of neurons, are obtained by minimizing a margin-based exponential error function. To find the most dominant neurons in the task, 1-norm regularization is introduced to the objective function to yield sparse weights, where near-zero weights indicate irrelevant neurons. Main results. The signals of only 10 neurons out of 70 selected by the proposed method achieve over 95% of the full recording's decoding accuracy for gesture prediction, regardless of which decoding method is used (support vector machine or K-nearest neighbor). The temporal activities of the selected neurons show visually distinguishable patterns associated with various hand states. Compared with other algorithms, the proposed method better eliminates irrelevant neurons with near-zero weights and provides the important neuron subset with the statistically best decoding performance. The weights of important neurons usually converge within 10-20 iterations. In addition, we study the temporal and spatial variation of neuron importance over a period of one and a half months in the same task. A high decoding performance can be maintained by updating the neuron subset. Significance. 
    The proposed algorithm effectively ascertains neuronal importance without assuming any coding model and performs well with different decoding models. It is more robust at identifying the important neurons in the presence of noisy signals. The low demand for computational resources, reflected by the fast convergence, indicates the method's feasibility for portable mBMI systems. Ascertaining the important neurons helps to visually inspect neural patterns associated with the movement task. Eliminating irrelevant neurons greatly reduces the computational burden of mBMI systems while maintaining performance with better robustness.
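
    The 1-norm-regularized weighting idea can be illustrated with a generic sparse sketch. This uses a plain L1-penalized logistic loss with proximal gradient (ISTA) steps rather than the paper's margin-based exponential error, and the "firing rates" are synthetic, with only the first two of ten neurons informative.

```python
import numpy as np

def l1_neuron_weights(X, y, lam=0.02, lr=0.1, iters=1000):
    """Learn sparse per-neuron weights with an L1-regularized logistic
    loss via proximal gradient (ISTA); near-zero weights flag neurons
    irrelevant to the decoded hand state."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))     # predicted class probability
        w -= lr * X.T @ (p - y) / n          # gradient step on logistic loss
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

# Synthetic firing rates: only neurons 0 and 1 carry the grasp label.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200).astype(float)
X = rng.normal(size=(200, 10))
X[:, 0] += 2.0 * (2 * y - 1)    # informative neuron
X[:, 1] += 1.5 * (2 * y - 1)    # informative neuron
w = l1_neuron_weights(X, y)
```

    Ranking neurons by |w| recovers the informative pair, mirroring how the paper's near-zero weights mark neurons that can be dropped.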

  10. Comparing Analytic Hierarchy Process and Discrete-Choice Experiment to Elicit Patient Preferences for Treatment Characteristics in Age-Related Macular Degeneration.

    PubMed

    Danner, Marion; Vennedey, Vera; Hiligsmann, Mickaël; Fauser, Sascha; Gross, Christian; Stock, Stephanie

    2017-09-01

    In this study, we conducted an analytic hierarchy process (AHP) and a discrete choice experiment (DCE) to elicit the preferences of patients with age-related macular degeneration using identical attributes and levels. The objective was to compare the preference-based weights for treatment attributes and levels generated by the two elicitation methods; the properties of both methods, including ease of instrument use, were also assessed. A DCE and an AHP experiment were designed on the basis of five attributes. Preference-based weights were generated using the matrix multiplication method for attributes and levels in the AHP and a mixed multinomial logit model for levels in the DCE. Attribute importance was further compared using coefficient (DCE) and weight (AHP) level ranges. Questionnaire difficulty was rated on a qualitative scale, and patients were asked to think aloud while providing their judgments. AHP and DCE generated similar results regarding levels, stressing a preference for visual improvement, frequent monitoring, on-demand and less frequent injection schemes, approved drugs, and mild side effects. Attribute weights derived from level ranges led to a ranking opposite to the directly calculated AHP attribute weights; for example, visual function ranked first in the AHP and last on the basis of level ranges. The results across the methods were thus similar, with one exception: the directly measured AHP attribute weights differed from the level-based interpretation of attribute importance in both DCE and AHP. The dependence/independence of attribute importance on level ranges in DCE and AHP, respectively, should be taken into account when choosing a method to support decision making. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  11. Policy Gradient Adaptive Dynamic Programming for Data-Based Optimal Control.

    PubMed

    Luo, Biao; Liu, Derong; Wu, Huai-Ning; Wang, Ding; Lewis, Frank L

    2017-10-01

    The model-free optimal control problem of general discrete-time nonlinear systems is considered in this paper, and a data-based policy gradient adaptive dynamic programming (PGADP) algorithm is developed to design an adaptive optimal controller. Using offline and online data rather than a mathematical system model, the PGADP algorithm improves the control policy with a gradient descent scheme. The convergence of the PGADP algorithm is proved by demonstrating that the constructed Q-function sequence converges to the optimal Q-function. Based on the PGADP algorithm, the adaptive control method is developed with an actor-critic structure and the method of weighted residuals. Its convergence properties are analyzed, where the approximate Q-function converges to its optimum. Computer simulation results demonstrate the effectiveness of the PGADP-based adaptive control method.
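
    The "data rather than model" flavor of such algorithms can be illustrated with the simplest possible relative: tabular Q-learning from a fixed batch of logged transitions. This is a generic sketch on an invented two-state example, not the PGADP algorithm itself (which uses policy gradients and function approximation).

```python
import numpy as np

def q_from_data(transitions, n_states, n_actions, gamma=0.9, alpha=0.5, sweeps=200):
    """Model-free Q-function learning from logged (s, a, r, s') data:
    repeated stochastic-approximation updates toward the Bellman
    optimality target, without ever touching a system model."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(sweeps):
        for s, a, r, s2 in transitions:
            target = r + gamma * Q[s2].max()
            Q[s, a] += alpha * (target - Q[s, a])
    return Q

# Two-state chain: action 1 in state 0 pays 1 and leads to state 1,
# where action 0 pays 2 and self-loops (the best long-run policy).
data = [(0, 0, 0.0, 0), (0, 1, 1.0, 1), (1, 0, 2.0, 1), (1, 1, 0.0, 0)]
Q = q_from_data(data, n_states=2, n_actions=2)
```

    The fixed point is computable by hand here (Q(1,0) = 2/(1-0.9) = 20, Q(0,1) = 1 + 0.9*20 = 19), so convergence of the Q sequence to the optimal Q-function, the property proved in the paper for PGADP, can be checked directly.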

  12. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized as Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments is reviewed, along with some of its problems and the conditions under which it fails. In later sections, the functional form of the maximum entropy method of moments probability distribution is incorporated into Bayesian probability theory. It is shown that Bayesian probability theory solves all of the problems of the maximum entropy method of moments: one obtains posterior probabilities for the Lagrange multipliers and, finally, can put error bars on the resulting estimated density function.
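
    For a finite support and a single moment constraint, the method of moments reduces to solving for one Lagrange multiplier, which bisection handles robustly. A small sketch of that textbook case (Jaynes' loaded-die example; the target mean is arbitrary):

```python
import math

def maxent_pmf(values, target_mean, lo=-50.0, hi=50.0):
    """Maximum entropy distribution on a finite support subject to a
    mean constraint: p_i proportional to exp(lam * x_i), with the
    Lagrange multiplier lam found by bisection on the mean (which is
    monotonically increasing in lam)."""
    def mean(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w], lam

# Loaded-die example: faces 1..6 with observed mean 4.5.
p, lam = maxent_pmf([1, 2, 3, 4, 5, 6], 4.5)
```

    With several moment constraints the single multiplier becomes a vector and the bisection becomes a multidimensional optimization, which is where the numerical difficulties reviewed in the paper arise.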

  13. The Least-Squares Estimation of Latent Trait Variables.

    ERIC Educational Resources Information Center

    Tatsuoka, Kikumi

    This paper presents a new method for estimating a given latent trait variable by the least-squares approach. The beta weights are obtained recursively with the help of Fourier series and expressed as functions of item parameters of response curves. The values of the latent trait variable estimated by this method and by maximum likelihood method…

  14. Optimum target sizes for a sequential sawing process

    Treesearch

    H. Dean Claxton

    1972-01-01

    A method for solving a class of problems in random sequential processes is presented. Sawing cedar pencil blocks is used to illustrate the method. Equations are developed for the function representing loss from improper sizing of blocks. A weighted over-all distribution for sawing and drying operations is developed and graphed. Loss minimizing changes in the control...

  15. Apparatus and method for fabricating a microbattery

    DOEpatents

    Shul, Randy J.; Kravitz, Stanley H.; Christenson, Todd R.; Zipperian, Thomas E.; Ingersoll, David

    2002-01-01

    An apparatus and method for fabricating a microbattery that uses silicon as the structural component, packaging component, and semiconductor to reduce the weight, size, and cost of thin film battery technology is described. When combined with advanced semiconductor packaging techniques, such a silicon-based microbattery enables the fabrication of autonomous, highly functional, integrated microsystems having broad applicability.

  16. Airbreathing engine selection criteria for SSTO propulsion system

    NASA Astrophysics Data System (ADS)

    Ohkami, Yoshiaki; Maita, Masataka

    1995-02-01

    This paper presents airbreathing engine selection criteria to be applied to the propulsion system of a Single Stage To Orbit (SSTO) vehicle. To establish the criteria, a relation among three major parameters, i.e., delta-V capability, weight penalty, and effective specific impulse of the engine subsystem, is derived relative to the corresponding parameters of the LH2/LOX rocket engine. The effective specific impulse is a function of the engine I(sub sp) and the vehicle thrust-to-drag ratio, which is approximated by a function of the vehicle velocity. The weight penalty includes the engine dry weight and the cooling-subsystem weight. The delta-V capability is defined by the velocity region from the minimum operating velocity up to the maximum velocity. Vehicle feasibility is investigated in terms of the structural and propellant weights, which requires an iteration process adjusting the system parameters. The system parameters are computed by iteration based on the Newton-Raphson method. It is concluded that performance in the higher velocity region is extremely important, so the airbreathing engines are required to operate beyond the velocity equivalent to the rocket engine exhaust velocity (approximately 4500 m/s).
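
    The closure iteration can be illustrated with a toy mass balance (all numbers invented, and far simpler than the paper's SSTO model): Newton-Raphson drives the gross-mass residual, payload plus structure plus rocket-equation propellant minus gross mass, to zero.

```python
import math

def size_vehicle(m_payload, delta_v, isp, f_struct, g0=9.81, m0_guess=1e5):
    """Newton-Raphson closure of the gross-mass balance
        f(m0) = m0 - m_payload - f_struct*m0 - m0*(1 - exp(-dv/(g0*Isp))) = 0,
    i.e. gross mass = payload + structure + propellant (rocket equation)."""
    prop_frac = 1.0 - math.exp(-delta_v / (g0 * isp))
    f = lambda m0: m0 - m_payload - f_struct * m0 - prop_frac * m0
    dfdm = 1.0 - f_struct - prop_frac          # constant derivative here
    m0 = m0_guess
    for _ in range(50):
        m0 -= f(m0) / dfdm                     # Newton-Raphson step
    return m0

# Hypothetical all-rocket case: 1000 kg payload, 9000 m/s, Isp 450 s.
m0 = size_vehicle(m_payload=1000.0, delta_v=9000.0, isp=450.0, f_struct=0.08)
```

    In a real sizing loop the structure fraction and Isp would themselves depend on m0 and the trajectory, making the residual nonlinear and the Newton-Raphson iteration genuinely necessary.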

  17. Flagellated bacterial motility in polymer solutions

    PubMed Central

    Martinez, Vincent A.; Schwarz-Linek, Jana; Reufer, Mathias; Wilson, Laurence G.; Morozov, Alexander N.; Poon, Wilson C. K.

    2014-01-01

    It is widely believed that the swimming speed, v, of many flagellated bacteria is a nonmonotonic function of the concentration, c, of high-molecular-weight linear polymers in aqueous solution, showing peaked v(c) curves. Pores in the polymer solution were suggested as the explanation. Quantifying this picture led to a theory that predicted peaked v(c) curves. Using high-throughput methods for characterizing motility, we measured v and the angular frequency of cell body rotation, Ω, of motile Escherichia coli as a function of polymer concentration in polyvinylpyrrolidone (PVP) and Ficoll solutions of different molecular weights. We find that nonmonotonic v(c) curves are typically due to low-molecular-weight impurities. After purification by dialysis, the measured v(c) and Ω(c) relations for all but the highest-molecular-weight PVP can be described in detail by Newtonian hydrodynamics. There is clear evidence for non-Newtonian effects in the highest-molecular-weight PVP solution. Calculations suggest that this is due to the fast-rotating flagella seeing a lower viscosity than the cell body, so that flagella can be seen as nano-rheometers for probing the non-Newtonian behavior of high polymer solutions on a molecular scale. PMID:25468981

  18. Computation and application of tissue-specific gene set weights.

    PubMed

    Frost, H Robert

    2018-04-06

    Gene set testing, or pathway analysis, has become a critical tool for the analysis of high-dimensional genomic data. Although the function and activity of many genes and higher-level processes are tissue-specific, gene set testing is typically performed in a tissue-agnostic fashion, which impacts statistical power and the interpretation and replication of results. To address this challenge, we have developed a bioinformatics approach to compute tissue-specific weights for individual gene sets using information on tissue-specific gene activity from the Human Protein Atlas (HPA). We used this approach to create a public repository of tissue-specific gene set weights for 37 different human tissue types from the HPA and all collections in the Molecular Signatures Database (MSigDB). To demonstrate the validity and utility of these weights, we explored three different applications: the functional characterization of human tissues, multi-tissue analysis for systemic diseases, and tissue-specific gene set testing. All data used in the reported analyses is publicly available. An R implementation of the method and tissue-specific weights for MSigDB gene set collections can be downloaded at http://www.dartmouth.edu/∼hrfrost/TissueSpecificGeneSets. rob.frost@dartmouth.edu.

  19. Binding ligand prediction for proteins using partial matching of local surface patches.

    PubMed

    Sael, Lee; Kihara, Daisuke

    2010-01-01

    Functional elucidation of uncharacterized protein structures is an important task in bioinformatics. We report our new approach for structure-based function prediction which captures local surface features of ligand binding pockets. Function of proteins, specifically, binding ligands of proteins, can be predicted by finding similar local surface regions of known proteins. To enable partial comparison of binding sites in proteins, a weighted bipartite matching algorithm is used to match pairs of surface patches. The surface patches are encoded with the 3D Zernike descriptors. Unlike the existing methods which compare global characteristics of the protein fold or the global pocket shape, the local surface patch method can find functional similarity between non-homologous proteins and binding pockets for flexible ligand molecules. The proposed method improves prediction results over global pocket shape-based method which was previously developed by our group.
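
    The partial-matching step can be sketched with a brute-force minimum-cost bipartite matching over a handful of patches (a Hungarian-algorithm implementation such as `scipy.optimize.linear_sum_assignment` would be used at scale). The cost matrix here is invented.

```python
from itertools import permutations

def best_patch_matching(cost):
    """Minimum-cost bipartite matching of surface patches by brute
    force (fine for a handful of patches). cost[i][j] is the
    dissimilarity of query patch i and target patch j, e.g. a distance
    between their 3D Zernike descriptors."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))

# Hypothetical 3x3 patch-dissimilarity matrix.
cost = [[0.1, 0.9, 0.8],
        [0.7, 0.2, 0.9],
        [0.8, 0.6, 0.3]]
assignment, total = best_patch_matching(cost)
```

    Because the matching pairs individual patches rather than whole pockets, two pockets can score as similar even when only part of each surface agrees, which is what allows comparisons across non-homologous proteins.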

  1. Diet-Induced Weight Loss Alters Functional Brain Responses during an Episodic Memory Task

    PubMed Central

    Boraxbekk, Carl-Johan; Stomby, Andreas; Ryberg, Mats; Lindahl, Bernt; Larsson, Christel; Nyberg, Lars; Olsson, Tommy

    2015-01-01

    Objective It has been suggested that overweight is negatively associated with cognitive functions. The aim of this study was to investigate whether a reduction in body weight by dietary interventions could improve episodic memory performance and alter associated functional brain responses in overweight and obese women. Methods 20 overweight postmenopausal women were randomized to either a modified paleolithic diet or a standard diet adhering to the Nordic Nutrition Recommendations for 6 months. We used functional magnetic resonance imaging to examine brain function during an episodic memory task as well as anthropometric and biochemical data before and after the interventions. Results Episodic memory performance improved significantly (p = 0.010) after the dietary interventions. Concomitantly, brain activity increased in the anterior part of the right hippocampus during memory encoding, without differences between diets. This was associated with decreased levels of plasma free fatty acids (FFA). Brain activity increased in pre-frontal cortex and superior/middle temporal gyri. The magnitude of increase correlated with waist circumference reduction. During episodic retrieval, brain activity decreased in inferior and middle frontal gyri, and increased in middle/superior temporal gyri. Conclusions Diet-induced weight loss, associated with decreased levels of plasma FFA, improves episodic memory linked to increased hippocampal activity. PMID:26139105

  2. Effects of complete water fasting and regeneration diet on kidney function, oxidative stress and antioxidants.

    PubMed

    Mojto, V; Gvozdjakova, A; Kucharska, J; Rausova, Z; Vancova, O; Valuch, J

    2018-01-01

    The aim of the study was to observe the influence of an 11-day complete water fast (WF) and a regeneration diet (RD) on renal function, body weight, blood pressure and oxidative stress. Therapeutic WF is considered a healing method. Ten volunteers drank only water for 11 days, followed by RD for the next 11 days. Data on body weight, blood pressure, kidney function, antioxidants, lipid peroxidation, cholesterols, triacylglycerols and selected biochemical parameters were obtained. WF increased uric acid and creatinine and decreased the glomerular filtration rate. After RD, the parameters were comparable to baseline values. Urea was not affected. Lipid peroxidation (TBARS) decreased and remained stable after RD. Fasting decreased α-tocopherol and increased γ-tocopherol; no significant changes were found after RD. Coenzyme Q10 decreased after RD. HDL-cholesterol decreased during WF. Total- and LDL-cholesterol decreased after RD. Other biochemical parameters were within the range of reference values. The effect of the complete fast on kidney function was manifested by hyperuricemia. Renal function decreased slightly but remained within the reference values; after RD, it returned to baseline. The positive effect of the complete water fast was the reduction of oxidative stress, body weight and blood pressure (Tab. 3, Ref. 25).

  3. Psychosocial outcomes in a weight loss camp for overweight youth

    PubMed Central

    QUINLAN, NICOLE P.; KOLOTKIN, RONETTE L.; FUEMMELER, BERNARD F.; COSTANZO, PHILIP R.

    2015-01-01

    Objective There is good evidence that youth attending weight loss camps in the UK and US are successful at achieving weight loss. Limited research suggests improvement in body image and self-esteem as well. This study evaluated changes in eight psychosocial variables following participation in a weight loss camp and examined the role of gender, age, length of stay, and body mass index (BMI) in these changes. Methods This was an observational and self-report study of 130 participants (mean age=12.8; mean BMI=33.5; 70% female; 77% Caucasian). The program consisted of an 1800 kcal/day diet, daily supervised physical activities, cooking/nutrition classes, and weekly psycho-educational/support groups led by psychology staff. Participants completed measures of anti-fat attitudes, values (e.g., value placed on appearance, athletic ability, popularity), body- and self-esteem, weight- and health-related quality of life, self-efficacy, and depressive symptoms. Results Participants experienced significant BMI reduction (average decrease of 7.5 kg [standard deviation, SD=4.2] and 2.9 BMI points [SD=1.4]). Participants also exhibited significant improvements in body esteem, self-esteem, self-efficacy, generic and weight-related quality of life, anti-fat attitudes, and the importance placed on appearance. Changes in self-efficacy, physical functioning and social functioning remained significant even after adjusting for initial zBMI, BMI change, and length of stay. Gender differences were found in changes in self-efficacy, depressive symptoms, and social functioning. Conclusion Participation in weight loss programs in a group setting, such as a camp, may have added benefit beyond BMI reduction. Greater attention to changes in psychosocial variables may be warranted when designing such programs for youth. PMID:19107660

  4. Shuttle user analysis (study 2.2). Volume 4: Standardized subsystem modules analysis

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The capability to analyze payloads constructed of standardized modules was provided for the planning of future mission models. An inventory of standardized module designs previously obtained was used as a starting point. Some of the conclusions and recommendations are: (1) the two growth-factor synthesis methods provide logical configurations for satellite type selection; (2) the recommended method is the one that determines the growth factor as a function of the baseline subsystem weight, since it provides a larger growth factor for small subsystem weights and results in a greater overkill due to standardization; (3) the method that depends upon a subsystem similarity selection is not recommended, since care must be used in making the similarity selection; (4) it is recommended that the application of standardized subsystem factors be limited to satellites with baseline dry weights between about 700 and 6,500 lbs; and (5) the standardized satellite design approach applies to satellites maintainable in orbit or retrieved for ground maintenance.

  5. Determination of the molecular weight of low-molecular-weight heparins by using high-pressure size exclusion chromatography on line with a triple detector array and conventional methods.

    PubMed

    Bisio, Antonella; Mantegazza, Alessandra; Vecchietti, Davide; Bensi, Donata; Coppa, Alessia; Torri, Giangiacomo; Bertini, Sabrina

    2015-03-19

    The evaluation of weight average molecular weight (Mw) and molecular weight distribution represents one of the most controversial aspects concerning the characterization of low molecular weight heparins (LMWHs). As the most commonly used method for the measurement of such parameters is high performance size exclusion chromatography (HP-SEC), the soundness of results mainly depends on the appropriate calibration of the chromatographic columns used. With the aim of meeting the requirement of proper Mw standards for LMWHs, in the present work the determination of molecular weight parameters (Mw and Mn) by HP-SEC combined with a triple detector array (TDA) was performed. The HP-SEC/TDA technique permits the evaluation of polymeric samples by exploiting the combined and simultaneous action of three on-line detectors: light scattering detectors (LALLS/RALLS); refractometer and viscometer. Three commercial LMWH samples, enoxaparin, tinzaparin and dalteparin, a γ-ray depolymerized heparin (γ-Hep) and its chromatographic fractions, and a synthetic pentasaccharide were analysed by HP-SEC/TDA. The same samples were analysed also with a conventional HP-SEC method employing refractive index (RI) and UV detectors and two different chromatographic column set, silica gel and polymeric gel columns. In both chromatographic systems, two different calibration curves were built up by using (i) γ-Hep chromatographic fractions and the corresponding Mw parameters obtained via HP-SEC/TDA; (ii) the whole γ-Hep preparation with broad Mw dispersion and the corresponding cumulative distribution function calculated via HP-SEC/TDA. In addition, also a chromatographic column calibration according to European Pharmacopoeia indication was built up. By comparing all the obtained results, some important differences among Mw and size distribution values of the three LMWHs were found with the five different calibration methods and with HP-SEC/TDA method. 
    In particular, the detection of the lower molecular weight components turned out to be the most critical aspect. Whereas HP-SEC/TDA may underestimate species under 2 kDa when present in low concentration, other methods appeared to emphasize their content.
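
    The moment definitions underlying Mw and Mn are simple enough to sketch directly; the chain-mass distribution below is invented and merely LMWH-like in scale.

```python
def molecular_weight_averages(fractions):
    """Number- and weight-average molecular weights from (n_i, M_i) pairs:
        Mn = sum(n_i * M_i)   / sum(n_i)
        Mw = sum(n_i * M_i^2) / sum(n_i * M_i)
    Mw weights heavy chains more, so Mw >= Mn, and Mw/Mn is the
    dispersity of the distribution."""
    s0 = sum(n for n, m in fractions)
    s1 = sum(n * m for n, m in fractions)
    s2 = sum(n * m * m for n, m in fractions)
    return s1 / s0, s2 / s1   # (Mn, Mw)

# Hypothetical distribution: (mole fraction, chain mass in Da).
fractions = [(0.2, 2000.0), (0.5, 4500.0), (0.3, 8000.0)]
mn, mw = molecular_weight_averages(fractions)
```

    Because Mw is dominated by the heavy tail while the light fractions mostly affect Mn, mis-detecting species under 2 kDa shifts the two averages differently, which is why the calibration differences above show up most strongly there.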

  6. Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Chang, K. C.

    2005-05-01

    Probabilistic inference for Bayesian networks is in general NP-hard, whether exact algorithms or approximate methods are used. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution under a given time constraint. Several simulation methods are currently available: logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these simulation methods, and then propose an improved importance sampling algorithm called linear Gaussian importance sampling for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function and Gaussian additive noise to approximate the true conditional probability distribution of a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. A performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
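
    As a rough illustration of the likelihood weighting baseline that LGIS improves upon, here is a toy sampler for a two-node discrete network A → B. The structure and probabilities are invented for illustration, not taken from the paper's test models:

```python
import random

P_A = 0.3                       # P(A = 1), invented
P_B_GIVEN_A = {1: 0.9, 0: 0.2}  # P(B = 1 | A), invented

def likelihood_weighting(evidence_b, n=100000, seed=0):
    """Estimate P(A = 1 | B = evidence_b): sample non-evidence nodes from
    their priors and weight each sample by the likelihood of the evidence."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        a = 1 if rng.random() < P_A else 0           # sample non-evidence node A
        p_b1 = P_B_GIVEN_A[a]
        w = p_b1 if evidence_b == 1 else 1.0 - p_b1  # evidence likelihood as weight
        num += w * a
        den += w
    return num / den

est = likelihood_weighting(1)  # exact posterior is 0.27/0.41, about 0.659
```

    The estimator converges to the exact posterior as the sample count grows, but can be slow when the evidence is unlikely, which is what better importance functions address.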

  7. Approximate analytical relationships for linear optimal aeroelastic flight control laws

    NASA Astrophysics Data System (ADS)

    Kassem, Ayman Hamdy

    1998-09-01

    This dissertation introduces new methods to uncover functional relationships between design parameters of a contemporary control design technique and the resulting closed-loop properties. Three new methods are developed for generating such relationships through analytical expressions: the Direct Eigen-Based Technique, the Order of Magnitude Technique, and the Cost Function Imbedding Technique. Efforts concentrated on the linear-quadratic state-feedback control-design technique applied to an aeroelastic flight control task. For this specific application, simple and accurate analytical expressions for the closed-loop eigenvalues and zeros in terms of basic parameters such as stability and control derivatives, structural vibration damping and natural frequency, and cost function weights are generated. These expressions explicitly indicate how the weights augment the short period and aeroelastic modes, as well as the closed-loop zeros, and by what physical mechanism. The analytical expressions are used to address topics such as damping, nonminimum phase behavior, stability, and performance with robustness considerations, and design modifications. This type of knowledge is invaluable to the flight control designer and would be more difficult to formulate when obtained from numerical-based sensitivity analysis.

  8. Cuckoo Search with Lévy Flights for Weighted Bayesian Energy Functional Optimization in Global-Support Curve Data Fitting

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. PMID:24977175
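
    A minimal sketch of the cuckoo search loop with Lévy flights, applied here to a simple sphere test function rather than the paper's weighted Bayesian energy functional. The bounds, step scale, and objective are arbitrary illustrative choices; the two parameters the abstract mentions are the population size n and the discovery probability pa:

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Mantegna's algorithm for a Levy-stable step length."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim=2, n=15, pa=0.25, iters=300, seed=1):
    rng = random.Random(seed)
    nests = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    best = min(nests, key=f)
    for _ in range(iters):
        for i in range(n):
            # new candidate via a Levy flight around nest i
            cand = [x + 0.1 * levy_step(rng) for x in nests[i]]
            j = rng.randrange(n)
            if f(cand) < f(nests[j]):
                nests[j] = cand
        nests.sort(key=f)
        for i in range(int((1 - pa) * n), n):  # abandon the worst fraction pa
            nests[i] = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        best = min(min(nests, key=f), best, key=f)
    return best

best = cuckoo_search(lambda p: sum(x * x for x in p))  # minimum is at the origin
```

    The heavy-tailed Lévy steps mix short local moves with occasional long jumps, which is what lets the method escape poor regions with so few tunable parameters.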

  9. Cuckoo search with Lévy flights for weighted Bayesian energy functional optimization in global-support curve data fitting.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way.

  10. Coupling Finite Element and Meshless Local Petrov-Galerkin Methods for Two-Dimensional Potential Problems

    NASA Technical Reports Server (NTRS)

    Chen, T.; Raju, I. S.

    2002-01-01

    A coupled finite element (FE) method and meshless local Petrov-Galerkin (MLPG) method for analyzing two-dimensional potential problems is presented in this paper. The analysis domain is subdivided into two regions, a finite element (FE) region and a meshless (MM) region. A single weighted residual form is written for the entire domain. Independent trial and test functions are assumed in the FE and MM regions. A transition region is created between the two regions. The transition region blends the trial and test functions of the FE and MM regions. The trial function blending is achieved using a technique similar to the 'Coons patch' method that is widely used in computer-aided geometric design. The test function blending is achieved by using either FE or MM test functions on the nodes in the transition element. The technique was evaluated by applying the coupled method to two potential problems governed by the Poisson equation. The coupled method passed all the patch test problems and gave accurate solutions for the problems studied.

  11. Measuring Work Functioning: Validity of a Weighted Composite Work Functioning Approach.

    PubMed

    Boezeman, Edwin J; Sluiter, Judith K; Nieuwenhuijsen, Karen

    2015-09-01

    To examine the construct validity of a weighted composite work functioning measurement approach. Workers (health-impaired/healthy) (n = 117) completed a composite measure survey that recorded four central work functioning aspects with existing scales: capacity to work, quality of work performance, quantity of work, and recovery from work. Previously derived weights reflecting the relative importance of these aspects of work functioning were used to calculate the composite weighted work functioning score of the workers. Work role functioning, productivity, and quality of life were used for validation. Correlations were calculated and norms applied to examine convergent and divergent construct validity. A t test was conducted and a norm applied to examine discriminative construct validity. Overall, the weighted composite work functioning measure demonstrated construct validity. As predicted, the weighted composite score correlated (p < .001) strongly (r > .60) with work role functioning and productivity (convergent construct validity), and moderately (.30 < r < .60) with physical quality of life and less strongly than work role functioning and productivity with mental quality of life (divergent validity). Further, the weighted composite measure detected, with a large effect size (Cohen's d > .80), that health-impaired workers show significantly worse work functioning than healthy workers (discriminative validity). The weighted composite work functioning measurement approach takes into account the relative importance of the different work functioning aspects and demonstrated good convergent, fair divergent, and good discriminative construct validity.
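
    The weighted composite idea itself is a simple importance-weighted sum. The weights and aspect scores below are invented placeholders, not the study's derived weights:

```python
# Weighted composite score: aspect scores (0-100 scale) combined with
# importance weights that sum to 1. All numbers are illustrative only.

def composite(scores, weights):
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must be normalized
    return sum(weights[k] * scores[k] for k in weights)

weights = {"capacity": 0.30, "quality": 0.30, "quantity": 0.25, "recovery": 0.15}
scores = {"capacity": 80, "quality": 70, "quantity": 90, "recovery": 60}
print(composite(scores, weights))  # 76.5
```

    Compared with an unweighted mean, the weighted sum lets the aspects judged more important dominate the composite.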

  12. Extracting surface waves, hum and normal modes: time-scale phase-weighted stack and beyond

    NASA Astrophysics Data System (ADS)

    Ventosa, Sergi; Schimmel, Martin; Stutzmann, Eleonore

    2017-10-01

    Stacks of ambient noise correlations are routinely used to extract empirical Green's functions (EGFs) between station pairs. The time-frequency phase-weighted stack (tf-PWS) is a physically intuitive nonlinear denoising method that uses phase coherence to improve EGF convergence when the performance of conventional linear averaging methods is not sufficient. The high computational cost of a continuous approach to the time-frequency transformation is currently a main limitation in ambient noise studies. We introduce the time-scale phase-weighted stack (ts-PWS) as an alternative extension of the phase-weighted stack that uses complex frames of wavelets to build a time-frequency representation that is much more efficient and faster to compute, and that preserves the performance and flexibility of the tf-PWS. In addition, we propose two strategies, the unbiased phase coherence and the two-stage ts-PWS methods, to further improve noise attenuation, the quality of the extracted signals, and convergence speed. We demonstrate that these approaches enable the extraction of minor- and major-arc Rayleigh waves (up to the sixth Rayleigh wave train) from many years of data from the GEOSCOPE global network. Finally, we also show that fundamental spheroidal modes can be extracted from these EGFs.
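
    The plain time-domain phase-weighted stack that tf-PWS and ts-PWS extend can be sketched as follows (synthetic traces invented for illustration; this is the basic PWS, not the wavelet-frame variant of the paper):

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (same construction as scipy.signal.hilbert)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

def pws(traces, nu=2.0):
    """Linear stack modulated by instantaneous-phase coherence."""
    traces = np.asarray(traces, dtype=float)
    phasors = np.array([np.exp(1j * np.angle(analytic(tr))) for tr in traces])
    coherence = np.abs(phasors.mean(axis=0))  # ~1 where phases agree, ~0 otherwise
    return traces.mean(axis=0) * coherence ** nu

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 256, endpoint=False)
sig = np.sin(2 * np.pi * 8 * t)
traces = [sig + 0.5 * rng.standard_normal(t.size) for _ in range(20)]
stacked = pws(traces)  # coherent signal survives, incoherent noise is damped
```

    Since the coherence factor never exceeds 1, the phase-weighted stack can only attenuate relative to the linear stack, and it attenuates most where the traces' instantaneous phases disagree.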

  13. Indirect measurement of diluents in a multi-component natural gas

    DOEpatents

    Morrow, Thomas B.; Owen, Thomas E.

    2006-03-07

    A method of indirectly measuring the diluent (nitrogen and carbon dioxide) concentrations in a natural gas mixture. The molecular weight of the gas is modeled as a function of the speed of sound in the gas, the diluent concentrations in the gas, and constant values, resulting in a model equation. A set of reference gas mixtures with known molecular weights and diluent concentrations is used to calculate the constant values. For the gas in question, if the speed of sound in the gas is measured at three states, the three resulting expressions of molecular weight can be solved for the nitrogen and carbon dioxide concentrations in the gas mixture.
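
    The inversion step can be sketched with a hypothetical linear model of the kind the abstract describes. All coefficients and the "true" gas composition below are invented placeholders, not the patent's calibration constants:

```python
import numpy as np

# Assume a calibrated model M = a_k + b_k*s_k + g_k*x_N2 + d_k*x_CO2 at each
# measurement state k. Measuring the sound speed s_k at three states yields
# three linear equations in the unknowns (M, x_N2, x_CO2).

COEFF = [(40.0, -0.05, 6.0, 9.0),   # (a_k, b_k, g_k, d_k) per state, invented
         (42.0, -0.06, 5.5, 8.8),
         (39.0, -0.045, 6.4, 9.1)]
TRUTH = (18.5, 0.03, 0.02)          # invented M (g/mol), x_N2, x_CO2

# synthetic "measured" sound speeds consistent with the model
s = [(TRUTH[0] - a - g * TRUTH[1] - d * TRUTH[2]) / b for a, b, g, d in COEFF]

# rearrange to M - g_k*x_N2 - d_k*x_CO2 = a_k + b_k*s_k and solve the 3x3 system
A = np.array([[1.0, -g, -d] for _, _, g, d in COEFF])
rhs = np.array([a + b * sk for (a, b, _, _), sk in zip(COEFF, s)])
M, x_n2, x_co2 = np.linalg.solve(A, rhs)
```

    With consistent synthetic data the solve recovers the assumed composition exactly; with real measurements the conditioning of the three chosen states determines how noise propagates into the diluent estimates.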

  14. An innovative method for offshore wind farm site selection based on the interval number with probability distribution

    NASA Astrophysics Data System (ADS)

    Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng

    2017-12-01

    There is insufficient research relating to offshore wind farm site selection in China, and the current site selection methods have several defects. First, information loss arises from two sources: the implicit assumption that the probability distribution over an interval number is uniform, and the neglect of the decision makers' (DMs') common opinion in evaluating criteria information. Second, differences in the DMs' utility functions have received little attention. An innovative method is proposed in this article to address these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect uncertainty and reduce information loss. Second, a new stochastic dominance degree is proposed to quantify interval numbers with probability distributions. Third, a two-stage method integrating the weighted operator with the stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China demonstrates the effectiveness of the method.

  15. Salient object detection: manifold-based similarity adaptation approach

    NASA Astrophysics Data System (ADS)

    Zhou, Jingbo; Ren, Yongfeng; Yan, Yunyang; Gao, Shangbing

    2014-11-01

    A saliency detection algorithm based on manifold-based similarity adaptation is proposed. The algorithm proceeds in three steps. First, we segment the input image into superpixels, which are represented as nodes in a graph. Second, a new similarity measurement defines the weight matrix of the graph: it indicates the similarities between the nodes and captures the manifold structure of the image patches, with the graph edges determined in a data-adaptive manner in terms of both similarity and manifold structure. Third, we use a local reconstruction method as the diffusion process to obtain the saliency maps; the objective function is based on local reconstruction, with which the estimated weights capture the manifold structure. Experiments on four benchmark databases demonstrate the accuracy and robustness of the proposed method.

  16. Long-Term Effects of a Randomised Controlled Trial Comparing High Protein or High Carbohydrate Weight Loss Diets on Testosterone, SHBG, Erectile and Urinary Function in Overweight and Obese Men

    PubMed Central

    Moran, Lisa J.; Brinkworth, Grant D.; Martin, Sean; Wycherley, Thomas P.; Stuckey, Bronwyn; Lutze, Janna; Clifton, Peter M.; Wittert, Gary A.; Noakes, Manny

    2016-01-01

    Introduction Obesity is associated with reduced testosterone and worsened erectile and sexual function in men. Weight loss improves these outcomes. High protein diets potentially offer anthropometric and metabolic benefits, but their effects on reproductive and sexual outcomes are not known. Aim To examine the long-term effects of weight loss with a higher protein or carbohydrate diet on testosterone, sex hormone binding globulin, erectile dysfunction, lower urinary tract symptoms and sexual desire in overweight and obese men. Methods One hundred and eighteen overweight or obese men (body mass index 27–40 kg/m2, age 20–65 years) were randomly assigned to an energy-restricted higher protein low fat (35% protein, 40% carbohydrate, 25% fat; n = 57) or higher carbohydrate low fat (17% protein, 58% carbohydrate, 25% fat; n = 61) diet for 52 weeks (12 weeks weight loss, 40 weeks weight maintenance). Primary outcomes were serum total testosterone, sex hormone binding globulin and calculated free testosterone. Secondary outcomes were erectile function as assessed by the International Index of Erectile Function (IIEF) (total score and erectile function domain), lower urinary tract symptoms and sexual desire. Results Total testosterone, sex hormone binding globulin and free testosterone increased (P<0.001) and the total IIEF increased (P = 0.017), with no differences between diets (P≥0.244). Increases in testosterone (P = 0.037) and sex hormone binding globulin (P<0.001) and improvements in the total IIEF (P = 0.041) occurred from weeks 0–12, with a further increase in testosterone from weeks 12–52 (P = 0.002). Increases in free testosterone occurred from weeks 12–52 (P = 0.002). The IIEF erectile function domain, lower urinary tract symptoms and sexual desire did not change in either group (P≥0.126).
Conclusions In overweight and obese men, weight loss with both high protein and carbohydrate diets improves testosterone, sex hormone binding globulin and overall sexual function. Trial Registration Anzctr.org.au ACTRN12606000002583 PMID:27584019

  17. Hamiltonian stability for weighted measure and generalized Lagrangian mean curvature flow

    NASA Astrophysics Data System (ADS)

    Kajigaya, Toru; Kunikawa, Keita

    2018-06-01

    In this paper, we generalize several results for the Hamiltonian stability and the mean curvature flow of Lagrangian submanifolds in a Kähler-Einstein manifold to more general Kähler manifolds, including a Fano manifold equipped with a Kähler form ω ∈ 2πc1(M), by using the method proposed by Behrndt (2011). Namely, we first consider a weighted measure on a Lagrangian submanifold L in a Kähler manifold M and investigate the variational problem of L for the weighted volume functional. We call a stationary point of the weighted volume functional f-minimal, and define the notion of Hamiltonian f-stability as a local minimizer under Hamiltonian deformations. We show such examples naturally appear in a toric Fano manifold. Moreover, we consider the generalized Lagrangian mean curvature flow in a Fano manifold which is introduced by Behrndt and Smoczyk-Wang. We generalize the result of H. Li, and show that if the initial Lagrangian submanifold is a small Hamiltonian deformation of an f-minimal and Hamiltonian f-stable Lagrangian submanifold, then the generalized MCF converges exponentially fast to an f-minimal Lagrangian submanifold.

  18. Inferring the Functions of Proteins from the Interrelationships between Functional Categories.

    PubMed

    Taha, Kamal

    2018-01-01

    This study proposes a new method to determine the functions of an unannotated protein. The proteins and amino acid residues mentioned in biomedical texts associated with an unannotated protein can be considered characteristic terms for that protein, and are highly predictive of its potential functions. Similarly, proteins and amino acid residues mentioned in biomedical texts associated with the proteins annotated with a functional category can be considered characteristic terms of that category. We introduce in this paper an information extraction system called IFP_IFC that predicts the functions of an unannotated protein by representing the protein and each functional category by a vector of weights, where each weight reflects the degree of association between a characteristic term and the protein (or between a characteristic term and the category). First, IFP_IFC constructs a network whose nodes represent the different functional categories and whose edges represent the interrelationships between the nodes. Then, it determines the functions of the unannotated protein by employing random walks with restarts on this network, where the walker is the protein's weight vector. Finally, the protein is assigned to the functional categories of the nodes in the network that are visited most by the walker. We evaluated the quality of IFP_IFC by comparing it experimentally with two other systems. Results showed marked improvement.
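
    The random-walk-with-restart step can be sketched on a tiny category network. The adjacency matrix, restart vector, and restart probability below are invented for illustration; IFP_IFC's actual term-weight vectors are not reproduced here:

```python
import numpy as np

def rwr(adj, restart, alpha=0.3, iters=100):
    """Random walk with restart: iterate r <- (1 - alpha) * P^T r + alpha * restart,
    where P is the row-stochastic transition matrix of the graph."""
    P = adj / adj.sum(axis=1, keepdims=True)  # row-normalize adjacency
    r = restart.copy()
    for _ in range(iters):
        r = (1 - alpha) * P.T @ r + alpha * restart
    return r

adj = np.array([[0.0, 1.0, 1.0],
                [1.0, 0.0, 1.0],
                [1.0, 1.0, 0.0]])          # 3 functional categories, fully connected
restart = np.array([0.8, 0.1, 0.1])        # the protein's affinity to each category
scores = rwr(adj, restart)
print(scores)  # highest-scoring categories would be assigned to the protein
```

    Because the restart vector keeps injecting the protein's own affinities, the stationary scores blend graph structure with the protein-specific evidence.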

  19. The heritability of the functional connectome is robust to common nonlinear registration methods

    NASA Astrophysics Data System (ADS)

    Hafzalla, George W.; Prasad, Gautam; Baboyan, Vatche G.; Faskowitz, Joshua; Jahanshad, Neda; McMahon, Katie L.; de Zubicaray, Greig I.; Wright, Margaret J.; Braskie, Meredith N.; Thompson, Paul M.

    2016-03-01

    Nonlinear registration algorithms are routinely used in brain imaging, to align data for inter-subject and group comparisons, and for voxelwise statistical analyses. To understand how the choice of registration method affects maps of functional brain connectivity in a sample of 611 twins, we evaluated three popular nonlinear registration methods: Advanced Normalization Tools (ANTs), Automatic Registration Toolbox (ART), and FMRIB's Nonlinear Image Registration Tool (FNIRT). Using both structural and functional MRI, we used each of the three methods to align the MNI152 brain template, and 80 regions of interest (ROIs), to each subject's T1-weighted (T1w) anatomical image. We then transformed each subject's ROIs onto the associated resting state functional MRI (rs-fMRI) scans and computed a connectivity network or functional connectome for each subject. Given the different degrees of genetic similarity between pairs of monozygotic (MZ) and same-sex dizygotic (DZ) twins, we used structural equation modeling to estimate the additive genetic influences on the elements of the functional networks, or their heritability. The functional connectome and derived statistics were relatively robust to nonlinear registration effects.
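
    For intuition about the twin design, a much cruder estimator than the structural equation modeling used in the paper is Falconer's formula, which doubles the gap between MZ and DZ twin correlations. The correlations below are invented illustrative values:

```python
# Falconer's heritability estimate from twin correlations:
# h2 = 2 * (r_MZ - r_DZ). This is a simplified stand-in for the ACE
# structural-equation model used in the study, shown on made-up numbers.

def falconer_h2(r_mz, r_dz):
    return 2.0 * (r_mz - r_dz)

h2 = falconer_h2(0.60, 0.35)
print(h2)  # 0.5
```

    The logic is the same in both approaches: MZ twins share all additive genetic variance and DZ twins roughly half, so the MZ-DZ similarity gap indexes genetic influence.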

  20. An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics

    NASA Astrophysics Data System (ADS)

    Turkington, Bruce

    2013-08-01

    A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.

  1. Enhancing pairwise state-transition weights: A new weighting scheme in simulated tempering that can minimize transition time between a pair of conformational states

    NASA Astrophysics Data System (ADS)

    Qiao, Qin; Zhang, Hou-Dao; Huang, Xuhui

    2016-04-01

    Simulated tempering (ST) is a widely used enhanced sampling method for molecular dynamics simulations. As an expanded ensemble method, ST is a combination of canonical ensembles at different temperatures, and the acceptance probability of cross-temperature transitions is determined by both the temperature difference and the weight of each temperature. One popular way to obtain the weights is to adopt the free energy of each canonical ensemble, which achieves uniform sampling across temperature space. However, this uniform distribution in temperature space may not be optimal, since high temperatures do not always speed up the conformational transitions of interest; anti-Arrhenius kinetics are prevalent in protein and RNA folding. Here, we propose a new method, Enhancing Pairwise State-transition Weights (EPSW), to obtain the optimal weights by minimizing the round-trip time for transitions among different metastable states at the temperature of interest in ST. The novelty of the EPSW algorithm lies in explicitly considering the kinetics of conformational transitions when optimizing the weights of the different temperatures. We further demonstrate the power of EPSW in three different systems: a simple two-temperature model, a two-dimensional model for protein folding with anti-Arrhenius kinetics, and the alanine dipeptide. The results from these three systems showed that the new algorithm can substantially accelerate the transitions between conformational states of interest in the ST expanded ensemble and further facilitate the convergence of thermodynamics compared to the widely used free energy weights. We anticipate that this algorithm is particularly useful for studying functional conformational changes of biological systems where the initial and final states are often known from structural biology experiments.
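
    The role of the temperature weights in the ST acceptance rule can be sketched on a 1-D harmonic potential. The free-energy weights used below (exact for this Gaussian case) are for illustration; EPSW would instead tune the weights to minimize round-trip time between metastable states:

```python
import math
import random

# Simulated tempering on U(x) = x^2/2. A temperature move k -> j is accepted
# with probability min(1, exp((beta_k - beta_j)*U(x) + (w_j - w_k))),
# so the per-temperature weights w_k set the occupancy of each temperature.

BETAS = [1.0, 0.5, 0.25]
WEIGHTS = [0.5 * math.log(b) for b in BETAS]  # w_k = -ln Z_k up to a constant

def simulate(steps=50000, seed=0):
    rng = random.Random(seed)
    x, k = 0.0, 0
    visits = [0] * len(BETAS)
    for _ in range(steps):
        # Metropolis move in x at the current inverse temperature
        y = x + rng.gauss(0.0, 1.0)
        if rng.random() < math.exp(min(0.0, -BETAS[k] * (y * y - x * x) / 2)):
            x = y
        # attempted temperature move to a neighboring level
        j = max(0, min(len(BETAS) - 1, k + rng.choice((-1, 1))))
        du = (BETAS[k] - BETAS[j]) * x * x / 2 + (WEIGHTS[j] - WEIGHTS[k])
        if rng.random() < math.exp(min(0.0, du)):
            k = j
        visits[k] += 1
    return visits

visits = simulate()  # with free-energy weights, occupancy is roughly uniform
```

    With free-energy weights the chain spends comparable time at every temperature; EPSW's point is that uniform occupancy is not the same as fast round trips between the conformational states of interest.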

  2. Enhancing pairwise state-transition weights: A new weighting scheme in simulated tempering that can minimize transition time between a pair of conformational states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiao, Qin, E-mail: qqiao@ust.hk; Zhang, Hou-Dao; Huang, Xuhui, E-mail: xuhuihuang@ust.hk

    2016-04-21

    Simulated tempering (ST) is a widely used enhanced sampling method for molecular dynamics simulations. As an expanded ensemble method, ST is a combination of canonical ensembles at different temperatures, and the acceptance probability of cross-temperature transitions is determined by both the temperature difference and the weight of each temperature. One popular way to obtain the weights is to adopt the free energy of each canonical ensemble, which achieves uniform sampling across temperature space. However, this uniform distribution in temperature space may not be optimal, since high temperatures do not always speed up the conformational transitions of interest; anti-Arrhenius kinetics are prevalent in protein and RNA folding. Here, we propose a new method, Enhancing Pairwise State-transition Weights (EPSW), to obtain the optimal weights by minimizing the round-trip time for transitions among different metastable states at the temperature of interest in ST. The novelty of the EPSW algorithm lies in explicitly considering the kinetics of conformational transitions when optimizing the weights of the different temperatures. We further demonstrate the power of EPSW in three different systems: a simple two-temperature model, a two-dimensional model for protein folding with anti-Arrhenius kinetics, and the alanine dipeptide. The results from these three systems showed that the new algorithm can substantially accelerate the transitions between conformational states of interest in the ST expanded ensemble and further facilitate the convergence of thermodynamics compared to the widely used free energy weights. We anticipate that this algorithm is particularly useful for studying functional conformational changes of biological systems where the initial and final states are often known from structural biology experiments.

  3. Structural Optimization for Reliability Using Nonlinear Goal Programming

    NASA Technical Reports Server (NTRS)

    El-Sayed, Mohamed E.

    1999-01-01

    This report details the development of a reliability-based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight-ordered design objectives, to take into account conflicting and multiple design criteria. The design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while retaining the capability of handling rank-ordered design objectives or goals. For simulation purposes, the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem into sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using (i) the linear extended interior penalty function method algorithm and (ii) Powell's conjugate directions method. Both single- and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as applied to this design problem.
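
    The sequential-unconstrained-minimization idea can be sketched on a one-variable toy problem. The objective, constraint, and constants below are invented, not the report's cover-plate model, and an exterior quadratic penalty stands in for the report's interior penalty variant:

```python
# Minimize weight f(x) = x subject to a stress limit g(x) = 100/x - 50 <= 0
# (i.e. x >= 2), by solving a sequence of unconstrained problems with an
# increasing penalty weight r.

def penalized(x, r):
    g = 100.0 / x - 50.0            # constraint violation when positive
    return x + r * max(0.0, g) ** 2

def minimize_1d(f, lo, hi, iters=200):
    """Ternary search; adequate here because the penalized function is unimodal."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

x = 1.0
for r in (0.01, 0.1, 1.0, 10.0, 100.0):  # sequence of unconstrained problems
    x = minimize_1d(lambda t: penalized(t, r), 0.5, 5.0)
print(x)  # approaches the constrained optimum x* = 2 from below
```

    Each unconstrained solve slightly violates the constraint, and growing r drives the sequence of minimizers toward the constrained optimum, which is the essence of the SUMT approach the report builds on.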

  4. Influence of microclimatic ammonia levels on productive performance of different broilers’ breeds estimated with univariate and multivariate approaches

    PubMed Central

    Soliman, Essam S.; Moawed, Sherif A.; Hassan, Rania A.

    2017-01-01

    Background and Aim: Bird litter contains unutilized nitrogen in the form of uric acid that is converted into ammonia, a fact that not only affects poultry performance but also harms the health of people around the farm and contributes to environmental degradation. The influence of microclimatic ammonia emissions on Ross and Hubbard broilers reared in different housing systems in two consecutive seasons (fall and winter) was evaluated, using a discriminant function analysis to differentiate between the Ross and Hubbard breeds. Materials and Methods: A total of 400 air samples were collected and analyzed for ammonia levels during the experimental period. Data were analyzed using univariate and multivariate statistical methods. Results: Ammonia levels were significantly higher (p<0.01) in the Ross farm compared to the Hubbard farm, although no significant differences (p>0.05) were found between the two farms in body weight, body weight gain, feed intake, feed conversion ratio, and performance index (PI) of broilers. Body weight, weight gain, and PI had increased values (p<0.01) during fall compared to winter, irrespective of broiler breed. Ammonia emissions were positively (although weakly) correlated with the ambient relative humidity (r=0.383; p<0.01), but not with the ambient temperature (r=−0.045; p>0.05). The test of significance of the discriminant function analysis did not support a classification based on the studied traits, suggesting that they cannot be used as predictor variables. The percentage of correct classification was 52%, which improved to 57% after deletion of highly correlated traits. Conclusion: The study revealed that broiler growth was negatively affected by increased microclimatic ammonia concentrations and recommends analyzing broiler growth performance data using multivariate discriminant function analysis. PMID:28919677

  5. Low-molecular-weight heparins: differential characterization/physical characterization.

    PubMed

    Guerrini, Marco; Bisio, Antonella

    2012-01-01

    Low-molecular-weight heparins (LMWHs), derived from unfractionated heparin (UFH) through different depolymerization processes, have advantages with respect to the parent heparin in terms of pharmacokinetics, convenience of administration, and reduced side effects. Each LMWH can be considered as an independent drug with its own activity profile, placing significance on their biophysical characterization, which will also enable a better understanding of their structure-function relationship. Several chemical and physical methods, some involving sample modification, are now available and are reviewed.

  6. Features of Discontinuous Galerkin Algorithms in Gkeyll, and Exponentially-Weighted Basis Functions

    NASA Astrophysics Data System (ADS)

    Hammett, G. W.; Hakim, A.; Shi, E. L.

    2016-10-01

    There are various versions of Discontinuous Galerkin (DG) algorithms that have interesting features that could help with challenging problems of higher-dimensional kinetic problems (such as edge turbulence in tokamaks and stellarators). We are developing the gyrokinetic code Gkeyll based on DG methods. Higher-order methods do more FLOPS to extract more information per byte, thus reducing memory and communication costs (which are a bottleneck for exascale computing). The inner product norm can be chosen to preserve energy conservation with non-polynomial basis functions (such as Maxwellian-weighted bases), which alternatively can be viewed as a Petrov-Galerkin method. This allows a full-F code to benefit from similar Gaussian quadrature employed in popular δf continuum gyrokinetic codes. We show some tests for a 1D Spitzer-Härm heat flux problem, which requires good resolution for the tail. For two velocity dimensions, this approach could lead to a factor of 10 or more speedup. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.

  7. Multifunctional Graphene-Silicone Elastomer Nanocomposite, Method of Making the Same, and Uses Thereof

    NASA Technical Reports Server (NTRS)

    Prud'Homme, Robert K. (Inventor); Pan, Shuyang (Inventor); Aksay, Ilhan A. (Inventor)

    2018-01-01

    A nanocomposite composition having a silicone elastomer matrix having therein a filler loading of greater than 0.05 wt %, based on total nanocomposite weight, wherein the filler is functional graphene sheets (FGS) having a surface area of from 300 m2/g to 2630 m2/g; and a method for producing the nanocomposite and uses thereof.

  8. Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2005-01-01

    Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…

  9. Le poids de l'histoire: A la recherche d'une pedagogie (The Weight of History: In Search of a Method).

    ERIC Educational Resources Information Center

    Bourbeau-Walker, Micheline

    1984-01-01

    It is proposed that while the sciences have progressed steadily, language teaching methods have swung like a pendulum between two broad approaches: formal and functional. The history of this pattern is outlined, current practices are discussed, and the possibility of escaping from this polarizing cycle is examined. (MSE)

  10. Obesity, change of body mass index and subsequent physical and mental health functioning: a 12-year follow-up study among ageing employees.

    PubMed

    Svärd, Anna; Lahti, Jouni; Roos, Eira; Rahkonen, Ossi; Lahelma, Eero; Lallukka, Tea; Mänty, Minna

    2017-09-26

    Studies suggest an association between weight change and subsequent poor physical health functioning, whereas the association with mental health functioning is inconsistent. We aimed to examine whether obesity and change of body mass index among normal weight, overweight and obese women and men are associated with changes in physical and mental health functioning. The Helsinki Health Study cohort includes Finnish municipal employees aged 40 to 60 in 2000-02 (phase 1, response rate 67%). Phase 2 mail survey (response rate 82%) took place in 2007 and phase 3 in 2012 (response rate 76%). This study included 5668 participants (82% women). Seven weight change categories were formed based on body mass index (BMI) (phase 1) and weight change (BMI change ≥5%) (phase 1-2). The Short Form 36 Health Survey (SF-36) measured physical and mental health functioning. The change in health functioning score (phase 1-3) was examined with repeated-measures analyses. Covariates were age, sociodemographic factors, health behaviours, and somatic ill-health. Weight gain was common among women (34%) and men (25%). Weight-gaining normal weight (-1.3 points), overweight (-1.3 points) and obese (-3.6 points) women showed a greater decline in physical component summary scores than weight-maintaining normal weight women. Among weight-maintainers, only obese (-1.8 points) women showed a greater decline than weight-maintaining normal weight women. The associations were similar, but statistically non-significant, for obese men. No statistically significant differences in the change in mental health functioning occurred. Preventing weight gain likely helps maintain good physical health functioning and work ability.

  11. Beta cell function after weight loss: a clinical trial comparing gastric bypass surgery and intensive lifestyle intervention

    PubMed Central

    Hofsø, D; Jenssen, T; Bollerslev, J; Ueland, T; Godang, K; Stumvoll, M; Sandbu, R; Røislien, J; Hjelmesæth, J

    2011-01-01

    Objective The effects of various weight loss strategies on pancreatic beta cell function remain unclear. We aimed to compare the effect of intensive lifestyle intervention (ILI) and Roux-en-Y gastric bypass surgery (RYGB) on beta cell function. Design One year controlled clinical trial (ClinicalTrials.gov identifier NCT00273104). Methods One hundred and nineteen morbidly obese participants without known diabetes from the MOBIL study (mean (s.d.) age 43.6 (10.8) years, body mass index (BMI) 45.5 (5.6) kg/m2, 84 women) were allocated to RYGB (n=64) or ILI (n=55). The patients underwent repeated oral glucose tolerance tests (OGTTs) and were categorised as having either normal (NGT) or abnormal glucose tolerance (AGT). Twenty-nine normal-weight subjects with NGT (age 42.6 (8.7) years, BMI 22.6 (1.5) kg/m2, 19 women) served as controls. OGTT-based indices of beta cell function were calculated. Results One year weight reduction was 30 % (8) after RYGB and 9 % (10) after ILI (P<0.001). Disposition index (DI) increased in all treatment groups (all P<0.05), although more in the surgery groups (both P<0.001). Stimulated proinsulin-to-insulin (PI/I) ratio decreased in both surgery groups (both P<0.001), but to a greater extent in the surgery group with AGT at baseline (P<0.001). Post surgery, patients with NGT at baseline had higher DI and lower stimulated PI/I ratio than controls (both P<0.027). Conclusions Gastric bypass surgery improved beta cell function to a significantly greater extent than ILI. Supra-physiological insulin secretion and proinsulin processing may indicate excessive beta cell function after gastric bypass surgery. PMID:21078684

  12. [A graph cuts-based interactive method for segmentation of magnetic resonance images of meningioma].

    PubMed

    Li, Shuan-qiang; Feng, Qian-jin; Chen, Wu-fan; Lin, Ya-zhong

    2011-06-01

    For accurate segmentation of magnetic resonance (MR) images of meningioma, we propose a novel interactive segmentation method based on graph cuts. High-dimensional image features were extracted and, for each pixel, the probabilities that it belongs to the tumor or to the background region were estimated using a weighted K-nearest neighbor classifier. Based on these probabilities, a new energy function was proposed. Finally, a graph cut optimization framework was used to minimize the energy function. The proposed method was evaluated on the segmentation of MR images of meningioma, and the results showed that it significantly improved segmentation accuracy compared with the gray-level-information-based graph cut method.
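    The distance-weighted K-nearest-neighbor probability estimate described above can be sketched as follows (feature vectors, labels, K, and the inverse-distance weighting are illustrative assumptions, not the paper's exact configuration):

```python
# Sketch of a distance-weighted KNN class-probability estimate: each of
# the K nearest labelled pixels votes for its class with weight
# 1/(distance + eps); class probabilities are the normalised vote totals.
import math

def knn_probability(features, labels, query, k=3, eps=1e-6):
    """Return {label: probability} for `query` via distance-weighted KNN."""
    nearest = sorted(
        (math.dist(f, query), lab) for f, lab in zip(features, labels)
    )[:k]
    votes = {}
    for d, lab in nearest:
        votes[lab] = votes.get(lab, 0.0) + 1.0 / (d + eps)
    total = sum(votes.values())
    return {lab: v / total for lab, v in votes.items()}

# Hypothetical labelled pixels: (intensity, texture) feature pairs.
feats = [(0.9, 0.8), (0.85, 0.75), (0.2, 0.1), (0.15, 0.2)]
labs = ["tumor", "tumor", "background", "background"]
probs = knn_probability(feats, labs, query=(0.8, 0.7), k=3)
```

    In a graph cuts formulation, these per-pixel probabilities would feed the data term of the energy function, with a smoothness term penalizing label changes between neighboring pixels.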

  13. Barrier Function-Based Neural Adaptive Control With Locally Weighted Learning and Finite Neuron Self-Growing Strategy.

    PubMed

    Jia, Zi-Jun; Song, Yong-Duan

    2017-06-01

    This paper presents a new approach to construct neural adaptive control for uncertain nonaffine systems. By integrating locally weighted learning with a barrier Lyapunov function (BLF), a novel control design method is presented to systematically address the two critical issues in the neural network (NN) control field: one is how to fulfill the compact set precondition for NN approximation, and the other is how to use a varying rather than a fixed NN structure to improve the functionality of NN control. A BLF is exploited to ensure that the NN inputs remain bounded during the entire system operation. To account for system nonlinearities, a neuron self-growing strategy is proposed to guide the process for adding new neurons to the system, resulting in a self-adjustable NN structure for better learning capabilities. It is shown that the number of neurons needed to accomplish the control task is finite, and better performance can be obtained with fewer neurons than with traditional methods. The salient feature of the proposed method also lies in the continuity of the control action everywhere. Furthermore, the resulting control action is smooth almost everywhere except for a few time instants at which new neurons are added. A numerical example illustrates the effectiveness of the proposed approach.

  14. Half-unit weighted bilinear algorithm for image contrast enhancement in capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Rukundo, Olivier

    2018-04-01

    This paper proposes a novel enhancement method based exclusively on the bilinear interpolation algorithm for capsule endoscopy images. The proposed method does not convert the original RGB image components to HSV or any other color space or model; instead, it processes the RGB components directly. In each component, a group of four adjacent pixels and a half-unit weight in the bilinear weighting function are used to calculate the average pixel value, identical for each pixel in that particular group. After calculations, groups of identical pixels are overlapped successively in horizontal and vertical directions to achieve a preliminary-enhanced image. The final-enhanced image is achieved by halving the sum of the original and preliminary-enhanced image pixels. Quantitative and qualitative experiments were conducted focusing on pairwise comparisons between original and enhanced images. Final-enhanced images generally had the best diagnostic quality and gave more detail on the visibility of vessels and structures in capsule endoscopy images.
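    The core arithmetic can be sketched on a single channel (a simplification: the paper processes all three RGB components and overlaps the groups, whereas this sketch uses plain non-overlapping 2x2 groups for brevity):

```python
# Simplified single-channel sketch of half-unit weighted bilinear
# enhancement: average each 2x2 pixel group (a half-unit bilinear weight
# in both directions reduces to the plain mean), assign the average to
# every pixel of the group, then halve the sum of original and
# preliminary images to get the final-enhanced image.

def enhance(img):
    """img: 2-D list with even dimensions; returns the final-enhanced image."""
    h, w = len(img), len(img[0])
    prelim = [[0.0] * w for _ in range(h)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            avg = (img[i][j] + img[i][j + 1]
                   + img[i + 1][j] + img[i + 1][j + 1]) / 4.0
            for di in (0, 1):
                for dj in (0, 1):
                    prelim[i + di][j + dj] = avg
    # Final image: halve the sum of original and preliminary pixels.
    return [[(img[i][j] + prelim[i][j]) / 2.0 for j in range(w)]
            for i in range(h)]

out = enhance([[0, 4], [8, 12]])
```

    Averaging toward the group mean and then blending back with the original is what smooths local contrast without a color-space conversion.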

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivasseau, Vincent, E-mail: vincent.rivasseau@th.u-psud.fr, E-mail: adrian.tanasa@ens-lyon.org; Tanasa, Adrian, E-mail: vincent.rivasseau@th.u-psud.fr, E-mail: adrian.tanasa@ens-lyon.org

    The Loop Vertex Expansion (LVE) is a quantum field theory (QFT) method which explicitly computes the Borel sum of Feynman perturbation series. This LVE relies in a crucial way on symmetric tree weights which define a measure on the set of spanning trees of any connected graph. In this paper we generalize this method by defining new tree weights. They depend on the choice of a partition of a set of vertices of the graph, and when the partition is non-trivial, they are no longer symmetric under permutation of vertices. Nevertheless we prove they have the required positivity property to lead to a convergent LVE; in fact we formulate this positivity property precisely for the first time. Our generalized tree weights are inspired by the Brydges-Battle-Federbush work on cluster expansions and could be particularly suited to the computation of connected functions in QFT. Several concrete examples are explicitly given.

  16. Gain weighted eigenspace assignment

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Andrisani, Dominick, II

    1994-01-01

    This report presents the development of the gain weighted eigenspace assignment methodology. This provides a designer with a systematic methodology for trading off eigenvector placement versus gain magnitudes, while still maintaining desired closed-loop eigenvalue locations. This is accomplished by forming a cost function composed of a scalar measure of error between desired and achievable eigenvectors and a scalar measure of gain magnitude, determining analytical expressions for the gradients, and solving for the optimal solution by numerical iteration. For this development the scalar measure of gain magnitude is chosen to be a weighted sum of the squares of all the individual elements of the feedback gain matrix. An example is presented to demonstrate the method. In this example, solutions yielding achievable eigenvectors close to the desired eigenvectors are obtained with significant reductions in gain magnitude compared to a solution obtained using a previously developed eigenspace (eigenstructure) assignment method.

  17. Effects of cereal fiber on bowel function: A systematic review of intervention trials

    PubMed Central

    de Vries, Jan; Miller, Paige E; Verbeke, Kristin

    2015-01-01

    AIM: To comprehensively review and quantitatively summarize results from intervention studies that examined the effects of intact cereal dietary fiber on parameters of bowel function. METHODS: A systematic literature search was conducted using PubMed and EMBASE. Supplementary literature searches included screening reference lists from relevant studies and reviews. Eligible outcomes were stool wet and dry weight, percentage water in stools, stool frequency and consistency, and total transit time. Weighted regression analyses generated mean change (± SD) in these measures per g/d of dietary fiber. RESULTS: Sixty-five intervention studies among generally healthy populations were identified. A quantitative examination of the effects of non-wheat sources of intact cereal dietary fibers was not possible due to an insufficient number of studies. Weighted regression analyses demonstrated that each extra g/d of wheat fiber increased total stool weight by 3.7 ± 0.09 g/d (P < 0.0001; 95%CI: 3.50-3.84), dry stool weight by 0.75 ± 0.03 g/d (P < 0.0001; 95%CI: 0.69-0.82), and stool frequency by 0.004 ± 0.002 times/d (P = 0.0346; 95%CI: 0.0003-0.0078). Transit time decreased by 0.78 ± 0.13 h per additional g/d (P < 0.0001; 95%CI: 0.53-1.04) of wheat fiber among those with an initial transit time greater than 48 h. CONCLUSION: Wheat dietary fiber, and predominately wheat bran dietary fiber, improves measures of bowel function. PMID:26269686
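    The weighted regression step behind estimates like "3.7 g/d of stool per extra g/d of fiber" can be sketched as a weighted least-squares slope (all numbers below are invented; the review pooled 65 studies, and real weights would reflect study precision):

```python
# Hedged sketch of a weighted least-squares slope: each study contributes
# a (fiber dose, outcome change) point, weighted here by a hypothetical
# sample size.

def wls_slope(x, y, w):
    """Weighted least-squares slope of y on x with weights w."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    return num / den

fiber = [5, 10, 15, 20]    # g/d of added wheat fiber (invented)
stool = [18, 37, 55, 74]   # g/d change in total stool weight (invented)
sizes = [30, 45, 25, 60]   # hypothetical study sizes used as weights
slope = wls_slope(fiber, stool, sizes)
```

    Down-weighting small studies keeps a noisy outlier from dominating the pooled dose-response estimate.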

  18. New type of liquid rubber and compositions based on it.

    PubMed

    Semikolenov, S V; Nartova, A V; Voronchikhin, V D; Dubkov, K A

    2014-11-01

    The new method for producing the functionalized polymers and oligomers containing carbonyl C=O groups is developed. The method is based on the noncatalytic oxidation of unsaturated rubbers by nitrous oxide (N2O) at 180-230 °С. The proposed method allows obtaining the new type of functionalized rubbers-liquid unsaturated polyketones with regulated molecular weight and concentration of C=O groups. The influence of the liquid polyketone addition on properties of rubber-based composites is investigated. The study indicates good prospects of using the liquid polyketones for the improvement of properties and operating characteristics of the various types of rubbers and the rubber-cord systems.

  19. Atom and Bond Fukui Functions and Matrices: A Hirshfeld-I Atoms-in-Molecule Approach.

    PubMed

    Oña, Ofelia B; De Clercq, Olivier; Alcoba, Diego R; Torre, Alicia; Lain, Luis; Van Neck, Dimitri; Bultinck, Patrick

    2016-09-19

    The Fukui function is often used in its atom-condensed form by isolating it from the molecular Fukui function using a chosen weight function for the atom in the molecule. Recently, Fukui functions and matrices for both atoms and bonds separately were introduced for semiempirical and ab initio levels of theory using Hückel and Mulliken atoms-in-molecule models. In this work, a double partitioning method of the Fukui matrix is proposed within the Hirshfeld-I atoms-in-molecule framework. Diagonalizing the resulting atomic and bond matrices gives eigenvalues and eigenvectors (Fukui orbitals) describing the reactivity of atoms and bonds. The Fukui function is the diagonal element of the Fukui matrix and may be resolved in atom and bond contributions. The extra information contained in the atom and bond resolution of the Fukui matrices and functions is highlighted. The effect of the choice of weight function arising from the Hirshfeld-I approach to obtain atom- and bond-condensed Fukui functions is studied. A comparison of the results with those generated by using the Mulliken atoms-in-molecule approach shows low correlation between the two partitioning schemes. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
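    The weight-concentration problem the abstract describes follows directly from how information-criterion averaging weights are computed, w_i ∝ exp(−ΔIC_i/2): criterion values that differ by tens of units put essentially all weight on the best model. A minimal sketch (AIC values invented):

```python
# Information-criterion model averaging weights: w_i ∝ exp(-ΔIC_i / 2),
# where ΔIC_i is the criterion value relative to the best model.
import math

def averaging_weights(ic_values):
    """Model averaging weights from information-criterion values."""
    best = min(ic_values)
    raw = [math.exp(-(ic - best) / 2.0) for ic in ic_values]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical AIC values for three alternative conceptual models:
weights = averaging_weights([100.0, 130.0, 145.0])
```

    With ΔAIC of 30 and 45, the second and third models receive weights on the order of 1e-7 and 1e-10, which is the "close to 100%" concentration the authors argue is an artifact of ignoring model-error correlation.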

  1. Uncertainty plus prior equals rational bias: an intuitive Bayesian probability weighting function.

    PubMed

    Fennell, John; Baddeley, Roland

    2012-10-01

    Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several nonexpected utility theories, including rank-dependent models and prospect theory; here, we propose a Bayesian approach to the probability weighting function and, with it, a psychological rationale. In the real world, uncertainty is ubiquitous and, accordingly, the optimal strategy is to combine probability statements with prior information using Bayes' rule. First, we show that any reasonable prior on probabilities leads to 2 of the observed effects; overweighting of low probabilities and underweighting of high probabilities. We then investigate 2 plausible kinds of priors: informative priors based on previous experience and uninformative priors of ignorance. Individually, these priors potentially lead to large problems of bias and inefficiency, respectively; however, when combined using Bayesian model comparison methods, both forms of prior can be applied adaptively, gaining the efficiency of empirical priors and the robustness of ignorance priors. We illustrate this for the simple case of generic good and bad options, using Internet blogs to estimate the relevant priors of inference. Given this combined ignorant/informative prior, the Bayesian probability weighting function is not only robust and efficient but also matches all of the major characteristics of the distortions found in empirical research. PsycINFO Database Record (c) 2012 APA, all rights reserved.
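    A toy illustration of the central idea, not the paper's actual model: if a stated probability p is combined with a uniform Beta(1, 1) prior as though it were k pseudo-observations, the posterior mean shrinks p toward 0.5, overweighting small probabilities and underweighting large ones (the inverse-S shape). The pseudo-count k below is an invented parameter:

```python
# Illustrative Beta-prior shrinkage: treat a stated probability p as k
# pseudo-observations combined with a Beta(a, b) prior; the posterior
# mean over-weights small p and under-weights large p.

def bayesian_weight(p, k=10, a=1.0, b=1.0):
    """Posterior-mean probability weight under a Beta(a, b) prior."""
    return (a + k * p) / (a + b + k)

low = bayesian_weight(0.05)   # inflated above 0.05
high = bayesian_weight(0.95)  # deflated below 0.95
```

    Informative priors correspond to larger a, b chosen from experience; the paper's contribution is combining informative and ignorance priors adaptively via Bayesian model comparison.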

  2. A Comparison of Weights Matrices on Computation of Dengue Spatial Autocorrelation

    NASA Astrophysics Data System (ADS)

    Suryowati, K.; Bekti, R. D.; Faradila, A.

    2018-04-01

    Spatial autocorrelation is a spatial analysis method for identifying patterns of relationship or correlation between locations. This method is very important for obtaining information on the dispersal pattern characteristics of a region and the linkages between locations. In this study, it was applied to the incidence of Dengue Hemorrhagic Fever (DHF) in 17 sub-districts in Sleman, Daerah Istimewa Yogyakarta Province. The link among locations is indicated by a spatial weight matrix, which describes the neighbourhood structure and reflects spatial influence. Depending on the spatial data, weight matrices fall into two types: distance-based (point) and contiguity-based (neighbourhood area). The choice of weighting function is one determinant of the results of the spatial analysis. This study uses first-order queen contiguity weights, second-order queen contiguity weights, and inverse distance weights. First-order queen contiguity and inverse distance weights show significant spatial autocorrelation in DHF incidence, but second-order queen contiguity does not. The first- and second-order queen contiguity matrices produce 68 and 86 neighbour links, respectively.
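    How a contiguity weight matrix enters a spatial autocorrelation statistic can be sketched with Moran's I on a toy chain of four locations (the matrix and incidence counts are invented; they are not the 17 Sleman sub-districts):

```python
# Moran's I with a binary first-order contiguity weight matrix:
# I = (n / S0) * sum_ij w_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2

def morans_i(values, w):
    """Moran's I for `values` with spatial weight matrix `w` (list of lists)."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in w)
    num = sum(w[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)

# First-order neighbours along a chain: 0-1, 1-2, 2-3.
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
dhf_cases = [10, 12, 30, 28]  # hypothetical incidence counts
i_stat = morans_i(dhf_cases, w)
```

    A positive I indicates clustering of similar values among neighbours; replacing w with a second-order or inverse-distance matrix can change both the magnitude and the significance of the statistic, which is the study's point.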

  3. Tensor distribution function

    NASA Astrophysics Data System (ADS)

    Leow, Alex D.; Zhu, Siwei

    2008-03-01

    Diffusion weighted MR imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitizing gradients along a minimum of 6 directions, second-order tensors (represented by 3-by-3 positive definite matrices) can be computed to model dominant diffusion processes. However, it has been shown that conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g. crossing fiber tracts. More recently, High Angular Resolution Diffusion Imaging (HARDI) seeks to address this issue by employing more than 6 gradient directions. To account for fiber crossing when analyzing HARDI data, several methodologies have been introduced. For example, q-ball imaging was proposed to approximate the orientation distribution function (ODF). Similarly, the PAS method seeks to resolve the angular structure of displacement probability functions using the maximum entropy principle. Alternatively, deconvolution methods extract multiple fiber tracts by computing fiber orientations using a pre-specified single fiber response function. In this study, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric and positive definite matrices. Using calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the ODF can easily be computed by analytical integration of the resulting displacement probability function. Moreover, principal fiber directions can also be directly derived from the TDF.

  4. Effects of increasing left ventricular filling pressure in patients with acute myocardial infarction

    PubMed Central

    Russell, Richard O.; Rackley, Charles E.; Pombo, Jaoquin; Hunt, David; Potanin, Constantine; Dodge, Harold T.

    1970-01-01

    Left ventricular performance in 19 patients with acute myocardial infarction has been evaluated by measuring left ventricular response in terms of cardiac output, stroke volume, work, and power to progressive elevation of filling pressure accomplished by progressive expansion of blood volume with rapid infusion of low molecular weight dextran. Such infusion can elevate the cardiac output, stroke volume, work, and power and thus delineate the function of the left ventricle by Frank-Starling function curves. Left ventricular filling pressure in the range of 20-24 mm Hg was associated with the peak of the curves and when the filling pressure exceeded this range, the curves became flattened or decreased. An increase in cardiac output could be maintained for 4 or more hr. Patients with a flattened function curve had a high mortality in the ensuing 8 wk. The function curve showed improvement in myocardial function during the early convalescence. When left ventricular filling pressure is monitored directly or as pulmonary artery end-diastolic pressure, low molecular weight dextran provides a method for assessment of left ventricular function. PMID:5431663

  5. Functional weight-bearing mobilization after Achilles tendon rupture enhances early healing response: a single-blinded randomized controlled trial.

    PubMed

    Valkering, Kars P; Aufwerber, Susanna; Ranuccio, Francesco; Lunini, Enricomaria; Edman, Gunnar; Ackermann, Paul W

    2017-06-01

    Functional weight-bearing mobilization may improve repair of Achilles tendon rupture (ATR), but the underlying mechanisms and outcome were unknown. We hypothesized that functional weight-bearing mobilization by means of increased metabolism could improve both early and long-term healing. In this prospective randomized controlled trial, patients with acute ATR were randomized to either direct post-operative functional weight-bearing mobilization (n = 27) in an orthosis or to non-weight-bearing (n = 29) plaster cast immobilization. During the first two post-operative weeks, 15°-30° of plantar flexion was allowed and encouraged in the functional weight-bearing mobilization group. At 2 weeks, patients in the non-weight-bearing cast immobilization group received a stiff orthosis, while the functional weight-bearing mobilization group continued with increased range of motion. At 6 weeks, all patients discontinued immobilization. At 2 weeks, healing metabolites and markers of procollagen type I (PINP) and III (PIIINP) were examined using microdialysis. At 6 and 12 months, functional outcome was assessed using the heel-rise test. Healing tendons of both groups exhibited increased levels of the metabolites glutamate, lactate, and pyruvate, and of PIIINP (all p < 0.05). Patients in the functional weight-bearing mobilization group demonstrated significantly higher concentrations of glutamate compared to the non-weight-bearing cast immobilization group (p = 0.045). The upregulated glutamate levels were significantly correlated with the concentrations of PINP (r = 0.5, p = 0.002) as well as with improved functional outcome at 6 months (r = 0.4; p = 0.014). Heel-rise tests at 6 and 12 months did not display any differences between the two groups. Functional weight-bearing mobilization enhanced the early healing response of ATR. In addition, early ankle range of motion was improved without the risk of Achilles tendon elongation and without altering long-term functional outcome. The relationship between functional weight-bearing mobilization-induced upregulation of glutamate and enhanced healing suggests novel opportunities to optimize post-operative rehabilitation.

  6. Correlation between CT numbers and tissue parameters needed for Monte Carlo simulations of clinical dose distributions

    NASA Astrophysics Data System (ADS)

    Schneider, Wilfried; Bortfeld, Thomas; Schlegel, Wolfgang

    2000-02-01

    We describe a new method to convert CT numbers into mass density and elemental weights of tissues required as input for dose calculations with Monte Carlo codes such as EGS4. As a first step, we calculate the CT numbers for 71 human tissues. To reduce the effort for the necessary fits of the CT numbers to mass density and elemental weights, we establish four sections on the CT number scale, each confined by selected tissues. Within each section, the mass density and elemental weights of the selected tissues are interpolated. For this purpose, functional relationships between the CT number and each of the tissue parameters, valid for media which are composed of only two components in varying proportions, are derived. Compared with conventional data fits, no loss of accuracy is accepted when using the interpolation functions. Assuming plausible values for the deviations of calculated and measured CT numbers, the mass density can be determined with an accuracy better than 0.04 g cm⁻³. The weights of phosphorus and calcium can be determined with maximum uncertainties of 1 or 2.3 percentage points (pp), respectively. Similar values can be achieved for hydrogen (0.8 pp) and nitrogen (3 pp). For carbon and oxygen weights, errors up to 14 pp can occur. The influence of the elemental weights on the results of Monte Carlo dose calculations is investigated and discussed.
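    The section-wise interpolation idea can be sketched as a piecewise-linear lookup (the breakpoint tissues and their HU/density values below are illustrative placeholders, not the paper's fitted values):

```python
# Piecewise-linear CT-number -> mass-density sketch: within each section
# bounded by two selected tissues, density is interpolated linearly
# between the bounding tissues' values.
import bisect

# (CT number in HU, mass density in g/cm^3) for hypothetical boundary tissues:
BREAKPOINTS = [(-1000, 0.001), (-100, 0.93), (0, 1.0), (100, 1.1), (1500, 1.9)]

def ct_to_density(hu):
    """Linearly interpolate mass density for a CT number within its section."""
    hus = [h for h, _ in BREAKPOINTS]
    hu = max(hus[0], min(hu, hus[-1]))          # clamp to the table range
    i = max(1, bisect.bisect_left(hus, hu))
    (h0, d0), (h1, d1) = BREAKPOINTS[i - 1], BREAKPOINTS[i]
    return d0 + (d1 - d0) * (hu - h0) / (h1 - h0)

rho_water = ct_to_density(0)  # 1.0 g/cm^3 by construction of the table
```

    The same lookup structure extends to elemental weights: each section carries its own interpolation functions, which is what keeps the conversion accurate for two-component mixtures.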

  7. The Impact of Weight and Fat Mass Loss and Increased Physical Activity on Physical Function in Overweight, Postmenopausal Women: Results from the WOMAN Study

    PubMed Central

    Gabriel, Kelley Pettee; Conroy, Molly B.; Schmid, Kendra K.; Storti, Kristi L.; High, Robin R.; Underwood, Darcy A.; Kriska, Andrea M.; Kuller, Lewis H.

    2011-01-01

    Objective To determine whether changes in leisure time physical activity (LTPA) and body composition reflect concomitant changes in 400 m walk time. Methods Data were collected at the baseline and 48 month visits in Women on the Move through Activity and Nutrition Study. At baseline, participants (n=508) were randomized to the Lifestyle Intervention (LC) or Health Education (HE) group. The LC intervention focused on weight (7–10%) and waist circumference reduction through healthy lifestyle behavior change. Change in walk time over 48 months was the primary outcome. Secondary measures included change in LTPA and body composition measures including, body weight, BMI, waist circumference (WC), and dual energy x-ray absorptiometry-derived fat and lean mass. Results Increased LTPA and reductions in body weight, BMI, WC, and fat mass were associated with decreased walk time from baseline to 48 months (p<0.01). After stratification by group, LTPA was no longer significantly related to walk time in the HE group. Conclusions Increased LTPA and weight loss resulted in improved physical function, as measured by the 400 m walk, in a group of overweight, post-menopausal women. These findings support the utility of the 400 m walk to evaluate progress in physical activity or weight loss programs. PMID:21705864

  8. New Internet search volume-based weighting method for integrating various environmental impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr

    Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts into a single index. Weighting factors should be based on society's preferences. However, most previous studies consider only the opinions of a limited group of people. Thus, this research proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts, using Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new method were compared with existing weighting factors. The resulting Pearson's correlation coefficient between the new and existing weighting factors ranged from 0.8743 to 0.9889. It turned out that the new weighting method presents reasonable weighting factors. It also requires less time and lower cost compared to existing methods and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining weighting factors. - Highlights: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects public opinion using Internet search volume. • The correlation coefficient between new and existing weighting factors is over 0.87. • The new weighting method presents reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.
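
    The core of the method reduces to normalizing per-category search volumes into weighting factors and checking their agreement with existing factors via Pearson correlation; a minimal sketch with invented volumes and reference factors (not the paper's data):

```python
import numpy as np

# Hypothetical Internet search volumes for six environmental impact
# categories, and assumed existing (reference) weighting factors.
volumes = np.array([120000.0, 80000.0, 45000.0, 30000.0, 15000.0, 10000.0])
existing = np.array([0.38, 0.27, 0.15, 0.10, 0.06, 0.04])

# New weighting factor per category: its share of total search volume.
weights = volumes / volumes.sum()

# Validation step: Pearson correlation with the existing factors.
r = np.corrcoef(weights, existing)[0, 1]
print(weights.round(3), round(float(r), 4))
```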

  9. Relationship between weight status and health-related quality of life in Chinese primary school children in Guangzhou: a cross-sectional study.

    PubMed

    Liu, Wei; Lin, Rong; Liu, Weijia; Guo, Zhongshan; Xiong, Lihua; Li, Bai; Cheng, K K; Adab, Peymane; Pallan, Miranda

    2016-12-03

    To investigate the association between weight status and health-related quality of life (HRQOL) among pupils in Guangzhou, China. The study comprised 5781 children aged 8-12 years from 29 schools. Height and weight were objectively measured using standardized methods, and BMI z-scores were derived using the age- and sex-specific WHO 2007 reference for 5-19 years. Weight status was classified as underweight (<-2SD), healthy weight (between -2SD and 1SD), or overweight/obese (>1SD). HRQOL was measured by the self-report version of the Pediatric Quality of Life Inventory 4.0. After controlling for gender, age, school type, parental education, and family income, HRQOL scores were significantly lower in overweight/obese compared with healthy weight children only in the social functioning domain (β = -1.93, p = 0.001). Compared with healthy weight children, underweight children had significantly lower total (β = -1.47, p = 0.05) and physical summary scores (β = -2.18, p = 0.02). Subgroup analysis by gender indicated that, compared to healthy weight, total (β = -1.96, p = 0.02), psychosocial (β = -2.40, p = 0.01), social functioning (β = -3.36, p = 0.001), and school functioning (β = -2.19, p = 0.03) scores were lower in overweight/obese girls, but not boys. On the other hand, being underweight was associated with lower physical functioning (β = -2.27, p = 0.047) in girls, and lower social functioning (β = -3.63, p = 0.01) in boys. The associations were mainly observed in children aged 10 and over, but were not significant in younger children. Children from private schools had generally lower HRQOL compared to those in public schools, but the associations with weight status were similar in both groups. The relationship between overweight/obesity and HRQOL in children in China is not as prominent as that seen in children in western or high-income countries.
However, there appear to be gender and age differences, with more of an impact of overweight on HRQOL in girls and older children than in boys and younger children. Underweight is also associated with lower HRQOL. Future interventions to prevent both obesity and undernutrition may have a positive impact on the HRQOL of children in China.

  10. Velocity-space sensitivity of the time-of-flight neutron spectrometer at JET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobsen, A. S., E-mail: Ajsen@fysik.dtu.dk; Salewski, M.; Korsholm, S. B.

    2014-11-15

    The velocity-space sensitivities of fast-ion diagnostics are often described by so-called weight functions. Recently, we formulated weight functions showing the velocity-space sensitivity of the often dominant beam-target part of neutron energy spectra. These weight functions for neutron emission spectrometry (NES) are independent of the particular NES diagnostic. Here we apply these NES weight functions to the time-of-flight spectrometer TOFOR at JET. By taking the instrumental response function of TOFOR into account, we calculate time-of-flight NES weight functions that enable us to directly determine the velocity-space sensitivity of a given part of a measured time-of-flight spectrum from TOFOR.
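
    Folding the instrumental response into the diagnostic-independent NES weight functions is, in essence, a matrix product over neutron energy; a schematic sketch with hypothetical bin counts and random placeholder matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
n_tof, n_en, n_vel = 50, 80, 100   # ToF bins, neutron-energy bins, velocity-space bins

# Hypothetical instrumental response R[t, E]: probability that a neutron of
# energy E is recorded in time-of-flight bin t (random placeholder, normalized
# over ToF bins for each energy).
R = rng.random((n_tof, n_en))
R /= R.sum(axis=0)

# Hypothetical NES weight functions W_nes[E, v] over velocity-space bins.
W_nes = rng.random((n_en, n_vel))

# Time-of-flight weight functions: the response folded into the NES weights.
W_tof = R @ W_nes
print(W_tof.shape)
```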

  11. CO-occurring exposure to perchlorate, nitrate and thiocyanate alters thyroid function in healthy pregnant women

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horton, Megan K., E-mail: megan.horton@mssm.edu; Blount, Benjamin C.; Valentin-Blasini, Liza

    Background: Adequate maternal thyroid function during pregnancy is necessary for normal fetal brain development, making pregnancy a critical window of vulnerability to thyroid-disrupting insults. Sodium/iodide symporter (NIS) inhibitors, namely perchlorate, nitrate, and thiocyanate, have been shown individually to competitively inhibit uptake of iodine by the thyroid. Several epidemiologic studies have examined the association between these individual exposures and thyroid function; few have examined the effect of this chemical mixture on thyroid function during pregnancy. Objectives: We examined the cross-sectional association between urinary perchlorate, thiocyanate, and nitrate concentrations and thyroid function among healthy pregnant women living in New York City using weighted quantile sum (WQS) regression. Methods: We measured thyroid stimulating hormone (TSH) and free thyroxine (FreeT4) in blood samples, and perchlorate, thiocyanate, nitrate, and iodide in urine samples collected from 284 pregnant women at 12 (±2.8) weeks gestation. We examined associations between urinary analyte concentrations and TSH or FreeT4 using linear regression or WQS, adjusting for gestational age, urinary iodide, and creatinine. Results: Individual analyte concentrations in urine were significantly correlated (Spearman's r 0.4–0.5, p<0.001). Linear regression analyses did not suggest associations between individual concentrations and thyroid function. The WQS revealed a significant positive association between the weighted sum of urinary concentrations of the three analytes and increased TSH. Perchlorate had the largest weight in the index, indicating the largest contribution to the WQS. Conclusions: Co-exposure to perchlorate, nitrate and thiocyanate may alter maternal thyroid function, specifically TSH, during pregnancy. - Highlights: • Perchlorate, nitrate, thiocyanate and iodide measured in maternal urine. • Thyroid function (TSH and Free T4) measured in maternal blood. • Weighted quantile sum (WQS) regression examined complex mixture effect. • WQS identified an inverse association between the exposure mixture and maternal TSH. • Perchlorate indicated as the 'bad actor' of the mixture.
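
    A WQS-style fit can be sketched as follows: exposures are scored into quartiles, and simplex-constrained weights for the summed index are estimated by minimizing the regression residual. All data below are simulated, and the estimator is a simplified single-split variant of WQS, not the study's implementation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 300
# Simulated urinary concentrations for three analytes (hypothetical data).
X = rng.lognormal(size=(n, 3))
# Toy outcome (e.g. TSH), driven mostly by the first analyte.
y = 0.5 * np.log(X[:, 0]) + 0.1 * np.log(X[:, 1]) + rng.normal(scale=0.5, size=n)

# Score each exposure into quartiles 0..3.
q = np.column_stack([np.digitize(x, np.quantile(x, [0.25, 0.5, 0.75]))
                     for x in X.T]).astype(float)

def rss(w):
    """Residual sum of squares of y regressed on the weighted quantile sum."""
    idx = q @ w                                 # the WQS index
    A = np.column_stack([np.ones(n), idx])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(((A @ beta - y) ** 2).sum())

# Weights constrained to the simplex: non-negative, summing to one.
res = minimize(rss, x0=np.full(3, 1 / 3), bounds=[(0.0, 1.0)] * 3,
               constraints=({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},))
wqs_weights = res.x    # the largest entry flags the dominant analyte
print(wqs_weights.round(2))
```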

  12. An Efficient Method Coupling Kernel Principal Component Analysis with Adjoint-Based Optimal Control and Its Goal-Oriented Extensions

    NASA Astrophysics Data System (ADS)

    Thimmisetty, C.; Talbot, C.; Tong, C. H.; Chen, X.

    2016-12-01

    The representativeness of available data poses a significant fundamental challenge to the quantification of uncertainty in geophysical systems. Furthermore, the successful application of machine learning methods to geophysical problems involving data assimilation is inherently constrained by the extent to which obtainable data represent the problem considered. We show how the adjoint method, coupled with optimization based on methods of machine learning, can facilitate the minimization of an objective function defined on a space of significantly reduced dimension. By considering uncertain parameters as constituting a stochastic process, the Karhunen-Loeve expansion and its nonlinear extensions furnish an optimal basis with respect to which optimization using L-BFGS can be carried out. In particular, we demonstrate that kernel PCA can be coupled with adjoint-based optimal control methods to successfully determine the distribution of material parameter values for problems in the context of channelized deformable media governed by the equations of linear elasticity. Since certain subsets of the original data are characterized by different features, the convergence rate of the method in part depends on, and may be limited by, the observations used to furnish the kernel principal component basis. By determining appropriate weights for realizations of the stochastic random field, then, one may accelerate the convergence of the method. To this end, we present a formulation of Weighted PCA combined with a gradient-based method using automatic differentiation to iteratively re-weight observations concurrent with the determination of an optimal reduced set of control variables in the feature space. We demonstrate how improvements in the accuracy and computational efficiency of the weighted linear method can be achieved over existing unweighted kernel methods, and discuss nonlinear extensions of the algorithm.

  13. Probability Weighting Functions Derived from Hyperbolic Time Discounting: Psychophysical Models and Their Individual Level Testing.

    PubMed

    Takemura, Kazuhisa; Murakami, Hajime

    2016-01-01

    A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to show probability weighting functions from the point of view of waiting time for a decision maker. Since the expected value of a geometrically distributed random variable X is 1/p, we formulated the probability weighting function of the expected value model for hyperbolic time discounting as w(p) = (1 - k log p)(-1). Moreover, the probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To assess the fit of each model, a psychological experiment was conducted to evaluate the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of individual analysis indicated that the expected value model of generalized hyperbolic discounting fitted better than previous probability weighting decision-making models. The theoretical implications of this finding are discussed.
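
    The expected-value model's weighting function w(p) = (1 - k log p)^(-1) is easy to evaluate directly; it satisfies w(1) = 1 and overweights small probabilities, as the sketch below shows (the choice k = 1 is arbitrary):

```python
import numpy as np

def w(p, k=1.0):
    """Probability weighting function derived from hyperbolic time
    discounting: w(p) = (1 - k*log(p))**(-1), with discounting parameter k."""
    return 1.0 / (1.0 - k * np.log(p))

p = np.array([0.01, 0.1, 0.5, 0.9, 1.0])
print(np.round(w(p), 3))   # small probabilities are overweighted: w(p) > p
```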

  14. Perceived health status and cardiometabolic risk among a sample of youth in Mexico

    PubMed Central

    Flores, Yvonne N.; Shaibi, Gabriel Q.; Morales, Leo S.; Salmerón, Jorge; Skalicky, Anne M.; Edwards, Todd C.; Gallegos-Carrillo, Katia; Patrick, Donald L.

    2015-01-01

    Purpose To examine differences in self-reported perceived mental and physical health status (PHS), as well as known cardiometabolic risk factors in a sample of normal weight, overweight, and obese Mexican youths. Methods Cross-sectional analysis of 164 youths aged 11-18 years recruited in Cuernavaca, Mexico. Participants completed a self-administered questionnaire that included measures of generic and weight-specific quality of life (QoL), perceived health, physical function, depressive symptoms, and body shape satisfaction. Height, weight and waist circumference were measured and body mass index (BMI) was determined. Fasting blood samples from participants yielded levels of glucose, triglycerides, and cholesterol (total, HDL and LDL). Results Nearly 50% of participants were female, 21% had a normal BMI, 39% were overweight, and 40% were obese. Obese youths reported significantly lower measures of PHS and showed an increase in cardiometabolic risk, compared to normal weight youths. Physical functioning, generic and weight-specific QoL were inversely associated with BMI, waist circumference and glucose. Depressive symptoms were positively correlated with BMI, waist circumference, glucose levels and HDL cholesterol. No correlation was found between PHS and cardiometabolic risk measures after controlling for BMI. Conclusions In this sample of Mexican youths, obesity was associated with a significantly lower PHS and increased cardiometabolic risk. PMID:25648756

  15. Modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic

    NASA Astrophysics Data System (ADS)

    Sukono, Sidi, Pramono; Bon, Abdul Talib bin; Supian, Sudradjat

    2017-03-01

    The problem in investing in financial assets is to choose portfolio weights that maximize expected return while minimizing risk. This paper discusses the modeling of Mean-VaR portfolio optimization with a risk-tolerance factor when the utility function is quadratic. It is assumed that asset returns follow a certain distribution and that portfolio risk is measured by Value-at-Risk (VaR). Optimization of the portfolio is then carried out on the Mean-VaR model using a matrix-algebra approach, the Lagrange multiplier method, and the Kuhn-Tucker conditions. The result of the modeling is a weighting-vector equation that depends on the mean return vector of the assets, the identity vector, the covariance matrix of asset returns, and the risk-tolerance factor. As a numerical illustration, five stocks traded on the Indonesian stock market are analyzed. From the return data of these five stocks, the weight composition vector and the efficient frontier of the portfolio are obtained. The weight composition and the efficient-frontier chart can serve as a guide for investors making investment decisions.
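
    Under a normality assumption the Mean-VaR objective reduces to a mean-variance form, so a closed-form Lagrange-multiplier solution with a risk-tolerance factor can stand in as a sketch. The returns and the diagonal covariance below are invented, and the VaR-specific quantile term is omitted for brevity:

```python
import numpy as np

# Hypothetical mean returns and (diagonal) covariance for five stocks.
mu = np.array([0.08, 0.10, 0.12, 0.07, 0.09])
Sigma = np.diag([0.04, 0.06, 0.09, 0.03, 0.05])
tau = 0.5                                   # investor's risk-tolerance factor

# Maximize  tau * w'mu - 0.5 * w'Sigma w   subject to  sum(w) = 1.
# Stationarity gives  w = Sigma^-1 (tau*mu + lam*1), with the Lagrange
# multiplier lam fixed by the budget constraint:
inv = np.linalg.inv(Sigma)
ones = np.ones_like(mu)
lam = (1.0 - tau * (ones @ inv @ mu)) / (ones @ inv @ ones)
w = inv @ (tau * mu + lam * ones)

print(w.round(3), round(float(w @ mu), 4))   # weights and expected return
```

Sweeping tau traces out the efficient frontier the abstract refers to: larger risk tolerance tilts the weights toward the higher-return, higher-variance assets.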

  16. The importance of body weight and weight management for military personnel.

    PubMed

    Naghii, Mohammad Reza

    2006-06-01

    Weight or fat reduction and maintenance among military personnel and attainment of desired body composition and physical appearance are considered important. A high level of body fat has been shown to have an adverse effect on performance in a number of military activities. The effect of rapid weight loss on performance appears to depend on the method of weight loss, the magnitude of weight loss, and the type of exercise or activity performance test used. Personnel who undertake imprudent weight-loss strategies, that is, personnel who try to change their usual body size by chronically restricting their food and fluid intake, may suffer a number of problems. Overweight personnel and their military coaches are just as susceptible to false ideas about weight loss and dieting as the rest of the community. Inappropriate weight loss causes a loss of lean tissue and can reduce, rather than enhance, performance. The understanding and promotion of safe, effective, appropriate weight-loss and weight-maintenance strategies represent important functions of the military system and officials. The greatest likelihood of success requires an integrated program, both during and after the weight-loss phase, in which assessment, increased energy expenditure through exercise and other daily activities, energy intake reduction, nutrition education, lifestyle changes, environmental changes, and psychological support are all components.

  17. Extracting Useful Semantic Information from Large Scale Corpora of Text

    ERIC Educational Resources Information Center

    Mendoza, Ray Padilla, Jr.

    2012-01-01

    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…

  18. Characterization of the salivary microbiome in people with obesity

    PubMed Central

    Zhang, Qian

    2018-01-01

    Background The interactions between the gut microbiome and obesity have been extensively studied. Although the oral cavity is the gateway to the gut, and is extensively colonized with microbes, little is known about the oral microbiome in people with obesity. In the present study, we investigated the salivary microbiome in obese and normal weight healthy participants using metagenomic analysis. The subjects were categorized into two groups, obesity and normal weight, based on their BMIs. Methods We characterized the salivary microbiome of 33 adults with obesity and 29 normal weight controls using high-throughput sequencing of the V3–V4 region of the 16S rRNA gene (Illumina MiSeq). None of the selected participants had systemic, oral mucosal, or periodontal diseases. Results The salivary microbiome of the obesity group was distinct from that of the normal weight group. The salivary microbiome of periodontally healthy people with obesity had significantly lower bacterial diversity and richness compared with the controls. The genera Prevotella, Granulicatella, Peptostreptococcus, Solobacterium, Catonella, and Mogibacterium were significantly more abundant in the obesity group, whereas the genera Haemophilus, Corynebacterium, Capnocytophaga, and Staphylococcus were less abundant. We also performed a functional analysis of the inferred metagenomes, and showed that the salivary community associated with obesity had a stronger signature of immune disease and a decreased functional signature related to environmental adaptation and xenobiotic biodegradation compared with the normal weight controls. Discussion Our study demonstrates that the microbial diversity and structure of the salivary microbiome in people with obesity are significantly different from those of normal weight controls.
These results suggested that changes in the structure and function of salivary microbiome in people with obesity might reflect their susceptibility to oral diseases. PMID:29576948

  19. A Multifactorial Weight Reduction Programme for Children with Overweight and Asthma: A Randomized Controlled Trial

    PubMed Central

    Willeboordse, Maartje; van de Kant, Kim D. G.; Tan, Frans E. S.; Mulkens, Sandra; Schellings, Julia; Crijns, Yvonne; van der Ploeg, Liesbeth; van Schayck, Constant P.; Dompeling, Edward

    2016-01-01

    Background There is increasing evidence that obesity is related to asthma development and severity. However, it is largely unknown whether weight reduction can influence asthma management, especially in children. Objective To determine the effects of a multifactorial weight reduction intervention on asthma management in overweight/obese children with (a high risk of developing) asthma. Methods An 18-month weight-reduction randomized controlled trial was conducted in 87 children with overweight/obesity and asthma. Every six months, anthropometry, lung function, lifestyle parameters and inflammatory markers were assessed. Longitudinal analyses were performed with linear mixed models. Results After 18 months, the body mass index-standard deviation score changed by -0.14±0.29 points (p<0.01) in the intervention group and by -0.12±0.34 points (p<0.01) in the control group. This change over time did not differ between groups (p>0.05). Asthma features (including asthma control and asthma-related quality of life) and lung function indices (static and dynamic) improved significantly over time in both groups. The FVC% predicted improved over time by 10.1 ± 8.7% in the intervention group (p<0.001), which was significantly greater than the 6.1 ± 8.4% in the control group (p<0.05). Conclusions & clinical relevance Clinically relevant improvements in body weight, lung function and asthma features were found in both the intervention and control groups, although some effects were more pronounced in the intervention group (FVC, asthma control, and quality of life). This implies that a weight reduction intervention could be clinically beneficial for children with asthma. Trial Registration ClinicalTrials.gov NCT00998413 PMID:27294869

  20. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditioned ℓ1-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. In conclusion, numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  1. Fluorescent techniques for discovery and characterization of phosphopantetheinyl transferase inhibitors

    PubMed Central

    Kosa, Nicolas M.; Foley, Timothy L.; Burkart, Michael D.

    2016-01-01

    Phosphopantetheinyl transferase (E.C. 2.7.8.-) activates biosynthetic pathways that synthesize both primary and secondary metabolites in bacteria. Inhibitors of these enzymes have the potential to serve as antibiotic compounds that function through a unique mode of action and possess clinical utility. Here we report a direct and continuous assay for this enzyme class based upon monitoring polarization of a fluorescent phosphopantetheine analog as it is transferred from a low molecular weight coenzyme A substrate to a higher molecular weight protein acceptor. We demonstrate the utility of this method for the biochemical characterization of phosphopantetheinyl transferase Sfp, a canonical representative from this class. We also establish the portability of this technique to other homologs by adapting the assay to function with the human phosphopantetheinyl transferase, a target for which a microplate detection method does not currently exist. Comparison of these targets provides a basis to predict the therapeutic index of inhibitor candidates and offers a valuable characterization of enzyme activity. PMID:24192555

  2. Charge characteristics of humic and fulvic acids: comparative analysis by colloid titration and potentiometric titration with continuous pK-distribution function model.

    PubMed

    Bratskaya, S; Golikov, A; Lutsenko, T; Nesterova, O; Dudarchik, V

    2008-09-01

    Charge characteristics of humic and fulvic acids of a different origin (inshore soils, peat, marine sediments, and soil (lysimetric) waters) were evaluated by means of two alternative methods - colloid titration and potentiometric titration. In order to elucidate possible limitations of colloid titration as a rapid method for analysis of low contents of humic substances, we monitored changes in acid-base properties and charge densities of humic substances with soil depth, fractionation, and origin. We have shown that both factors - strength of acidic groups and molecular weight distribution in humic and fulvic acids - can affect the reliability of colloid titration. Due to deviations from 1:1 stoichiometry in interactions of humic substances with polymeric cationic titrant, colloid titration can underestimate the total acidity (charge density) of humic substances dominated by weak acidic functional groups (pK>6) and with a high content of fractions with molecular weight below 1 kDa.

  3. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditioned ℓ1-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. In conclusion, numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  4. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE PAGES

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    2017-06-22

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditioned ℓ1-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. In conclusion, numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.
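
    The sampling-and-preconditioning pipeline can be sketched end to end: draw samples from the arcsine (equilibrium) measure on [-1, 1], scale the rows of the orthonormal-Legendre Vandermonde by the inverse square root of the Christoffel-function sum, and solve basis pursuit as a linear program. This is a minimal illustration under those simplifications, not the authors' implementation; problem sizes and the random seed are arbitrary:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_basis, n_samp, sparsity = 40, 20, 3

# A sparse coefficient vector in an orthonormal Legendre basis.
c_true = np.zeros(n_basis)
c_true[rng.choice(n_basis, sparsity, replace=False)] = rng.normal(size=sparsity)

# Sample from the equilibrium (arcsine / Chebyshev) measure on [-1, 1].
x = np.cos(np.pi * rng.random(n_samp))

# Vandermonde of Legendre polynomials, orthonormalized w.r.t. the uniform
# probability measure on [-1, 1].
V = np.polynomial.legendre.legvander(x, n_basis - 1)
V *= np.sqrt(2 * np.arange(n_basis) + 1)
b = V @ c_true

# Diagonal preconditioner from the Christoffel function:
# row i is scaled by 1 / sqrt(sum_k p_k(x_i)^2).
d = 1.0 / np.sqrt((V ** 2).sum(axis=1))
A, bw = d[:, None] * V, d * b

# Basis pursuit  min ||c||_1  s.t.  A c = bw,  posed as an LP in (c+, c-).
res = linprog(np.ones(2 * n_basis), A_eq=np.hstack([A, -A]), b_eq=bw,
              bounds=[(0, None)] * (2 * n_basis), method="highs")
c_hat = res.x[:n_basis] - res.x[n_basis:]
print(round(float(np.abs(c_hat - c_true).max()), 4))
```

Since c_true is itself feasible for the noiseless constraints, the recovered ℓ1 norm can never exceed that of c_true; in favorable regimes the sparse vector is recovered exactly.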

  5. Bivariate functional data clustering: grouping streams based on a varying coefficient model of the stream water and air temperature relationship

    Treesearch

    H. Li; X. Deng; Andy Dolloff; E. P. Smith

    2015-01-01

    A novel clustering method for bivariate functional data is proposed to group streams based on their water–air temperature relationship. A distance measure is developed for bivariate curves by using a time-varying coefficient model and a weighting scheme. This distance is also adjusted by spatial correlation of streams via the variogram. Therefore, the proposed...

  6. Efficient and accurate Greedy Search Methods for mining functional modules in protein interaction networks.

    PubMed

    He, Jieyue; Li, Chaojun; Ye, Baoliu; Zhong, Wei

    2012-06-25

    Most computational algorithms mainly focus on detecting highly connected subgraphs in PPI networks as protein complexes but ignore their inherent organization. Furthermore, many of these algorithms are computationally expensive. However, recent analysis indicates that experimentally detected protein complexes generally contain core/attachment structures. In this paper, a Greedy Search Method based on Core-Attachment structure (GSM-CA) is proposed. The GSM-CA method detects densely connected regions in large protein-protein interaction networks based on the edge weight and two criteria for determining core nodes and attachment nodes. The GSM-CA method improves prediction accuracy compared to other similar module detection approaches; however, it is computationally expensive. Many module detection approaches are based on traditional hierarchical methods, which are also computationally inefficient because the hierarchical tree structure produced by these approaches cannot provide adequate information to determine whether a network belongs to a module structure or not. In order to speed up the computational process, the Greedy Search Method based on Fast Clustering (GSM-FC) is proposed in this work. The edge-weight-based GSM-FC method uses a greedy procedure to traverse all edges just once to separate the network into a suitable set of modules. The proposed methods are applied to the protein interaction network of S. cerevisiae. Experimental results indicate that many significant functional modules are detected, most of which match known complexes. Results also demonstrate that the GSM-FC algorithm is faster and more accurate than other competing algorithms. Based on the new edge weight definition, the proposed algorithm takes advantage of the greedy search procedure to separate the network into a suitable set of modules. Experimental analysis shows that the identified modules are statistically significant.
The algorithm can reduce the computational time significantly while keeping high prediction accuracy.
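
    The flavor of a single-pass, edge-weight-driven separation can be conveyed with a toy union-find sketch. This is not the paper's GSM-FC algorithm; the threshold rule and the example edges are invented:

```python
# Minimal single-pass greedy grouping over weighted edges: visit each edge
# once in order of decreasing weight, merging endpoints whose edge weight
# clears a threshold (a stand-in for a real module-separation criterion).

def greedy_modules(edges, threshold):
    """edges: list of (u, v, weight) tuples. Returns a list of node sets."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if w >= threshold:
            parent[find(u)] = find(v)       # merge the two groups

    modules = {}
    for node in parent:
        modules.setdefault(find(node), set()).add(node)
    return list(modules.values())

edges = [("A", "B", 0.9), ("B", "C", 0.8), ("C", "D", 0.2), ("D", "E", 0.85)]
print(greedy_modules(edges, threshold=0.5))   # weak C-D edge splits two modules
```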

  7. A Long-Term Intensive Lifestyle Intervention and Physical Function: the Look AHEAD Movement and Memory Study

    PubMed Central

    Houston, Denise K.; Leng, Xiaoyan; Bray, George A.; Hergenroeder, Andrea L.; Hill, James O.; Jakicic, John M.; Johnson, Karen C.; Neiberg, Rebecca H.; Marsh, Anthony P.; Rejeski, W. Jack; Kritchevsky, Stephen B.

    2014-01-01

    OBJECTIVE To assess the long-term effects of an intensive lifestyle intervention on physical function using a randomized post-test design in the Look AHEAD trial. METHODS Overweight and obese (BMI ≥25 kg/m2) middle-aged and older adults (aged 45–76 years at enrollment) with type 2 diabetes (n=964) at four clinics in Look AHEAD, a trial evaluating an intensive lifestyle intervention (ILI) designed to achieve weight loss through caloric restriction and increased physical activity compared to diabetes support and education (DSE), underwent standardized assessments of performance-based physical function including an expanded short physical performance battery (SPPBexp), 20-m and 400-m walk, and grip and knee extensor strength 8 years post-randomization, during the trial’s weight maintenance phase. RESULTS Eight years post-randomization, individuals randomized to ILI had better SPPBexp scores (adjusted mean (SE) difference: 0.055 (0.022), p=0.01) and faster 20-m and 400-m walk speeds (0.032 (0.012) m/sec, p=0.01, and 0.025 (0.011) m/sec, p=0.02, respectively) compared to those randomized to DSE. Achieved weight loss greatly attenuated the group differences in physical function and the intervention effect was no longer significant. CONCLUSIONS An intensive lifestyle intervention has long-term benefits for mobility function in overweight and obese middle-aged and older individuals with type 2 diabetes. PMID:25452229

  8. Reliable Viscosity Calculation from Equilibrium Molecular Dynamics Simulations: A Time Decomposition Method.

    PubMed

    Zhang, Yong; Otani, Akihito; Maginn, Edward J

    2015-08-11

    Equilibrium molecular dynamics is often used in conjunction with a Green-Kubo integral of the pressure tensor autocorrelation function to compute the shear viscosity of fluids. This approach is computationally expensive and is subject to a large amount of variability because the plateau region of the Green-Kubo integral is difficult to identify unambiguously. Here, we propose a time decomposition approach for computing the shear viscosity using the Green-Kubo formalism. Instead of one long trajectory, multiple independent trajectories are run and the Green-Kubo relation is applied to each trajectory. The averaged running integral as a function of time is fit to a double-exponential function with a weighting function derived from the standard deviation of the running integrals. Such a weighting function minimizes the uncertainty of the estimated shear viscosity and provides an objective means of estimating the viscosity. While the formal Green-Kubo integral requires integration to infinite time, we suggest an integration cutoff time tcut, which can be determined from the relative values of the running integral and the corresponding standard deviation. This approach for computing the shear viscosity can be easily automated and used in computational screening studies where human judgment and intervention in the data analysis are impractical. The method has been applied to the calculation of the shear viscosity of a relatively low-viscosity liquid, ethanol, and a relatively high-viscosity ionic liquid, 1-n-butyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([BMIM][Tf2N]), over a range of temperatures. These test cases show that the method is robust and yields reproducible and reliable shear viscosity values.
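
    The core bookkeeping of the time decomposition approach, averaging the running Green-Kubo integrals of independent trajectories and deriving fit weights from their scatter, can be sketched as follows (the 1/variance weight and all names are illustrative assumptions; the paper derives its weighting function from the standard deviation of the running integrals and then fits a double exponential, which is omitted here):

```python
def running_integrals(trajectories, dt):
    """Trapezoidal running Green-Kubo integral for each independent
    trajectory's autocorrelation samples."""
    out = []
    for acf in trajectories:
        total, series = 0.0, [0.0]
        for i in range(1, len(acf)):
            total += 0.5 * (acf[i - 1] + acf[i]) * dt
            series.append(total)
        out.append(series)
    return out


def mean_and_weights(series_list):
    """Average the running integrals across trajectories and derive a
    fit weight at each time point from the spread between trajectories
    (1/variance here, an assumed stand-in for the paper's std-derived
    weighting function)."""
    n = len(series_list)
    means, weights = [], []
    for vals in zip(*series_list):
        m = sum(vals) / n
        var = sum((v - m) ** 2 for v in vals) / n
        means.append(m)
        weights.append(1.0 / var if var > 0 else 0.0)
    return means, weights
```

    The averaged curve would then be fit to a double exponential using these weights, so time points where the trajectories agree closely dominate the fit and the noisy long-time tail is suppressed.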

  9. Missing RRI interpolation for HRV analysis using locally-weighted partial least squares regression.

    PubMed

    Kamata, Keisuke; Fujiwara, Koichi; Yamakawa, Toshiki; Kano, Manabu

    2016-08-01

    The R-R interval (RRI) fluctuation in electrocardiogram (ECG) is called heart rate variability (HRV). Since HRV reflects autonomic nervous function, HRV-based health monitoring services, such as stress estimation, drowsy driving detection, and epileptic seizure prediction, have been proposed. These HRV-based health monitoring services require precise R wave detection from ECG; however, R waves cannot always be detected due to ECG artifacts, and missing RRI data should be interpolated appropriately for HRV analysis. The present work proposes a missing RRI interpolation method utilizing just-in-time (JIT) modeling. The proposed method adopts locally weighted partial least squares (LW-PLS) for RRI interpolation, a well-known JIT modeling method used in the field of process control. The usefulness of the proposed method was demonstrated through a case study of real RRI data collected from healthy persons: the proposed JIT-based interpolation method improved interpolation accuracy in comparison with a static interpolation method.
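
    LW-PLS itself involves latent-variable regression over a database of past samples; a minimal stand-in that shares its "locally weighted" ingredient is locally weighted linear regression, sketched below for interpolating a missing RRI at a query time (the Gaussian similarity weight, the bandwidth, and all names are illustrative assumptions, not the paper's method):

```python
import math


def lw_interpolate(times, values, t_query, bandwidth):
    """Estimate a missing sample at t_query by fitting a weighted
    straight line to the neighbors, with Gaussian weights that decay
    with distance from the query point."""
    w = [math.exp(-((t - t_query) / bandwidth) ** 2) for t in times]
    sw = sum(w)
    mt = sum(wi * t for wi, t in zip(w, times)) / sw
    mv = sum(wi * v for wi, v in zip(w, values)) / sw
    cov = sum(wi * (t - mt) * (v - mv) for wi, t, v in zip(w, times, values))
    var = sum(wi * (t - mt) ** 2 for wi, t in zip(w, times))
    slope = cov / var if var > 0 else 0.0
    return mv + slope * (t_query - mt)
```

    Because the weights are local, samples far from the gap barely influence the fitted line, which is the property that makes just-in-time models adaptive to the data around each query.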

  10. Eighth-order explicit two-step hybrid methods with symmetric nodes and weights for solving orbital and oscillatory IVPs

    NASA Astrophysics Data System (ADS)

    Franco, J. M.; Rández, L.

    The construction of new two-step hybrid (TSH) methods of explicit type with symmetric nodes and weights for the numerical integration of orbital and oscillatory second-order initial value problems (IVPs) is analyzed. These methods attain algebraic order eight with a computational cost of six or eight function evaluations per step (it is one of the lowest costs that we know in the literature) and they are optimal among the TSH methods in the sense that they reach a certain order of accuracy with minimal cost per step. The new TSH schemes also have high dispersion and dissipation orders (greater than 8) in order to be adapted to the solution of IVPs with oscillatory solutions. The numerical experiments carried out with several orbital and oscillatory problems show that the new eighth-order explicit TSH methods are more efficient than other standard TSH or Numerov-type methods proposed in the scientific literature.

  11. A Procedure for Structural Weight Estimation of Single Stage to Orbit Launch Vehicles (Interim User's Manual)

    NASA Technical Reports Server (NTRS)

    Martinovic, Zoran N.; Cerro, Jeffrey A.

    2002-01-01

    This is an interim user's manual for current procedures used in the Vehicle Analysis Branch at NASA Langley Research Center, Hampton, Virginia, for launch vehicle structural subsystem weight estimation based on finite element modeling and structural analysis. The process is intended to complement traditional methods of conceptual and early preliminary structural design such as the application of empirical weight estimation or application of classical engineering design equations and criteria on one dimensional "line" models. Functions of two commercially available software codes are coupled together. Vehicle modeling and analysis are done using SDRC/I-DEAS, and structural sizing is performed with the Collier Research Corp. HyperSizer program.

  12. Analysis of Modified SMI Method for Adaptive Array Weight Control. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Dilsavor, Ronald Louis

    1989-01-01

    An adaptive array is used to receive a desired signal in the presence of weak interference signals which need to be suppressed. A modified sample matrix inversion (SMI) algorithm controls the array weights. The modification leads to increased interference suppression by subtracting a fraction of the noise power from the diagonal elements of the covariance matrix. The modified algorithm maximizes an intuitive power ratio criterion. The expected values and variances of the array weights, output powers, and power ratios as functions of the fraction and the number of snapshots are found and compared to computer simulation and real experimental array performance. Reduced-rank covariance approximations and errors in the estimated covariance are also described.
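
    The modification described, subtracting a fraction of the noise power from the diagonal of the sample covariance before inversion, can be sketched for a 2x2 real-valued case (the closed-form inverse and all names are illustrative; a practical array would use complex covariances of higher dimension):

```python
def modified_smi_weights(R, steering, fraction, noise_power):
    """Array weights w = (R - f*Pn*I)^(-1) s for a 2x2 real-valued
    sample covariance R, using the closed-form 2x2 inverse.  The
    diagonal subtraction f*Pn is the modification to standard SMI."""
    a = R[0][0] - fraction * noise_power
    b = R[0][1]
    c = R[1][0]
    d = R[1][1] - fraction * noise_power
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return [inv[0][0] * steering[0] + inv[0][1] * steering[1],
            inv[1][0] * steering[0] + inv[1][1] * steering[1]]
```

    With R = [[2,0],[0,2]], steering vector [1,0], fraction 1 and unit noise power, the returned weights are [1.0, 0.0]; shrinking the diagonal deepens the nulls placed on weak interference, which is the effect the thesis analyzes.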

  13. Identifying reprioritization response shift in a stroke caregiver population: a comparison of missing data methods.

    PubMed

    Sajobi, Tolulope T; Lix, Lisa M; Singh, Gurbakhshash; Lowerison, Mark; Engbers, Jordan; Mayo, Nancy E

    2015-03-01

    Response shift (RS) is an important phenomenon that influences the assessment of longitudinal changes in health-related quality of life (HRQOL) studies. Given that RS effects are often small, missing data due to attrition or item non-response can contribute to failure to detect them. Since missing data are often encountered in longitudinal HRQOL data, effective strategies to deal with them are important to consider. This study compares different imputation methods for the detection of reprioritization RS in the HRQOL of caregivers of stroke survivors. Data were from a Canadian multi-center longitudinal study of caregivers of stroke survivors over a one-year period. The Stroke Impact Scale physical function score at baseline, with a cutoff of 75, was used to measure patient stroke severity for the reprioritization RS analysis. Mean imputation, likelihood-based expectation-maximization (EM) imputation, and multiple imputation methods were compared in test procedures based on changes in relative importance weights to detect RS in SF-36 domains over a 6-month period. Monte Carlo simulation methods were used to compare the statistical power of relative importance test procedures for detecting RS in incomplete longitudinal data under different missing data mechanisms and imputation methods. Of the 409 caregivers, 15.9% and 31.3% had missing data at baseline and 6 months, respectively. There were no statistically significant changes in relative importance weights on any of the domains when complete-case analysis was adopted, but statistically significant changes were detected on the physical functioning and/or vitality domains when mean imputation or EM imputation was adopted. There were also statistically significant changes in relative importance weights for the physical functioning, mental health, and vitality domains when multiple imputation was adopted. Our simulations revealed that relative importance test procedures were least powerful under complete-case analysis and most powerful when mean imputation or multiple imputation was adopted for missing data, regardless of the missing data mechanism and proportion of missing data. Test procedures based on relative importance measures are sensitive to the type and amount of missing data and to the imputation method; relative importance test procedures based on mean imputation and multiple imputation are recommended for detecting RS in incomplete data.

  14. RELATIONSHIP BETWEEN ISOMETRIC THIGH MUSCLE STRENGTH AND MINIMAL CLINICALLY IMPORTANT DIFFERENCES (MCIDS) IN KNEE FUNCTION IN OSTEOARTHRITIS – DATA FROM THE OSTEOARTHRITIS INITIATIVE

    PubMed Central

    Ruhdorfer, Anja; Wirth, Wolfgang; Eckstein, Felix

    2014-01-01

    Objective To determine the relationship between thigh muscle strength and clinically relevant differences in self-assessed lower limb function. Methods Isometric knee extensor and flexor strength of 4553 Osteoarthritis Initiative participants (2651 women/1902 men) was related to Western Ontario and McMaster Universities (WOMAC) physical function scores by linear regression. Further, male and female participant strata differing by the minimal clinically important difference (MCID) in WOMAC function scores (6/68) were compared across the full range of observed values, and to participants without functional deficits (WOMAC=0). The effect of WOMAC knee pain and body mass index on the above relationships was explored using stepwise regression. Results Per regression equations, a 3.7% reduction in extensor and a 4.0% reduction in flexor strength were associated with an MCID in WOMAC function in women, and a 3.6%/4.8% reduction in men. For strength divided by body weight, the reductions were 5.2%/6.7% in women and 5.8%/6.7% in men. Comparing MCID strata across the full observed range of WOMAC function confirmed the above estimates and did not suggest non-linear relationships across the spectrum of observed values. WOMAC pain correlated strongly with WOMAC function, but extensor (and flexor) muscle strength contributed significant independent information. Conclusion Reductions of approximately 4% in isometric muscle strength and of 6% in strength/weight were related to a clinically relevant difference in WOMAC functional disability. Longitudinal studies will need to confirm these relationships within persons. Muscle extensor (and flexor) strength (per body weight) provided significant independent information in addition to pain in explaining variability in lower limb function. PMID:25303012

  15. The association between functional movement and overweight and obesity in British primary school children

    PubMed Central

    2013-01-01

    Background The purpose of this study was to examine the association between functional movement and overweight and obesity in British children. Methods Data were obtained from 90 children aged 7–10 years (38 boys and 52 girls). Body mass (kg) and height (m) were assessed, from which body mass index (BMI) was determined, and children were classified as normal weight, overweight or obese according to international cut-offs. Functional movement was assessed using the functional movement screen. Results Total functional movement score was significantly, negatively correlated with BMI (P = .0001). Functional movement scores were also significantly higher for normal weight children compared to obese children (P = .0001). Normal weight children performed significantly better on all individual tests within the functional movement screen compared to their obese peers (P < .05) and significantly better than overweight children for the deep squat (P = .0001) and shoulder mobility tests (P = .04). Overweight children scored significantly better than obese children in the hurdle step (P = .0001), in-line lunge (P = .05), shoulder mobility (P = .04) and active straight leg raise (P = .016). Functional movement scores were not significantly different between boys and girls (P > .05) when considered as total scores. However, girls performed significantly better than boys on the hurdle step (P = .03) and straight leg raise (P = .004) but poorer than boys on the trunk stability push-up (P = .014). Conclusions This study highlights that overweight and obesity are significantly associated with poorer functional movement in children and that girls outperform boys in functional movements. PMID:23675746

  16. A simplified method for active-site titration of lipases immobilised on hydrophobic supports.

    PubMed

    Nalder, Tim D; Kurtovic, Ivan; Barrow, Colin J; Marshall, Susan N

    2018-06-01

    The aim of this work was to develop a simple and accurate protocol to measure the functional active site concentration of lipases immobilised on highly hydrophobic supports. We used the potent lipase inhibitor methyl 4-methylumbelliferyl hexylphosphonate to titrate the active sites of Candida rugosa lipase (CrL) bound to three highly hydrophobic supports: octadecyl methacrylate (C18), divinylbenzene crosslinked methacrylate (DVB) and styrene. The method uses correction curves to take into account the binding of the fluorophore (4-methylumbelliferone, 4-MU) by the support materials. We showed that the uptake of the detection agent by the three supports is not linear relative to the weight of the resin, and that the uptake occurs in an equilibrium that is independent of the total fluorophore concentration. Furthermore, the percentage of bound fluorophore varied among the supports, with 50 mg of C18 and styrene resins binding approximately 64 and 94%, respectively. When the uptake of 4-MU was calculated and corrected for, the total 4-MU released via inhibition (i.e. the concentration of functional lipase active sites) could be determined via a linear relationship between immobilised lipase weight and total inhibition. It was found that the functional active site concentration of immobilised CrL varied greatly among different hydrophobic supports, with 56% for C18, compared with 14% for DVB. The described method is a simple and robust approach to measuring functional active site concentration in immobilised lipase samples. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Examination of the relation between body mass index, functional level and health-related quality of life in children with cerebral palsy

    PubMed Central

    Şimşek, Tülay Tarsuslu; Tuç, Gamze

    2014-01-01

    Aim: The aim of this study was to examine the relation between body mass index (BMI) and functional level and health-related quality of life in children with cerebral palsy (CP). Material and Methods: Two hundred seventy-eight children with CP aged between 2 and 18 years were included in the study. The sociodemographic properties of the children were recorded. Their functional independence levels were assessed with WeeFIM and their health-related quality of life levels were assessed with the Child Health Questionnaire-Parent Form (PF-50). Approval was obtained from the ethics committee of Abant İzzet Baysal University Medical Faculty for this study (Number: 2008/100-77). Results: When classified by body mass index, 26.3% of the children had a normal body weight, 5.4% were overweight, 11.5% were obese and 56.8% had a low body weight. The rate of low body weight was higher in children with moderate and severe CP (52.7% and 53.8%, respectively), while the rate of obesity was higher in children with mild CP who could walk (7.1%). A significant difference was found in children with CP with a normal body weight, overweight children with CP, obese children with CP and children with CP with a low body weight in terms of the total WeeFIM score and the variables of quality of life including physical functionality and role/social limitations because of physical health (p<0.05). In the correlation analysis, a positive correlation was found between WeeFIM and BMI and the subdimensions of role/social limitations because of emotional or behavioral difficulties, pain and discomfort and self-esteem (p<0.05). Conclusions: Our results showed that BMI affected functional independence and health-related quality of life in children with CP and this was more prominent in children who had severe CP and low BMI values. More studies are needed in this area. PMID:26078648

  18. [Weight loss in overweight or obese patients and family functioning].

    PubMed

    Jaramillo-Sánchez, Rosalba; Espinosa-de Santillana, Irene; Espíndola-Jaramillo, Ilia Angélica

    2012-01-01

    To determine the association between weight loss and family functioning, a cohort of 168 persons with overweight or obesity, aged 20-49 years, of either sex and with no comorbidity, was studied at the nutrition department. Sociodemographic data were obtained and the FACES III instrument was applied to measure family functioning. At the third month, body mass index was reassessed. Descriptive statistical analysis was performed and relative risk was calculated. Obesity was present in 50.6%, and 59.53% of these patients did not lose weight. Family dysfunction was present in 56.6%, of whom 50% did not lose weight. Among the 43.4% from functional families, 9.52% did not lose weight (p = 0.001). The relative risk of not losing weight when belonging to a dysfunctional family was 4.03 (CI = 2.60-6.25). A significant association was found between weight loss and family functioning; belonging to a dysfunctional family may be a risk factor for not losing weight.

  19. The association between body mass index, weight loss and physical function in the year following a hip fracture.

    PubMed

    Reider, L; Hawkes, W; Hebel, J R; D'Adamo, C; Magaziner, J; Miller, R; Orwig, D; Alley, D E

    2013-01-01

    To determine whether body mass index (BMI) at the time of hospitalization or weight change in the period immediately following hospitalization predicts physical function in the year after hip fracture. Prospective observational study. Two hospitals in Baltimore, Maryland. Female hip fracture patients aged 65 years or older (N=136 for BMI analysis, N=41 for analysis of weight change). Body mass index was calculated based on weight and height from the medical chart. Weight change was based on DXA scans at 3 and 10 days post fracture. Physical function was assessed at 2, 6 and 12 months following fracture using the lower extremity gain scale (LEGS), walking speed and grip strength. LEGS score and walking speed did not differ across BMI tertiles. However, grip strength differed significantly across BMI tertiles (p=0.029), with underweight women having lower grip strength than normal weight women at all time points. Women experiencing the most weight loss (>4.8%) had significantly lower LEGS scores at all time points, slower walking speed at 6 months, and weaker grip strength at 12 months post-fracture relative to women with more modest weight loss. In adjusted models, overall differences in function and functional change across all time points were not significant. However, at 12 months post fracture, women with the most weight loss had an average grip strength 7.0 kg lower than women with modest weight loss (p=0.030). Adjustment for confounders accounts for much of the relationship between BMI and function and between weight change and function in the year after fracture. However, weight loss is associated with weakness during hip fracture recovery. Weight loss during and immediately after hospitalization appears to identify women at risk of poor function and may represent an important target for future interventions.

  20. Drosophila Insulin Pathway Mutants Affect Visual Physiology and Brain Function Besides Growth, Lipid, and Carbohydrate Metabolism

    PubMed Central

    Murillo-Maldonado, Juan M.; Sánchez-Chávez, Gustavo; Salgado, Luis M.; Salceda, Rocío; Riesgo-Escovar, Juan R.

    2011-01-01

    OBJECTIVE Type 2 diabetes is the most common form of diabetes worldwide. Some of its complications, such as retinopathy and neuropathy, are long-term and protracted, with an unclear etiology. Given this problem, genetic model systems, such as flies, where type 2 diabetes can be modeled and studied, offer distinct advantages. RESEARCH DESIGN AND METHODS We used individual flies in experiments: controls and mutants carrying partial loss-of-function alleles of insulin pathway genes. We measured wing size and body weight for growth phenotypes, the latter by means of a microbalance. We studied total lipid and carbohydrate content: lipids by a vanillin-phosphoric acid reaction in single-fly homogenates, and carbohydrates by an anthrone-sulfuric acid reaction. Cholinesterase activity was measured using the Ellman method in homogenates from pooled fly heads, and electroretinograms were recorded with glass capillary microelectrodes to assess central brain activity and retinal function. RESULTS Flies with partial loss-of-function of insulin pathway genes have significantly reduced body weight, higher total lipid content, and sometimes elevated carbohydrate levels. Brain function is impaired, as is retinal function, but no clear correlation can be drawn between nervous system function and metabolic state. CONCLUSIONS These studies show that flies can be models of type 2 diabetes. They weigh less but have significant lipid gains (obese); some also have carbohydrate gains and compromised brain and retinal functions. This is significant because flies have an open circulatory system without microvasculature and can be studied without the complications of vascular defects. PMID:21464442

  1. Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel

    2014-03-01

    Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. The local image fitting energy captures local image information, which enables the algorithm to segment images with intensity inhomogeneities. An advantage of this method is that the LIF energy functional has less computational complexity than the local binary fitting (LBF) energy functional; moreover, it maintains sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation, which is not only robust in preventing the energy functional from being trapped in a local minimum, but also effective in keeping the level set function regular. Experiments show that the proposed method achieves highly accurate brain tumor segmentation results.

  2. Exponential operations and aggregation operators of interval neutrosophic sets and their decision making methods.

    PubMed

    Ye, Jun

    2016-01-01

    An interval neutrosophic set (INS) is a subclass of a neutrosophic set and a generalization of an interval-valued intuitionistic fuzzy set, and then the characteristics of INS are independently described by the interval numbers of its truth-membership, indeterminacy-membership, and falsity-membership degrees. However, the exponential parameters (weights) of all the existing exponential operational laws of INSs and the corresponding exponential aggregation operators are crisp values in interval neutrosophic decision making problems. As a supplement, this paper firstly introduces new exponential operational laws of INSs, where the bases are crisp values or interval numbers and the exponents are interval neutrosophic numbers (INNs), which are basic elements in INSs. Then, we propose an interval neutrosophic weighted exponential aggregation (INWEA) operator and a dual interval neutrosophic weighted exponential aggregation (DINWEA) operator based on these exponential operational laws and introduce comparative methods based on cosine measure functions for INNs and dual INNs. Further, we develop decision-making methods based on the INWEA and DINWEA operators. Finally, a practical example on the selecting problem of global suppliers is provided to illustrate the applicability and rationality of the proposed methods.

  3. Edge guided image reconstruction in linear scan CT by weighted alternating direction TV minimization.

    PubMed

    Cai, Ailong; Wang, Linyuan; Zhang, Hanming; Yan, Bin; Li, Lei; Xi, Xiaoqi; Li, Jianxin

    2014-01-01

    Linear scan computed tomography (CT) is a promising imaging configuration with high scanning efficiency, but its data sets are under-sampled and angularly limited, which makes high-quality image reconstruction challenging. In this work, an edge guided total variation minimization reconstruction (EGTVM) algorithm is developed to deal with this problem. The proposed method combines total variation (TV) regularization with an iterative edge detection strategy: the edge weights of intermediate reconstructions are incorporated into the TV objective function, and the optimization is efficiently solved by applying the alternating direction method of multipliers. A prudent and conservative edge detection strategy proposed in this paper obtains the true edges while keeping errors within an acceptable range. In comparisons on both simulation studies and real CT data set reconstructions, EGTVM provides comparable or even better quality than non-edge-guided reconstruction and the adaptive steepest descent-projection onto convex sets method. With the utilization of weighted alternating direction TV minimization and edge detection, EGTVM achieves fast and robust convergence and reconstructs high quality images when applied in linear scan CT with under-sampled data sets.
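
    The abstract does not give EGTVM's exact weight formula; a common choice for edge weights derived from an intermediate reconstruction, small across strong gradients so that TV smoothing is relaxed there, is w = 1/(1 + |∇u|), sketched here as an assumption for illustration:

```python
def edge_weights(u):
    """Per-pixel weights w = 1/(1 + |grad u|) from forward
    differences: near 1 in flat regions, small across strong edges,
    so the TV penalty is relaxed where edges are detected."""
    h, w = len(u), len(u[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            gx = u[i][j + 1] - u[i][j] if j + 1 < w else 0.0
            gy = u[i + 1][j] - u[i][j] if i + 1 < h else 0.0
            out[i][j] = 1.0 / (1.0 + (gx * gx + gy * gy) ** 0.5)
    return out
```

    In an edge-guided scheme these weights would multiply the TV terms of the objective and be refreshed from each intermediate reconstruction, matching the iterative strategy the abstract describes.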

  4. A Laplacian based image filtering using switching noise detector.

    PubMed

    Ranjbaran, Ali; Hassan, Anwar Hasni Abu; Jafarpour, Mahboobe; Ranjbaran, Bahar

    2015-01-01

    This paper presents a Laplacian-based image filtering method. Using a local noise estimator function in an energy-functional minimization scheme, we show that the Laplacian, best known as an edge detection operator, can also be used for noise removal. The algorithm can be implemented on a 3x3 window and is easily tuned by the number of iterations. Image denoising reduces to adjusting each pixel's value by its Laplacian weighted by the local noise estimator; the only parameter controlling smoothness is the number of iterations. The noise reduction quality of the introduced method is evaluated and compared with classic algorithms such as Wiener and total-variation-based filters for Gaussian noise, and with the state-of-the-art BM3D method on several images. The algorithm is simple, fast, and comparable with many classic denoising algorithms for Gaussian noise.
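
    A minimal sketch of the iteration described, adjusting each interior pixel by a fraction of its 3x3 Laplacian, is given below; a constant step stands in for the paper's local noise estimator, which would modulate the step per pixel (an assumption for illustration):

```python
def laplacian_denoise(img, iterations, step):
    """Repeatedly add a fraction of each interior pixel's 3x3
    Laplacian (discrete heat diffusion); border pixels are left
    untouched in this sketch."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for _ in range(iterations):
        nxt = [row[:] for row in out]
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                lap = (out[i - 1][j] + out[i + 1][j] + out[i][j - 1]
                       + out[i][j + 1] - 4 * out[i][j])
                nxt[i][j] = out[i][j] + step * lap
        out = nxt
    return out
```

    A single iteration with step 0.25 flattens an isolated spike toward its neighborhood; in the paper's scheme the per-pixel noise estimate would keep genuine edges from being diffused the same way.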

  5. Using the choice experiment method in the design of breeding goals in dairy sheep.

    PubMed

    Ragkos, A; Abas, Z

    2015-02-01

    Market failures are the main cause of poor acknowledgement of the true impact of functional sheep traits on the management and economic performance of farms, which results in their omission from the breeding goal or the estimation of non-representative economic weights in the breeding goal. Consequently, stated-preference non-market valuation techniques, which recently emerged to mitigate these problems, are necessary to estimate economic weights for functional traits. The purpose of this paper is to present an example of the use of a choice experiment (CE) in the estimation of economic weights for sheep traits for the design of breeding goals. Through a questionnaire survey the preferences of sheep farmers are recorded and their marginal willingness to pay (MWTP) for 10 production and functional traits is estimated. Data are analysed using random parameter logit models. The results reveal unobserved preference heterogeneity for fertility, adaptability to grazing and resistance to disease, thus highlighting that these traits are appreciated differently by farmers, because their needs are diverse. Positive MWTP is found for Greek breeds, high milk production and lambs with low fat deposition, for which there is high demand in Greek markets. On the other hand, MWTP for the cheese-making ability of milk is negative, stemming from the fact that sheep milk prices in Greece are not formulated according to milk composition. In addition, farmers seem to understand differences between udder shapes and attribute different values to various types. This application of the CE method indicates that communication channels among farmers and breeders should be established in order to enhance market performance and to provide orientation to the design of breeding programmes. Non-market valuation can be used complementarily to market valuation techniques, in order to provide accurate estimates for production and functional traits.

  6. Comparison of the morphometric features of the left and right horse kidneys: a stereological approach.

    PubMed

    Bolat, D; Bahar, S; Tipirdamaz, S; Selcuk, M L

    2013-12-01

    The aims of this study were to determine the total volume of the horse kidney and the volume fractions of its functional subcomponents (cortex, medulla, renal pelvis) using stereological methods and to investigate any possible difference in the functional subcomponents of the right and left kidneys that may arise from differences in shape. The study was carried out on the kidneys of 5 horses of different breeds and sexes. The weight of the kidneys was measured by a digital scale, and kidney volume was calculated by Archimedes' principle. Total kidney volume and volume fractions of subcomponents of the right and left kidneys were estimated by Cavalieri's principle. The weights of the right and left kidneys were 550 ± 25 g and 585 ± 23 g, respectively. The volumes of the right and left kidneys estimated using the Cavalieri method were 542 ± 46 ml and 581 ± 29 ml. The relative organ weight of the kidneys was calculated as 1:330. The densities of the right and left kidneys were determined to be 1.01 and 1.00, respectively. The mean volume fractions of the cortex, medulla and renal pelvis were determined as 55.6%, 42.7% and 1.7%, respectively, in both kidneys. No statistically significant difference existed between morphometric data pertaining to the right and left kidneys (P > 0.05). To determine precisely whether differences in shape cause any difference in the functional subcomponents of the right and left kidneys requires further investigation of differences in the number of microscopic functional units of the kidney, such as renal glomeruli and nephrons. © 2013 Blackwell Verlag GmbH.

  7. A Weighted Deep Representation Learning Model for Imbalanced Fault Diagnosis in Cyber-Physical Systems.

    PubMed

    Wu, Zhenyu; Guo, Yang; Lin, Wenfang; Yu, Shuyang; Ji, Yang

    2018-04-05

    Predictive maintenance plays an important role in modern Cyber-Physical Systems (CPSs) and data-driven methods have been a worthwhile direction for Prognostics Health Management (PHM). However, two main challenges have significant influences on the traditional fault diagnostic models: one is that extracting hand-crafted features from multi-dimensional sensors with internal dependencies depends too much on expertise knowledge; the other is that imbalance pervasively exists among faulty and normal samples. As deep learning models have proved to be good methods for automatic feature extraction, the objective of this paper is to study an optimized deep learning model for imbalanced fault diagnosis for CPSs. Thus, this paper proposes a weighted Long Recurrent Convolutional LSTM model with sampling policy (wLRCL-D) to deal with these challenges. The model consists of 2-layer CNNs, 2-layer inner LSTMs and 2-Layer outer LSTMs, with under-sampling policy and weighted cost-sensitive loss function. Experiments are conducted on PHM 2015 challenge datasets, and the results show that wLRCL-D outperforms other baseline methods.

  8. A Weighted Deep Representation Learning Model for Imbalanced Fault Diagnosis in Cyber-Physical Systems

    PubMed Central

    Guo, Yang; Lin, Wenfang; Yu, Shuyang; Ji, Yang

    2018-01-01

    Predictive maintenance plays an important role in modern Cyber-Physical Systems (CPSs), and data-driven methods have been a worthwhile direction for Prognostics Health Management (PHM). However, two main challenges significantly affect traditional fault diagnostic models: first, extracting hand-crafted features from multi-dimensional sensors with internal dependencies depends too heavily on expert knowledge; second, imbalance pervasively exists between faulty and normal samples. As deep learning models have proved to be good methods for automatic feature extraction, the objective of this paper is to study an optimized deep learning model for imbalanced fault diagnosis in CPSs. Thus, this paper proposes a weighted Long Recurrent Convolutional LSTM model with sampling policy (wLRCL-D) to deal with these challenges. The model consists of 2-layer CNNs, 2-layer inner LSTMs and 2-layer outer LSTMs, with an under-sampling policy and a weighted cost-sensitive loss function. Experiments are conducted on the PHM 2015 challenge datasets, and the results show that wLRCL-D outperforms other baseline methods. PMID:29621131

  9. Double inverse-weighted estimation of cumulative treatment effects under nonproportional hazards and dependent censoring.

    PubMed

    Schaubel, Douglas E; Wei, Guanghui

    2011-03-01

    In medical studies of time-to-event data, nonproportional hazards and dependent censoring are very common issues when estimating the treatment effect. A traditional method for dealing with time-dependent treatment effects is to model the time-dependence parametrically. Limitations of this approach include the difficulty of verifying the correctness of the specified functional form and the fact that, in the presence of a treatment effect that varies over time, investigators are usually interested in the cumulative rather than the instantaneous treatment effect. In many applications, censoring time is not independent of event time. Therefore, we propose methods for estimating the cumulative treatment effect in the presence of nonproportional hazards and dependent censoring. Three measures are proposed, including the ratio of cumulative hazards, relative risk, and difference in restricted mean lifetime. For each measure, we propose a double inverse-weighted estimator, constructed by first using inverse probability of treatment weighting (IPTW) to balance the treatment-specific covariate distributions, then using inverse probability of censoring weighting (IPCW) to overcome the dependent censoring. The proposed estimators are shown to be consistent and asymptotically normal. We study their finite-sample properties through simulation. The proposed methods are used to compare kidney wait-list mortality by race. © 2010, The International Biometric Society.
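    The core of the double inverse weighting can be sketched as the product of the IPTW and IPCW factors applied to each subject; the probabilities below are toy values, and the full estimator in the paper involves considerably more machinery:

    ```python
    # Sketch of the double inverse weight combining IPTW and IPCW
    # (toy numbers; not the full estimator from the paper).
    def double_inverse_weight(p_treatment, p_uncensored):
        """Weight a subject by 1 / (P(observed treatment) * P(still uncensored))."""
        return 1.0 / (p_treatment * p_uncensored)

    # A subject with a 0.5 propensity for the observed treatment arm and
    # a 0.8 probability of remaining uncensored at the event time:
    w = double_inverse_weight(0.5, 0.8)
    print(w)
    ```

    Subjects who were unlikely to receive their observed treatment, or unlikely to remain uncensored, get up-weighted to stand in for similar subjects who are missing from the observed data.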

  10. Quantization and training of object detection networks with low-precision weights and activations

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie

    2018-01-01

    As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need for deploying these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of weights and activations of each layer are first directly estimated using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of weights and activations. As the most computationally expensive convolutions can be replaced by effective fixed point operations, the proposed method can drastically reduce computation complexity and memory footprint. Applied to the tiny you-only-look-once (YOLO) and YOLO architectures, the proposed method achieves comparable accuracy to their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision (mAP) of 62.6% and 63.9%, respectively, on the Pascal visual object classes 2012 test dataset. The mAP of the 32-bit full-precision baseline model is 64.0%.
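    Uniform quantization of weights to n bits can be sketched as follows; the simple min-max interval used here is an assumption for illustration, not the distribution-adaptive interval and step-size rule the paper proposes:

    ```python
    # Minimal sketch of uniform n-bit weight quantization over a fixed
    # interval [w_min, w_max] (a min-max rule, not the paper's adaptive one).
    def quantize(weights, n_bits, w_min, w_max):
        levels = 2 ** n_bits - 1                   # number of steps in the grid
        step = (w_max - w_min) / levels
        out = []
        for w in weights:
            w = min(max(w, w_min), w_max)          # clip to the interval
            q = round((w - w_min) / step)          # index of nearest level
            out.append(w_min + q * step)           # dequantized value
        return out

    print(quantize([-0.7, 0.05, 0.6], 4, -1.0, 1.0))
    ```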

  11. Does participation in a weight control program also improve clinical and functional outcomes for Chinese patients with schizophrenia treated with olanzapine?

    PubMed Central

    Montgomery, William; Treuer, Tamas; Ye, Wenyu; Xue, Hai Bo; Wu, Sheng Hu; Liu, Li; Kadziola, Zbigniew; Stensland, Michael D; Ascher-Svanum, Haya

    2014-01-01

    Objectives This study examined whether participation in a weight control program (WCP) by patients with schizophrenia treated with olanzapine was also associated with improvements in clinical and functional outcomes. Methods A post-hoc analysis was conducted using data from the Chinese subgroup (n=330) of a multi-country, 6-month, prospective, observational study of outpatients with schizophrenia who initiated or switched to oral olanzapine. At study entry and monthly visits, participants were assessed with the Clinical Global Impression of Severity, and measures of patient insight, social activities, and work impairment. The primary comparison was between the 153 patients who participated in a WCP at study entry (n=93) or during the study (n=60) and the 177 patients who did not participate in a weight control program (non-WCP). Mixed Models for Repeated Measures with baseline covariates were used to compare outcomes over time. Kaplan–Meier survival analysis was used to assess time to response. Results Participants had a mean age of 29.0 years and 29.3 years, and 51.0% and 57.6% were female for WCP and non-WCP groups, respectively. Average initiated daily dose for olanzapine was 9.5±5.4 mg. WCP participants gained less weight than non-participants (3.9 kg vs 4.9 kg, P=0.03) and showed statistically significant better clinical and functional outcomes: greater improvement in illness severity (−2.8 vs −2.1, P<0.001), higher treatment response rates (94.1% vs 80.9%, P<0.001), shorter time to response (P<0.001), and greater improvement in patients’ insight (P<0.001). Patients who enrolled in a WCP during the study had greater initial weight gain than those who enrolled at baseline (P<0.05), but similar total weight gain. Conclusion Participation in a WCP may not only lower the risk of clinically significant weight gain in olanzapine-treated patients, but may also be associated with additional clinical and functional benefits. PMID:25031537

  12. Gait functional assessment: Spatio-temporal analysis and classification of barefoot plantar pressure in a group of 11-12-year-old children.

    PubMed

    Latour, Ewa; Latour, Marek; Arlet, Jarosław; Adach, Zdzisław; Bohatyrewicz, Andrzej

    2011-07-01

    Analysis of pedobarographical data requires geometric identification of specific anatomical areas extracted from recorded plantar pressures. This approach has led to ambiguity in measurements that may underlie the inconsistency of conclusions reported in pedobarographical studies. The goal of this study was to design a new analysis method less susceptible to the projection accuracy of anthropometric points and distance estimation, based on rarely used spatio-temporal indices. Six pedobarographic records per person (three per foot) from a group of 60 children aged 11-12 years were obtained and analyzed. The basis of the analysis was a mutual relationship between two spatio-temporal indices created by excursion of the peak pressure point and the center-of-pressure point on the dynamic pedobarogram. Classification of weight-shift patterns was elaborated and performed, and their frequencies of occurrence were assessed. This new method allows an assessment of body weight shift through the plantar pressure surface based on distribution analysis of spatio-temporal indices not affected by the shape of this surface. Analysis of the distribution of the created index confirmed the existence of typical ways of weight shifting through the plantar surface of the foot during gait, as well as large variability of the intrasubject occurrence. This method may serve as the basis for interpretation of foot functional features and may extend the clinical usefulness of pedobarography. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. The relationship between obesity and neurocognitive function in Chinese patients with schizophrenia

    PubMed Central

    2013-01-01

    Background Studies have reported that up to 60% of individuals with schizophrenia are overweight or obese. This study explored the relationship between obesity and cognitive performance in Chinese patients with schizophrenia. Methods Outpatients with schizophrenia aged 18–50 years were recruited from 10 study sites across China. Demographic and clinical information was collected. A neuropsychological battery including tests of attention, processing speed, learning/memory, and executive functioning was used to assess cognitive function, and these 4 individual domains were transformed into a neurocognitive composite z score. In addition, height and weight were measured to calculate body mass index (BMI). Patients were categorized into 4 groups (underweight, normal weight, overweight and obese) based on BMI cutoff values for Asian populations recommended by the World Health Organization. Results A total number of 896 patients were enrolled into the study. Fifty-four percent of participants were overweight or obese. A higher BMI was significantly associated with lower scores on the Wechsler Memory Scale-Revised (WMS-R) Visual Reproduction subscale, the Wechsler Adult Intelligence Scale-Revised (WAIS-R) Digit Symbol subscale, and the composite z score (p’s ≤ 0.024). Obese patients with schizophrenia had significantly lower scores than normal weight patients on the Trail Making Test B, the WMS-R Visual Reproduction subscale, the WAIS Digit Symbol subscale, and the composite z score (p’s ≤ 0.004). Conclusions Our study suggests that, in addition to its well established risk for various cardiometabolic conditions, obesity is also associated with decreased cognitive function in Chinese patients with schizophrenia. Future studies should explore if weight loss and management can improve cognitive function in obese patients who suffer from schizophrenia. PMID:23570390

  14. Identification and characterisation of midbrain nuclei using optimised functional magnetic resonance imaging

    PubMed Central

    Limbrick-Oldfield, Eve H.; Brooks, Jonathan C.W.; Wise, Richard J.S.; Padormo, Francesco; Hajnal, Jo V.; Beckmann, Christian F.; Ungless, Mark A.

    2012-01-01

    Localising activity in the human midbrain with conventional functional MRI (fMRI) is challenging because the midbrain nuclei are small and located in an area that is prone to physiological artefacts. Here we present a replicable and automated method to improve the detection and localisation of midbrain fMRI signals. We designed a visual fMRI task that was predicted would activate the superior colliculi (SC) bilaterally. A limited number of coronal slices were scanned, orientated along the long axis of the brainstem, whilst simultaneously recording cardiac and respiratory traces. A novel anatomical registration pathway was used to optimise the localisation of the small midbrain nuclei in stereotactic space. Two additional structural scans were used to improve registration between functional and structural T1-weighted images: an echo-planar image (EPI) that matched the functional data but had whole-brain coverage, and a whole-brain T2-weighted image. This pathway was compared to conventional registration pathways, and was shown to significantly improve midbrain registration. To reduce the physiological artefacts in the functional data, we estimated and removed structured noise using a modified version of a previously described physiological noise model (PNM). Whereas a conventional analysis revealed only unilateral SC activity, the PNM analysis revealed the predicted bilateral activity. We demonstrate that these methods improve the measurement of a biologically plausible fMRI signal. Moreover they could be used to investigate the function of other midbrain nuclei. PMID:21867762

  15. Swallow Characteristics in Patients with Oculopharyngeal Muscular Dystrophy

    ERIC Educational Resources Information Center

    Palmer, Phyllis M.; Neel, Amy T.; Sprouls, Gwyneth; Morrison, Leslie

    2010-01-01

    Purpose: This prospective investigation evaluates oral weakness and its impact on swallow function, weight, and quality of life in patients with oculopharyngeal muscular dystrophy (OPMD). Method: Intraoral pressure, swallow pressure, and endurance were measured using an Iowa Oral Performance Instrument in participants with OPMD and matched…

  16. SIMULATION OF DISPERSION OF A POWER PLANT PLUME USING AN ADAPTIVE GRID ALGORITHM

    EPA Science Inventory

    A new dynamic adaptive grid algorithm has been developed for use in air quality modeling. This algorithm uses a higher order numerical scheme, the piecewise parabolic method (PPM), for computing advective solution fields; a weight function capable of promoting grid node clustering ...

  17. THRESHOLD ELEMENTS AND THE DESIGN OF SEQUENTIAL SWITCHING NETWORKS.

    DTIC Science & Technology

    The report covers research performed from March 1966 to March 1967. The major topics treated are: (1) methods for finding weight-threshold vectors ... that realize a given switching function in multi-threshold linear logic; (2) synthesis of sequential machines by means of shift registers and simple

  18. H∞ and H2 nonquadratic stabilisation of discrete-time Takagi-Sugeno systems based on multi-instant fuzzy Lyapunov functions

    NASA Astrophysics Data System (ADS)

    Tognetti, Eduardo S.; Oliveira, Ricardo C. L. F.; Peres, Pedro L. D.

    2015-01-01

    The problem of state feedback control design for discrete-time Takagi-Sugeno (T-S) fuzzy systems is investigated in this paper. A Lyapunov function, which is quadratic in the state and presents a multi-polynomial dependence on the fuzzy weighting functions at the current and past instants of time, is proposed. This function contains, as particular cases, other Lyapunov functions already used in the literature, and is able to provide less conservative conditions for control design for T-S fuzzy systems. The structure of the proposed Lyapunov function also motivates the design of a new stabilising compensator for T-S fuzzy systems. The main novelty of the proposed state feedback control law is that the gain is composed of matrices with multi-polynomial dependence on the fuzzy weighting functions at a set of past instants of time, including the current one. The conditions for the existence of a stabilising state feedback control law that minimises an upper bound to the H∞ or H2 norms are given in terms of linear matrix inequalities. Numerical examples show that the approach can be less conservative and more efficient than other methods available in the literature.

  19. lop-DWI: A Novel Scheme for Pre-Processing of Diffusion-Weighted Images in the Gradient Direction Domain.

    PubMed

    Sepehrband, Farshid; Choupan, Jeiran; Caruyer, Emmanuel; Kurniawan, Nyoman D; Gal, Yaniv; Tieng, Quang M; McMahon, Katie L; Vegh, Viktor; Reutens, David C; Yang, Zhengyi

    2014-01-01

    We describe and evaluate a pre-processing method based on a periodic spiral sampling of diffusion-gradient directions for high angular resolution diffusion magnetic resonance imaging. Our pre-processing method incorporates prior knowledge about the acquired diffusion-weighted signal, facilitating noise reduction. Periodic spiral sampling of gradient direction encodings results in an acquired signal in each voxel that is pseudo-periodic, with characteristics that allow separation of the low-frequency signal from high-frequency noise. Consequently, it enhances local reconstruction of the orientation distribution function used to define fiber tracks in the brain. Denoising with periodic spiral sampling was tested using synthetic data and in vivo human brain images. Both the signal-to-noise ratio and the accuracy of local fiber-track reconstruction were significantly improved using our method.

  20. Generalized weighted ratio method for accurate turbidity measurement over a wide range.

    PubMed

    Liu, Hongbo; Yang, Ping; Song, Hong; Guo, Yilu; Zhan, Shuyue; Huang, Hui; Wang, Hangzhou; Tao, Bangyi; Mu, Quanquan; Xu, Jing; Li, Dejun; Chen, Ying

    2015-12-14

    Turbidity measurement is important for water quality assessment, food safety, medicine, ocean monitoring, etc. In this paper, a method that accurately estimates the turbidity over a wide range is proposed, where the turbidity of the sample is represented as a weighted ratio of the scattered light intensities at a series of angles. An improvement in the accuracy is achieved by expanding the structure of the ratio function, thus adding more flexibility to the turbidity-intensity fitting. Experiments have been carried out with an 850 nm laser and a power meter fixed on a turntable to measure the light intensity at different angles. The results show that the relative estimation error of the proposed method is 0.58% on average for a four-angle intensity combination for all test samples with a turbidity ranging from 160 NTU to 4000 NTU.
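    The weighted-ratio idea can be sketched as a ratio of weighted sums of scattered intensities at several angles; the intensities and weights below are invented for illustration, whereas in the paper the weights are fitted against calibration samples:

    ```python
    # Sketch of a generalized weighted ratio: turbidity is modelled as a
    # ratio of weighted sums of scattered-light intensities at several
    # angles (all numbers below are illustrative, not calibration data).
    def weighted_ratio_turbidity(intensities, num_weights, den_weights):
        num = sum(w * i for w, i in zip(num_weights, intensities))
        den = sum(w * i for w, i in zip(den_weights, intensities))
        return num / den

    # Intensities at four measurement angles (arbitrary units):
    I = [1.2, 0.8, 0.5, 0.3]
    print(weighted_ratio_turbidity(I, [10.0, 5.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]))
    ```

    Expanding the ratio to weighted sums over several angles is what adds the extra flexibility to the turbidity-intensity fit compared with a single-angle ratio.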

  1. An image mosaic method based on corner

    NASA Astrophysics Data System (ADS)

    Jiang, Zetao; Nie, Heting

    2015-08-01

    To address the shortcomings of traditional image mosaic methods, this paper describes a new mosaic algorithm based on Harris corners. Firstly, a Harris operator, combined with a low-pass smoothing filter constructed from spline functions and a circular window search, is applied to detect image corners; this gives better localisation performance and effectively avoids corner clustering. Secondly, correlation-based feature registration is used to find registration pairs, and false registrations are removed using random sample consensus. Finally, weighted trigonometric blending combined with an interpolation function is used for image fusion. The experiments show that this method can effectively remove splicing ghosting and improve the accuracy of the image mosaic.
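    Weighted blending in the overlap region of two registered images can be sketched as follows (a 1-D row with a linear weight ramp, which is one common choice rather than the exact weighted trigonometric function used in the paper):

    ```python
    # Sketch of weighted blending across an overlap region: the weight of
    # image B ramps from 0 to 1 left-to-right, so the seam fades smoothly.
    def blend_overlap(row_a, row_b):
        n = len(row_a)
        out = []
        for i in range(n):
            w = i / (n - 1)                      # 0 at left edge, 1 at right edge
            out.append((1 - w) * row_a[i] + w * row_b[i])
        return out

    print(blend_overlap([100, 100, 100], [200, 200, 200]))  # [100.0, 150.0, 200.0]
    ```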

  2. COMPARISONS OF THE FINITE-ELEMENT-WITH-DISCONTIGUOUS-SUPPORT METHOD TO CONTINUOUS-ENERGY MONTE CARLO FOR PIN-CELL PROBLEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A. T. Till; M. Hanuš; J. Lou

    The standard multigroup (MG) method for energy discretization of the transport equation can be sensitive to approximations in the weighting spectrum chosen for cross-section averaging. As a result, MG often inaccurately treats important phenomena such as self-shielding variations across a material. From a finite-element viewpoint, MG uses a single fixed basis function (the pre-selected spectrum) within each group, with no mechanism to adapt to local solution behavior. In this work, we introduce the Finite-Element-with-Discontiguous-Support (FEDS) method, whose only approximation with respect to energy is that the angular flux is a linear combination of unknowns multiplied by basis functions. A basis function is non-zero only in the discontiguous set of energy intervals associated with its energy element. Discontiguous energy elements are generalizations of bands and are determined by minimizing a norm of the difference between snapshot spectra and their averages over the energy elements. We begin by presenting the theory of the FEDS method. We then compare to continuous-energy Monte Carlo for one-dimensional slab and two-dimensional pin-cell problems. We find FEDS to be accurate and efficient at producing quantities of interest such as reaction rates and eigenvalues. Results show that FEDS converges at a rate that is approximately first-order in the number of energy elements and that FEDS is less sensitive to the weighting spectrum than standard MG.

  3. Multiphonon contribution to the polaron formation in cuprates with strong electron correlations and strong electron-phonon interaction

    NASA Astrophysics Data System (ADS)

    Ovchinnikov, Sergey G.; Makarov, Ilya A.; Kozlov, Peter A.

    2017-03-01

    In this work, the dependence of the electron band structure and spectral function in HTSC cuprates on the magnitude of the electron-phonon interaction (EPI) and on temperature is investigated. We use a three-band p-d model with diagonal and off-diagonal EPI with breathing and buckling phonon modes in the framework of the polaronic version of the generalized tight binding (GTB) method. The polaronic quasiparticle excitation in a system with EPI within this approach is formed by hybridization of the local multiphonon Franck-Condon excitations with the lower and upper Hubbard bands. Increasing EPI leads to a transfer of spectral weight to high-energy multiphonon excitations and to broadening of the spectral function. Temperature effects are taken into account through the occupation numbers of local excited polaronic states and variations in the magnitude of the spin-spin correlation functions. Increasing the temperature results in band structure reconstruction, spectral weight redistribution, broadening of the spectral function peak at the top of the valence band, and a decrease in the peak intensity. The effect of EPI with two phonon modes on the polaron spectral function is discussed.

  4. A Weighted Multipath Measurement Based on Gene Ontology for Estimating Gene Products Similarity

    PubMed Central

    Liu, Lizhen; Dai, Xuemin; Song, Wei; Lu, Jingli

    2014-01-01

    Many different methods have been proposed for calculating the semantic similarity of term pairs based on the gene ontology (GO). Most existing methods are based on information content (IC), and IC-based methods are used more commonly than those based on the structure of the GO. However, most IC-based methods not only fail to handle identical annotations but also show a strong bias toward well-annotated proteins. We propose a new method called weighted multipath measurement (WMM) for estimating the semantic similarity of gene products based on the structure of the GO. We not only considered the contribution of every path between two GO terms but also took the depth of the lowest common ancestors into account. We assigned different weights to different kinds of edges in the GO graph. The similarity values calculated by WMM can be reused because they depend only on the characteristics of the GO terms. Experimental results showed that the similarity values obtained by WMM have higher accuracy. We compared the performance of WMM with that of other methods using GO data and gene annotation datasets for yeast and humans downloaded from the GO database. We found that WMM is better suited for prediction of gene function than most existing IC-based methods and that it can distinguish proteins with identical annotations (two proteins annotated with the same terms) from each other. PMID:25229994
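    A heavily simplified sketch of the multipath idea: every path between two terms contributes a product of its edge weights, and the contributions are aggregated. The edge weights and the averaging rule below are illustrative assumptions, not WMM's actual definitions:

    ```python
    # Illustrative multipath term similarity: each path between two GO
    # terms contributes the product of its edge weights (made-up values).
    def multipath_similarity(paths):
        """paths: list of per-path edge-weight lists between two terms."""
        score = 0.0
        for edge_weights in paths:
            contrib = 1.0
            for w in edge_weights:
                contrib *= w
            score += contrib
        return score / len(paths)

    # Two paths between a hypothetical pair of GO terms:
    print(multipath_similarity([[0.8, 0.8], [0.6, 0.9, 0.7]]))
    ```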

  5. An increase in visceral fat is associated with a decrease in the taste and olfactory capacity

    PubMed Central

    Fernandez-Garcia, Jose Carlos; Alcaide, Juan; Santiago-Fernandez, Concepcion; Roca-Rodriguez, MM.; Aguera, Zaida; Baños, Rosa; Botella, Cristina; de la Torre, Rafael; Fernandez-Real, Jose M.; Fruhbeck, Gema; Gomez-Ambrosi, Javier; Jimenez-Murcia, Susana; Menchon, Jose M.; Casanueva, Felipe F.; Fernandez-Aranda, Fernando; Tinahones, Francisco J.; Garrido-Sanchez, Lourdes

    2017-01-01

    Introduction Sensory factors may play an important role in the determination of appetite and food choices. Also, some adipokines may alter or predict the perception and pleasantness of specific odors. We aimed to analyze differences in smell–taste capacity between females with different weights and relate them to fat and fat-free mass, visceral fat, and several adipokines. Materials and methods 179 females with different weights (from low weight to morbid obesity) were studied. We analyzed the relation between fat, fat-free mass, visceral fat (indirectly estimated by bioelectrical impedance analysis with visceral fat rating (VFR)), leptin, adiponectin and visfatin. The smell and taste assessments were performed through the "Sniffin’ Sticks" and "Taste Strips" respectively. Results We found a lower score in the measurement of smell (TDI-score (Threshold, Discrimination and Identification)) in obese subjects. All the olfactory functions measured, such as threshold, discrimination, identification and the TDI-score, correlated negatively with age, body mass index (BMI), leptin, fat mass, fat-free mass and VFR. In a multiple linear regression model, VFR mainly predicted the TDI-score. With regard to the taste function measurements, the normal weight subjects showed a higher score of taste functions. However, a tendency to decrease was observed in the groups with greater or lesser BMI. In a multiple linear regression model VFR and age mainly predicted the total taste scores. Discussion We show for the first time that a reverse relationship exists between visceral fat and sensory signals, such as smell and taste, across a population with different body weight conditions. PMID:28158237

  6. Mass Spectrometry Imaging of low Molecular Weight Compounds in Garlic (Allium sativum L.) with Gold Nanoparticle Enhanced Target.

    PubMed

    Misiorek, Maria; Sekuła, Justyna; Ruman, Tomasz

    2017-11-01

    Garlic (Allium sativum) is the subject of many studies due to its numerous beneficial properties. Although the compounds of garlic have been studied by various analytical methods, their tissue distributions are still unclear. Mass spectrometry imaging (MSI) appears to be a very powerful tool for identifying the localisation of compounds within a garlic clove. The objective was visualisation of the spatial distribution of low-molecular-weight garlic compounds with nanoparticle-based MSI. Compounds occurring on the cross-section of sprouted garlic have been transferred to a gold-nanoparticle enhanced target (AuNPET) by imprinting. The imprint was then subjected to MSI analysis. The results suggest that low-molecular-weight compounds, such as amino acids, dipeptides, fatty acids, and organosulphur and organoselenium compounds, are distributed within the garlic clove in a characteristic manner. This can be connected with their biological functions and metabolic properties in the plant. The new methodology for the visualisation of low-molecular-weight compounds allowed a correlation to be made between their spatial distribution within a sprouted garlic clove and their biological function. Copyright © 2017 John Wiley & Sons, Ltd.

  7. A simplified approach for slope stability analysis of uncontrolled waste dumps.

    PubMed

    Turer, Dilek; Turer, Ahmet

    2011-02-01

    Slope stability analysis of municipal solid waste has always been problematic because of the heterogeneous nature of the waste materials. The requirement for large testing equipment in order to obtain representative samples has highlighted the need for simplified approaches to obtaining the unit weight and shear strength parameters of the waste. In the present study, two of the most recently published approaches for determining the unit weight and shear strength parameters of the waste have been incorporated into a slope stability analysis using the Bishop method to prepare slope stability charts. The slope stability charts were prepared for uncontrolled waste dumps having no liner and no leachate collection system, with pore pressure ratios of 0, 0.1, 0.2, 0.3, 0.4 and 0.5, considering the most critical slip surface passing through the toe of the slope. As the proposed slope stability charts were prepared by considering the change in unit weight as a function of height, they reflect field conditions better than a constant unit weight approach to the stability analysis. They also streamline the selection of slope or height as a function of the desired factor of safety.
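    A depth-dependent unit weight profile can be sketched as below; the hyperbolic form and all coefficients are illustrative assumptions, not the published relationships used in the study:

    ```python
    # Illustrative sketch only: waste unit weight increasing with depth z,
    # of a generic hyperbolic form (invented coefficients, not published ones).
    def unit_weight(z_m, gamma0=10.0, a=2.0, b=0.1):
        """Unit weight (kN/m^3) at depth z_m, approaching gamma0 + 1/b at depth."""
        return gamma0 + z_m / (a + b * z_m)

    depths = [0, 5, 10, 20, 40]
    print([round(unit_weight(z), 2) for z in depths])
    ```

    The point captured by the sketch is that near-surface waste is loose and light while deeper waste is compressed and heavier, which is why height-dependent unit weight reflects field conditions better than a single constant value.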

  8. A Precise Drunk Driving Detection Using Weighted Kernel Based on Electrocardiogram.

    PubMed

    Wu, Chung Kit; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei

    2016-05-09

    Globally, 1.2 million people die and 50 million people are injured annually due to traffic accidents, at a cost of $500 billion. Drunk drivers are involved in 40% of traffic crashes. Existing drunk driving detection (DDD) systems do not provide accurate detection and pre-warning concurrently. The electrocardiogram (ECG) is a proven biosignal that accurately and simultaneously reflects a human's biological status. In this letter, a classifier for DDD based on ECG is investigated in an attempt to reduce traffic accidents caused by drunk drivers. To date, there appears to be no known research or literature on an ECG-based classifier for DDD. To identify drunk syndromes, the ECG signals from drunk drivers are studied and analyzed. On this basis, a precise ECG-based DDD (ECG-DDD) using a weighted kernel is developed. From the measurements, 10 key features of ECG signals were identified. To incorporate the important features, the feature vectors are weighted in the customization of the kernel functions. Four commonly adopted kernel functions are studied. Results reveal that weighted feature vectors improve the accuracy by 11% compared to the computation using the prime kernel. Evaluation shows that ECG-DDD improved the accuracy by 8% to 18% compared to prevailing methods.
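    Feature-vector weighting inside a kernel can be sketched as a weighted RBF kernel, where each feature dimension is scaled by its importance before the distance is computed. The feature values and weights below are illustrative, not the learned ones from the paper:

    ```python
    import math

    # Sketch of a feature-weighted RBF kernel: important ECG features get
    # larger weights in the squared distance (weights here are invented).
    def weighted_rbf(x, y, weights, gamma=1.0):
        d2 = sum(w * (xi - yi) ** 2 for w, xi, yi in zip(weights, x, y))
        return math.exp(-gamma * d2)

    x = [0.8, 0.3, 0.5]   # hypothetical normalized ECG features
    y = [0.6, 0.3, 0.1]
    print(weighted_rbf(x, y, weights=[2.0, 1.0, 0.5]))
    ```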

  9. A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

    PubMed Central

    Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953

  11. Kindergarten classroom functioning of extremely preterm/extremely low birth weight children.

    PubMed

    Wong, Taylor; Taylor, H Gerry; Klein, Nancy; Espy, Kimberly A; Anselmo, Marcia G; Minich, Nori; Hack, Maureen

    2014-12-01

    Cognitive, behavioral, and learning problems are evident in extremely preterm/extremely low birth weight (EPT/ELBW, <28 weeks gestational age or <1000 g) children by early school age. However, we know little about how they function within the classroom once they start school. We aimed to determine how EPT/ELBW children function in kindergarten classrooms compared to term-born normal birth weight (NBW) classmates and to identify factors related to difficulties in classroom functioning. A 2001-2003 birth cohort of 111 EPT/ELBW children and 110 NBW classmate controls were observed in regular kindergarten classrooms during a 1-hour instructional period using a time-sample method. The groups were compared on frequencies of individual teacher attention, competing or off-task behaviors, task management/preparation, and academic responding. Regression analysis was also conducted within the EPT/ELBW group to examine associations of these measures with neonatal and developmental risk factors, kindergarten neuropsychological and behavioral assessments, and classroom characteristics. The EPT/ELBW group received more individual teacher attention and was more often off-task than the NBW controls. Poorer classroom functioning in the EPT/ELBW group was associated with higher neonatal and developmental risk, poorer executive function skills, more negative teacher ratings of behavior and learning progress, and classroom characteristics. EPT/ELBW children require more teacher support and are less able to engage in instructional activities than their NBW classmates. Associations of classroom functioning with developmental history and cognitive and behavioral traits suggest that these factors may be useful in identifying the children most in need of special educational interventions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Co-occurring exposure to perchlorate, nitrate and thiocyanate alters thyroid function in healthy pregnant women

    PubMed Central

    Horton, Megan K.; Blount, Benjamin C.; Valentin-Blasini, Liza; Wapner, Ronald; Whyatt, Robin; Gennings, Chris; Factor-Litvak, Pam

    2015-01-01

    Background Adequate maternal thyroid function during pregnancy is necessary for normal fetal brain development, making pregnancy a critical window of vulnerability to thyroid disrupting insults. Sodium/iodide symporter (NIS) inhibitors, namely perchlorate, nitrate, and thiocyanate, have been shown individually to competitively inhibit uptake of iodine by the thyroid. Several epidemiologic studies examined the association between these individual exposures and thyroid function. Few studies have examined the effect of this chemical mixture on thyroid function during pregnancy. Objectives We examined the cross sectional association between urinary perchlorate, thiocyanate and nitrate concentrations and thyroid function among healthy pregnant women living in New York City using weighted quantile sum (WQS) regression. Methods We measured thyroid stimulating hormone (TSH) and free thyroxine (FreeT4) in blood samples; perchlorate, thiocyanate, nitrate and iodide in urine samples collected from 284 pregnant women at 12 (± 2.8) weeks gestation. We examined associations between urinary analyte concentrations and TSH or FreeT4 using linear regression or WQS adjusting for gestational age, urinary iodide and creatinine. Results Individual analyte concentrations in urine were significantly correlated (Spearman’s r 0.4–0.5, p < 0.001). Linear regression analyses did not suggest associations between individual concentrations and thyroid function. The WQS revealed a significant positive association between the weighted sum of urinary concentrations of the three analytes and increased TSH. Perchlorate had the largest weight in the index, indicating the largest contribution to the WQS. Conclusions Co-exposure to perchlorate, nitrate and thiocyanate may alter maternal thyroid function, specifically TSH, during pregnancy. PMID:26408806
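    A weighted quantile sum index combines quantile-scored exposures with non-negative weights that sum to one. The sketch below is purely illustrative: the weights are fixed by hand (in practice they are estimated, e.g. via bootstrap within the regression), and the data are synthetic, not the study's:

```python
import numpy as np

def quantile_scores(x, n_quantiles=4):
    """Map a concentration vector to integer quantile scores 0..n_quantiles-1."""
    # np.quantile gives the interior cut points; np.searchsorted bins each value.
    cuts = np.quantile(x, np.linspace(0, 1, n_quantiles + 1)[1:-1])
    return np.searchsorted(cuts, x, side="right")

def wqs_index(exposures, weights):
    """Weighted quantile sum: weights lie on the simplex (>= 0, summing to 1)."""
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and abs(weights.sum() - 1.0) < 1e-9
    scored = np.column_stack([quantile_scores(col) for col in exposures.T])
    return scored @ weights

rng = np.random.default_rng(0)
X = rng.lognormal(size=(284, 3))   # e.g. perchlorate, nitrate, thiocyanate (synthetic)
w = np.array([0.6, 0.25, 0.15])    # illustrative weights; largest = largest contribution
idx = wqs_index(X, w)
print(idx.shape)  # (284,)
```

    The resulting index would then enter a regression of TSH on the index plus covariates; the estimated weight of each analyte (here fixed) indicates its contribution to the mixture effect.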

  13. Asymptotics of quantum weighted Hurwitz numbers

    NASA Astrophysics Data System (ADS)

    Harnad, J.; Ortmann, Janosch

    2018-06-01

    This work concerns both the semiclassical and zero temperature asymptotics of quantum weighted double Hurwitz numbers. The partition function for quantum weighted double Hurwitz numbers can be interpreted in terms of the energy distribution of a quantum Bose gas with vanishing fugacity. We compute the leading semiclassical term of the partition function for three versions of the quantum weighted Hurwitz numbers, as well as lower order semiclassical corrections. The classical limit is shown to reproduce the simple single and double Hurwitz numbers studied by Okounkov and Pandharipande (2000 Math. Res. Lett. 7 447–53, 2000 Lett. Math. Phys. 53 59–74). The KP-Toda τ-function that serves as generating function for the quantum Hurwitz numbers is shown to have the τ-function of Okounkov and Pandharipande (2000 Math. Res. Lett. 7 447–53, 2000 Lett. Math. Phys. 53 59–74) as its leading term in the classical limit, and, with suitable scaling, the same holds for the partition function, the weights and expectations of Hurwitz numbers. We also compute the zero temperature limit of the partition function and quantum weighted Hurwitz numbers. The KP or Toda τ-function serving as generating function for the quantum Hurwitz numbers are shown to give the one for Belyi curves in the zero temperature limit and, with suitable scaling, the same holds true for the partition function, the weights and the expectations of Hurwitz numbers.

  14. Relationships among body weight, joint moments generated during functional activities, and hip bone mass in older adults

    PubMed Central

    Wang, Man-Ying; Flanagan, Sean P.; Song, Joo-Eun; Greendale, Gail A.; Salem, George J.

    2012-01-01

    Objective To investigate the relationships among hip joint moments produced during functional activities and hip bone mass in sedentary older adults. Methods Eight male and eight female older adults (70–85 yr) performed functional activities including walking, chair sit–stand–sit, and stair stepping at a self-selected pace while instrumented for biomechanical analysis. Bone mass at the proximal femur, femoral neck, and greater trochanter was measured by dual-energy X-ray absorptiometry. Three-dimensional hip moments were obtained using a six-camera motion analysis system, force platforms, and inverse dynamics techniques. Pearson’s correlation coefficients were employed to assess the relationships among hip bone mass, height, weight, age, and joint moments. Stepwise regression analyses were performed to determine the factors that significantly predicted bone mass using all significant variables identified in the correlation analysis. Findings Hip bone mass was not significantly correlated with moments during activities in men. Conversely, in women, bone mass at all sites was significantly correlated with weight, moments generated with stepping, and moments generated with walking (p < 0.05 to p < 0.001). Regression analysis results further indicated that the overall moments during stepping independently predicted up to 93% of the variability in bone mass at the femoral neck and proximal femur, whereas weight independently predicted up to 92% of the variability in bone mass at the greater trochanter. Interpretation Submaximal loading events produced during functional activities were highly correlated with hip bone mass in sedentary older women, but not men. The findings may ultimately be used to modify exercise prescription for the preservation of bone mass. PMID:16631283

  15. An Astronomical Test of CCD Photometric Precision

    NASA Technical Reports Server (NTRS)

    Koch, David; Dunham, Edward; Borucki, William; Jenkins, Jon; DeVingenzi, D. (Technical Monitor)

    1998-01-01

    This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.

  16. Geodesic active fields--a geometric framework for image registration.

    PubMed

    Zosso, Dominique; Bresson, Xavier; Thiran, Jean-Philippe

    2011-05-01

    In this paper we present a novel geometric framework called geodesic active fields for general image registration. In image registration, one looks for the underlying deformation field that best maps one image onto another. This is a classic ill-posed inverse problem, which is usually solved by adding a regularization term. Here, we propose a multiplicative coupling between the registration term and the regularization term, which turns out to be equivalent to embedding the deformation field in a weighted minimal surface problem. The deformation field is then driven by a minimization flow toward a harmonic map corresponding to the solution of the registration problem. This approach shares close similarities with the well-known geodesic active contours model in image segmentation, where the segmentation term (the edge detector function) is likewise coupled with the regularization term (the length functional) via multiplication. Our geometric model is in fact the exact mathematical generalization to vector fields of the weighted length problem for curves and surfaces introduced by Caselles-Kimmel-Sapiro. The energy of the deformation field is measured with the Polyakov energy weighted by a suitable image distance, borrowed from standard registration models. We investigate three different weighting functions: the squared error and the approximated absolute error for monomodal images, and the local joint entropy for multimodal images. As compared to specialized state-of-the-art methods tailored for specific applications, our geometric framework makes several important contributions. First, our general formulation for registration works on any parametrizable, smooth and differentiable surface, including nonflat and multiscale images. In the latter case, multiscale images are registered at all scales simultaneously, and the relations between space and scale are intrinsically accounted for. Second, this method is, to the best of our knowledge, the first reparametrization-invariant registration method introduced in the literature. Third, the multiplicative coupling between the registration term, i.e. local image discrepancy, and the regularization term naturally results in a data-dependent tuning of the regularization strength. Finally, by choosing the metric on the deformation field one can freely interpolate between classic Gaussian and more interesting anisotropic, TV-like regularization.

  17. Common spatial pattern combined with kernel linear discriminate and generalized radial basis function for motor imagery-based brain computer interface applications

    NASA Astrophysics Data System (ADS)

    Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko

    2018-04-01

    Brain computer interfaces (BCI) pose a challenge for the development of robotic, prosthetic and human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP) based algorithm to detect event-related desynchronization patterns. Following well-known previous work in this area, features are extracted by the filter bank common spatial pattern (FBCSP) method and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, applying the radial basis function (RBF) as the mapping kernel of kernel linear discriminant analysis (KLDA) to the weighted features transfers the data into a higher dimension, yielding better discriminated data scattering. Afterwards, a support vector machine (SVM) with a generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. On average, 89.60% accuracy and 74.19% robustness are achieved. The BCI Competition III dataset IVa is used to evaluate the algorithm for detecting right-hand and foot imagery movement patterns. Results show that combining KLDA with the SVM-GRBF classifier yields improvements of 8.9% in accuracy and 14.19% in robustness. For all subjects, it is concluded that mapping the CSP features into a higher dimension by RBF and using GRBF as the SVM kernel improve the accuracy and reliability of the proposed method.
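    The higher-dimensional mapping at the heart of both the KLDA step and the SVM-GRBF classifier is induced by a kernel Gram matrix. A minimal sketch of the plain isotropic RBF kernel (the generalized RBF additionally learns the metric; this toy data is not the competition dataset):

```python
import numpy as np

def rbf_gram(X, Y, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2).

    The RBF kernel implicitly maps feature vectors into a higher-dimensional
    space, which is the role it plays in kernelized LDA and in an RBF-kernel SVM.
    """
    # Squared distances via the expansion ||x - y||^2 = x.x - 2 x.y + y.y.
    sq = (X ** 2).sum(1)[:, None] - 2 * X @ Y.T + (Y ** 2).sum(1)[None, :]
    return np.exp(-gamma * np.maximum(sq, 0.0))

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
K = rbf_gram(X, X, gamma=0.5)
print(np.round(K, 3))
# Diagonal entries are 1 (zero distance); entries shrink as points move apart.
```

    A kernel classifier never touches the mapped coordinates directly; it only needs this Gram matrix between training (and test) feature vectors.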

  18. Robustness of weighted networks

    NASA Astrophysics Data System (ADS)

    Bellingeri, Michele; Cassi, Davide

    2018-01-01

    Complex network response to node loss is a central question in different fields of network science, because node failure can fragment the network and thus compromise system functioning. Previous studies considered binary networks, where the intensity (weight) of the links is not accounted for, i.e. a link is either present or absent. However, in real-world networks the weights of connections, and thus their importance for network functioning, can differ widely. Here, we analyzed the response of real-world and model networks to node loss, accounting for link intensity and the weighted structure of the network. We used both classic binary node properties and network functioning measures, introduced a weighted rank for node importance (node strength), and used a measure of network functioning that accounts for the weight of the links (weighted efficiency). We find that: (i) the efficiency of the attack strategies changed when using binary or weighted network functioning measures, for both real-world and model networks; (ii) in some cases, removing nodes according to the weighted rank produced the highest damage when functioning was measured by the weighted efficiency; (iii) adopting a weighted measure of network damage changed the efficacy of the attack strategy with respect to the binary analyses. Our results show that if the weighted structure of complex networks is not taken into account, models forecasting the system response to node failure may be misleading, i.e. considering binary links may not unveil the real damage induced in the system. Last, once weighted measures are introduced, discovering the best attack strategy requires analyzing the network response to node loss using a node rank that accounts for the intensity of the links to each node.
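    A weighted efficiency of the kind described can be sketched with shortest paths. This toy implementation assumes one common convention, that link length is the reciprocal of link weight (so heavier links are "shorter"); the graph is illustrative, not from the paper:

```python
import heapq

def dijkstra(adj, src):
    """Shortest path lengths from src; adj[u] = {v: length}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def weighted_efficiency(weights):
    """Mean of 1/d over ordered node pairs, with link length = 1/weight."""
    adj = {u: {v: 1.0 / w for v, w in nbrs.items()} for u, nbrs in weights.items()}
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for u in nodes:
        dist = dijkstra(adj, u)
        total += sum(1.0 / dist[v] for v in nodes if v != u and v in dist)
    return total / (n * (n - 1))

# Triangle with one strong link (weight 4) and two weak ones (weight 1).
g = {"a": {"b": 4.0, "c": 1.0},
     "b": {"a": 4.0, "c": 1.0},
     "c": {"a": 1.0, "b": 1.0}}
print(round(weighted_efficiency(g), 4))  # 2.0
```

    Removing a node and recomputing this measure gives the weighted damage caused by that loss; ranking nodes by strength (sum of incident weights) gives the weighted attack order.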

  19. An Automated and Continuous Plant Weight Measurement System for Plant Factory

    PubMed Central

    Chen, Wei-Tai; Yeh, Yu-Hui F.; Liu, Ting-Yu; Lin, Ta-Te

    2016-01-01

    In plant factories, plants are usually cultivated in nutrient solution under a controllable environment. Plant quality and growth are closely monitored and precisely controlled. For plant growth evaluation, plant weight is an important and commonly used indicator. Traditional plant weight measurements are destructive and laborious. In order to measure and record the plant weight during plant growth, an automated measurement system was designed and developed herein. The weight measurement system comprises a weight measurement device and an imaging system. The weight measurement device consists of a top disk, a bottom disk, a plant holder and a load cell. The load cell with a resolution of 0.1 g converts the plant weight on the plant holder disk to an analog electrical signal for a precise measurement. The top disk and bottom disk are designed to be durable for different plant sizes, so plant weight can be measured continuously throughout the whole growth period, without hindering plant growth. The results show that plant weights measured by the weight measurement device are highly correlated with the weights estimated by the stereo-vision imaging system; hence, plant weight can be measured by either method. The weight growth of selected vegetables growing in the National Taiwan University plant factory were monitored and measured using our automated plant growth weight measurement system. The experimental results demonstrate the functionality, stability and durability of this system. The information gathered by this weight system can be valuable and beneficial for hydroponic plants monitoring research and agricultural research applications. PMID:27066040

  1. Monte Carlo calculation of dynamical properties of the two-dimensional Hubbard model

    NASA Technical Reports Server (NTRS)

    White, S. R.; Scalapino, D. J.; Sugar, R. L.; Bickers, N. E.

    1989-01-01

    A new method is introduced for analytically continuing imaginary-time data from quantum Monte Carlo calculations to the real-frequency axis. The method is based on a least-squares-fitting procedure with constraints of positivity and smoothness on the real-frequency quantities. Results are shown for the single-particle spectral-weight function and density of states for the half-filled, two-dimensional Hubbard model.
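    The core of the fitting procedure is least squares with regularity constraints on the real-frequency quantities. A rough numpy sketch of the smoothness part, using a second-difference penalty on a Laplace-like toy kernel (positivity would additionally require, e.g., non-negative least squares and is omitted; all data here is synthetic):

```python
import numpy as np

def smooth_lsq(K, g, lam=1e-2):
    """Least squares with a second-difference smoothness penalty.

    Solves min_a ||K a - g||^2 + lam * ||D a||^2, where D is the discrete
    second-difference operator, via the normal equations.
    """
    n = K.shape[1]
    D = np.diff(np.eye(n), n=2, axis=0)  # (n-2) x n second differences
    lhs = K.T @ K + lam * D.T @ D
    return np.linalg.solve(lhs, K.T @ g)

# Toy problem: recover a smooth spectral function from noisy imaginary-time data.
rng = np.random.default_rng(1)
omega = np.linspace(0, 4, 40)
tau = np.linspace(0.1, 2.0, 30)
K = np.exp(-tau[:, None] * omega[None, :])  # Laplace-like kernel (ill-conditioned)
a_true = np.exp(-(omega - 2.0) ** 2)        # smooth peak
g = K @ a_true + 1e-4 * rng.standard_normal(tau.size)
a_hat = smooth_lsq(K, g, lam=1e-3)
print(a_hat.shape)  # (40,)
```

    The penalty term is what keeps the inversion stable: without it the nearly singular kernel would amplify the noise into wild oscillations.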

  2. Crystal structure prediction supported by incomplete experimental data

    NASA Astrophysics Data System (ADS)

    Tsujimoto, Naoto; Adachi, Daiki; Akashi, Ryosuke; Todo, Synge; Tsuneyuki, Shinji

    2018-05-01

    We propose an efficient theoretical scheme for structure prediction based on the idea of simultaneously optimizing a theoretical model and its agreement with experimental data. In this scheme, we formulate a cost function as a weighted sum of interatomic potential energies and a penalty function defined with partial experimental data that would be totally insufficient for conventional structure analysis. In particular, we define the cost function using a "crystallinity" formulated with only the peak positions within a small range of the x-ray-diffraction pattern. We apply this method to well-known polymorphs of SiO2 and C with up to 108 atoms in the simulation cell and show that it reproduces the correct structures efficiently with very limited information on diffraction peaks. This scheme opens a new avenue for determining and predicting structures that are difficult to determine by conventional methods.
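    The shape of such a combined cost function can be sketched in a few lines. The penalty below (mean squared distance from each observed peak to the nearest calculated one) is a simple stand-in for the paper's crystallinity term, and the numbers are invented:

```python
import numpy as np

def peak_penalty(calc_peaks, obs_peaks):
    """Mean squared distance from each observed XRD peak position to the
    nearest calculated peak (an illustrative stand-in for 'crystallinity')."""
    calc = np.asarray(calc_peaks, dtype=float)
    return float(np.mean([np.abs(calc - p).min() ** 2 for p in obs_peaks]))

def cost(potential_energy, calc_peaks, obs_peaks, alpha=0.5):
    """Weighted sum of a model potential energy and the peak-position penalty."""
    return alpha * potential_energy + (1 - alpha) * peak_penalty(calc_peaks, obs_peaks)

# A candidate structure whose simulated peaks line up with the observed ones is
# favored even over a slightly lower-energy structure with a poor match.
obs = [21.0, 36.1, 50.3]                     # observed 2-theta positions (invented)
print(cost(-10.0, [21.0, 36.1, 50.3], obs))  # -5.0 (penalty is zero for a perfect match)
print(cost(-10.2, [23.0, 40.0, 55.0], obs))  # lower energy but large peak mismatch
```

    Minimizing such a cost over candidate structures blends the physics (potential energy) with the sparse experimental evidence (peak positions).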

  3. Chest circumference and birth weight are good predictors of lung function in preschool children from an e-waste recycling area.

    PubMed

    Zeng, Xiang; Xu, Xijin; Zhang, Yuling; Li, Weiqiu; Huo, Xia

    2017-10-01

    The purpose of this study was to investigate the associations between birth weight, chest circumference, and lung function in preschool children from an e-waste exposure area. A total of 206 preschool children from Guiyu (an e-waste recycling area) and Haojiang and Xiashan (the reference areas) in China were recruited and required to undergo physical examination, blood tests, and lung function tests during the study period. Birth outcomes such as birth weight and birth height were obtained by questionnaire. Children living in the e-waste-exposed area had lower birth weight, chest circumference, height, and lung function than their peers from the reference areas (all p values <0.05). Both Spearman and partial correlation analyses showed that birth weight and chest circumference were positively correlated with lung function levels, including forced vital capacity (FVC) and forced expiratory volume in 1 s (FEV1). After adjustment for potential confounders in further linear regression analyses, birth weight and chest circumference were each positively associated with lung function levels. Taken together, birth weight and chest circumference may be good predictors of lung function levels in preschool children.

  4. Exact Scheffé-type confidence intervals for output from groundwater flow models: 1. Use of hydrogeologic information

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.

  5. On the Log-Normality of Historical Magnetic-Storm Intensity Statistics: Implications for Extreme-Event Probabilities

    NASA Astrophysics Data System (ADS)

    Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.

    2015-12-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both also provide fits superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having -Dst > 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
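    The maximum-likelihood side of such a fit is especially simple for the log-normal, since the MLE is closed-form: the mean and standard deviation of the log data. A sketch with invented storm maxima (not the 1957-2012 record):

```python
import math

def lognormal_mle(samples):
    """Closed-form MLE for a log-normal: mean and std of the log data."""
    logs = [math.log(x) for x in samples]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / len(logs))
    return mu, sigma

def exceedance_prob(x, mu, sigma):
    """P(X > x) for X ~ LogNormal(mu, sigma), via the normal survival function."""
    return 0.5 * math.erfc((math.log(x) - mu) / (sigma * math.sqrt(2.0)))

# Hypothetical storm-maximum intensities (-Dst, nT); NOT the paper's data.
maxima = [120, 95, 150, 210, 310, 105, 430, 180, 140, 250]
mu, sigma = lognormal_mle(maxima)
p = exceedance_prob(850.0, mu, sigma)
print(round(mu, 3), round(sigma, 3), p)
# Expected Carrington-class storms per era = p * (number of storms per era).
```

    Bootstrap confidence limits, as in the paper, would come from refitting mu and sigma on resampled versions of the data and recomputing the exceedance rate each time.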

  6. Impaired visually guided weight-shifting ability in children with cerebral palsy.

    PubMed

    Ballaz, Laurent; Robert, Maxime; Parent, Audrey; Prince, François; Lemay, Martin

    2014-09-01

    The ability to control voluntary weight shifting is crucial in many functional tasks. To our knowledge, weight shifting ability in response to a visual stimulus has never been evaluated in children with cerebral palsy (CP). The aim of the study was (1) to propose a new method to assess visually guided medio-lateral (M/L) weight shifting ability and (2) to compare weight-shifting ability in children with CP and typically developing (TD) children. Ten children with spastic diplegic CP (Gross Motor Function Classification System level I and II; age 7-12 years) and 10 TD age-matched children were tested. Participants played with the skiing game on the Wii Fit game console. Center of pressure (COP) displacements, trunk and lower-limb movements were recorded during the last virtual slalom. Maximal isometric lower limb strength and postural control during quiet standing were also assessed. Lower-limb muscle strength was reduced in children with CP compared to TD children and postural control during quiet standing was impaired in children with CP. As expected, the skiing game mainly resulted in M/L COP displacements. Children with CP showed lower M/L COP range and velocity as compared to TD children but larger trunk movements. Trunk and lower extremity movements were less in phase in children with CP compared to TD children. Commercially available active video games can be used to assess visually guided weight shifting ability. Children with spastic diplegic CP showed impaired visually guided weight shifting which can be explained by non-optimal coordination of postural movement and reduced muscular strength. Copyright © 2014 Elsevier Ltd. All rights reserved.
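    The two COP outcome measures compared between groups, medio-lateral range and velocity, are straightforward to compute from a sampled COP trace. A minimal sketch with an invented trace (sampling rate and units are illustrative):

```python
import numpy as np

def ml_cop_metrics(cop_ml, fs):
    """Medio-lateral COP range (input units) and mean velocity (units/s)
    from a sampled COP trace; fs is the sampling rate in Hz."""
    cop_ml = np.asarray(cop_ml, dtype=float)
    cop_range = cop_ml.max() - cop_ml.min()
    path = np.abs(np.diff(cop_ml)).sum()      # total M/L excursion
    duration = (len(cop_ml) - 1) / fs
    return cop_range, path / duration

# Toy slalom-like trace sampled at 10 Hz: weight shifts left, then right (cm).
trace = [0.0, 1.0, 0.0, -1.0, 0.0]
rng_, vel = ml_cop_metrics(trace, fs=10.0)
print(rng_, vel)  # 2.0 10.0 (range 2 cm; 4 cm of path over 0.4 s)
```

    Lower values of both metrics in the CP group, as reported above, would indicate smaller and slower voluntary weight shifts during the virtual slalom.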

  7. SIMULATION OF DISPERSION OF A POWER PLANT PLUME USING AN ADAPTIVE GRID ALGORITHM. (R827028)

    EPA Science Inventory

    A new dynamic adaptive grid algorithm has been developed for use in air quality modeling. This algorithm uses a higher-order numerical scheme, the piecewise parabolic method (PPM), for computing advective solution fields; a weight function capable o...

  8. Finite word length effects on digital filter implementation.

    NASA Technical Reports Server (NTRS)

    Bowman, J. D.; Clark, F. H.

    1972-01-01

    This paper is a discussion of two known techniques to analyze finite word length effects on digital filters. These techniques are extended to several additional programming forms and the results verified experimentally. A correlation of the analytical weighting functions for the two methods is made through the Mason Gain Formula.
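    One of the finite word length effects such analyses cover is coefficient quantization: a designed filter coefficient must be rounded to the hardware's fixed-point grid, which moves the realized poles and zeros. A minimal illustration (the word lengths and coefficient value are invented):

```python
def quantize(x, bits, frac_bits):
    """Round x to a fixed-point grid with the given fractional word length,
    saturating at the two's-complement range for 'bits' total bits."""
    scale = 1 << frac_bits
    lo = -(1 << (bits - 1))
    hi = (1 << (bits - 1)) - 1
    q = max(lo, min(hi, round(x * scale)))
    return q / scale

# A first-order IIR pole coefficient before and after 8-bit quantization
# (1 sign bit, 7 fractional bits): the realized pole differs from the design.
a = 0.871
a_q = quantize(a, bits=8, frac_bits=7)
print(a_q)           # 0.8671875 (= 111/128)
print(abs(a - a_q))  # coefficient quantization error
```

    Because different programming forms (direct, cascade, parallel) place the quantized coefficients differently in the signal flow graph, the same word length can produce quite different overall errors, which is what weighting-function analyses of the kind discussed above quantify.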

  9. Geographically weighted regression model on poverty indicator

    NASA Astrophysics Data System (ADS)

    Slamet, I.; Nugroho, N. F. T. A.; Muslich

    2017-12-01

    In this research, we applied geographically weighted regression (GWR) to analyze poverty in Central Java, considering the Gaussian kernel as the weighting function. GWR uses the diagonal matrix resulting from the Gaussian kernel function as a weight matrix in the regression model. The kernel weights are used to handle spatial effects in the data, so that a model can be obtained for each location. The purpose of this paper is to model the poverty percentage data of Central Java province using GWR with a Gaussian kernel weighting function, and to determine the influencing factors in each regency/city of Central Java province. Based on the research, we obtained a geographically weighted regression model with a Gaussian kernel weighting function for the poverty percentage data of Central Java province. We found that the percentage of the population working as farmers, the population growth rate, the percentage of households with regular sanitation, and the number of BPJS beneficiaries are the variables that affect the percentage of poverty in Central Java province. We found a coefficient of determination R2 of 68.64%. The regencies/cities fall into two categories, each influenced by a different set of significant factors.
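    The local fit at each location is just weighted least squares with Gaussian kernel weights on the diagonal. A sketch on synthetic data (the coordinates, bandwidth, and variables are invented, not the Central Java data):

```python
import numpy as np

def gwr_fit_at(X, y, coords, site, bandwidth):
    """Local WLS estimate at one site using Gaussian kernel weights.

    Weights w_i = exp(-0.5 * (d_i / b)^2) form the diagonal matrix W, and
    the local coefficients solve (X' W X) beta = X' W y, so observations
    near the site dominate the fit.
    """
    d = np.linalg.norm(coords - site, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    W = np.diag(w)
    Xd = np.column_stack([np.ones(len(X)), X])  # add intercept column
    return np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)

# Synthetic example: poverty rate rising linearly with one covariate.
rng = np.random.default_rng(7)
coords = rng.uniform(0, 100, size=(40, 2))        # locations
x = rng.uniform(0, 1, size=40)
y = 5.0 + 10.0 * x + rng.normal(0, 0.1, size=40)  # nearly exact linear law
beta = gwr_fit_at(x[:, None], y, coords, site=coords[0], bandwidth=30.0)
print(np.round(beta, 2))  # approximately [5., 10.]
```

    Repeating this fit with each location as the site yields one coefficient vector per regency/city, which is how GWR exposes spatially varying influences.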

  10. Temporal resolution and motion artifacts in single-source and dual-source cardiac CT.

    PubMed

    Schöndube, Harald; Allmendinger, Thomas; Stierstorfer, Karl; Bruder, Herbert; Flohr, Thomas

    2013-03-01

    The temporal resolution of a given image in cardiac computed tomography (CT) has so far mostly been determined from the amount of CT data employed for the reconstruction of that image. The purpose of this paper is to examine the applicability of such measures to the newly introduced modality of dual-source CT as well as to methods aiming to provide improved temporal resolution by means of an advanced image reconstruction algorithm. To provide a solid base for the examinations described in this paper, an extensive review of temporal resolution in conventional single-source CT is given first. Two different measures for assessing temporal resolution with respect to the amount of data involved are introduced, namely, either taking the full width at half maximum of the respective data weighting function (FWHM-TR) or the total width of the weighting function (total TR) as a base of the assessment. Image reconstruction using both a direct fan-beam filtered backprojection with Parker weighting as well as using a parallel-beam rebinning step are considered. The theory of assessing temporal resolution by means of the data involved is then extended to dual-source CT. Finally, three different advanced iterative reconstruction methods that all use the same input data are compared with respect to the resulting motion artifact level. For brevity and simplicity, the examinations are limited to two-dimensional data acquisition and reconstruction. However, all results and conclusions presented in this paper are also directly applicable to both circular and helical cone-beam CT. While the concept of total TR can directly be applied to dual-source CT, the definition of the FWHM of a weighting function needs to be slightly extended to be applicable to this modality. The three different advanced iterative reconstruction methods examined in this paper result in significantly different images with respect to their motion artifact level, despite exactly the same amount of data being used in the reconstruction process. The concept of assessing temporal resolution by means of the data employed for reconstruction can nicely be extended from single-source to dual-source CT. However, for advanced (possibly nonlinear iterative) reconstruction algorithms the examined approach fails to deliver accurate results. New methods and measures to assess the temporal resolution of CT images need to be developed to be able to accurately compare the performance of such algorithms.

  11. Image fusion method based on regional feature and improved bidimensional empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Hu, Gang; Hu, Kai

    2018-01-01

    The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.
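    The select-or-weighted-average fusion strategy described above can be sketched for one high-frequency component from two sources, driven by local area energy. This is a minimal illustration under assumptions: the window size, matching threshold, and helper names are illustrative, not the paper's implementation.

```python
import numpy as np

def local_mean(a, win=3):
    """Box-filter local mean via edge padding (self-contained, no SciPy)."""
    p = win // 2
    ap = np.pad(a, p, mode="edge")
    out = np.zeros(a.shape)
    for di in range(win):
        for dj in range(win):
            out += ap[di:di + a.shape[0], dj:dj + a.shape[1]]
    return out / win**2

def fuse_imf(imf_a, imf_b, win=3, thresh=0.7):
    """Select-or-weighted-average fusion of two high-frequency (IMF)
    components by local area energy; win/thresh are hypothetical knobs."""
    Ea, Eb = local_mean(imf_a**2, win), local_mean(imf_b**2, win)
    match = 2.0 * local_mean(imf_a * imf_b, win) / (Ea + Eb + 1e-12)
    wa = Ea / (Ea + Eb + 1e-12)
    averaged = wa * imf_a + (1.0 - wa) * imf_b   # sources agree locally
    selected = np.where(Ea >= Eb, imf_a, imf_b)  # keep the stronger source
    return np.where(match > thresh, averaged, selected)

# toy check: when one source carries no local energy, the other is selected
fused = fuse_imf(np.ones((8, 8)), np.zeros((8, 8)))
```

The same structure with a local average-gray-difference measure in place of local energy would give the low-frequency (residue) rule the abstract mentions.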

  12. Development and application of 3-D foot-shape measurement system under different loads

    NASA Astrophysics Data System (ADS)

    Liu, Guozhong; Wang, Boxiong; Shi, Hui; Luo, Xiuzhi

    2008-03-01

    The 3-D foot-shape measurement system under different loads based on the laser-line-scanning principle was designed and the model of the measurement system was developed. 3-D foot-shape measurements without blind areas under different loads and the automatic extraction of foot parameters are achieved with the system. A global calibration method for CCD cameras using a one-axis motion unit in the measurement system and specialized calibration kits is presented. Errors caused by the nonlinearity of CCD cameras and other devices, and by the installation of the one-axis motion platform, the laser plane and the toughened glass plane, can be eliminated by using the nonlinear coordinate mapping function and the Powell optimization method in calibration. Foot measurements under different loads for 170 participants were conducted, and the statistical foot-parameter measurement results for male and female participants under the non-weight condition, as well as changes of foot parameters under half-body-weight, full-body-weight and over-body-weight conditions compared with the non-weight condition, are presented. 3-D foot-shape measurement under different loads makes it possible to realize custom shoe-making and shows great promise in shoe design, foot orthopaedic treatment, shoe size standardization, and establishment of a foot database for consumers and athletes.

  13. Generating functions for weighted Hurwitz numbers

    NASA Astrophysics Data System (ADS)

    Guay-Paquet, Mathieu; Harnad, J.

    2017-08-01

    Double Hurwitz numbers enumerating weighted n-sheeted branched coverings of the Riemann sphere or, equivalently, weighted paths in the Cayley graph of Sn generated by transpositions are determined by an associated weight generating function. A uniquely determined 1-parameter family of 2D Toda τ -functions of hypergeometric type is shown to consist of generating functions for such weighted Hurwitz numbers. Four classical cases are detailed, in which the weighting is uniform: Okounkov's double Hurwitz numbers for which the ramification is simple at all but two specified branch points; the case of Belyi curves, with three branch points, two with specified profiles; the general case, with a specified number of branch points, two with fixed profiles, the rest constrained only by the genus; and the signed enumeration case, with sign determined by the parity of the number of branch points. Using the exponentiated quantum dilogarithm function as a weight generator, three new types of weighted enumerations are introduced. These determine quantum Hurwitz numbers depending on a deformation parameter q. By suitable interpretation of q, the statistical mechanics of quantum weighted branched covers may be related to that of Bosonic gases. The standard double Hurwitz numbers are recovered in the classical limit.

  14. Analysis of longitudinal data of beef cattle raised on pasture from northern Brazil using nonlinear models.

    PubMed

    Lopes, Fernando B; da Silva, Marcelo C; Marques, Ednira G; McManus, Concepta M

    2012-12-01

    This study was undertaken with the aim of estimating the genetic parameters and trends for asymptotic weight (A) and maturity rate (k) of Nellore cattle from northern Brazil. The data set was made available by the Brazilian Association of Zebu Breeders and collected between the years 1997 and 2007. The Von Bertalanffy, Brody, Gompertz, and logistic nonlinear models were fitted by the Gauss-Newton method to weight-age data of 45,895 animals collected quarterly from birth to 750 days of age. The curve parameters were analyzed using the GLM and CORR procedures. The estimation of (co)variance components and genetic parameters was obtained using the MTDFREML software. The estimated heritability coefficients were 0.21 ± 0.013 and 0.25 ± 0.014 for asymptotic weight and maturity rate, respectively. This indicates that selection for either trait should result in genetic progress in the herd. The genetic correlation between A and k was negative (-0.57 ± 0.03), indicating that animals selected for a high maturity rate tend to have a low asymptotic weight. The Von Bertalanffy function is adequate to establish the mean growth patterns and to predict the adult weight of Nellore cattle. This model is more accurate in predicting the birth weight of these animals and has better overall fit. The prediction of adult weight using nonlinear functions can be accurate when growth curve parameters and their (co)variance components are estimated jointly. The model used in this study can be applied to the prediction of mature weight in herds where a portion of the animals are culled before they reach adult age.
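    The growth-curve fitting step can be illustrated with a damped Gauss-Newton (Levenberg-Marquardt) least-squares fit of the Von Bertalanffy function, W(t) = A(1 - b·e^(-kt))³. The sketch below uses synthetic weight-age data; the parameter values and variable names are hypothetical, not the study's records.

```python
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, A, b, k):
    # W(t) = A * (1 - b*exp(-k*t))**3: A = asymptotic weight, k = maturity rate
    return A * (1.0 - b * np.exp(-k * t)) ** 3

# synthetic quarterly weight-age records, birth to 750 days (hypothetical)
t = np.linspace(0.0, 750.0, 16)
rng = np.random.default_rng(0)
w = von_bertalanffy(t, 480.0, 0.6, 0.004) + rng.normal(0.0, 5.0, t.size)

# Levenberg-Marquardt (damped Gauss-Newton) nonlinear least squares
(A_hat, b_hat, k_hat), _ = curve_fit(von_bertalanffy, t, w, p0=(450.0, 0.5, 0.003))
print(f"A = {A_hat:.1f} kg, k = {k_hat:.4f}/day")
```

With per-animal estimates of A and k in hand, (co)variance components and heritabilities can then be obtained in a second stage, as the abstract describes.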

  15. The role of oxidative stress in streptozotocin-induced diabetic nephropathy in rats.

    PubMed

    Fernandes, Sheila Marques; Cordeiro, Priscilla Mendes; Watanabe, Mirian; Fonseca, Cassiane Dezoti da; Vattimo, Maria de Fatima Fernandes

    2016-10-01

    The objective of this study was to evaluate the role of oxidative stress in an experimental model of streptozotocin-induced diabetic nephropathy in rats. Adult male Wistar rats were used in the study. Animals were divided into the following groups: Citrate (control; citrate buffer 0.01 M, pH 4.2, administered intravenously (i.v.) into the caudal vein), Uninephrectomy+Citrate (left uninephrectomy 20 days before the study), DM (streptozotocin, 65 mg/kg, i.v., on the 20th day of the study), and Uninephrectomy+DM. Physiological parameters (water and food intake, body weight, blood glucose, kidney weight, and relative kidney weight); renal function (creatinine clearance); urine albumin (immunodiffusion method); oxidative metabolites (urinary peroxides, thiobarbituric acid reactive substances, and thiols in renal tissue); and kidney histology were evaluated. Polyphagia, polydipsia, hyperglycemia, and reduced body weight were observed in diabetic rats. Renal function was reduced in the diabetic groups (creatinine clearance, p < 0.05). Uninephrectomy potentiated urine albumin and increased kidney weight and relative kidney weight in diabetic animals (p < 0.05). Urinary peroxides and thiobarbituric acid reactive substances were increased, and the reduction in thiol levels demonstrated endogenous substrate consumption in the diabetic groups (p < 0.05). The histological analysis revealed moderate lesions of diabetic nephropathy. This study confirms lipid peroxidation and intense consumption of the antioxidant defense system in diabetic rats. The association of hyperglycemia and uninephrectomy resulted in additional renal injury, demonstrating that the model is adequate for the study of diabetic nephropathy.

  16. Some New Estimation Methods for Weighted Regression When There are Possible Outliers.

    DTIC Science & Technology

    1985-01-01

    about influential points, and to add to our understanding of the structure of the data. In Section 2 we show, by considering the influence function, why... influence function (Hampel, 1968, 1974) for the maximum likelihood estimator is proportional to (EP-l)h(x), where £= (y-x'B)exp[-h'(x)e], and is thus... unbounded. Since the influence function for the MLE is quadratic in the residual c, in theory a point with a sufficiently large residual can have an

  17. Weighted finite impulse response filter for chromatic dispersion equalization in coherent optical fiber communication systems

    NASA Astrophysics Data System (ADS)

    Zeng, Ziyi; Yang, Aiying; Guo, Peng; Feng, Lihui

    2018-01-01

    Time-domain CD equalization using a finite impulse response (FIR) filter is now a common approach in coherent optical fiber communication systems. The complex weights of the FIR taps are calculated from a truncated impulse response of the CD transfer function, and the modulus of the complex weights is constant. In our work, we take the limited bandwidth of a single-channel signal into account and propose weighted FIRs to improve the performance of CD equalization. The key in weighted FIR filters is the selection and optimization of the weighting functions. To present the performance of different types of weighted FIR filters, a square-root raised cosine FIR (SRRC-FIR) and a Gaussian FIR (GS-FIR) are investigated. The optimization of the square-root raised cosine FIR and the Gaussian FIR is made in terms of the bit error rate (BER) of QPSK and 16QAM coherent detection signals. The results demonstrate that the optimized parameters of the weighted filters are independent of the modulation format, symbol rate and length of the transmission fiber. With the optimized weighted FIRs, the BER of the CD-equalized signal is decreased significantly. Although this paper has investigated two types of weighted FIR filters, i.e. the SRRC-FIR filter and the GS-FIR filter, the principle of weighted FIR can also be extended to other symmetric functions such as the super-Gaussian function, the hyperbolic secant function, etc.
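    A minimal sketch of the idea: the standard constant-modulus taps are obtained from the truncated impulse response of the CD transfer function, and a Gaussian window is then applied to form a GS-FIR. The fiber parameters and the window-width parameter `sigma_frac` are illustrative assumptions; the paper's optimized values are not reproduced here.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def cd_fir_taps(D, lam, L, T):
    """Constant-modulus taps from the truncated impulse response of the
    chromatic-dispersion transfer function (standard time-domain design)."""
    K = D * lam**2 * L / (C * T**2)       # dimensionless dispersion factor
    half = int(np.floor(abs(K) / 2.0))    # truncation rule for the tap count
    k = np.arange(-half, half + 1)
    return np.sqrt(1j / K) * np.exp(-1j * np.pi * k**2 / K)

def gaussian_weighted(taps, sigma_frac=0.4):
    """GS-FIR: multiply the taps by a Gaussian window. sigma_frac is a
    hypothetical tuning parameter, not the paper's optimized value."""
    k = np.arange(taps.size) - taps.size // 2
    return taps * np.exp(-0.5 * (k / (sigma_frac * taps.size)) ** 2)

# 17 ps/(nm km) fiber, 1550 nm, 100 km, one sample per 32 GBd symbol
taps = cd_fir_taps(D=17e-6, lam=1550e-9, L=100e3, T=1.0 / 32e9)
w_taps = gaussian_weighted(taps)
```

Before windowing the tap moduli are all equal, as the abstract states; the window tapers the outer taps, narrowing the equalizer's effective bandwidth to match the signal's.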

  18. Placental weight and birth weight to placental weight ratio in monochorionic and dichorionic growth-restricted and non-growth-restricted twins

    PubMed Central

    Souza, Mariângela Alves; de Lourdes Brizot, Maria; Biancolin, Sckarlet Ernandes; Schultz, Regina; de Carvalho, Mário Henrique Burlacchini; Francisco, Rossana Pulcineli Vieira; Zugaib, Marcelo

    2017-01-01

    OBJECTIVE: The aim of the present study was to compare the placental weight and birth weight/placental weight ratio for intrauterine growth-restricted and non-intrauterine growth-restricted monochorionic and dichorionic twins. METHODS: This was a retrospective analysis of placentas from twin pregnancies. Placental weight and the birth weight/placental weight ratio were compared in intrauterine growth-restricted and non-intrauterine growth-restricted monochorionic and dichorionic twins. The association between cord insertion type and placental lesions in intrauterine growth-restricted and non-intrauterine growth-restricted monochorionic and dichorionic twins was also investigated. RESULTS: A total of 105 monochorionic (intrauterine growth restriction=40; non-intrauterine growth restriction=65) and 219 dichorionic (intrauterine growth restriction=57; non-intrauterine growth restriction=162) placentas were analyzed. A significantly lower placental weight was observed in intrauterine growth-restricted monochorionic (p=0.022) and dichorionic (p<0.001) twins compared to non-intrauterine growth-restricted twins. There was no difference in the birth weight/placental weight ratio between the intrauterine growth restriction and non-intrauterine growth restriction groups for either monochorionic (p=0.36) or dichorionic (p=0.68) twins. Placental weight and the birth weight/placental weight ratio were not associated with cord insertion type or with placental lesions. CONCLUSION: Low placental weight, and consequently reduced functional mass, appears to be involved in fetal growth restriction in monochorionic and dichorionic twins. The mechanism by which low placental weight influences the birth weight/placental weight ratio in intrauterine growth-restricted monochorionic and dichorionic twins needs to be determined in larger prospective studies. PMID:28591337

  19. Disability weights from a household survey in a low socio-economic setting: how does it compare to the global burden of disease 2010 study?

    PubMed

    Neethling, Ian; Jelsma, Jennifer; Ramma, Lebogang; Schneider, Helen; Bradshaw, Debbie

    2016-01-01

    The global burden of disease (GBD) 2010 study used a universal set of disability weights to estimate disability adjusted life years (DALYs) by country. However, it is not clear whether these weights can be applied universally in calculating DALYs to inform local decision-making. This study derived disability weights for a resource-constrained community in Cape Town, South Africa, and interrogated whether the GBD 2010 disability weights necessarily represent the preferences of economically disadvantaged communities. A household survey was conducted in Lavender Hill, Cape Town, to assess the health state preferences of the general public. The responses from a paired comparison valuation method were assessed using a probit regression. The probit coefficients were anchored onto the 0 to 1 disability weight scale by running a lowess regression on the GBD 2010 disability weights and interpolating the coefficients between the upper and lower limit of the smoothed disability weights. Heroin and opioid dependence had the highest disability weight of 0.630, whereas intellectual disability had the lowest (0.040). Untreated injuries ranked higher than severe mental disorders. There were some counterintuitive results, such as moderate (15th) and severe vision impairment (16th) ranking higher than blindness (20th). A moderate correlation between the disability weights of the local study and those of the GBD 2010 study was observed (R²=0.440, p<0.05). This indicates that there was a relationship, although some conditions, such as untreated fracture of the radius or ulna, showed large variability in disability weights (0.488 in local study and 0.043 in GBD 2010). Respondents seemed to value physical mobility higher than cognitive functioning, which is in contrast to the GBD 2010 study. This study shows that not all health state preferences are universal. Studies estimating DALYs need to derive local disability weights using methods that are less cognitively demanding for respondents.

  20. Detection of fallen trees in ALS point clouds using a Normalized Cut approach trained by simulation

    NASA Astrophysics Data System (ADS)

    Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe

    2015-07-01

    Downed dead wood is regarded as an important part of forest ecosystems from an ecological perspective, which drives the need for investigating its spatial distribution. Based on several studies, Airborne Laser Scanning (ALS) has proven to be a valuable remote sensing technique for obtaining such information. This paper describes a unified approach to the detection of fallen trees from ALS point clouds based on merging short segments into whole stems using the Normalized Cut algorithm. We introduce a new method of defining the segment similarity function for the clustering procedure, where the attribute weights are learned from labeled data. Based on a relationship between Normalized Cut's similarity function and a class of regression models, we show how to learn the similarity function by training a classifier. Furthermore, we propose using an appearance-based stopping criterion for the graph cut algorithm as an alternative to the standard Normalized Cut threshold approach. We set up a virtual fallen tree generation scheme to simulate complex forest scenarios with multiple overlapping fallen stems. This simulated data is then used as a basis to learn both the similarity function and the stopping criterion for Normalized Cut. We evaluate our approach on 5 plots from the strictly protected mixed mountain forest within the Bavarian Forest National Park using reference data obtained via a manual field inventory. The experimental results show that our method is able to detect up to 90% of fallen stems in plots having 30-40% overstory cover with a correctness exceeding 80%, even in quite complex forest scenes. Moreover, the performance for feature weights trained on simulated data is competitive with the case when the weights are calculated using a grid search on the test data, which indicates that the learned similarity function and stopping criterion can generalize well on new plots.
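    The idea of learning the similarity function by training a classifier can be sketched with a logistic-regression model whose predicted probability serves as the Normalized Cut edge weight. The pairwise features and simulated labels below are hypothetical stand-ins for the paper's labeled segment pairs.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical pairwise features for candidate segment merges (e.g. gap
# distance, orientation difference, diameter difference); labels simulate
# "same stem" (1) vs "different stems" (0). All names are illustrative.
X = rng.normal(size=(400, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)

# Train a logistic-regression classifier by gradient descent; its output
# probability then acts as the learned similarity (affinity) function.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

def similarity(features):
    """Learned edge weight in [0, 1] for a pair of stem segments."""
    return float(1.0 / (1.0 + np.exp(-(features @ w + b))))
```

Filling the Normalized Cut affinity matrix with `similarity(...)` values reproduces the paper's link between the clustering similarity function and a regression-type classifier, with the attribute weights learned from (here, simulated) labeled data.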

  1. Joint design of large-tip-angle parallel RF pulses and blipped gradient trajectories.

    PubMed

    Cao, Zhipeng; Donahue, Manus J; Ma, Jun; Grissom, William A

    2016-03-01

    To design multichannel large-tip-angle kT-points and spokes radiofrequency (RF) pulses and gradient waveforms for transmit field inhomogeneity compensation in high field magnetic resonance imaging. An algorithm to design RF subpulse weights and gradient blip areas is proposed to minimize a magnitude least-squares cost function that measures the difference between realized and desired state parameters in the spin domain, and penalizes integrated RF power. The minimization problem is solved iteratively with interleaved target phase updates, RF subpulse weights updates using the conjugate gradient method with optimal control-based derivatives, and gradient blip area updates using the conjugate gradient method. Two-channel parallel transmit simulations and experiments were conducted in phantoms and human subjects at 7 T to demonstrate the method and compare it to small-tip-angle-designed pulses and circularly polarized excitations. The proposed algorithm designed more homogeneous and accurate 180° inversion and refocusing pulses than other methods. It also designed large-tip-angle pulses on multiple frequency bands with independent and joint phase relaxation. Pulses designed by the method improved specificity and contrast-to-noise ratio in a finger-tapping spin echo blood oxygen level dependent functional magnetic resonance imaging study, compared with circularly polarized mode refocusing. A joint RF and gradient waveform design algorithm was proposed and validated to improve large-tip-angle inversion and refocusing at ultrahigh field. © 2015 Wiley Periodicals, Inc.

  2. Neural networks: further insights into error function, generalized weights and others

    PubMed Central

    2016-01-01

    The article is a continuation of a previous one, providing further insights into the structure of a neural network (NN). Key concepts of NNs, including the activation function, error function, learning rate and generalized weights, are introduced. NN topology can be visualized with the generic plot() function by passing a “nn” class object. Generalized weights assist interpretation of an NN model with respect to the independent effect of individual input variables. A large variance of generalized weights for a covariate indicates non-linearity of its independent effect. If the generalized weights of a covariate are approximately zero, the covariate is considered to have no effect on the outcome. Finally, prediction of new observations can be performed using the compute() function. Make sure that the feature variables passed to the compute() function are in the same order as in the training NN. PMID:27668220

  3. Influence of Molecular Weight on the Mechanical Performance of a Thermoplastic Glassy Polyimide

    NASA Technical Reports Server (NTRS)

    Nicholson, Lee M.; Whitley, Karen S.; Gates, Thomas S.; Hinkley, Jeffrey A.

    1999-01-01

    Mechanical testing of an advanced thermoplastic polyimide (LaRC-TM-SI) with known variations in molecular weight was performed over a range of temperatures below the glass transition temperature. The physical characterization, elastic properties and notched tensile strength were all determined as a function of molecular weight and test temperature. It was shown that notched tensile strength is a strong function of both temperature and molecular weight, whereas stiffness is only a strong function of temperature. A critical molecular weight (Mc) was observed to occur at a weight-average molecular weight (Mw) of approx. 22000 g/mol, below which the notched tensile strength decreases rapidly. This critical molecular weight transition is temperature-independent. Furthermore, inelastic analysis showed that low molecular weight materials tended to fail in a brittle manner, whereas high molecular weight materials exhibited ductile failure. The microstructural images supported these findings.

  4. [Correlation analysis between residual displacement and hip function after reconstruction of acetabular fractures].

    PubMed

    Ma, Kunlong; Fang, Yue; Luan, Fujun; Tu, Chongqi; Yang, Tianfu

    2012-03-01

    To investigate the relationships between residual displacement of weight-bearing and non weight-bearing zones (gap displacement and step displacement) and hip function by analyzing the CT images after reconstruction of acetabular fractures. The CT measures and clinical outcome were retrospectively analyzed from 48 patients with displaced acetabular fracture between June 2004 and June 2009. All patients were treated by open reduction and internal fixation, and were followed up 24 to 72 months (mean, 36 months); all fractures healed after operation. The residual displacement involved the weight-bearing zone in 30 cases (weight-bearing group), and involved the non weight-bearing zone in 18 cases (non weight-bearing group). The clinical outcomes were evaluated by Merle d'Aubigné-Postel criteria, and the reduction of articular surface by CT images, including the maximums of two indexes (gap displacement and step displacement). All the data were analyzed in accordance with the Spearman rank correlation coefficient analysis. There was strong negative correlation between the hip function and the residual displacement values in the weight-bearing group (r(s) = -0.722, P = 0.001). But there was no correlation between the hip function and the residual displacement values in the non weight-bearing group (r(s) = 0.481, P = 0.059). The results of clinical follow-up were similar to the correlation analysis results. In the weight-bearing group, the hip function had strong negative correlation with step displacement (r(s) = 0.825, P = 0.002), but it had no correlation with gap displacement (r(s) = 0.577, P = 0.134). In patients with acetabular fracture, the hip function has correlation not only with the extent of the residual displacement but also with the location of the residual displacement, so the residual displacement of the weight-bearing zone is a key factor affecting hip function. In patients with residual displacement in the weight-bearing zone, the bigger the step displacement is, the worse the hip function is.

  5. Few adults with functional limitations advised to exercise more or lose weight in NHANES 2011-14 seek health professional assistance: An opportunity for physical therapists.

    PubMed

    Kinslow, Brian; De Heer, Hendrik D; Warren, Meghan

    2018-03-02

    Functional limitations are associated with decreased physical activity and increased body mass index. The purpose of this study was to assess the prevalence of functional limitations among adults who reported receiving health professional advice to exercise more or lose weight, and to assess the involvement of health professionals, including physical therapists, in the weight loss efforts of these individuals. A cross-sectional analysis of U.S. adults from the 2011 to 2014 National Health and Nutrition Examination Survey (n = 5,480) was conducted. Participant demographics, health history, and functional limitations were assessed via self-report and examination. Frequency distributions were calculated using SAS® analytical software, accounting for the complex survey design. Population estimates were calculated using the American Community Survey. Of the individuals advised to exercise more or lose weight by a health professional, 31.0% (n = 1,696), representing a population estimate of 35 million adults, reported one or more functional limitations. Of these, 57.6% attempted weight loss, and 40.1% used exercise for weight loss. Few sought health professional assistance. Physical therapists were not mentioned. Few individuals with functional limitations who are advised to lose weight or increase exercise seek health professional assistance for weight loss. Physical therapists have an opportunity to assist those with functional limitations with exercise prescription.

  6. Thermal density functional theory, ensemble density functional theory, and potential functional theory for warm dense matter

    NASA Astrophysics Data System (ADS)

    Pribram-Jones, Aurora

    Warm dense matter (WDM) is a high energy phase between solids and plasmas, with characteristics of both. It is present in the centers of giant planets, within the earth's core, and on the path to ignition of inertial confinement fusion. The high temperatures and pressures of warm dense matter lead to complications in its simulation, as both classical and quantum effects must be included. One of the most successful simulation methods is density functional theory-molecular dynamics (DFT-MD). Despite great success in a diverse array of applications, DFT-MD remains computationally expensive and it neglects the explicit temperature dependence of electron-electron interactions known to exist within exact DFT. Finite-temperature density functional theory (FT DFT) is an extension of the wildly successful ground-state DFT formalism via thermal ensembles, broadening its quantum mechanical treatment of electrons to include systems at non-zero temperatures. Exact mathematical conditions have been used to predict the behavior of approximations in limiting conditions and to connect FT DFT to the ground-state theory. An introduction to FT DFT is given within the context of ensemble DFT and the larger field of DFT is discussed for context. Ensemble DFT is used to describe ensembles of ground-state and excited systems. Exact conditions in ensemble DFT and the performance of approximations depend on ensemble weights. Using an inversion method, exact Kohn-Sham ensemble potentials are found and compared to approximations. The symmetry eigenstate Hartree-exchange approximation is in good agreement with exact calculations because of its inclusion of an ensemble derivative discontinuity. Since ensemble weights in FT DFT are temperature-dependent Fermi weights, this insight may help develop approximations well-suited to both ground-state and FT DFT. A novel, highly efficient approach to free energy calculations, finite-temperature potential functional theory, is derived, which has the potential to transform the simulation of warm dense matter. As a semiclassical method, it connects the normally disparate regimes of cold condensed matter physics and hot plasma physics. This orbital-free approach captures the smooth classical density envelope and quantum density oscillations that are both crucial to accurate modeling of materials where temperature and pressure effects are influential.

  7. Wavelet-based adaptive thresholding method for image segmentation

    NASA Astrophysics Data System (ADS)

    Chen, Zikuan; Tao, Yang; Chen, Xin; Griffis, Carl

    2001-05-01

    A nonuniform background distribution may cause a global thresholding method to fail to segment objects. One solution is using a local thresholding method that adapts to local surroundings. In this paper, we propose a novel local thresholding method for image segmentation, using multiscale threshold functions obtained by wavelet synthesis with weighted detail coefficients. In particular, the coarse-to-fine synthesis with attenuated detail coefficients produces a threshold function corresponding to a high-frequency-reduced signal. This wavelet-based local thresholding method adapts to both local size and local surroundings, and its implementation can take advantage of the fast wavelet algorithm. We applied this technique to physical contaminant detection for poultry meat inspection using x-ray imaging. Experiments showed that inclusion objects in deboned poultry could be extracted at multiple resolutions despite their irregular sizes and uneven backgrounds.
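    The construction of a multiscale threshold function by synthesis with attenuated detail coefficients can be sketched with a self-contained Haar analysis/synthesis pair (a stand-in for the paper's wavelet; the attenuation factor and offset are hypothetical tuning knobs):

```python
import numpy as np

def haar2(a):
    """One-level 2-D Haar analysis -> (approx, detail_h, detail_v, detail_d)."""
    lo = (a[::2] + a[1::2]) / 2.0          # row averages
    hi = (a[::2] - a[1::2]) / 2.0          # row half-differences
    ll = (lo[:, ::2] + lo[:, 1::2]) / 2.0
    lh = (lo[:, ::2] - lo[:, 1::2]) / 2.0
    hl = (hi[:, ::2] + hi[:, 1::2]) / 2.0
    hh = (hi[:, ::2] - hi[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    lo = np.empty((ll.shape[0], 2 * ll.shape[1]))
    hi = np.empty_like(lo)
    lo[:, ::2], lo[:, 1::2] = ll + lh, ll - lh
    hi[:, ::2], hi[:, 1::2] = hl + hh, hl - hh
    out = np.empty((2 * lo.shape[0], lo.shape[1]))
    out[::2], out[1::2] = lo + hi, lo - hi
    return out

def threshold_surface(img, levels=3, atten=0.1, offset=0.0):
    """Threshold function from coarse-to-fine synthesis with attenuated
    detail coefficients (a high-frequency-reduced signal). Image sides
    must be divisible by 2**levels; atten/offset are hypothetical knobs."""
    details, a = [], img.astype(float)
    for _ in range(levels):
        a, lh, hl, hh = haar2(a)
        details.append((lh, hl, hh))
    for lh, hl, hh in reversed(details):
        a = ihaar2(a, atten * lh, atten * hl, atten * hh)
    return a + offset

# toy scene: sloped background with a small bright inclusion
x = np.linspace(0.0, 1.0, 64)
img = np.add.outer(x, x) * 50.0
img[30:34, 30:34] += 40.0
mask = img > threshold_surface(img, offset=5.0)   # adapts to the background
```

With `atten=1.0` the synthesis is the exact inverse transform; attenuating the details instead yields a smooth surface that tracks the uneven background, so a global offset suffices to isolate the inclusion.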

  8. Hermite WENO limiting for multi-moment finite-volume methods using the ADER-DT time discretization for 1-D systems of conservation laws

    DOE PAGES

    Norman, Matthew R.

    2014-11-24

    New Hermite Weighted Essentially Non-Oscillatory (HWENO) interpolants are developed and investigated within the Multi-Moment Finite-Volume (MMFV) formulation using the ADER-DT time discretization. Whereas traditional WENO methods interpolate pointwise, function-based WENO methods explicitly form a non-oscillatory, high-order polynomial over the cell in question. This study chooses a function-based approach and details how fast convergence to optimal weights for smooth flow is ensured. Methods of sixth-, eighth-, and tenth-order accuracy are developed. We compare these against traditional single-moment WENO methods of fifth-, seventh-, ninth-, and eleventh-order accuracy to compare against more familiar methods from the literature. The new HWENO methods improve upon existing HWENO methods (1) by giving a better resolution of unreinforced contact discontinuities and (2) by only needing a single HWENO polynomial to update both the cell mean value and cell mean derivative. Test cases to validate and assess these methods include 1-D linear transport, the 1-D inviscid Burgers' equation, and the 1-D inviscid Euler equations. Smooth and non-smooth flows are used for evaluation. These HWENO methods performed better than comparable literature-standard WENO methods for all regimes of discontinuity and smoothness in all tests herein. They exhibit improved optimal accuracy due to the use of derivatives, and they collapse to solutions similar to typical WENO methods when limiting is required. The study concludes that the new HWENO methods are robust and effective when used in the ADER-DT MMFV framework. Finally, these results are intended to demonstrate capability rather than exhaust all possible implementations.

  9. SU-G-JeP2-02: A Unifying Multi-Atlas Approach to Electron Density Mapping Using Multi-Parametric MRI for Radiation Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, S; Tianjin University, Tianjin; Hara, W

    Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem remains: the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1- and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU > 200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
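The Bayesian fusion of intensity-based and location-based evidence can be illustrated with a minimal Gaussian sketch: if both conditionals are treated as Gaussians, the posterior mean is an inverse-variance-weighted average. This is a simplification for illustration only; the paper's estimator uses full posterior density functions, and all names and numbers below are hypothetical.

```python
import numpy as np

def fuse_gaussian_estimates(mu_intensity, var_intensity, mu_location, var_location):
    """Posterior mean and variance from two independent Gaussian conditionals.

    Treating p(rho | MR intensity) and p(rho | spatial location) as Gaussians
    and combining them by Bayes' rule yields an inverse-variance-weighted mean:
    the more certain source of evidence dominates the estimate.
    """
    w_i = 1.0 / var_intensity
    w_l = 1.0 / var_location
    mu_post = (w_i * mu_intensity + w_l * mu_location) / (w_i + w_l)
    var_post = 1.0 / (w_i + w_l)
    return mu_post, var_post

# Hypothetical voxel: intensity evidence suggests soft tissue (40 HU, tight
# uncertainty), atlas location suggests bone (700 HU, loose uncertainty).
mu, var = fuse_gaussian_estimates(40.0, 50.0**2, 700.0, 300.0**2)
```

Because the intensity evidence has the smaller variance, the fused estimate lands much closer to 40 HU than to 700 HU, and the posterior variance is smaller than either input variance.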

  10. Creating a performance appraisal template for pharmacy technicians using the method of equal-appearing intervals.

    PubMed

    Desselle, Shane P; Vaughan, Melissa; Faria, Thomas

    2002-01-01

    To design a highly quantitative template for the evaluation of community pharmacy technicians' job performance that enables managers to provide sufficient feedback and fairly allocate organizational rewards. Two rounds of interviews with two convenience samples of community pharmacists and pharmacy technicians were conducted. The interview in phase 1 was qualitative, and responses were used to design the second interview protocol. During the phase 2 interviews, a new group of respondents ranked technicians' job responsibilities, identified through the initial interviewees' responses, using scales the researchers had designed using an interval-level scaling technique called equal-appearing intervals. Chain and independent pharmacies. Phase 1: 20 pharmacists and 20 technicians from chain and independent pharmacies; phase 2: 20 pharmacists and 9 technicians from chain and independent pharmacies. Ratings of the importance of technician practice functions and corresponding responsibilities. Weights were calculated for each practice function. A weighted list of practice functions was developed, which may serve as a performance evaluation template. Customer service-related activities were judged by pharmacists and technicians alike to be the most important technician functions. Many pharmacies either lack formal performance appraisal systems or fail to implement them properly. Technicians may desire more consistent feedback from pharmacists and value information that may lead to organizational rewards. Using a weighted, behaviorally anchored performance appraisal system may help pharmacists and pharmacy managers meet these demands.
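Once interval-scaled importance weights exist for each practice function, the appraisal itself reduces to a weighted average of per-function ratings. A minimal sketch, with hypothetical function names, weights, and ratings:

```python
def weighted_appraisal_score(ratings, weights):
    """Composite technician score: per-function ratings combined with the
    interval-scaled importance weights, normalised by the total weight.
    All function names and weight values here are illustrative."""
    total_w = sum(weights.values())
    return sum(ratings[f] * weights[f] for f in weights) / total_w

# Hypothetical weights from an equal-appearing-intervals exercise and
# ratings on a 1-5 scale:
weights = {"customer_service": 9.2, "dispensing": 7.5, "inventory": 4.1}
ratings = {"customer_service": 4, "dispensing": 5, "inventory": 3}
score = weighted_appraisal_score(ratings, weights)
```

The normalisation keeps the composite on the same 1-5 scale as the input ratings, so scores remain comparable across technicians even if the weight set changes.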

  11. Weight Gain after Lung Reduction Surgery Is Related to Improved Lung Function and Ventilatory Efficiency

    PubMed Central

    Kretschman, Dana M.; Sternberg, Alice L.; DeCamp, Malcolm M.; Criner, Gerard J.

    2012-01-01

    Rationale: Lung volume reduction surgery (LVRS) is associated with weight gain in some patients, but the group that gains weight after LVRS and the mechanisms underlying this phenomenon have not been well characterized. Objectives: To describe the weight change profiles of LVRS patients enrolled in the National Emphysema Treatment Trial (NETT) and to correlate alterations in lung physiological parameters with changes in weight. Methods: We divided 1,077 non–high-risk patients in the NETT into groups according to baseline body mass index (BMI): underweight (<21 kg/m2), normal weight (21–25 kg/m2), overweight (25–30 kg/m2), and obese (>30 kg/m2). We compared BMI groups and LVRS and medical groups within each BMI stratum with respect to baseline characteristics and percent change in BMI (%ΔBMI) from baseline. We examined patients with (ΔBMI ≥ 5%) and without (ΔBMI < 5%) significant weight gain at 6 months and assessed changes in lung function and ventilatory efficiency (V̇e/V̇co2). Measurements and Main Results: The percent change in BMI was greater in the LVRS arm than in the medical arm in the underweight and normal weight groups at all follow-up time points, and at 12 and 24 months in the overweight group. In the LVRS group, patients with ΔBMI ≥ 5% at 6 months had greater improvements in FEV1 (11.53 ± 9.31 vs. 6.58 ± 8.68%; P < 0.0001), FVC (17.51 ± 15.20 vs. 7.55 ± 14.88%; P < 0.0001), residual volume (–66.20 ± 40.26 vs. –47.06 ± 39.87%; P < 0.0001), 6-minute walk distance (38.70 ± 69.57 vs. 7.57 ± 73.37 m; P < 0.0001), maximal expiratory pressures (12.73 ± 49.08 vs. 3.54 ± 32.22; P = 0.0205), and V̇e/V̇co2 (–1.58 ± 6.20 vs. 0.22 ± 8.20; P = 0.0306) at 6 months than patients with ΔBMI < 5% at 6 months. Conclusions: LVRS leads to weight gain in nonobese patients, which is associated with improvement in lung function, exercise capacity, respiratory muscle strength, and ventilatory efficiency. 
These physiological changes may be partially responsible for weight gain in patients who undergo LVRS. PMID:22878279

  12. Grey Comprehensive Evaluation of Biomass Power Generation Project Based on Group Judgement

    NASA Astrophysics Data System (ADS)

    Xia, Huicong; Niu, Dongxiao

    2017-06-01

    The comprehensive evaluation of benefits is an important task that must be carried out at all stages of biomass power generation projects. This paper proposes an improved grey comprehensive evaluation method based on the triangular whitenization function. To improve the objectivity of the weights produced by the reference-comparison judgment method alone, group judgment is introduced into the weighting process. In the grey comprehensive evaluation, a number of experts are invited to estimate the benefit level of a project, and the individual estimates are optimized under the minimum-variance principle to improve the accuracy of the evaluation result. Taking a biomass power generation project as an example, the grey comprehensive evaluation showed that the benefit level of the project was good. This example demonstrates the feasibility of the grey comprehensive evaluation method based on group judgment for the benefit evaluation of biomass power generation projects.
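The triangular whitenization function at the core of this kind of grey evaluation maps a benefit score to a membership degree in a grey class. A minimal sketch, with illustrative class boundaries:

```python
def triangular_whitenization(x, a, b, c):
    """Triangular whitenization (membership) function used in grey
    clustering evaluation: the degree to which value x belongs to the
    grey class peaking at b with support [a, c]. Boundaries here are
    illustrative, not taken from the paper."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Membership of a benefit score of 78 in a "good" class spanning [60, 100]
# and peaking at 80:
m = triangular_whitenization(78, 60, 80, 100)
```

In a full evaluation, one such function is defined per grey class ("poor", "fair", "good", ...) and the class with the largest membership degree gives the project's benefit level.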

  13. Classification of Company Performance using Weighted Probabilistic Neural Network

    NASA Astrophysics Data System (ADS)

    Yasin, Hasbi; Waridi Basyiruddin Arifin, Adi; Warsito, Budi

    2018-05-01

    Company performance can be classified by examining a company's financial status as good or bad. The classification can be achieved by either parametric or non-parametric approaches; the neural network is one of the non-parametric methods. One artificial neural network (ANN) model is the Probabilistic Neural Network (PNN). A PNN consists of four layers: an input layer, a pattern layer, a summation layer, and an output layer. The distance function used is the Euclidean distance, and every class shares the same weight values. This study uses a PNN modified in the weighting process between the pattern layer and the summation layer by incorporating the Mahalanobis distance. This model is called the Weighted Probabilistic Neural Network (WPNN). The results show that modeling company performance with the WPNN model achieves very high accuracy, reaching 100%.
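The WPNN modification can be sketched as follows: the pattern layer's Gaussian kernel is evaluated on the Mahalanobis distance (using each class's covariance) instead of the Euclidean distance, and the summation layer averages the kernel outputs per class. This is an illustrative reading of the idea on synthetic data, not the paper's implementation:

```python
import numpy as np

def wpnn_classify(x, class_samples, sigma=1.0):
    """Sketch of the weighted-PNN idea: the pattern-layer kernel uses the
    Mahalanobis distance under each class's own covariance.

    class_samples maps label -> (n_k, d) array of training vectors.
    Returns the label whose summation-layer output is largest.
    """
    scores = {}
    for label, samples in class_samples.items():
        cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(samples.shape[1])
        inv_cov = np.linalg.inv(cov)
        diffs = samples - x
        d2 = np.einsum('ij,jk,ik->i', diffs, inv_cov, diffs)  # Mahalanobis^2
        # summation layer: average of the pattern-layer kernel outputs
        scores[label] = np.exp(-d2 / (2 * sigma**2)).mean()
    return max(scores, key=scores.get)

# Two synthetic "financial status" clusters standing in for good/bad firms.
rng = np.random.default_rng(0)
good = rng.normal([0.0, 0.0], 0.5, size=(30, 2))
bad = rng.normal([3.0, 3.0], 0.5, size=(30, 2))
label = wpnn_classify(np.array([0.2, -0.1]), {"good": good, "bad": bad})
```

Using a per-class covariance lets the kernel adapt to correlated, differently scaled financial indicators, which is the motivation for replacing the shared Euclidean weighting.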

  14. A Posteriori Error Estimation for Discontinuous Galerkin Approximations of Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Larson, Mats G.; Barth, Timothy J.

    1999-01-01

    This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.

  15. Increase in gap-junctional intercellular communications (GJIC) of normal human dermal fibroblasts (NHDF) on surfaces coated with high-molecular-weight hyaluronic acid (HMW HA).

    PubMed

    Park, Jeong Ung; Tsuchiya, Toshie

    2002-06-15

    Normal human dermal fibroblast (NHDF) cells were used to detect differences in gap-junctional intercellular communication (GJIC) by hyaluronic acid (HA), a linear polymer built from repeating disaccharide units that consist of N-acetyl-D-glucosamine (GlcNAc) and D-glucuronic acid (GlcA) linked by a beta 1-4 glycosidic bond. The NHDF cells were cultured with different molecular weights (MW) of HA for 4 days. The rates of cell attachment in dishes coated with high-molecular-weight (HMW; 310 kDa or 800 kDa) HA at 2 mg/dish were significantly reduced at an early time point compared with low-molecular-weight (LMW; 4.8 kDa or 48 kDa) HA with the same coating amounts. HA-coated surfaces were observed by atomic force microscopy (AFM) under air and showed that HA molecules ran parallel in the dish coated with LMW HA and had an aggregated island structure on the HMW HA surfaces. The cell functions of GJIC were assayed by a scrape-loading dye transfer (SLDT) method using a dye solution of Lucifer yellow. Promotion of the dye transfer was clearly obtained in the cell monolayer grown on the surface coated with HMW HA. These results suggest that HMW HA promotes the function of GJIC in NHDF cells. In contrast, when HMW HA was added to the monolayer of NHDF cells, the functions of GJIC clearly were lowered in comparison with the cells grown in the control dish or with those grown on the surface of HMW HA. Therefore it is concluded that the MW size of HA and its application method are important factors for generating biocompatible tissue-engineered products because of the manner in which the GJIC participates in cell differentiation and cell growth rate. Copyright 2002 Wiley Periodicals, Inc. J Biomed Mater Res 60: 541-547, 2002

  16. Predicting protein complexes from weighted protein-protein interaction graphs with a novel unsupervised methodology: Evolutionary enhanced Markov clustering.

    PubMed

    Theofilatos, Konstantinos; Pavlopoulou, Niki; Papasavvas, Christoforos; Likothanassis, Spiros; Dimitrakopoulos, Christos; Georgopoulos, Efstratios; Moschopoulos, Charalampos; Mavroudi, Seferina

    2015-03-01

    Proteins are considered to be the most important individual components of biological systems, and they combine to form physical protein complexes that are responsible for certain molecular functions. Despite the wide availability of protein-protein interaction (PPI) information, not much is known about protein complexes. Experimental methods are limited in terms of time, efficiency, cost, and performance constraints. Existing computational methods have provided encouraging preliminary results, but they face certain disadvantages: they require parameter tuning, some of them cannot handle weighted PPI data, and others do not allow a protein to participate in more than one protein complex. In the present paper, we propose a new fully unsupervised methodology for predicting protein complexes from weighted PPI graphs. The proposed methodology is called evolutionary enhanced Markov clustering (EE-MC), a hybrid combination of an adaptive evolutionary algorithm and a state-of-the-art clustering algorithm named enhanced Markov clustering. EE-MC was compared with state-of-the-art methodologies when applied to datasets from the human and yeast (Saccharomyces cerevisiae) organisms. Using publicly available datasets, EE-MC outperformed existing methodologies (in some datasets the separation metric was increased by 10-20%). Moreover, when applied to new human datasets, its performance was encouraging for the prediction of protein complexes consisting of proteins with high functional similarity. Specifically, 5737 protein complexes were predicted, and 72.58% of them are enriched for at least one gene ontology (GO) function term. EE-MC is by design able to overcome intrinsic limitations of existing methodologies, such as their inability to handle weighted PPI networks, their constraint of assigning every protein to exactly one cluster, and the difficulties they face concerning parameter tuning. 
This was experimentally validated; moreover, new potentially true human protein complexes were suggested as candidates for further validation using experimental techniques. Copyright © 2015 Elsevier B.V. All rights reserved.
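The clustering core that EE-MC builds on, plain Markov clustering (MCL) on a weighted adjacency matrix, can be sketched briefly; the evolutionary layer that adapts the inflation parameter is omitted, and the toy graph below is illustrative rather than a PPI network.

```python
import numpy as np

def markov_clustering(adj, inflation=2.0, iters=50):
    """Plain Markov clustering (MCL) on a weighted adjacency matrix:
    alternate expansion (matrix squaring, which spreads flow) and
    inflation (elementwise power plus column normalisation, which
    strengthens strong flow), then read off each node's dominant
    attractor as its cluster label."""
    m = adj + np.eye(len(adj))          # self-loops stabilise the iteration
    m = m / m.sum(axis=0)               # make columns stochastic
    for _ in range(iters):
        m = m @ m                       # expansion
        m = m ** inflation              # inflation
        m = m / m.sum(axis=0)
    return m.argmax(axis=0)             # cluster label = dominant attractor

# Two weighted triangles joined by a single weak edge fall into two clusters.
a = np.zeros((6, 6))
for i, j, w in [(0, 1, 1.0), (1, 2, 0.9), (0, 2, 0.8),
                (3, 4, 1.0), (4, 5, 0.9), (3, 5, 0.8), (2, 3, 0.1)]:
    a[i, j] = a[j, i] = w
labels = markov_clustering(a)
```

Note that plain MCL, unlike EE-MC, assigns each protein to exactly one cluster and needs its inflation parameter chosen by hand, which is exactly the limitation the evolutionary layer targets.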

  17. Penalized weighted least-squares approach for low-dose x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong

    2006-03-01

    The noise of a low-dose computed tomography (CT) sinogram follows approximately a Gaussian distribution with a nonlinear dependence between the sample mean and variance. The noise is statistically uncorrelated among detector bins at any view angle. However, the correlation coefficient matrix of the data signal indicates a strong signal correlation among neighboring views. Based on these observations, the Karhunen-Loeve (KL) transform can be used to de-correlate the signal among neighboring views. In each KL component, a penalized weighted least-squares (PWLS) objective function can be constructed and an optimal sinogram estimated by minimizing the objective function, followed by filtered backprojection (FBP) for CT image reconstruction. In this work, we compared the KL-PWLS method with an iterative image reconstruction algorithm, which uses Gauss-Seidel iteration to minimize the PWLS objective function in the image domain. We also compared KL-PWLS with an iterative sinogram smoothing algorithm, which uses the iterated conditional modes calculation to minimize the PWLS objective function in sinogram space, followed by FBP for image reconstruction. Phantom experiments show comparable performance of these three PWLS methods in suppressing noise-induced artifacts and preserving resolution in reconstructed images. Computer simulation concurs with the phantom experiments in terms of the noise-resolution tradeoff and detectability in a low-contrast environment. The KL-PWLS noise reduction may have a computational advantage for low-dose CT imaging, especially for dynamic high-resolution studies.
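A one-dimensional toy version of the PWLS idea, minimizing a variance-weighted data-fidelity term plus a quadratic smoothness penalty by Gauss-Seidel updates, illustrates the structure of the objective. This sketch omits the KL transform and works directly on a 1-D signal; all names and values are illustrative.

```python
import numpy as np

def pwls_smooth(y, var, beta=1.0, iters=100):
    """Gauss-Seidel minimisation of a 1-D penalized weighted least-squares
    objective: sum_i (y_i - x_i)^2 / var_i + beta * sum_i (x_i - x_{i-1})^2.
    Each sweep updates x_i with the closed-form minimiser in x_i alone,
    mimicking the noise model in which variance depends on the mean."""
    x = np.asarray(y, dtype=float).copy()
    w = 1.0 / np.asarray(var, dtype=float)   # inverse-variance data weights
    n = len(x)
    for _ in range(iters):
        for i in range(n):
            neigh = []
            if i > 0:
                neigh.append(x[i - 1])
            if i < n - 1:
                neigh.append(x[i + 1])
            x[i] = (w[i] * y[i] + beta * sum(neigh)) / (w[i] + beta * len(neigh))
    return x

# Noisy ramp whose variance grows with the mean, as in low-dose CT data.
rng = np.random.default_rng(1)
truth = np.linspace(10.0, 40.0, 40)
var = 0.5 * truth
noisy = truth + rng.normal(0.0, np.sqrt(var))
smoothed = pwls_smooth(noisy, var, beta=1.0)
```

The inverse-variance weights make noisier samples lean harder on the smoothness penalty, which is the essential property PWLS exploits for low-dose data.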

  18. A new method for automated high-dimensional lesion segmentation evaluated in vascular injury and applied to the human occipital lobe.

    PubMed

    Mah, Yee-Haur; Jager, Rolf; Kennard, Christopher; Husain, Masud; Nachev, Parashkev

    2014-07-01

    Making robust inferences about the functional neuroanatomy of the brain is critically dependent on experimental techniques that examine the consequences of focal loss of brain function. Unfortunately, the use of the most comprehensive such technique, lesion-function mapping, is complicated by the need for time-consuming and subjective manual delineation of the lesions, greatly limiting the practicability of the approach. Here we exploit a recently described general measure of statistical anomaly, zeta, to devise a fully automated, high-dimensional algorithm for identifying the parameters of lesions within a brain image given a reference set of normal brain images. We proceed to evaluate such an algorithm in the context of diffusion-weighted imaging of the commonest type of lesion used in neuroanatomical research: ischaemic damage. Summary performance metrics exceed those previously published for diffusion-weighted imaging and approach the current gold standard, manual segmentation, sufficiently closely for fully automated lesion-mapping studies to become a possibility. We apply the new method to 435 unselected images of patients with ischaemic stroke to derive a probabilistic map of the pattern of damage in lesions involving the occipital lobe, demonstrating the variation in anatomical resolvability of occipital areas so as to guide future lesion-function studies of the region. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Acute drug induced hepatitis secondary to a weight loss product purchased over the internet

    PubMed Central

    Joshi, Deepak; Cross, Tim JS; Wong, Voi Shim

    2007-01-01

    Background Many people now seek alternative methods of weight loss. The internet provides a readily available source of weight reduction products, the ingredients of which are often unclear. The authors describe a case of acute hepatitis in a 20 year old woman caused by such a product purchased over the internet. Case Presentation A 20-year old woman presented with a two day history of abdominal pain, vomiting and jaundice. There were no identifiable risk factors for chronic liver disease. Liver function tests demonstrated an acute hepatitis (aspartate aminotransferase 1230 IU/L). A chronic liver disease screen was negative. The patient had started a weight loss product (Pro-Lean), purchased over the internet two weeks prior to presentation. The patient was treated conservatively, and improved. The sequence of events suggests an acute hepatitis caused by an herbal weight loss product. Conclusion This case report highlights the dangers of weight loss products available to the public over the internet, and the importance of asking specifically about alternative medicines in patients who present with an acute hepatitis. PMID:17597525

  20. Color Feature-Based Object Tracking through Particle Swarm Optimization with Improved Inertia Weight

    PubMed Central

    Guo, Siqiu; Zhang, Tao; Song, Yulong

    2018-01-01

    This paper presents a particle swarm tracking algorithm with improved inertia weight based on color features. The weighted color histogram is used as the target feature to reduce the contribution of target edge pixels in the target feature, which makes the algorithm insensitive to the target non-rigid deformation, scale variation, and rotation. Meanwhile, the influence of partial obstruction on the description of target features is reduced. The particle swarm optimization algorithm can complete the multi-peak search, which can cope well with the object occlusion tracking problem. This means that the target is located precisely where the similarity function appears multi-peak. When the particle swarm optimization algorithm is applied to the object tracking, the inertia weight adjustment mechanism has some limitations. This paper presents an improved method. The concept of particle maturity is introduced to improve the inertia weight adjustment mechanism, which could adjust the inertia weight in time according to the different states of each particle in each generation. Experimental results show that our algorithm achieves state-of-the-art performance in a wide range of scenarios. PMID:29690610

  1. Color Feature-Based Object Tracking through Particle Swarm Optimization with Improved Inertia Weight.

    PubMed

    Guo, Siqiu; Zhang, Tao; Song, Yulong; Qian, Feng

    2018-04-23

    This paper presents a particle swarm tracking algorithm with improved inertia weight based on color features. The weighted color histogram is used as the target feature to reduce the contribution of target edge pixels in the target feature, which makes the algorithm insensitive to the target non-rigid deformation, scale variation, and rotation. Meanwhile, the influence of partial obstruction on the description of target features is reduced. The particle swarm optimization algorithm can complete the multi-peak search, which can cope well with the object occlusion tracking problem. This means that the target is located precisely where the similarity function appears multi-peak. When the particle swarm optimization algorithm is applied to the object tracking, the inertia weight adjustment mechanism has some limitations. This paper presents an improved method. The concept of particle maturity is introduced to improve the inertia weight adjustment mechanism, which could adjust the inertia weight in time according to the different states of each particle in each generation. Experimental results show that our algorithm achieves state-of-the-art performance in a wide range of scenarios.
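The baseline that the proposed maturity-based mechanism improves on, PSO with the common linearly decreasing inertia weight, can be sketched as follows; the per-particle maturity adjustment itself is not reproduced here, and all parameter values are standard illustrative choices rather than the paper's.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w_max=0.9, w_min=0.4,
                 c1=2.0, c2=2.0, bounds=(-5.0, 5.0), seed=0):
    """Particle swarm optimisation with a linearly decreasing inertia weight:
    large inertia early encourages global exploration, small inertia late
    encourages local refinement. Returns (best position, best value)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / (iters - 1)   # inertia schedule
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Sphere function: the swarm should locate the minimum at the origin.
best_x, best_val = pso_minimize(lambda p: float((p**2).sum()), dim=3)
```

In tracking, the objective plays the role of the (possibly multi-peaked) similarity function between the candidate region and the target's weighted color histogram; the per-particle inertia adaptation described in the abstract replaces the single global schedule shown here.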

  2. Pancreatic beta cell function following liraglutide-augmented weight loss in individuals with prediabetes: analysis of a randomised, placebo-controlled study

    PubMed Central

    Liu, Alice; Ariel, Danit; Abbasi, Fahim; Lamendola, Cindy; Grove, Kaylene; Tomasso, Vanessa; Reaven, Gerald

    2016-01-01

    Aims/hypothesis Liraglutide can modulate insulin secretion by directly stimulating beta cells or indirectly through weight loss and enhanced insulin sensitivity. Recently, we showed that liraglutide treatment in overweight individuals with prediabetes (impaired fasting glucose and/or impaired glucose tolerance) led to greater weight loss (−7.7% vs −3.9%) and improvement in insulin resistance compared with placebo. The current study evaluates the effects on beta cell function of weight loss augmented by liraglutide compared with weight loss alone. Methods This was a parallel, randomised study conducted in a single academic centre. Both participants and study administrators were blinded to treatment assignment. Individuals who were 40–70 years old, overweight (BMI 27–40 kg/m2) and with prediabetes were randomised (via a computerised system) to receive liraglutide (n = 35) or matching placebo (n = 33), and 49 participants were analysed. All were instructed to follow an energy-restricted diet. Primary outcome was insulin secretory function, which was evaluated in response to graded infusions of glucose and day-long mixed meals. Results Liraglutide treatment (n = 24) significantly (p ≤ 0.03) increased the insulin secretion rate (% mean change [95% CI]; 21% [12, 31] vs −4% [−11, 3]) and pancreatic beta cell sensitivity to intravenous glucose (229% [161, 276] vs −0.5% [−15, 14]), and decreased insulin clearance rate (−3.5% [−11, 4] vs 8.2% [0.2, 16]) as compared with placebo (n = 25). The liraglutide-treated group also had significantly (p ≤ 0.03) lower day-long glucose (−8.2% [−11, −6] vs −0.1% [−3, 2]) and NEFA concentrations (−14% [−20, −8] vs −2.1% [−10, 6]) following mixed meals, whereas day-long insulin concentrations did not significantly differ as compared with placebo. 
In a multivariate regression analysis, weight loss was associated with a decrease in insulin secretion rate and day-long glucose and insulin concentrations in the placebo group (p ≤0.05), but there was no association with weight loss in the liraglutide group. The most common side effect of liraglutide was nausea. Conclusions/interpretation A direct stimulatory effect on beta cell function was the predominant change in liraglutide-augmented weight loss. These changes appear to be independent of weight loss. Trial registration ClinicalTrials.gov NCT01784965 PMID:24326527

  3. Weighting of Acoustic Cues to a Manner Distinction by Children With and Without Hearing Loss

    PubMed Central

    Lowenstein, Joanna H.

    2015-01-01

    Purpose Children must develop optimal perceptual weighting strategies for processing speech in their first language. Hearing loss can interfere with that development, especially if cochlear implants are required. The three goals of this study were to measure, for children with and without hearing loss: (a) cue weighting for a manner distinction, (b) sensitivity to those cues, and (c) real-world communication functions. Method One hundred and seven children (43 with normal hearing [NH], 17 with hearing aids [HAs], and 47 with cochlear implants [CIs]) performed several tasks: labeling of stimuli from /bɑ/-to-/wɑ/ continua varying in formant and amplitude rise time (FRT and ART), discrimination of ART, word recognition, and phonemic awareness. Results Children with hearing loss were less attentive overall to acoustic structure than children with NH. Children with CIs, but not those with HAs, weighted FRT less and ART more than children with NH. Sensitivity could not explain cue weighting. FRT cue weighting explained significant amounts of variability in word recognition and phonemic awareness; ART cue weighting did not. Conclusion Signal degradation inhibits access to spectral structure for children with CIs, but cannot explain their delayed development of optimal weighting strategies. Auditory training could strengthen the weighting of spectral cues for children with CIs, thus aiding spoken language acquisition. PMID:25813201

  4. Hazard Function Estimation with Cause-of-Death Data Missing at Random.

    PubMed

    Wang, Qihua; Dinse, Gregg E; Liu, Chunling

    2012-04-01

    Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data.
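The idea behind the inverse probability weighted estimator can be sketched in a few lines: terms with missing censoring indicators are dropped, and the observed terms are reweighted by the estimated probability of observation (taken constant here, under a simple missing-at-random assumption). This is an illustrative simplification of a kernel hazard estimator on simulated data, not the paper's estimator.

```python
import numpy as np

def ipw_kernel_hazard(t, times, delta, observed, h=0.3):
    """Kernel hazard estimate with inverse-probability weighting for
    missing-at-random censoring indicators: terms whose indicator is
    unobserved contribute zero, and observed terms are reweighted by
    1/pi, the estimated probability that an indicator is observed."""
    times = np.asarray(times, dtype=float)
    pi = observed.mean()                        # constant observation prob. (MAR)
    k = np.exp(-0.5 * ((t - times) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    f_hat = np.mean(k * delta * observed / pi)  # subdensity of observed deaths
    s_hat = np.mean(times >= t)                 # empirical at-risk proportion
    return f_hat / s_hat

# Exponential death times (true hazard 1.0), independent censoring, and
# 20% of the censoring indicators missing completely at random.
rng = np.random.default_rng(2)
n = 5000
death = rng.exponential(1.0, n)
cens = rng.exponential(5.0, n)
times = np.minimum(death, cens)
delta = (death <= cens).astype(float)
observed = (rng.random(n) < 0.8).astype(float)
lam = ipw_kernel_hazard(1.0, times, delta, observed)
```

The 1/pi reweighting restores the contribution lost to the missing indicators, so the estimate remains near the true hazard of 1.0 despite 20% missingness.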

  5. Optimal graph search segmentation using arc-weighted graph for simultaneous surface detection of bladder and prostate.

    PubMed

    Song, Qi; Wu, Xiaodong; Liu, Yunlong; Smith, Mark; Buatti, John; Sonka, Milan

    2009-01-01

    We present a novel method for globally optimal surface segmentation of multiple mutually interacting objects, incorporating both edge and shape knowledge in a 3-D graph-theoretic approach. Hard surface-interaction constraints are enforced in the interacting regions, preserving the geometric relationship of those partially interacting surfaces. A soft smoothness prior enforcing a priori shape compliance is introduced into the energy functional to provide shape guidance. The globally optimal surfaces can be achieved simultaneously by solving a maximum-flow problem on an arc-weighted graph representation. Representing the segmentation problem as an arc-weighted graph, one can incorporate a wider spectrum of constraints into the formulation, thus increasing segmentation accuracy and robustness in volumetric image data. To the best of our knowledge, our method is the first attempt to introduce the arc-weighted graph representation into the graph-searching approach for simultaneous segmentation of multiple partially interacting objects that admits a globally optimal solution in low-order polynomial time. Our new approach was applied to the simultaneous surface detection of the bladder and prostate. The results were quite encouraging despite the low saliency of the bladder and prostate in CT images.

  6. Single-shot ADC imaging for fMRI.

    PubMed

    Song, Allen W; Guo, Hua; Truong, Trong-Kha

    2007-02-01

    It has been suggested that apparent diffusion coefficient (ADC) contrast can be sensitive to cerebral blood flow (CBF) changes during brain activation. However, current ADC imaging techniques have an inherently low temporal resolution due to the requirement of multiple acquisitions with different b-factors, as well as potential confounds from cross talk between the deoxyhemoglobin-induced background gradients and the externally applied diffusion-weighting gradients. In this report a new method is proposed and implemented that addresses these two limitations. Specifically, a single-shot pulse sequence that sequentially acquires one gradient-echo (GRE) and two diffusion-weighted spin-echo (SE) images was developed. In addition, the diffusion-weighting gradient waveform was numerically optimized to null the cross terms with the deoxyhemoglobin-induced background gradients to fully isolate the effect of diffusion weighting from that of oxygenation-level changes. The experimental results show that this new single-shot method can acquire ADC maps with sufficient signal-to-noise ratio (SNR), and establish its practical utility in functional MRI (fMRI) to complement the blood oxygenation level-dependent (BOLD) technique and provide differential sensitivity for different vasculatures to better localize neural activity originating from the small vessels. Copyright (c) 2007 Wiley-Liss, Inc.

  7. Modelling population distribution using remote sensing imagery and location-based data

    NASA Astrophysics Data System (ADS)

    Song, J.; Prishchepov, A. V.

    2017-12-01

A detailed spatial distribution of population density is essential for urban studies such as planning, environmental pollution assessment, and emergency response, and even for estimating pressure on the environment and human exposure and health risks. However, most previous research has relied on census data, because detailed dynamic population distributions are difficult to acquire, especially at the microscale. This research describes a method that uses remote sensing imagery and location-based data to model population distribution at the functional-zone level. First, urban functional zones within a city were mapped from high-resolution remote sensing images and POIs. The functional-zone extraction workflow comprises five parts: (1) urban land-use classification; (2) segmentation of images in the built-up area; (3) identification of functional segments by POIs; (4) identification of functional blocks from the functional segments and weight coefficients; (5) accuracy assessment with validation points. The result is shown in Fig. 1. Second, we applied ordinary least squares (OLS) and geographically weighted regression (GWR) to assess the spatially nonstationary relationship between night-light digital number (DN) and population density at sampling points. Both methods were then used to predict the population distribution over the study area. The R² of the GWR model was on the order of 0.7, and the model captured significant spatial variation over the region that the traditional OLS model missed; the result is shown in Fig. 2. Validation with sampling points of population density demonstrated that predictions from the GWR model correlated well with the light values; the result is shown in Fig. 3. The results show that: (1) population density is not linearly correlated with light brightness under a global model; (2) VIIRS night-time light data can estimate population density at the city level when integrated with functional zones; 
(3) GWR is a robust model for mapping population distribution: the adjusted R² of the GWR models was higher than that of the optimal OLS models, confirming that GWR models achieve better prediction accuracy. This method therefore provides detailed population density information for microscale urban studies.
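The local weighting at the heart of GWR can be sketched as follows: each regression point gets its own coefficients, fitted by weighted least squares with a distance-decaying kernel. This is a minimal illustration on synthetic data; the variable names, the Gaussian kernel, and the bandwidth are assumptions, not taken from the paper.

```python
import numpy as np

def gwr_point_estimate(x0, X, coords, y, bandwidth):
    """Fit a locally weighted least-squares model at location x0.
    Weights follow a Gaussian kernel of spatial distance, as in
    geographically weighted regression (GWR)."""
    d = np.linalg.norm(coords - x0, axis=1)      # distances to x0
    w = np.exp(-0.5 * (d / bandwidth) ** 2)      # Gaussian kernel weights
    Xd = np.column_stack([np.ones(len(y)), X])   # add intercept column
    W = np.diag(w)
    # Weighted normal equations: beta = (X'WX)^-1 X'Wy
    return np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)

# Synthetic example: "population density" vs. night-light DN with a
# slope that drifts west-to-east (all numbers are illustrative only).
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
dn = rng.uniform(0, 60, size=200)                # light digital number
slope = 1.0 + 0.2 * coords[:, 0]                 # slope varies with x
pop = slope * dn + rng.normal(0, 1, 200)

beta_west = gwr_point_estimate(np.array([1.0, 5.0]), dn[:, None], coords, pop, 2.0)
beta_east = gwr_point_estimate(np.array([9.0, 5.0]), dn[:, None], coords, pop, 2.0)
```

Because the local fits use different kernels, the eastern estimate recovers a steeper slope than the western one, which a single global OLS fit would average away.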

  8. Rapid construction of pinhole SPECT system matrices by distance-weighted Gaussian interpolation method combined with geometric parameter estimations

    NASA Astrophysics Data System (ADS)

    Lee, Ming-Wei; Chen, Yi-Chun

    2014-02-01

In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called the H matrix, for high-spatial-resolution image reconstruction. Generally, an H matrix can be obtained by various methods, such as measurement, simulation, or some combination of the two. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated from the relations between the Gaussian coefficients and the geometric parameters of the imaging system, using distance-weighting factors related to the projected centroids of the voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed profiles similar to the measured PRFs. OSEM-reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability of an SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided detectability comparable to that of an H matrix acquired by a full 3D grid-scan experiment. The simplified grid pattern reduced the acquisition time of a full 1.0-mm grid H matrix by factors of about 15.2 and 62.2 for the 2.0-mm and 4.0-mm grids, respectively. A finer-grid H matrix down to 0.5-mm spacing interpolated by the proposed method would shorten the acquisition time by an additional factor of 8.
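The distance-weighted interpolation step can be sketched with a simple inverse-distance scheme over fitted Gaussian PRF parameters. This is only a schematic of the idea; the actual DW-GIMGPE weighting uses projected centroids and geometric parameters, and all grid positions and parameter values below are made up.

```python
import numpy as np

def idw_interpolate_prf(target, measured_pts, measured_params, p=2.0):
    """Interpolate 2D-Gaussian PRF parameters for a missing voxel by
    distance weighting: nearby measured voxels contribute more. A
    simplified stand-in for the paper's weighting scheme."""
    d = np.linalg.norm(measured_pts - target, axis=1)
    if np.any(d < 1e-12):                  # target coincides with a measurement
        return measured_params[np.argmin(d)]
    w = 1.0 / d ** p                       # inverse-distance weights
    w /= w.sum()
    return w @ measured_params             # weighted average of Gaussian coefficients

# Measured voxels on a coarse grid; each parameter row is
# (amplitude, centroid_x, centroid_y, sigma) of a fitted 2D Gaussian.
pts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
params = np.array([[1.0, 10.0, 10.0, 1.2],
                   [0.8, 14.0, 10.0, 1.3],
                   [0.9, 10.0, 14.0, 1.4],
                   [0.7, 14.0, 14.0, 1.5]])
# Missing voxel at the grid center: all four neighbors weigh equally.
interp = idw_interpolate_prf(np.array([1.0, 1.0]), pts, params)
```

A full H matrix would then be assembled by evaluating the interpolated Gaussian at every detector pixel for every missing voxel.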

  9. Helicopter flight-control design using an H(2) method

    NASA Technical Reports Server (NTRS)

    Takahashi, Marc D.

    1991-01-01

    Rate-command and attitude-command flight-control designs for a UH-60 helicopter in hover are presented and were synthesized using an H(2) method. Using weight functions, this method allows the direct shaping of the singular values of the sensitivity, complementary sensitivity, and control input transfer-function matrices to give acceptable feedback properties. The designs were implemented on the Vertical Motion Simulator, and four low-speed hover tasks were used to evaluate the control system characteristics. The pilot comments from the accel-decel, bob-up, hovering turn, and side-step tasks indicated good decoupling and quick response characteristics. However, an underlying roll PIO tendency was found to exist away from the hover condition, which was caused by a flap regressing mode with insufficient damping.

  10. Image Quality Assessment of High-Resolution Satellite Images with Mtf-Based Fuzzy Comprehensive Evaluation Method

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Luo, Z.; Zhang, Y.; Guo, F.; He, L.

    2018-04-01

A Modulation Transfer Function (MTF)-based fuzzy comprehensive evaluation method is proposed in this paper for evaluating the quality of high-resolution satellite images. To establish the factor set, two MTF features and seven radiometric features were extracted from the knife-edge regions of image patches: Nyquist, MTF0.5, entropy, peak signal-to-noise ratio (PSNR), average difference, edge intensity, average gradient, contrast, and ground sample distance (GSD). After analyzing the statistical distributions of these features, a fuzzy evaluation threshold table and fuzzy evaluation membership functions were established. Experiments on comprehensive quality assessment of different natural and artificial objects were carried out with GF2 image patches. The results showed that the calibration-field image has the highest quality score; the water image has quality closest to the calibration field; and the building image is slightly poorer than the water image but much better than the farmland image. To test the influence of different features on quality evaluation, experiments with different weights were conducted on GF2 and SPOT7 images. The results showed that different weights yield different evaluation outcomes: when the weights emphasize edge features and GSD, the image quality of GF2 is better than that of SPOT7, whereas when MTF and PSNR are set as the main factors, the image quality of SPOT7 is better than that of GF2.
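The core of a fuzzy comprehensive evaluation is a weighted composition of a membership matrix with a factor-weight vector. The sketch below uses the common M(·, +) operator; the three factors, three quality grades, and all membership values are illustrative placeholders, not the paper's thresholds.

```python
import numpy as np

# R[i, j] = membership of factor i in quality grade j (rows sum to 1).
# Factor names and numbers are illustrative, not from the paper.
R = np.array([
    [0.7, 0.2, 0.1],   # MTF at Nyquist
    [0.5, 0.3, 0.2],   # PSNR
    [0.6, 0.3, 0.1],   # edge intensity
])
w = np.array([0.5, 0.3, 0.2])   # factor weights, sum to 1

b = w @ R                       # fuzzy comprehensive evaluation vector
grade = int(np.argmax(b))       # 0 = "good", 1 = "fair", 2 = "poor"
```

Changing `w` (e.g. emphasizing MTF and PSNR versus edge features) shifts `b` and can flip the winning grade, which is exactly the weight-sensitivity effect the abstract reports.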

  11. Segmentation of mouse dynamic PET images using a multiphase level set method

    NASA Astrophysics Data System (ADS)

    Cheng-Liao, Jinxiu; Qi, Jinyi

    2010-11-01

Image segmentation plays an important role in medical diagnosis. Here we propose an image segmentation method for four-dimensional mouse dynamic PET images. We consider that voxels inside each organ have similar time activity curves. The use of tracer dynamic information allows us to separate regions that have similar integrated activities in a static image but different temporal responses. We develop a multiphase level set method that utilizes both the spatial and temporal information in a dynamic PET data set. Different weighting factors are assigned to each image frame based on the noise level and the activity difference among the organs of interest. We used a weighted absolute difference function in the data matching term to increase the robustness of the estimate and to avoid over-partitioning of high-contrast regions. We validated the proposed method using computer-simulated dynamic PET data, as well as real mouse data from a microPET scanner, and compared the results with those of a dynamic clustering method. The results show that the proposed method produces smoother segments with fewer misclassified voxels.
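The weighted absolute-difference data term described above can be written down in a few lines: for each voxel it sums, over frames, a per-frame weight times the absolute deviation of the voxel's time-activity curve from the region mean. The frame weights and values below are illustrative; the paper derives its weights from noise level and inter-organ activity differences.

```python
import numpy as np

def weighted_abs_data_term(tac, region_mean, frame_weights):
    """Weighted absolute-difference data-matching term for one voxel:
    sum_k w_k * |tac_k - mean_k|. The absolute value (vs. squared
    error) reduces the influence of outlier frames."""
    return float(np.sum(frame_weights * np.abs(tac - region_mean)))

tac = np.array([1.0, 2.0, 3.0, 2.5])   # voxel time-activity curve
mu = np.array([1.1, 2.2, 2.7, 2.4])    # candidate region's mean TAC
w = np.array([0.1, 0.2, 0.4, 0.3])     # per-frame weights (illustrative)
cost = weighted_abs_data_term(tac, mu, w)
```

In the level set evolution, each voxel is driven toward the region whose mean TAC gives the smallest such cost.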

  12. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory

    PubMed Central

    Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong

    2016-01-01

Sensor data fusion plays an important role in fault diagnosis. Dempster–Shafer (D-S) evidence theory is widely used in fault diagnosis, since it efficiently combines evidence from different sensors. However, when the evidence is highly conflicting, it may yield a counterintuitive result. To address this issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability is taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to each piece of evidence according to sensor reliability. The proposed method performs better in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is presented to illustrate the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods. PMID:26797611
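The weighted-averaging strategy can be sketched as follows: score each sensor report by its similarity to the others, average the reports with those credibility weights, and then combine the averaged evidence with Dempster's rule n−1 times. This sketch restricts masses to singleton hypotheses, uses a plain Euclidean distance, and omits the belief-entropy term, so it illustrates the spirit of the method rather than the paper's exact formulation.

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule restricted to singleton hypotheses (sketch)."""
    joint = np.outer(m1, m2)
    conflict = joint.sum() - np.trace(joint)   # mass assigned to the empty set
    return np.diag(joint) / (1.0 - conflict)   # normalize the agreeing mass

def weighted_average_fusion(masses):
    """Credibility-weighted averaging before combination: reports that
    agree with the others get larger weights, down-weighting outliers."""
    masses = np.asarray(masses, float)
    n = len(masses)
    d = np.array([[np.linalg.norm(a - b) for b in masses] for a in masses])
    support = np.exp(-d).sum(axis=1) - 1.0     # similarity to other reports
    w = support / support.sum()                # credibility weights
    avg = w @ masses                           # weighted average evidence
    fused = avg
    for _ in range(n - 1):                     # combine the average n-1 times
        fused = dempster_combine(fused, avg)
    return fused

# Three sensor reports over faults {F1, F2, F3}; the second one conflicts.
reports = [[0.8, 0.1, 0.1],
           [0.0, 0.9, 0.1],
           [0.7, 0.2, 0.1]]
fused = weighted_average_fusion(reports)
decision = int(np.argmax(fused))               # index of the diagnosed fault
```

The conflicting second report receives the lowest credibility weight, so the fused result still points to F1, avoiding the counterintuitive outcome of naively combining the raw evidence.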

  13. Elucidation of differences in N-glycosylation between different molecular weight forms of recombinant CLEC-2 by LC MALDI tandem MS.

    PubMed

    Zhou, Lei; Qian, Yifan; Zhang, Xingwang; Ruan, Yuanyuan; Ren, Shifang; Gu, Jianxin

    2015-01-30

C-type lectin-like receptor 2 (CLEC-2) is a newly identified receptor expressed on the platelet surface. It has been reported that CLEC-2 exists as a higher molecular weight (HMW) and a lower molecular weight (LMW) form, which share the same protein core but differ in glycans. The two forms appear to have different ligand-binding abilities, indicating that the differential glycosylation of CLEC-2 possibly produces functionally distinct glycoforms. This study aimed to develop a simple method to directly elucidate the N-glycosylation difference using a glycoproteomics approach. The off-line coupling of nano-LC with a MALDI-QIT-TOF mass spectrometer was shown to be capable of sensitive, direct elucidation of the glycosylation difference between HMW and LMW CLEC-2, simultaneously providing information about the oligosaccharide structures and the glycosylation sites. The results reveal that a specific glycosylation site, Asn 134, is differently glycosylated in the two forms, with complex-type bi-antennary, tri-antennary, and tetra-antennary N-linked fucosylated glycans identified at this site in the HMW form but not in the LMW form. The observed difference in glycosylation might provide new insights into the underlying mechanisms of the biological functions of CLEC-2. Because of its simplicity and sensitivity, the method explored in this work holds promise for directly elucidating N-glycosylation differences in target glycoproteins, even with small sample amounts.

  14. Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil.

    PubMed

    Chen, C W; Chen, D Z

    2001-11-01

Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode prior knowledge in neural networks to improve the fit precision and the prediction ability of the model. In this paper, for three-layer feedforward networks with a monotonicity constraint, the unconstrained method, Joerding's penalty function method, the interpolation method, and the constrained optimization method are analyzed first. Then two novel methods, the exponential weight method and the adaptive method, are proposed. These methods are applied to simulating the true boiling point curve of a crude oil under an increasing-monotonicity condition. The simulation results show that the network models trained by the novel methods approximate the actual process well. Finally, all these methods are discussed and compared with each other.
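One way an "exponential weight" construction can enforce monotonicity is to parameterize every effective weight as exp(v): all weights are then strictly positive, and with a monotone activation the network's output is guaranteed to increase with its input. The sketch below illustrates that idea with a random untrained network; it is a plausible reading of the technique, not necessarily the paper's exact formulation.

```python
import numpy as np

def monotone_net(x, v1, b1, v2, b2):
    """Three-layer feedforward net whose output is increasing in x.
    Input and output weights are exp(v), hence strictly positive;
    since tanh is monotone, the whole map x -> y is monotone."""
    w1 = np.exp(v1)                        # positive input-to-hidden weights
    w2 = np.exp(v2)                        # positive hidden-to-output weights
    h = np.tanh(np.outer(x, w1) + b1)      # hidden layer activations
    return h @ w2 + b2

# Random (untrained) parameters still yield a monotone curve.
rng = np.random.default_rng(1)
v1, b1 = rng.normal(size=4), rng.normal(size=4)
v2, b2 = rng.normal(size=4), 0.0
x = np.linspace(0.0, 1.0, 50)
y = monotone_net(x, v1, b1, v2, b2)
```

Training then optimizes the unconstrained parameters v1, v2, b1, b2 with ordinary gradient descent, so the monotonicity constraint is satisfied by construction rather than by penalty terms.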

  15. High resolution human diffusion tensor imaging using 2-D navigated multi-shot SENSE EPI at 7 Tesla

    PubMed Central

    Jeong, Ha-Kyu; Gore, John C.; Anderson, Adam W.

    2012-01-01

    The combination of parallel imaging with partial Fourier acquisition has greatly improved the performance of diffusion-weighted single-shot EPI and is the preferred method for acquisitions at low to medium magnetic field strength such as 1.5 or 3 Tesla. Increased off-resonance effects and reduced transverse relaxation times at 7 Tesla, however, generate more significant artifacts than at lower magnetic field strength and limit data acquisition. Additional acceleration of k-space traversal using a multi-shot approach, which acquires a subset of k-space data after each excitation, reduces these artifacts relative to conventional single-shot acquisitions. However, corrections for motion-induced phase errors are not straightforward in accelerated, diffusion-weighted multi-shot EPI because of phase aliasing. In this study, we introduce a simple acquisition and corresponding reconstruction method for diffusion-weighted multi-shot EPI with parallel imaging suitable for use at high field. The reconstruction uses a simple modification of the standard SENSE algorithm to account for shot-to-shot phase errors; the method is called Image Reconstruction using Image-space Sampling functions (IRIS). Using this approach, reconstruction from highly aliased in vivo image data using 2-D navigator phase information is demonstrated for human diffusion-weighted imaging studies at 7 Tesla. The final reconstructed images show submillimeter in-plane resolution with no ghosts and much reduced blurring and off-resonance artifacts. PMID:22592941
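The standard SENSE unfolding that IRIS extends can be sketched per pixel: at acceleration R = 2 each aliased pixel is a coil-weighted sum of two true pixels, recoverable by least squares when there are at least two coils. The example below uses synthetic, noiseless sensitivities; the shot-to-shot phase correction that distinguishes IRIS is omitted.

```python
import numpy as np

# Pixelwise SENSE unfolding sketch for acceleration R = 2.
rng = np.random.default_rng(2)
ncoils, npix = 4, 64
half = npix // 2                         # 32 aliased pixel locations
# Synthetic complex coil sensitivities and a synthetic true image.
S = rng.normal(size=(ncoils, npix)) + 1j * rng.normal(size=(ncoils, npix))
truth = rng.normal(size=npix) + 1j * rng.normal(size=npix)

# Aliased measurement: pixel i folds onto pixel i + npix/2 in each coil.
meas = S[:, :half] * truth[:half] + S[:, half:] * truth[half:]  # (ncoils, half)

# Unfold each aliased pair by solving a small least-squares problem.
recon = np.empty(npix, complex)
for i in range(half):
    A = np.column_stack([S[:, i], S[:, i + half]])   # coil encoding matrix
    sol, *_ = np.linalg.lstsq(A, meas[:, i], rcond=None)
    recon[i], recon[i + half] = sol
```

In the multi-shot setting, IRIS effectively augments this per-pixel system with navigator-derived phase terms for each shot, which is why the standard SENSE solve alone is insufficient there.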

  16. Adult Body Height Is a Good Predictor of Different Dimensions of Cognitive Function in Aged Individuals: A Cross-Sectional Study.

    PubMed

    Pereira, Vitor H; Costa, Patrício S; Santos, Nadine C; Cunha, Pedro G; Correia-Neves, Margarida; Palha, Joana A; Sousa, Nuno

    2016-01-01

    Background: Adult height, weight, and adiposity measures have been suggested by some studies to be predictors of depression, cognitive impairment, and dementia. However, the presence of confounding factors and the lack of a thorough neuropsychological evaluation in many of these studies have precluded a definitive conclusion about the influence of anthropometric measures in cognition and depression. In this study we aimed to assess the value of height, weight, and abdominal perimeter to predict cognitive impairment and depressive symptoms in aged individuals. Methods and Findings: Cross-sectional study performed between 2010 and 2012 in the Portuguese general community. A total of 1050 participants were included in the study and randomly selected from local area health authority registries. The cohort was representative of the general Portuguese population with respect to age (above 50 years of age) and gender. Cognitive function was assessed using a battery of tests grouped in two dimensions: general executive function and memory. Two-step hierarchical multiple linear regression models were conducted to determine the predictive value of anthropometric measures in cognitive performance and mood before and after correction for possible confounding factors (gender, age, school years, physical activity, alcohol consumption, and smoking habits). We found single associations of weight, height, body mass index, abdominal perimeter, and age with executive function, memory and depressive symptoms. However, when included in a predictive model adjusted for gender, age, school years, and lifestyle factors only height prevailed as a significant predictor of general executive function (β = 0.139; p < 0.001) and memory (β = 0.099; p < 0.05). No relation was found between mood and any of the anthropometric measures studied. 
Conclusions and Relevance: Height is an independent predictor of cognitive function in late life, and its effects on general executive function and memory are independent of age, weight, education level, gender, and lifestyle factors. Altogether, our data suggest that modulators of adult height during childhood may irreversibly contribute to cognitive function in adult life, and that height should be used in models to predict cognitive performance.
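The two-step hierarchical regression used in the study can be sketched as follows: fit a model with the confounders alone, add the anthropometric predictor, and read the R² increment as its unique contribution. All data below are synthetic stand-ins, not the study's cohort.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    tot = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tot

# Step 1 enters confounders; step 2 adds height (synthetic data).
rng = np.random.default_rng(3)
n = 500
age = rng.uniform(50, 85, n)
school = rng.uniform(0, 12, n)
height = rng.normal(165, 8, n)
cognition = -0.05 * age + 0.1 * school + 0.03 * height + rng.normal(0, 1, n)

step1 = np.column_stack([age, school])            # confounders only
step2 = np.column_stack([age, school, height])    # + height
delta_r2 = r_squared(step2, cognition) - r_squared(step1, cognition)
```

A positive `delta_r2` (here, a small but nonzero increment) is what supports calling height an independent predictor after adjustment.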

  17. Optimization of a GO2/GH2 Impinging Injector Element

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar

    2001-01-01

    An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) impinging injector element. The unlike impinging element, a fuel-oxidizer- fuel (F-O-F) triplet, is optimized in terms of design variables such as fuel pressure drop, (Delta)P(sub f), oxidizer pressure drop, (Delta)P(sub o), combustor length, L(sub comb), and impingement half-angle, alpha, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 163 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface which includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust to weight ratio. 
Finally, specific variable weights are further increased to illustrate the high marginal cost of realizing the last increment of injector performance and thruster weight.
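The composite response surface built from desirability functions can be sketched with the classic Derringer-Suich construction: map each response onto [0, 1] with a desirability function, then combine them by a weighted geometric mean, where unequal weights emphasize certain responses. The response names, ranges, and weights below are illustrative, not the study's actual values.

```python
import numpy as np

def desirability_larger_better(y, low, high, s=1.0):
    """'Larger is better' desirability: 0 below `low`, 1 above `high`,
    a power ramp in between (Derringer-Suich form)."""
    d = np.clip((y - low) / (high - low), 0.0, 1.0)
    return d ** s

def composite_desirability(ds, weights):
    """Weighted geometric mean of individual desirabilities."""
    ds, weights = np.asarray(ds, float), np.asarray(weights, float)
    return float(np.prod(ds ** (weights / weights.sum())))

# Example: ERE should be high; relative weight should be low, so we
# score 1 - W_rel as "larger is better" (values are illustrative).
d_ere = desirability_larger_better(0.97, 0.90, 1.00)
d_wrel = desirability_larger_better(1.0 - 0.4, 0.0, 1.0)
D_equal = composite_desirability([d_ere, d_wrel], [1, 1])
D_perf = composite_desirability([d_ere, d_wrel], [3, 1])  # emphasize ERE
```

Raising one response's weight pulls the composite score toward that response's desirability, which is how trade studies (e.g. component life versus thrust-to-weight) are expressed on the same surface.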

  18. An Experimental Weight Function Method for Stress Intensity Factor Calibration.

    DTIC Science & Technology

    1980-04-01

in accuracy to the ones obtained by Macha (Reference 10) for the laser interferometry technique. The values of KI from the interpolating polynomial...Measurement. Air Force Materials Laboratory, AFML-TR-74-75, July 1974. 10. D. E. Macha, W. N. Sharpe Jr., and A. F. Grandt Jr., A Laser Interferometry

  19. Predicting protein submitochondrial locations using a K-Nearest neighbor method based on the Bit-Score weighted euclidean distance

    USDA-ARS?s Scientific Manuscript database

    Mitochondria are essential subcellular organelles found in eukaryotic cells. Knowing information on a protein’s subcellular or sub subcellular location provides in-depth insights about the microenvironment where it interacts with other molecules and is crucial for inferring the protein’s function. T...
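A K-nearest-neighbor classifier with a weighted Euclidean distance, as the record's title describes, can be sketched as follows. The feature weights here are generic placeholders standing in for the bit-score-derived weights, and the submitochondrial location labels are hypothetical examples.

```python
import numpy as np
from collections import Counter

def weighted_knn_predict(query, X, labels, feature_weights, k=3):
    """KNN classification with a weighted Euclidean distance:
    d(q, x) = sqrt(sum_i w_i * (q_i - x_i)^2). Larger w_i makes
    feature i count more toward neighborhood structure."""
    diff = X - query
    dists = np.sqrt((feature_weights * diff ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]                 # indices of k nearest
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# Toy data: two classes separable along the heavily weighted first axis.
X = np.array([[0.0, 5.0], [0.2, -4.0], [0.1, 3.0],
              [1.0, 5.1], [1.2, -3.9], [0.9, 2.8]])
labels = ["matrix", "matrix", "matrix",
          "inner-membrane", "inner-membrane", "inner-membrane"]
w = np.array([10.0, 0.1])            # trust feature 0 far more than feature 1
pred = weighted_knn_predict(np.array([0.15, 4.9]), X, labels, w, k=3)
```

With uniform weights the noisy second feature would dominate the distances; the weighting lets the informative feature drive the neighborhood, which is the point of replacing the plain Euclidean metric.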

  20. Effects of conventional overground gait training and a gait trainer with partial body weight support on spatiotemporal gait parameters of patients after stroke

    PubMed Central

    Park, Byoung-Sun; Kim, Mee-Young; Lee, Lim-Kyu; Yang, Seung-Min; Lee, Won-Deok; Noh, Ji-Woong; Shin, Yong-Sub; Kim, Ju-Hyun; Lee, Jeong-Uk; Kwak, Taek-Yong; Lee, Tae-Hyun; Kim, Ju-Young; Kim, Junghwan

    2015-01-01

    [Purpose] The purpose of this study was to confirm the effects of both conventional overground gait training (CGT) and a gait trainer with partial body weight support (GTBWS) on spatiotemporal gait parameters of patients with hemiparesis following chronic stroke. [Subjects and Methods] Thirty stroke patients were alternately assigned to one of two treatment groups, and both groups underwent CGT and GTBWS. [Results] The functional ambulation classification on the affected side improved significantly in the CGT and GTBWS groups. Walking speed also improved significantly in both groups. [Conclusion] These results suggest that the GTBWS in company with CGT may be, in part, an effective method of gait training for restoring gait ability in patients after a stroke. PMID:26157272
