Sample records for factor method iefm

  1. The ionospheric eclipse factor method (IEFM) and its application to determining the ionospheric delay for GPS

    NASA Astrophysics Data System (ADS)

    Yuan, Y.; Tscherning, C. C.; Knudsen, P.; Xu, G.; Ou, J.

    2008-01-01

A new method for modeling the ionospheric delay using global positioning system (GPS) data is proposed, called the ionospheric eclipse factor method (IEFM). It is based on establishing a concept referred to as the ionospheric eclipse factor (IEF) λ of the ionospheric pierce point (IPP) and the IEF's influence factor (IFF) λ̄. The IEF can be used to make a relatively precise distinction between ionospheric daytime and nighttime, whereas the IFF is advantageous for describing the IEF's variations with day, month, season and year, associated with seasonal variations of the total electron content (TEC) of the ionosphere. By combining λ and λ̄ with the local time t of the IPP, the IEFM can precisely distinguish between ionospheric daytime and nighttime, as well as efficiently combine them during different seasons or months of the year at the IPP. The IEFM-based ionospheric delay estimates are validated by combining an absolute positioning mode with several ionospheric delay correction models or algorithms, using GPS data at an International GNSS Service (IGS) station (WTZR). Our results indicate that the IEFM may further improve ionospheric delay modeling using GPS data.

  2. Soliton solutions of the quantum Zakharov-Kuznetsov equation which arises in quantum magneto-plasmas

    NASA Astrophysics Data System (ADS)

    Sindi, Cevat Teymuri; Manafian, Jalil

    2017-02-01

In this paper, we extended the improved tan(φ/2)-expansion method (ITEM) and the generalized G'/G-expansion method (GGEM) proposed by Manafian and Fazli (Opt. Quantum Electron. 48, 413 (2016)) to construct new types of soliton wave solutions of nonlinear partial differential equations (NPDEs). Moreover, we use the improved Exp-function method (IEFM) proposed by Jahani and Manafian (Eur. Phys. J. Plus 131, 54 (2016)) for obtaining solutions of NPDEs. The merit of the three presented methods is that they can find further solutions to the considered problems, including soliton, periodic, kink and kink-singular wave solutions. This paper studies the quantum Zakharov-Kuznetsov (QZK) equation by the aid of the improved tan(φ/2)-expansion method, the generalized G'/G-expansion method and the improved Exp-function method. Moreover, the 1-soliton solution of the modified QZK equation with power law nonlinearity is obtained by the aid of the traveling wave hypothesis, with the necessary constraints in place for the existence of the soliton. Comparing our new results with those of Ebadi et al. (Astrophys. Space Sci. 341, 507 (2012)), obtained with the G'/G-expansion, exp-function and modified F-expansion methods, shows that our results give further solutions. Finally, these solutions might play an important role in engineering, physics and applied mathematics.

  3. Determination of Slope Safety Factor with Analytical Solution and Searching Critical Slip Surface with Genetic-Traversal Random Method

    PubMed Central

    2014-01-01

In current practice, to determine the safety factor of a slope with a two-dimensional circular potential failure surface, one of the search methods for the critical slip surface is the Genetic Algorithm (GA), while the method to calculate the slope safety factor is Fellenius' slices method. However, GA needs to be validated with more numerical tests, and Fellenius' slices method is, like the finite element method, only approximate. This paper proposes a new way to determine the minimum slope safety factor: the safety factor is computed with an analytical solution, and the critical slip surface is searched with the Genetic-Traversal Random Method. The analytical solution is more accurate than Fellenius' slices method. The Genetic-Traversal Random Method uses random picks to realize mutation. A computer program that automates the search was developed for the Genetic-Traversal Random Method. Comparison with other methods, such as the Slope/W software, indicates that the Genetic-Traversal Random Search Method can give a very low safety factor, about half of that from the other methods. However, the minimum safety factor obtained with the Genetic-Traversal Random Search Method is very close to the lower-bound solutions of the slope safety factor given by the Ansys software. PMID:24782679
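For reference, the Fellenius (ordinary) slices formula that the paper's analytical solution is compared against can be sketched as follows. The slice data are hypothetical, and this is a minimal illustration of the classic formula, not the paper's analytical solution:

```python
import math

def fellenius_fs(slices, c, phi_deg):
    # Ordinary (Fellenius) method of slices for a circular slip surface:
    #   FS = sum(c*l + W*cos(a)*tan(phi)) / sum(W*sin(a))
    # slices: (W, a, l) = slice weight, base inclination a (degrees),
    # base length l; c = soil cohesion, phi_deg = friction angle (degrees).
    tan_phi = math.tan(math.radians(phi_deg))
    resisting = sum(c * l + W * math.cos(math.radians(a)) * tan_phi
                    for W, a, l in slices)
    driving = sum(W * math.sin(math.radians(a)) for W, a, l in slices)
    return resisting / driving
```

A random search scheme such as the Genetic-Traversal Random Method would evaluate a safety factor like this over many randomly generated circular slip surfaces and keep the minimum.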

  4. Factorization method of quadratic template

    NASA Astrophysics Data System (ADS)

    Kotyrba, Martin

    2017-07-01

Multiplication of two numbers is a one-way function in mathematics. Any attempt to decompose the product back into its factors is called factorization. There are many methods, such as Fermat's factorization, Dixon's method, the quadratic sieve and GNFS, which use sophisticated techniques for fast factorization. All the above methods use the same basic formula, differing only in how it is applied. This article discusses a newly designed factorization method. Efficient implementation of this method in programs is not the aim here; the article only presents the method and clearly defines its properties.

  5. New Internet search volume-based weighting method for integrating various environmental impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr

Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts as a single index. Weighting factors should be based on society's preferences. However, most previous studies consider only the opinions of a limited group of people. Thus, this research proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts, using the Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new method were compared with existing weighting factors. The resulting Pearson's correlation coefficients between the new and existing weighting factors ranged from 0.8743 to 0.9889. It turned out that the new weighting method presents reasonable weighting factors. It also requires less time and lower cost than existing methods, and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining weighting factors. - Highlights: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects public opinion using Internet search volume. • The correlation coefficient between new and existing weighting factors is over 0.87. • The new weighting method can present reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.
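The two core computations of such a scheme can be sketched in a few lines. The impact-category names and search volumes below are hypothetical, and the validation step is just Pearson's correlation coefficient as used in the abstract:

```python
def search_volume_weights(volumes):
    # Normalize raw Internet search volumes for impact-related terms into
    # weighting factors that sum to 1.
    total = sum(volumes.values())
    return {impact: v / total for impact, v in volumes.items()}

def pearson(x, y):
    # Pearson's correlation coefficient between two equal-length sequences,
    # e.g. new vs. existing weighting factors.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

In use, the weights from `search_volume_weights` would be correlated against an existing weighting set to check agreement, mirroring the 0.87-0.99 coefficients reported above.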

  6. Using Horn's Parallel Analysis Method in Exploratory Factor Analysis for Determining the Number of Factors

    ERIC Educational Resources Information Center

    Çokluk, Ömay; Koçak, Duygu

    2016-01-01

    In this study, the number of factors obtained from parallel analysis, a method used for determining the number of factors in exploratory factor analysis, was compared to that of the factors obtained from eigenvalue and scree plot--two traditional methods for determining the number of factors--in terms of consistency. Parallel analysis is based on…
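The classic parallel analysis procedure, in its common mean-eigenvalue form (which may differ in detail from the variants compared in the study), can be sketched as:

```python
import numpy as np

def parallel_analysis(data, n_sims=200, seed=0):
    # Horn's parallel analysis: retain as many leading factors as there are
    # eigenvalues of the sample correlation matrix that exceed the mean
    # eigenvalues obtained from random normal data of the same shape.
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.zeros(p)
    for _ in range(n_sims):
        sim = rng.standard_normal((n, p))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    rand /= n_sims
    k = 0
    while k < p and obs[k] > rand[k]:
        k += 1
    return k
```

Unlike the eigenvalue-greater-than-one rule, the threshold here adapts to the sample size and number of variables, which is the source of the consistency advantage the study examines.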

  7. Comparisons of Exploratory and Confirmatory Factor Analysis.

    ERIC Educational Resources Information Center

    Daniel, Larry G.

    Historically, most researchers conducting factor analysis have used exploratory methods. However, more recently, confirmatory factor analytic methods have been developed that can directly test theory either during factor rotation using "best fit" rotation methods or during factor extraction, as with the LISREL computer programs developed…

  8. Review on pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work.

    PubMed

    Rahman, Mohd Nasrull Abdol; Mohamad, Siti Shafika

    2017-01-01

Computer work is associated with musculoskeletal disorders (MSDs). Several methods have been developed to assess the risk factors of computer work related to MSDs. This review aims to give an overview of the techniques currently available for pen-and-paper-based observational assessment of ergonomic risk factors of computer work. We searched an electronic database for materials from 1992 until 2015. The selected methods focused on computer work, pen-and-paper observational methods, office risk factors and musculoskeletal disorders. This review was developed to assess the risk factors covered by, and the reliability and validity of, pen-and-paper observational methods associated with computer work. Two evaluators independently carried out this review. Seven observational methods used to assess exposure to office risk factors for work-related musculoskeletal disorders were identified. The risk factors covered by current pen-and-paper-based observational tools were postures, office components, force and repetition. Of the seven methods, only five had been tested for reliability; these were proven reliable and were rated as moderate to good. For validity, only four of the seven methods had been tested, and the results were moderate. Many observational tools already exist, but no single tool appears to cover all of the risk factors, including working posture, office components, force, repetition and office environment, at office workstations and in computer work. Although the most important factor in developing a tool is proper validation of the exposure assessment techniques, some existing observational methods have not been tested for reliability and validity. Furthermore, this review could provide researchers with ways to improve pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work.

  9. Calculating the nutrient composition of recipes with computers.

    PubMed

    Powers, P M; Hoover, L W

    1989-02-01

    The objective of this research project was to compare the nutrient values computed by four commonly used computerized recipe calculation methods. The four methods compared were the yield factor, retention factor, summing, and simplified retention factor methods. Two versions of the summing method were modeled. Four pork entrée recipes were selected for analysis: roast pork, pork and noodle casserole, pan-broiled pork chops, and pork chops with vegetables. Assumptions were made about changes expected to occur in the ingredients during preparation and cooking. Models were designed to simulate the algorithms of the calculation methods using a microcomputer spreadsheet software package. Identical results were generated in the yield factor, retention factor, and summing-cooked models for roast pork. The retention factor and summing-cooked models also produced identical results for the recipe for pan-broiled pork chops. The summing-raw model gave the highest value for water in all four recipes and the lowest values for most of the other nutrients. A superior method or methods was not identified. However, on the basis of the capabilities provided with the yield factor and retention factor methods, more serious consideration of these two methods is recommended.
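The retention factor and summing-raw calculations compared above can be sketched as follows. The ingredient numbers are hypothetical; real systems draw nutrient retention factors for each cooking method from published tables:

```python
def retention_factor_method(ingredients):
    # Retention factor method: for each ingredient, compute the nutrient
    # amount from raw composition data (per 100 g) and the grams used,
    # then apply a cooking-specific nutrient retention factor.
    total = 0.0
    for per_100g, grams, retention in ingredients:
        total += per_100g * grams / 100.0 * retention
    return total

def summing_raw_method(ingredients):
    # Summing method on raw data: the same sum with no retention applied,
    # which is why it tends to overstate heat-labile nutrients.
    return sum(per_100g * grams / 100.0 for per_100g, grams, _ in ingredients)
```

For example, 200 g of pork at a hypothetical 0.9 mg thiamin per 100 g with a 0.75 retention factor yields 1.35 mg by the retention method versus 1.8 mg by the raw summing method.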

  10. Global sensitivity analysis for urban water quality modelling: Terminology, convergence and comparison of different methods

    NASA Astrophysics Data System (ADS)

    Vanrolleghem, Peter A.; Mannina, Giorgio; Cosenza, Alida; Neumann, Marc B.

    2015-03-01

Sensitivity analysis represents an important step in improving the understanding and use of environmental models. Indeed, by means of global sensitivity analysis (GSA), modellers may identify both important (factor prioritisation) and non-influential (factor fixing) model factors. No general rule has yet been defined for verifying the convergence of GSA methods. In order to fill this gap, this paper presents a convergence analysis of three widely used GSA methods (SRC, Extended FAST and Morris screening) for an urban drainage stormwater quality-quantity model. After convergence was achieved, the results of each method were compared. In particular, a discussion of the peculiarities, applicability, and reliability of the three methods is presented. Moreover, a graphical, Venn-diagram-based classification scheme and a precise terminology for better identifying important, interacting and non-influential factors for each method are proposed. In terms of convergence, it was shown that sensitivity indices related to factors of the quantity model achieve convergence faster. Results for the Morris screening method deviated considerably from the other methods. Factors related to the quality model require a much higher number of simulations than suggested in the literature for achieving convergence with this method. In fact, the results show that the term "screening" is improperly used, as the method may exclude important factors from further analysis. Moreover, for the presented application the convergence analysis shows more stable sensitivity coefficients for the Extended FAST method than for SRC and Morris screening. Substantial agreement in terms of factor fixing was found between the Morris screening and Extended FAST methods. In general, the water-quality-related factors exhibited more important interactions than factors related to water quantity. Furthermore, in contrast to water quantity model outputs, water quality model outputs were found to be characterised by high non-linearity.
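Of the three GSA methods compared, SRC (standardized regression coefficients) is the simplest to illustrate. A minimal sketch, assuming a near-linear model, which is exactly what the sum of squared SRCs (approximately R squared) is used to check:

```python
import numpy as np

def src_indices(X, y):
    # Standardized Regression Coefficients: regress the model output y on the
    # factor sample matrix X (columns = factors) with an intercept, then scale
    # each slope by std(x_i)/std(y). |SRC_i| ranks factor importance; the sum
    # of SRC_i**2 approximates R^2 and validates the linearity assumption.
    A = np.column_stack([np.ones(len(y)), X])
    slopes = np.linalg.lstsq(A, y, rcond=None)[0][1:]
    return slopes * X.std(axis=0) / y.std()
```

Extended FAST and Morris screening require structured sampling designs rather than a single regression, which is one reason their convergence behaviour (studied above) differs.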

  11. Searching for transcription factor binding sites in vector spaces

    PubMed Central

    2012-01-01

    Background Computational approaches to transcription factor binding site identification have been actively researched in the past decade. Learning from known binding sites, new binding sites of a transcription factor in unannotated sequences can be identified. A number of search methods have been introduced over the years. However, one can rarely find one single method that performs the best on all the transcription factors. Instead, to identify the best method for a particular transcription factor, one usually has to compare a handful of methods. Hence, it is highly desirable for a method to perform automatic optimization for individual transcription factors. Results We proposed to search for transcription factor binding sites in vector spaces. This framework allows us to identify the best method for each individual transcription factor. We further introduced two novel methods, the negative-to-positive vector (NPV) and optimal discriminating vector (ODV) methods, to construct query vectors to search for binding sites in vector spaces. Extensive cross-validation experiments showed that the proposed methods significantly outperformed the ungapped likelihood under positional background method, a state-of-the-art method, and the widely-used position-specific scoring matrix method. We further demonstrated that motif subtypes of a TF can be readily identified in this framework and two variants called the k NPV and k ODV methods benefited significantly from motif subtype identification. Finally, independent validation on ChIP-seq data showed that the ODV and NPV methods significantly outperformed the other compared methods. Conclusions We conclude that the proposed framework is highly flexible. It enables the two novel methods to automatically identify a TF-specific subspace to search for binding sites. Implementations are available as source code at: http://biogrid.engr.uconn.edu/tfbs_search/. PMID:23244338
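The NPV construction can be sketched in vector form. The two-dimensional vectors below are toy stand-ins for whatever sequence encoding the framework uses (the abstract does not specify the encoding):

```python
import numpy as np

def npv_query(positives, negatives):
    # Negative-to-positive vector: points from the centroid of background
    # (negative) sequence vectors toward the centroid of known binding-site
    # (positive) vectors.
    return np.mean(positives, axis=0) - np.mean(negatives, axis=0)

def rank_candidates(candidates, query):
    # Score candidate sequence vectors by projection onto the query vector;
    # higher scores are more binding-site-like.
    return np.asarray(candidates) @ query
```

The ODV method would replace this simple centroid difference with a discriminatively optimized query vector, but the search step (projection and ranking in the vector space) is the same.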

  12. Global self-esteem and method effects: competing factor structures, longitudinal invariance, and response styles in adolescents.

    PubMed

    Urbán, Róbert; Szigeti, Réka; Kökönyei, Gyöngyi; Demetrovics, Zsolt

    2014-06-01

The Rosenberg Self-Esteem Scale (RSES) is a widely used measure for assessing self-esteem, but its factor structure is debated. Our goals were to compare 10 alternative models for the RSES and to quantify and predict the method effects. This sample involves two waves (N = 2,513 9th-grade and 2,370 10th-grade students) from five waves of a school-based longitudinal study. The RSES was administered in each wave. The global self-esteem factor with two latent method factors yielded the best fit to the data. The global factor explained a large amount of the common variance (61% and 46%); however, a relatively large proportion of the common variance was attributed to the negative method factor (34% and 41%), and a small proportion of the common variance was explained by the positive method factor (5% and 13%). We conceptualized the method effect as a response style and found that being a girl and having a higher number of depressive symptoms were associated with both low self-esteem and negative response style, as measured by the negative method factor. Our study supported the one global self-esteem construct and quantified the method effects in adolescents.

  13. Global self-esteem and method effects: competing factor structures, longitudinal invariance and response styles in adolescents

    PubMed Central

    Urbán, Róbert; Szigeti, Réka; Kökönyei, Gyöngyi; Demetrovics, Zsolt

    2013-01-01

The Rosenberg Self-Esteem Scale (RSES) is a widely used measure for assessing self-esteem, but its factor structure is debated. Our goals were to compare 10 alternative models for the RSES and to quantify and predict the method effects. This sample involves two waves (N = 2,513 ninth-grade and 2,370 tenth-grade students) from five waves of a school-based longitudinal study. The RSES was administered in each wave. The global self-esteem factor with two latent method factors yielded the best fit to the data. The global factor explained a large amount of the common variance (61% and 46%); however, a relatively large proportion of the common variance was attributed to the negative method factor (34% and 41%), and a small proportion of the common variance was explained by the positive method factor (5% and 13%). We conceptualized the method effect as a response style, and found that being a girl and having a higher number of depressive symptoms were associated with both low self-esteem and negative response style, as measured by the negative method factor. Our study supported the one global self-esteem construct and quantified the method effects in adolescents. PMID:24061931

  14. The Hull Method for Selecting the Number of Common Factors

    ERIC Educational Resources Information Center

    Lorenzo-Seva, Urbano; Timmerman, Marieke E.; Kiers, Henk A. L.

    2011-01-01

    A common problem in exploratory factor analysis is how many factors need to be extracted from a particular data set. We propose a new method for selecting the number of major common factors: the Hull method, which aims to find a model with an optimal balance between model fit and number of parameters. We examine the performance of the method in an…

  15. Parameter Accuracy in Meta-Analyses of Factor Structures

    ERIC Educational Resources Information Center

    Gnambs, Timo; Staufenbiel, Thomas

    2016-01-01

    Two new methods for the meta-analysis of factor loadings are introduced and evaluated by Monte Carlo simulations. The direct method pools each factor loading individually, whereas the indirect method synthesizes correlation matrices reproduced from factor loadings. The results of the two simulations demonstrated that the accuracy of…

  16. Unsupervised Bayesian linear unmixing of gene expression microarrays.

    PubMed

    Bazot, Cécile; Dobigeon, Nicolas; Tourneret, Jean-Yves; Zaas, Aimee K; Ginsburg, Geoffrey S; Hero, Alfred O

    2013-03-19

This paper introduces a new constrained model and the corresponding algorithm, called unsupervised Bayesian linear unmixing (uBLU), to identify biological signatures from high-dimensional assays like gene expression microarrays. The basis for uBLU is a Bayesian model in which the data samples are represented as an additive mixture of random positive gene signatures, called factors, with random positive mixing coefficients, called factor scores, that specify the relative contribution of each signature to a specific sample. The particularity of the proposed method is that uBLU constrains the factor loadings to be non-negative and the factor scores to be probability distributions over the factors. Furthermore, it also provides estimates of the number of factors. A Gibbs sampling strategy is adopted to generate random samples according to the posterior distribution of the factors, factor scores, and number of factors; these samples are then used to estimate all the unknown parameters. Firstly, the proposed uBLU method is applied to several simulated datasets with known ground truth and compared with previous factor decomposition methods, such as principal component analysis (PCA), non-negative matrix factorization (NMF), Bayesian factor regression modeling (BFRM), and the gradient-based algorithm for general matrix factorization (GB-GMF). Secondly, we illustrate the application of uBLU on a real, time-evolving gene expression dataset from a recent viral challenge study in which individuals were inoculated with influenza A/H3N2/Wisconsin. The results obtained on the synthetic and real data illustrate the accuracy of the proposed uBLU method, which significantly outperforms the other factor decomposition methods (PCA, NMF, BFRM, and GB-GMF) on the data sets considered here. The uBLU method identifies an inflammatory component closely associated with clinical symptom scores collected during the study. Using a constrained model allows recovery of all the inflammatory genes in a single factor.

  17. A simple method for determining stress intensity factors for a crack in bi-material interface

    NASA Astrophysics Data System (ADS)

    Morioka, Yuta

Because of the violently oscillating nature of the stress and displacement fields near the crack tip, it is difficult to obtain stress intensity factors for a crack between two dissimilar media. For a crack in a homogeneous medium, it is common practice to find stress intensity factors through strain energy release rates. However, individual strain energy release rates do not exist for a bi-material interface crack, so alternative methods are needed to evaluate the stress intensity factors. Several methods have been proposed in the past, but they involve mathematical complexity and sometimes require additional finite element analysis. The purpose of this research is to develop a simple method to find stress intensity factors for bi-material interface cracks. A finite-element-based projection method is proposed. It is shown that the projection method yields very accurate stress intensity factors for a crack in isotropic and anisotropic bi-material interfaces. The projection method is also compared to the displacement ratio method and the energy method proposed by other authors. Through this comparison it is found that the projection method is much simpler to apply, with accuracy comparable to that of the displacement ratio method.

  18. Calculation Method of Lateral Strengths and Ductility Factors of Constructions with Shear Walls of Different Ductility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamaguchi, Nobuyoshi; Nakao, Masato; Murakami, Masahide

    2008-07-08

For seismic design, ductility-related force modification factors are named the R factor in the Uniform Building Code of the U.S., the q factor in Eurocode 8 and the Ds factor (the inverse of R) in the Japanese Building Code. These ductility-related force modification factors appear in those codes for each type of shear element. Some constructions use various types of shear walls that have different ductility, especially after retrofit or re-strengthening. In these cases, engineers struggle to decide the force modification factors of the constructions. To solve this problem, a new method to calculate the lateral strengths of stories for simple shear wall systems is proposed and named the 'Stiffness-Potential Energy Addition Method' in this paper. This method uses two design lateral strengths for each type of shear wall, one in the damage limit state and one in the safety limit state. The lateral strengths of stories in both limit states are calculated from these two design lateral strengths for each type of shear wall. The calculated strengths have the same quality as values obtained by the strength addition method using many steps of load-deformation data of shear walls. A new method to calculate ductility factors, based on the new method for story lateral strengths, is also proposed in this paper. It can solve the problem of obtaining ductility factors of stories with shear walls of different ductility.

  19. Method to determine transcriptional regulation pathways in organisms

    DOEpatents

    Gardner, Timothy S.; Collins, James J.; Hayete, Boris; Faith, Jeremiah

    2012-11-06

    The invention relates to computer-implemented methods and systems for identifying regulatory relationships between expressed regulating polypeptides and targets of the regulatory activities of such regulating polypeptides. More specifically, the invention provides a new method for identifying regulatory dependencies between biochemical species in a cell. In particular embodiments, provided are computer-implemented methods for identifying a regulatory interaction between a transcription factor and a gene target of the transcription factor, or between a transcription factor and a set of gene targets of the transcription factor. Further provided are genome-scale methods for predicting regulatory interactions between a set of transcription factors and a corresponding set of transcriptional target substrates thereof.

  20. A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods

    ERIC Educational Resources Information Center

    Ritter, Nicola L.

    2012-01-01

    Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…

  1. Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark

    2009-01-01

    High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
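The core of IRDM is converting a measured decay rate into a loss factor. A minimal sketch using the standard relation eta = 2.2/(f*T60), equivalently eta = DR/(27.3*f) with DR the decay rate in dB/s; the envelope data in the example are synthetic, not from the paper's panel model:

```python
import numpy as np

def loss_factor_irdm(t, envelope_db, freq_hz):
    # Fit a line to the decaying log-envelope (in dB) of the band-filtered
    # impulse response; the decay rate DR (dB/s) gives the band loss factor
    # via eta = DR / (27.3 * f).
    slope = np.polyfit(t, envelope_db, 1)[0]   # dB per second, negative
    decay_rate = -slope
    return decay_rate / (27.3 * freq_hz)
```

In practice the envelope is often obtained from a Hilbert transform or Schroeder backward integration of each band-filtered response, and the fit range must be chosen to avoid the noise floor, which is part of what an automated IRDM procedure has to handle.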

  2. Comparisons of Four Methods for Estimating a Dynamic Factor Model

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.

    2008-01-01

    Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…

  3. Factor Retention in Exploratory Factor Analysis: A Comparison of Alternative Methods.

    ERIC Educational Resources Information Center

    Mumford, Karen R.; Ferron, John M.; Hines, Constance V.; Hogarty, Kristine Y.; Kromrey, Jeffery D.

    This study compared the effectiveness of 10 methods of determining the number of factors to retain in exploratory common factor analysis. The 10 methods included the Kaiser rule and a modified Kaiser criterion, 3 variations of parallel analysis, 4 regression-based variations of the scree procedure, and the minimum average partial procedure. The…

  4. Determining the Number of Factors in P-Technique Factor Analysis

    ERIC Educational Resources Information Center

    Lo, Lawrence L.; Molenaar, Peter C. M.; Rovine, Michael

    2017-01-01

    Determining the number of factors is a critical first step in exploratory factor analysis. Although various criteria and methods for determining the number of factors have been evaluated in the usual between-subjects R-technique factor analysis, there is still question of how these methods perform in within-subjects P-technique factor analysis. A…

  5. Positive semidefinite tensor factorizations of the two-electron integral matrix for low-scaling ab initio electronic structure.

    PubMed

    Hoy, Erik P; Mazziotti, David A

    2015-08-14

    Tensor factorization of the 2-electron integral matrix is a well-known technique for reducing the computational scaling of ab initio electronic structure methods toward that of Hartree-Fock and density functional theories. The simplest factorization that maintains the positive semidefinite character of the 2-electron integral matrix is the Cholesky factorization. In this paper, we introduce a family of positive semidefinite factorizations that generalize the Cholesky factorization. Using an implementation of the factorization within the parametric 2-RDM method [D. A. Mazziotti, Phys. Rev. Lett. 101, 253002 (2008)], we study several inorganic molecules, alkane chains, and potential energy curves and find that this generalized factorization retains the accuracy and size extensivity of the Cholesky factorization, even in the presence of multi-reference correlation. The generalized family of positive semidefinite factorizations has potential applications to low-scaling ab initio electronic structure methods that treat electron correlation with a computational cost approaching that of the Hartree-Fock method or density functional theory.
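The Cholesky factorization that this family generalizes can be sketched as a greedy pivoted decomposition of a positive semidefinite matrix. This is a generic numerical illustration of the baseline technique, not the parametric 2-RDM implementation:

```python
import numpy as np

def cholesky_vectors(M, tol=1e-10):
    # Greedy pivoted Cholesky of a positive semidefinite matrix: repeatedly
    # peel off a rank-1 term built from the largest remaining diagonal entry
    # until the residual diagonal drops below tol. Returns L such that
    # M ~= L @ L.T using only about rank(M) columns, which is the source of
    # the computational savings for the 2-electron integral matrix.
    R = np.array(M, dtype=float)
    cols = []
    for _ in range(len(R)):
        d = np.diag(R)
        j = int(np.argmax(d))
        if d[j] < tol:
            break
        v = R[:, j] / np.sqrt(d[j])
        cols.append(v)
        R = R - np.outer(v, v)
    return np.column_stack(cols)
```

Each retained column plays the role of a Cholesky vector; the paper's generalized factorizations keep this positive semidefinite structure while relaxing the specific triangular form.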

  6. A rapid method to visualize von Willebrand factor multimers by using agarose gel electrophoresis, immunolocalization and luminographic detection.

    PubMed

    Krizek, D R; Rick, M E

    2000-03-15

A highly sensitive and rapid clinical method for visualizing the multimeric structure of von Willebrand Factor in plasma and platelets is described. The method uses submerged horizontal agarose gel electrophoresis, followed by transfer of the von Willebrand Factor onto a polyvinylidene fluoride membrane, and immunolocalization and luminographic visualization of the von Willebrand Factor multimeric pattern. This method distinguishes type 1 from types 2A and 2B von Willebrand disease, allowing timely evaluation and classification of von Willebrand Factor in patient plasma. It also allows visualization of the unusually high molecular weight multimers present in platelets. The method has several major advantages, including rapid processing, simplicity of gel preparation, high sensitivity to low concentrations of von Willebrand Factor, and elimination of radioactivity.

  7. COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R826238)

    EPA Science Inventory

    This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard methods that we ...

  8. The Empirical Verification of an Assignment of Items to Subtests: The Oblique Multiple Group Method versus the Confirmatory Common Factor Method

    ERIC Educational Resources Information Center

    Stuive, Ilse; Kiers, Henk A. L.; Timmerman, Marieke E.; ten Berge, Jos M. F.

    2008-01-01

    This study compares two confirmatory factor analysis methods on their ability to verify whether correct assignments of items to subtests are supported by the data. The confirmatory common factor (CCF) method is used most often and defines nonzero loadings so that they correspond to the assignment of items to subtests. Another method is the oblique…

  9. Fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and the probabilities of trends of fuzzy logical relationships.

    PubMed

    Chen, Shyi-Ming; Chen, Shen-Wen

    2015-03-01

    In this paper, we present a new method for fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and the probabilities of trends of fuzzy-trend logical relationships. First, the proposed method fuzzifies the historical training data of the main factor and the secondary factor into fuzzy sets, respectively, to form two-factors second-order fuzzy logical relationships. Next, it groups the obtained relationships into two-factors second-order fuzzy-trend logical relationship groups. It then calculates the probability of the "down-trend," the probability of the "equal-trend" and the probability of the "up-trend" of the two-factors second-order fuzzy-trend logical relationships in each group. Finally, it performs the forecasting based on these trend probabilities. We also apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and the NTD/USD exchange rates. The experimental results show that the proposed method outperforms the existing methods.
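A minimal sketch of the trend-probability step described above, assuming the trends within one fuzzy-trend logical relationship group have already been labeled "down"/"equal"/"up". The group data are hypothetical; this is not the authors' full TAIEX pipeline:

```python
from collections import Counter

def trend_probabilities(trends):
    """Empirical probabilities of the 'down', 'equal' and 'up' trends
    within one fuzzy-trend logical relationship group."""
    counts = Counter(trends)
    total = len(trends)
    return {t: counts.get(t, 0) / total for t in ("down", "equal", "up")}

# hypothetical trend labels observed in one relationship group
group = ["up", "up", "down", "equal", "up"]
probs = trend_probabilities(group)
```

The forecast step would then weight the candidate trend outcomes by these probabilities.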

  10. Exploratory factor analysis in Rehabilitation Psychology: a content analysis.

    PubMed

    Roberson, Richard B; Elliott, Timothy R; Chang, Jessica E; Hill, Jessica N

    2014-11-01

    Our objective was to examine the use and quality of exploratory factor analysis (EFA) in articles published in Rehabilitation Psychology. Trained raters examined 66 separate exploratory factor analyses in 47 articles published between 1999 and April 2014. The raters recorded the aim of the EFAs, the distributional statistics, sample size, factor retention method(s), extraction and rotation method(s), and whether the pattern coefficients, structure coefficients, and the matrix of association were reported. The primary use of the EFAs was scale development, but the most widely used extraction and rotation method was principal component analysis, with varimax rotation. When determining how many factors to retain, multiple methods (e.g., scree plot, parallel analysis) were used most often. Many articles did not report enough information to allow for the duplication of their results. EFA relies on authors' choices (e.g., factor retention rules, extraction and rotation methods), and few articles adhered to all of the best practices. The current findings are compared to other empirical investigations into the use of EFA in published research. Recommendations for improving EFA reporting practices in rehabilitation psychology research are provided.

  11. Proposal for a recovery prediction method for patients affected by acute mediastinitis

    PubMed Central

    2012-01-01

    Background An attempt to find a method of predicting death risk in patients affected by acute mediastinitis; no such tool for this serious disease is described in the available literature. Methods The study comprised 44 consecutive cases of acute mediastinitis. General anamnesis and biochemical data were included. Factor analysis was used to extract the risk characteristic for the patients. The most valuable results were obtained for 8 parameters, which were selected for further statistical analysis (all collected within a few hours of admission). Three factors reached an Eigenvalue >1. The clinical explanations of these combined statistical factors are: Factor1 - proteinic status (serum total protein, albumin, and hemoglobin level), Factor2 - inflammatory status (white blood cells, CRP, procalcitonin), and Factor3 - general risk (age, number of coexisting diseases). Threshold values of the prediction factors were estimated by means of statistical analysis (factor analysis, Statgraphics Centurion XVI). Results The final prediction result for a patient is constructed as a simultaneous evaluation of all factor scores. A high probability of death should be predicted if the value of Factor1 decreases with a simultaneous increase of Factors 2 and 3. The diagnostic power of the proposed method was found to be high [sensitivity = 90%, specificity = 64%]; for Factor1 [SNC = 87%, SPC = 79%], for Factor2 [SNC = 87%, SPC = 50%], and for Factor3 [SNC = 73%, SPC = 71%]. Conclusion The proposed prediction method seems to be a useful emergency signal during acute mediastinitis control in affected patients. PMID:22574625

  12. An analytically based numerical method for computing view factors in real urban environments

    NASA Astrophysics Data System (ADS)

    Lee, Doo-Il; Woo, Ju-Wan; Lee, Sang-Hyun

    2018-01-01

    A view factor is an important morphological parameter used in parameterizing the in-canyon radiative energy exchange process as well as in characterizing local climate over urban environments. For realistic representation of the in-canyon radiative processes, a complete set of view factors at the horizontal and vertical surfaces of urban facets is required. Various analytical and numerical methods have been suggested to determine view factors for urban environments, but most provide only the sky-view factor at the ground level of a specific location or assume a simplified morphology of complex urban environments. In this study, a numerical method is presented that can determine the sky-view factors (ψ_ga and ψ_wa) and wall-view factors (ψ_gw and ψ_ww) at the horizontal and vertical surfaces of real urban morphology, derived from an analytical formulation of the view factor between two blackbody surfaces of arbitrary geometry. The established numerical method is validated against analytical sky-view factor estimates for ideal street canyon geometries, showing good accuracy with errors of less than 0.2%. Using a three-dimensional building database, the numerical method is also demonstrated to be applicable in determining the sky-view factors at the horizontal (roofs and roads) and vertical (walls) surfaces in real urban environments. The results suggest that the analytically based numerical method can be used for the radiative process parameterization of urban numerical models as well as for the characterization of local urban climate.
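For the ideal street-canyon benchmark mentioned above, the floor-to-sky view factor of an infinitely long canyon has a closed form via the crossed-strings rule for 2-D geometries. The sketch below uses that standard radiative-exchange result, which is an assumption here, not necessarily the paper's exact formulation:

```python
import math

def sky_view_factor_canyon(height, width):
    """View factor from the floor of an infinitely long street canyon
    (wall height H, street width W) to the sky, from the crossed-strings
    rule: psi = sqrt((H/W)^2 + 1) - H/W."""
    x = height / width
    return math.sqrt(x * x + 1.0) - x

psi_open = sky_view_factor_canyon(0.0, 10.0)   # no walls: the floor sees only sky
psi_deep = sky_view_factor_canyon(20.0, 10.0)  # deep canyon: little sky visible
```

A numerical view-factor code can be checked against this analytic curve before being applied to real 3-D building data.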

  13. Contribution of artificial intelligence to the knowledge of prognostic factors in laryngeal carcinoma.

    PubMed

    Zapater, E; Moreno, S; Fortea, M A; Campos, A; Armengot, M; Basterra, J

    2000-11-01

    Many studies have investigated prognostic factors in laryngeal carcinoma, with sometimes conflicting results. Apart from the importance of environmental factors, the different statistical methods employed may have influenced such discrepancies. A program based on artificial intelligence techniques is designed to determine the prognostic factors in a series of 122 laryngeal carcinomas. The results obtained are compared with those derived from two classical statistical methods (Cox regression and mortality tables). Tumor location was found to be the most important prognostic factor by all methods. The proposed intelligent system is found to be a sound method capable of detecting exceptional cases.

  14. Methods for Estimating Uncertainty in Factor Analytic Solutions

    EPA Science Inventory

    The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...

  15. Factor and prevention method of landslide event at FELCRA Semungkis, Hulu Langat, Selangor

    NASA Astrophysics Data System (ADS)

    Manap, N.; Jeyaramah, N.; Syahrom, N.

    2017-12-01

    Landslide is known as one of the powerful geological events that happen unpredictably due to natural or human factors. A study was carried out at FELCRA Semungkis, Hulu Langat, known as one of the areas affected by a landslide involving 16 casualties. The purpose of this study is to identify the main factor that caused the landslide at FELCRA Semungkis, Hulu Langat and to identify a protection method. Data were collected from three respondents working under government bodies through interview sessions and were analysed using the content analysis method. From the results, it can be concluded that the main factors that caused the landslide were both human and natural. The protection method that can be applied to stabilize FELCRA Semungkis, Hulu Langat is soil nailing with the support of a soil create system.

  16. Experimental design methods for bioengineering applications.

    PubMed

    Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri

    2016-01-01

    Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess under question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design and central composite design. These design methods are briefly introduced, and then the application of these design methods to study different bioengineering processes is analyzed.
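A full factorial design, the first method listed above, simply enumerates every combination of factor levels. A minimal sketch with hypothetical bioprocess factors and levels:

```python
from itertools import product

def full_factorial(levels):
    """Enumerate every run of a full factorial design.
    `levels` maps a factor name to the list of its levels."""
    names = list(levels)
    return [dict(zip(names, combo))
            for combo in product(*(levels[n] for n in names))]

# hypothetical factors for a small bioprocess screening study
design = full_factorial({
    "temperature_C": [30, 37],
    "pH": [6.5, 7.0, 7.5],
    "agitation_rpm": [100, 200],
})
```

With 2 x 3 x 2 levels the design has 12 runs; fractional factorial and Plackett-Burman designs exist precisely to cut this count down when many factors are screened.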

  17. Power factor regulation for household usage

    NASA Astrophysics Data System (ADS)

    Daud, Nik Ghazali Nik; Hashim, Fakroul Ridzuan; Tarmizi, Muhammad Haziq Ahmad

    2018-02-01

    Power factor regulation technology has recently drawn the attention of consumers and power generation companies as a means of using electricity efficiently. Controlling the power factor can reduce the amount of power that must be generated to meet demand, hence reducing the greenhouse effect. This paper presents a design method for a power factor controller for household usage. There are several methods to improve the power factor; the controller presented here uses capacitors. Total harmonic distortion has also become a major problem for the reliability of electrical appliances, and techniques to control it are discussed.
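The capacitor-based correction described above follows a textbook sizing rule: compute the reactive power to cancel, then convert it to a shunt capacitance. A sketch with hypothetical household values:

```python
import math

def correction_capacitance(p_watts, pf_initial, pf_target, v_rms, freq_hz):
    """Shunt capacitance that raises a load's power factor from
    pf_initial to pf_target: Qc = P (tan phi1 - tan phi2),
    C = Qc / (2 pi f V^2). Textbook rule; values are illustrative."""
    q_needed = p_watts * (math.tan(math.acos(pf_initial))
                          - math.tan(math.acos(pf_target)))
    return q_needed / (2 * math.pi * freq_hz * v_rms ** 2)

# hypothetical household load: 2 kW at PF 0.70, corrected to 0.95 on 230 V / 50 Hz
c_farads = correction_capacitance(2000, 0.70, 0.95, 230, 50)
```

The result lands in the tens-of-microfarads range typical of household correction capacitors.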

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr

    Previous studies have proposed several methods for integrating characterized environmental impacts as a single index in life cycle assessment. Each of them, however, may lead to different results. This study presents internal and external normalization methods, weighting factors proposed by panel methods, and a monetary valuation based on an endpoint life cycle impact assessment method as the integration methods. Furthermore, this study investigates the differences among the integration methods and identifies the causes of the differences through a case study in which five elementary school buildings were used. As a result, when using internal normalization with weighting factors, the weighting factors had a significant influence on the total environmental impacts whereas the normalization had little influence on the total environmental impacts. When using external normalization with weighting factors, the normalization had a more significant influence on the total environmental impacts than the weighting factors. Due to such differences, the ranking of the five buildings varied depending on the integration methods. The ranking calculated by the monetary valuation method was significantly different from that calculated by the normalization and weighting process. The results aid decision makers in understanding the differences among these integration methods and, finally, help them select the method most appropriate for the goal at hand.
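A minimal sketch of one "internal normalization plus weighting" pipeline of the kind compared above. The normalization choice (dividing by the maximum across alternatives), category names, and weights are hypothetical, not the study's actual data or procedure:

```python
def internal_normalization(impacts):
    """Normalize each impact category by its maximum across the
    alternatives being compared (one common internal-normalization
    choice; illustrative only)."""
    categories = next(iter(impacts.values()))
    maxima = {c: max(alt[c] for alt in impacts.values()) for c in categories}
    return {name: {c: v / maxima[c] for c, v in cats.items()}
            for name, cats in impacts.items()}

def single_index(normalized, weights):
    """Weighted sum of normalized category scores -> one index per alternative."""
    return {name: sum(weights[c] * v for c, v in cats.items())
            for name, cats in normalized.items()}

# hypothetical characterized impacts for two buildings, two categories
impacts = {"building_A": {"GWP": 120.0, "AP": 0.8},
           "building_B": {"GWP": 100.0, "AP": 1.0}}
scores = single_index(internal_normalization(impacts), {"GWP": 0.6, "AP": 0.4})
```

Because the normalization depends on the set of alternatives, changing the weights (or switching to external reference values) can reorder the alternatives, which is exactly the sensitivity the study reports.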

  19. Method Effects on an Adaptation of the Rosenberg Self-Esteem Scale in Greek and the Role of Personality Traits.

    PubMed

    Michaelides, Michalis P; Koutsogiorgi, Chrystalla; Panayiotou, Georgia

    2016-01-01

    Rosenberg's Self-Esteem Scale is a balanced, 10-item scale designed to be unidimensional; however, research has repeatedly shown that its factorial structure is contaminated by method effects due to item wording. Beyond the substantive self-esteem factor, 2 additional factors linked to the positive and negative wording of items have been theoretically specified and empirically supported. Initial evidence has revealed systematic relations of the 2 method factors with variables expressing approach and avoidance motivation. This study assessed the fit of competing confirmatory factor analytic models for the Rosenberg Self-Esteem Scale using data from 2 samples of adult participants in Cyprus. Models that accounted for both positive and negative wording effects via 2 latent method factors had better fit compared to alternative models. Measures of experiential avoidance, social anxiety, and private self-consciousness were associated with the method factors in structural equation models. The findings highlight the need to specify models with wording effects for a more accurate representation of the scale's structure and support the hypothesis of method factors as response styles, which are associated with individual characteristics related to avoidance motivation, behavioral inhibition, and anxiety.

  20. Use of multiple methods to determine factors affecting quality of care of patients with diabetes.

    PubMed

    Khunti, K

    1999-10-01

    The process of care of patients with diabetes is complex; however, GPs are playing a greater role in its management. Despite the research evidence, the quality of care of patients with diabetes is variable. In order to improve care, information is required on the obstacles faced by practices in improving care. Qualitative and quantitative methods can be used for formation of hypotheses and the development of survey procedures. However, to date few examples exist in general practice research on the use of multiple methods using both quantitative and qualitative techniques for hypothesis generation. We aimed to determine information on all factors that may be associated with delivery of care to patients with diabetes. Factors for consideration on delivery of diabetes care were generated by multiple qualitative methods including brainstorming with health professionals and patients, a focus group and interviews with key informants which included GPs and practice nurses. Audit data showing variations in care of patients with diabetes were used to stimulate the brainstorming session. A systematic literature search focusing on quality of care of patients with diabetes in primary care was also conducted. Fifty-four potential factors were identified by multiple methods. Twenty (37.0%) were practice-related factors, 14 (25.9%) were patient-related factors and 20 (37.0%) were organizational factors. A combination of brainstorming and the literature review identified 51 (94.4%) factors. Patients did not identify factors in addition to those identified by other methods. The complexity of delivery of care to patients with diabetes is reflected in the large number of potential factors identified in this study. This study shows the feasibility of using multiple methods for hypothesis generation. Each evaluation method provided unique data which could not otherwise be easily obtained. 
This study highlights a way of combining various traditional methods in an attempt to overcome the deficiencies and bias that may occur when using a single method. Similar methods can also be used to generate hypotheses for other exploratory research. An important responsibility of health authorities and primary care groups will be to assess the health needs of their local populations. Multiple methods could also be used to identify and commission services to meet these needs.

  1. Fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and particle swarm optimization techniques.

    PubMed

    Chen, Shyi-Ming; Manalu, Gandhi Maruli Tua; Pan, Jeng-Shyang; Liu, Hsiang-Chuan

    2013-06-01

    In this paper, we present a new method for fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and particle swarm optimization (PSO) techniques. First, we fuzzify the historical training data of the main factor and the secondary factor, respectively, to form two-factors second-order fuzzy logical relationships. Then, we group the two-factors second-order fuzzy logical relationships into two-factors second-order fuzzy-trend logical relationship groups. Then, we obtain the optimal weighting vector for each fuzzy-trend logical relationship group by using PSO techniques to perform the forecasting. We also apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index and the NTD/USD exchange rates. The experimental results show that the proposed method gets better forecasting performance than the existing methods.
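A minimal particle swarm optimizer of the kind used above to tune group weighting vectors, shown on a toy objective. The inertia and acceleration constants are common textbook defaults, not the authors' settings:

```python
import random

def pso_minimize(loss, dim, n_particles=20, iters=200, seed=1):
    """Minimal PSO: each particle tracks its personal best; all are
    attracted to the swarm's global best. Illustrative sketch."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            val = loss(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# toy objective standing in for forecast error: recover weights (0.3, 0.7)
best, best_val = pso_minimize(lambda w: (w[0] - 0.3) ** 2 + (w[1] - 0.7) ** 2, dim=2)
```

In the paper's setting, `loss` would be the forecasting error of a group's weighted fuzzy-trend rule on the training data.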

  2. Development of an Advanced Respirator Fit-Test Headform

    PubMed Central

    Bergman, Michael S.; Zhuang, Ziqing; Hanson, David; Heimbuch, Brian K.; McDonald, Michael J.; Palmiero, Andrew J.; Shaffer, Ronald E.; Harnish, Delbert; Husband, Michael; Wander, Joseph D.

    2015-01-01

    Improved respirator test headforms are needed to measure the fit of N95 filtering facepiece respirators (FFRs) for protection studies against viable airborne particles. A Static (i.e., non-moving, non-speaking) Advanced Headform (StAH) was developed for evaluating the fit of N95 FFRs. The StAH was developed based on the anthropometric dimensions of a digital headform reported by the National Institute for Occupational Safety and Health (NIOSH) and has a silicone polymer skin with defined local tissue thicknesses. Quantitative fit factor evaluations were performed on seven N95 FFR models of various sizes and designs. Donnings were performed with and without a pre-test leak checking method. For each method, four replicate FFR samples of each of the seven models were tested with two donnings per replicate, resulting in a total of 56 tests per donning method. Each fit factor evaluation comprised three 86-sec exercises: “Normal Breathing” (NB, 11.2 liters per min (lpm)), “Deep Breathing” (DB, 20.4 lpm), then NB again. A fit factor for each exercise and an overall test fit factor were obtained. Analysis of variance methods were used to identify statistical differences among fit factors (analyzed as logarithms) for different FFR models, exercises, and testing methods. For each FFR model and for each testing method, the NB and DB fit factor data were not significantly different (P > 0.05). Significant differences were seen in the overall exercise fit factor data for the two donning methods among all FFR models (pooled data) and in the overall exercise fit factor data for the two testing methods within certain models. Utilization of the leak checking method improved the rate of obtaining overall exercise fit factors ≥ 100. The FFR models, which are expected to achieve overall fit factors ≥ 100 on human subjects, achieved overall exercise fit factors ≥ 100 on the StAH. 
Further research is needed to evaluate the correlation of FFRs fitted on the StAH to FFRs fitted on people. PMID:24369934
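The overall fit factor referenced above is conventionally the harmonic mean of the per-exercise fit factors, the standard combination rule in quantitative fit testing. A sketch with hypothetical exercise values for the NB/DB/NB protocol:

```python
def overall_fit_factor(exercise_fit_factors):
    """Overall fit factor as the harmonic mean of per-exercise fit
    factors: n / sum(1/FF_i). The harmonic mean is dominated by the
    worst exercise, which is the intent of the rule."""
    n = len(exercise_fit_factors)
    return n / sum(1.0 / ff for ff in exercise_fit_factors)

# hypothetical per-exercise fit factors for NB, DB, NB
ff = overall_fit_factor([150.0, 90.0, 120.0])
passes = ff >= 100.0  # the >= 100 criterion used for N95 FFRs
```

Note how the 90 on the middle exercise pulls the overall value well below the arithmetic mean of 120.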

  3. A novel statistical approach for identification of the master regulator transcription factor.

    PubMed

    Sikdar, Sinjini; Datta, Susmita

    2017-02-02

    Transcription factors are known to play key roles in carcinogenesis and therefore, are gaining popularity as potential therapeutic targets in drug development. A 'master regulator' transcription factor often appears to control most of the regulatory activities of the other transcription factors and the associated genes. This 'master regulator' transcription factor is at the top of the hierarchy of the transcriptomic regulation. Therefore, it is important to identify and target the master regulator transcription factor for proper understanding of the associated disease process and identifying the best therapeutic option. We present a novel two-step computational approach for identification of the master regulator transcription factor in a genome. At the first step of our method we test whether there exists any master regulator transcription factor in the system. We evaluate the concordance of two ranked lists of transcription factors using a statistical measure. In case the concordance measure is statistically significant, we conclude that there is a master regulator. At the second step, our method identifies the master regulator transcription factor, if there exists one. In the simulation scenario, our method performs reasonably well in validating the existence of a master regulator when the number of subjects in each treatment group is reasonably large. In application to two real datasets, our method ensures the existence of master regulators and identifies biologically meaningful master regulators. R code for implementing our method on a sample test dataset can be found at http://www.somnathdatta.org/software . We have developed a screening method of identifying the 'master regulator' transcription factor using only gene expression data. Understanding the regulatory structure and finding the master regulator help narrow the search space for identifying biomarkers for complex diseases such as cancer. In addition to identifying the master regulator, our method provides an overview of the regulatory structure of the transcription factors that control the global gene expression profiles and, consequently, cell functioning.
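One simple way to measure the concordance of two ranked lists of transcription factors, as in the first step above, is Kendall's tau. A self-contained sketch (the paper's actual statistical measure may differ, and the TF names are hypothetical):

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall's tau between two rankings of the same items, given as
    dicts mapping item -> rank. +1 means identical order, -1 reversed."""
    concordant = discordant = 0
    for x, y in combinations(list(rank_a), 2):
        s = (rank_a[x] - rank_a[y]) * (rank_b[x] - rank_b[y])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# two hypothetical rankings of four transcription factors
list1 = {"TF_A": 1, "TF_B": 2, "TF_C": 3, "TF_D": 4}
list2 = {"TF_A": 1, "TF_B": 3, "TF_C": 2, "TF_D": 4}
tau = kendall_tau(list1, list2)
```

A significance test on such a statistic is what would justify concluding that a master regulator exists.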

  4. Synthesizing Regression Results: A Factored Likelihood Method

    ERIC Educational Resources Information Center

    Wu, Meng-Jia; Becker, Betsy Jane

    2013-01-01

    Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…

  5. A Voice-Radio Method for Collecting Human Factors Data.

    ERIC Educational Resources Information Center

    Askren, William B.; And Others

    Available methods for collecting human factors data rely heavily on observations, interviews, and questionnaires. A need exists for other methods. The feasibility of using two-way voice-radio for this purpose was studied. The data collection methodology consisted of a human factors analyst talking from a radio base station with technicians wearing…

  6. Characterization of Residual Stress Effects on Fatigue Crack Growth of a Friction Stir Welded Aluminum Alloy

    NASA Technical Reports Server (NTRS)

    Newman, John A.; Smith, Stephen W.; Seshadri, Banavara R.; James, Mark A.; Brazill, Richard L.; Schultz, Robert W.; Donald, J. Keith; Blair, Amy

    2015-01-01

    An on-line compliance-based method to account for residual stress effects in stress-intensity factor and fatigue crack growth property determinations has been evaluated. Residual stress intensity factor results determined from specimens containing friction stir weld induced residual stresses are presented, and the on-line method results were found to be in excellent agreement with residual stress-intensity factor data obtained using the cut compliance method. Variable stress-intensity factor tests were designed to demonstrate that a simple superposition model, summing the applied stress-intensity factor with the residual stress-intensity factor, can be used to determine the total crack-tip stress-intensity factor. Finite element, VCCT (virtual crack closure technique), and J-integral analysis methods have been used to characterize weld-induced residual stress using thermal expansion/contraction in the form of an equivalent delta T (change in local temperature during welding) to simulate the welding process. This equivalent delta T was established and applied to analyze different specimen configurations to predict residual stress distributions and associated residual stress-intensity factor values. The predictions were found to agree well with experimental results obtained using the crack- and cut-compliance methods.
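The superposition model described above is a one-line computation; the sketch below also shows a common consequence, the shift in effective stress ratio when a residual K offsets both ends of the load cycle. All values are illustrative, not the paper's data:

```python
def total_stress_intensity(k_applied, k_residual):
    """Superposition model from the abstract: total crack-tip
    stress-intensity factor = applied K + residual K."""
    return k_applied + k_residual

def effective_stress_ratio(k_min_applied, k_max_applied, k_residual):
    """Effective stress ratio R when a residual K shifts both ends of
    the cycle (a standard consequence of superposition)."""
    return (k_min_applied + k_residual) / (k_max_applied + k_residual)

# illustrative values in MPa*sqrt(m); a compressive residual field
k_total = total_stress_intensity(k_applied=12.0, k_residual=-3.5)
r_eff = effective_stress_ratio(1.2, 12.0, -3.5)
```

A compressive residual field drives the effective R negative, which is why residual stresses distort crack growth rate data if not accounted for.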

  7. Calibrated Bayes Factors Should Not Be Used: A Reply to Hoijtink, van Kooten, and Hulsker.

    PubMed

    Morey, Richard D; Wagenmakers, Eric-Jan; Rouder, Jeffrey N

    2016-01-01

    Hoijtink, van Kooten, and Hulsker (2016) present a method for choosing the prior distribution for an analysis with Bayes factors that is based on controlling error rates, which they advocate as an alternative to our more subjective methods (Morey & Rouder, 2014; Rouder, Speckman, Sun, Morey, & Iverson, 2009; Wagenmakers, Wetzels, Borsboom, & van der Maas, 2011). We show that the method they advocate amounts to a simple significance test, and that the resulting Bayes factors are not interpretable. Additionally, their method fails in common circumstances and has the potential to yield arbitrarily high Type II error rates. After critiquing their method, we outline the position on subjectivity that underlies our advocacy of Bayes factors.
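For context on what a Bayes factor computes, here is a textbook binomial example comparing a point null against a uniform prior on the success probability. This is a standard illustration, not the models at issue in the exchange above:

```python
from math import comb, factorial

def beta_binomial_marginal(k, n):
    """Marginal likelihood of k successes in n trials under a uniform
    Beta(1,1) prior on p: C(n,k) * B(k+1, n-k+1), which simplifies
    to 1/(n+1) for every k."""
    return comb(n, k) * factorial(k) * factorial(n - k) / factorial(n + 1)

def bayes_factor_point_vs_uniform(k, n, p0=0.5):
    """Bayes factor BF01 comparing H0: p = p0 against
    H1: p ~ Uniform(0, 1) for binomial data."""
    like_h0 = comb(n, k) * p0 ** k * (1 - p0) ** (n - k)
    return like_h0 / beta_binomial_marginal(k, n)

bf01 = bayes_factor_point_vs_uniform(k=8, n=10)
```

The prior under H1 directly scales the marginal likelihood, which is why prior choice, the subject of the dispute above, matters so much for Bayes factors.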

  8. Two methods of assessing the mortality factors affecting the larvae and pupae of Cameraria ohridella in the leaves of Aesculus hippocastanum in Switzerland and Bulgaria.

    PubMed

    Girardoz, S; Tomov, R; Eschen, R; Quicke, D L J; Kenis, M

    2007-10-01

    The horse-chestnut leaf miner, Cameraria ohridella, is an invasive alien species defoliating horse-chestnut, a popular ornamental tree in Europe. This paper presents quantitative data on mortality factors affecting larvae and pupae of the leaf miner in Switzerland and Bulgaria, both in urban and forest environments. Two sampling methods were used and compared: a cohort method, consisting of the surveying of pre-selected mines throughout their development, and a grab sampling method, consisting of single sets of leaves collected and dissected at regular intervals. The total mortality per generation varied between 14 and 99%. Mortality was caused by a variety of factors, including parasitism, host feeding, predation by birds and arthropods, plant defence reaction, leaf senescence, intra-specific competition and inter-specific competition with a fungal disease. Significant interactions were found between mortality factors and sampling methods, countries, environments and generation. No mortality factor was dominant throughout the sites, generations and methods tested. Plant defence reactions constituted the main mortality factor for the first two larval stages, whereas predation by birds and arthropods and parasitism were more important in older larvae and pupae. Mortality caused by leaf senescence was often the dominant mortality factor in the last annual generation. The cohort method detected higher mortality rates than the grab sampling method. In particular, mortality by plant defence reaction and leaf senescence were better assessed using the cohort method, which is, therefore, recommended for life table studies on leaf miners.

  9. Determination of antenna factors using a three-antenna method at open-field test site

    NASA Astrophysics Data System (ADS)

    Masuzawa, Hiroshi; Tejima, Teruo; Harima, Katsushige; Morikawa, Takao

    1992-09-01

    Recently NIST has used the three-antenna method for calibration of the antenna factor of an antenna used for EMI measurements. This method does not require the specially designed standard antennas which are necessary in the standard field method or the standard antenna method, and can be used at an open-field test site. This paper theoretically and experimentally examines the measurement errors of this method and evaluates the precision of the antenna-factor calibration. It is found that the main source of the error is the non-ideal propagation characteristics of the test site, which should therefore be measured before the calibration. The precision of the antenna-factor calibration at the test site used in these experiments is estimated to be 0.5 dB.
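The core of the three-antenna method is that three pairwise measurements determine three unknown antenna factors. The sketch below solves the linear system, assuming the site and path terms have already been removed so each measurement reduces to a sum of two antenna factors in dB (a simplification of a real open-field calibration):

```python
def three_antenna_factors(s12, s13, s23):
    """Solve the pairwise sums S_ij = AF_i + AF_j (in dB) for the
    individual antenna factors of antennas 1, 2, 3."""
    af1 = (s12 + s13 - s23) / 2.0
    af2 = s12 - af1
    af3 = s13 - af1
    return af1, af2, af3

# synthetic check: start from known AFs, form the pairwise sums, recover them
true_af = (10.0, 12.5, 15.0)
s12 = true_af[0] + true_af[1]
s13 = true_af[0] + true_af[2]
s23 = true_af[1] + true_af[2]
af = three_antenna_factors(s12, s13, s23)
```

Because no antenna factor is known in advance, this is why the method needs no standard antenna: three equations in three unknowns close the system.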

  10. Method and apparatus for lead-unity-lag electric power generation system

    NASA Technical Reports Server (NTRS)

    Ganev, Evgeni (Inventor); Warr, William (Inventor); Salam, Mohamed (Arif) (Inventor)

    2013-01-01

    A method employing a lead-unity-lag adjustment on a power generation system is disclosed. The method may include calculating a unity power factor point and adjusting system parameters to shift a power factor angle to substantially match an operating power angle creating a new unity power factor point. The method may then define operation parameters for a high reactance permanent magnet machine based on the adjusted power level.

  11. Improving Your Exploratory Factor Analysis for Ordinal Data: A Demonstration Using FACTOR

    ERIC Educational Resources Information Center

    Baglin, James

    2014-01-01

    Exploratory factor analysis (EFA) methods are used extensively in the field of assessment and evaluation. Due to EFA's widespread use, common methods and practices have come under close scrutiny. A substantial body of literature has been compiled highlighting problems with many of the methods and practices used in EFA, and, in response, many…

  12. A DEIM Induced CUR Factorization

    DTIC Science & Technology

    2015-09-18

    We derive a CUR approximate matrix factorization based on the Discrete Empirical Interpolation Method (DEIM). For a given matrix A, such a factorization provides a [...] CUR approximations based on leverage scores.
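A compact sketch of a DEIM-based CUR construction in the spirit of the record above: DEIM greedily selects row and column indices from the leading singular vectors, and a least-squares middle factor ties the selected columns and rows together. This follows the standard DEIM-CUR recipe, not necessarily this report's exact algorithm:

```python
import numpy as np

def deim_indices(U):
    """DEIM greedy index selection on the columns of U: pick the row
    of largest magnitude in each successive interpolation residual."""
    m, k = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, k):
        c = np.linalg.solve(U[np.ix_(idx, range(j))], U[idx, j])
        r = U[:, j] - U[:, :j] @ c  # residual vanishes at chosen rows
        idx.append(int(np.argmax(np.abs(r))))
    return idx

def deim_cur(A, k):
    """Rank-k CUR sketch: DEIM picks columns from the right singular
    vectors and rows from the left ones; U is the least-squares
    middle factor."""
    Uk, _, Vtk = np.linalg.svd(A, full_matrices=False)
    rows = deim_indices(Uk[:, :k])
    cols = deim_indices(Vtk[:k, :].T)
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 20))  # exactly rank 4
C, U, R = deim_cur(A, 4)
err = np.linalg.norm(C @ U @ R - A) / np.linalg.norm(A)
```

On an exactly rank-k matrix the CUR product reproduces A to machine precision, since the selected columns and rows span its column and row spaces.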

  13. Multiple Interacting Risk Factors: On Methods for Allocating Risk Factor Interactions.

    PubMed

    Price, Bertram; MacNicoll, Michael

    2015-05-01

    A persistent problem in health risk analysis where it is known that a disease may occur as a consequence of multiple risk factors with interactions is allocating the total risk of the disease among the individual risk factors. This problem, referred to here as risk apportionment, arises in various venues, including: (i) public health management, (ii) government programs for compensating injured individuals, and (iii) litigation. Two methods have been described in the risk analysis and epidemiology literature for allocating total risk among individual risk factors. One method uses weights to allocate interactions among the individual risk factors. The other method is based on risk accounting axioms and finding an optimal and unique allocation that satisfies the axioms using a procedure borrowed from game theory. Where relative risk or attributable risk is the risk measure, we find that the game-theory-determined allocation is the same as the allocation where risk factor interactions are apportioned to individual risk factors using equal weights. Therefore, the apportionment problem becomes one of selecting a meaningful set of weights for allocating interactions among the individual risk factors. Equal weights and weights proportional to the risks of the individual risk factors are discussed. © 2015 Society for Risk Analysis.
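    For the two-factor case, the equal-weight allocation of the interaction described above can be sketched as follows (the relative-risk numbers and the helper `apportion_equal` are illustrative, not taken from the paper; for two factors this coincides with the game-theoretic Shapley allocation):

    ```python
    # Equal-weight apportionment of a two-factor interaction (illustrative).

    def apportion_equal(rr_a, rr_b, rr_ab):
        """Split total excess relative risk between two factors,
        dividing the interaction term equally between them."""
        excess_a = rr_a - 1.0          # main effect of factor A
        excess_b = rr_b - 1.0          # main effect of factor B
        interaction = (rr_ab - 1.0) - excess_a - excess_b
        share_a = excess_a + interaction / 2.0
        share_b = excess_b + interaction / 2.0
        return share_a, share_b

    # Hypothetical relative risks: A alone 2.0, B alone 3.0, joint 8.0.
    a, b = apportion_equal(2.0, 3.0, 8.0)
    print(a, b)   # the two shares sum to the total excess risk of 7.0
    ```

    The shares reproduce the total excess risk exactly, which is the accounting axiom the paper's game-theoretic procedure enforces.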

  14. The scalar and electromagnetic form factors of the nucleon in dispersively improved Chiral EFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alarcon, Jose Manuel

    We present a method for calculating the nucleon form factors of G-parity-even operators. This method combines chiral effective field theory (χEFT) and dispersion theory. Through unitarity we factorize the imaginary part of the form factors into a perturbative part, calculable with χEFT, and a non-perturbative part, obtained through other methods. We consider the scalar and electromagnetic (EM) form factors of the nucleon. The results show an important improvement over standard chiral calculations and can be used in analyses of the low-energy properties of the nucleon.

  15. Spatial association between dissection density and environmental factors over the entire conterminous United States

    NASA Astrophysics Data System (ADS)

    Luo, Wei; Jasiewicz, Jaroslaw; Stepinski, Tomasz; Wang, Jinfeng; Xu, Chengdong; Cang, Xuezhi

    2016-01-01

    Previous studies of land dissection density (D) often find contradictory results regarding factors controlling its spatial variation. We hypothesize that the dominant controlling factors (and the interactions between them) vary from region to region due to differences in each region's local characteristics and geologic history. We test this hypothesis by applying a geographical detector method to eight physiographic divisions of the conterminous United States and identify the dominant factor(s) in each. The geographical detector method computes the power of determinant (q) that quantitatively measures the affinity between the factor considered and D. Results show that the factor (or factor combination) with the largest q value is different for physiographic regions with different characteristics and geologic histories. For example, lithology dominates in mountainous regions, curvature dominates in plains, and glaciation dominates in previously glaciated areas. The geographical detector method offers an objective framework for revealing factors controlling Earth surface processes.
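    The power of determinant q used here is commonly defined as one minus the ratio of within-stratum variance to total variance; a minimal sketch under that standard definition (toy data, not the study's):

    ```python
    import numpy as np

    def power_of_determinant(d, strata):
        """Geographical detector q statistic: one minus the ratio of
        summed within-stratum variance to total variance of D.
        q is 1 when the strata fully explain D, 0 when they explain nothing."""
        d = np.asarray(d, dtype=float)
        strata = np.asarray(strata)
        n, total_var = d.size, d.var()          # population variance
        within = sum(d[strata == s].size * d[strata == s].var()
                     for s in np.unique(strata))
        return 1.0 - within / (n * total_var)

    # Toy example: D is perfectly stratified, so q is exactly 1.
    d = [1, 1, 1, 5, 5, 5]
    strata = ["a", "a", "a", "b", "b", "b"]
    print(power_of_determinant(d, strata))  # 1.0
    ```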

  16. An effective method to accurately calculate the phase space factors for β - β - decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neacsu, Andrei; Horoi, Mihai

    2016-01-01

    Accurate calculations of the electron phase space factors are necessary for reliable predictions of double-beta decay rates and for the analysis of the associated electron angular and energy distributions. Here, we present an effective method to calculate these phase space factors that takes into account the distorted Coulomb field of the daughter nucleus, yet it allows one to easily calculate the phase space factors with good accuracy relative to the most exact methods available in the recent literature.

  17. Bayesian data augmentation methods for the synthesis of qualitative and quantitative research findings

    PubMed Central

    Crandell, Jamie L.; Voils, Corrine I.; Chang, YunKyung; Sandelowski, Margarete

    2010-01-01

    The possible utility of Bayesian methods for the synthesis of qualitative and quantitative research has been repeatedly suggested but insufficiently investigated. In this project, we developed and used a Bayesian method for synthesis, with the goal of identifying factors that influence adherence to HIV medication regimens. We investigated the effect of 10 factors on adherence. Recognizing that not all factors were examined in all studies, we considered standard methods for dealing with missing data and chose a Bayesian data augmentation method. We were able to summarize, rank, and compare the effects of each of the 10 factors on medication adherence. This is a promising methodological development in the synthesis of qualitative and quantitative research. PMID:21572970

  18. Acute and impaired wound healing: pathophysiology and current methods for drug delivery, part 2: role of growth factors in normal and pathological wound healing: therapeutic potential and methods of delivery.

    PubMed

    Demidova-Rice, Tatiana N; Hamblin, Michael R; Herman, Ira M

    2012-08-01

    This is the second of 2 articles that discuss the biology and pathophysiology of wound healing, reviewing the role that growth factors play in this process and describing the current methods for growth factor delivery into the wound bed.

  19. Acute and Impaired Wound Healing: Pathophysiology and Current Methods for Drug Delivery, Part 2: Role of Growth Factors in Normal and Pathological Wound Healing: Therapeutic Potential and Methods of Delivery

    PubMed Central

    Demidova-Rice, Tatiana N.; Hamblin, Michael R.; Herman, Ira M.

    2012-01-01

    This is the second of 2 articles that discuss the biology and pathophysiology of wound healing, reviewing the role that growth factors play in this process and describing the current methods for growth factor delivery into the wound bed. PMID:22820962

  20. COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R825173)

    EPA Science Inventory

    Abstract

    This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard...
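    The abstract does not describe NUMFACT itself, but one classic baseline such an algorithm is compared against, the Kaiser eigenvalue-greater-than-one rule, can be sketched as follows (the toy data are made up for illustration):

    ```python
    import numpy as np

    def kaiser_count(data):
        """Classic Kaiser criterion: retain factors whose
        correlation-matrix eigenvalues exceed 1. This is a standard
        baseline method, NOT the NUMFACT algorithm itself."""
        corr = np.corrcoef(np.asarray(data, float), rowvar=False)
        return int(np.sum(np.linalg.eigvalsh(corr) > 1.0))

    # Two underlying factors, each measured twice: the rule recovers 2.
    rng = np.random.default_rng(1)
    a, b = rng.normal(size=200), rng.normal(size=200)
    print(kaiser_count(np.column_stack([a, a, b, b])))  # 2
    ```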

  1. Factors Affecting Optimal Surface Roughness of AISI 4140 Steel in Turning Operation Using Taguchi Experiment

    NASA Astrophysics Data System (ADS)

    Novareza, O.; Sulistiyarini, D. H.; Wiradmoko, R.

    2018-02-01

    This paper presents the results of using the Taguchi method in the turning of medium carbon AISI 4140 steel. The primary concern is to find the optimal surface roughness after turning. The Taguchi method is used to obtain the combination of factors and factor levels that yields the optimum surface roughness. Four important factors at three levels each were used in the Taguchi-based experiment. A total of 27 experiments were carried out and analysed using the analysis of variance (ANOVA) method. Surface finish was measured as Ra surface roughness. The depth of cut was found to be the most important factor for reducing the surface roughness of AISI 4140 steel. By contrast, the other factors considered, spindle speed and the side rake angle of the tool, proved to have less effect on the surface finish. Interestingly, coolant composition emerged as the second most important factor in reducing roughness; further research may be needed to explain this result.
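    Taguchi analyses of this kind typically rank factor levels by a signal-to-noise ratio; for surface roughness the smaller-the-better form applies. A minimal sketch with hypothetical Ra readings (the formula is the standard Taguchi one, not data from this study):

    ```python
    import math

    def sn_smaller_is_better(values):
        """Taguchi smaller-the-better signal-to-noise ratio:
        S/N = -10 * log10(mean(y^2)). A higher S/N means lower
        (better) roughness for that factor-level combination."""
        return -10.0 * math.log10(sum(v * v for v in values) / len(values))

    # Hypothetical Ra readings (um) for one factor-level combination.
    print(sn_smaller_is_better([1.2, 1.4, 1.3]))
    ```

    In a full Taguchi study, S/N is averaged per factor level across the orthogonal array, and the level with the highest mean S/N is selected.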

  2. Spatial smoothing coherence factor for ultrasound computed tomography

    NASA Astrophysics Data System (ADS)

    Lou, Cuijuan; Xu, Mengling; Ding, Mingyue; Yuchi, Ming

    2016-04-01

    In recent years, many studies have been carried out on ultrasound computed tomography (USCT) because of its prospects for early diagnosis of breast cancer. This paper applies four coherence-factor-like beamforming methods to improve the image quality of the synthetic aperture focusing method for USCT: the coherence factor (CF), the phase coherence factor (PCF), the sign coherence factor (SCF) and the spatial smoothing coherence factor (SSCF) (proposed in our previous work). The performance of these methods was tested on simulated raw data generated by the ultrasound simulation software PZFlex 2014. The simulated phantom was water of 4 cm diameter with three nylon objects of different diameters inside. The ring-type transducer had 72 elements with a center frequency of 1 MHz. The results show that all the methods can reveal the biggest nylon circle, of radius 2.5 mm. SSCF achieves the highest SNR among the tested methods and provides a more homogeneous background. None of the methods can reveal the two smaller nylon circles, of radius 0.75 mm and 0.25 mm, which may be due to the small number of elements.
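    The plain coherence factor named above has a standard definition: the energy of the coherent sum across channels divided by N times the incoherent sum of energies. A minimal sketch for one image point (the PCF, SCF and SSCF variants are not reproduced here):

    ```python
    import numpy as np

    def coherence_factor(channel_samples):
        """Standard coherence factor for one image point:
        CF = |sum of channel signals|^2 / (N * sum of |signal|^2).
        CF is 1 for perfectly coherent channels, lower otherwise."""
        s = np.asarray(channel_samples, dtype=complex)
        n = s.size
        return np.abs(s.sum()) ** 2 / (n * np.sum(np.abs(s) ** 2))

    # Coherent channels give CF = 1; fully out-of-phase channels give 0.
    print(coherence_factor([1, 1, 1, 1]))        # 1.0
    print(coherence_factor([1, -1, 1, -1]))      # 0.0
    ```

    In coherence-factor beamforming, each beamformed pixel is multiplied by its CF, suppressing pixels whose channel signals are incoherent.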

  3. A systematic review and appraisal of methods of developing and validating lifestyle cardiovascular disease risk factors questionnaires.

    PubMed

    Nse, Odunaiya; Quinette, Louw; Okechukwu, Ogah

    2015-09-01

    Well developed and validated lifestyle cardiovascular disease (CVD) risk factors questionnaires is the key to obtaining accurate information to enable planning of CVD prevention program which is a necessity in developing countries. We conducted this review to assess methods and processes used for development and content validation of lifestyle CVD risk factors questionnaires and possibly develop an evidence based guideline for development and content validation of lifestyle CVD risk factors questionnaires. Relevant databases at the Stellenbosch University library were searched for studies conducted between 2008 and 2012, in English language and among humans. Using the following databases; pubmed, cinahl, psyc info and proquest. Search terms used were CVD risk factors, questionnaires, smoking, alcohol, physical activity and diet. Methods identified for development of lifestyle CVD risk factors were; review of literature either systematic or traditional, involvement of expert and /or target population using focus group discussion/interview, clinical experience of authors and deductive reasoning of authors. For validation, methods used were; the involvement of expert panel, the use of target population and factor analysis. Combination of methods produces questionnaires with good content validity and other psychometric properties which we consider good.

  4. Hypothesis Testing Using Factor Score Regression: A Comparison of Four Methods

    ERIC Educational Resources Information Center

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2016-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and…
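    Two of the four estimators compared, the regression (Thomson) and Bartlett factor score weights, follow standard textbook formulas under the model Sigma = Lambda Lambda' + Psi. A minimal numpy sketch of those two (not the authors' code; the Skrondal-Laake and Croon corrections are omitted):

    ```python
    import numpy as np

    def factor_scores(x, loadings, uniquenesses, method="bartlett"):
        """Factor scores for standardized observations x (n x p), given a
        loading matrix (p x k) and uniquenesses (p,). Standard textbook
        formulas for the Bartlett and regression (Thomson) estimators."""
        L = np.asarray(loadings, float)
        psi = np.asarray(uniquenesses, float)
        if method == "bartlett":
            # F = (L' Psi^-1 L)^-1 L' Psi^-1 x  -- unbiased weights
            lp = L.T / psi                       # L' Psi^-1, shape (k, p)
            w = np.linalg.solve(lp @ L, lp)
        else:
            # regression scores: F = L' Sigma^-1 x, Sigma = L L' + Psi
            sigma = L @ L.T + np.diag(psi)
            w = np.linalg.solve(sigma, L).T
        return np.asarray(x, float) @ w.T
    ```

    Bartlett scores are conditionally unbiased (an observation lying exactly along a loading column scores 1 on that factor), whereas regression scores are shrunken toward zero.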

  5. Slip and Slide Method of Factoring Trinomials with Integer Coefficients over the Integers

    ERIC Educational Resources Information Center

    Donnell, William A.

    2012-01-01

    In intermediate and college algebra courses there are a number of methods for factoring quadratic trinomials with integer coefficients over the integers. Some of these methods have been given names, such as trial and error, reversing FOIL, AC method, middle term splitting method and slip and slide method. The purpose of this article is to discuss…
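    The slip and slide method itself can be made concrete with a short sketch (the helper `slip_and_slide` is illustrative and assumes the trinomial actually factors over the integers):

    ```python
    from fractions import Fraction

    def slip_and_slide(a, b, c):
        """Factor a*x^2 + b*x + c over the integers via slip and slide:
        1. 'Slide' a onto c: factor x^2 + b*x + a*c as (x + p)(x + q).
        2. Divide p and q by a and reduce the fractions.
        3. 'Slip' each remaining denominator back in front of x.
        Returns the two binomial factors as (coefficient of x, constant)."""
        ac = a * c
        # find integers p, q with p + q = b and p * q = a*c
        p = next(p for p in range(-abs(ac), abs(ac) + 1)
                 if p != 0 and ac % p == 0 and p + ac // p == b)
        q = ac // p
        factors = []
        for r in (p, q):
            f = Fraction(r, a)   # divide by a; Fraction auto-reduces
            factors.append((f.denominator, f.numerator))  # (d*x + n)
        return factors

    # 2x^2 + 7x + 3 -> (2x + 1)(x + 3)
    print(slip_and_slide(2, 7, 3))  # [(2, 1), (1, 3)]
    ```

    For 2x² + 7x + 3: slide to x² + 7x + 6 = (x + 1)(x + 6), divide the constants by 2 to get (x + 1/2)(x + 3), then slip the 2 back to obtain (2x + 1)(x + 3).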

  6. Cross-Cultural Adaptation and Validation of the MPAM-R to Brazilian Portuguese and Proposal of a New Method to Calculate Factor Scores

    PubMed Central

    Albuquerque, Maicon R.; Lopes, Mariana C.; de Paula, Jonas J.; Faria, Larissa O.; Pereira, Eveline T.; da Costa, Varley T.

    2017-01-01

    In order to understand the reasons that lead individuals to practice physical activity, researchers developed the Motives for Physical Activity Measure-Revised (MPAM-R) scale. In 2010, the MPAM-R was translated into Portuguese and validated; however, its psychometric measures were not acceptable. In addition, factor scores on some sports psychology scales are calculated as the mean of the scores of the factor's items. Nevertheless, it seems appropriate that items with higher factor loadings, extracted by factor analysis, should have greater weight in the factor score, while items with lower factor loadings have less weight. The aims of the present study were to translate and validate a Portuguese version of the MPAM-R and to investigate the agreement between two methods of calculating factor scores. Three hundred volunteers who had been involved in physical activity programs for at least 6 months were recruited. Confirmatory factor analysis of the 30 items indicated that this version did not fit the model. After excluding four items, the final 26-item model showed acceptable fit measures in exploratory factor analysis and conceptually supported the five factors of the original proposal. When the two methods of calculating factor scores were compared, only the "Enjoyment" and "Appearance" factors showed agreement between methods. Thus, the Portuguese version of the MPAM-R can be used in a Brazilian context, and the new proposal for calculating factor scores seems promising. PMID:28293203
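    One plausible reading of the weighting idea described above, items weighted by their normalized factor loadings rather than averaged equally, can be sketched as follows (the scores and loadings are illustrative, and this is not the authors' exact formula):

    ```python
    import numpy as np

    def mean_factor_score(item_scores):
        """Conventional approach: unweighted mean of the factor's items."""
        return np.mean(np.asarray(item_scores, float), axis=1)

    def loading_weighted_score(item_scores, loadings):
        """Weighted alternative in the spirit of the proposal above:
        items with larger factor loadings contribute more to the score.
        (Illustrative reading, not the paper's exact method.)"""
        w = np.asarray(loadings, float)
        w = w / w.sum()                      # normalize weights to sum to 1
        return np.asarray(item_scores, float) @ w

    # Three items scored 1-7 by two respondents; hypothetical loadings.
    scores = [[7, 4, 1], [5, 5, 5]]
    loadings = [0.9, 0.6, 0.3]
    print(mean_factor_score(scores))                  # [4. 5.]
    print(loading_weighted_score(scores, loadings))   # [5. 5.]
    ```

    The two methods agree only when item responses are uniform, which illustrates why the paper finds agreement for some factors but not others.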

  7. Contraceptive Method Choice Among Young Adults: Influence of Individual and Relationship Factors.

    PubMed

    Harvey, S Marie; Oakley, Lisa P; Washburn, Isaac; Agnew, Christopher R

    2018-01-26

    Because decisions related to contraceptive behavior are often made by young adults in the context of specific relationships, the relational context likely influences use of contraceptives. Data presented here are from in-person structured interviews with 536 Black, Hispanic, and White young adults from East Los Angeles, California. We collected partner-specific relational and contraceptive data on all sexual partnerships for each individual, on four occasions, over one year. Using three-level multinomial logistic regression models, we examined individual and relationship factors predictive of contraceptive use. Results indicated that both individual and relationship factors predicted contraceptive use, but factors varied by method. Participants reporting greater perceived partner exclusivity and relationship commitment were more likely to use hormonal/long-acting methods only or a less effective method/no method versus condoms only. Those with greater participation in sexual decision making were more likely to use any method over a less effective method/no method and were more likely to use condoms only or dual methods versus a hormonal/long-acting method only. In addition, for women only, those who reported greater relationship commitment were more likely to use hormonal/long-acting methods or a less effective method/no method versus a dual method. In summary, interactive relationship qualities and dynamics (commitment and sexual decision making) significantly predicted contraceptive use.

  8. Radium concentration factors and their use in health and environmental risk assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meinhold, A.F.; Hamilton, L.D.

    1991-12-31

    Radium is known to be taken up by aquatic animals, and tends to accumulate in bone, shell and exoskeleton. The most common approach to estimating the uptake of a radionuclide by aquatic animals for use in health and environmental risk assessments is the concentration factor method. The concentration factor method relates the concentration of a contaminant in an organism to the concentration in the surrounding water. Site-specific data are not usually available, and generic, default values are often used in risk assessment studies. This paper describes the concentration factor method, summarizes some of the variables which may influence the concentration factor for radium, reviews reported concentration factors measured in marine environments and presents concentration factors derived from data collected in a study in coastal Louisiana. The use of generic default values for the concentration factor is also discussed.
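    The concentration factor method reduces to a simple ratio; a minimal sketch with hypothetical activity concentrations (units and values are illustrative):

    ```python
    def concentration_factor(c_organism, c_water):
        """Concentration factor: contaminant concentration in the
        organism divided by that in the surrounding water."""
        return c_organism / c_water

    def tissue_concentration(cf, c_water):
        """Risk-assessment use: predict the tissue concentration from a
        (generic or site-specific) CF and a measured water concentration."""
        return cf * c_water

    # Hypothetical: 50 Bq/kg in shell, 0.25 Bq/L in water -> CF = 200
    print(concentration_factor(50.0, 0.25))   # 200.0
    ```

    In a generic assessment, the first step is replaced by a default CF from the literature, which is the practice the paper examines.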

  9. Radium concentration factors and their use in health and environmental risk assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meinhold, A.F.; Hamilton, L.D.

    1991-01-01

    Radium is known to be taken up by aquatic animals, and tends to accumulate in bone, shell and exoskeleton. The most common approach to estimating the uptake of a radionuclide by aquatic animals for use in health and environmental risk assessments is the concentration factor method. The concentration factor method relates the concentration of a contaminant in an organism to the concentration in the surrounding water. Site-specific data are not usually available, and generic, default values are often used in risk assessment studies. This paper describes the concentration factor method, summarizes some of the variables which may influence the concentration factor for radium, reviews reported concentration factors measured in marine environments and presents concentration factors derived from data collected in a study in coastal Louisiana. The use of generic default values for the concentration factor is also discussed.

  10. A Note on Procrustean Rotation in Exploratory Factor Analysis: A Computer Intensive Approach to Goodness-of-Fit Evaluation.

    ERIC Educational Resources Information Center

    Raykov, Tenko; Little, Todd D.

    1999-01-01

    Describes a method for evaluating results of Procrustean rotation to a target factor pattern matrix in exploratory factor analysis. The approach, based on the bootstrap method, yields empirical approximations of the sampling distributions of: (1) differences between target elements and rotated factor pattern matrices; and (2) the overall…

  11. Climate-dependence of ecosystem services in a nature reserve in northern China

    PubMed Central

    Fang, Jiaohui; Song, Huali; Zhang, Yiran; Li, Yanran

    2018-01-01

    Evaluation of ecosystem services has become a research hotspot, but uncertainties over appropriate methods remain. Evaluation can be based on the unit price of services (the services value method) or the unit price of the area (the area value method). The former takes meteorological factors into account, while the latter does not. This study uses Kunyu Mountain Nature Reserve as a study site at which to test the effects of climate on ecosystem services. Measured data and remote sensing imagery processed in a geographic information system were combined to evaluate gas regulation and soil conservation, and the influence of meteorological factors on ecosystem services. The results were used to analyze the appropriateness of the area value method. Our results show that the value of ecosystem services is significantly affected by meteorological factors, especially precipitation. Use of the area value method (which ignores the impacts of meteorological factors) could considerably reduce the accuracy of ecosystem services evaluation. Results were also compared with the valuation obtained using the modified equivalent value factor (MEVF) method, a modified area value method that considers changes in meteorological conditions. We found that the MEVF method still underestimates the value of ecosystem services, although it can reflect to some extent the annual variation in meteorological factors. Our findings contribute to increasing the accuracy of ecosystem services evaluation. PMID:29438427

  12. Climate-dependence of ecosystem services in a nature reserve in northern China.

    PubMed

    Fang, Jiaohui; Song, Huali; Zhang, Yiran; Li, Yanran; Liu, Jian

    2018-01-01

    Evaluation of ecosystem services has become a research hotspot, but uncertainties over appropriate methods remain. Evaluation can be based on the unit price of services (the services value method) or the unit price of the area (the area value method). The former takes meteorological factors into account, while the latter does not. This study uses Kunyu Mountain Nature Reserve as a study site at which to test the effects of climate on ecosystem services. Measured data and remote sensing imagery processed in a geographic information system were combined to evaluate gas regulation and soil conservation, and the influence of meteorological factors on ecosystem services. The results were used to analyze the appropriateness of the area value method. Our results show that the value of ecosystem services is significantly affected by meteorological factors, especially precipitation. Use of the area value method (which ignores the impacts of meteorological factors) could considerably reduce the accuracy of ecosystem services evaluation. Results were also compared with the valuation obtained using the modified equivalent value factor (MEVF) method, a modified area value method that considers changes in meteorological conditions. We found that the MEVF method still underestimates the value of ecosystem services, although it can reflect to some extent the annual variation in meteorological factors. Our findings contribute to increasing the accuracy of ecosystem services evaluation.

  13. The X-Factor: an evaluation of common methods used to analyse major inter-segment kinematics during the golf swing.

    PubMed

    Brown, Susan J; Selbie, W Scott; Wallace, Eric S

    2013-01-01

    A common biomechanical feature of a golf swing, described in various ways in the literature, is the interaction between the thorax and pelvis, often termed the X-Factor. There is, however, no consistent method used within the golf biomechanics literature to calculate these segment interactions. The purpose of this study was to examine X-Factor data calculated using three reported methods in order to determine the similarity or otherwise of the data calculated by each. A twelve-camera three-dimensional motion capture system was used to capture the driver swings of 19 participants, and a subject-specific three-dimensional biomechanical model was created, with the position and orientation of each model estimated using a global optimisation algorithm. Comparison of the X-Factor methods showed significant differences for events during the swing (P < 0.05). Data for each kinematic measure were derived as a time series for all three methods, and regression analysis of these data showed that whilst one method could be successfully mapped to another, the mappings between methods are subject dependent (P < 0.05). Findings suggest that a consistent methodology considering the X-Factor from a joint angle approach is most insightful in describing a golf swing.

  14. Methods for analysis of cracks in three-dimensional solids

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Newman, J. C., Jr.

    1984-01-01

    Analytical and numerical methods for evaluating the stress-intensity factors of three-dimensional cracks in solids are presented, with reference to fatigue failure in aerospace structures. Exact solutions for embedded elliptical and circular cracks in infinite solids are discussed, along with approximate methods including the finite element method, the boundary-integral equation method, the line-spring model, and mixed methods. Among the mixed methods, the superposition of analytical and finite element methods, the stress-difference, discretization-error, alternating, and finite element-alternating methods are reviewed. Comparison of the stress-intensity factor solutions for some three-dimensional crack configurations showed good agreement. Thus, the choice of a particular method for evaluating the stress-intensity factor is limited only by the availability of resources and computer programs.

  15. Comparison of Adaline and Multiple Linear Regression Methods for Rainfall Forecasting

    NASA Astrophysics Data System (ADS)

    Sutawinaya, IP; Astawa, INGA; Hariyanti, NKD

    2018-01-01

    Heavy rainfall can cause disasters, so forecasts of rainfall intensity are needed. The main factor causing flooding is high rainfall intensity, which pushes rivers beyond their capacity and floods the surrounding area. Rainfall is a dynamic factor, which makes it particularly interesting to study. To support rainfall forecasting, methods ranging from artificial intelligence (AI) to statistics can be used. In this research, we used Adaline as the AI method and regression as the statistical method. The more accurate forecast indicates which method is better suited to rainfall forecasting here.
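    Adaline trained with the Widrow-Hoff (LMS) delta rule can be sketched as follows (the toy 'rainfall' relation, learning rate, and all other parameters are made up for illustration):

    ```python
    import numpy as np

    def train_adaline(X, y, lr=0.01, epochs=100):
        """Adaline: a linear neuron trained by the Widrow-Hoff (LMS)
        delta rule on the raw linear output (not a thresholded one)."""
        rng = np.random.default_rng(0)
        w = rng.normal(scale=0.01, size=X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for xi, target in zip(X, y):
                error = target - (xi @ w + b)   # linear activation
                w += lr * error * xi            # delta rule update
                b += lr * error
        return w, b

    # Toy regression: intensity = 2 * predictor + 1 (made-up relation).
    X = np.array([[0.0], [1.0], [2.0], [3.0]])
    y = 2.0 * X[:, 0] + 1.0
    w, b = train_adaline(X, y, lr=0.05, epochs=500)
    print(round(w[0], 2), round(b, 2))  # close to 2.0 and 1.0
    ```

    On linearly related data, Adaline converges to the same fit as least-squares regression, which is why the two methods are natural competitors for this task.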

  16. Sufficient Forecasting Using Factor Models

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei

    2017-01-01

    We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality was first reduced via a high-dimensional (approximate) factor model implemented by the principal component analysis. Using the extracted factors, we develop a novel forecasting method called the sufficient forecasting, which provides a set of sufficient predictive indices, inferred from high-dimensional predictors, to deliver additional predictive power. The projected principal component analysis will be employed to enhance the accuracy of inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between the sufficient forecasting and the deep learning architecture is explicitly stated. The sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends the sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions as well as the estimates of the sufficient predictive indices. We further show that the natural method of running multiple regression of target on estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that the sufficient forecasting improves upon the linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537

  17. Effective Factors in Providing Holistic Care: A Qualitative Study

    PubMed Central

    Zamanzadeh, Vahid; Jasemi, Madineh; Valizadeh, Leila; Keogh, Brian; Taleghani, Fariba

    2015-01-01

    Background: Holistic care is a comprehensive model of caring. Previous studies have shown that most nurses do not apply this method. Examining the factors that affect nurses' provision of holistic care can help to enhance it, and studying these factors from the nurses' point of view will generate real and meaningful concepts that can help to extend this method of caring. Materials and Methods: A qualitative study was used to identify the factors affecting holistic care provision. Data gathered by interviewing 14 nurses from university hospitals in Iran were analyzed with a conventional qualitative content analysis method using MAXQDA (professional software for qualitative and mixed-methods data analysis). Results: Analysis of the data revealed three main themes as factors affecting the provision of holistic care: the structure of the educational system, the professional environment, and personality traits. Conclusion: Establishing appropriate educational and management systems, and promoting religiousness and encouragement, will induce nurses to provide holistic care and ultimately improve the quality of their caring. PMID:26009677

  18. Application of the Bootstrap Methods in Factor Analysis.

    ERIC Educational Resources Information Center

    Ichikawa, Masanori; Konishi, Sadanori

    1995-01-01

    A Monte Carlo experiment was conducted to investigate the performance of bootstrap methods in normal theory maximum likelihood factor analysis when the distributional assumption was satisfied or unsatisfied. Problems arising with the use of bootstrap methods are highlighted. (SLD)

  19. Comparison of point-of-care methods for preparation of platelet concentrate (platelet-rich plasma).

    PubMed

    Weibrich, Gernot; Kleis, Wilfried K G; Streckbein, Philipp; Moergel, Maximilian; Hitzler, Walter E; Hafner, Gerd

    2012-01-01

    This study analyzed the concentrations of platelets and growth factors in platelet-rich plasma (PRP), which are likely to depend on the method used for its production. The cellular composition and growth factor content of platelet concentrates (platelet-rich plasma) produced by six different procedures were quantitatively analyzed and compared. Platelet and leukocyte counts were determined on an automatic cell counter, and growth factors were analyzed using enzyme-linked immunosorbent assay. The principal differences between the analyzed PRP production methods (the blood bank method of an intermittent-flow centrifuge system/platelet apheresis, and the five point-of-care methods) and the resulting platelet concentrates were evaluated with regard to the resulting platelet, leukocyte, and growth factor levels. The platelet counts in both whole blood and PRP were generally higher in women than in men; no differences were observed with regard to age. Statistical analysis of platelet-derived growth factor AB (PDGF-AB) and transforming growth factor β1 (TGF-β1) showed no differences with regard to age or gender. Platelet counts correlated closely with both TGF-β1 and PDGF-AB concentrations. Correlations between leukocyte counts and PDGF-AB levels were rare, although the two showed certain parallel tendencies. TGF-β1 levels derive in substantial part from platelets; the findings also emphasize the role of leukocytes, in addition to platelets, as a source of growth factors in PRP. All methods of producing PRP showed high variability in platelet counts and growth factor levels. The highest growth factor levels were found in the PRP prepared using the Platelet Concentrate Collection System manufactured by Biomet 3i.

  20. Algorithms for Solvents and Spectral Factors of Matrix Polynomials

    DTIC Science & Technology

    1981-01-01

    Leang S. Shieh, Yih T. Tsay and Norman P. Coleman. A generalized Newton method, based on the contracted gradient… of a matrix polynomial, is derived for solving the right (left) solvents and spectral factors of matrix polynomials. Two methods of selecting initial… estimates for rapid convergence of the newly developed numerical method are proposed. Also, new algorithms for solving complete sets of the right

  1. An alternative method for centrifugal compressor loading factor modelling

    NASA Astrophysics Data System (ADS)

    Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.

    2017-08-01

    In classical design methods, the loading factor at the design point is calculated by one or another empirical formula; performance modelling as a whole is out of consideration. Test data of compressor stages demonstrate that the loading factor versus the flow coefficient at the impeller exit has a linear character, independent of compressibility. The known Universal Modelling Method exploits this fact. Two points define the function: the loading factor at the design point and at zero flow rate. The corresponding formulae include empirical coefficients, and a good modelling result is possible only if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova proposed to define the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. Equations with universal empirical coefficients are proposed; the calculation error lies in the range of ±1.5%. This alternative model of loading factor performance is included in new versions of the Universal Modelling Method.
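    The two-point linear model described above can be sketched in a few lines; the parameter values and names below are invented for illustration and are not taken from the Universal Modelling Method itself:

```python
import math

def loading_factor_line(psi_zero, phi_des, psi_des):
    """Linear loading-factor model psi(phi) through the zero-flow point
    (0, psi_zero) and the design point (phi_des, psi_des).
    Parameter names are hypothetical."""
    slope = (psi_des - psi_zero) / phi_des       # d(psi)/d(phi), negative for typical stages
    angle = math.degrees(math.atan(-slope))      # inclination of the line, in degrees
    return lambda phi: psi_zero + slope * phi, angle

psi, angle = loading_factor_line(psi_zero=0.9, phi_des=0.30, psi_des=0.55)
print(round(psi(0.15), 3), round(angle, 1))  # loading factor at half design flow, inclination
```

    Given the two anchor points, every intermediate loading factor follows from the straight line, which is the essence of the two-point formulation.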

  2. Proximal risk factors and suicide methods among suicide completers from national suicide mortality data 2004-2006 in Korea.

    PubMed

    Im, Jeong-Soo; Choi, Soon Ho; Hong, Duho; Seo, Hwa Jeong; Park, Subin; Hong, Jin Pyo

    2011-01-01

    This study was conducted to examine differences in proximal risk factors and suicide methods by sex and age in the national suicide mortality data in Korea. Data were collected from the National Police Agency and the National Statistical Office of Korea on suicide completers from 2004 to 2006. The 31,711 suicide case records were used to analyze suicide rates, methods, and proximal risk factors by sex and age. The suicide rate increased with age, especially in men. The most common proximal risk factor for suicide was medical illness in both sexes. The most common proximal risk factor for subjects younger than 30 years was found to be a conflict in relationships with family members, partner, or friends. Medical illness was found to increase in prevalence as a risk factor with age. Hanging/suffocation was the most common suicide method used by both sexes. The use of drug/pesticide poisoning for suicide increased with age. A fall from height or hanging/suffocation was more popular in the younger age groups. Because proximal risk factors and suicide methods varied with sex and age, different suicide prevention measures are required after consideration of both of these parameters. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. Scale factor measure method without turntable for angular rate gyroscope

    NASA Astrophysics Data System (ADS)

    Qi, Fangyi; Han, Xuefei; Yao, Yanqing; Xiong, Yuting; Huang, Yuqiong; Wang, Hua

    2018-03-01

    In this paper, a scale factor test method without a turntable is originally designed for the angular rate gyroscope. A test system consisting of a test device, a data acquisition circuit and data processing software based on the LabVIEW platform is designed. Taking advantage of a gyroscope's sensitivity to angular rate, a gyroscope with a known scale factor serves as the standard gyroscope. The standard gyroscope is installed on the test device together with the measured gyroscope. By shaking the test device around its edge, which is parallel to the input axes of the gyroscopes, the scale factor of the measured gyroscope can be obtained in real time by the data processing software. The test method is fast and keeps the test system miniaturized and easy to carry or move. Measuring a quartz MEMS gyroscope's scale factor multiple times by this method, the difference is less than 0.2%; compared with turntable testing, the scale factor difference is less than 1%. The accuracy and repeatability of the test system appear good.
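    The ratio principle behind the method can be sketched as follows: both gyroscopes sense the same (unknown) shaking rate, so a least-squares fit of one output against the other recovers the unknown scale factor. The signals and noise levels below are simulated, not measured:

```python
import numpy as np

rng = np.random.default_rng(0)
sf_std = 1.25        # known scale factor of the standard gyroscope (V per deg/s), assumed
sf_true = 0.80       # "unknown" scale factor of the gyro under test, to be recovered

rate = 10.0 * np.sin(np.linspace(0.0, 6.0, 500))            # shared rate from shaking the device
out_std = sf_std * rate + rng.normal(0.0, 1e-3, rate.size)   # standard gyro output
out_meas = sf_true * rate + rng.normal(0.0, 1e-3, rate.size) # measured gyro output

# out_meas ≈ (sf_meas / sf_std) * out_std, so a least-squares slope gives the ratio
ratio = np.dot(out_std, out_meas) / np.dot(out_std, out_std)
sf_meas = ratio * sf_std
print(round(sf_meas, 4))   # close to sf_true
```

    No turntable is needed because the common rate cancels out of the ratio.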

  4. An impact analysis of forecasting methods and forecasting parameters on bullwhip effect

    NASA Astrophysics Data System (ADS)

    Silitonga, R. Y. H.; Jelly, N.

    2018-04-01

    The bullwhip effect is an increase of the variance of demand fluctuation from downstream to upstream of a supply chain. Forecasting methods and forecasting parameters are recognized as factors that affect the bullwhip phenomenon. To study these factors, simulations can be developed; previous studies have simulated the bullwhip effect in several ways, such as mathematical equation modelling, information control modelling, and computer programs. In this study a spreadsheet program named Bullwhip Explorer was used to simulate the bullwhip effect. Several scenarios were developed to show the change in the bullwhip effect ratio caused by differences in forecasting methods and forecasting parameters. The forecasting methods used were mean demand, moving average, exponential smoothing, demand signalling, and minimum expected mean squared error. The forecasting parameters were the moving average period, smoothing parameter, signalling factor, and safety stock factor. The results showed that decreasing the moving average period, increasing the smoothing parameter, or increasing the signalling factor can create a bigger bullwhip effect ratio, while the safety stock factor had no impact on the bullwhip effect.
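    A minimal simulation in the spirit described, using an order-up-to style policy with a moving-average forecast, reproduces the moving-average-period effect. The policy details below are assumptions for illustration, not the internals of Bullwhip Explorer:

```python
import numpy as np

def bullwhip_ratio(p=4, lead_time=2, n=20000, seed=1):
    """Bullwhip ratio var(orders)/var(demand) when orders track demand plus a
    lead-time-scaled change in a p-period moving-average forecast (toy model)."""
    rng = np.random.default_rng(seed)
    demand = rng.normal(100.0, 10.0, n)
    orders = np.empty(n - p - 1)
    for t in range(p + 1, n):
        f_now = demand[t - p:t].mean()           # forecast made this period
        f_prev = demand[t - p - 1:t - 1].mean()  # forecast made last period
        orders[t - p - 1] = demand[t] + (lead_time + 1) * (f_now - f_prev)
    return orders.var() / demand.var()

print(round(bullwhip_ratio(p=4), 2), round(bullwhip_ratio(p=12), 2))
# a shorter averaging window -> larger bullwhip ratio, as the abstract reports
```

    The ratio exceeds 1 because forecast revisions amplify demand noise; lengthening the moving-average window damps the revisions and shrinks the ratio.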

  5. Product competitiveness analysis for e-commerce platform of special agricultural products

    NASA Astrophysics Data System (ADS)

    Wan, Fucheng; Ma, Ning; Yang, Dongwei; Xiong, Zhangyuan

    2017-09-01

    On the basis of analyzing the influence factors of product competitiveness on an e-commerce platform for special agricultural products, and the characteristics of analytical methods for their competitiveness, the price, sales volume, postage-included service, store reputation, popularity, etc. were selected in this paper as the dimensions for analyzing the competitiveness of the agricultural products, with principal component factor analysis taken as the competitiveness analysis method. Specifically, a web crawler was adopted to capture the information of various special agricultural products on the e-commerce platform chi.taobao.com. The original data captured thereby were preprocessed, and a MySQL database was adopted to establish the information library for the special agricultural products. Principal component factor analysis was then adopted to establish the analysis model for the competitiveness of the special agricultural products, with SPSS used in the process to obtain the competitiveness evaluation factor system (support degree factor, price factor, service factor and evaluation factor) of the special agricultural products. Finally, the linear regression method was adopted to establish the competitiveness index equation of the special agricultural products for estimating their competitiveness.
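    A principal-component style factor scoring of product indicators, as described, can be sketched with numpy alone; the synthetic data below stand in for the crawled chi.taobao.com records, and the composite-index weighting is one common choice, not necessarily the paper's regression equation:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical product table: 200 products x 4 indicators
# (e.g. price, sales volume, store reputation, popularity), made correlated
X = rng.normal(0.0, 1.0, (200, 4)) @ np.array([[1.0, 0.4, 0.0, 0.2],
                                               [0.0, 1.0, 0.5, 0.3],
                                               [0.0, 0.0, 1.0, 0.1],
                                               [0.0, 0.0, 0.0, 1.0]])

Z = (X - X.mean(0)) / X.std(0)                     # standardize each indicator
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigval)[::-1]                   # components, largest variance first
eigval, eigvec = eigval[order], eigvec[:, order]
scores = Z @ eigvec                                # factor scores per product
weights = eigval / eigval.sum()                    # variance-explained weights
index = scores @ weights                           # composite competitiveness index
print(index.shape, round(weights[0], 2))
```

    Each product's index is a variance-weighted sum of its factor scores, which is the usual way a factor system is collapsed into a single competitiveness number.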

  6. Piecewise compensation for the nonlinear error of fiber-optic gyroscope scale factor

    NASA Astrophysics Data System (ADS)

    Zhang, Yonggang; Wu, Xunfeng; Yuan, Shun; Wu, Lei

    2013-08-01

    Fiber-Optic Gyroscope (FOG) scale factor nonlinear error will result in errors in Strapdown Inertial Navigation System (SINS). In order to reduce nonlinear error of FOG scale factor in SINS, a compensation method is proposed in this paper based on curve piecewise fitting of FOG output. Firstly, reasons which can result in FOG scale factor error are introduced and the definition of nonlinear degree is provided. Then we introduce the method to divide the output range of FOG into several small pieces, and curve fitting is performed in each output range of FOG to obtain scale factor parameter. Different scale factor parameters of FOG are used in different pieces to improve FOG output precision. These parameters are identified by using three-axis turntable, and nonlinear error of FOG scale factor can be reduced. Finally, three-axis swing experiment of SINS verifies that the proposed method can reduce attitude output errors of SINS by compensating the nonlinear error of FOG scale factor and improve the precision of navigation. The results of experiments also demonstrate that the compensation scheme is easy to implement. It can effectively compensate the nonlinear error of FOG scale factor with slightly increased computation complexity. This method can be used in inertial technology based on FOG to improve precision.
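    The piecewise-fitting idea can be illustrated with a small numpy sketch; the simulated gyro response, segment count, and fit order below are invented for illustration and are not the authors' calibration data:

```python
import numpy as np

# Simulated FOG calibration run: true rate vs. gyro output with a mild nonlinearity
rate = np.linspace(-200.0, 200.0, 401)                   # deg/s, from a turntable sweep
output = 1.0e4 * rate + 0.02 * rate**2 * np.sign(rate)   # counts, assumed nonlinear response

# Divide the output range into pieces and fit a scale-factor polynomial per piece
edges = np.linspace(output.min(), output.max(), 5)       # 4 pieces
piece_fits = []
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (output >= lo) & (output <= hi)
    piece_fits.append((lo, hi, np.polyfit(output[mask], rate[mask], 1)))

def compensate(raw):
    """Convert a raw output to rate using the fit of the piece covering it."""
    for lo, hi, coeffs in piece_fits:
        if lo <= raw <= hi:
            return np.polyval(coeffs, raw)
    lo, hi, coeffs = piece_fits[0] if raw < edges[0] else piece_fits[-1]
    return np.polyval(coeffs, raw)

single = np.polyfit(output, rate, 1)                     # one global fit, for comparison
err_piece = max(abs(compensate(o) - r) for o, r in zip(output, rate))
err_single = np.abs(np.polyval(single, output) - rate).max()
print(err_piece < err_single)   # piecewise parameters beat a single scale factor
```

    Using a separate scale-factor parameter per output segment shrinks the residual nonlinearity, which is the effect the compensation scheme exploits.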

  7. Application of Gray Relational Analysis Method in Comprehensive Evaluation on the Customer Satisfaction of Automobile 4S Enterprises

    NASA Astrophysics Data System (ADS)

    Cenglin, Yao

    An important way for car sales enterprises to continuously boost sales and expand their customer groups is to enhance customer satisfaction. The customer satisfaction of car sales (4S) enterprises depends on many factors. By using the grey relational analysis method, the various factors affecting customer satisfaction can be combined into a single evaluation. Through vertical contrast, car sales enterprises can then find the specific factors that will improve customer satisfaction, and thereby increase sales volume and benefits. Grey relational analysis has become a good method and means of analyzing and evaluating enterprises.
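    The grey relational analysis computation itself can be sketched as follows; the data rows are invented (row 0 plays the role of a customer-satisfaction reference series, the others candidate factors), and ρ = 0.5 is the customary distinguishing coefficient:

```python
import numpy as np

def grey_relational_grades(X, rho=0.5):
    """Grey relational grade of each series against the first (reference) row.
    rho is the distinguishing coefficient, usually 0.5."""
    X = np.asarray(X, float)
    lo = X.min(1, keepdims=True)
    X = (X - lo) / (X.max(1, keepdims=True) - lo)   # min-max normalize each series
    delta = np.abs(X - X[0])                        # deviations from the reference
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)
    return coeff.mean(axis=1)                       # grade per series

data = [[0.9, 0.8, 0.7, 0.9, 1.0],     # reference (e.g. satisfaction over 5 periods)
        [0.85, 0.7, 0.55, 0.8, 0.95],  # factor that tracks the reference closely
        [0.1, 0.9, 0.2, 0.8, 0.3],     # erratic factor
        [0.5, 0.5, 0.5, 0.5, 0.6]]     # nearly flat factor
grades = grey_relational_grades(data)
print(np.round(grades, 3))
```

    A higher grade marks a factor whose trajectory is closer to the reference, which is how the method ranks the factors driving satisfaction.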

  8. RM-DEMATEL: a new methodology to identify the key factors in PM2.5.

    PubMed

    Chen, Yafeng; Liu, Jie; Li, Yunpeng; Sadiq, Rehan; Deng, Yong

    2015-04-01

    The weather system is a relatively complex dynamic system whose factors mutually influence PM2.5 concentration. In this paper, a new method is proposed to quantify the influence of the other factors in the weather system on PM2.5 and to identify the most important factors for PM2.5 with limited resources. The relation map (RM) is used to figure out the direct relation matrix of 14 factors in PM2.5. The decision making trial and evaluation laboratory (DEMATEL) is applied to calculate the causal relationship and extent of mutual influence among the 14 factors. According to the ranking results of the proposed method, the most important key factors are sulfur dioxide (SO2) and nitrogen oxides (NO(x)). In addition, the ambient maximum temperature (T(max)), the concentration of PM10, and the wind direction (W(dir)) are important factors for PM2.5. The proposed method can also be applied to other environmental management systems to identify key factors.
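    The DEMATEL step of the method can be sketched as follows; the direct-relation matrix is a 4-factor toy, not the paper's 14-factor weather data, and the row-sum normalization shown is one standard convention:

```python
import numpy as np

def dematel(D):
    """DEMATEL on a direct-relation matrix D: total relation matrix
    T = N (I - N)^-1, then prominence (r + c) and relation (r - c)."""
    D = np.asarray(D, float)
    N = D / max(D.sum(1).max(), D.sum(0).max())   # normalize direct relations
    T = N @ np.linalg.inv(np.eye(len(D)) - N)     # total (direct + indirect) influence
    r, c = T.sum(1), T.sum(0)
    return r + c, r - c                            # prominence, cause(+)/effect(-)

# Toy direct-relation matrix: 0-4 influence scores among 4 factors (hypothetical)
D = [[0, 3, 2, 4],
     [1, 0, 1, 3],
     [2, 1, 0, 2],
     [0, 1, 0, 0]]
prominence, relation = dematel(D)
print(np.round(prominence, 2), np.round(relation, 2))
```

    Factors with positive relation are net causes (like SO2 and NOx in the paper's ranking); negative relation marks net receivers.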

  9. Effective factors in providing holistic care: a qualitative study.

    PubMed

    Zamanzadeh, Vahid; Jasemi, Madineh; Valizadeh, Leila; Keogh, Brian; Taleghani, Fariba

    2015-01-01

    Holistic care is a comprehensive model of caring, yet previous studies have shown that most nurses do not apply it. Examining the factors that shape nurses' provision of holistic care can help to enhance it, and studying these factors from the point of view of nurses generates real and meaningful concepts that can help to extend this model of caring. A qualitative study was used to identify effective factors in holistic care provision. Data gathered by interviewing 14 nurses from university hospitals in Iran were analyzed with a conventional qualitative content analysis method using MAXQDA (professional software for qualitative and mixed methods data analysis). Analysis of the data revealed three main themes as effective factors in providing holistic care: the structure of the educational system, the professional environment, and personality traits. Establishing appropriate educational and management systems and promoting religiousness and encouragement will induce nurses to provide holistic care and ultimately improve the quality of their caring.

  10. Combining global and local approximations

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1991-01-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
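    The idea of replacing a constant scaling factor with a linearly varying one can be illustrated numerically; the "crude" and "refined" functions below are simple stand-ins chosen for the sketch, not the beam models from the report:

```python
# Global-local approximation sketch: scale a cheap model by a linearly varying
# factor matched to the refined model's value and slope at a single point x0.
def crude(x):        # coarse model (cheap to evaluate)
    return 2.0 * x * x

def refined(x):      # refined model (expensive in practice)
    return 2.0 * x * x + 0.5 * x**3

x0, h = 1.0, 1e-6
beta0 = refined(x0) / crude(x0)
# finite-difference slope of the ratio beta(x) = refined(x) / crude(x) at x0
dbeta = (refined(x0 + h) / crude(x0 + h) - beta0) / h

def gla(x):
    """Refined-model estimate via the linearly varying scale factor."""
    return (beta0 + dbeta * (x - x0)) * crude(x)

def constant_scaling(x):
    """Conventional constant scale factor, for comparison."""
    return beta0 * crude(x)

x = 1.5
print(abs(gla(x) - refined(x)) < abs(constant_scaling(x) - refined(x)))
```

    Matching both value and derivative of the scale factor extends the range over which the cheap model tracks the refined one, which is the GLA method's selling point.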

  11. Compositions and methods for improved plant feedstock

    DOEpatents

    Shen, Hui; Chen, Fang; Dixon, Richard A

    2014-12-02

    The invention provides methods for modifying lignin content and composition in plants and achieving associated benefits therefrom involving altered expression of newly discovered MYB4 transcription factors. Nucleic acid constructs for modifying MYB4 transcription factor expression are described. By over-expressing the identified MYB4 transcription factors, for example, an accompanying decrease in lignin content may be achieved. Plants are provided by the invention comprising such modifications, as are methods for their preparation and use.

  12. Determination of K-shell absorption jump factors and jump ratios for La2O3, Ce and Gd using two different methods

    NASA Astrophysics Data System (ADS)

    Akman, Ferdi; Durak, Rıdvan; Kaçal, Mustafa Recep; Turhan, Mehmet Fatih; Akdemir, Fatma

    2015-02-01

    The K-shell absorption jump factors and jump ratios for La2O3, Ce and Gd samples have been determined using the gamma/X-ray attenuation and EDXRF methods. This is the first time that the K-shell absorption jump factor and jump ratio have been discussed for the present elements using two different methods. To detect K X-rays, a high-resolution Si(Li) detector was used. The experimental K-shell absorption jump factors and jump ratios were compared with theoretically calculated values.

  13. Correction factors for the NMi free-air ionization chamber for medium-energy x-rays calculated with the Monte Carlo method.

    PubMed

    Grimbergen, T W; van Dijk, E; de Vries, W

    1998-11-01

    A new method is described for the determination of x-ray quality dependent correction factors for free-air ionization chambers. The method is based on weighting correction factors for mono-energetic photons, which are calculated using the Monte Carlo method, with measured air kerma spectra. With this method, correction factors for electron loss, scatter inside the chamber and transmission through the diaphragm and front wall have been calculated for the NMi free-air chamber for medium-energy x-rays for a wide range of x-ray qualities in use at NMi. The newly obtained correction factors were compared with the values in use at present, which are based on interpolation of experimental data for a specific set of x-ray qualities. For x-ray qualities which are similar to this specific set, the agreement between the correction factors determined with the new method and those based on the experimental data is better than 0.1%, except for heavily filtered x-rays generated at 250 kV. For x-ray qualities dissimilar to the specific set, differences up to 0.4% exist, which can be explained by uncertainties in the interpolation procedure of the experimental data. Since the new method does not depend on experimental data for a specific set of x-ray qualities, the new method allows for a more flexible use of the free-air chamber as a primary standard for air kerma for any x-ray quality in the medium-energy x-ray range.
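    The weighting scheme described, mono-energetic Monte Carlo correction factors averaged over a measured air kerma spectrum, reduces to a weighted mean; all numbers below are invented for illustration, not NMi data:

```python
import numpy as np

# Mono-energetic correction factors (e.g. for electron loss) from Monte Carlo,
# tabulated on an energy grid -- values are made up for illustration
energy_keV = np.array([50.0, 100.0, 150.0, 200.0, 250.0])
k_mono = np.array([1.000, 1.003, 1.008, 1.015, 1.024])

# Measured air kerma spectrum of one x-ray quality (relative weights)
spectrum = np.array([0.10, 0.35, 0.30, 0.18, 0.07])

# Quality-dependent factor = spectrum-weighted mean of the mono-energetic factors
k_quality = np.sum(spectrum * k_mono) / spectrum.sum()
print(round(k_quality, 4))
```

    Because the weighting uses the measured spectrum of whatever quality is at hand, the same mono-energetic table serves any x-ray quality in the range, which is the flexibility the abstract highlights.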

  14. Violent and non-violent methods of attempted and completed suicide in Swedish young men: the role of early risk factors.

    PubMed

    Stenbacka, Marlene; Jokinen, Jussi

    2015-08-14

    There is a paucity of studies on the role of early risk factors for the choice of methods for violent suicide attempts. Adolescent risk factors for the choice of violent or non-violent methods for suicide attempts and the risk of subsequent suicide were studied using a longitudinal design. A national Swedish cohort of 48 834 18-20-year-old young men conscripted for military service from 1969 to 1970 was followed through official registers during a 37-year period. Two questionnaires concerning their psychosocial background were answered by each conscript. Cox proportional hazard regression analyses were used to estimate the risk for different methods of attempted suicide and later suicide. A total of 1195 (2.4 %) men had made a suicide attempt and of these, 133 (11.1 %) committed suicide later. The number of suicide victims among the non-attempters was 482 (1 %). Half of the suicides occurred during the same year as the attempt. Suicide victims had earlier onset of suicidal behaviour and had more often used hanging as a method of attempted suicide than those who did not later commit suicide. The early risk factors for both violent and non-violent methods of suicide attempt were quite similar. Violent suicide attempts, especially by hanging, are associated with a clearly elevated suicide risk in men and require special clinical and public health attention. The early risk factors related to the choice of either a violent or a non-violent suicide attempt method are interlinked and circumstantial factors temporally close to the suicide attempt, such as access to a specific method, may partly explain the choice of method.

  15. Structural Deterministic Safety Factors Selection Criteria and Verification

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1992-01-01

    Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratio is rooted in resistive and applied stress probability distributions. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index of the combined safety factors was derived, from which the corresponding reliability showed that the deterministic method is not reliability sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.

  16. 21 CFR 172.860 - Fatty acids.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    .... (2) It is free of chick-edema factor: (i) As evidenced during the bioassay method for determining the chick-edema factor as prescribed in paragraph (c)(2) of this section; or (ii) As evidenced by the... the bioassay method prescribed in paragraph (c)(2) of this section for determining chick-edema factor...

  17. 21 CFR 172.860 - Fatty acids.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    .... (2) It is free of chick-edema factor: (i) As evidenced during the bioassay method for determining the chick-edema factor as prescribed in paragraph (c)(2) of this section; or (ii) As evidenced by the... the bioassay method prescribed in paragraph (c)(2) of this section for determining chick-edema factor...

  18. Carrageenan: the difference between PNG and KCl gel precipitation methods as Lactobacillus acidophilus encapsulation material

    NASA Astrophysics Data System (ADS)

    Setijawati, D.; Nursyam, H.; Salis, H.

    2018-04-01

    The effects of the materials and methods used in the preparation of Lactobacillus acidophilus microcapsules on viability were studied. The research method was an experimental laboratory design. The research variables were the kind of material (A) as the first factor, with sub-factors (A1 = Eucheuma cottonii, A2 = Eucheuma spinosum, A3 = a 1:1 mixture of Eucheuma cottonii and Eucheuma spinosum), while the second factor was the extraction method used to produce carrageenan (B), with sub-factors (B1 = Philippine Natural Grade modification, B2 = KCl gel press precipitation). Differences were analyzed with analysis of variance followed by Fisher's test, using Minitab 16. The results show that both the material and the extraction method had significantly different effects on the viability of Lactobacillus acidophilus. The highest mean viability, 7.14 log CFU/mL, was obtained with the mixture of Eucheuma cottonii and Eucheuma spinosum and the KCl gel press method. The use of a kappa-iota carrageenan mixture as encapsulation material with the KCl gel press method is therefore suggested for the Lactobacillus acidophilus microencapsulation process, because this treatment gave the highest average viability.

  19. Neural Correlates of Biased Responses: The Negative Method Effect in the Rosenberg Self-Esteem Scale Is Associated with Right Amygdala Volume.

    PubMed

    Wang, Yinan; Kong, Feng; Huang, Lijie; Liu, Jia

    2016-10-01

    Self-esteem is a widely studied construct in psychology that is typically measured by the Rosenberg Self-Esteem Scale (RSES). However, a series of cross-sectional and longitudinal studies have suggested that a simple and widely used unidimensional factor model does not provide an adequate explanation of RSES responses due to method effects. To identify the neural correlates of the method effect, we sought to determine whether and how method effects were associated with the RSES and investigate the neural basis of these effects. Two hundred and eighty Chinese college students (130 males; mean age = 22.64 years) completed the RSES and underwent magnetic resonance imaging (MRI). Behaviorally, method effects were linked to both positively and negatively worded items in the RSES. Neurally, the right amygdala volume negatively correlated with the negative method factor, while the hippocampal volume positively correlated with the general self-esteem factor in the RSES. The neural dissociation between the general self-esteem factor and negative method factor suggests that there are different neural mechanisms underlying them. The amygdala is involved in modulating negative affectivity; therefore, the current study sheds light on the nature of method effects that are related to self-report with a mix of positively and negatively worded items. © 2015 Wiley Periodicals, Inc.

  20. Calibration of resistance factors for drilled shafts for the new FHWA design method.

    DOT National Transportation Integrated Search

    2013-01-01

    The Load and Resistance Factor Design (LRFD) calibration of deep foundations in Louisiana was first completed for driven piles (LTRC Final Report 449) in May 2009 and then for drilled shafts using the 1999 FHWA design method (O'Neill and Reese method) (...

  1. Methods for Improving Information from ’Undesigned’ Human Factors Experiments.

    DTIC Science & Technology

    Human factors engineering, Information processing, Regression analysis, Experimental design, Least squares method, Analysis of variance, Correlation techniques, Matrices (Mathematics), Multiple disciplines, Mathematical prediction

  2. On the Preferred Flesh Color of Japanese and Chinese and the Determining Factors —Investigation of the Younger Generation Using Method of Successive Categories and Semantic Differential Method—

    NASA Astrophysics Data System (ADS)

    Fan, Ying; Deng, Pei; Tsuruoka, Hideki; Aoki, Naokazu; Kobayashi, Hiroyuki

    The preferred flesh color was surveyed by the successive five-category method and the SD method in Japan and China to investigate its determining factors. The flesh color most preferred by Chinese observers was more reddish than that preferred by Japanese observers, while the flesh color accepted by 50% or more of the observers in China was larger in chromaticness and more yellowish than in Japan. Among the determining factors for selection of the preferred color extracted by a factor analysis, a big difference between Japanese and Chinese men was observed: the first factor of the former was kind personality, whereas that of the latter was showy appearance.

  3. Designing Feature and Data Parallel Stochastic Coordinate Descent Method for Matrix and Tensor Factorization

    DTIC Science & Technology

    2016-05-11

    AFRL-AFOSR-JP-TR-2016-0046: Designing Feature and Data Parallel Stochastic Coordinate Descent Method for Matrix and Tensor Factorization. U Kang, Korea... Grant number FA2386

  4. Hierarchical and coupling model of factors influencing vessel traffic flow.

    PubMed

    Liu, Zhao; Liu, Jingxian; Li, Huanhuan; Li, Zongzhi; Tan, Zhirong; Liu, Ryan Wen; Liu, Yi

    2017-01-01

    Understanding the characteristics of vessel traffic flow is crucial in maintaining navigation safety, efficiency, and overall waterway transportation management. Factors influencing vessel traffic flow possess diverse features such as hierarchy, uncertainty, nonlinearity, complexity, and interdependency. To reveal the impact mechanism of the factors influencing vessel traffic flow, a hierarchical model and a coupling model are proposed in this study based on the interpretative structural modeling method. The hierarchical model explains the hierarchies and relationships of the factors using a graph. The coupling model provides a quantitative method that explores interaction effects of factors using a coupling coefficient. The coupling coefficient is obtained by determining the quantitative indicators of the factors and their weights. Thereafter, the data obtained from Port of Tianjin is used to verify the proposed coupling model. The results show that the hierarchical model of the factors influencing vessel traffic flow can explain the level, structure, and interaction effect of the factors; the coupling model is efficient in analyzing factors influencing traffic volumes. The proposed method can be used for analyzing increases in vessel traffic flow in waterway transportation system.
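    The interpretative structural modeling step that turns a direct-influence matrix into hierarchy information can be sketched as follows; the 4-factor adjacency matrix is hypothetical, not the Port of Tianjin data:

```python
import numpy as np

def ism_reachability(A):
    """Boolean transitive closure of a direct-influence matrix (Warshall's
    algorithm), plus driving and dependence power -- the core computation
    of interpretative structural modeling."""
    R = np.asarray(A, bool) | np.eye(len(A), dtype=bool)
    for k in range(len(A)):
        R = R | (R[:, [k]] & R[[k], :])       # add paths passing through factor k
    driving = R.sum(axis=1)                   # how many factors each factor reaches
    dependence = R.sum(axis=0)                # how many factors reach each factor
    return R.astype(int), driving, dependence

# Hypothetical direct influences among 4 traffic-flow factors (chain 0->1->2->3)
A = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]
R, driving, dependence = ism_reachability(A)
print(R)
print(driving, dependence)
```

    High driving power with low dependence marks a root-level factor in the hierarchy graph; the reachability matrix is then partitioned into levels to draw it.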

  6. SEMICONDUCTOR TECHNOLOGY: An efficient dose-compensation method for proximity effect correction

    NASA Astrophysics Data System (ADS)

    Ying, Wang; Weihua, Han; Xiang, Yang; Renping, Zhang; Yang, Zhang; Fuhua, Yang

    2010-08-01

    A novel simple dose-compensation method is developed for proximity effect correction in electron-beam lithography. The sizes of exposed patterns depend on dose factors while the other exposure parameters (including accelerating voltage, resist thickness, exposure step size, substrate material, and so on) remain constant. The method is based on two reasonable assumptions in the evaluation of the compensated dose factor: one is that the relation between dose factors and circle diameters is linear in the range under consideration; the other is that, for simplicity, the compensated dose factor is only affected by the nearest neighbors. Four-layer-hexagon photonic crystal structures were fabricated as test patterns to demonstrate the method. Compared to the uncorrected structures, the homogeneity of the corrected hole size in the photonic crystal structures was clearly improved.
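    Under the two stated assumptions (linear dose-size relation, nearest-neighbor-only influence), the compensated doses follow from a small linear system; the leakage fraction and 1-D geometry below are invented for illustration:

```python
import numpy as np

# Nearest-neighbor proximity model (illustrative): the effective dose a hole
# receives is its own dose plus a fraction alpha leaked from each nearest
# neighbor. Solving the linear system yields the compensated dose factors.
n, alpha = 8, 0.12                     # 8 holes in a row, assumed leakage fraction
A = np.eye(n)
for i in range(n - 1):                 # nearest neighbors on a 1-D chain
    A[i, i + 1] = A[i + 1, i] = alpha
target = np.ones(n)                    # uniform effective dose wanted everywhere
dose = np.linalg.solve(A, target)      # compensated dose factor per hole
print(np.round(dose, 3))
# edge holes have fewer neighbors, so their dose is reduced less than inner ones
```

    The same construction extends to a hexagonal lattice by putting alpha at each nearest-neighbor pair in the matrix; homogeneous hole sizes then correspond to the uniform effective-dose target.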

  7. Obesity as a risk factor for developing functional limitation among older adults: A conditional inference tree analysis

    USDA-ARS?s Scientific Manuscript database

    Objective: To examine the risk factors of developing functional decline and make probabilistic predictions by using a tree-based method that allows higher order polynomials and interactions of the risk factors. Methods: The conditional inference tree analysis, a data mining approach, was used to con...

  8. Methods for Estimating Uncertainty in PMF Solutions: Examples with Ambient Air and Water Quality Data and Guidance on Reporting PMF Results

    EPA Science Inventory

    The new version of EPA’s positive matrix factorization (EPA PMF) software, 5.0, includes three error estimation (EE) methods for analyzing factor analytic solutions: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement (BS-DISP)...

  9. Determination of correction factors in beta radiation beams using Monte Carlo method.

    PubMed

    Polo, Ivón Oramas; Santos, William de Souza; Caldas, Linda V E

    2018-06-15

    The absorbed dose rate is the main characterization quantity for beta radiation, and the extrapolation chamber is considered the primary standard instrument. To determine absorbed dose rates in beta radiation beams, it is necessary to establish several correction factors. In this work, the correction factors for the backscatter due to the collecting electrode and the guard ring, and the correction factor for bremsstrahlung in beta secondary standard radiation beams, are presented. For this purpose, the Monte Carlo method was applied. The results obtained are considered acceptable, and they agree within the uncertainties. The differences between the backscatter factors determined by the Monte Carlo method and those of the ISO standard were 0.6%, 0.9% and 2.04% for the 90Sr/90Y, 85Kr and 147Pm sources respectively. The differences between the bremsstrahlung factors determined by the Monte Carlo method and those of the ISO standard were 0.25%, 0.6% and 1% for the 90Sr/90Y, 85Kr and 147Pm sources respectively. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. A comparison study on detection of key geochemical variables and factors through three different types of factor analysis

    NASA Astrophysics Data System (ADS)

    Hoseinzade, Zohre; Mokhtari, Ahmad Reza

    2017-10-01

Large numbers of variables have been measured to explain different phenomena. Factor analysis has widely been used to reduce the dimension of datasets, and the technique has also been employed to highlight underlying factors hidden in a complex system. As geochemical studies benefit from multivariate assays, application of this method is widespread in geochemistry. However, the conventional protocols for implementing factor analysis have some drawbacks in spite of their advantages. In the present study, a geochemical dataset of 804 soil samples, collected from a mining area in central Iran in a search for MVT-type Pb-Zn deposits, was used to compare several factor analysis approaches. Routine factor analysis, sequential factor analysis, and staged factor analysis were applied to the dataset, after opening the data with an additive log-ratio (alr) transformation, to extract the mineralization factor. A comparison between these methods indicated that sequential factor analysis most clearly revealed the MVT paragenesis elements in surface samples, with nearly 50% of the variation in F1. In addition, staged factor analysis gave acceptable results while being easy to apply: it detected mineralization-related elements and assigned them larger factor loadings, yielding a clearer expression of the mineralization.
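The alr opening step mentioned above can be sketched as follows (a schematic, not the authors' code). By convention here the last column is the reference part; the inverse map is included to show that no information is lost.

```python
import numpy as np

def alr(X, ref=-1):
    """Additive log-ratio (alr) transform of compositional rows:
    y_j = log(x_j / x_ref) for every part except the reference."""
    X = np.asarray(X, dtype=float)
    r = ref % X.shape[1]
    keep = [j for j in range(X.shape[1]) if j != r]
    return np.log(X[:, keep] / X[:, [r]])

def alr_inv(Y):
    """Map alr coordinates back to compositions (reference part appended last)."""
    E = np.exp(Y)
    total = 1.0 + E.sum(axis=1, keepdims=True)
    return np.hstack([E / total, 1.0 / total])
```

After the transform, the factor analysis of choice (routine, sequential, or staged) is run on the alr coordinates rather than on the raw closed data.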

  11. A Method for Derivation of Areas for Assessment in Marital Relationships.

    ERIC Educational Resources Information Center

    Broderick, Joan E.

    1981-01-01

    Expands upon factor-analytic and rational methods and introduces a third method for determining content areas to be assessed in marital relationships. Definitions of a "good marriage" were content analyzed, and a number of areas were added. Demographic subgroup differences were found not to be influential factors. (Author)

  12. Source apportionment of PAH in Hamilton Harbour suspended sediments: comparison of two factor analysis methods.

    PubMed

    Sofowote, Uwayemi M; McCarry, Brian E; Marvin, Christopher H

    2008-08-15

    A total of 26 suspended sediment samples collected over a 5-year period in Hamilton Harbour, Ontario, Canada and surrounding creeks were analyzed for a suite of polycyclic aromatic hydrocarbons and sulfur heterocycles. Hamilton Harbour sediments contain relatively high levels of polycyclic aromatic compounds and heavy metals due to emissions from industrial and mobile sources. Two receptor modeling methods using factor analyses were compared to determine the profiles and relative contributions of pollution sources to the harbor; these methods are principal component analyses (PCA) with multiple linear regression analysis (MLR) and positive matrix factorization (PMF). Both methods identified four factors and gave excellent correlation coefficients between predicted and measured levels of 25 aromatic compounds; both methods predicted similar contributions from coal tar/coal combustion sources to the harbor (19 and 26%, respectively). One PCA factor was identified as contributions from vehicular emissions (61%); PMF was able to differentiate vehicular emissions into two factors, one attributed to gasoline emissions sources (28%) and the other to diesel emissions sources (24%). Overall, PMF afforded better source identification than PCA with MLR. This work constitutes one of the few examples of the application of PMF to the source apportionment of sediments; the addition of sulfur heterocycles to the analyte list greatly aided in the source identification process.
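The PCA-with-MLR half of the comparison can be sketched schematically (this is not the authors' implementation; real APCS-MLR regresses on absolute principal component scores, whereas this simplified version just normalizes |coefficient x score spread| into percentages).

```python
import numpy as np

def pca_mlr_contributions(X, total, k):
    """Schematic PCA+MLR apportionment: extract k principal-component
    scores, regress the total concentration on them, and normalize the
    absolute explained pieces into percent source contributions."""
    Z = (X - X.mean(0)) / X.std(0)                  # standardize species
    vals, vecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    order = np.argsort(vals)[::-1][:k]              # top-k components
    scores = Z @ vecs[:, order]
    A = np.column_stack([np.ones(len(total)), scores])
    coef, *_ = np.linalg.lstsq(A, total, rcond=None)  # MLR step
    pieces = np.abs(coef[1:] * scores.std(0))
    return 100.0 * pieces / pieces.sum()
```

PMF, by contrast, constrains both factor profiles and contributions to be nonnegative, which is what allowed it to split the vehicular factor into gasoline and diesel components here.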

  13. Using Bayes factors for multi-factor, biometric authentication

    NASA Astrophysics Data System (ADS)

    Giffin, A.; Skufca, J. D.; Lao, P. A.

    2015-01-01

Multi-factor/multi-modal authentication systems are becoming the de facto industry standard. Traditional methods typically use rates that are point estimates and lack a good measure of uncertainty. Additionally, multiple factors are typically fused together in an ad hoc manner. To be consistent, as well as to establish and make proper use of uncertainties, we use a Bayesian method that updates our estimates and uncertainties as new information presents itself. Our algorithm compares competing classes (such as genuine vs. impostor) using Bayes factors (BF). The importance of this approach is that we not only accept or reject one model (class), but compare it to others to make a decision. We show using a Receiver Operating Characteristic (ROC) curve that using BF for determining class will always perform at least as well as the traditional combining of factors, such as a voting algorithm. As the uncertainty decreases, the BF result continues to exceed the traditional methods' result.
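The class comparison can be sketched as follows, assuming (purely for illustration) Gaussian match-score models for the genuine and impostor classes; the parameter values are hypothetical, not from the paper.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bayes_factor(scores, genuine=(0.8, 0.10), impostor=(0.4, 0.15)):
    """BF = P(data | genuine) / P(data | impostor), multiplied across
    independent factors (e.g. face score, fingerprint score).
    BF > 1 favors the genuine class; BF < 1 favors the impostor class."""
    bf = 1.0
    for s in scores:
        bf *= gaussian_pdf(s, *genuine) / gaussian_pdf(s, *impostor)
    return bf
```

Because the factors enter multiplicatively through their likelihoods, each new modality updates the evidence coherently instead of being fused by an ad hoc vote.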

  14. SU-E-T-491: Importance of Energy Dependent Protons Per MU Calibration Factors in IMPT Dose Calculations Using Monte Carlo Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Randeniya, S; Mirkovic, D; Titt, U

    2014-06-01

Purpose: In intensity modulated proton therapy (IMPT), energy dependent protons per monitor unit (MU) calibration factors are important parameters that determine absolute dose values from energy deposition data obtained from Monte Carlo (MC) simulations. The purpose of this study was to assess the sensitivity of MC-computed absolute dose distributions to the protons/MU calibration factors in IMPT. Methods: A “verification plan” (i.e., treatment beams applied individually to a water phantom) of a head and neck patient plan was calculated using the MC technique. The patient plan had three beams: one posterior-anterior (PA) and two anterior oblique. The dose prescription was 66 Gy in 30 fractions. Of the total MUs, 58% was delivered in the PA beam, and 25% and 17% in the other two. Energy deposition data obtained from the MC simulation were converted to Gy using energy dependent protons/MU calibration factors obtained from two methods. The first method is based on experimental measurements and MC simulations. The second is based on hand calculations of how many ion pairs are produced per proton in the dose monitor and how many ion pairs equal 1 MU (the vendor recommended method). Dose distributions obtained from method one were compared with those from method two. Results: An average difference of 8% in the protons/MU calibration factors between the two methods translated into a 27% difference in absolute dose values for the PA beam; although the dose distributions preserved the shape of the 3D dose distribution qualitatively, they differed quantitatively. For the two oblique beams, no significant difference in absolute dose was observed. Conclusion: The results demonstrate that protons/MU calibration factors can have a significant impact on absolute dose values in IMPT, depending on the fraction of MUs delivered: as the number of MUs increases, the effect of the calibration factors is amplified. In determining protons/MU calibration factors, the experimental method should be preferred for MC dose calculations.
Research supported by National Cancer Institute grant P01CA021239.

  15. Methods for Stem Cell Production and Therapy

    NASA Technical Reports Server (NTRS)

    Valluri, Jagan V. (Inventor); Claudio, Pier Paolo (Inventor)

    2015-01-01

    The present invention relates to methods for rapidly expanding a stem cell population with or without culture supplements in simulated microgravity conditions. The present invention relates to methods for rapidly increasing the life span of stem cell populations without culture supplements in simulated microgravity conditions. The present invention also relates to methods for increasing the sensitivity of cancer stem cells to chemotherapeutic agents by culturing the cancer stem cells under microgravity conditions and in the presence of omega-3 fatty acids. The methods of the present invention can also be used to proliferate cancer cells by culturing them in the presence of omega-3 fatty acids. The present invention also relates to methods for testing the sensitivity of cancer cells and cancer stem cells to chemotherapeutic agents by culturing the cancer cells and cancer stem cells under microgravity conditions. The methods of the present invention can also be used to produce tissue for use in transplantation by culturing stem cells or cancer stem cells under microgravity conditions. The methods of the present invention can also be used to produce cellular factors and growth factors by culturing stem cells or cancer stem cells under microgravity conditions. The methods of the present invention can also be used to produce cellular factors and growth factors to promote differentiation of cancer stem cells under microgravity conditions.

  16. Multilevel poisson regression modelling for determining factors of dengue fever cases in bandung

    NASA Astrophysics Data System (ADS)

    Arundina, Davila Rubianti; Tantular, Bertho; Pontoh, Resa Septiani

    2017-03-01

Dengue fever is caused by the dengue virus, a member of the genus Flavivirus, and is transmitted through the bites of Aedes aegypti mosquitoes infected with the virus. The study was conducted in 151 villages in Bandung. Health analysts believe that two kinds of factors affect dengue cases: internal (individual) factors and external (environmental) factors. The data used in this research are hierarchical, so a multilevel method is appropriate for modelling them; here, level 1 is the village and level 2 is the sub-district. According to exploratory data analysis, the suitable multilevel model is a random intercept model. The penalized quasi-likelihood (PQL) approach to multilevel Poisson regression is a proper analysis for determining the factors affecting dengue cases in the city of Bandung. The clean and healthy behavior factor at the village level has an effect on the number of dengue fever cases in the city of Bandung, while the factor at the sub-district level has no effect.
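The fixed-effects core of a log-linear Poisson regression can be sketched with iteratively reweighted least squares (IRLS); this is a schematic only, and the paper's random intercept and PQL machinery for the village/sub-district hierarchy are not reproduced here.

```python
import numpy as np

def poisson_irls(X, y, iters=25):
    """Fit a log-linear Poisson regression E[y] = exp(b0 + X b) by
    iteratively reweighted least squares (the standard GLM algorithm)."""
    X = np.column_stack([np.ones(len(y)), X])   # prepend intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu                 # working response
        W = mu                                  # Poisson IRLS weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta
```

A random intercept model would add a village-level deviation to `eta`; PQL fits that by repeatedly linearizing around the current estimates in much the same IRLS spirit.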

  17. An assessment system for rating scientific journals in the field of ergonomics and human factors.

    PubMed

    Dul, Jan; Karwowski, Waldemar

    2004-05-01

A method for selecting and rating scientific and professional journals representing the discipline of ergonomics and human factors is proposed. The method is based upon the journal list, impact factors and citations provided by the Institute of Scientific Information (ISI), and the journal list published in the Ergonomics Abstracts. Three groups of journals were distinguished. The 'ergonomics journals' focus exclusively on ergonomics and human factors. The 'related journals' focus on other disciplines than ergonomics and human factors, but regularly publish ergonomics/human factors papers. The 'basic journals' focus on other technical, medical or social sciences than ergonomics, but are important for the development of ergonomics/human factors. Journal quality was rated using a maximum of four categories: top quality (A-level), high quality (B-level), good quality (C-level) and professional (P-level). The above methods were applied to develop the Ergonomics Journal List 2004. A total of 25 'ergonomics journals', 58 'related journals' and 142 'basic journals' were classified.

  18. Fast sweeping method for the factored eikonal equation

    NASA Astrophysics Data System (ADS)

    Fomel, Sergey; Luo, Songting; Zhao, Hongkai

    2009-09-01

We develop a fast sweeping method for the factored eikonal equation, in which the solution of a general eikonal equation is decomposed as the product of two factors: the first factor is the solution to a simple eikonal equation (such as distance) or a previously computed solution to an approximate eikonal equation; the second factor is a necessary modification/correction. Appropriate discretization and a fast sweeping strategy are designed for the equation of the correction part. The key idea is to enforce the causality of the original eikonal equation during the Gauss-Seidel iterations. Using extensive numerical examples we demonstrate that (1) the convergence behavior of the fast sweeping method for the factored eikonal equation is the same as for the original eikonal equation, i.e., the number of Gauss-Seidel iterations is independent of the mesh size, and (2) the numerical solution from the factored eikonal equation is more accurate than the numerical solution computed directly from the original eikonal equation, especially for point sources.
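For reference, a plain (unfactored) 2D fast sweeping solver illustrates the Gauss-Seidel sweeping and causality-enforcing upwind update the abstract refers to; this is a textbook sketch, not the authors' factored implementation.

```python
import numpy as np

def fast_sweep_eikonal(f, h, src, iters=8):
    """Solve |grad u| = f on a 2D grid with u(src) = 0 by fast sweeping:
    Gauss-Seidel passes in four alternating orderings so that every
    characteristic direction is covered, with the Godunov upwind update."""
    n, m = f.shape
    BIG = 1e10
    u = np.full((n, m), BIG)
    u[src] = 0.0
    for _ in range(iters):
        for di in (1, -1):                      # sweep i up then down
            for dj in (1, -1):                  # sweep j up then down
                I = range(n) if di == 1 else range(n - 1, -1, -1)
                for i in I:
                    J = range(m) if dj == 1 else range(m - 1, -1, -1)
                    for j in J:
                        if (i, j) == src:
                            continue
                        a = min(u[i - 1, j] if i > 0 else BIG,
                                u[i + 1, j] if i < n - 1 else BIG)
                        b = min(u[i, j - 1] if j > 0 else BIG,
                                u[i, j + 1] if j < m - 1 else BIG)
                        fh = f[i, j] * h
                        if abs(a - b) >= fh:    # one-sided update
                            cand = min(a, b) + fh
                        else:                   # two-sided quadratic solve
                            cand = 0.5 * (a + b + np.sqrt(2 * fh * fh - (a - b) ** 2))
                        u[i, j] = min(u[i, j], cand)
    return u
```

The factored variant in the paper applies the same sweeping machinery to the smooth correction factor instead of to u itself, which removes the source-singularity error visible along diagonals in this plain solver.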

  19. Factors influencing infant-feeding choices selected by HIV-infected mothers: perspectives from Zimbabwe.

    PubMed

    Marembo, Joan; Zvinavashe, Mathilda; Nyamakura, Rudo; Shaibu, Sheila; Mogobe, Keitshokile Dintle

    2014-10-01

To assess factors influencing infant-feeding methods selected by HIV-infected mothers, a descriptive quantitative study was conducted among 80 mothers with babies aged 0-6 months who were randomly selected and interviewed. Descriptive statistics were used to summarize the findings. Factors considered by women in choosing the infant-feeding methods included sociocultural acceptability (58.8%), feasibility and support from significant others (35%), knowledge of the selected method (55%), affordability (61.2%), implementation of the infant-feeding method without interference (62.5%), and safety (47.5%). Exclusive breast-feeding was the most preferred method of infant feeding. Disclosure of HIV status by a woman to her partner is a major condition for a successful replacement feeding method, especially within the African cultural context. However, disclosure of HIV status to the partner was feared by most women, as only 16.2% of the women disclosed their HIV status to partners. The factors considered by women in choosing the infant-feeding option were the ability to implement the option without interference from significant others, affordability, and sociocultural acceptability. Knowledge of the selected option, its advantages and disadvantages, safety, and feasibility were also important factors. Nurses and midwives have to educate clients and support them in their choice of infant-feeding methods. © 2013 The Authors. Japan Journal of Nursing Science © 2013 Japan Academy of Nursing Science.

  20. Numerical Computation of Subsonic Conical Diffuser Flows with Nonuniform Turbulent Inlet Conditions

    DTIC Science & Technology

    1977-09-01

Gauss-Seidel Point Iteration Method ... FACTORS AFFECTING THE RATE OF CONVERGENCE OF THE POINT ... can be solved in several ways. For simplicity, a standard Gauss-Seidel iteration method is used to obtain the solution. The method updates the ... FACTORS AFFECTING THE RATE OF CONVERGENCE OF THE POINT ITERATION METHOD. The advantage of using the Gauss-Seidel point iteration method to ...
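The Gauss-Seidel point iteration named in this snippet can be sketched for a generic linear system (illustrative only; convergence requires conditions such as diagonal dominance of the coefficient matrix):

```python
import numpy as np

def gauss_seidel(A, b, iters=100):
    """Gauss-Seidel point iteration for A x = b: sweep through the
    unknowns one at a time, using freshly updated values immediately."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]   # sum of off-diagonal terms
            x[i] = (b[i] - s) / A[i, i]
    return x
```

Using updated values within the same sweep is what distinguishes Gauss-Seidel from Jacobi iteration and typically roughly doubles its convergence rate.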

  1. A comparative study of validated spectrophotometric and TLC- spectrodensitometric methods for the determination of sodium cromoglicate and fluorometholone in ophthalmic solution

    PubMed Central

    Saleh, Sarah S.; Lotfy, Hayam M.; Hassan, Nagiba Y.; Elgizawy, Samia M.

    2013-01-01

    The determination of sodium cromoglicate (SCG) and fluorometholone (FLU) in ophthalmic solution was developed by simple, sensitive and precise methods. Three spectrophotometric methods were applied: absorptivity factor (a-Factor method), absorption factor (AFM) and mean centering of ratio spectra (MCR). The linearity ranges of SCG were found to be (2.5–35 μg/mL) for (a-Factor method) and (MCR); while for (AFM), it was found to be (7.5–50 μg/mL). The linearity ranges of FLU were found to be (4–16 μg/mL) for (a-Factor method) and (AFM); while for (MCR), it was found to be (2–16 μg/mL). The mean percentage recoveries/RSD for SCG were found to be 100.31/0.90, 100.23/0.57 and 100.43/1.21; while for FLU, they were found to be 100.11/0.56, 99.97/0.35 and 99.94/0.88 using (a-Factor method), (AFM) and (MCR), respectively. A TLC-spectrodensitometric method was developed by separation of SCG and FLU on silica gel 60 F254 using chloroform:methanol:toluene:triethylamine in the ratio of (5:2:4:1 v/v/v/v) as developing system, followed by spectrodensitometric measurement of the bands at 241 nm. The linearity ranges and the mean percentage recoveries/RSD were found to be (0.4–4.4 μg/band), 100.24/1.44 and (0.2–1.6 μg/band), 99.95/1.50 for SCG and FLU, respectively. A comparative study was conducted between the proposed methods to discuss the advantage of each method. The suggested methods were validated in compliance with the ICH guidelines and were successfully applied for the determination of SCG and FLU in their laboratory prepared mixtures and commercial ophthalmic solution in the presence of benzalkonium chloride as a preservative. These methods could be an alternative to different HPLC techniques in quality control laboratories lacking the required facilities for those expensive techniques. PMID:24227962

  2. A brief measure of attitudes toward mixed methods research in psychology.

    PubMed

    Roberts, Lynne D; Povee, Kate

    2014-01-01

The adoption of mixed methods research in psychology has trailed behind other social science disciplines. Teaching psychology students, academics, and practitioners about mixed methodologies may increase the use of mixed methods within the discipline. However, tailoring and evaluating education and training in mixed methodologies requires an understanding of, and a way of measuring, attitudes toward mixed methods research in psychology. To date, no such measure exists. In this article we present the development and initial validation of a new measure: Attitudes toward Mixed Methods Research in Psychology. A pool of 42 items developed from previous qualitative research on attitudes toward mixed methods research, along with validation measures, was administered via an online survey to a convenience sample of 274 psychology students, academics and psychologists. Principal axis factoring with varimax rotation on a subset of the sample produced a four-factor, 12-item solution. Confirmatory factor analysis on a separate subset of the sample indicated that a higher-order four-factor model provided the best fit to the data. The four factors ('Limited Exposure,' '(in)Compatibility,' 'Validity,' and 'Tokenistic Qualitative Component') each have acceptable internal reliability. Known-groups validity analyses based on preferred research orientation and self-rated mixed methods research skills, and convergent and divergent validity analyses based on measures of attitudes toward psychology as a science and scientist and practitioner orientation, provide initial validation of the measure. This brief, internally reliable measure can be used in assessing attitudes toward mixed methods research in psychology, measuring change in attitudes as part of the evaluation of mixed methods education, and in larger research programs.

  3. Completed Suicide with Violent and Non-Violent Methods in Rural Shandong, China: A Psychological Autopsy Study

    PubMed Central

    Sun, Shi-Hua; Jia, Cun-Xian

    2014-01-01

    Background This study aims to describe the specific characteristics of completed suicides by violent methods and non-violent methods in rural Chinese population, and to explore the related factors for corresponding methods. Methods Data of this study came from investigation of 199 completed suicide cases and their paired controls of rural areas in three different counties in Shandong, China, by interviewing one informant of each subject using the method of Psychological Autopsy (PA). Results There were 78 (39.2%) suicides with violent methods and 121 (60.8%) suicides with non-violent methods. Ingesting pesticides, as a non-violent method, appeared to be the most common suicide method (103, 51.8%). Hanging (73 cases, 36.7%) and drowning (5 cases, 2.5%) were the only violent methods observed. Storage of pesticides at home and higher suicide intent score were significantly associated with choice of violent methods while committing suicide. Risk factors related to suicide death included negative life events and hopelessness. Conclusions Suicide with violent methods has different factors from suicide with non-violent methods. Suicide methods should be considered in suicide prevention and intervention strategies. PMID:25111835

  4. Prediction of quality attributes of chicken breast fillets by using Vis/NIR spectroscopy combined with factor analysis method

    USDA-ARS?s Scientific Manuscript database

    Visible/near-infrared (Vis/NIR) spectroscopy with wavelength range between 400 and 2500 nm combined with factor analysis method was tested to predict quality attributes of chicken breast fillets. Quality attributes, including color (L*, a*, b*), pH, and drip loss were analyzed using factor analysis ...

  5. A Primer on Bootstrap Factor Analysis as Applied to Health Studies Research

    ERIC Educational Resources Information Center

    Lu, Wenhua; Miao, Jingang; McKyer, E. Lisako J.

    2014-01-01

    Objectives: To demonstrate how the bootstrap method could be conducted in exploratory factor analysis (EFA) with a syntax written in SPSS. Methods: The data obtained from the Texas Childhood Obesity Prevention Policy Evaluation project (T-COPPE project) were used for illustration. A 5-step procedure to conduct bootstrap factor analysis (BFA) was…
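The BFA idea can be sketched outside SPSS (a schematic in Python, not the T-COPPE syntax): resample cases with replacement, re-extract loadings on each resample, and form percentile intervals. Principal-axis extraction from the correlation matrix stands in for the EFA step here.

```python
import numpy as np

def pca_loadings(X, k):
    """Top-k loadings from the correlation matrix (principal-axis style)."""
    Z = (X - X.mean(0)) / X.std(0)
    R = np.corrcoef(Z, rowvar=False)
    vals, vecs = np.linalg.eigh(R)
    order = np.argsort(vals)[::-1][:k]
    L = vecs[:, order] * np.sqrt(vals[order])
    return L * np.sign(L.sum(0))        # fix sign indeterminacy per factor

def bootstrap_loadings(X, k, n_boot=200, seed=0):
    """Resample cases, re-extract loadings, return 95% percentile bounds."""
    rng = np.random.default_rng(seed)
    boots = np.array([pca_loadings(X[rng.integers(0, len(X), len(X))], k)
                      for _ in range(n_boot)])
    lo, hi = np.percentile(boots, [2.5, 97.5], axis=0)
    return lo, hi
```

A fuller implementation would also align factor order and rotation across bootstrap samples before computing the intervals.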

  6. Socioeconomic factors associated with contraceptive use and method choice in urban slums of Bangladesh.

    PubMed

    Kamal, S M Mostafa

    2015-03-01

This article explores the socioeconomic factors affecting contraceptive use and method choice among women of urban slums using the nationally representative 2006 Bangladesh Urban Health Survey. Both bivariate and multivariate statistical analyses were applied to examine the relationship between a set of sociodemographic factors and the dependent variables. Overall, the contraceptive prevalence rate was 58.1%, of which 53.2% were modern methods. Women's age, access to TV, number of unions, nongovernmental organization membership, working status of women, number of living children, child mortality, and wealth index were important determinants of contraceptive use and method preference. Sex composition of surviving children and women's education were the most important determinants of contraceptive use and method choice. Programs should be strengthened to provide nonclinical modern methods free of cost among the slum dwellers. Doorstep delivery services of modern contraceptive methods may raise the contraceptive prevalence rate among the slum dwellers in Bangladesh.

  7. Self-assembling peptide amphiphiles and related methods for growth factor delivery

    DOEpatents

    Stupp, Samuel I [Chicago, IL; Donners, Jack J. J. M.; Silva, Gabriel A [Chicago, IL; Behanna, Heather A [Chicago, IL; Anthony, Shawn G [New Stanton, PA

    2009-06-09

    Amphiphilic peptide compounds comprising one or more epitope sequences for binding interaction with one or more corresponding growth factors, micellar assemblies of such compounds and related methods of use.

  8. Self-assembling peptide amphiphiles and related methods for growth factor delivery

    DOEpatents

    Stupp, Samuel I [Chicago, IL; Donners, Jack J. J. M.; Silva, Gabriel A [Chicago, IL; Behanna, Heather A [Chicago, IL; Anthony, Shawn G [New Stanton, PA

    2012-03-20

    Amphiphilic peptide compounds comprising one or more epitope sequences for binding interaction with one or more corresponding growth factors, micellar assemblies of such compounds and related methods of use.

  9. Self-assembling peptide amphiphiles and related methods for growth factor delivery

    DOEpatents

    Stupp, Samuel I; Donners, Jack J.J.M.; Silva, Gabriel A; Behanna, Heather A; Anthony, Shawn G

    2013-11-12

    Amphiphilic peptide compounds comprising one or more epitope sequences for binding interaction with one or more corresponding growth factors, micellar assemblies of such compounds and related methods of use.

  10. Comparison of Cliff-Lorimer-Based Methods of Scanning Transmission Electron Microscopy (STEM) Quantitative X-Ray Microanalysis for Application to Silicon Oxycarbides Thin Films.

    PubMed

    Parisini, Andrea; Frabboni, Stefano; Gazzadi, Gian Carlo; Rosa, Rodolfo; Armigliato, Aldo

    2018-06-01

In this work, we compare the results of different Cliff-Lorimer (Cliff & Lorimer, 1975) based methods in a quantitative energy dispersive spectrometry investigation of light elements in ternary C-O-Si thin films. To determine the Cliff-Lorimer (C-L) k-factors, we fabricated, by focused ion beam, a standard consisting of a wedge lamella with a truncated tip, composed of two parallel SiO2 and 4H-SiC stripes. In 4H-SiC, it was not possible to obtain reliable k-factors from standard extrapolation methods owing to the strong absorption of C K photons. To overcome this problem, an extrapolation method exploiting the shape of the truncated tip of the lamella is proposed herein. The k-factors thus determined were then used in an application of the C-L quantification procedure to a defect found at the SiO2/4H-SiC interface in the channel region of a metal-oxide field-effect-transistor device. As the sample thickness is required in this procedure, a method to determine this quantity from the averaged and normalized scanning transmission electron microscopy intensity is also detailed. Monte Carlo simulations were used to investigate the discrepancy between experimental and theoretical k-factors and to bridge the gap between the k-factor and the Watanabe and Williams ζ-factor methods (Watanabe & Williams, 2006).
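The Cliff-Lorimer relation underlying all of these methods is C_A / C_B = k_AB (I_A / I_B), with the concentrations normalized to sum to one. A minimal sketch, using hypothetical intensities and k-factors (not values from the paper):

```python
def cliff_lorimer(intensities, k_factors, ref):
    """Thin-film Cliff-Lorimer quantification: C_el / C_ref =
    k_el * I_el / I_ref, then normalize so the concentrations sum to 1.
    k_factors are expressed relative to the reference element (k_ref = 1)."""
    raw = {el: k_factors[el] * I / intensities[ref]
           for el, I in intensities.items()}    # C_el / C_ref
    total = sum(raw.values())
    return {el: r / total for el, r in raw.items()}
```

Absorption corrections (the crux of the C K problem above) would multiply each intensity ratio by a thickness-dependent factor before normalization, which is why the sample thickness must be known.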

  11. Simultaneous tensor decomposition and completion using factor priors.

    PubMed

    Chen, Yi-Lei; Hsu, Chiou-Ting; Liao, Hong-Yuan Mark

    2014-03-01

    The success of research on matrix completion is evident in a variety of real-world applications. Tensor completion, which is a high-order extension of matrix completion, has also generated a great deal of research interest in recent years. Given a tensor with incomplete entries, existing methods use either factorization or completion schemes to recover the missing parts. However, as the number of missing entries increases, factorization schemes may overfit the model because of incorrectly predefined ranks, while completion schemes may fail to interpret the model factors. In this paper, we introduce a novel concept: complete the missing entries and simultaneously capture the underlying model structure. To this end, we propose a method called simultaneous tensor decomposition and completion (STDC) that combines a rank minimization technique with Tucker model decomposition. Moreover, as the model structure is implicitly included in the Tucker model, we use factor priors, which are usually known a priori in real-world tensor objects, to characterize the underlying joint-manifold drawn from the model factors. By exploiting this auxiliary information, our method leverages two classic schemes and accurately estimates the model factors and missing entries. We conducted experiments to empirically verify the convergence of our algorithm on synthetic data and evaluate its effectiveness on various kinds of real-world data. The results demonstrate the efficacy of the proposed method and its potential usage in tensor-based applications. It also outperforms state-of-the-art methods on multilinear model analysis and visual data completion tasks.

  12. Investigating Test Equating Methods in Small Samples through Various Factors

    ERIC Educational Resources Information Center

    Asiret, Semih; Sünbül, Seçil Ömür

    2016-01-01

In this study, equating methods for the random groups design with small samples were compared across factors such as sample size, difference in difficulty between forms, and the guessing parameter. Which method gives better results under which conditions was also investigated. The study used 5,000 dichotomous simulated data…

  13. A Comparison of Imputation Methods for Bayesian Factor Analysis Models

    ERIC Educational Resources Information Center

    Merkle, Edgar C.

    2011-01-01

Imputation methods are popular for the handling of missing data in psychology. The methods generally consist of predicting missing data based on observed data, yielding a complete data set that is amenable to standard statistical analyses. In the context of Bayesian factor analysis, this article compares imputation under an unrestricted…

  14. Technical Notes on the Multifactor Method of Elementary School Closing.

    ERIC Educational Resources Information Center

    Puleo, Vincent T.

    This report provides preliminary technical information on a method for analyzing the factors involved in the closing of elementary schools. Included is a presentation of data and a brief discussion bearing on descriptive statistics, reliability, and validity. An intercorrelation matrix is also examined. The method employs 9 factors that have a…

  15. A Transformational Approach to Slip-Slide Factoring

    ERIC Educational Resources Information Center

    Steckroth, Jeffrey

    2015-01-01

    In this "Delving Deeper" article, the author introduces the slip-slide method for solving Algebra 1 mathematics problems. This article compares the traditional method approach of trial and error to the slip-slide method of factoring. Tools that used to be taken for granted now make it possible to investigate relationships visually,…
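The slip-slide method itself is mechanical enough to sketch in code (an illustration, not taken from the article): "slip" the leading coefficient onto the constant term, factor the resulting monic quadratic, then "slide" back by dividing the constants by a and clearing fractions.

```python
from fractions import Fraction

def slip_slide(a, b, c):
    """Factor ax^2 + bx + c (integer coefficients) by slip-slide:
    factor x^2 + bx + (a*c) as (x+p)(x+q), then divide p and q by a
    and clear denominators to recover the binomial factors."""
    ac = a * c
    # "slip": find integers p, q with p + q = b and p * q = a*c
    for p in range(-abs(ac) - abs(b) - 1, abs(ac) + abs(b) + 2):
        q = b - p
        if p * q == ac:
            break
    else:
        return None                      # not factorable over the integers
    factors = []
    for r in (p, q):
        fr = Fraction(r, a)              # "slide": divide the root constant by a
        factors.append((fr.denominator, fr.numerator))  # (x-coefficient, constant)
    return factors                       # [(a1, c1), (a2, c2)] = (a1 x + c1)(a2 x + c2)
```

For example, 2x^2 + 7x + 3 slips to x^2 + 7x + 6 = (x+1)(x+6), and sliding gives (x + 1/2)(x + 3), i.e. (2x + 1)(x + 3).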

  16. Rosenberg's Self-Esteem Scale: Two Factors or Method Effects.

    ERIC Educational Resources Information Center

    Tomas, Jose M.; Oliver, Amparo

    1999-01-01

    Results of a study with 640 Spanish high school students suggest the existence of a global self-esteem factor underlying responses to Rosenberg's (M. Rosenberg, 1965) Self-Esteem Scale, although the inclusion of method effects is needed to achieve a good model fit. Method effects are associated with item wording. (SLD)

  17. Project-Method Fit: Exploring Factors That Influence Agile Method Use

    ERIC Educational Resources Information Center

    Young, Diana K.

    2013-01-01

    While the productivity and quality implications of agile software development methods (SDMs) have been demonstrated, research concerning the project contexts where their use is most appropriate has yielded less definitive results. Most experts agree that agile SDMs are not suited for all project contexts. Several project and team factors have been…

  18. Towards a Probabilistic Preliminary Design Criterion for Buckling Critical Composite Shells

    NASA Technical Reports Server (NTRS)

    Arbocz, Johann; Hilburger, Mark W.

    2003-01-01

A probability-based analysis method for predicting buckling loads of compression-loaded laminated-composite shells is presented, and its potential as a basis for a new shell-stability design criterion is demonstrated and discussed. In particular, a database containing information about specimen geometry, material properties, and measured initial geometric imperfections for a selected group of laminated-composite cylindrical shells is used to calculate new buckling-load "knockdown factors". These knockdown factors are shown to be substantially improved, and hence much less conservative than the corresponding deterministic knockdown factors that are presently used by industry. The probability integral associated with the analysis is evaluated by using two methods; that is, by using the exact Monte Carlo method and by using an approximate First-Order Second-Moment method. A comparison of the results from these two methods indicates that the First-Order Second-Moment method yields results that are conservative for the shells considered. Furthermore, the results show that the improved, reliability-based knockdown factor presented always yields a safe estimate of the buckling load for the shells examined.
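The two probability evaluations can be contrasted on a toy linear limit state g = R - S with independent normal resistance R and load S (for which FOSM happens to be exact; the conservatism reported above arises in the nonlinear shell problem). This is an illustrative sketch, not the shell analysis itself.

```python
import math
import random

def fosm_pf(mu_r, s_r, mu_s, s_s):
    """First-Order Second-Moment: reliability index beta and failure
    probability Pf = Phi(-beta) for g = R - S with independent normals."""
    beta = (mu_r - mu_s) / math.sqrt(s_r ** 2 + s_s ** 2)
    pf = 0.5 * math.erfc(beta / math.sqrt(2))   # standard normal CDF at -beta
    return beta, pf

def monte_carlo_pf(mu_r, s_r, mu_s, s_s, n=200_000, seed=1):
    """Direct Monte Carlo estimate of P(R < S)."""
    rng = random.Random(seed)
    fails = sum(rng.gauss(mu_r, s_r) < rng.gauss(mu_s, s_s) for _ in range(n))
    return fails / n
```

For nonlinear limit states (such as shell buckling), the FOSM linearization biases the estimate, which is the discrepancy the comparison above is probing.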

  19. Quantitative influence of risk factors on blood glucose level.

    PubMed

    Chen, Songjing; Luo, Senlin; Pan, Limin; Zhang, Tiemei; Han, Longfei; Zhao, Haixiu

    2014-01-01

    The aim of this study is to quantitatively analyze the influence of risk factors on blood glucose level, and to provide a theoretical basis for understanding the characteristics of blood glucose change and for selecting intervention indices for type 2 diabetes. A quantitative method based on a back propagation (BP) neural network is proposed to analyze the influence of risk factors on blood glucose. Ten risk factors are screened first. The cohort is then divided into nine groups by gender and age, and nine BP models are trained according to the minimum error principle. The quantitative influence of each risk factor on blood glucose change is obtained by sensitivity calculation. The experimental results indicate that weight is the leading cause of blood glucose change (0.2449), followed by cholesterol, age and triglyceride. The total contribution of these four factors reaches 77% of that of the nine screened risk factors, and the sensitivity rankings provide a basis for individual intervention. This method can be applied to quantitative analysis of risk factors for other diseases and can potentially be used by clinical practitioners to identify populations at high risk for type 2 diabetes as well as other diseases.
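
    As an illustrative sketch of the sensitivity calculation described above (a tiny fixed-weight feed-forward network with made-up weights and inputs, not the study's trained models or data):

```python
import math

# Hypothetical sketch: sensitivity of a small feed-forward network's
# output to each input, via central differences, in the spirit of the
# paper's BP-network sensitivity calculation. Weights and inputs are
# illustrative placeholders.

def forward(x, w_hidden, w_out):
    """One hidden layer with sigmoid units, linear output."""
    h = [1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
         for w in w_hidden]
    return sum(wo * hi for wo, hi in zip(w_out, h))

def sensitivities(x, w_hidden, w_out, eps=1e-4):
    """Central-difference sensitivity of the output to each input."""
    out = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        out.append((forward(xp, w_hidden, w_out) -
                    forward(xm, w_hidden, w_out)) / (2 * eps))
    return out

# Illustrative 3-input network (e.g. weight, cholesterol, age -> glucose).
w_hidden = [[0.8, 0.1, 0.05], [0.3, 0.4, 0.2]]
w_out = [1.2, 0.7]
x = [0.5, 0.5, 0.5]
s = sensitivities(x, w_hidden, w_out)
ranking = sorted(range(len(s)), key=lambda i: -abs(s[i]))  # most influential first
```

Ranking inputs by the magnitude of these sensitivities is what yields an influence ordering like the weight-first sequence reported above.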

  20. Ultrasound-assisted extraction for total sulphur measurement in mine tailings.

    PubMed

    Khan, Adnan Hossain; Shang, Julie Q; Alam, Raquibul

    2012-10-15

    A sample preparation method for percentage recovery of total sulphur (%S) in reactive mine tailings, based on ultrasound-assisted digestion (USAD) and inductively coupled plasma-optical emission spectroscopy (ICP-OES), was developed. The influence of various methodological factors was screened by employing a two-level, three-factor (2³) full factorial design and using KZK-1, a sericite schist certified reference material (CRM), to find the optimal combination of the studied factors for %S recovery. The factors studied were sonication time, temperature and acid combination, with the best result identified as 20 min of sonication at 80°C with 1 ml of HNO₃ : 1 ml of HCl, which achieves 100% recovery for the selected CRM. Subsequently, a fraction of the 2³ full factorial design was applied to mine tailings. The percentage relative standard deviation (%RSD) for the ultrasound method is less than 3.0% for the CRM and less than 6% for the mine tailings. The method was verified by X-ray diffraction analysis, and the USAD method compared favorably with existing methods such as hot-plate-assisted digestion, X-ray fluorescence and the LECO™-CNS method. Copyright © 2012 Elsevier B.V. All rights reserved.
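
    A minimal sketch of how a 2³ full factorial screen like the one above is laid out and analyzed (the eight coded runs and their main effects); the %S recovery responses below are made-up illustrative numbers, not the paper's measurements:

```python
from itertools import product

# Hypothetical sketch of a two-level, three-factor (2^3) full factorial
# screen, as used for sonication time, temperature and acid combination.
# Responses are illustrative %S recoveries, not experimental data.

factors = ["time", "temp", "acid"]
runs = list(product([-1, 1], repeat=3))          # the 8 coded runs
responses = [88, 93, 90, 96, 89, 95, 92, 100]    # illustrative %S recovery

def main_effect(j):
    """Average response at level +1 minus average at level -1 for factor j."""
    hi = [y for r, y in zip(runs, responses) if r[j] == 1]
    lo = [y for r, y in zip(runs, responses) if r[j] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {f: main_effect(j) for j, f in enumerate(factors)}
```

The factor with the largest main effect is the one to prioritize when locating the optimal combination.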

  1. Multiplication factor versus regression analysis in stature estimation from hand and foot dimensions.

    PubMed

    Krishan, Kewal; Kanchan, Tanuj; Sharma, Abhilasha

    2012-05-01

    Estimation of stature is an important parameter in the identification of human remains in forensic examinations. The present study aims to compare the reliability and accuracy of stature estimation, and to demonstrate the variability between estimated and actual stature, using the multiplication factor and regression analysis methods. The study is based on a sample of 246 subjects (123 males and 123 females) from North India aged between 17 and 20 years. Four anthropometric measurements, hand length, hand breadth, foot length and foot breadth, taken on the left side of each subject, were included in the study. Stature was measured using standard anthropometric techniques. Multiplication factors were calculated and linear regression models were derived for estimating stature from hand and foot dimensions. The derived multiplication factors and regression formulae were applied to the hand and foot measurements in the study sample, and the stature estimated by each method was compared with actual stature to find the estimation error. The results indicate that the range of error in stature estimation from the regression analysis method is smaller than that of the multiplication factor method, confirming that regression analysis is better than the multiplication factor method for stature estimation. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
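
    The contrast between the two estimators can be sketched on simulated data (illustrative numbers, not the study's North Indian measurements): a multiplication factor forces the fit through the origin, while regression allows an intercept.

```python
import random
import statistics

# Hypothetical sketch contrasting the two estimators compared in the study:
# a multiplication factor (mean stature/foot-length ratio) versus simple
# linear regression. Simulated data, not the study's measurements.

random.seed(1)
foot = [random.uniform(22.0, 28.0) for _ in range(200)]            # cm
stature = [4.5 * f + 50.0 + random.gauss(0.0, 2.0) for f in foot]  # cm

# Multiplication factor method: stature ~ MF * foot length.
mf = statistics.mean(s / f for s, f in zip(stature, foot))

# Least-squares regression: stature ~ a + b * foot length.
mx, my = statistics.mean(foot), statistics.mean(stature)
b = sum((x - mx) * (y - my) for x, y in zip(foot, stature)) / \
    sum((x - mx) ** 2 for x in foot)
a = my - b * mx

err_mf = [abs(s - mf * f) for s, f in zip(stature, foot)]
err_reg = [abs(s - (a + b * f)) for s, f in zip(stature, foot)]
```

When the true relationship has a nonzero intercept, the through-origin multiplication factor carries a systematic error that regression avoids, mirroring the study's conclusion.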

  2. Simulation of tropical cyclone activity over the western North Pacific based on CMIP5 models

    NASA Astrophysics Data System (ADS)

    Shen, Haibo; Zhou, Weican; Zhao, Haikun

    2017-09-01

    Based on the Coupled Model Intercomparison Project 5 (CMIP5) models, tropical cyclone (TC) activity in the summers of 1965-2005 over the western North Pacific (WNP) is simulated by a TC dynamical downscaling system. In consideration of the diversity among climate models, Bayesian model averaging (BMA) and equal-weight model averaging (EMA) methods are applied to produce ensemble large-scale environmental factors from the CMIP5 model outputs. The environmental factors generated by the BMA and EMA methods are compared, as well as the corresponding TC simulations by the downscaling system. Results indicate that the BMA method shows a significant advantage over EMA. In addition, the impact of model selection on the BMA method is examined: for each factor, the ten best-performing of the 30 CMIP5 models are selected and BMA is conducted on them. The resulting ensemble environmental factors and simulated TC activity are similar to the results from BMA over all 30 models, which verifies that the BMA method assigns each model in the ensemble a weight based on its predictive skill. The presence of poorly performing models therefore does not appreciably affect the effectiveness of BMA, and the ensemble outcomes are improved. Finally, based on the BMA method and the downscaling system, we analyze the sensitivity of TC activity to three important environmental factors: sea surface temperature (SST), large-scale steering flow, and vertical wind shear. Among the three factors, SST and large-scale steering flow greatly affect TC tracks, while the average intensity distribution is sensitive to all three. Moreover, SST and vertical wind shear jointly play a critical role in the inter-annual variability of TC lifetime maximum intensity and the frequency of intense TCs.
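
    The key property claimed for BMA, that a poorly performing model is automatically down-weighted while equal weighting is not, can be sketched with skill-based Gaussian weights (illustrative numbers, not CMIP5 output or the paper's exact BMA fitting):

```python
import math

# Hypothetical sketch of Bayesian model averaging (BMA) versus equal-weight
# averaging (EMA): weights follow each model's predictive skill, so a poor
# model is down-weighted. All values are illustrative.

obs = [26.1, 26.4, 26.8, 27.0, 26.5]            # e.g. observed SST (deg C)
preds = {
    "model_A": [26.0, 26.5, 26.7, 27.1, 26.4],  # skilful
    "model_B": [26.2, 26.3, 26.9, 26.9, 26.6],  # skilful
    "model_C": [24.0, 28.5, 25.0, 29.0, 24.5],  # poor
}

def log_lik(p, sigma=0.3):
    """Gaussian log-likelihood of the observations under one model."""
    return sum(-0.5 * ((o - f) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi))
               for o, f in zip(obs, p))

ll = {m: log_lik(p) for m, p in preds.items()}
mx = max(ll.values())
w = {m: math.exp(v - mx) for m, v in ll.items()}   # shift for stability
z = sum(w.values())
bma_w = {m: v / z for m, v in w.items()}           # skill-based weights
ema_w = {m: 1 / len(preds) for m in preds}         # equal weights
```

Here the poor model receives essentially zero BMA weight, whereas EMA gives it a full third, which is why adding weak models degrades EMA but not BMA ensembles.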

  3. Mechanical microencapsulation: The best technique in taste masking for the manufacturing scale - Effect of polymer encapsulation on drug targeting.

    PubMed

    Al-Kasmi, Basheer; Alsirawan, Mhd Bashir; Bashimam, Mais; El-Zein, Hind

    2017-08-28

    Drug taste masking is a crucial process in the preparation of pediatric and geriatric formulations as well as fast dissolving tablets. Taste masking techniques aim to prevent drug release in saliva while obtaining the desired release profile in the gastrointestinal tract. Several taste masking methods have been reported; this review focuses on a group of promising methods: complexation, encapsulation, and hot melting. The effects of each method on the physicochemical properties of the drug are described in detail. Furthermore, a scoring system was established to evaluate each process using recently published data on selected factors. These comprise input, process, and output factors related to each taste masking method: input factors include the attributes of the materials used for taste masking; process factors include equipment type and process parameters; and output factors include taste masking quality and yield. As a result, mechanical microencapsulation obtained the highest score (5/8), along with complexation with cyclodextrin, suggesting that these methods are the most preferable for drug taste masking. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Total Ambient Dose Equivalent Buildup Factor Determination for NBS04 Concrete.

    PubMed

    Duckic, Paulina; Hayes, Robert B

    2018-06-01

    Buildup factors are dimensionless multiplicative factors required by the point kernel method to account for scattered radiation through a shielding material. The accuracy of the point kernel method is strongly affected by how well the analyzed parameters correspond to the experimental configuration, which this work attempts to simplify. The point kernel method has not found widespread practical use for neutron shielding calculations because of the complex behavior of neutron transport through shielding materials (i.e. the variety of interaction mechanisms that neutrons may undergo while traversing the shield) as well as the non-linear energy dependence of the neutron total cross section. In this work, total ambient dose buildup factors for NBS04 concrete are calculated in terms of neutron and secondary gamma-ray transmission factors. The neutron and secondary gamma-ray transmission factors are calculated using the MCNP6™ code with updated cross sections. Both transmission factors and buildup factors are given in tabulated form. Practical use of neutron transmission and buildup factors warrants rigorously calculated results with all associated uncertainties, so a sensitivity analysis of neutron transmission factors and total buildup factors with varying water content was conducted; it showed that varying the water content in concrete has a significant impact on both. Finally, support vector regression, a machine learning technique, was employed to build a model from the calculated data for computing buildup factors. The developed model can predict most of the data within 20% relative error.
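
    How a buildup factor enters a point-kernel estimate can be sketched as follows: the uncollided transmission is exp(-μt), and the buildup factor B multiplies it to restore the scattered component. The Taylor-form coefficients below are illustrative placeholders, not fitted NBS04 data:

```python
import math

# Hypothetical sketch of the role of a buildup factor in point-kernel
# shielding: total transmission = B(mu*x) * exp(-mu*t). The Taylor-form
# coefficients are illustrative, not values for NBS04 concrete.

def uncollided_transmission(mu, t):
    """Exponential attenuation of the unscattered component."""
    return math.exp(-mu * t)

def taylor_buildup(mux, A=0.9, a1=-0.05, a2=0.02):
    """Taylor form: B = A*exp(-a1*mux) + (1-A)*exp(-a2*mux)."""
    return A * math.exp(-a1 * mux) + (1 - A) * math.exp(-a2 * mux)

mu, t = 0.2, 10.0           # attenuation coefficient (1/cm), thickness (cm)
mux = mu * t                # shield thickness in mean free paths
T_uncollided = uncollided_transmission(mu, t)
B = taylor_buildup(mux)
T_total = B * T_uncollided  # transmission including scattered radiation
```

Since B > 1, ignoring buildup underestimates the transmitted dose, which is why tabulated buildup factors accompany the transmission factors above.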

  5. The Research of Regression Method for Forecasting Monthly Electricity Sales Considering Coupled Multi-factor

    NASA Astrophysics Data System (ADS)

    Wang, Jiangbo; Liu, Junhui; Li, Tiantian; Yin, Shuo; He, Xinhui

    2018-01-01

    Monthly electricity sales forecasting is fundamental to ensuring the safety of the power system. This paper presents a monthly electricity sales forecasting method that comprehensively considers the coupled factors of temperature, economic growth, electric power replacement and business expansion. The mathematical model is constructed using regression. Simulation results show that the proposed method is accurate and effective.
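
    A multi-factor regression of this kind reduces to solving the normal equations for one coefficient per factor. A minimal sketch with two illustrative factors (temperature and an economic index; made-up data, not the paper's):

```python
# Hypothetical sketch of a multi-factor regression like the one described:
# monthly sales explained by temperature and an economic index. Data are
# illustrative; the solver is plain Gaussian elimination applied to the
# normal equations (X^T X) beta = X^T y.

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Design matrix: intercept, temperature, economic index (illustrative).
X = [[1, 22, 100], [1, 25, 103], [1, 30, 101], [1, 33, 105], [1, 28, 99]]
y = [310, 331, 352, 375, 338]   # monthly sales, illustrative units

XtX = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(3)]
       for i in range(3)]
Xty = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(3)]
beta = solve(XtX, Xty)
pred = [sum(bi * xi for bi, xi in zip(beta, row)) for row in X]
```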

  6. Comparison of cluster-based and source-attribution methods for estimating transmission risk using large HIV sequence databases.

    PubMed

    Le Vu, Stéphane; Ratmann, Oliver; Delpech, Valerie; Brown, Alison E; Gill, O Noel; Tostevin, Anna; Fraser, Christophe; Volz, Erik M

    2018-06-01

    Phylogenetic clustering of HIV sequences from a random sample of patients can reveal epidemiological transmission patterns, but interpretation is hampered by limited theoretical support, and the statistical properties of clustering analysis remain poorly understood. Alternatively, source attribution methods allow fitting of HIV transmission models and thereby quantify aspects of disease transmission. A simulation study was conducted to assess the error rates of clustering methods for detecting transmission risk factors. We modeled HIV epidemics among men who have sex with men and generated phylogenies comparable to those obtainable from HIV surveillance data in the UK. Clustering and source attribution approaches were applied to evaluate their ability to identify patient attributes as transmission risk factors. We find that commonly used methods show a misleading association between cluster size or odds of clustering and covariates that are correlated with time since infection, regardless of their influence on transmission. Clustering methods usually have higher error rates and lower sensitivity than source attribution methods for identifying transmission risk factors, but neither provides robust estimates of transmission risk ratios. Source attribution can alleviate the drawbacks of phylogenetic clustering, but formal population genetic modeling may be required to estimate quantitative transmission risk factors. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  7. A Fatigue Life Prediction Method Based on Strain Intensity Factor

    PubMed Central

    Zhang, Wei; Liu, Huili; Wang, Qiang; He, Jingjing

    2017-01-01

    In this paper, a strain-intensity-factor-based method is proposed to calculate the fatigue crack growth under the fully reversed loading condition. A theoretical analysis is conducted in detail to demonstrate that the strain intensity factor is likely to be a better driving parameter correlated with the fatigue crack growth rate than the stress intensity factor (SIF), especially for some metallic materials (such as 316 austenitic stainless steel) in the low cycle fatigue region with negative stress ratios R (typically R = −1). For fully reversed cyclic loading, the constitutive relation between stress and strain should follow the cyclic stress-strain curve rather than the monotonic one (it is a nonlinear function even within the elastic region). Based on that, a transformation algorithm between the SIF and the strain intensity factor is developed, and the fatigue crack growth rate testing data of 316 austenitic stainless steel and AZ31 magnesium alloy are employed to validate the proposed model. It is clearly observed that the scatter band width of crack growth rate vs. strain intensity factor is narrower than that vs. the SIF for different load ranges (which indicates that the strain intensity factor is a better parameter than the stress intensity factor under the fully reversed load condition). It is also shown that the crack growth rate is not uniquely determined by the SIF range even under the same R, but is also influenced by the maximum loading. Additionally, the fatigue life data (strain-life curve) of smooth cylindrical specimens are also used for further comparison, where a modified Paris equation and the equivalent initial flaw size (EIFS) are involved. The results of the proposed method have a better agreement with the experimental data compared to the stress intensity factor based method. 
Overall, the strain intensity factor method shows a fairly good ability to calculate fatigue crack propagation, especially under the fully reversed cyclic loading condition. PMID:28773049
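
    One plausible form of the SIF-to-strain-intensity transformation described above can be sketched with a cyclic Ramberg-Osgood curve; the material constants below are illustrative, not the paper's fitted values for 316 stainless steel:

```python
import math

# Hypothetical sketch of a SIF -> strain-intensity-factor transformation:
# recover the strain range from the stress range through a cyclic
# Ramberg-Osgood curve (nonlinear even at modest stresses), then form a
# strain intensity factor. Constants are illustrative assumptions.

E, Kp, n_prime = 193e3, 1660.0, 0.28   # MPa, MPa, cyclic exponent

def cyclic_strain(d_sigma):
    """Strain range from stress range: de = ds/E + 2*(ds/(2*Kp))**(1/n')."""
    return d_sigma / E + 2.0 * (d_sigma / (2.0 * Kp)) ** (1.0 / n_prime)

def strain_intensity(d_sigma, a):
    """K_eps = E * d_eps * sqrt(pi*a), one common definition (assumed here)."""
    return E * cyclic_strain(d_sigma) * math.sqrt(math.pi * a)

d_sigma, a = 300.0, 0.002              # stress range (MPa), crack length (m)
K_sif = d_sigma * math.sqrt(math.pi * a)   # elastic SIF range
K_eps = strain_intensity(d_sigma, a)       # strain-based counterpart
```

Because the cyclic curve adds a plastic strain term, the strain intensity factor exceeds the purely elastic SIF, capturing the low-cycle-fatigue effect that motivates the method.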

  8. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    PubMed

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not directly applicable to many financial problems. Classical methods estimate the covariance matrix from strict factor models, assuming independent idiosyncratic components; this assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after common factors are taken out, enabling us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique of Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on covariance matrix estimation based on the factor structure is then studied.
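
    The thresholding idea can be sketched in its simplest (hard-threshold) form: after the common-factor part is removed, small off-diagonal entries of the residual covariance are zeroed to enforce sparsity. The matrix and threshold rule below are illustrative, not the paper's adaptive estimator:

```python
# Hypothetical sketch of covariance thresholding in an approximate factor
# model: off-diagonal residual covariances below a threshold are set to
# zero. The residual covariance matrix here is illustrative.

resid_cov = [
    [1.00, 0.40, 0.03, 0.01],
    [0.40, 1.10, 0.02, 0.05],
    [0.03, 0.02, 0.90, 0.35],
    [0.01, 0.05, 0.35, 1.20],
]

def hard_threshold(S, tau):
    """Zero off-diagonal entries with |s_ij| <= tau; keep the diagonal."""
    n = len(S)
    return [[S[i][j] if i == j or abs(S[i][j]) > tau else 0.0
             for j in range(n)] for i in range(n)]

sparse_cov = hard_threshold(resid_cov, tau=0.1)
kept = sum(1 for i in range(4) for j in range(4)
           if i != j and sparse_cov[i][j] != 0.0)
```

The adaptive version of the paper replaces the single tau with entry-specific thresholds, but the sparsification step has this same shape.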

  9. Risk Factors of Falls in Community-Dwelling Older Adults: Logistic Regression Tree Analysis

    ERIC Educational Resources Information Center

    Yamashita, Takashi; Noe, Douglas A.; Bailer, A. John

    2012-01-01

    Purpose of the Study: A novel logistic regression tree-based method was applied to identify fall risk factors and possible interaction effects of those risk factors. Design and Methods: A nationally representative sample of American older adults aged 65 years and older (N = 9,592) in the Health and Retirement Study 2004 and 2006 modules was used.…

  10. Guidelines for Analysis of Socio-Cultural Factors in Health. Volume 4: Socio-Cultural Factors in Health Planning. International Health Planning Methods Series.

    ERIC Educational Resources Information Center

    Fraser, Renee White

    Intended to assist Agency for International Development (AID) officers, advisors, and health officials in incorporating health planning into national plans for economic development, this fourth of ten manuals in the International Health Planning Methods Series deals with sociocultural, psychological, and behavioral factors that affect the planning…

  11. New classification methods on singularity of mechanism

    NASA Astrophysics Data System (ADS)

    Luo, Jianguo; Han, Jianyou

    2010-07-01

    Based on an analysis of the bases and methods for classifying the singularity of mechanisms, four existing classification methods are identified, derived from the moving state of the mechanism, the cause of singularity, the linear-complex property of singularity, and the approach used to study singularity. These bases and methods do not reflect the direct, systematic and controllable properties of the mechanism's structure at the macro level, and thus offer little guidance for evading a configuration before singularity appears. To address these shortcomings, six new classification methods are proposed that connect directly and closely to the structure, external phenomena and motion control of the mechanism; classification is carried out according to the moving base, joint components, executors, branches, actuating sources and input parameters. Because these factors display systematic properties at the macro level, good guidance can be expected for singularity evasion, machine design and machine control based on these new bases and methods.

  12. A new method for testing the scale-factor performance of fiber optical gyroscope

    NASA Astrophysics Data System (ADS)

    Zhao, Zhengxin; Yu, Haicheng; Li, Jing; Li, Chao; Shi, Haiyang; Zhang, Bingxin

    2015-10-01

    The fiber optic gyroscope (FOG) is a solid-state optical gyroscope with good environmental adaptability that has been widely used in national defense, aviation, aerospace and other civilian areas. In some applications a FOG experiences environmental conditions such as vacuum, radiation and vibration, and scale-factor performance is an important accuracy indicator. However, the scale-factor performance of a FOG under these conditions is difficult to test with conventional methods, because a turntable cannot operate in such environments. Based on the observation that the physical effect produced in a FOG by a sawtooth voltage signal under static conditions is consistent with that produced by a turntable in uniform rotation, a new turntable-free method for testing the scale-factor performance of a FOG is proposed in this paper. In this method, the test system consists of an external operational-amplifier circuit and a FOG whose modulation signal and Y waveguide are disconnected. The external circuit superimposes the externally generated sawtooth voltage signal on the modulation signal of the FOG and applies the combined signal to the Y waveguide. The test system can produce different equivalent angular velocities by changing the period of the sawtooth signal. The system model of a FOG with a superimposed external sawtooth is analyzed, leading to the conclusion that the equivalent input angular velocity produced by the sawtooth voltage signal has the same effect as an input angular velocity produced by a turntable. 
The relationship between the equivalent angular velocity and parameters such as the sawtooth period is presented, and a correction method for the equivalent angular velocity is derived by analyzing the influence of each parameter error. A comparative experiment between the proposed method and turntable calibration was conducted, and the scale-factor performance test results of the same FOG using the two methods were consistent. With the proposed method, the input angular velocity is the equivalent effect produced by a sawtooth voltage signal and no turntable is needed to produce mechanical rotation, so the method can be used to test FOG performance under ambient conditions in which a turntable cannot operate.
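
    The equivalence the method exploits can be sketched numerically, assuming the standard serrodyne relation: a sawtooth phase ramp of amplitude 2π and period T accumulates a nonreciprocal phase 2π·τ/T over the loop transit delay τ = nL/c, and equating this with the Sagnac phase gives an equivalent rotation rate. All parameter values are illustrative:

```python
import math

# Hypothetical sketch: equivalent rotation rate of a serrodyne (sawtooth)
# phase ramp in a fiber optic gyro. Equating the serrodyne phase
# 2*pi*tau/T with the Sagnac phase 2*pi*L*D*omega/(lam*c) gives
# omega = n*lam/(D*T). Parameter values are illustrative.

C = 3.0e8                              # speed of light (m/s)

def sagnac_phase(omega, L, D, lam):
    """Sagnac phase shift for rotation rate omega (rad/s)."""
    return 2 * math.pi * L * D * omega / (lam * C)

def equivalent_rate(T, D, lam, n=1.46):
    """Rate whose Sagnac phase equals the serrodyne phase 2*pi*tau/T."""
    return n * lam / (D * T)

L, D, lam = 1000.0, 0.1, 1.55e-6       # fibre length (m), coil diameter (m), wavelength (m)
T = 1.0e-3                             # sawtooth period (s)
omega = equivalent_rate(T, D, lam)     # equivalent angular velocity (rad/s)

tau = 1.46 * L / C                     # loop transit delay (s)
```

Shortening the sawtooth period raises the equivalent rate, which is how the test system sweeps input angular velocity without mechanical rotation.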

  13. Factors influencing contraceptive use and non-use among women of advanced reproductive age in Nigeria.

    PubMed

    Solanke, Bola Lukman

    2017-01-07

    Factors influencing contraceptive use and non-use among women of advanced reproductive age have been insufficiently researched in Nigeria; this study examines them. Secondary data were pooled and extracted from the 2008 and 2013 Nigeria Demographic and Health Surveys (NDHS). The weighted sample size was 14,450 women of advanced reproductive age. The dependent variable was current contraceptive use; the explanatory variables were selected socio-demographic characteristics and three control variables. Analyses were performed using Stata version 12, with multinomial logistic regression applied in four models. The majority of respondents were not using any contraceptive method. The expected risk of using a modern contraceptive relative to a traditional method reduces by a factor of 0.676 for multiparous women (rrr = 0.676; CI: 0.464-0.985) and by a factor of 0.611 for women who want more children (rrr = 0.611; CI: 0.493-0.757); it increases by a factor of 1.637 when maternal education reaches secondary level (rrr = 1.637; CI: 1.173-2.285), by a factor of 1.726 for women in the richest households (rrr = 1.726; CI: 1.038-2.871), and by a factor of 1.250 for southern women (rrr = 1.250; CI: 1.200-1.818). Socio-demographic characteristics exert more influence on non-use than on modern contraceptive use. The scope, content and coverage of existing BCC messages should be extended to cover the contraceptive needs and challenges of women of advanced reproductive age in the country.

  14. Extension of a GIS procedure for calculating the RUSLE equation LS factor

    NASA Astrophysics Data System (ADS)

    Zhang, Hongming; Yang, Qinke; Li, Rui; Liu, Qingrui; Moore, Demie; He, Peng; Ritsema, Coen J.; Geissen, Violette

    2013-03-01

    The Universal Soil Loss Equation (USLE) and revised USLE (RUSLE) are often used to estimate soil erosion at regional landscape scales; however, a major limitation is the difficulty of extracting the LS factor. The geographic information system-based (GIS-based) methods developed for estimating the LS factor for USLE and RUSLE also have limitations. The unit contributing area-based estimation method (UCA) converts slope length to unit contributing area to account for two-dimensional topography, but is not able to predict the different zones of soil erosion and deposition. The flowpath and cumulative cell length-based method (FCL) overcomes this disadvantage but does not consider channel networks and flow convergence in two-dimensional topography. The purpose of this research was to overcome these limitations by extending the FCL method to include channel networks and convergent flow. We developed LS-TOOL in Microsoft's .NET environment using C# with a user-friendly interface. Compared with the LS factors calculated by UCA and FCL, LS-TOOL delivers encouraging results; in particular, LS-TOOL uses breaks in slope identified from the DEM to locate soil erosion and deposition zones, channel networks and convergent flow areas. Comparing slope length and LS factor values generated by LS-TOOL with manual methods, LS-TOOL corresponds more closely with the reality of the Xiannangou catchment than UCA or FCL. The LS-TOOL algorithm automatically calculates slope length, slope steepness, the L and S factors, and the combined LS factor, providing the results as ASCII files that can easily be used in GIS software. This study is an important step toward more accurate large-area erosion evaluation.
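
    The per-cell L and S factor equations that such tools implement can be sketched using the standard RUSLE (McCool et al.) forms; the slope and slope-length values below are illustrative, not catchment data:

```python
import math

# Sketch of the standard RUSLE slope-length (L) and slope-steepness (S)
# factor equations evaluated per grid cell. Input values are illustrative.

def s_factor(slope_rad):
    """S factor: two regimes split at a 9% slope."""
    if math.tan(slope_rad) < 0.09:
        return 10.8 * math.sin(slope_rad) + 0.03
    return 16.8 * math.sin(slope_rad) - 0.5

def l_factor(slope_len_m, slope_rad):
    """L factor with the variable length exponent m = beta/(1+beta)."""
    beta = (math.sin(slope_rad) / 0.0896) / \
           (3.0 * math.sin(slope_rad) ** 0.8 + 0.56)
    m = beta / (1.0 + beta)
    return (slope_len_m / 22.13) ** m   # 22.13 m is the unit-plot length

theta = math.atan(0.15)                 # a 15% slope, illustrative
LS = l_factor(40.0, theta) * s_factor(theta)
```

A GIS implementation applies these formulas cell by cell after deriving slope length and steepness from the DEM, which is where the channel-network and convergence handling discussed above comes in.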

  15. Franck-Condon Factors for Diatomics: Insights and Analysis Using the Fourier Grid Hamiltonian Method

    ERIC Educational Resources Information Center

    Ghosh, Supriya; Dixit, Mayank Kumar; Bhattacharyya, S. P.; Tembe, B. L.

    2013-01-01

    Franck-Condon factors (FCFs) play a crucial role in determining the intensities of the vibrational bands in electronic transitions. In this article, a relatively simple method to calculate the FCFs is illustrated. An algorithm for the Fourier Grid Hamiltonian (FGH) method for computing the vibrational wave functions and the corresponding energy…

  16. ADHD and Method Variance: A Latent Variable Approach Applied to a Nationally Representative Sample of College Freshmen

    ERIC Educational Resources Information Center

    Konold, Timothy R.; Glutting, Joseph J.

    2008-01-01

    This study employed a correlated trait-correlated method application of confirmatory factor analysis to disentangle trait and method variance from measures of attention-deficit/hyperactivity disorder obtained at the college level. The two trait factors were "Diagnostic and Statistical Manual of Mental Disorders-Fourth Edition" ("DSM-IV")…

  17. Non-negative matrix factorization in texture feature for classification of dementia with MRI data

    NASA Astrophysics Data System (ADS)

    Sarwinda, D.; Bustamam, A.; Ardaneswari, G.

    2017-07-01

    This paper investigates the application of non-negative matrix factorization (NMF) as a feature selection method, selecting features derived from the gray level co-occurrence matrix, and uses the proposed approach to classify dementia from MRI data. Texture analysis using the gray level co-occurrence matrix is performed for feature extraction, yielding seven features from the MRI data. NMF then selects the three most influential of the extracted features. A Naïve Bayes classifier is adapted to classify dementia, i.e. Alzheimer's disease, Mild Cognitive Impairment (MCI) and normal controls. The experimental results show that NMF as a feature selection method is able to achieve an accuracy of 96.4% for classification of Alzheimer's disease versus normal controls. The proposed method is also compared with other feature selection methods, i.e. Principal Component Analysis (PCA).
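
    The NMF-based selection idea can be sketched end to end: factorize the samples-by-features matrix V ≈ W·H with multiplicative updates, then rank features by their total loading in H. The data are random illustrative values, not GLCM textures:

```python
import random

# Hypothetical sketch of NMF as a feature selector: factorize V ~ W*H
# (all entries non-negative) and rank features by column sums of H.
# Multiplicative updates in pure Python; data are illustrative.

random.seed(0)
n, f, k = 6, 7, 3          # samples, features, rank

# Feature 0 is made dominant, so it should rank among the top features.
V = [[(5.0 if j == 0 else 1.0) * random.random() + 0.1 for j in range(f)]
     for _ in range(n)]
W = [[random.random() + 0.1 for _ in range(k)] for _ in range(n)]
H = [[random.random() + 0.1 for _ in range(f)] for _ in range(k)]

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

for _ in range(200):
    WH = matmul(W, H)
    num, den = matmul(transpose(W), V), matmul(transpose(W), WH)
    H = [[H[a][b] * num[a][b] / (den[a][b] + 1e-9) for b in range(f)]
         for a in range(k)]                       # H <- H * (W^T V)/(W^T W H)
    WH = matmul(W, H)
    num, den = matmul(V, transpose(H)), matmul(WH, transpose(H))
    W = [[W[a][b] * num[a][b] / (den[a][b] + 1e-9) for b in range(k)]
         for a in range(n)]                       # W <- W * (V H^T)/(W H H^T)

scores = [sum(H[a][j] for a in range(k)) for j in range(f)]
top3 = sorted(range(f), key=lambda j: -scores[j])[:3]  # selected features
```

Multiplicative updates keep all entries non-negative, so the basis loadings in H can be read directly as (crude) feature-importance scores.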

  18. Factor models for cancer signatures

    NASA Astrophysics Data System (ADS)

    Kakushadze, Zura; Yu, Willie

    2016-11-01

    We present a novel method for extracting cancer signatures by applying statistical risk models (http://ssrn.com/abstract=2732453) from quantitative finance to cancer genome data. Using 1389 whole-genome sequenced samples from 14 cancers, we identify an "overall" mode of somatic mutational noise. We give a prescription for factoring out this noise and source code for fixing the number of signatures. We apply nonnegative matrix factorization (NMF) to genome data aggregated by cancer subtype and filtered using our method. The resultant signatures have substantially lower variability than those from unfiltered data, and the computational cost of signature extraction is cut by about a factor of 10. We find 3 novel cancer signatures, including a liver cancer dominant signature (96% contribution) and a renal cell carcinoma signature (70% contribution). Our method accelerates the discovery of new cancer signatures and improves their overall stability. Reciprocally, these methods for extracting cancer signatures could have interesting applications in quantitative finance.

  19. G = MAT: linking transcription factor expression and DNA binding data.

    PubMed

    Tretyakov, Konstantin; Laur, Sven; Vilo, Jaak

    2011-01-31

    Transcription factors are proteins that bind to motifs on the DNA and thus affect gene expression regulation. The qualitative description of the corresponding processes is therefore important for a better understanding of essential biological mechanisms. However, wet lab experiments targeted at the discovery of the regulatory interplay between transcription factors and binding sites are expensive. We propose a new, purely computational method for finding putative associations between transcription factors and motifs. This method is based on a linear model that combines sequence information with expression data. We present various methods for model parameter estimation and show, via experiments on simulated data, that these methods are reliable. Finally, we examine the performance of this model on biological data and conclude that it can indeed be used to discover meaningful associations. The developed software is available as a web tool and Scilab source code at http://biit.cs.ut.ee/gmat/.

  1. Conditional Random Fields for Fast, Large-Scale Genome-Wide Association Studies

    PubMed Central

    Huang, Jim C.; Meek, Christopher; Kadie, Carl; Heckerman, David

    2011-01-01

    Understanding the role of genetic variation in human diseases remains an important problem in genomics. An important component of such variation consists of variations at single sites in DNA, or single nucleotide polymorphisms (SNPs). Typically, the problem of associating particular SNPs with phenotypes has been confounded by hidden factors such as population structure, family structure or cryptic relatedness in the sample of individuals being analyzed. Such confounding factors lead to a large number of spurious associations and missed associations. Various statistical methods have been proposed to account for these factors, such as linear mixed-effect models (LMMs) or methods that adjust data based on a principal components analysis (PCA), but these methods either suffer from low power or cease to be tractable for larger samples. Here we present a statistical model for conducting genome-wide association studies (GWAS) that accounts for such confounding factors. Our method's runtime scales quadratically in the number of individuals studied, with only a modest loss in statistical power compared to LMM-based and PCA-based methods when tested on synthetic data generated from a generalized LMM. Applying our method to both real and synthetic human genotype/phenotype data, we demonstrate its ability to correct for confounding factors while requiring significantly less runtime than LMMs. We have implemented methods for fitting these models, which are available at http://www.microsoft.com/science. PMID:21765897

  2. A brief measure of attitudes toward mixed methods research in psychology

    PubMed Central

    Roberts, Lynne D.; Povee, Kate

    2014-01-01

    The adoption of mixed methods research in psychology has trailed behind other social science disciplines. Teaching psychology students, academics, and practitioners about mixed methodologies may increase the use of mixed methods within the discipline. However, tailoring and evaluating education and training in mixed methodologies requires an understanding of, and a way of measuring, attitudes toward mixed methods research in psychology. To date, no such measure exists. In this article we present the development and initial validation of a new measure: Attitudes toward Mixed Methods Research in Psychology. A pool of 42 items developed from previous qualitative research on attitudes toward mixed methods research, along with validation measures, was administered via an online survey to a convenience sample of 274 psychology students, academics and psychologists. Principal axis factoring with varimax rotation on a subset of the sample produced a four-factor, 12-item solution. Confirmatory factor analysis on a separate subset of the sample indicated that a higher-order four-factor model provided the best fit to the data. The four factors ('Limited Exposure,' '(in)Compatibility,' 'Validity,' and 'Tokenistic Qualitative Component') each have acceptable internal reliability. Known-groups validity analyses based on preferred research orientation and self-rated mixed methods research skills, and convergent and divergent validity analyses based on measures of attitudes toward psychology as a science and scientist and practitioner orientation, provide initial validation of the measure. This brief, internally reliable measure can be used in assessing attitudes toward mixed methods research in psychology, measuring change in attitudes as part of the evaluation of mixed methods education, and in larger research programs. PMID:25429281

  3. Automated processing of first-pass radionuclide angiocardiography by factor analysis of dynamic structures.

    PubMed

    Cavailloles, F; Bazin, J P; Capderou, A; Valette, H; Herbert, J L; Di Paola, R

    1987-05-01

    A method for automatic processing of cardiac first-pass radionuclide studies is presented. This technique, factor analysis of dynamic structures (FADS), provides an automatic separation of anatomical structures according to their different temporal behaviour, even if they are superimposed. FADS has been applied to 76 studies. A description of the factor patterns obtained in various pathological categories is presented. FADS provides easy diagnosis of shunts and tricuspid insufficiency. Quantitative information derived from the factors (cardiac output and mean transit time) was compared to that obtained by the region-of-interest method. Using FADS, a higher correlation with cardiac catheterization was found for cardiac output calculation. Thus, compared to the ROI method, FADS presents obvious advantages: a good separation of overlapping cardiac chambers is obtained, and this operator-independent method provides more objective and reproducible results. A number of parameters of the cardio-pulmonary function can be assessed by first-pass radionuclide angiocardiography (RNA) [1,2]. Usually, they are calculated using time-activity curves (TAC) from regions of interest (ROI) drawn on the cardiac chambers and the lungs. This method has two main drawbacks: (1) the lack of inter- and intra-observer reproducibility; (2) the problem of crosstalk, which affects the evaluation of the cardio-pulmonary performance. The crosstalk on planar imaging is due to anatomical superimposition of the cardiac chambers and lungs. The activity measured in any ROI is the sum of the activity in several organs, and 'decontamination' of the TAC cannot easily be performed using the ROI method [3]. Factor analysis of dynamic structures (FADS) [4,5] can solve the two problems mentioned above. It provides an automatic separation of anatomical structures according to their different temporal behaviour, even if they are superimposed.
The resulting factors are estimates of the time evolution of the activity in each structure (underlying physiological components), and the associated factor images are estimates of the spatial distribution of each factor. The aim of this study was to assess the reliability of FADS in first pass RNA and compare the results to those obtained by the ROI method which is generally considered as the routine procedure.
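
    FADS is a specific factor-analysis algorithm; as a loose analogue of separating superimposed structures by their temporal behaviour, the sketch below factorizes a synthetic frame-by-pixel matrix with rank-2 non-negative matrix factorization (multiplicative updates). The time-activity curves and overlapping spatial supports are invented for illustration and this is not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic dynamic study: 60 frames x 100 pixels built from two
# overlapping "structures" with distinct time-activity curves.
t = np.linspace(0, 1, 60)
tac1 = np.exp(-3 * t)                    # fast washout
tac2 = t * np.exp(-1.5 * t)              # delayed uptake
img1 = np.zeros(100); img1[20:60] = 1.0  # spatial supports overlap
img2 = np.zeros(100); img2[40:80] = 1.0
Y = np.outer(tac1, img1) + np.outer(tac2, img2)
Y += 0.01 * rng.random(Y.shape)

# Rank-2 NMF by multiplicative updates: Y ~ W @ H, where W holds the
# estimated temporal factors and H the associated factor images.
k = 2
W = rng.random((Y.shape[0], k)) + 0.1
H = rng.random((k, Y.shape[1])) + 0.1
for _ in range(500):
    H *= (W.T @ Y) / (W.T @ W @ H + 1e-12)
    W *= (Y @ H.T) / (W @ H @ H.T + 1e-12)

err = np.linalg.norm(Y - W @ H) / np.linalg.norm(Y)
print(err)
```

    The small reconstruction error shows that the superimposed structures are recoverable from their temporal signatures alone, which is the premise FADS exploits.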

  4. An investigation into the psychometric properties of the Hospital Anxiety and Depression Scale in patients with breast cancer

    PubMed Central

    Rodgers, Jacqui; Martin, Colin R; Morse, Rachel C; Kendell, Kate; Verrill, Mark

    2005-01-01

    Background To determine the psychometric properties of the Hospital Anxiety and Depression Scale (HADS) in patients with breast cancer and determine the suitability of the instrument for use with this clinical group. Methods A cross-sectional design was used. The study used a pooled data set from three breast cancer clinical groups. The dependent variables were HADS anxiety and depression sub-scale scores. Exploratory and confirmatory factor analyses were conducted on the HADS to determine its psychometric properties in 110 patients with breast cancer. Seven models were tested to determine model fit to the data. Results Both factor analysis methods indicated that three-factor models provided a better fit to the data compared to two-factor (anxiety and depression) models for breast cancer patients. Clark and Watson's three factor tripartite and three factor hierarchical models provided the best fit. Conclusion The underlying factor structure of the HADS in breast cancer patients comprises three distinct, but correlated factors, negative affectivity, autonomic anxiety and anhedonic depression. The clinical utility of the HADS in screening for anxiety and depression in breast cancer patients may be enhanced by using a modified scoring procedure based on a three-factor model of psychological distress. This proposed alternate scoring method involving regressing autonomic anxiety and anhedonic depression factors onto the third factor (negative affectivity) requires further investigation in order to establish its efficacy. PMID:16018801

  5. On the stability analysis of approximate factorization methods for 3D Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1993-01-01

    The convergence characteristics of various approximate factorizations for the 3D Euler and Navier-Stokes equations are examined using the von Neumann stability analysis method. Three upwind-difference based factorizations and several central-difference based factorizations are considered for the Euler equations. In the upwind factorizations, both the flux-vector splitting methods of Steger and Warming and of van Leer are considered. Analysis of the Navier-Stokes equations is performed only on the Beam and Warming central-difference scheme. The range of CFL numbers over which each factorization is stable is presented for one-, two-, and three-dimensional flow. Also presented for each factorization is the CFL number at which the maximum eigenvalue is minimized, for all Fourier components, as well as for the high-frequency range only. The latter is useful for predicting the effectiveness of multigrid procedures with these schemes as smoothers. Further, local mode analysis is performed to test the suitability of using a uniform flow field in the stability analysis. Some inconsistencies in the results from previous analyses are resolved.
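
    A one-dimensional textbook version of the von Neumann analysis used above (not the 3D factorizations analyzed in the paper): for first-order upwind applied to linear advection u_t + a u_x = 0, the amplification factor of the Fourier mode exp(i*theta*j) is g(theta) = 1 - c(1 - exp(-i*theta)) with CFL number c = a*dt/dx, and the scheme is stable exactly when max|g| <= 1, i.e. for 0 <= c <= 1:

```python
import numpy as np

# Sweep the Fourier phase angle theta over [0, 2*pi] and report the
# largest amplification-factor magnitude for a given CFL number c.
theta = np.linspace(0, 2 * np.pi, 721)

def max_amplification(c):
    g = 1 - c * (1 - np.exp(-1j * theta))
    return np.abs(g).max()

# Stable inside the CFL limit, unstable beyond it.
print(max_amplification(0.5), max_amplification(1.2))
```

    The same recipe (insert a Fourier mode, bound the spectral radius of the amplification matrix over all phase angles) is what yields the stable CFL ranges tabulated in the paper.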

  6. Stress-intensity factors for small surface and corner cracks in plates

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Atluri, S. N.; Newman, J. C., Jr.

    1988-01-01

    Three-dimensional finite-element and finite-element alternating methods were used to obtain the stress-intensity factors for small surface and corner cracked plates subjected to remote tension and bending loads. The crack-depth-to-crack-length ratios (a/c) ranged from 0.2 to 1 and the crack-depth-to-plate-thickness ratios (a/t) ranged from 0.05 to 0.2. The performance of the finite-element alternating method was studied on these crack configurations. A study of the computational effort involved in the finite-element alternating method showed that several crack configurations could be analyzed with a single rectangular mesh idealization, whereas the conventional finite-element method requires a different mesh for each configuration. The stress-intensity factors obtained with the finite-element alternating method agreed well (within 5 percent) with those calculated from the finite-element method with singularity elements.

  7. Measuring signal-to-noise ratio in partially parallel imaging MRI

    PubMed Central

    Goerner, Frank L.; Clarke, Geoffrey D.

    2011-01-01

    Purpose: To assess five different methods of signal-to-noise ratio (SNR) measurement for partially parallel imaging (PPI) acquisitions. Methods: Measurements were performed on a spherical phantom and three volunteers using a multichannel head coil on a clinical 3T MRI system to produce echo planar, fast spin echo, gradient echo, and balanced steady state free precession image acquisitions. Two different PPI acquisitions, the generalized autocalibrating partially parallel acquisition algorithm and modified sensitivity encoding, with acceleration factors (R) of 2–4, were evaluated and compared to nonaccelerated acquisitions. Five standard SNR measurement techniques were investigated, and Bland–Altman analysis was used to determine agreement between the various SNR methods. The estimated g-factor values, associated with each method of SNR calculation and PPI reconstruction method, were also subjected to assessments that considered the effects on SNR due to reconstruction method, phase encoding direction, and R-value. Results: Only two SNR measurement methods produced g-factors in agreement with theoretical expectations (g ≥ 1). Bland–Altman tests demonstrated that these two methods also gave the most similar results relative to the other three measurements. R-value was the only factor of the three we considered that showed significant influence on SNR changes. Conclusions: Non-signal methods used in SNR evaluation do not produce results consistent with expectations in the investigated PPI protocols. Two of the methods studied provided the most accurate and useful results. Of these two methods, it is recommended that, when evaluating PPI protocols, the image subtraction method be used for SNR calculations due to its relative accuracy and ease of implementation. PMID:21978049
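
    The image-subtraction SNR method recommended above can be sketched as follows: signal is taken from the mean of two repeated acquisitions and noise from their difference image, whose standard deviation is sqrt(2) times the single-image noise. The phantom values below are simulated, not from the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two repeated acquisitions of the same object with independent noise.
signal, sigma = 100.0, 5.0
img1 = signal + rng.normal(0, sigma, size=(64, 64))
img2 = signal + rng.normal(0, sigma, size=(64, 64))

# Subtraction method: signal from the mean image, noise from the
# difference image (std of the difference is sigma*sqrt(2)).
mean_roi = (0.5 * (img1 + img2)).mean()
noise = (img1 - img2).std(ddof=1) / np.sqrt(2)
snr = mean_roi / noise
print(snr)  # close to the true SNR of 100/5 = 20
```

    Because the object cancels in the difference image, this estimate is robust to structured signal, which is one reason it transfers well to PPI reconstructions with spatially varying noise.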

  8. Measurement and evaluation practices of factors that contribute to effective health promotion collaboration functioning: A scoping review.

    PubMed

    Stolp, Sean; Bottorff, Joan L; Seaton, Cherisse L; Jones-Bricker, Margaret; Oliffe, John L; Johnson, Steven T; Errey, Sally; Medhurst, Kerensa; Lamont, Sonia

    2017-04-01

    The purpose of this scoping review was to identify promising factors that underpin effective health promotion collaborations, measurement approaches, and evaluation practices. Measurement approaches and evaluation practices employed in 14 English-language articles published between January 2001 and October 2015 were considered. Data extraction included research design, health focus of the collaboration, factors being evaluated, how factors were conceptualized and measured, and outcome measures. Studies were methodologically diverse employing either quantitative methods (n=9), mixed methods (n=4), or qualitative methods (n=1). In total, these 14 studies examined 113 factors, 88 of which were only measured once. Leadership was the most commonly studied factor but was conceptualized differently across studies. Six factors were significantly associated with outcome measures across studies; leadership (n=3), gender (n=2), trust (n=2), length of the collaboration (n=2), budget (n=2) and changes in organizational model (n=2). Since factors were often conceptualized differently, drawing conclusions about their impact on collaborative functioning remains difficult. The use of reliable and validated tools would strengthen evaluation of health promotion collaborations and would support and enhance the effectiveness of collaboration. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. On-line method of determining utilization factor in Hg-196 photochemical separation process

    DOEpatents

    Grossman, Mark W.; Moskowitz, Philip E.

    1992-01-01

    The present invention is directed to a method for determining the utilization factor [U] in a photochemical mercury enrichment process (.sup.196 Hg) by measuring relative .sup.196 Hg densities using absorption spectroscopy.

  10. Determination of small-field correction factors for cylindrical ionization chambers using a semiempirical method

    NASA Astrophysics Data System (ADS)

    Park, Kwangwoo; Bak, Jino; Park, Sungho; Choi, Wonhoon; Park, Suk Won

    2016-02-01

    A semiempirical method based on the averaging effect of the sensitive volumes of different air-filled ionization chambers (ICs) was employed to approximate the correction factors for beam quality arising from the difference in the sizes of the reference field and small fields. We measured the output factors using several cylindrical ICs and calculated the correction factors using a mathematical method similar to deconvolution, in which we modeled the variable and inhomogeneous energy fluence function within the chamber cavity. The parameters of the modeled function and the correction factors were determined by solving a developed system of equations, on the basis of the measurement data and the geometry of the chambers. Further, Monte Carlo (MC) computations were performed using the Monaco® treatment planning system to validate the proposed method. The determined correction factors k_{Q_msr,Q}^{f_smf,f_ref} were comparable to the values derived from the MC computations performed using Monaco®. For example, for a 6 MV photon beam and a field size of 1 × 1 cm², k_{Q_msr,Q}^{f_smf,f_ref} was calculated to be 1.125 for a PTW 31010 chamber and 1.022 for a PTW 31016 chamber. On the other hand, the corresponding values determined from the MC computations were 1.121 and 1.031, respectively; the difference between the proposed method and the MC computation is less than 2%. In addition, we determined the k_{Q_msr,Q}^{f_smf,f_ref} values for PTW 30013, PTW 31010, PTW 31016, IBA FC23-C, and IBA CC13 chambers as well. We devised a method for determining k_{Q_msr,Q}^{f_smf,f_ref} from both the measurement of the output factors and model-based mathematical computation. The proposed method can be useful in cases where MC simulation is not applicable in the clinical setting.
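
    The defining relation behind such output correction factors, with purely illustrative numbers (not the paper's data): the factor converts a measured chamber-reading ratio into the true dose ratio between the small field and the reference field:

```python
# Small-field output correction factor:
#   k = (D_smf / D_ref) / (M_smf / M_ref)
# where D are doses per monitor unit (e.g. from Monte Carlo) and M are
# ionization chamber readings. All values below are illustrative.
D_smf, D_ref = 0.652, 1.000   # dose ratio, arbitrary units
M_smf, M_ref = 0.580, 1.000   # chamber readings
k = (D_smf / D_ref) / (M_smf / M_ref)
print(round(k, 3))  # ~1.124: the chamber under-responds in the small field
```

    A k above 1 means the chamber's finite sensitive volume averages over the steep small-field dose profile and under-reads, which is why larger-cavity chambers (e.g. the PTW 31010 above) need larger corrections than smaller ones.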

  11. Motivating Teachers towards Expertise Development: A Mixed-Methods Study of the Relationships between School Culture, Internal Factors, and State of Flow

    ERIC Educational Resources Information Center

    Mayeaux, Amanda Shuford

    2013-01-01

    The purpose of this sequential mixed-methods research was to discover the impact school culture, internal factors, and the state of flow has upon motivating a teacher to develop teaching expertise. This research was designed to find answers concerning why and how individual teachers can nurture their existing internal factors to increase their…

  12. Examining the Factor Structure and Discriminant Validity of the 12-Item General Health Questionnaire (GHQ-12) Among Spanish Postpartum Women

    ERIC Educational Resources Information Center

    Aguado, Jaume; Campbell, Alistair; Ascaso, Carlos; Navarro, Purificacion; Garcia-Esteve, Lluisa; Luciano, Juan V.

    2012-01-01

    In this study, the authors tested alternative factor models of the 12-item General Health Questionnaire (GHQ-12) in a sample of Spanish postpartum women, using confirmatory factor analysis. The authors report the results of modeling three different methods for scoring the GHQ-12 using estimation methods recommended for categorical and binary data.…

  13. Nodal weighting factor method for ex-core fast neutron fluence evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, R. T.

    The nodal weighting factor method is developed for evaluating ex-core fast neutron flux in a nuclear reactor by utilizing the adjoint neutron flux, a fictitious unit detector cross section for neutron energy above 1 or 0.1 MeV, the unit fission source, and relative assembly nodal powers. The method determines each nodal weighting factor by solving the steady-state adjoint neutron transport equation with the fictitious unit detector cross section as the adjoint source, integrating the unit fission source, weighted by a typical fission spectrum, against the solved adjoint flux over all energies, all angles and the given nodal volume, and dividing by the sum of all nodal weighting factors, which serves as a normalization factor. The fast neutron flux can then be obtained by summing, over an operating period, the relative nodal powers times the corresponding nodal weighting factors of the adjacent, significantly contributing peripheral assembly nodes, times a proper fast neutron attenuation coefficient. A generic set of nodal weighting factors can be used to evaluate neutron fluence at the same location for similar core designs and fuel cycles, but the set of nodal weighting factors needs to be re-calibrated for a transition fuel cycle. This newly developed nodal weighting factor method should be a useful and simplified tool for evaluating fast neutron fluence at selected locations of interest in ex-core components of contemporary nuclear power reactors.
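
    A minimal numeric sketch of the combination step described above, with invented weights and powers: adjoint-based nodal weights are normalized, combined with the relative nodal powers of the peripheral assemblies, and scaled by an attenuation coefficient:

```python
import numpy as np

# Illustrative values only (not from the paper): four peripheral
# assembly nodes contributing to one ex-core detector location.
w_raw = np.array([4.1, 2.7, 1.3, 0.6])   # adjoint-derived nodal weights
w = w_raw / w_raw.sum()                  # normalization by the weight sum
P = np.array([1.10, 0.95, 0.88, 0.80])   # relative nodal powers
attenuation = 0.85                       # fast-neutron attenuation coefficient

# Relative ex-core fast flux for this operating state.
flux_rel = attenuation * (P * w).sum()
print(flux_rel)
```

    Because the weights are fixed by the adjoint solution, re-evaluating the flux for a new cycle only requires the new nodal powers, which is the labor-saving point of the method.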

  14. Representation learning via Dual-Autoencoder for recommendation.

    PubMed

    Zhuang, Fuzhen; Zhang, Zhiqiang; Qian, Mingda; Shi, Chuan; Xie, Xing; He, Qing

    2017-06-01

    Recommendation has attracted a vast amount of attention and research in recent decades. Most previous works employ matrix factorization techniques to learn the latent factors of users and items, and many subsequent works consider external information, e.g., users' social relationships and items' attributes, to improve recommendation performance under the matrix factorization framework. However, matrix factorization methods may not make full use of the limited information from rating or check-in matrices, and may achieve unsatisfying results. Recently, deep learning has proven able to learn good representations in natural language processing, image classification, and so on. Along this line, we propose a new representation learning framework called Recommendation via Dual-Autoencoder (ReDa). In this framework, we simultaneously learn new hidden representations of users and items using autoencoders, and minimize the deviations of the training data from the learnt representations of users and items. Based on this framework, we develop a gradient descent method to learn the hidden representations. Extensive experiments conducted on several real-world data sets demonstrate the effectiveness of our proposed method compared with state-of-the-art matrix factorization based methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
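
    As a hedged sketch of the matrix-factorization baseline that ReDa is compared against (not ReDa itself), the code below fits R ~ U @ V.T by gradient descent on the observed entries of a synthetic rating matrix:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic low-rank "ratings" with half the entries observed.
n_users, n_items, k = 30, 20, 3
U_true = rng.normal(size=(n_users, k))
V_true = rng.normal(size=(n_items, k))
R = U_true @ V_true.T
mask = rng.random(R.shape) < 0.5          # observed-entry indicator

# Gradient descent on squared error over observed entries, with a
# small L2 regularizer on both factor matrices.
U = 0.1 * rng.normal(size=(n_users, k))
V = 0.1 * rng.normal(size=(n_items, k))
lr, reg = 0.02, 0.01
for _ in range(2000):
    E = mask * (R - U @ V.T)              # error restricted to observed entries
    U += lr * (E @ V - reg * U)
    V += lr * (E.T @ U - reg * V)

rmse = np.sqrt(((mask * (R - U @ V.T)) ** 2).sum() / mask.sum())
print(rmse)
```

    ReDa replaces the bilinear U @ V.T map with autoencoder-learned representations of users and items; the training objective (reconstruct observed interactions) is analogous.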

  15. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    PubMed

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed analyte concentration to the observed urinary creatinine concentration (UCR). This ratio-based method is flawed, since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors, like age, gender, and race/ethnicity, that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of the ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group in the numerator of this ratio (for example, males), these ratios were higher for the model-based method. When estimated UCRs were lower for the group in the numerator (for example, NHW), these ratios were higher for the ratio-based method. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
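
    The two corrections can be contrasted on simulated data: the ratio method divides by UCR and so inherits any covariate effect on UCR itself, while the model-based method enters log(UCR) as a regressor. The data-generating assumptions (a sex effect on UCR, a sub-unity hydration exponent) are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated cohort: UCR depends on sex; the analyte tracks hydration
# through UCR with exponent 0.7 and has NO true sex effect.
n = 500
sex = rng.integers(0, 2, n)                      # 0 = female, 1 = male
ucr = np.exp(rng.normal(0.4 * sex, 0.3))         # males excrete more creatinine
analyte = np.exp(rng.normal(1.0, 0.3, n)) * ucr ** 0.7

# Ratio-based correction: a spurious sex gap appears because dividing
# by UCR over-corrects when the true exponent is below 1.
log_ratio = np.log(analyte / ucr)
diff = log_ratio[sex == 1].mean() - log_ratio[sex == 0].mean()

# Model-based correction: regress log(analyte) on log(UCR) and sex;
# the sex coefficient is near zero once UCR is modeled.
X = np.column_stack([np.ones(n), np.log(ucr), sex])
beta, *_ = np.linalg.lstsq(X, np.log(analyte), rcond=None)
print(diff, beta[2])
```

    The regression also recovers the hydration exponent (~0.7) instead of forcing it to 1, which is the flaw of the ratio method the abstract describes.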

  16. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied. PMID:22661790
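
    A hedged sketch of the factor-plus-sparse-error idea (in the spirit of the approach described, not the authors' exact estimator): remove the leading principal components as the common-factor part of the sample covariance, then soft-threshold the residual covariance. The threshold level here is a fixed illustrative constant rather than the adaptive, entry-dependent one:

```python
import numpy as np

rng = np.random.default_rng(5)

# Data with a 2-factor structure plus independent idiosyncratic noise.
p, n, k = 40, 200, 2
B = rng.normal(size=(p, k))               # factor loadings
F = rng.normal(size=(n, k))               # factor realizations
X = F @ B.T + rng.normal(size=(n, p))

# Split the sample covariance into a rank-k factor part and a residual.
S = np.cov(X, rowvar=False)
vals, vecs = np.linalg.eigh(S)            # eigenvalues in ascending order
lead = vecs[:, -k:]
factor_part = lead @ np.diag(vals[-k:]) @ lead.T
R = S - factor_part

# Soft-threshold off-diagonal residual entries; keep variances intact.
tau = 0.2
R_thr = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)
np.fill_diagonal(R_thr, np.diag(R))
Sigma_hat = factor_part + R_thr

sparsity = (R_thr[~np.eye(p, dtype=bool)] == 0).mean()
print(sparsity)
```

    Most off-diagonal residual entries are pure sampling noise and get zeroed, so the final estimator is low-rank plus sparse, mirroring the structure the paper assumes.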

  17. An approach to compute the C factor for universal soil loss equation using EOS-MODIS vegetation index (VI)

    NASA Astrophysics Data System (ADS)

    Li, Hui; He, Huizhong; Chen, Xiaoling; Zhang, Lihua

    2008-12-01

    The C factor, known as the cover and management factor in the USLE, is one of the most important factors, since it represents the combined effects of plant cover, soil cover and management on erosion; it is also among the variables most easily changed by human activity, and it is time-variant and uncertain by nature. It is therefore vital to compute the C factor properly in order to model erosion effectively. In this paper we present a new method for calculating the C value using a Vegetation Index (VI) derived from multi-temporal MODIS imagery, which can estimate the C factor in a more scientific way. Based on the premise that the C factor is strongly correlated with VI, the average annual C value is estimated by adding the VI values of three growth phases within a year with different weights. The Modified Fournier Index (MFI) is employed to determine the weight of each growth phase, since vegetation growth and agricultural activities are significantly influenced by precipitation. The C values generated by the proposed method were compared with those of another method, and the results showed that the two are highly correlated. This study helps extract the C value from satellite data in a scientific and efficient way, which in turn could facilitate the prediction of erosion.
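
    The weighting scheme described above can be sketched numerically. The monthly precipitation values, the three-phase split, and the simple C = 1 - NDVI proxy below are all illustrative assumptions, not the paper's calibration:

```python
# Modified Fournier Index: each month contributes p_i^2 / P, where P is
# annual precipitation. Illustrative monthly precipitation (mm):
monthly_precip = [10, 12, 30, 60, 110, 150, 180, 160, 90, 40, 20, 12]
annual = sum(monthly_precip)
mfi = [p * p / annual for p in monthly_precip]

# Three growth phases (months 1-4, 5-8, 9-12); each phase's weight is
# its share of the total MFI, so rainy phases dominate the annual C.
phases = [mfi[0:4], mfi[4:8], mfi[8:12]]
weights = [sum(ph) / sum(mfi) for ph in phases]

# Per-phase C estimated from the phase's mean VI (here a crude 1 - NDVI
# proxy: denser vegetation means lower erosion susceptibility).
ndvi = [0.25, 0.65, 0.45]
c_phase = [1 - v for v in ndvi]
c_annual = sum(w * c for w, c in zip(weights, c_phase))
print(round(c_annual, 3))
```

    The rainy mid-year phase carries most of the MFI weight, so its (low) C value dominates the annual estimate, which is the intended effect of precipitation weighting.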

  18. Comparison of Seven Methods for Boolean Factor Analysis and Their Evaluation by Information Gain.

    PubMed

    Frolov, Alexander A; Húsek, Dušan; Polyakov, Pavel Yu

    2016-03-01

    A common task in large data set analysis is searching for an appropriate data representation in a space of fewer dimensions. One of the most efficient methods to solve this task is factor analysis. In this paper, we compare seven methods for Boolean factor analysis (BFA) in solving the so-called bars problem (BP), which is a BFA benchmark. The performance of the methods is evaluated by means of information gain. Study of the results obtained in solving BPs of different levels of complexity has allowed us to reveal the strengths and weaknesses of these methods. It is shown that the Likelihood maximization Attractor Neural Network with Increasing Activity (LANNIA) is the most efficient BFA method in solving BP in many cases. The efficacy of the LANNIA method is also shown when applied to real data from the Kyoto Encyclopedia of Genes and Genomes database, which contains full genome sequencing for 1368 organisms, and to the text data set R52 (from Reuters 21578), typically used for label categorization.
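
    For orientation, the bars-problem benchmark that such BFA methods are evaluated on can be generated as below (data generation only, not any of the seven compared methods): each image is the Boolean OR of a random subset of horizontal and vertical bars, and a BFA method must recover the individual bars as factors. The grid size and bar-activation probability are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)

# Build the 2*size "bar" factors of a size x size grid.
size = 6
bar_list = []
for i in range(size):
    h = np.zeros((size, size), dtype=bool); h[i, :] = True; bar_list.append(h)
    v = np.zeros((size, size), dtype=bool); v[:, i] = True; bar_list.append(v)
bars = np.array(bar_list).reshape(2 * size, -1)   # 12 factors x 36 pixels

# Each observation is the Boolean OR of independently active bars.
n = 500
usage = rng.random((n, 2 * size)) < 0.15          # each bar active w.p. 0.15
X = (usage.astype(int) @ bars.astype(int)) > 0    # OR composition

print(X.shape, X.mean())
```

    The Boolean (OR) mixing is what distinguishes BFA from linear factor analysis: superpositions saturate instead of adding, so linear methods systematically mis-estimate overlapping bars.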

  19. Combining Knowledge and Data Driven Insights for Identifying Risk Factors using Electronic Health Records

    PubMed Central

    Sun, Jimeng; Hu, Jianying; Luo, Dijun; Markatou, Marianthi; Wang, Fei; Edabollahi, Shahram; Steinhubl, Steven E.; Daar, Zahra; Stewart, Walter F.

    2012-01-01

    Background: The ability to identify the risk factors related to an adverse condition, e.g., a heart failure (HF) diagnosis, is very important for improving care quality and reducing cost. Existing approaches for risk factor identification are either knowledge driven (from guidelines or the literature) or data driven (from observational data). No existing method provides a model to effectively combine expert knowledge with data-driven insight for risk factor identification. Methods: We present a systematic approach to enhance known knowledge-based risk factors with additional potential risk factors derived from data. The core of our approach is a sparse regression model with regularization terms that correspond to both knowledge-driven and data-driven risk factors. Results: The approach is validated using a large dataset containing 4,644 heart failure cases and 45,981 controls. The outpatient electronic health records (EHRs) for these patients include diagnoses, medications, and lab results from 2003–2010. We demonstrate that the proposed method can identify complementary risk factors that are not among the existing known factors and can better predict the onset of HF. We quantitatively compare different sets of risk factors in the context of predicting onset of HF using the performance metric the Area Under the ROC Curve (AUC). The combined risk factors from knowledge and data significantly outperform knowledge-based risk factors alone. Furthermore, the additional risk factors are confirmed to be clinically meaningful by a cardiologist. Conclusion: We present a systematic framework for combining knowledge- and data-driven insights for risk factor identification. We demonstrate the power of this framework in the context of predicting onset of HF, where our approach can successfully identify intuitive and predictive risk factors beyond a set of known HF risk factors. PMID:23304365
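
    A hedged sketch of the core modeling idea, not the authors' implementation: a sparse linear model fit by proximal gradient (ISTA) in which known risk factors carry no L1 penalty while data-driven candidate features do, so known factors are retained and only well-supported candidates enter the model. All data and penalty levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

# Features 0-1 are "known" risk factors; features 2-21 are candidates,
# of which only feature 2 is genuinely predictive.
n, p_known, p_cand = 300, 2, 20
X = rng.normal(size=(n, p_known + p_cand))
beta_true = np.zeros(p_known + p_cand)
beta_true[:p_known] = [1.0, -0.8]
beta_true[p_known] = 0.6
y = X @ beta_true + 0.3 * rng.normal(size=n)

# Per-feature L1 penalty: zero for known factors, 0.1 for candidates.
lam = np.r_[np.zeros(p_known), 0.1 * np.ones(p_cand)]
L = np.linalg.eigvalsh(X.T @ X / n).max()   # Lipschitz constant of the gradient
beta = np.zeros(p_known + p_cand)
for _ in range(500):
    grad = X.T @ (X @ beta - y) / n
    z = beta - grad / L
    beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold

selected = np.flatnonzero(np.abs(beta) > 1e-6)
print(selected)
```

    The known factors survive by construction, and the soft-thresholding step suppresses candidates whose association is at the noise level, leaving the genuinely complementary factor.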

  20. Insight into dementia care management using social-behavioral theory and mixed methods.

    PubMed

    Connor, Karen; McNeese-Smith, Donna; van Servellen, Gwen; Chang, Betty; Lee, Martin; Cheng, Eric; Hajar, Abdulrahman; Vickrey, Barbara G

    2009-01-01

    For health organizations (private and public) to advance their care-management programs, to use resources effectively and efficiently, and to improve patient outcomes, it is germane to isolate and quantify care-management activities and to identify overarching domains. The aims of this study were to identify and report on an application of mixed methods of qualitative statistical techniques, based on a theoretical framework, and to construct variables for factor analysis and exploratory factor analytic steps for identifying domains of dementia care management. Care-management activity data were extracted from the care plans of 181 pairs of individuals (with dementia and their informal caregivers) who had participated in the intervention arm of a randomized controlled trial of a dementia care-management program. Activities were organized into types, using card-sorting methods, influenced by published theoretical constructs on self-efficacy and general strain theory. These activity types were mapped in the initial data set to construct variables for exploratory factor analysis. Principal components extraction with varimax and promax rotations was used to estimate the number of factors. Cronbach's alpha was calculated for the items in each factor to assess internal consistency reliability. The two-phase card-sorting technique yielded 45 activity types out of 450 unique activities. Exploratory factor analysis produced four care-management domains (factors): behavior management, clinical strategies and caregiver support, community agency, and safety. Internal consistency reliability (Cronbach's alpha) of items for each factor ranged from .63 for the factor "safety" to .89 for the factor "behavior management" (Factor 1). Applying a systematic method to a large set of care-management activities can identify a parsimonious number of higher order categories of variables and factors to guide the understanding of dementia care-management processes.
Further application of this methodology in outcome analyses and to other data sets is necessary to test its practicality.
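
    The internal-consistency statistic used above has a short closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal implementation on simulated one-factor items:

```python
import numpy as np

rng = np.random.default_rng(7)

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total score
    return k / (k - 1) * (1 - item_var / total_var)

# Five items driven by one latent trait plus noise -> high alpha.
latent = rng.normal(size=(200, 1))
items = latent + 0.6 * rng.normal(size=(200, 5))
alpha = cronbach_alpha(items)
print(round(alpha, 2))
```

    Items loading on a common factor inflate the total-score variance relative to the item variances, which is what drives alpha toward 1; the .63 to .89 range reported above reflects how strongly each domain's items covary.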

  1. Multi-method Assessment of Psychopathy in Relation to Factors of Internalizing and Externalizing from the Personality Assessment Inventory: The Impact of Method Variance and Suppressor Effects

    PubMed Central

    Blonigen, Daniel M.; Patrick, Christopher J.; Douglas, Kevin S.; Poythress, Norman G.; Skeem, Jennifer L.; Lilienfeld, Scott O.; Edens, John F.; Krueger, Robert F.

    2010-01-01

    Research to date has revealed divergent relations across factors of psychopathy measures with criteria of internalizing (INT; anxiety, depression) and externalizing (EXT; antisocial behavior, substance use). However, failure to account for method variance and suppressor effects has obscured the consistency of these findings across distinct measures of psychopathy. Using a large correctional sample, the current study employed a multi-method approach to psychopathy assessment (self-report, interview/file review) to explore convergent and discriminant relations between factors of psychopathy measures and latent criteria of INT and EXT derived from the Personality Assessment Inventory (PAI; L. Morey, 2007). Consistent with prediction, scores on the affective-interpersonal factor of psychopathy were negatively associated with INT and negligibly related to EXT, whereas scores on the social deviance factor exhibited positive associations (moderate and large, respectively) with both INT and EXT. Notably, associations were highly comparable across the psychopathy measures when accounting for method variance (in the case of EXT) and when assessing for suppressor effects (in the case of INT). Findings are discussed in terms of implications for clinical assessment and evaluation of the validity of interpretations drawn from scores on psychopathy measures. PMID:20230156

  2. Optical factors determined by the T-matrix method in turbidity measurement of absolute coagulation rate constants.

    PubMed

    Xu, Shenghua; Liu, Jie; Sun, Zhiwei

    2006-12-01

    Turbidity measurement for the absolute coagulation rate constants of suspensions has been extensively adopted because of its simplicity and easy implementation. A key factor in deriving the rate constant from experimental data is how to theoretically evaluate the so-called optical factor involved in calculating the extinction cross section of doublets formed during aggregation. In a previous paper, we have shown that compared with other theoretical approaches, the T-matrix method provides a robust solution to this problem and is effective in extending the applicability range of the turbidity methodology, as well as increasing measurement accuracy. This paper will provide a more comprehensive discussion of the physical insight for using the T-matrix method in turbidity measurement and associated technical details. In particular, the importance of ensuring the correct value for the refractive indices for colloidal particles and the surrounding medium used in the calculation is addressed, because the indices generally vary with the wavelength of the incident light. The comparison of calculated results with experiments shows that the T-matrix method can correctly calculate optical factors even for large particles, whereas other existing theories cannot. In addition, the data of the optical factor calculated by the T-matrix method for a range of particle radii and incident light wavelengths are listed.

  3. Socio-economic and cultural differentials in contraceptive usage among Ghanaian women.

    PubMed

    Oheneba-sakyi, Y

    1990-01-01

    Data from the Ghana Fertility Survey of 2001 married women in 1979-1980 were subjected to logistic regression to determine the factors influencing contraceptive use. In this Ghanaian sample only 22 women and no men were sterilized, 11% used an efficient contraceptive method and 8% were using an inefficient method. The most prevalent methods were abstinence by 6% and pill by 5%. The variables analyzed were birth cohort, age at 1st marriage, education, occupation, religion, ethnicity, rural/urban residence, northern/southern residence, desired number of children, and number of living children. All these factors were dichotomized, e.g., cohort: born before or after 1950. Factors positively significant for contraceptive use were younger women (20% more likely), marriage at age 20 or older (82% more), education (150% for any method, 67% for an efficient method), professional occupations, Protestants, urban residence, southern residence, and desiring fewer children. Factors negatively associated with contraception were agricultural work (50% as likely), non-Christian religion, both traditional and Moslems (75%), desiring more children and living in the north. Unexpectedly, living in the northern undeveloped region was strongly linked with use of an efficient contraceptive. A factor without significant effect was ethnicity, Akan or non-Akan. These results were discussed with a general review of the literature on determinants of contraceptive use.
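    As an illustration of the kind of model used in this record, here is a plain-numpy logistic regression on synthetic data, reporting an odds ratio for a dichotomized predictor (the predictor, effect size and sample are invented, not the Ghana Fertility Survey data):

```python
import numpy as np

def fit_logistic(X: np.ndarray, y: np.ndarray, lr=0.1, n_iter=5000) -> np.ndarray:
    """Gradient-descent logistic regression; returns [intercept, coefficients...]."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

# Synthetic sample of 2001 women with one dichotomized factor (e.g. any education)
rng = np.random.default_rng(0)
educ = rng.integers(0, 2, 2001)
logit = -1.5 + 1.2 * educ                      # true log-odds effect of the factor
use = (rng.random(2001) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

w = fit_logistic(educ[:, None].astype(float), use)
odds_ratio = np.exp(w[1])   # statements like "X% more likely" come from ratios like this
```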

  4. Driver Performance Problems of Intercity Bus Public Transportation Safety in Indonesia

    NASA Astrophysics Data System (ADS)

    Suraji, A.; Harnen, S.; Wicaksono, A.; Djakfar, L.

    2017-11-01

    The risk of an inter-city public bus accident can be influenced by various factors, including the driver's performance. Knowing the factors related to driver performance is therefore necessary for improving road traffic safety. This study aims to determine the driver-related factors involved in accidents and to build a mathematical model of the factors that affect them. Data were obtained from NTSC secondary records and processed by identifying the factors that cause accidents. The PCA method was then applied to obtain a mathematical model of the factors influencing inter-city bus accidents. The results showed that the main factors causing accidents are health, discipline, and driver competence.
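    The PCA step described above can be sketched in a few lines of numpy; the data here are synthetic stand-ins, not the NTSC accident records:

```python
import numpy as np

def pca(X: np.ndarray, n_components: int):
    """PCA via SVD of the centered data matrix.
    Returns component scores and the explained-variance ratio per component."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S ** 2) / (S ** 2).sum()
    return Xc @ Vt[:n_components].T, explained[:n_components]

# Synthetic "factor" matrix: two of three variables share a common driver,
# so the first principal component should dominate.
rng = np.random.default_rng(1)
driver = rng.normal(size=300)
X = np.column_stack([driver,
                     driver + 0.2 * rng.normal(size=300),
                     rng.normal(size=300)])
scores, explained = pca(X, 2)
```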

  5. Comparative Assessment of Models and Methods To Calculate Grid Electricity Emissions.

    PubMed

    Ryan, Nicole A; Johnson, Jeremiah X; Keoleian, Gregory A

    2016-09-06

    Due to the complexity of power systems, tracking emissions attributable to a specific electrical load is a daunting challenge but essential for many environmental impact studies. Currently, no consensus exists on appropriate methods for quantifying emissions from particular electricity loads. This paper reviews a wide range of the existing methods, detailing their functionality, tractability, and appropriate use. We identified and reviewed 32 methods and models and classified them into two distinct categories: empirical data and relationship models and power system optimization models. To illustrate the impact of method selection, we calculate the CO2 combustion emissions factors associated with electric-vehicle charging using 10 methods at nine charging station locations around the United States. Across the methods, we found an up to 68% difference from the mean CO2 emissions factor for a given charging site among both marginal and average emissions factors and up to a 63% difference from the average across average emissions factors. Our results underscore the importance of method selection and the need for a consensus on approaches appropriate for particular loads and research questions being addressed in order to achieve results that are more consistent across studies and allow for soundly supported policy decisions. The paper addresses this issue by offering a set of recommendations for determining an appropriate model type on the basis of the load characteristics and study objectives.
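    The spread statistic quoted above (maximum difference from the mean emissions factor across methods) is computed as follows; the factor values are invented for illustration, not the study's results:

```python
import numpy as np

# Hypothetical CO2 emissions factors (g CO2/kWh) for one charging site,
# as produced by five different calculation methods.
factors = np.array([420.0, 510.0, 610.0, 380.0, 450.0])

mean_factor = factors.mean()
max_diff_pct = np.abs(factors - mean_factor).max() / mean_factor * 100.0
```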

  6. Assessment Methods of Groundwater Overdraft Area and Its Application

    NASA Astrophysics Data System (ADS)

    Dong, Yanan; Xing, Liting; Zhang, Xinhui; Cao, Qianqian; Lan, Xiaoxun

    2018-05-01

    Groundwater is an important source of water, and long-term heavy demand has left it over-exploited. Over-exploitation causes many environmental and geological problems. This paper explores the concept of the over-exploitation area, summarizes the natural and social attributes of over-exploitation areas, and expounds its evaluation methods, including single-factor evaluation, multi-factor system analysis and numerical methods. At the same time, the different methods are compared and analyzed. Taking Northern Weifang as an example, the paper then demonstrates the practicality of the appraisal methods.

  7. Universal first-order reliability concept applied to semistatic structures

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1994-01-01

    A reliability design concept was developed for semistatic structures which combines the prevailing deterministic method with the first-order reliability method. The proposed method surmounts deterministic deficiencies in providing uniformly reliable structures and improved safety audits. It supports risk analyses and a reliability selection criterion. The method provides a reliability design factor derived from the reliability criterion which is analogous to the current safety factor for sizing structures and verifying reliability response. The universal first-order reliability method should also be applicable to the semistatic structures of air and surface vehicles.

  8. Universal first-order reliability concept applied to semistatic structures

    NASA Astrophysics Data System (ADS)

    Verderaime, V.

    1994-07-01

    A reliability design concept was developed for semistatic structures which combines the prevailing deterministic method with the first-order reliability method. The proposed method surmounts deterministic deficiencies in providing uniformly reliable structures and improved safety audits. It supports risk analyses and a reliability selection criterion. The method provides a reliability design factor derived from the reliability criterion which is analogous to the current safety factor for sizing structures and verifying reliability response. The universal first-order reliability method should also be applicable to the semistatic structures of air and surface vehicles.

  9. Ultrasonic evaluation of the strength of unidirectional graphite-polyimide composites

    NASA Technical Reports Server (NTRS)

    Vary, A.; Bowles, K. J.

    1977-01-01

    An acoustic-ultrasonic method is described that was successful in ranking unidirectional graphite-polyimide composite specimens according to variations in interlaminar shear strength. Using this method, a quantity termed the stress wave factor was determined. It was found that this factor increases directly with interlaminar shear strength. The key variables in this investigation were composite density, fiber weight fraction, and void content. The stress wave factor and other ultrasonic factors that were studied were found to provide a powerful means for nondestructive evaluation of mechanical strength properties.

  10. Identification of suitable sites for mountain ginseng cultivation using GIS and geo-temperature.

    PubMed

    Kang, Hag Mo; Choi, Soo Im; Kim, Hyun

    2016-01-01

    This study was conducted to explore an accurate site identification technique using a geographic information system (GIS) and geo-temperature (gT) for locating suitable sites for growing cultivated mountain ginseng (CMG; Panax ginseng), which is highly sensitive to the environmental conditions in which it grows. The study site was Jinan-gun, South Korea. The spatial resolution for geographic data was set at 10 m × 10 m, and the temperatures for various climatic factors influencing CMG growth were calculated by averaging the 3-year temperatures obtained from the automatic weather stations of the Korea Meteorological Administration. Identification of suitable sites for CMG cultivation was undertaken using both a conventional method and a new method, in which the gT was added as one of the most important factors for crop cultivation. The results yielded by the 2 methods were then compared. When the gT was added as an additional factor (new method), the proportion of suitable sites identified decreased by 0.4 % compared with the conventional method. However, the proportion matching real CMG cultivation sites increased by 3.5 %. Moreover, only 68.2 % corresponded with suitable sites identified using the conventional factors; i.e., 31.8 % were newly detected suitable sites. The accuracy of GIS-based identification of suitable CMG cultivation sites improved by applying the temperature factor (i.e., gT) in addition to the conventionally used factors.

  11. Choice of Postpartum Contraception: Factors Predisposing Pregnant Adolescents to Choose Less Effective Methods Over Long-Acting Reversible Contraception.

    PubMed

    Chacko, Mariam R; Wiemann, Constance M; Buzi, Ruth S; Kozinetz, Claudia A; Peskin, Melissa; Smith, Peggy B

    2016-06-01

    The purposes were to determine contraceptive methods pregnant adolescents intend to use postpartum and to understand factors that predispose intention to use less effective birth control than long-acting reversible contraception (LARC). Participants were 247 pregnant minority adolescents in a prenatal program. Intention was assessed by asking "Which of the following methods of preventing pregnancy do you intend to use after you deliver?" Multinomial logistic regression analysis was used to determine factors associated with intent to use nonhormonal (NH) contraception (male/female condoms, abstinence, withdrawal and no method) or short-/medium-acting hormonal (SMH) contraception (birth control pill, patch, vaginal ring, injectable medroxyprogesterone acetate) compared with LARC (implant and intrauterine device) postpartum. Twenty-three percent intended to use LARC, 53% an SMH method, and 24% an NH method. Participants who intended to use NH or SMH contraceptive methods over LARC were significantly more likely to believe that LARC is not effective at preventing pregnancy, to report that they do not make decisions to help reach their goals and that partners are not important when making contraceptive decisions. Other important factors were having a mother who was aged >19 years at first birth and had not graduated from high school, not having experienced a prior pregnancy or talked with parents about birth control options, and the perception of having limited financial resources. Distinct profiles of factors associated with intending to use NH or SMH contraceptive methods over LARC postpartum were identified and may inform future interventions to promote the use of LARC to prevent repeat pregnancy. Copyright © 2015 The Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  12. Assessment of composite motif discovery methods.

    PubMed

    Klepper, Kjetil; Sandve, Geir K; Abul, Osman; Johansen, Jostein; Drablos, Finn

    2008-02-26

    Computational discovery of regulatory elements is an important area of bioinformatics research and more than a hundred motif discovery methods have been published. Traditionally, most of these methods have addressed the problem of single motif discovery - discovering binding motifs for individual transcription factors. In higher organisms, however, transcription factors usually act in combination with nearby bound factors to induce specific regulatory behaviours. Hence, recent focus has shifted from single motifs to the discovery of sets of motifs bound by multiple cooperating transcription factors, so called composite motifs or cis-regulatory modules. Given the large number and diversity of methods available, independent assessment of methods becomes important. Although there have been several benchmark studies of single motif discovery, no similar studies have previously been conducted concerning composite motif discovery. We have developed a benchmarking framework for composite motif discovery and used it to evaluate the performance of eight published module discovery tools. Benchmark datasets were constructed based on real genomic sequences containing experimentally verified regulatory modules, and the module discovery programs were asked to predict both the locations of these modules and to specify the single motifs involved. To aid the programs in their search, we provided position weight matrices corresponding to the binding motifs of the transcription factors involved. In addition, selections of decoy matrices were mixed with the genuine matrices on one dataset to test the response of programs to varying levels of noise. Although some of the methods tested tended to score somewhat better than others overall, there were still large variations between individual datasets and no single method performed consistently better than the rest in all situations. 
The variation in performance on individual datasets also shows that the new benchmark datasets represent a suitable variety of challenges to most methods for module discovery.

  13. A new method of time difference measurement: The time difference method by dual phase coincidence points detection

    NASA Technical Reports Server (NTRS)

    Zhou, Wei

    1993-01-01

    In the highly accurate measurement of periodic signals, the greatest common factor frequency and its characteristics have special functions. A method of time difference measurement, the time difference method by dual 'phase coincidence points' detection, is described. This method utilizes the characteristics of the greatest common factor frequency to measure the time or phase difference between periodic signals. It is suitable for a very wide frequency range. Measurement precision and potential accuracy of several picoseconds were demonstrated with this new method. The instrument based on this method is very simple, and the demands on the common oscillator are low. This method and instrument can be used widely.
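    The greatest common factor frequency underlying the method is the ordinary gcd applied to frequencies; phase coincidence points of the two signals recur with period 1/f_gcf. A small sketch using exact rational arithmetic (the example frequencies are arbitrary):

```python
from fractions import Fraction
from math import gcd

def greatest_common_factor_frequency(f1: Fraction, f2: Fraction) -> Fraction:
    """gcd of two rational frequencies: gcd of numerators over lcm of denominators."""
    num = gcd(f1.numerator, f2.numerator)
    den = (f1.denominator * f2.denominator) // gcd(f1.denominator, f2.denominator)
    return Fraction(num, den)

# 10 MHz and 4 MHz signals share a 2 MHz common factor frequency, so their
# phase coincidence pattern repeats every 1 / (2 MHz) = 0.5 microseconds.
f_gcf = greatest_common_factor_frequency(Fraction(10_000_000), Fraction(4_000_000))
coincidence_period = Fraction(1) / f_gcf
```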

  14. Simultaneous Tensor Decomposition and Completion Using Factor Priors.

    PubMed

    Chen, Yi-Lei; Hsu, Chiou-Ting Candy; Liao, Hong-Yuan Mark

    2013-08-27

    Tensor completion, which is a high-order extension of matrix completion, has generated a great deal of research interest in recent years. Given a tensor with incomplete entries, existing methods use either factorization or completion schemes to recover the missing parts. However, as the number of missing entries increases, factorization schemes may overfit the model because of incorrectly predefined ranks, while completion schemes may fail to interpret the model factors. In this paper, we introduce a novel concept: complete the missing entries and simultaneously capture the underlying model structure. To this end, we propose a method called Simultaneous Tensor Decomposition and Completion (STDC) that combines a rank minimization technique with Tucker model decomposition. Moreover, as the model structure is implicitly included in the Tucker model, we use factor priors, which are usually known a priori in real-world tensor objects, to characterize the underlying joint-manifold drawn from the model factors. We conducted experiments to empirically verify the convergence of our algorithm on synthetic data, and evaluate its effectiveness on various kinds of real-world data. The results demonstrate the efficacy of the proposed method and its potential usage in tensor-based applications. It also outperforms state-of-the-art methods on multilinear model analysis and visual data completion tasks.
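    STDC itself couples rank minimization with completion; as background for the Tucker model it builds on, here is a minimal numpy sketch of the truncated higher-order SVD (HOSVD), a standard way to obtain a Tucker core and factor matrices (this is not the authors' STDC algorithm):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the remaining axes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Multiply tensor T along `mode` by matrix M of shape (new_dim, T.shape[mode])."""
    moved = np.moveaxis(T, mode, 0)
    return np.moveaxis(np.tensordot(M, moved, axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD: returns the Tucker core and factor matrices."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_multiply(core, U.T, m)
    return core, factors

def tucker_reconstruct(core, factors):
    T = core
    for m, U in enumerate(factors):
        T = mode_multiply(T, U, m)
    return T

# At full rank the decomposition is exact; truncating `ranks` gives a low-rank model.
T = np.random.default_rng(0).normal(size=(4, 5, 6))
core, factors = hosvd(T, T.shape)
```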

  15. A Comparison of Rule-based Analysis with Regression Methods in Understanding the Risk Factors for Study Withdrawal in a Pediatric Study.

    PubMed

    Haghighi, Mona; Johnson, Suzanne Bennett; Qian, Xiaoning; Lynch, Kristian F; Vehik, Kendra; Huang, Shuai

    2016-08-26

    Regression models are extensively used in many epidemiological studies to understand the linkage between specific outcomes of interest and their risk factors. However, regression models in general examine the average effects of the risk factors and ignore subgroups with different risk profiles. As a result, interventions are often geared towards the average member of the population, without consideration of the special health needs of different subgroups within the population. This paper demonstrates the value of using rule-based analysis methods that can identify subgroups with heterogeneous risk profiles in a population without imposing assumptions on the subgroups or method. The rules define the risk pattern of subsets of individuals by not only considering the interactions between the risk factors but also their ranges. We compared the rule-based analysis results with the results from a logistic regression model in The Environmental Determinants of Diabetes in the Young (TEDDY) study. Both methods detected a similar suite of risk factors, but the rule-based analysis was superior at detecting multiple interactions between the risk factors that characterize the subgroups. A further investigation of the particular characteristics of each subgroup may detect the special health needs of the subgroup and lead to tailored interventions.
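    A rule in this sense is a conjunction of ranges over risk factors, and its subgroup risk is simply the outcome rate inside the region it defines. A toy numpy sketch (synthetic data, not the TEDDY study, and only rule evaluation, not rule mining):

```python
import numpy as np

def subgroup_risk(X, outcome, rule):
    """Outcome rate and size of the subgroup defined by range conditions.
    `rule` maps column index -> (low, high), e.g. {0: (0.7, 1.0)}."""
    mask = np.ones(len(outcome), dtype=bool)
    for col, (lo, hi) in rule.items():
        mask &= (X[:, col] >= lo) & (X[:, col] <= hi)
    return outcome[mask].mean(), int(mask.sum())

# Synthetic population where risk is concentrated in one corner of factor space
rng = np.random.default_rng(7)
X = rng.random((1000, 3))
outcome = ((X[:, 0] > 0.7) & (X[:, 1] < 0.3)).astype(float)

risk_in, n_in = subgroup_risk(X, outcome, {0: (0.7, 1.0), 1: (0.0, 0.3)})
risk_overall = outcome.mean()   # an average-effect model sees only this
```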

  16. Estimate Soil Erodibility Factors Distribution for Maioli Block

    NASA Astrophysics Data System (ADS)

    Lee, Wen-Ying

    2014-05-01

    The natural conditions in Taiwan are poor. Because of steep slopes, rushing rivers and fragile geology, soil erosion has become a serious problem. It not only degrades sloping landscapes but also creates sediment disasters such as reservoir sedimentation and river obstruction. Therefore, predicting and controlling the amount of soil erosion has become an important research topic. The soil erodibility factor (K) is a quantitative index of a soil's ability to resist erosive separation and transport. Erodibility factors for 280 Taiwan soil samples were calculated by Wann and Huang (1989) using the Wischmeier and Smith nomograph. 221 samples were collected at the Maioli block in Miaoli. The coordinates of every sample point and the land-use situations were recorded, and the physical properties were analyzed for each sample. Three estimation methods, consisting of Kriging, Inverse Distance Weighted (IDW) and Spline, were applied to estimate the soil erodibility factor distribution for the Maioli block using 181 points, with the remaining 40 points reserved for validation. SPSS regression analysis was then used to compare the accuracy of the training and validation data under the three methods, so that the best method could be determined. In the future, this method can be used to predict soil erodibility factors in other areas.
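    Of the three estimators compared, inverse distance weighting is the simplest to state. A minimal numpy sketch with made-up sample coordinates and K values (not the Maioli data):

```python
import numpy as np

def idw(xy_known, k_known, xy_query, power=2.0):
    """Inverse-distance-weighted estimate of soil erodibility K at query points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)          # guard against division by zero at sample points
    w = d ** (-power)
    return (w * k_known).sum(axis=1) / w.sum(axis=1)

# Hypothetical sample points (x, y) with measured K values
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
k = np.array([0.20, 0.30, 0.40])
est = idw(pts, k, np.array([[0.5, 0.0], [0.0, 0.0]]))
```

    Querying at a sample point returns (essentially) that sample's value, while intermediate points get a distance-weighted blend of all samples.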

  17. Assessment of four calculation methods proposed by the EC for waste hazardous property HP 14 'Ecotoxic'.

    PubMed

    Hennebert, Pierre; Humez, Nicolas; Conche, Isabelle; Bishop, Ian; Rebischung, Flore

    2016-02-01

    Legislation published in December 2014 revised both the List of Waste (LoW) and amended Appendix III of the revised Waste Framework Directive 2008/98/EC; the latter redefined hazardous properties HP 1 to HP 13 and HP 15 but left the assessment of HP 14 unchanged to allow time for the Directorate General of the Environment of the European Commission to complete a study that is examining the impacts of four different calculation methods for the assessment of HP 14. This paper is a contribution to the assessment of the four calculation methods. It also includes the results of a fifth calculation method; referred to as "Method 2 with extended M-factors". Two sets of data were utilised in the assessment; the first (Data Set #1) comprised analytical data for 32 different waste streams (16 hazardous (H), 9 non-hazardous (NH) and 7 mirror entries, as classified by the LoW) while the second data set (Data Set #2), supplied by the eco industries, comprised analytical data for 88 waste streams, all classified as hazardous (H) by the LoW. Two approaches were used to assess the five calculation methods. The first approach assessed the relative ranking of the five calculation methods by the frequency of their classification of waste streams as H. The relative ranking of the five methods (from most severe to less severe) is: Method 3>Method 1>Method 2 with extended M-factors>Method 2>Method 4. This reflects the arithmetic ranking of the concentration limits of each method when assuming M=10, and is independent of the waste streams, or the H/NH/Mirror status of the waste streams. A second approach is the absolute matching or concordance with the LoW. The LoW is taken as a reference method and the H wastes are all supposed to be HP 14. This point is discussed in the paper. The concordance for one calculation method is established by the number of wastes with identical classification by the considered calculation method and the LoW (i.e. H to H, NH to NH). 
The discordance is established as well, that is when the waste is classified "H" in the LoW and "NH" by calculation (i.e. an under-estimation of the hazard). For Data Set #1, Method 2 with extended M-factors matches best with the LoW (80% concordant H and non-H by LoW, and 13% discordant for H waste by LoW). This method more correctly classifies wastes containing substances with high ecotoxicity. Methods 1 and 3 have nearly as good matches (76% and 72% concordant H and non-H by LoW, and 13% and 6% respectively discordant for H waste by LoW). Method 2 with extended M-factors, but limited to the M-factors published in the CLP has insufficient concordance (64% concordant H and non-H by LoW, and 50% discordant for H waste by LoW). As the same method with extended M-factors gives the best performance, the lower performance is due to the limited set of M-factors in the CLP. Method 4 is divergent (60% concordant H and non-H by LoW, and 56% discordant for H waste by LoW). For Data Set #2, Methods 2 and 4 do not correctly classify 24 air pollution control residues from incineration 19 01 07(∗) (3/24 and 2/24 respectively), and should not be used, while Methods 3, 1 and 2 with extended M-factors successfully classify 100% of them as hazardous. From the two sets of data, Method 2 with extended M-factors (corresponding more closely to the CLP methods used for products) matches best with the LoW when the LoW code is safely known, and Method 3 and 1 will deviate from the LoW if the samples contain substances with high ecotoxicity (in particular PAHs). Methods 2 and 4 are not recommended. Formally, this conclusion depends on the waste streams that are used for the comparison of methods and the relevancy of the classification as hazardous for ecotoxicity in the LoW. Since the set is large (120 waste streams) and no selection has been made here in the available data, the conclusion should be robust. Copyright © 2015 Elsevier Ltd. All rights reserved.
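    The concordance and discordance figures above follow directly from agreement counts; a small illustrative sketch with hypothetical labels (not the study's waste streams):

```python
def concordance(low_labels, calc_labels):
    """Share of wastes where a calculation method agrees with the LoW (H->H, NH->NH)."""
    matches = sum(a == b for a, b in zip(low_labels, calc_labels))
    return matches / len(low_labels)

def discordance_for_h(low_labels, calc_labels):
    """Share of LoW-hazardous wastes that the method under-classifies as NH."""
    h_pairs = [(a, b) for a, b in zip(low_labels, calc_labels) if a == "H"]
    return sum(b == "NH" for _, b in h_pairs) / len(h_pairs)

# Hypothetical classifications for five waste streams
low = ["H", "H", "NH", "H", "NH"]
calc = ["H", "NH", "NH", "H", "H"]
```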

  18. DEVELOPMENT AND IMPROVEMENT OF TEMPORAL ALLOCATION FACTOR FILES

    EPA Science Inventory

    The report gives results of a project to: (1) evaluate the quality and completeness of data and methods being used for temporal allocation of emissions data, (2) identify and prioritize needed improvements to current methods for developing temporal allocation factors (TAFs), and ...

  19. Contextual Factors, Methodological Principles and Teacher Cognition

    ERIC Educational Resources Information Center

    Walsh, Robert; Wyatt, Mark

    2014-01-01

    Teachers in various contexts worldwide are sometimes unfairly criticized for not putting teaching methods developed for the well-resourced classrooms of Western countries into practice. Factors such as the teachers' "misconceptualizations" of "imported" methods, including Communicative Language Teaching (CLT), are often blamed,…

  20. EVALUATION OF TWO METHODS FOR PREDICTION OF BIOACCUMULATION FACTORS

    EPA Science Inventory

    Two methods for deriving bioaccumulation factors (BAFs) used by the U.S. Environmental Protection Agency (EPA) in development of water quality criteria were evaluated using polychlorinated biphenyls (PCB) data from the Hudson River and Green Bay ecosystems. Greater than 90% of th...

  1. The Implementation of APIQ Creative Mathematics Game Method in the Subject Matter of Greatest Common Factor and Least Common Multiple in Elementary School

    NASA Astrophysics Data System (ADS)

    Rahman, Abdul; Saleh Ahmar, Ansari; Arifin, A. Nurani M.; Upu, Hamzah; Mulbar, Usman; Alimuddin; Arsyad, Nurdin; Ruslan; Rusli; Djadir; Sutamrin; Hamda; Minggi, Ilham; Awi; Zaki, Ahmad; Ahmad, Asdar; Ihsan, Hisyam

    2018-01-01

    One causal factor in students' lack of interest in learning mathematics is a monotonous learning method, as in traditional instruction. One way to motivate students to learn mathematics is by implementing the APIQ (Aritmetika Plus Intelegensi Quantum) creative mathematics game method. The purposes of this research are (1) to describe students' responses toward the implementation of the APIQ creative mathematics game method on the subject matter of Greatest Common Factor (GCF) and Least Common Multiple (LCM) and (2) to find out whether implementing this method improves the students' learning completeness. Based on the results of this research, it is shown that (1) the responses of the students toward the implementation of the APIQ creative mathematics game method on the subject matters of GCF and LCM were good, with response percentages between 76 and 100%, and (2) the implementation of the method on the subject matters of GCF and LCM improved the students' learning completeness.
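    For reference, the two quantities the game method teaches are linked by one identity: lcm(a, b) * gcd(a, b) == a * b. In Python:

```python
from math import gcd

def lcm(a: int, b: int) -> int:
    """Least common multiple via the identity lcm(a, b) * gcd(a, b) == a * b."""
    return a * b // gcd(a, b)

# Classroom-style examples
g = gcd(24, 36)
m = lcm(4, 6)
```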

  2. Methods and compositions for regulating gene expression in plant cells

    NASA Technical Reports Server (NTRS)

    Dai, Shunhong (Inventor); Beachy, Roger N. (Inventor); Luis, Maria Isabel Ordiz (Inventor)

    2010-01-01

    Novel chimeric plant promoter sequences are provided, together with plant gene expression cassettes comprising such sequences. In certain preferred embodiments, the chimeric plant promoters comprise the BoxII cis element and/or derivatives thereof. In addition, novel transcription factors are provided, together with nucleic acid sequences encoding such transcription factors and plant gene expression cassettes comprising such nucleic acid sequences. In certain preferred embodiments, the novel transcription factors comprise the acidic domain, or fragments thereof, of the RF2a transcription factor. Methods for using the chimeric plant promoter sequences and novel transcription factors in regulating the expression of at least one gene of interest are provided, together with transgenic plants comprising such chimeric plant promoter sequences and novel transcription factors.

  3. Quasi-Static Probabilistic Structural Analyses Process and Criteria

    NASA Technical Reports Server (NTRS)

    Goldberg, B.; Verderaime, V.

    1999-01-01

    Current deterministic structural methods are easily applied to substructures and components, and analysts have built great design insight and confidence in them over the years. However, deterministic methods cannot support systems risk analyses, and it was recently reported that deterministic treatment of statistical data is inconsistent with error propagation laws, which can result in unevenly conservative structural predictions. Assuming normal distributions and using statistical data formats throughout prevailing deterministic stress processes leads to a safety factor in statistical format which, integrated into the safety index, provides a safety factor and first-order reliability relationship. The embedded safety factor in the safety index expression allows a historically based risk to be determined and verified over a variety of quasi-static metallic substructures, consistent with the traditional safety factor methods and NASA Std. 5001 criteria.
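    The safety factor/safety index relationship described above can be written down directly for the common normal stress-strength model; the numbers below are hypothetical, not NASA design values:

```python
from math import erf, sqrt

def safety_index(mu_strength, sd_strength, mu_stress, sd_stress):
    """First-order reliability (safety) index beta for independent, normally
    distributed stress and strength; reliability = Phi(beta)."""
    return (mu_strength - mu_stress) / sqrt(sd_strength ** 2 + sd_stress ** 2)

def reliability(beta):
    """Standard normal CDF at beta."""
    return 0.5 * (1.0 + erf(beta / sqrt(2.0)))

# Hypothetical substructure: mean strength 100, mean stress 60 (same units)
beta = safety_index(100.0, 8.0, 60.0, 6.0)
central_safety_factor = 100.0 / 60.0   # the familiar deterministic ratio
```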

  4. Use of fibroblast growth factor 2 for expansion of chondrocytes and tissue engineering

    NASA Technical Reports Server (NTRS)

    Vunjak-Novakovic, Gordana (Inventor); Martin, Ivan (Inventor); Freed, Lisa E. (Inventor); Langer, Robert (Inventor)

    2003-01-01

    The present invention provides an improved method for expanding cells for use in tissue engineering. In particular the method provides specific biochemical factors to supplement cell culture medium during the expansion process in order to reproduce events occurring during embryonic development with the goal of regenerating tissue equivalents that resemble natural tissues both structurally and functionally. These specific biochemical factors improve proliferation of the cells and are capable of de-differentiating mature cells isolated from tissue so that the differentiation potential of the cells is preserved. The bioactive molecules also maintain the responsiveness of the cells to other bioactive molecules. Specifically, the invention provides methods for expanding chondrocytes in the presence of fibroblast growth factor 2 for use in regeneration of cartilage tissue.

  5. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE PAGES

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

    2017-09-17

    In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth (for M2) derivatives; (ii) the first neglected singular value; and (iii) the spectral properties of the projection of the system's Jacobian in the reduced space. Because of the interplay of these factors, neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.
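    The snapshot-POD construction common to both methods is a single SVD; method M2 differs only in what goes into the snapshot matrix. A minimal numpy sketch (synthetic snapshots, not the paper's examples):

```python
import numpy as np

def pod_basis(snapshots: np.ndarray, r: int) -> np.ndarray:
    """Rank-r POD basis from a (state_dim, n_snapshots) matrix (method M1).
    For method M2, append finite-difference time-derivative snapshots as
    extra columns before calling this function."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

# Synthetic snapshots confined to a 2-D subspace of a 50-D state space
rng = np.random.default_rng(3)
modes = rng.normal(size=(50, 2))
coeffs = rng.normal(size=(2, 20))
X = modes @ coeffs

Phi = pod_basis(X, 2)
reconstruction = Phi @ (Phi.T @ X)   # orthogonal projection onto the POD subspace
```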

  6. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

    In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth (for M2) derivatives; (ii) the first neglected singular value; and (iii) the spectral properties of the projection of the system’s Jacobian in the reduced space. Because of the interplay of these factors, neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.

  7. The adjusting factor method for weight-scaling truckloads of mixed hardwood sawlogs

    Treesearch

    Edward L. Adams

    1976-01-01

    A new method of weight-scaling truckloads of mixed hardwood sawlogs systematically adjusts for changes in the weight/volume ratio of logs coming into a sawmill. It uses a conversion factor based on the running average of weight/volume ratios of randomly selected sample loads. A test of the method indicated that over a period of time the weight-scaled volume should...
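    The running-average adjustment described above can be sketched in a few lines; the class name and all figures are hypothetical, for illustration only:

```python
class AdjustingFactor:
    """Sketch of a running-average weight/volume conversion factor for
    weight-scaling truckloads of mixed hardwood sawlogs."""

    def __init__(self):
        self.ratios = []  # weight/volume ratios from fully scaled sample loads

    def add_sample_load(self, weight_lb, volume_bf):
        """Record a randomly selected sample load that was both weighed
        and stick-scaled for volume (board feet)."""
        self.ratios.append(weight_lb / volume_bf)

    def factor(self):
        """Current conversion factor: running average of sampled ratios."""
        return sum(self.ratios) / len(self.ratios)

    def estimate_volume(self, weight_lb):
        """Weight-scale an unsampled load: volume = weight / average ratio,
        so the factor tracks drift in the weight/volume relationship."""
        return weight_lb / self.factor()

af = AdjustingFactor()
af.add_sample_load(12000, 1000)  # 12 lb per board foot
af.add_sample_load(13000, 1000)  # heavier (e.g. wetter) logs shift the ratio
vol = af.estimate_volume(25000)  # estimated board feet for a new load
```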

  8. Assessing and improving health in the workplace: an integration of subjective and objective measures with the STress Assessment and Research Toolkit (St.A.R.T.) method

    PubMed Central

    2012-01-01

    Background The aim of this work was to introduce a new combined method of subjective and objective measures to assess psychosocial risk factors at work and improve workers’ health and well-being. In the literature, most research on work-related stress focuses on self-report measures, and this work represents the first methodology capable of integrating different sources of data. Method An integrated method entitled St.A.R.T. (STress Assessment and Research Toolkit) was used to assess psychosocial risk factors and two health outcomes. In particular, a self-report questionnaire combined with an observational structured checklist was administered to 113 workers from an Italian retail company. Results The data showed a correlation between subjective data and the rating data of the observational checklist for the psychosocial risk factors related to work contexts, such as customer relationship management and customer queue. Conversely, the factors related to work content (workload and boredom) measured with different methods (subjective vs. objective) showed a discrepancy. Furthermore, subjective measures of psychosocial risk factors were more predictive of workers’ psychological health and exhaustion than rating data. The different objective measures played different roles, however, in terms of their influence on the two health outcomes considered. Conclusions It is important to integrate self-report assessment of stressors with objective measures for a better understanding of workers’ conditions in the workplace. The method presented could be considered a useful methodology for combining the two measures and differentiating the impact of different psychological risk factors related to work content and context on workers’ health. PMID:22995286

  9. A fast marching algorithm for the factored eikonal equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Treister, Eran, E-mail: erantreister@gmail.com; Haber, Eldad, E-mail: haber@math.ubc.ca; Department of Mathematics, The University of British Columbia, Vancouver, BC

    The eikonal equation is instrumental in many applications in several fields ranging from computer vision to geoscience. This equation can be efficiently solved using the iterative Fast Sweeping (FS) methods and the direct Fast Marching (FM) methods. However, when used for a point source, the original eikonal equation is known to yield inaccurate numerical solutions because of a singularity at the source. In this case, the factored eikonal equation is often preferred and is known to yield a more accurate numerical solution. One application that requires the solution of the eikonal equation for point sources is travel time tomography. This inverse problem may be formulated using the eikonal equation as a forward problem. While this problem has been solved using FS in the past, more recent work favors FM methods because of the efficiency with which sensitivities can be obtained from them. However, while several FS methods are available for solving the factored equation, the FM method is available only for the original eikonal equation. In this paper we develop a Fast Marching algorithm for the factored eikonal equation, using both first- and second-order finite-difference schemes. Our algorithm follows the same lines as the original FM algorithm and requires the same computational effort. In addition, we show how to obtain sensitivities using this FM method and apply travel time tomography, formulated as an inverse factored eikonal equation. Numerical results in two and three dimensions show that our algorithm solves the factored eikonal equation efficiently, and demonstrate the achieved accuracy for computing the travel time. We also demonstrate a recovery of a 2D and 3D heterogeneous medium by travel time tomography using the eikonal equation for forward modeling and inversion by Gauss–Newton.
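    A minimal 1D sketch of the factored form T = T0·τ, assuming a smooth slowness and a point source at the origin: the travel time T itself has a kink at the source, while the factor τ is smooth and close to 1 there, which is the motivation for solving the factored equation instead of the original one. The function names and slowness model are hypothetical:

```python
import math

def travel_time(x, slowness, n=2000):
    """Travel time from a source at 0 to x in 1D: integrate the slowness
    along the (straight) ray with the composite trapezoid rule."""
    sgn = 1.0 if x >= 0 else -1.0
    h = abs(x) / n
    total = 0.0
    for i in range(n):
        a, b = sgn * i * h, sgn * (i + 1) * h
        total += 0.5 * (slowness(a) + slowness(b)) * h
    return total

s0 = 1.0                            # slowness at the source location
slow = lambda x: 1.0 + 0.2 * x * x  # smooth heterogeneous slowness model
T0 = lambda x: s0 * abs(x)          # analytic factor: constant-medium travel time
tau = lambda x: travel_time(x, slow) / T0(x)

# T has a |x|-type kink at the source, but tau is smooth and ~1 there:
# solving for tau avoids the point-source singularity of the raw equation.
```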

  10. A methodological approach to identify external factors for indicator-based risk adjustment illustrated by a cataract surgery register

    PubMed Central

    2014-01-01

    Background Risk adjustment is crucial for comparison of outcomes in medical care. Knowledge of the external factors that impact measured outcome but that cannot be influenced by the physician is a prerequisite for this adjustment. To date, a universal and reproducible method for identification of the relevant external factors has not been published. The selection of external factors in current quality assurance programmes is mainly based on expert opinion. We propose and demonstrate a methodology for identification of external factors requiring risk adjustment of outcome indicators, and we apply it to a cataract surgery register. Methods The defined test criteria to determine relevance for risk adjustment are “clinical relevance” and “statistical significance”. Clinical relevance of the association is presumed when observed success rates of the indicator in the presence and absence of the external factor differ by more than a pre-specified range of 10%. Statistical significance of the association between the external factor and outcome indicators is assessed by univariate stratification and multivariate logistic regression adjustment. The cataract surgery register was set up as part of a German multi-centre register trial for out-patient cataract surgery in three high-volume surgical sites. A total of 14,924 patient follow-ups have been documented since 2005. Eight external factors potentially relevant for risk adjustment were related to the outcome indicators “refractive accuracy” and “visual rehabilitation” 2–5 weeks after surgery. Results The clinical relevance criterion confirmed 2 (“refractive accuracy”) and 5 (“visual rehabilitation”) external factors. The significance criterion was verified in two ways. Univariate and multivariate analyses revealed almost identical external factors: 4 were related to “refractive accuracy” and 7 (6) to “visual rehabilitation”. Two (“refractive accuracy”) and 5 (“visual rehabilitation”) factors conformed to both criteria and were therefore relevant for risk adjustment. Conclusion In a practical application, the proposed method to identify relevant external factors for risk adjustment for comparison of outcome in healthcare proved to be feasible and comprehensive. The method can also be adapted to other quality assurance programmes. However, the cut-off score for clinical relevance needs to be individually assessed when applying the proposed method to other indications or indicators. PMID:24965949
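    The two test criteria can be sketched in a few lines. This illustration uses the univariate 2×2 chi-square check for the significance criterion (the study additionally uses multivariate logistic regression, not shown), and all counts are hypothetical:

```python
def risk_adjustment_relevant(succ_with, n_with, succ_without, n_without,
                             relevance_margin=0.10, chi2_crit=3.841):
    """Sketch of the two test criteria for an external factor:
    clinical relevance : indicator success rates with/without the factor
                         differ by more than 10 percentage points;
    significance       : 2x2 chi-square statistic exceeds the 5% critical
                         value (univariate stratification variant)."""
    a, b = succ_with, n_with - succ_with          # factor present
    c, d = succ_without, n_without - succ_without # factor absent
    rate_with, rate_without = a / n_with, c / n_without
    clinically_relevant = abs(rate_with - rate_without) > relevance_margin
    n = n_with + n_without
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return clinically_relevant and chi2 > chi2_crit, chi2

# Hypothetical external factor: 60% success when present vs 80% when absent.
relevant, chi2 = risk_adjustment_relevant(60, 100, 80, 100)
```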

  11. Tensor-GMRES method for large sparse systems of nonlinear equations

    NASA Technical Reports Server (NTRS)

    Feng, Dan; Pulliam, Thomas H.

    1994-01-01

    This paper introduces a tensor-Krylov method, the tensor-GMRES method, for large sparse systems of nonlinear equations. This method is a coupling of tensor model formation and solution techniques for nonlinear equations with Krylov subspace projection techniques for unsymmetric systems of linear equations. Traditional tensor methods for nonlinear equations are based on a quadratic model of the nonlinear function, a standard linear model augmented by a simple second order term. These methods are shown to be significantly more efficient than standard methods both on nonsingular problems and on problems where the Jacobian matrix at the solution is singular. A major disadvantage of the traditional tensor methods is that the solution of the tensor model requires the factorization of the Jacobian matrix, which may not be suitable for problems where the Jacobian matrix is large and has a 'bad' sparsity structure for an efficient factorization. We overcome this difficulty by forming and solving the tensor model using an extension of a Newton-GMRES scheme. Like traditional tensor methods, we show that the new tensor method has significant computational advantages over the analogous Newton counterpart. Consistent with Krylov subspace based methods, the new tensor method does not depend on the factorization of the Jacobian matrix. As a matter of fact, the Jacobian matrix is never needed explicitly.
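    A scalar sketch of the quadratic tensor model described above: a standard linear (Newton) model augmented by a simple rank-one second-order term, with the tensor coefficient chosen so the model interpolates a previously computed function value. The example function and values are hypothetical:

```python
def make_tensor_model(F_c, J_c, s, F_p):
    """Scalar sketch of the tensor model M(x_c + d) = F_c + J_c*d
    + 0.5*a*(s*d)**2, where s = x_p - x_c is the step to a past iterate
    and 'a' is fixed by requiring the model to interpolate F_p = F(x_p)."""
    a = 2.0 * (F_p - F_c - J_c * s) / s**4
    return lambda d: F_c + J_c * d + 0.5 * a * (s * d) ** 2

# Hypothetical example: F(x) = x^2 around x_c = 1 with past point x_p = 0.
F = lambda x: x * x
x_c, x_p = 1.0, 0.0
model = make_tensor_model(F(x_c), 2.0 * x_c, x_p - x_c, F(x_p))
# The second-order term supplies curvature a purely linear model misses,
# so for this quadratic F the tensor model reproduces F(x_c + d) exactly.
```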

  12. Method and computer program product for maintenance and modernization backlogging

    DOEpatents

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
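    The stated relation is a plain sum of the three time-period-specific quantities, which can be sketched as follows (all figures hypothetical):

```python
def future_facility_conditions(maintenance_cost, modernization_factor,
                               backlog_factor):
    """Sketch of the patent's stated relation: future facility conditions
    equal the time-period-specific maintenance cost plus the modernization
    factor plus the backlog factor."""
    return maintenance_cost + modernization_factor + backlog_factor

# Hypothetical figures for one time period (in dollars):
ffc = future_facility_conditions(1.2e6, 3.5e5, 4.0e5)
```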

  13. RECEPTOR MODELING OF AMBIENT PARTICULATE MATTER DATA USING POSITIVE MATRIX FACTORIZATION: REVIEW OF EXISTING METHODS

    EPA Science Inventory

    Methods for apportioning sources of ambient particulate matter (PM) using the positive matrix factorization (PMF) algorithm are reviewed. Numerous procedural decisions must be made and algorithmic parameters selected when analyzing PM data with PMF. However, few publications docu...

  14. Ergonomics research methods

    NASA Technical Reports Server (NTRS)

    Uspenskiy, S. I.; Yermakova, S. V.; Chaynova, L. D.; Mitkin, A. A.; Gushcheva, T. M.; Strelkov, Y. K.; Tsvetkova, N. F.

    1973-01-01

    Various methods used in ergonomic research are given. They are: (1) anthropometric measurement, (2) the polyeffector method of assessing the functional state of man, (3) galvanic skin reaction, (4) pneumography, (5) electromyography, (6) electrooculography, and (7) tachistoscopy. A brief summary is given of each method, including instrumentation and results.

  15. Triangular covariance factorizations for Kalman filtering. Ph.D. Thesis - Calif. Univ.

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.

    1976-01-01

    An improved computational form of the discrete Kalman filter is derived using an upper triangular factorization of the error covariance matrix. The covariance P is factored such that P = UDU^T, where U is unit upper triangular and D is diagonal. Recursions are developed for propagating the U-D covariance factors together with the corresponding state estimate. The resulting algorithm, referred to as the U-D filter, combines the superior numerical precision of square root filtering techniques with an efficiency comparable to that of Kalman's original formula. Moreover, this method is easily implemented and involves no more computer storage than the Kalman algorithm. These characteristics make the U-D method an attractive real-time filtering technique. A new covariance error analysis technique is obtained from an extension of the U-D filter equations. This evaluation method is flexible and efficient and may provide significantly improved numerical results. Cost comparisons show that for a large class of problems the U-D evaluation algorithm is noticeably less expensive than conventional error analysis methods.
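    The P = UDU^T factorization itself can be sketched in pure Python with the usual Bierman/Thornton-style backward recursion; the example matrix is hypothetical:

```python
def udu_factorize(P):
    """Factor a symmetric positive-definite matrix P as P = U*D*U^T with
    U unit upper triangular and D diagonal (U-D covariance factorization)."""
    n = len(P)
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    D = [0.0] * n
    for j in range(n - 1, -1, -1):
        # diagonal entry, with contributions of already-processed columns removed
        D[j] = P[j][j] - sum(D[k] * U[j][k] ** 2 for k in range(j + 1, n))
        for i in range(j):
            U[i][j] = (P[i][j] - sum(D[k] * U[i][k] * U[j][k]
                                     for k in range(j + 1, n))) / D[j]
    return U, D

def udu_reconstruct(U, D):
    """Recompose U*D*U^T to verify the factorization."""
    n = len(D)
    return [[sum(U[i][k] * D[k] * U[j][k] for k in range(n))
             for j in range(n)] for i in range(n)]

P = [[4.0, 2.0, 1.0],
     [2.0, 3.0, 0.5],
     [1.0, 0.5, 2.0]]
U, D = udu_factorize(P)
R = udu_reconstruct(U, D)
```

    For a positive-definite P all entries of D come out positive, which is what gives the U-D form its square-root-like numerical robustness without computing any square roots.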

  16. First-excited state g factor of 136Te by the recoil-in-vacuum method

    DOE PAGES

    Stuchbery, A. E.; Allmond, J. M.; Danchev, M.; ...

    2017-07-27

    The g factor of the first 2+ state of radioactive 136Te with two valence protons and two valence neutrons beyond double-magic 132Sn has been measured by the recoil-in-vacuum (RIV) method. The lifetime of this state is an order of magnitude longer than the lifetimes of excited states recently measured by the RIV method in Sn and Te isotopes, requiring a new evaluation of the free-ion hyperfine interactions and methodology used to determine the g factor. In this paper, the calibration data are reported and the analysis procedures are described in detail. The resultant g factor has a similar magnitude to the g factors of other nuclei with an equal number of valence protons and neutrons in the major shell. However, an unexpected trend is found in the g factors of the N = 84 isotones, which decrease from 136Te to 144Nd. Finally, shell model calculations with interactions derived from the CD Bonn potential show good agreement with the g factors and E2 transition rates of 2+ states around 132Sn, confirming earlier indications that 132Sn is a good doubly magic core.

  17. Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image

    PubMed Central

    Wen, Wei; Khatibi, Siamak

    2017-01-01

    Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem, but they require the fill factor to be known. However, the fill factor is kept as an industrial secret by most image sensor manufacturers due to its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and the sensor irradiance are estimated from the generation of virtual images. Then the global intensity values of the virtual images are obtained, which are the result of fusing the virtual images into a single, high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images. The fill factor is estimated by the conditional minimum of the inferred function. The method is verified using images from two datasets. The results show that our method estimates the fill factor correctly, with significant stability and accuracy, from one single arbitrary image, according to the low standard deviation of the estimated fill factors from each of the images and for each camera. PMID:28335459

  18. First-excited state g factor of 136Te by the recoil-in-vacuum method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stuchbery, A. E.; Allmond, J. M.; Danchev, M.

    The g factor of the first 2+ state of radioactive 136Te with two valence protons and two valence neutrons beyond double-magic 132Sn has been measured by the recoil-in-vacuum (RIV) method. The lifetime of this state is an order of magnitude longer than the lifetimes of excited states recently measured by the RIV method in Sn and Te isotopes, requiring a new evaluation of the free-ion hyperfine interactions and methodology used to determine the g factor. In this paper, the calibration data are reported and the analysis procedures are described in detail. The resultant g factor has a similar magnitude to the g factors of other nuclei with an equal number of valence protons and neutrons in the major shell. However, an unexpected trend is found in the g factors of the N = 84 isotones, which decrease from 136Te to 144Nd. Finally, shell model calculations with interactions derived from the CD Bonn potential show good agreement with the g factors and E2 transition rates of 2+ states around 132Sn, confirming earlier indications that 132Sn is a good doubly magic core.

  19. Assessing Management Support for Worksite Health Promotion: Psychometric Analysis of the Leading by Example (LBE) Instrument

    PubMed Central

    Della, Lindsay J.; DeJoy, David M.; Goetzel, Ron Z.; Ozminkowski, Ronald J.; Wilson, Mark G.

    2009-01-01

    Objective This paper describes the development of the Leading by Example (LBE) instrument. Methods Exploratory factor analysis was used to obtain an initial factor structure. Factor validity was evaluated using confirmatory factor analysis methods. Cronbach’s alpha and item-total correlations provided information on the reliability of the factor subscales. Results Four subscales were identified: business alignment with health promotion objectives; awareness of the health-productivity link; worksite support for health promotion; and leadership support for health promotion. Factor-by-group comparisons revealed that the initial factor structure is effective in detecting differences in organizational support for health promotion across different employee groups. Conclusions Management support for health promotion can be assessed using the LBE, a brief self-report questionnaire. Researchers can use the LBE to diagnose, track, and evaluate worksite health promotion programs. PMID:18517097

  20. Scientific evaluation of the safety factor for the acceptable daily intake (ADI). Case study: butylated hydroxyanisole (BHA).

    PubMed

    Würtzen, G

    1993-01-01

    The principles of 'data-derived safety factors' are applied to toxicological and biochemical information on butylated hydroxyanisole (BHA). The calculated safety factor for an ADI is, by this method, comparable to the existing internationally recognized safety evaluations. Relevance for humans of forestomach tumours in rodents is discussed. The method provides a basis for organizing data in a way that permits an explicit assessment of its relevance.

  1. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  2. A multifactorial analysis of obesity as CVD risk factor: use of neural network based methods in a nutrigenetics context.

    PubMed

    Valavanis, Ioannis K; Mougiakakou, Stavroula G; Grimaldi, Keith A; Nikita, Konstantina S

    2010-09-08

    Obesity is a multifactorial trait, which comprises an independent risk factor for cardiovascular disease (CVD). The aim of the current work is to study the complex etiology beneath obesity and identify genetic variations and/or factors related to nutrition that contribute to its variability. To this end, a set of more than 2300 white subjects who participated in a nutrigenetics study was used. For each subject a total of 63 factors describing genetic variants related to CVD (24 in total), gender, and nutrition (38 in total), e.g. average daily intake in calories and cholesterol, were measured. Each subject was categorized according to body mass index (BMI) as normal (BMI ≤ 25) or overweight (BMI > 25). Two artificial neural network (ANN) based methods were designed and used towards the analysis of the available data. These corresponded to i) a multi-layer feed-forward ANN combined with a parameter decreasing method (PDM-ANN), and ii) a multi-layer feed-forward ANN trained by a hybrid method (GA-ANN) which combines genetic algorithms and the popular back-propagation training algorithm. PDM-ANN and GA-ANN were comparatively assessed in terms of their ability to identify the most important factors among the initial 63 variables describing genetic variations, nutrition and gender, able to classify a subject into one of the BMI related classes: normal and overweight. The methods were designed and evaluated using appropriate training and testing sets provided by 3-fold Cross Validation (3-CV) resampling. Classification accuracy, sensitivity, specificity and area under the receiver operating characteristics curve were utilized to evaluate the resulting predictive ANN models. The most parsimonious set of factors was obtained by the GA-ANN method and included gender, six genetic variations and 18 nutrition-related variables. The corresponding predictive model was characterized by a mean accuracy of 61.46% in the 3-CV testing sets. The ANN-based methods revealed factors that interactively contribute to the obesity trait and provided predictive models with a promising generalization ability. In general, results showed that ANNs and their hybrids can provide useful tools for the study of complex traits in the context of nutrigenetics.

  3. A Method for the Constrained Design of Natural Laminar Flow Airfoils

    NASA Technical Reports Server (NTRS)

    Green, Bradford E.; Whitesides, John L.; Campbell, Richard L.; Mineck, Raymond E.

    1996-01-01

    A fully automated iterative design method has been developed by which an airfoil with a substantial amount of natural laminar flow can be designed while maintaining other aerodynamic and geometric constraints. Drag reductions have been realized using the design method over a range of Mach numbers, Reynolds numbers and airfoil thicknesses. The strengths of the method are its ability to calculate a target N-factor distribution that forces the flow to undergo transition at the desired location; the target-pressure-N-factor relationship that is used to reduce the N-factors in order to delay transition; and its ability to design airfoils to meet lift, pitching moment, thickness and leading-edge radius constraints while also meeting the natural laminar flow constraint. The method uses several existing CFD codes and can design a new airfoil in only a few days on a Silicon Graphics IRIS workstation.

  4. TRASYS form factor matrix normalization

    NASA Technical Reports Server (NTRS)

    Tsuyuki, Glenn T.

    1992-01-01

    A method has been developed for adjusting a TRASYS enclosure form factor matrix to unity. This approach is not limited to closed geometries, and in fact, it is primarily intended for use with open geometries. The purpose of this approach is to prevent optimistic form factors to space. In this method, nodal form factor sums are calculated within 0.05 of unity using TRASYS, although deviations as large as 0.10 may be acceptable, and then, a process is employed to distribute the difference amongst the nodes. A specific example has been analyzed with this method, and a comparison was performed with a standard approach for calculating radiation conductors. In this comparison, hot and cold case temperatures were determined. Exterior nodes exhibited temperature differences as large as 7 C and 3 C for the hot and cold cases, respectively when compared with the standard approach, while interior nodes demonstrated temperature differences from 0 C to 5 C. These results indicate that temperature predictions can be artificially biased if the form factor computation error is lumped into the individual form factors to space.
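    One simple way to bring the nodal form factor sums to unity is proportional redistribution of the residual over each row; the exact TRASYS adjustment may differ, so this is an illustrative sketch with hypothetical numbers:

```python
def normalize_form_factors(F, tol=0.05):
    """Sketch: rescale each row of an enclosure form factor matrix so it
    sums to unity, distributing the residual in proportion to the existing
    entries. Rows deviating from unity by more than `tol` are rejected,
    mirroring the idea that only small computation errors should be fixed
    this way rather than lumped into the form factor to space."""
    adjusted = []
    for row in F:
        s = sum(row)
        if abs(s - 1.0) > tol:
            raise ValueError("row sum %.3f deviates too far from unity" % s)
        adjusted.append([f / s for f in row])
    return adjusted

# Hypothetical rows summing to 0.97 and 1.02, both within the 0.05 band:
F = [[0.50, 0.30, 0.17],
     [0.40, 0.42, 0.20]]
Fn = normalize_form_factors(F)
```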

  5. A comparison of Q-factor estimation methods for marine seismic data

    NASA Astrophysics Data System (ADS)

    Kwon, J.; Ha, J.; Shin, S.; Chung, W.; Lim, C.; Lee, D.

    2016-12-01

    The seismic imaging technique draws information from inside the earth using seismic reflection and transmission data. This technique is an important method in geophysical exploration. It has also been employed widely as a means of locating oil and gas reservoirs because it offers information on geological media. There is much recent and active research into seismic attenuation and how it determines the quality of seismic imaging. Seismic attenuation is determined by various geological characteristics, through the absorption or scattering that occurs when the seismic wave passes through a geological medium. The seismic attenuation can be defined using an attenuation coefficient and represented as a non-dimensional variable known as the Q-factor. The Q-factor is a unique characteristic of a geological medium and a very important material property for oil and gas resource development. It can be used to infer other characteristics of a medium, such as porosity, permeability and viscosity, and can directly indicate the presence of hydrocarbons to identify oil and gas bearing areas from seismic data. There are various ways to estimate the Q-factor in three different domains. In the time domain, pulse amplitude decay, pulse rise time, and pulse broadening are representative. Logarithm spectral ratio (LSR), centroid frequency shift (CFS), and peak frequency shift (PFS) are used in the frequency domain. In the time-frequency domain, the wavelet envelope peak instantaneous frequency (WEPIF) method is most frequently employed. In this study, we estimated and analyzed the Q-factor through a numerical model test using 4 methods: LSR, CFS, PFS, and WEPIF. Before applying these 4 methods to observed data, we experimented with the numerical model test. The numerical model test data are derived from Norsar-2D, which is based on a ray-tracing algorithm, and we used reflection and normal-incidence surveys to calculate the Q-factor according to the array of sources and receivers. After the numerical model test, we chose the most accurate of the 4 methods by comparing Q-factors from the reflection and normal-incidence surveys. We applied that method to the observed data and verified its accuracy.
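    The logarithm spectral ratio (LSR) method mentioned above can be sketched on noise-free synthetic spectra with a known Q: the log of the spectral ratio is linear in frequency with slope -π·Δt/Q, so a least-squares line fit recovers Q. All values below are hypothetical:

```python
import math

def q_from_spectral_ratio(freqs, amp1, amp2, dt):
    """LSR sketch: ln(A2/A1)(f) = -pi * f * dt / Q + const, so Q follows
    from the least-squares slope of the log spectral ratio vs frequency.
    dt is the travel time between the two recording positions."""
    y = [math.log(a2 / a1) for a1, a2 in zip(amp1, amp2)]
    fbar = sum(freqs) / len(freqs)
    ybar = sum(y) / len(y)
    slope = (sum((f - fbar) * (yy - ybar) for f, yy in zip(freqs, y))
             / sum((f - fbar) ** 2 for f in freqs))
    return -math.pi * dt / slope

# Synthetic check: attenuate a flat spectrum with a known Q = 50.
Q_true, dt = 50.0, 0.5
freqs = [10.0 + i for i in range(51)]  # 10-60 Hz
amp1 = [1.0] * len(freqs)
amp2 = [math.exp(-math.pi * f * dt / Q_true) for f in freqs]
Q_est = q_from_spectral_ratio(freqs, amp1, amp2, dt)
```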

  6. Fuzzy comprehensive evaluation of multiple environmental factors for swine building assessment and control.

    PubMed

    Xie, Qiuju; Ni, Ji-Qin; Su, Zhongbin

    2017-10-15

    In confined swine buildings, temperature, humidity, and air quality are all important for animal health and productivity. However, current swine building environmental control is based only on temperature, and evaluation and control methods based on multiple environmental factors are needed. In this paper, fuzzy comprehensive evaluation (FCE) theory was adopted for multi-factor assessment of environmental quality in two commercial swine buildings using real measurement data. An assessment index system and membership functions were established, and predetermined weights were given using the analytic hierarchy process (AHP) combined with expert knowledge. The results show that multiple factors such as temperature, humidity, and concentrations of ammonia (NH3), carbon dioxide (CO2), and hydrogen sulfide (H2S) can be successfully integrated in FCE for swine building environment assessment. The FCE method has a high correlation coefficient of 0.737 compared with the method of single-factor evaluation (SFE). The FCE method can significantly increase sensitivity and perform an effective and integrative assessment. It can be used as part of environmental control and warning systems for swine building environment management to improve swine production and welfare.
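    A weighted-average fuzzy comprehensive evaluation step can be sketched as follows; the AHP weights and membership grades below are hypothetical, for illustration only:

```python
def fuzzy_comprehensive_eval(weights, membership):
    """Sketch of the FCE composition B = W . R: combine per-factor
    membership grades into one score vector using the weighted average
    (M(., +)) operator."""
    n_grades = len(membership[0])
    return [sum(w * row[g] for w, row in zip(weights, membership))
            for g in range(n_grades)]

# Hypothetical AHP weights for temperature, humidity, NH3, CO2, H2S:
weights = [0.30, 0.20, 0.25, 0.10, 0.15]
# Membership of each factor in the grades (good, fair, poor):
membership = [[0.7, 0.2, 0.1],   # temperature
              [0.5, 0.4, 0.1],   # humidity
              [0.2, 0.5, 0.3],   # NH3
              [0.6, 0.3, 0.1],   # CO2
              [0.8, 0.1, 0.1]]   # H2S
B = fuzzy_comprehensive_eval(weights, membership)
grade = B.index(max(B))  # overall grade by the maximum-membership rule
```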

  7. Risk Factor Assessment Branch (RFAB)

    Cancer.gov

    The Risk Factor Assessment Branch (RFAB) focuses on the development, evaluation, and dissemination of high-quality risk factor metrics, methods, tools, technologies, and resources for use across the cancer research continuum, and the assessment of cancer-related risk factors in the population.

  8. SU-G-201-05: Comparison of Different Methods for Output Verification of Elekta Nucletron’s Valencia Skin Applicators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, J; Yudelev, M

    2016-06-15

    Purpose: The provided output factors for Elekta Nucletron’s skin applicators are based on Monte Carlo simulations. These outputs have not been independently verified, and there is no recognized method for output verification of the vendor’s applicators. The purpose of this work is to validate the outputs provided by the vendor experimentally. Methods: Using a Flexitron Ir-192 HDR unit, three experimental methods were employed to determine dose with the 30 mm diameter Valencia applicator: first, a gradient method using extrapolation ionization chamber (Far West Technology, EIC-1) measurements in a solid water phantom at 3 mm SCD was used. The dose was derived based on first principles. Secondly, a combination of a parallel plate chamber (Exradin A-10) and the EIC-1 was used to determine air kerma at 3 mm SCD. The air kerma was converted to dose to water in line with the TG-61 formalism by using a μen ratio and a scatter factor measured with the skin applicators. Similarly, a combination of the A-10 parallel plate chamber and Gafchromic film (EBT3) was also used. The Nk factor for the A-10 chamber was obtained through linear interpolation between ADCL-supplied Nk factors for Cs-137 and M250. Results: EIC-1 measurements in solid water defined the output factor at 3 mm as 0.1343 cGy/U hr. The combinations of A-10/EIC-1 and A-10/EBT3 led to output factors of 0.1383 and 0.1568 cGy/U hr, respectively. For comparison, the output recommended by the vendor is 0.1659 cGy/U hr. Conclusion: All determined dose rates were lower than the vendor-supplied values. The observed discrepancy between extrapolation chamber and film methods can be ascribed to extracameral gradient effects that may not be fully accounted for by the former method.

  9. Tensor Factorization for Low-Rank Tensor Completion.

    PubMed

    Zhou, Pan; Lu, Canyi; Lin, Zhouchen; Zhang, Chao

    2018-03-01

    Recently, a tensor nuclear norm (TNN) based method was proposed to solve the tensor completion problem, which has achieved state-of-the-art performance on image and video inpainting tasks. However, it requires computing tensor singular value decomposition (t-SVD), which costs much computation and thus cannot efficiently handle tensor data, due to its natural large scale. Motivated by TNN, we propose a novel low-rank tensor factorization method for efficiently solving the 3-way tensor completion problem. Our method preserves the low-rank structure of a tensor by factorizing it into the product of two tensors of smaller sizes. In the optimization process, our method only needs to update two smaller tensors, which can be more efficiently conducted than computing t-SVD. Furthermore, we prove that the proposed alternating minimization algorithm can converge to a Karush-Kuhn-Tucker point. Experimental results on the synthetic data recovery, image and video inpainting tasks clearly demonstrate the superior performance and efficiency of our developed method over state-of-the-arts including the TNN and matricization methods.
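    The factorization idea — represent the data by two small factors and update each in turn — has a simple matrix analogue. Below is a rank-1 alternating-minimization sketch on a matrix with a missing entry; the paper itself works with 3-way tensors under the tensor-tensor product, which this sketch does not capture.

    ```python
    def als_complete(X, mask, iters=50):
        """Rank-1 alternating least squares completion: fit X ≈ outer(u, v)
        using only observed entries (mask[i][j] == 1), alternately applying
        the closed-form least-squares update for u and then for v."""
        m, n = len(X), len(X[0])
        u, v = [1.0] * m, [1.0] * n
        for _ in range(iters):
            for i in range(m):
                num = sum(mask[i][j] * X[i][j] * v[j] for j in range(n))
                den = sum(mask[i][j] * v[j] ** 2 for j in range(n))
                if den:
                    u[i] = num / den
            for j in range(n):
                num = sum(mask[i][j] * X[i][j] * u[i] for i in range(m))
                den = sum(mask[i][j] * u[i] ** 2 for i in range(m))
                if den:
                    v[j] = num / den
        return u, v

    # Rank-1 data with the centre entry (true value 10) treated as missing.
    X = [[4, 5, 6], [8, 0, 12], [12, 15, 18]]
    mask = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]
    u, v = als_complete(X, mask)
    recovered = u[1] * v[1]   # ≈ 10 for this exactly rank-1 example
    ```

    Because only the two small factors are updated, each sweep is cheap — the same reason the paper's tensor factorization avoids repeated t-SVDs.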

  10. LDRD final report : leveraging multi-way linkages on heterogeneous data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunlavy, Daniel M.; Kolda, Tamara Gibson

    2010-09-01

    This report is a summary of the accomplishments of the 'Leveraging Multi-way Linkages on Heterogeneous Data' project, which ran from FY08 through FY10. The goal was to investigate scalable and robust methods for multi-way data analysis. We developed a new optimization-based method called CPOPT for fitting a particular type of tensor factorization to data; CPOPT was compared against existing methods and found to be more accurate than any faster method and faster than any equally accurate method. We extended this method to computing tensor factorizations for problems with incomplete data; our results show that scientifically meaningful factorizations can be recovered even with large amounts of missing data (50% or more). The project has involved 5 members of the technical staff, 2 postdocs, and 1 summer intern. It has resulted in a total of 13 publications, 2 software releases, and over 30 presentations. Several follow-on projects have already begun, with more potential projects in development.

  11. Breath Figure Method for Construction of Honeycomb Films

    PubMed Central

    Dou, Yingying; Jin, Mingliang; Zhou, Guofu; Shui, Lingling

    2015-01-01

    Honeycomb films with various building units, showing potential applications in biological, medical, physicochemical, photoelectric, and many other areas, could be prepared by the breath figure method. The ordered hexagonal structures formed by the breath figure process are related to the building units, solvents, substrates, temperature, humidity, air flow, and other factors. Therefore, by adjusting these factors, the honeycomb structures could be tuned properly. In this review, we summarized the development of the breath figure method of fabricating honeycomb films and the factors of adjusting honeycomb structures. The organic-inorganic hybrid was taken as the example building unit to discuss the preparation, mechanism, properties, and applications of the honeycomb films. PMID:26343734

  12. Fast sparse recovery and coherence factor weighting in optoacoustic tomography

    NASA Astrophysics Data System (ADS)

    He, Hailong; Prakash, Jaya; Buehler, Andreas; Ntziachristos, Vasilis

    2017-03-01

    Sparse recovery algorithms have shown great potential for reconstructing images from limited-view datasets in optoacoustic tomography, with the disadvantage of being computationally expensive. In this paper, we improve the fast-converging Split Augmented Lagrangian Shrinkage Algorithm (SALSA) based on a least-squares QR (LSQR) formulation to perform accelerated reconstructions. Further, a coherence factor is calculated to weight the final reconstruction, which further reduces the artifacts arising in limited-view scenarios and acoustically heterogeneous media. Several phantom and biological experiments indicate that the accelerated SALSA method with coherence factor (ASALSA-CF) provides improved reconstructions and much faster convergence compared to existing sparse recovery methods.
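    Coherence-factor weighting can be illustrated with one common definition from beamforming — the ratio of coherently to incoherently summed channel energy at each pixel — though the paper's exact formulation may differ.

    ```python
    def coherence_factor(signals):
        """Coherence factor for one image pixel over N channel signals:
        |sum(s_i)|^2 / (N * sum(|s_i|^2)). Ranges from 0 (signals cancel,
        incoherent) to 1 (all signals aligned, fully coherent). A standard
        beamforming definition used here as a sketch, not necessarily the
        paper's exact weighting."""
        n = len(signals)
        coherent = abs(sum(signals)) ** 2
        incoherent = n * sum(abs(s) ** 2 for s in signals)
        return coherent / incoherent if incoherent else 0.0

    cf_aligned = coherence_factor([1.0, 1.0, 1.0, 1.0])   # fully coherent
    cf_cancel = coherence_factor([1.0, -1.0, 1.0, -1.0])  # fully incoherent
    ```

    Multiplying each reconstructed pixel by its coherence factor suppresses regions where the channel data disagree, which is where limited-view and heterogeneity artifacts concentrate.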

  13. Factors that influence utilisation of HIV/AIDS prevention methods among university students residing at a selected university campus.

    PubMed

    Ndabarora, Eléazar; Mchunu, Gugu

    2014-01-01

    Various studies have reported that university students, who are mostly young people, rarely use existing HIV/AIDS preventive methods. Although studies have shown that young university students have a high degree of knowledge about HIV/AIDS and HIV modes of transmission, they are still not utilising the existing HIV prevention methods and still engage in risky sexual practices favourable to HIV. Some variables, such as awareness of existing HIV/AIDS prevention methods, have been associated with utilisation of such methods. The study aimed to explore factors that influence use of existing HIV/AIDS prevention methods among university students residing on a selected campus, using the Health Belief Model (HBM) as a theoretical framework. A quantitative research approach and an exploratory-descriptive design were used to describe perceived factors that influence utilisation of HIV/AIDS prevention methods by university students. A total of 335 students completed online and manual questionnaires. Study findings showed that utilisation of HIV/AIDS prevention methods was mainly determined by awareness of the existing university-based HIV/AIDS prevention strategies. The most utilised prevention methods were voluntary counselling and testing services and free condoms. The perceived susceptibility and perceived threat of HIV/AIDS score was also found to correlate with the HIV risk index score, as well as with condom self-efficacy and condom utilisation. Most HBM variables were not predictors of utilisation of HIV/AIDS prevention methods among students. Interventions aiming to improve the utilisation of HIV/AIDS prevention methods among students at the selected university should focus on removing identified barriers, promoting HIV/AIDS prevention services and providing appropriate resources to implement such programmes.

  14. FACTORING TO FIT OFF DIAGONALS.

    DTIC Science & Technology

    imply an upper bound on the number of factors. When applied to somatotype data, the method improved substantially on centroid solutions and indicated a reinterpretation of earlier factoring studies. (Author)

  15. Procedural Factors That Affect Psychophysical Measures of Spatial Selectivity in Cochlear Implant Users

    PubMed Central

    Deeks, John M.; Carlyon, Robert P.

    2015-01-01

    Behavioral measures of spatial selectivity in cochlear implants are important both for guiding the programing of individual users’ implants and for the evaluation of different stimulation methods. However, the methods used are subject to a number of confounding factors that can contaminate estimates of spatial selectivity. These factors include off-site listening, charge interactions between masker and probe pulses in interleaved masking paradigms, and confusion effects in forward masking. We review the effects of these confounds and discuss methods for minimizing them. We describe one such method in which the level of a 125-pps masker is adjusted so as to mask a 125-pps probe, and where the masker and probe pulses are temporally interleaved. Five experiments describe the method and evaluate the potential roles of the different potential confounding factors. No evidence was obtained for off-site listening of the type observed in acoustic hearing. The choice of the masking paradigm was shown to alter the measured spatial selectivity. For short gaps between masker and probe pulses, both facilitation and refractory mechanisms had an effect on masking; this finding should inform the choice of stimulation rate in interleaved masking experiments. No evidence for confusion effects in forward masking was revealed. It is concluded that the proposed method avoids many potential confounds but that the choice of method should depend on the research question under investigation. PMID:26420785

  16. A method to measure the ozone penetration factor in residences under infiltration conditions: application in a multifamily apartment unit.

    PubMed

    Zhao, H; Stephens, B

    2016-08-01

    Recent experiments have demonstrated that outdoor ozone reacts with materials inside residential building enclosures, potentially reducing indoor exposures to ozone or altering ozone reaction byproducts. However, test methods to measure ozone penetration factors in residences (P) remain limited. We developed a method to measure ozone penetration factors in residences under infiltration conditions and applied it in an unoccupied apartment unit. Twenty-four repeated measurements were made, and results were explored to (i) evaluate the accuracy and repeatability of the new procedure using multiple solution methods, (ii) compare results from 'interference-free' and conventional UV absorbance ozone monitors, and (iii) compare results against those from a previously published test method requiring artificial depressurization. The mean (±s.d.) estimate of P was 0.54 ± 0.10 across a wide range of conditions using the new method with an interference-free monitor; the conventional monitor was unable to yield meaningful results due to relatively high limits of detection. Estimates of P were not clearly influenced by any indoor or outdoor environmental conditions or changes in indoor decay rate constants. This work represents the first known measurements of ozone penetration factors in a residential building operating under natural infiltration conditions and provides a new method for widespread application in buildings. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
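    How a penetration factor falls out of measured concentrations can be sketched with the usual single-zone steady-state mass balance. The formula and numbers below are a textbook illustration under that assumption; the paper's procedure fits time-resolved data with its own solution methods.

    ```python
    def penetration_factor(c_in, c_out, aer, k_decay):
        """Ozone penetration factor from a single-zone steady-state mass
        balance: C_in = P * aer * C_out / (aer + k_decay), where aer is the
        air exchange rate (1/h) and k_decay the indoor decay rate constant
        (1/h). Solving for P gives the expression below. Illustrative only;
        assumes steady state and no indoor sources."""
        return c_in * (aer + k_decay) / (aer * c_out)

    # Invented but plausible values: 4.32 ppb indoors, 40 ppb outdoors,
    # 0.5/h infiltration air exchange, 2.0/h indoor decay.
    p = penetration_factor(c_in=4.32, c_out=40.0, aer=0.5, k_decay=2.0)  # ≈ 0.54
    ```

    Note how strongly P depends on the indoor decay constant: errors in k_decay propagate directly into P, which is one reason repeated measurements and multiple solution methods matter.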

  17. Enhancing non-refractory aerosol apportionment from an urban industrial site through receptor modeling of complete high time-resolution aerosol mass spectra

    NASA Astrophysics Data System (ADS)

    McGuire, M. L.; Chang, R. Y.-W.; Slowik, J. G.; Jeong, C.-H.; Healy, R. M.; Lu, G.; Mihele, C.; Abbatt, J. P. D.; Brook, J. R.; Evans, G. J.

    2014-08-01

    Receptor modeling was performed on quadrupole unit mass resolution aerosol mass spectrometer (Q-AMS) sub-micron particulate matter (PM) chemical speciation measurements from Windsor, Ontario, an industrial city situated across the Detroit River from Detroit, Michigan. Aerosol and trace gas measurements were collected on board Environment Canada's Canadian Regional and Urban Investigation System for Environmental Research (CRUISER) mobile laboratory. Positive matrix factorization (PMF) was performed on the AMS full particle-phase mass spectrum (PMFFull MS) encompassing both organic and inorganic components. This approach was compared to the more common method of analyzing only the organic mass spectra (PMFOrg MS). PMF of the full mass spectrum revealed that variability in the non-refractory sub-micron aerosol concentration and composition was best explained by six factors: an amine-containing factor (Amine); an ammonium sulfate- and oxygenated organic aerosol-containing factor (Sulfate-OA); an ammonium nitrate- and oxygenated organic aerosol-containing factor (Nitrate-OA); an ammonium chloride-containing factor (Chloride); a hydrocarbon-like organic aerosol (HOA) factor; and a moderately oxygenated organic aerosol factor (OOA). PMF of the organic mass spectrum revealed three factors of similar composition to some of those revealed through PMFFull MS: Amine, HOA and OOA. Including both the inorganic and organic mass proved to be a beneficial approach to analyzing the unit mass resolution AMS data for several reasons. First, it provided a method for potentially calculating more accurate sub-micron PM mass concentrations, particularly when unusual factors are present, in this case the Amine factor. As this method does not rely on a priori knowledge of chemical species, it circumvents the need for any adjustments to the traditional AMS species fragmentation patterns to account for atypical species, and can thus lead to more complete factor profiles. 
It is expected that this method would be even more useful for HR-ToF-AMS data, due to the ability to understand better the chemical nature of atypical factors from high-resolution mass spectra. Second, utilizing PMF to extract factors containing inorganic species allowed for the determination of the extent of neutralization, which could have implications for aerosol parameterization. Third, subtler differences in organic aerosol components were resolved through the incorporation of inorganic mass into the PMF matrix. The additional temporal features provided by the inorganic aerosol components allowed for the resolution of more types of oxygenated organic aerosol than could be reliably resolved from PMF of organics alone. Comparison of findings from the PMFFull MS and PMFOrg MS methods showed that for the Windsor airshed, the PMFFull MS method enabled additional conclusions to be drawn in terms of aerosol sources and chemical processes. While performing PMFOrg MS can provide important distinctions between types of organic aerosol, it is shown that including inorganic species in the PMF analysis can permit further apportionment of organics for unit mass resolution AMS mass spectra.
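    At its core, PMF is a non-negativity-constrained bilinear factorization of the data matrix into factor profiles and factor contributions. The sketch below uses unweighted Lee-Seung multiplicative updates to show only that bilinear, non-negative core; PMF proper additionally weights each residual by its measurement uncertainty, which this sketch omits.

    ```python
    import random

    def matmul(A, B):
        """Multiply two matrices stored as lists of rows."""
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

    def transpose(A):
        return [list(r) for r in zip(*A)]

    def nmf(X, r, iters=500, eps=1e-9, seed=0):
        """Minimal non-negative matrix factorization X ≈ W H via Lee-Seung
        multiplicative updates; rows of H play the role of factor profiles,
        columns of W the factor time-series contributions."""
        rng = random.Random(seed)
        m, n = len(X), len(X[0])
        W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(m)]
        H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(r)]
        for _ in range(iters):
            Wt = transpose(W)
            num, den = matmul(Wt, X), matmul(Wt, matmul(W, H))
            H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)] for i in range(r)]
            Ht = transpose(H)
            num, den = matmul(X, Ht), matmul(matmul(W, H), Ht)
            W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(r)] for i in range(m)]
        return W, H

    # A tiny invented non-negative rank-2 "spectrum" matrix
    # (rows = time steps, columns = m/z channels).
    X = [[1, 2, 3], [3, 2, 1], [4, 4, 4]]
    W, H = nmf(X, r=2)
    ```

    Because the multiplicative updates never change sign, W and H stay non-negative throughout, which is what makes the resolved profiles physically interpretable as source spectra.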

  18. Enhancing non-refractory aerosol apportionment from an urban industrial site through receptor modelling of complete high time-resolution aerosol mass spectra

    NASA Astrophysics Data System (ADS)

    McGuire, M. L.; Chang, R. Y.-W.; Slowik, J. G.; Jeong, C.-H.; Healy, R. M.; Lu, G.; Mihele, C.; Abbatt, J. P. D.; Brook, J. R.; Evans, G. J.

    2014-02-01

    Receptor modelling was performed on quadrupole unit mass resolution aerosol mass spectrometer (Q-AMS) sub-micron particulate matter (PM) chemical speciation measurements from Windsor, Ontario, an industrial city situated across the Detroit River from Detroit, Michigan. Aerosol and trace gas measurements were collected on board Environment Canada's CRUISER mobile laboratory. Positive matrix factorization (PMF) was performed on the AMS full particle-phase mass spectrum (PMFFull MS) encompassing both organic and inorganic components. This approach was compared to the more common method of analysing only the organic mass spectra (PMFOrg MS). PMF of the full mass spectrum revealed that variability in the non-refractory sub-micron aerosol concentration and composition was best explained by six factors: an amine-containing factor (Amine); an ammonium sulphate and oxygenated organic aerosol containing factor (Sulphate-OA); an ammonium nitrate and oxygenated organic aerosol containing factor (Nitrate-OA); an ammonium chloride containing factor (Chloride); a hydrocarbon-like organic aerosol (HOA) factor; and a moderately oxygenated organic aerosol factor (OOA). PMF of the organic mass spectrum revealed three factors of similar composition to some of those revealed through PMFFull MS: Amine, HOA and OOA. Including both the inorganic and organic mass proved to be a beneficial approach to analysing the unit mass resolution AMS data for several reasons. First, it provided a method for potentially calculating more accurate sub-micron PM mass concentrations, particularly when unusual factors are present, in this case, an Amine factor. As this method does not rely on a priori knowledge of chemical species, it circumvents the need for any adjustments to the traditional AMS species fragmentation patterns to account for atypical species, and can thus lead to more complete factor profiles. 
It is expected that this method would be even more useful for HR-ToF-AMS data, due to the ability to better understand the chemical nature of atypical factors from high-resolution mass spectra. Second, utilizing PMF to extract factors containing inorganic species allowed for the determination of the extent of neutralization, which could have implications for aerosol parameterization. Third, subtler differences in organic aerosol components were resolved through the incorporation of inorganic mass into the PMF matrix. The additional temporal features provided by the inorganic aerosol components allowed for the resolution of more types of oxygenated organic aerosol than could be reliably resolved from PMF of organics alone. Comparison of findings from the PMFFull MS and PMFOrg MS methods showed that for the Windsor airshed, the PMFFull MS method enabled additional conclusions to be drawn in terms of aerosol sources and chemical processes. While performing PMFOrg MS can provide important distinctions between types of organic aerosol, it is shown that including inorganic species in the PMF analysis can permit further apportionment of organics for unit mass resolution AMS mass spectra.

  19. Fuzzy Regression Prediction and Application Based on Multi-Dimensional Factors of Freight Volume

    NASA Astrophysics Data System (ADS)

    Xiao, Mengting; Li, Cheng

    2018-01-01

    Based on the realities of air cargo development, a multi-dimensional fuzzy regression method is used to determine the influencing factors; the three most important are GDP, total fixed-asset investment and regular flight route mileage. Adopting a systems viewpoint together with analogy methods, fuzzy numbers and multiple regression are combined to predict civil aviation cargo volume. Comparison with the 13th Five-Year Plan for China’s Civil Aviation Development (2016-2020) shows that this method can effectively improve forecasting accuracy and reduce forecasting risk, demonstrating that the model is feasible for predicting civil aviation freight volume and has high practical significance and operability.
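    The regression core of the approach can be sketched as an ordinary (crisp) multiple regression on the three named predictors; a fuzzy regression would replace the point coefficients below with fuzzy (e.g. triangular) numbers, so this is only the non-fuzzy skeleton. All data are invented for illustration.

    ```python
    def solve(A, b):
        """Gauss-Jordan elimination with partial pivoting for small systems."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[p] = M[p], M[c]
            for r in range(n):
                if r != c and M[c][c]:
                    f = M[r][c] / M[c][c]
                    M[r] = [x - f * y for x, y in zip(M[r], M[c])]
        return [M[i][n] / M[i][i] for i in range(n)]

    def fit_ols(rows, y):
        """Ordinary least squares with intercept via the normal equations:
        y ≈ b0 + b1*x1 + b2*x2 + b3*x3 (e.g. GDP, fixed-asset investment,
        route mileage). A crisp stand-in for the paper's fuzzy regression."""
        X = [[1.0] + list(r) for r in rows]
        n = len(X[0])
        XtX = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)] for i in range(n)]
        Xty = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
        return solve(XtX, Xty)

    # Invented data generated from y = 2 + x1 + 0.5*x2 - 0.25*x3, so the
    # fit should recover those coefficients exactly.
    rows = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1), (2, 1, 0), (0, 2, 3)]
    y = [2 + x1 + 0.5 * x2 - 0.25 * x3 for x1, x2, x3 in rows]
    beta = fit_ols(rows, y)
    ```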

  20. Repeat Pregnancy among Urban Adolescents: Sociodemographic, Family, and Health Factors.

    ERIC Educational Resources Information Center

    Coard, Stephanie Irby; Nitz, Katherine; Felice, Marianne E.

    2000-01-01

    Examines sociodemographic, family, and health factors associated with repeat pregnancy in a clinical sample of urban, first-time mothers. Results indicate that postpartum contraceptive method was associated with repeat pregnancy at year one; contraceptive use, maternal age, history of miscarriages, and postpartum contraceptive method were…

  1. An accelerated subspace iteration for eigenvector derivatives

    NASA Technical Reports Server (NTRS)

    Ting, Tienko

    1991-01-01

    An accelerated subspace iteration method for calculating eigenvector derivatives has been developed. Factors affecting the effectiveness and the reliability of the subspace iteration are identified, and effective strategies concerning these factors are presented. The method has been implemented, and the results of a demonstration problem are presented.

  2. A Dimensional Analysis of College Student Satisfaction.

    ERIC Educational Resources Information Center

    Betz, Ellen L.; And Others

    Further research on the College Student Satisfaction Questionnaire (CSSQ) is reported herein (see TM 000 049). Item responses of two groups of university students were separately analyzed by three different factor analytic methods. Three factors consistently appeared across groups and methods: Compensation, Social Life, and Working Conditions. Two…

  3. Spectral method for the correction of the Cerenkov light effect in plastic scintillation detectors: A comparison study of calibration procedures and validation in Cerenkov light-dominated situations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guillot, Mathieu; Gingras, Luc; Archambault, Louis

    2011-04-15

    Purpose: The purposes of this work were: (1) To determine if a spectral method can accurately correct the Cerenkov light effect in plastic scintillation detectors (PSDs) for situations where the Cerenkov light is dominant over the scintillation light and (2) to develop a procedural guideline for accurately determining the calibration factors of PSDs. Methods: The authors demonstrate, by using the equations of the spectral method, that the condition for accurately correcting the effect of Cerenkov light is that the ratio of the two calibration factors must be equal to the ratio of the Cerenkov light measured within the two different spectral regions used for analysis. Based on this proof, the authors propose two new procedures to determine the calibration factors of PSDs, which were designed to respect this condition. A PSD that consists of a cylindrical polystyrene scintillating fiber (1.6 mm³) coupled to a plastic optical fiber was calibrated by using these new procedures and the two reference procedures described in the literature. To validate the extracted calibration factors, relative dose profiles and output factors for a 6 MV photon beam from a medical linac were measured with the PSD and an ionization chamber. Emphasis was placed on situations where the Cerenkov light is dominant over the scintillation light and on situations dissimilar to the calibration conditions. Results: The authors found that the accuracy of the spectral method depends on the procedure used to determine the calibration factors of the PSD and on the attenuation properties of the optical fiber used. The results from the relative dose profile measurements showed that the spectral method can correct the Cerenkov light effect with an accuracy level of 1%. 
The results obtained also indicate that PSDs measure output factors that are lower than those measured with ionization chambers for square field sizes larger than 25×25 cm², in general agreement with previously published Monte Carlo results. Conclusions: The authors conclude that the spectral method can be used to accurately correct the Cerenkov light effect in PSDs. The authors confirmed the importance of maximizing the difference of Cerenkov light production between calibration measurements. The authors also found that the attenuation of the optical fiber, which is assumed to be constant in the original formulation of the spectral method, may cause a variation of the calibration factors in some experimental setups.

  4. Method for factor analysis of GC/MS data

    DOEpatents

    Van Benthem, Mark H; Kotula, Paul G; Keenan, Michael R

    2012-09-11

    The method of the present invention provides a fast, robust, and automated multivariate statistical analysis of gas chromatography/mass spectroscopy (GC/MS) data sets. The method can involve systematic elimination of undesired, saturated peak masses to yield data that follow a linear, additive model. The cleaned data can then be subjected to a combination of PCA and orthogonal factor rotation followed by refinement with MCR-ALS to yield highly interpretable results.

  5. Much More than Model Fitting? Evidence for the Heritability of Method Effect Associated with Positively Worded Items of the Life Orientation Test Revised

    ERIC Educational Resources Information Center

    Alessandri, Guido; Vecchione, Michele; Fagnani, Corrado; Bentler, Peter M.; Barbaranelli, Claudio; Medda, Emanuela; Nistico, Lorenza; Stazi, Maria Antonietta; Caprara, Gian Vittorio

    2010-01-01

    When a self-report instrument includes a balanced number of positively and negatively worded items, factor analysts often use method effect factors to aid model fitting. One of the most widely investigated sources of method effects stems from the respondent tendencies to agree with an item regardless of its content. The nature of these effects,…

  6. Chemometrics Methods for Specificity, Authenticity and Traceability Analysis of Olive Oils: Principles, Classifications and Applications

    PubMed Central

    Messai, Habib; Farman, Muhammad; Sarraj-Laabidi, Abir; Hammami-Semmar, Asma; Semmar, Nabil

    2016-01-01

    Background. Olive oils (OOs) show high chemical variability due to several factors of genetic, environmental and anthropic types. Genetic and environmental factors are responsible for natural compositions and polymorphic diversification resulting in different varietal patterns and phenotypes. Anthropic factors, however, are at the origin of different blends’ preparation leading to normative, labelled or adulterated commercial products. Control of complex OO samples requires their (i) characterization by specific markers; (ii) authentication by fingerprint patterns; and (iii) monitoring by traceability analysis. Methods. These quality control and management aims require the use of several multivariate statistical tools: specificity highlighting requires ordination methods; authentication checking calls for classification and pattern recognition methods; traceability analysis implies the use of network-based approaches able to separate or extract mixed information and memorized signals from complex matrices. Results. This chapter presents a review of different chemometrics methods applied to the control of OO variability based on measured metabolic and physico-chemical characteristics. The different chemometrics methods are illustrated by case studies on monovarietal and blended OOs originating from different countries. Conclusion. Chemometrics tools offer multiple ways for quantitative evaluation and qualitative control of the complex chemical variability of OO in relation to several intrinsic and extrinsic factors. PMID:28231172

  7. 10 CFR 430.24 - Units to be tested.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the method includes an ARM/simulation adjustment factor(s), determine the value(s) of the factors(s... process. (v) If request for approval is for an updated ARM, manufacturers must identify modifications made to the ARM since the last submittal, including any ARM/simulation adjustment factor(s) added since...

  8. A Review of CEFA Software: Comprehensive Exploratory Factor Analysis Program

    ERIC Educational Resources Information Center

    Lee, Soon-Mook

    2010-01-01

    CEFA 3.02(Browne, Cudeck, Tateneni, & Mels, 2008) is a factor analysis computer program designed to perform exploratory factor analysis. It provides the main properties that are needed for exploratory factor analysis, namely a variety of factoring methods employing eight different discrepancy functions to be minimized to yield initial…

  9. Comparison of Two- and Three-Dimensional Methods for Analysis of Trunk Kinematic Variables in the Golf Swing.

    PubMed

    Smith, Aimée C; Roberts, Jonathan R; Wallace, Eric S; Kong, Pui; Forrester, Stephanie E

    2016-02-01

    Two-dimensional methods have been used to compute trunk kinematic variables (flexion/extension, lateral bend, axial rotation) and X-factor (difference in axial rotation between trunk and pelvis) during the golf swing. Recent X-factor studies advocated three-dimensional (3D) analysis due to the errors associated with two-dimensional (2D) methods, but this has not been investigated for all trunk kinematic variables. The purpose of this study was to compare trunk kinematic variables and X-factor calculated by 2D and 3D methods to examine how different approaches influenced their profiles during the swing. Trunk kinematic variables and X-factor were calculated for golfers from vectors projected onto the global laboratory planes and from 3D segment angles. Trunk kinematic variable profiles were similar in shape; however, there were statistically significant differences in trunk flexion (-6.5 ± 3.6°) at top of backswing and trunk right-side lateral bend (8.7 ± 2.9°) at impact. Differences between 2D and 3D X-factor (approximately 16°) could largely be explained by projection errors introduced to the 2D analysis through flexion and lateral bend of the trunk and pelvis segments. The results support the need to use a 3D method for kinematic data calculation to accurately analyze the golf swing.

  10. Correcting the Relative Bias of Light Obscuration and Flow Imaging Particle Counters.

    PubMed

    Ripple, Dean C; Hu, Zhishang

    2016-03-01

    Industry and regulatory bodies desire more accurate methods for counting and characterizing particles. Measurements of proteinaceous-particle concentrations by light obscuration and flow imaging can differ by factors of ten or more. We propose methods to correct the diameters reported by light obscuration and flow imaging instruments. For light obscuration, diameters were rescaled based on characterization of the refractive index of typical particles and a light scattering model for the extinction efficiency factor. The light obscuration models are applicable for either homogeneous materials (e.g., silicone oil) or for chemically homogeneous, but spatially non-uniform aggregates (e.g., protein aggregates). For flow imaging, the method relied on calibration of the instrument with silica beads suspended in water-glycerol mixtures. These methods were applied to a silicone-oil droplet suspension and four particle suspensions containing particles produced from heat stressed and agitated human serum albumin, agitated polyclonal immunoglobulin, and abraded ethylene tetrafluoroethylene polymer. All suspensions were measured by two flow imaging and one light obscuration apparatus. Prior to correction, results from the three instruments disagreed by a factor ranging from 3.1 to 48 in particle concentration over the size range from 2 to 20 μm. Bias corrections reduced the disagreement from an average factor of 14 down to an average factor of 1.5. The methods presented show promise in reducing the relative bias between light obscuration and flow imaging.

  11. Approximate method of variational Bayesian matrix factorization/completion with sparse prior

    NASA Astrophysics Data System (ADS)

    Kawasumi, Ryota; Takeda, Koujin

    2018-05-01

    We derive the analytical expression of a matrix factorization/completion solution by the variational Bayes method, under the assumption that the observed matrix is originally the product of low-rank, dense and sparse matrices with additive noise. We assume the prior of the sparse matrix to be a Laplace distribution, taking matrix sparsity into consideration, and then use several approximations to derive the matrix factorization/completion solution. Using our solution, we also numerically evaluate the performance of sparse-matrix reconstruction in matrix factorization and of missing-element recovery in matrix completion.
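    The Laplace (sparse) prior is what drives small entries exactly to zero; in MAP and proximal treatments it enters as the soft-thresholding operator sketched below. The paper's variational-Bayes updates differ in detail, so this is only an illustration of the shrinkage behaviour a Laplace prior induces.

    ```python
    def soft_threshold(x, lam):
        """Proximal operator of lam * |x| (the shrinkage a Laplace prior
        induces in MAP estimation): entries inside [-lam, lam] become
        exactly zero, larger ones are shrunk toward zero by lam."""
        if x > lam:
            return x - lam
        if x < -lam:
            return x + lam
        return 0.0

    # Small entries are zeroed, large ones shrunk: this is how sparsity
    # emerges in the estimated sparse-matrix component.
    row = [0.05, -1.3, 0.6, -0.02]
    sparse_row = [soft_threshold(v, 0.1) for v in row]
    ```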

  12. Impact of the Fano Factor on Position and Energy Estimation in Scintillation Detectors.

    PubMed

    Bora, Vaibhav; Barrett, Harrison H; Jha, Abhinav K; Clarkson, Eric

    2015-02-01

    The Fano factor for an integer-valued random variable is defined as the ratio of its variance to its mean. Light from various scintillation crystals has been reported to have Fano factors ranging from sub-Poisson (Fano factor < 1) to super-Poisson (Fano factor > 1). For a given mean, a smaller Fano factor implies a smaller variance and thus less noise. We investigated whether lower noise in the scintillation light results in better spatial and energy resolutions. The impact of the Fano factor on the estimation of the position of interaction and the energy deposited in simple gamma-camera geometries is estimated by two methods: calculating the Cramér-Rao bound and estimating the variance of a maximum likelihood estimator. The methods are consistent with each other and indicate that when estimating the position of interaction and energy deposited by a gamma-ray photon, the Fano factor of a scintillator does not affect the spatial resolution. A smaller Fano factor results in a better energy resolution.
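    The defining ratio is straightforward to compute from photon-count samples; the count data below are invented for illustration.

    ```python
    def fano_factor(counts):
        """Fano factor of integer-valued samples: (population) variance
        divided by the mean. < 1 is sub-Poisson (narrower than a Poisson
        distribution of the same mean), > 1 is super-Poisson."""
        n = len(counts)
        mean = sum(counts) / n
        var = sum((c - mean) ** 2 for c in counts) / n
        return var / mean

    # Invented photon-count sample; much narrower than Poisson, so the
    # Fano factor comes out well below 1 (here 0.5 / 10 = 0.05).
    f = fano_factor([9, 10, 11, 10, 9, 11, 10, 10])
    ```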

  13. Determination of the reference air kerma rate for 192Ir brachytherapy sources and the related uncertainty.

    PubMed

    van Dijk, Eduard; Kolkman-Deurloo, Inger-Karine K; Damen, Patricia M G

    2004-10-01

    Different methods exist to determine the air kerma calibration factor of an ionization chamber for the spectrum of a 192Ir high-dose-rate (HDR) or pulsed-dose-rate (PDR) source. An analysis of two methods to obtain such a calibration factor was performed: (i) the method recommended by [Goetsch et al., Med. Phys. 18, 462-467 (1991)] and (ii) the method employed by the Dutch national standards institute NMi [Petersen et al., Report S-EI-94.01 (NMi, Delft, The Netherlands, 1994)]. This analysis showed a systematic difference on the order of 1% in the determination of the strength of 192Ir HDR and PDR sources depending on the method used for determining the air kerma calibration factor. The definitive significance of the difference between these methods can only be addressed after performing an accurate analysis of the associated uncertainties. For an NE 2561 (or equivalent) ionization chamber and an in-air jig, a typical uncertainty budget of 0.94% was found with the NMi method. The largest contribution in the type-B uncertainty is the uncertainty in the air kerma calibration factor for isotope i, N(i)k, as determined by the primary or secondary standards laboratories. This uncertainty is dominated by the uncertainties in the physical constants for the average mass-energy absorption coefficient ratio and the stopping power ratios. This means that it is not foreseeable that the standards laboratories can decrease the uncertainty in the air kerma calibration factors for ionization chambers in the short term. When the results of the determination of the 192Ir reference air kerma rates in, e.g., different institutes are compared, the uncertainties in the physical constants are the same. To compare the applied techniques, the ratio of the results can be judged by leaving out the uncertainties due to these physical constants. In that case an uncertainty budget of 0.40% (coverage factor=2) should be taken into account. 
Due to the differences in approach between the method used by NMi and the method recommended by Goetsch et al., an extra type-B uncertainty of 0.9% (k= 1) has to be taken into account when the method of Goetsch et al. is applied. Compared to the uncertainty of 1% (k= 2) found for the air calibration of 192Ir, the difference of 0.9% found is significant.

  14. Strain intensity factor approach for predicting the strength of continuously reinforced metal matrix composites

    NASA Technical Reports Server (NTRS)

    Poe, Clarence C., Jr.

    1989-01-01

A method was previously developed to predict the fracture toughness (stress intensity factor at failure) of composites in terms of the elastic constants and the tensile failing strain of the fibers. The method was applied to boron/aluminum composites made with various proportions of 0 deg and +/- 45 deg plies. Predicted values of fracture toughness were in gross error because widespread yielding of the aluminum matrix made the compliance very nonlinear. An alternate method was developed to predict the strain intensity factor at failure rather than the stress intensity factor, because the singular strain field was not affected by yielding as much as the stress field. Far-field strains at failure were calculated from the strain intensity factor, and then strengths were calculated from the far-field strains using uniaxial stress-strain curves. The predicted strengths were in good agreement with experimental values, even for the very nonlinear laminates that contained only +/- 45 deg plies. This approach should be valid for other metal matrix composites that have continuous fibers.

  15. Complex amplitude reconstruction for dynamic beam quality M2 factor measurement with self-referencing interferometer wavefront sensor.

    PubMed

    Du, Yongzhao; Fu, Yuqing; Zheng, Lixin

    2016-12-20

A real-time complex amplitude reconstruction method for determining the dynamic beam quality M2 factor based on a Mach-Zehnder self-referencing interferometer wavefront sensor is developed. By using the proposed complex amplitude reconstruction method, full characterization of the laser beam, including amplitude (intensity profile) and phase information, can be reconstructed from a single interference pattern with the Fourier fringe pattern analysis method in a one-shot measurement. With the reconstructed complex amplitude, the beam fields at any position z along the propagation direction can be obtained by utilizing diffraction integral theory. The beam quality M2 factor of the dynamic beam is then calculated according to the method specified in the ISO 11146 standard. The feasibility of the proposed method is demonstrated by theoretical analysis and experiment, covering both static and dynamic beam processes. The method is simple, fast, operates without movable parts, and allows investigation of laser beams under conditions inaccessible to existing methods.

  16. Factor Structure of the Penn State Worry Questionnaire: Examination of a Method Factor

    ERIC Educational Resources Information Center

    Hazlett-Stevens, Holly; Ullman, Jodie B.; Craske, Michelle G.

    2004-01-01

    The Penn State Worry Questionnaire (PSWQ) was originally designed as a unifactorial measure of pathological trait worry. However, recent studies supported a two-factor solution with positively worded items loading on the first factor and reverse-scored items loading on a second factor. The current study compared this two-factor model to a negative…

  17. Scale-Free Nonparametric Factor Analysis: A User-Friendly Introduction with Concrete Heuristic Examples.

    ERIC Educational Resources Information Center

    Mittag, Kathleen Cage

    Most researchers using factor analysis extract factors from a matrix of Pearson product-moment correlation coefficients. A method is presented for extracting factors in a non-parametric way, by extracting factors from a matrix of Spearman rho (rank correlation) coefficients. It is possible to factor analyze a matrix of association such that…
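The rank-based extraction this abstract describes can be sketched in a few lines: rank-transform each variable, take the Pearson correlation of the ranks (which is Spearman's rho), and pull factors from that matrix. This toy illustration uses synthetic data and a plain eigen-decomposition for the unrotated loadings, which is a simplification, not the paper's full procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def spearman_matrix(X):
    """Spearman rho matrix = Pearson correlation of rank-transformed columns.
    (Double argsort gives 0..n-1 ranks; ties are negligible for continuous data.)"""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
    return np.corrcoef(ranks, rowvar=False)

def extract_factors(R, n_factors):
    """Unrotated loadings from the top eigenpairs of the association matrix."""
    vals, vecs = np.linalg.eigh(R)
    order = np.argsort(vals)[::-1][:n_factors]
    return vecs[:, order] * np.sqrt(vals[order])

# Synthetic data: 6 variables driven by one latent factor plus noise.
latent = rng.normal(size=(500, 1))
X = latent @ np.ones((1, 6)) + 0.5 * rng.normal(size=(500, 6))

R = spearman_matrix(X)
loadings = extract_factors(R, n_factors=1)   # all 6 variables load strongly
```

Because the factoring step only sees the association matrix, swapping Pearson correlations for Spearman rho is all that makes the analysis nonparametric, which is the point the abstract makes.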

  18. Stress intensity factors for surface and corner cracks emanating from a wedge-loaded hole

    NASA Technical Reports Server (NTRS)

    Zhao, W.; Sutton, M. A.; Shivakumar, K. N.; Newman, J. C., Jr.

    1994-01-01

    To assist analysis of riveted lap joints, stress intensity factors are determined for surface and corner cracks emanating from a wedge-loaded hole by using a 3-D weight function method in conjunction with a 3-D finite element method. A stress intensity factor equation for surface cracks is also developed to provide a closed-form solution. The equation covers commonly-encountered geometrical ranges and retains high accuracy over the entire range.

  19. Compressed sensing for rapid late gadolinium enhanced imaging of the left atrium: A preliminary study.

    PubMed

    Kamesh Iyer, Srikant; Tasdizen, Tolga; Burgon, Nathan; Kholmovski, Eugene; Marrouche, Nassir; Adluru, Ganesh; DiBella, Edward

    2016-09-01

Current late gadolinium enhancement (LGE) imaging of left atrial (LA) scar or fibrosis is relatively slow and requires 5-15 min to acquire an undersampled (R=1.7) 3D navigated dataset. The GeneRalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) based parallel imaging method is the current clinical standard for accelerating 3D LGE imaging of the LA and permits an acceleration factor ~R=1.7. Two compressed sensing (CS) methods have been developed to achieve higher acceleration factors: a patch based collaborative filtering technique tested with acceleration factor R~3, and a technique that uses a 3D radial stack-of-stars acquisition pattern (R~1.8) with a 3D total variation constraint. The long reconstruction time of these CS methods makes them unwieldy to use, especially the patch based collaborative filtering technique. In addition, the effect of CS techniques on the quantification of percentage of scar/fibrosis is not known. We sought to develop a practical compressed sensing method for imaging the LA at high acceleration factors. In order to develop a clinically viable method with short reconstruction time, a Split Bregman (SB) reconstruction method with 3D total variation (TV) constraints was developed and implemented. The method was tested on 8 atrial fibrillation patients (4 pre-ablation and 4 post-ablation datasets). Blur metric, normalized mean squared error and peak signal to noise ratio were used as metrics to analyze the quality of the reconstructed images. Quantification of the extent of LGE was performed on the undersampled images and compared with the fully sampled images. Quantification of scar from post-ablation datasets and quantification of fibrosis from pre-ablation datasets showed that acceleration factors up to R~3.5 gave good 3D LGE images of the LA wall, using a 3D TV constraint and constrained SB methods. This corresponds to reducing the scan time by half, compared to currently used GRAPPA methods. 
Reconstruction of 3D LGE images using the SB method was over 20 times faster than standard gradient descent methods.

  20. Evaluation of gene expression classification studies: factors associated with classification performance.

    PubMed

    Novianti, Putri W; Roes, Kit C B; Eijkemans, Marinus J C

    2014-01-01

    Classification methods used in microarray studies for gene expression are diverse in the way they deal with the underlying complexity of the data, as well as in the technique used to build the classification model. The MAQC II study on cancer classification problems has found that performance was affected by factors such as the classification algorithm, cross validation method, number of genes, and gene selection method. In this paper, we study the hypothesis that the disease under study significantly determines which method is optimal, and that additionally sample size, class imbalance, type of medical question (diagnostic, prognostic or treatment response), and microarray platform are potentially influential. A systematic literature review was used to extract the information from 48 published articles on non-cancer microarray classification studies. The impact of the various factors on the reported classification accuracy was analyzed through random-intercept logistic regression. The type of medical question and method of cross validation dominated the explained variation in accuracy among studies, followed by disease category and microarray platform. In total, 42% of the between study variation was explained by all the study specific and problem specific factors that we studied together.

  1. Chemometrics Methods for Specificity, Authenticity and Traceability Analysis of Olive Oils: Principles, Classifications and Applications.

    PubMed

    Messai, Habib; Farman, Muhammad; Sarraj-Laabidi, Abir; Hammami-Semmar, Asma; Semmar, Nabil

    2016-11-17

Olive oils (OOs) show high chemical variability due to several factors of genetic, environmental and anthropic types. Genetic and environmental factors are responsible for natural compositions and polymorphic diversification resulting in different varietal patterns and phenotypes. Anthropic factors, however, are at the origin of different blends' preparation leading to normative, labelled or adulterated commercial products. Control of complex OO samples requires their (i) characterization by specific markers; (ii) authentication by fingerprint patterns; and (iii) monitoring by traceability analysis. These quality control and management aims require the use of several multivariate statistical tools: specificity highlighting requires ordination methods; authentication checking calls for classification and pattern recognition methods; traceability analysis implies the use of network-based approaches able to separate or extract mixed information and memorized signals from complex matrices. This chapter reviews different chemometrics methods applied to the control of OO variability based on measured metabolic and physical-chemical characteristics. The different chemometrics methods are illustrated by case studies on monovarietal and blended OOs originating from different countries. Chemometrics tools offer multiple ways for quantitative evaluation and qualitative control of the complex chemical variability of OO in relation to several intrinsic and extrinsic factors.

  2. Understanding the Impact of School Factors on School Counselor Burnout: A Mixed-Methods Study

    ERIC Educational Resources Information Center

    Bardhoshi, Gerta; Schweinle, Amy; Duncan, Kelly

    2014-01-01

    This mixed-methods study investigated the relationship between burnout and performing noncounseling duties among a national sample of professional school counselors, while identifying school factors that could attenuate this relationship. Results of regression analyses indicate that performing noncounseling duties significantly predicted burnout…

  3. Stored grain pack factors for wheat: comparison of three methods to field measurements

    USDA-ARS?s Scientific Manuscript database

    Storing grain in bulk storage units results in grain packing from overbearing pressure, which increases grain bulk density and storage-unit capacity. This study compared pack factors of hard red winter (HRW) wheat in vertical storage bins using different methods: the existing packing model (WPACKING...

  4. A Qualitative Study on Organizational Factors Affecting Occupational Accidents

    PubMed Central

    ESKANDARI, Davood; JAFARI, Mohammad Javad; MEHRABI, Yadollah; KIAN, Mostafa Pouya; CHARKHAND, Hossein; MIRGHOTBI, Mostafa

    2017-01-01

    Background: Technical, human, operational and organizational factors have been influencing the sequence of occupational accidents. Among them, organizational factors play a major role in causing occupational accidents. The aim of this research was to understand the Iranian safety experts’ experiences and perception of organizational factors. Methods: This qualitative study was conducted in 2015 by using the content analysis technique. Data were collected through semi-structured interviews with 17 safety experts working in Iranian universities and industries and analyzed with a conventional qualitative content analysis method using the MAXQDA software. Results: Eleven organizational factors’ sub-themes were identified: management commitment, management participation, employee involvement, communication, blame culture, education and training, job satisfaction, interpersonal relationship, supervision, continuous improvement, and reward system. The participants considered these factors as effective on occupational accidents. Conclusion: The mentioned 11 organizational factors are probably involved in occupational accidents in Iran. Naturally, improving organizational factors can increase the safety performance and reduce occupational accidents. PMID:28435824

  5. Mining nutrigenetics patterns related to obesity: use of parallel multifactor dimensionality reduction.

    PubMed

    Karayianni, Katerina N; Grimaldi, Keith A; Nikita, Konstantina S; Valavanis, Ioannis K

    2015-01-01

This paper aims to elucidate the complex etiology of obesity by analysing data from a large nutrigenetics study, in which nutritional and genetic factors associated with obesity were recorded for around two thousand individuals. In our previous work, these data were analysed using artificial neural network methods, which identified optimised subsets of factors for predicting obesity status. However, these methods did not reveal how the selected factors interact with each other in the resulting predictive models. For that reason, parallel Multifactor Dimensionality Reduction (pMDR) was used here to further analyse the pre-selected subsets of nutrigenetic factors. Within pMDR, predictive models using up to eight factors were constructed, further reducing the input dimensionality, while rules describing the interactive effects of the selected factors were derived. In this way, it was possible to identify specific genetic variations and their interactive effects with particular nutritional factors, which are now under further study.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.

Here, we present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography–mass spectrometry methods. We specifically apply this new analysis method to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol samples' sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Then two-dimensional PMF can effectively do three-dimensional factorization on the three-dimensional TAG mass spectra data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single compound identification and integration. 
In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular level resolution on other bulk aerosol components commonly observed by the AMS.

  7. A new shielding calculation method for X-ray computed tomography regarding scattered radiation.

    PubMed

    Watanabe, Hiroshi; Noto, Kimiya; Shohji, Tomokazu; Ogawa, Yasuyoshi; Fujibuchi, Toshioh; Yamaguchi, Ichiro; Hiraki, Hitoshi; Kida, Tetsuo; Sasanuma, Kazutoshi; Katsunuma, Yasushi; Nakano, Takurou; Horitsugi, Genki; Hosono, Makoto

    2017-06-01

The goal of this study is to develop a more appropriate shielding calculation method for computed tomography (CT) in comparison with the Japanese conventional (JC) method and the National Council on Radiation Protection and Measurements (NCRP)-dose length product (DLP) method. Scattered dose distributions were measured in a CT room with 18 scanners (16 scanners in the case of the JC method) for one week during routine clinical use. The radiation doses were calculated for the same period using the JC and NCRP-DLP methods. The mean (NCRP-DLP-calculated dose)/(measured dose) ratios in each direction ranged from 1.7 ± 0.6 to 55 ± 24 (mean ± standard deviation). The NCRP-DLP method underestimated the dose at 3.4% of measurement points in less-shielded directions without the gantry and a subject, and the minimum (NCRP-DLP-calculated dose)/(measured dose) ratio was 0.6. The reduction factors were 0.036 ± 0.014 and 0.24 ± 0.061 for the gantry and couch directions, respectively. The (JC-calculated dose)/(measured dose) ratios ranged from 11 ± 8.7 to 404 ± 340. The air kerma scatter factor κ is expected to be twice as high as that calculated with the NCRP-DLP method and the reduction factors are expected to be 0.1 and 0.4 for the gantry and couch directions, respectively. We, therefore, propose a more appropriate method, the Japanese-DLP method, which resolves the issues of possible underestimation of the scattered radiation and overestimation of the reduction factors in the gantry and couch directions.

  8. Constrained Response Surface Optimisation and Taguchi Methods for Precisely Atomising Spraying Process

    NASA Astrophysics Data System (ADS)

    Luangpaiboon, P.; Suwankham, Y.; Homrossukon, S.

    2010-10-01

This research presents a design-of-experiment technique for quality improvement in the automotive manufacturing industry. The quality characteristic of interest is the colour shade, a key feature of the vehicles' exterior appearance. With a low first-time-quality percentage, the manufacturer has incurred substantial rework costs as well as longer production times. To resolve this problem permanently, the spraying condition must be optimized precisely. Therefore, this work applies full factorial design, multiple regression, constrained response surface optimization methods (CRSOM), and Taguchi's method to investigate the significant factors and determine the optimum factor levels to improve paint-shop quality. Firstly, a 2^k full factorial design was employed to study the effect of five factors: the paint flow rate at the robot setting, the paint levelling agent, the paint pigment, the additive slow solvent, and the non-volatile solid content at spraying of the atomizing spraying machine. The colour-shade responses at 15 and 45 degrees were measured using a spectrophotometer. Regression models of colour shade at both angles were then developed from the significant factors affecting each response. Consequently, both regression models were placed into a linear programming form to maximize the colour shade subject to three main factors: the pigment, the additive solvent and the flow rate. Finally, Taguchi's method was applied to determine the proper levels of the key variable factors to achieve the target mean colour shade; the non-volatile solid content emerged as one additional factor at this stage. The proper factor levels from both experimental design methods were then used to set up a confirmation experiment. 
It was found that the colour shades, at both the 15 and 45 degree measurement angles of the spectrophotometer, were close to the target, and the defect level at the quality gate was reduced from 0.35 WDPV to 0.10 WDPV. This shows that the objective of the research was met and that the procedure can serve as quality improvement guidance for automotive paint shops.
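The main-effects step of a two-level full factorial analysis like the one described above can be sketched with a coded design matrix. The response below is synthetic and the dominant factors are assumed purely for illustration; this is not the study's data:

```python
import itertools
import numpy as np

# Coded (-1/+1) two-level design for the five factors named in the abstract:
# flow rate, levelling agent, pigment, slow solvent, non-volatile solid.
factors = ["flow", "level_agent", "pigment", "solvent", "nv_solid"]
design = np.array(list(itertools.product([-1, 1], repeat=5)), dtype=float)

# Synthetic colour-shade response: pigment and flow rate dominate (assumed).
rng = np.random.default_rng(2)
y = 10 + 2.0 * design[:, 2] + 1.0 * design[:, 0] + 0.1 * rng.normal(size=len(design))

# Main effect of each factor = mean(y at +1 level) - mean(y at -1 level).
effects = {name: y[design[:, i] == 1].mean() - y[design[:, i] == -1].mean()
           for i, name in enumerate(factors)}
```

The 32-run design is orthogonal, so each main effect is estimated independently; factors whose effects are indistinguishable from the noise floor would be screened out before the regression and optimization stages the abstract describes.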

  9. Informative priors on fetal fraction increase power of the noninvasive prenatal screen.

    PubMed

    Xu, Hanli; Wang, Shaowei; Ma, Lin-Lin; Huang, Shuai; Liang, Lin; Liu, Qian; Liu, Yang-Yang; Liu, Ke-Di; Tan, Ze-Min; Ban, Hao; Guan, Yongtao; Lu, Zuhong

    2017-11-09

Purpose: Noninvasive prenatal screening (NIPS) sequences a mixture of the maternal and fetal cell-free DNA. Fetal trisomy can be detected by examining chromosomal dosages estimated from sequencing reads. The traditional method uses the Z-test, which compares a subject against a set of euploid controls, where the information of fetal fraction is not fully utilized. Here we present a Bayesian method that leverages informative priors on the fetal fraction. Method: Our Bayesian method combines the Z-test likelihood and informative priors of the fetal fraction, which are learned from the sex chromosomes, to compute Bayes factors. The Bayesian framework can account for nongenetic risk factors through the prior odds, and our method can report individual positive/negative predictive values. Results: Our Bayesian method has more power than the Z-test method. We analyzed 3,405 NIPS samples and spotted at least 9 (of 51) possible Z-test false positives. Conclusion: Bayesian NIPS is more powerful than the Z-test method, is able to account for nongenetic risk factors through prior odds, and can report individual positive/negative predictive values. Genetics in Medicine advance online publication, 9 November 2017; doi:10.1038/gim.2017.186.
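The idea of averaging the alternative likelihood over a fetal-fraction prior can be sketched as a toy Bayes factor. Everything here is an assumption for illustration: the Beta prior, the `shift_per_ff` scaling of the dosage z-score, and the unit-variance likelihoods are not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(3)

def normal_pdf(x, mu, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def bayes_factor(z, ff_draws, shift_per_ff=20.0):
    """Toy Bayes factor: under trisomy the dosage z-score is shifted by an
    amount proportional to fetal fraction (shift_per_ff is hypothetical);
    the alternative likelihood is averaged over prior draws."""
    like_alt = normal_pdf(z, mu=shift_per_ff * ff_draws).mean()
    like_null = normal_pdf(z, mu=0.0)
    return like_alt / like_null

# Informative prior on fetal fraction (in the paper, learned from the sex
# chromosomes); a Beta(2, 18) prior with mean ~0.10 is assumed here.
ff_prior = rng.beta(2, 18, size=10_000)

bf_high = bayes_factor(z=3.0, ff_draws=ff_prior)   # z consistent with trisomy
bf_low = bayes_factor(z=0.0, ff_draws=ff_prior)    # z consistent with euploid
```

Multiplying such a Bayes factor by prior odds (which can encode nongenetic risk factors) yields posterior odds, which is how the framework reports individual predictive values.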

  10. Using the Reliability Theory for Assessing the Decision Confidence Probability for Comparative Life Cycle Assessments.

    PubMed

    Wei, Wei; Larrey-Lassalle, Pyrène; Faure, Thierry; Dumoulin, Nicolas; Roux, Philippe; Mathias, Jean-Denis

    2016-03-01

The comparative decision-making process is widely used to identify which option (system, product, service, etc.) has the smaller environmental footprint and to provide recommendations that help stakeholders take future decisions. However, uncertainty complicates the comparison and the decision making. Probability-based decision support in LCA is a way to help stakeholders in their decision-making process. It calculates the decision confidence probability, which expresses the probability that one option has a smaller environmental impact than another. Here we apply reliability theory to approximate the decision confidence probability. We compare the traditional Monte Carlo method with a reliability method called FORM (first-order reliability method). The Monte Carlo method needs high computational time to calculate the decision confidence probability. The FORM method enables us to approximate the decision confidence probability with fewer simulations than the Monte Carlo method by approximating the response surface. Moreover, the FORM method calculates the associated importance factors, which correspond to a sensitivity analysis with respect to the probability. The importance factors allow stakeholders to determine which factors influence their decision. Our results clearly show that the reliability method provides additional useful information to stakeholders while reducing the computational time.
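The Monte Carlo baseline that FORM is compared against is simple to state: draw both options' impacts from their uncertainty distributions and count how often option A beats option B. A minimal sketch with hypothetical lognormal impact distributions (lognormal is a common choice for LCA uncertainty, but the parameters here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)

def decision_confidence_mc(sample_a, sample_b):
    """Monte Carlo estimate of P(impact_A < impact_B): the probability
    that option A has the smaller environmental impact."""
    return np.mean(sample_a < sample_b)

# Hypothetical impact distributions for two options being compared.
impact_a = rng.lognormal(mean=1.0, sigma=0.2, size=100_000)
impact_b = rng.lognormal(mean=1.2, sigma=0.2, size=100_000)

confidence = decision_confidence_mc(impact_a, impact_b)
```

FORM replaces this brute-force count with a search for the most probable point on the surface impact_A = impact_B, which is why it needs far fewer model evaluations and yields importance factors as a by-product.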

  11. Regulation of galactan synthase expression to modify galactan content in plants

    DOEpatents

    None

    2017-08-22

The disclosure provides methods of engineering plants to modulate galactan content. Specifically, the disclosure provides methods for engineering a plant to increase the galactan content in a plant tissue by inducing expression of beta-1,4-galactan synthase (GALS) under the control of a heterologous promoter. Further disclosed are methods of modulating the expression level of GALS under the regulation of a transcription factor, as well as overexpression of UDP-galactose epimerase in the same plant tissue. Tissue-specific promoters and transcription factors that can be used in the methods are also provided.

  12. Gene Ranking of RNA-Seq Data via Discriminant Non-Negative Matrix Factorization.

    PubMed

    Jia, Zhilong; Zhang, Xiang; Guan, Naiyang; Bo, Xiaochen; Barnes, Michael R; Luo, Zhigang

    2015-01-01

RNA-sequencing is rapidly becoming the method of choice for studying the full complexity of transcriptomes; however, with increasing dimensionality, accurate gene ranking is becoming increasingly challenging. This paper proposes an accurate and sensitive gene ranking method that implements discriminant non-negative matrix factorization (DNMF) for RNA-seq data. To the best of our knowledge, this is the first work to explore the utility of DNMF for gene ranking. When incorporating Fisher's discriminant criteria and setting the reduced dimension as two, DNMF learns two factors to approximate the original gene expression data, abstracting the up-regulated or down-regulated metagene by using the sample label information. The first factor denotes all the genes' weights of two metagenes as the additive combination of all genes, while the second learned factor represents the expression values of two metagenes. In the gene ranking stage, all the genes are ranked as a descending sequence according to the differential values of the metagene weights. Leveraging the nature of NMF and Fisher's criterion, DNMF can robustly boost the gene ranking performance. The Area Under the Curve analysis of differential expression analysis on two benchmarking tests of four RNA-seq data sets with similar phenotypes showed that our proposed DNMF-based gene ranking method outperforms other widely used methods. Moreover, Gene Set Enrichment Analysis also showed that DNMF outperforms the others. DNMF is also computationally efficient, substantially outperforming all other benchmarked methods. Consequently, we suggest DNMF is an effective method for the analysis of differential gene expression and gene ranking for RNA-seq data.
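The factorize-then-rank pipeline can be sketched with plain rank-2 NMF via Lee-Seung multiplicative updates. This is a deliberate simplification of the paper's DNMF (it omits the Fisher discriminant term and the label information), on synthetic data where five genes are up-regulated in half the samples:

```python
import numpy as np

rng = np.random.default_rng(5)

def nmf(V, rank=2, n_iter=500, eps=1e-9):
    """Plain NMF by multiplicative updates (Lee-Seung); a simplification of
    the paper's discriminant NMF, which adds Fisher's criterion."""
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic expression matrix (genes x samples): genes 0-4 are strongly
# up-regulated in the second half of the samples (hypothetical data).
V = rng.random((50, 20))
V[:5, 10:] += 5.0

W, H = nmf(V, rank=2)

# Ranking stage: sort genes by the differential of their two metagene
# weights, in descending order, as in the abstract.
ranking = np.argsort(np.abs(W[:, 0] - W[:, 1]))[::-1]
```

With rank two, one metagene tends to absorb the baseline expression and the other the differential block, so the up-regulated genes surface at the top of `ranking`.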

  13. Underlying risk factors for prescribing errors in long-term aged care: a qualitative study.

    PubMed

    Tariq, Amina; Georgiou, Andrew; Raban, Magdalena; Baysari, Melissa Therese; Westbrook, Johanna

    2016-09-01

    To identify system-related risk factors perceived to contribute to prescribing errors in Australian long-term care settings, that is, residential aged care facilities (RACFs). The study used qualitative methods to explore factors that contribute to unsafe prescribing in RACFs. Data were collected at three RACFs in metropolitan Sydney, Australia between May and November 2011. Participants included RACF managers, doctors, pharmacists and RACF staff actively involved in prescribing-related processes. Methods included non-participant observations (74 h), in-depth semistructured interviews (n=25) and artefact analysis. Detailed process activity models were developed for observed prescribing episodes supplemented by triangulated analysis using content analysis methods. System-related factors perceived to increase the risk of prescribing errors in RACFs were classified into three overarching themes: communication systems, team coordination and staff management. Factors associated with communication systems included limited point-of-care access to information, inadequate handovers, information storage across different media (paper, electronic and memory), poor legibility of charts, information double handling, multiple faxing of medication charts and reliance on manual chart reviews. Team factors included lack of established lines of responsibility, inadequate team communication and limited participation of doctors in multidisciplinary initiatives like medication advisory committee meetings. Factors related to staff management and workload included doctors' time constraints and their accessibility, lack of trained RACF staff and high RACF staff turnover. The study highlights several system-related factors including laborious methods for exchanging medication information, which often act together to contribute to prescribing errors. 
Multiple interventions (eg, technology systems, team communication protocols) are required to support the collaborative nature of RACF prescribing.

  14. An Assessment Tool of Performance Based Logistics Appropriateness

    DTIC Science & Technology

    2012-03-01

weighted tool score. The reason might be the willingness to use PBL as an acquisition method. An 8.51% positive difference is present. Figure 20 shows...performance-based acquisition methods to the maximum extent practicable when acquiring services with little exclusion' is mandated. Although PBL...determines the factors affecting the success in selecting PBL as an acquisition method. Each factor is examined in detail and built into a spreadsheet tool

  15. Investigation of High-Angle-of-Attack Maneuver-Limiting Factors. Part 1. Analysis and Simulation

    DTIC Science & Technology

    1980-12-01

useful, are not so satisfying or instructive as the more positive identification of causal factors offered by the methods developed in Reference 5...same methods be applied to additional high-performance fighter aircraft having widely differing high AOA handling characteristics to see if further...predictions and the nonlinear model results were resolved. The second task involved development of methods, criteria, and an associated pilot rating scale, for

  16. Development of the Career Anchors Scale among Occupational Health Nurses in Japan

    PubMed Central

    Kubo, Yoshiko; Hatono, Yoko; Kubo, Tomohide; Shimamoto, Satoko; Nakatani, Junko; Burgel, Barbara J.

    2016-01-01

Objectives: This study aimed to develop the Career Anchors Scale among Occupational Health Nurses (CASOHN) and evaluate its reliability and validity. Methods: Scale items were developed through a qualitative inductive analysis of interview data, and items were revised following an examination of content validity by experts and occupational health nurses (OHNs), resulting in a provisional scale of 41 items. A total of 745 OHNs (response rate 45.2%) affiliated with the Japan Society for Occupational Health participated in the self-administered questionnaire survey. Results: Two items were deleted based on item-total correlations. Factor analysis was then conducted on the remaining 39 items to examine construct validity. An exploratory factor analysis with the principal factor method and promax rotation resulted in the extraction of six factors. The variance contribution ratios of the six factors were 37.45, 7.01, 5.86, 4.95, 4.16, and 3.19%. The cumulative contribution ratio was 62.62%. The factors were named as follows: Demonstrating expertise and considering position in work (Factor 1); Management skills for effective work (Factor 2); Supporting health improvement in groups and organizations (Factor 3); Providing employee-focused support (Factor 4); Collaborating with occupational health team members and personnel (Factor 5); and Compatibility of work and private life (Factor 6). The confidence coefficient determined by the split-half method was 0.85. Cronbach's alpha coefficient for the overall scale was 0.95, whereas those of the six subscales were 0.88, 0.90, 0.91, 0.80, 0.85, and 0.79, respectively. Conclusions: CASOHN was found to be valid and reliable for measuring career anchors among OHNs in Japan. PMID:27725484
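
The reliability figures quoted above come from Cronbach's alpha. As a minimal sketch (using simulated item scores, not the CASOHN data), the coefficient can be computed directly from its definition:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative data: 6 items driven by one latent trait plus noise
rng = np.random.default_rng(0)
trait = rng.normal(size=200)
scores = trait[:, None] + 0.5 * rng.normal(size=(200, 6))
alpha = cronbach_alpha(scores)
print(round(alpha, 2))
```

With items this strongly driven by a single trait, alpha lands in the same high range as the subscale values reported above.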

  17. External Factors, Internal Factors and Self-Directed Learning Readiness

    ERIC Educational Resources Information Center

    Ramli, Nurjannah; Muljono, Pudji; Afendi, Farit M.

    2018-01-01

    There are many factors which affect the level of self-directed learning readiness. This study aims to investigate the relationship between external factors, internal factors and self-directed learning readiness. This study was carried out by using a census method for fourth year students of medical program of Tadulako University. Data were…

  18. Application of Response Surface Methods To Determine Conditions for Optimal Genomic Prediction

    PubMed Central

    Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.

    2017-01-01

An epistatic genetic architecture can have a significant impact on the prediction accuracies of genomic prediction (GP) methods. Machine learning methods predict traits with epistatic genetic architectures more accurately than statistical methods based on additive mixed linear models. The differences between these types of GP methods suggest a diagnostic for revealing genetic architectures underlying traits of interest. In addition to genetic architecture, the performance of GP methods may be influenced by the sample size of the training population, the number of QTL, and the proportion of phenotypic variability due to genotypic variability (heritability). Possible values for these factors and the number of combinations of the factor levels that influence the performance of GP methods can be large. Thus, efficient methods for identifying combinations of factor levels that produce the most accurate GPs are needed. Herein, we employ response surface methods (RSMs) to find the experimental conditions that produce the most accurate GPs. We illustrate RSM with an example of simulated doubled haploid populations and identify the combination of factors that maximizes the difference between the prediction accuracies of the best linear unbiased prediction (BLUP) and support vector machine (SVM) GP methods. The greatest impact on the response is due to the genetic architecture of the population, the heritability of the trait, and the sample size. When epistasis is responsible for all of the genotypic variance, heritability is equal to one, and the sample size of the training population is large, the advantage of using the SVM method over the BLUP method is greatest. However, except for values close to the maximum, most of the response surface shows little difference between the methods. We also determined that the conditions resulting in the greatest prediction accuracy for BLUP occurred when the genetic architecture consists solely of additive effects and heritability is equal to one.
PMID:28720710
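
The additive-vs-epistatic contrast the authors exploit can be illustrated with a hedged sketch: ridge regression stands in for an additive BLUP-style model, an RBF support vector machine for the nonlinear learner, and the genotypes and noiseless traits below are simulated for illustration (not the paper's doubled haploid design):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, p = 400, 30
X = rng.integers(0, 3, size=(n, p)).astype(float)   # SNP genotypes coded 0/1/2

# Purely additive trait vs. purely epistatic (pairwise-interaction) trait,
# both noiseless (heritability ~ 1)
w = rng.normal(size=p)
y_add = X @ w
y_epi = X[:, 0] * X[:, 1] + X[:, 2] * X[:, 3] - X[:, 4] * X[:, 5]

def cv_r2(model, y):
    """Cross-validated R^2 as the prediction-accuracy measure."""
    return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

ridge_add = cv_r2(Ridge(alpha=1.0), y_add)   # additive model, additive trait
ridge_epi = cv_r2(Ridge(alpha=1.0), y_epi)   # additive model, epistatic trait
svm_epi = cv_r2(SVR(kernel="rbf", C=10.0), y_epi)
print(ridge_add, ridge_epi, svm_epi)
```

The additive model handles the additive trait nearly perfectly but loses accuracy on the epistatic trait; how much the SVM recovers depends on sample size and tuning, which is exactly the factor space the RSM explores.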

  19. Formal methods and digital systems validation for airborne systems

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1993-01-01

    This report has been prepared to supplement a forthcoming chapter on formal methods in the FAA Digital Systems Validation Handbook. Its purpose is as follows: to outline the technical basis for formal methods in computer science; to explain the use of formal methods in the specification and verification of software and hardware requirements, designs, and implementations; to identify the benefits, weaknesses, and difficulties in applying these methods to digital systems used on board aircraft; and to suggest factors for consideration when formal methods are offered in support of certification. These latter factors assume the context for software development and assurance described in RTCA document DO-178B, 'Software Considerations in Airborne Systems and Equipment Certification,' Dec. 1992.

  20. Applying cognitive developmental psychology to middle school physics learning: The rule assessment method

    NASA Astrophysics Data System (ADS)

    Hallinen, Nicole R.; Chi, Min; Chin, Doris B.; Prempeh, Joe; Blair, Kristen P.; Schwartz, Daniel L.

    2013-01-01

Cognitive developmental psychology often describes children's growing qualitative understanding of the physical world. Physics educators may be able to use the relevant methods to advantage for characterizing changes in students' qualitative reasoning. Siegler developed the "rule assessment" method for characterizing levels of qualitative understanding for two-factor situations (e.g., volume and mass for density). The method assigns children to rule levels that correspond to the degree to which they notice and coordinate the two factors. Here, we provide a brief tutorial plus a demonstration of how we have used this method to evaluate instructional outcomes with middle-school students who learned about torque, projectile motion, and collisions using different instructional methods with simulations.

  1. Methods for analysis of cracks in three-dimensional solids

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Newman, J. C., Jr.

    1984-01-01

    Various analytical and numerical methods used to evaluate the stress intensity factors for cracks in three-dimensional (3-D) solids are reviewed. Classical exact solutions and many of the approximate methods used in 3-D analyses of cracks are reviewed. The exact solutions for embedded elliptic cracks in infinite solids are discussed. The approximate methods reviewed are the finite element methods, the boundary integral equation (BIE) method, the mixed methods (superposition of analytical and finite element method, stress difference method, discretization-error method, alternating method, finite element-alternating method), and the line-spring model. The finite element method with singularity elements is the most widely used method. The BIE method only needs modeling of the surfaces of the solid and so is gaining popularity. The line-spring model appears to be the quickest way to obtain good estimates of the stress intensity factors. The finite element-alternating method appears to yield the most accurate solution at the minimum cost.
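
Among the classical exact solutions reviewed here, the best known is the embedded elliptical crack in an infinite solid under uniform remote tension σ (the Green-Sneddon/Irwin result), whose mode-I stress intensity factor varies along the crack front as

```latex
K_I \;=\; \frac{\sigma\sqrt{\pi b}}{E(k)}
\left( \sin^2\varphi + \frac{b^2}{a^2}\cos^2\varphi \right)^{1/4},
\qquad
E(k) = \int_0^{\pi/2}\!\sqrt{1 - k^2\sin^2\theta}\,\mathrm{d}\theta,
\quad k^2 = 1 - \frac{b^2}{a^2},
```

where a ≥ b are the semi-axes of the ellipse and φ is the parametric angle of the position on the front. The maximum occurs at the end of the minor axis (φ = π/2), and setting a = b recovers the penny-shaped crack result K_I = 2σ√(a/π).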

  2. An Upscaling Method for Cover-Management Factor and Its Application in the Loess Plateau of China

    PubMed Central

    Zhao, Wenwu; Fu, Bojie; Qiu, Yang

    2013-01-01

The cover-management factor (C-factor) is important for studying soil erosion. In addition, it is important to use sampling plot data to estimate the regional C-factor when assessing erosion and soil conservation. Here, the loess hill and gully region in Ansai County, China, was studied to determine a method for computing the C-factor. This C-factor is used in the Universal Soil Loss Equation (USLE) at a regional scale. After upscaling the slope-scale computational equation, the C-factor for Ansai County was calculated by using the soil loss ratio, precipitation and land use/cover type. The multi-year mean C-factor for Ansai County was 0.36. The C-factor values were greater in the eastern region of the county than in the western region. In addition, the lowest C-factor values were found in the southern region of the county near its southern border. These spatial differences were consistent with the spatial distribution of the soil loss ratios across areas with different land uses. Additional research is needed to determine the effects of seasonal vegetation growth changes on the C-factor, and the C-factor upscaling uncertainties at a regional scale. PMID:24113551

  3. An upscaling method for cover-management factor and its application in the loess Plateau of China.

    PubMed

    Zhao, Wenwu; Fu, Bojie; Qiu, Yang

    2013-10-09

The cover-management factor (C-factor) is important for studying soil erosion. In addition, it is important to use sampling plot data to estimate the regional C-factor when assessing erosion and soil conservation. Here, the loess hill and gully region in Ansai County, China, was studied to determine a method for computing the C-factor. This C-factor is used in the Universal Soil Loss Equation (USLE) at a regional scale. After upscaling the slope-scale computational equation, the C-factor for Ansai County was calculated by using the soil loss ratio, precipitation and land use/cover type. The multi-year mean C-factor for Ansai County was 0.36. The C-factor values were greater in the eastern region of the county than in the western region. In addition, the lowest C-factor values were found in the southern region of the county near its southern border. These spatial differences were consistent with the spatial distribution of the soil loss ratios across areas with different land uses. Additional research is needed to determine the effects of seasonal vegetation growth changes on the C-factor, and the C-factor upscaling uncertainties at a regional scale.

  4. Factors Which Influence The Fish Purchasing Decision: A study on Traditional Market in Riau Mainland

    NASA Astrophysics Data System (ADS)

    Siswati, Latifa; Putri, Asgami

    2018-05-01

The purposes of the research are to analyze and assess the factors which influence fish purchasing by the community of Tenayan Raya district, Pekanbaru. The research methodology used is the survey method, specifically interview and observation techniques (direct supervision) at the market located in Tenayan Raya district. The sampling location/region was determined by purposive sampling, and the respondents were selected by accidental sampling. Factor analysis was applied to data derived from respondents' opinions on various fish-related variables. The results show that the factors which influence the fish purchasing decision in the traditional market of Tenayan Raya district are product factors, price factors, social factors and individual factors. Product factors influencing the purchasing decision include the condition of the fish's eyes, the nutrition of fresh fish, and the diversity of the fish sold. Price factors include the price of fresh fish, a convincing price, and the match between the price and the benefits of the fresh fish. Individual factors include education and income levels. Social factors include family, colleagues and fish-eating habits.

  5. [An automatic peak detection method for LIBS spectrum based on continuous wavelet transform].

    PubMed

    Chen, Peng-Fei; Tian, Di; Qiao, Shu-Jun; Yang, Guang

    2014-07-01

Spectrum peak detection in laser-induced breakdown spectroscopy (LIBS) is an essential step, but the presence of background and noise seriously disturbs the accuracy of peak positions. The present paper proposes a method for automatic peak detection in LIBS spectra, intended to enhance the ability to find overlapping peaks and to improve adaptivity. We introduced the ridge peak detection method based on the continuous wavelet transform to LIBS, discussed the choice of the mother wavelet, and optimized the scale factor and the shift factor. The method also improves ridge peak detection with a ridge-correction step. The experimental results show that, compared with other peak detection methods (the direct comparison method, the derivative method and the ridge peak search method), our method has a significant advantage in distinguishing overlapping peaks and in the precision of peak detection, and can be applied to data processing in LIBS.
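
Ridge-style CWT peak picking of this kind is available in SciPy; the sketch below uses `scipy.signal.find_peaks_cwt` on a synthetic two-line spectrum (illustrative data, not LIBS measurements, and SciPy's stock ricker-wavelet ridge detector rather than the authors' corrected-ridge variant):

```python
import numpy as np
from scipy.signal import find_peaks_cwt

# Synthetic "spectrum" with two Gaussian lines
x = np.arange(500)
spectrum = (np.exp(-0.5 * ((x - 150) / 8.0) ** 2)
            + 0.8 * np.exp(-0.5 * ((x - 330) / 10.0) ** 2))

# Ridge-based detection: the CWT is computed over a range of scale factors,
# maxima are linked across scales into ridge lines, and peaks are the
# ridges that persist over many scales.
peaks = find_peaks_cwt(spectrum, widths=np.arange(4, 24))
print(peaks)
```

Because the decision is made on ridge lines across scales rather than on single-scale maxima, the approach is robust to background and noise, which is the property the abstract emphasizes.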

  6. Method for computing energy release rate using the elastic work factor approach

    NASA Astrophysics Data System (ADS)

    Rhee, K. Y.; Ernst, H. A.

    1992-01-01

The elastic work factor eta(el) concept was applied to composite structures for the calculation of total energy release rate using a single specimen. Cracked lap shear specimens with four different unidirectional fiber orientations were used to examine the dependence of eta(el) on the material properties. Also, three different thickness ratios (lap/strap) were used to determine how geometric conditions affect eta(el). The eta(el) values were calculated in two different ways: the compliance method and the crack closure method. The results show that the two methods produce comparable eta(el) values and that, while eta(el) is affected significantly by geometric conditions, it is reasonably independent of material properties for a given geometry. The results also showed that the elastic work factor can be used to calculate total energy release rate using a single specimen.

  7. Estimate variable importance for recurrent event outcomes with an application to identify hypoglycemia risk factors.

    PubMed

    Duan, Ran; Fu, Haoda

    2015-08-30

    Recurrent event data are an important data type for medical research. In particular, many safety endpoints are recurrent outcomes, such as hypoglycemic events. For such a situation, it is important to identify the factors causing these events and rank these factors by their importance. Traditional model selection methods are not able to provide variable importance in this context. Methods that are able to evaluate the variable importance, such as gradient boosting and random forest algorithms, cannot directly be applied to recurrent events data. In this paper, we propose a two-step method that enables us to evaluate the variable importance for recurrent events data. We evaluated the performance of our proposed method by simulations and applied it to a data set from a diabetes study. Copyright © 2015 John Wiley & Sons, Ltd.
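
The abstract does not spell out the two-step procedure, so the sketch below shows one common analog rather than the authors' exact method: recurrent events are first collapsed into per-subject event counts, and a random forest then ranks the candidate risk factors by importance (all data simulated; the "dose" feature name is invented):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 600
# Step 1: summarize each subject's recurrent events as an event count.
# Here the count truly depends only on feature 0 (e.g., a dose variable).
X = rng.uniform(size=(n, 5))
event_counts = rng.poisson(lam=np.exp(2.0 * X[:, 0]))

# Step 2: fit a random forest to the counts and rank predictors by
# impurity-based feature importance.
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, event_counts)
ranking = np.argsort(rf.feature_importances_)[::-1]
print(ranking)
```

This captures the spirit of the approach (turn recurrent outcomes into something tree ensembles can consume, then read off variable importance); the paper's actual first step is tailored to recurrent-event structure such as varying follow-up.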

8. Comparative evaluation of power factor improvement techniques for squirrel cage induction motors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spee, R.; Wallace, A.K.

    1992-04-01

This paper describes the results obtained from a series of tests of relatively simple methods of improving the power factor of squirrel-cage induction motors. The methods, which are evaluated under controlled laboratory conditions for a 10-hp, high-efficiency motor, include terminal voltage reduction; terminal static capacitors; and a "floating" winding with static capacitors. The test results are compared with equivalent circuit model predictions that are then used to identify optimum conditions for each of the power factor improvement techniques compared with the basic induction motor. Finally, the relative economic value, and the implications of component failures, of the three methods are discussed.

  9. Some factors contributing to protein-energy malnutrition in the middle belt of Nigeria.

    PubMed

    Ighogboja, S I

    1992-10-01

    A number of risk factors leading to malnutrition were investigated among 400 mothers of malnourished children in the middle belt of Nigeria. Poverty, family instability, poor environmental sanitation, faulty weaning practices, illiteracy, ignorance, large family size and preventable infections are the main factors responsible for malnutrition. The strategies for intervention are in the area of health education emphasizing the importance of breastfeeding, family stability, responsible parenthood and small family sizes through culturally acceptable family planning methods. There is need to improve weaning methods through nutrition education, growth monitoring and food demonstration with community participation. Political will is needed to improve literacy status, farming methods and general living conditions.

  10. Bayesian CP Factorization of Incomplete Tensors with Automatic Rank Determination.

    PubMed

    Zhao, Qibin; Zhang, Liqing; Cichocki, Andrzej

    2015-09-01

CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion through explicitly capturing the multilinear latent factors. The existing CP algorithms require the tensor rank to be manually specified; however, the determination of tensor rank remains a challenging problem, especially for the CP rank. In addition, existing approaches do not take into account uncertainty information of latent factors, as well as missing entries. To address these issues, we formulate CP factorization using a hierarchical probabilistic model and employ a fully Bayesian treatment by incorporating a sparsity-inducing prior over multiple latent factors and the appropriate hyperpriors over all hyperparameters, resulting in automatic rank determination. To learn the model, we develop an efficient deterministic Bayesian inference algorithm, which scales linearly with data size. Our method is characterized as a tuning parameter-free approach, which can effectively infer underlying multilinear factors with a low-rank constraint, while also providing predictive distributions over missing entries. Extensive simulations on synthetic data illustrate the intrinsic capability of our method to recover the ground-truth CP rank and prevent the overfitting problem, even when a large number of entries are missing. Moreover, the results from real-world applications, including image inpainting and facial image synthesis, demonstrate that our method outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.
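
As background for what the Bayesian treatment automates, the underlying (non-Bayesian, complete-data) CP model can be fit by plain alternating least squares. The sketch below requires the rank to be supplied in advance, which is exactly the limitation the paper's automatic rank determination removes:

```python
import numpy as np

def cp_als(X, rank, n_iter=200, seed=0):
    """Plain CP factorization of a 3-way tensor by alternating least
    squares: each factor matrix is solved against the matching unfolding
    while the other two are held fixed."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.normal(size=(d, rank)) for d in (I, J, K))
    for _ in range(n_iter):
        M = np.einsum('jr,kr->jkr', B, C).reshape(J * K, rank)
        A = np.linalg.lstsq(M, X.reshape(I, J * K).T, rcond=None)[0].T
        M = np.einsum('ir,kr->ikr', A, C).reshape(I * K, rank)
        B = np.linalg.lstsq(M, X.transpose(1, 0, 2).reshape(J, I * K).T, rcond=None)[0].T
        M = np.einsum('ir,jr->ijr', A, B).reshape(I * J, rank)
        C = np.linalg.lstsq(M, X.transpose(2, 0, 1).reshape(K, I * J).T, rcond=None)[0].T
    return A, B, C

# Recover an exactly rank-2 tensor from random true factors
rng = np.random.default_rng(1)
A0, B0, C0 = rng.normal(size=(8, 2)), rng.normal(size=(7, 2)), rng.normal(size=(6, 2))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=2)
rel_err = np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X)
print(rel_err)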

  11. Application of GA-SVM method with parameter optimization for landslide development prediction

    NASA Astrophysics Data System (ADS)

    Li, X. Z.; Kong, J. M.

    2013-10-01

Prediction of the landslide development process is always a hot issue in landslide research. So far, many methods for landslide displacement series prediction have been proposed. The support vector machine (SVM) has been proved to be a novel algorithm with good performance. However, the performance strongly depends on the right selection of the parameters (C and γ) of the SVM model. In this study, we present an application of the GA-SVM method with parameter optimization to landslide displacement rate prediction. We selected a typical large-scale landslide in a hydroelectric engineering area of Southwest China as a case. On the basis of analyzing the basic characteristics and monitoring data of the landslide, a single-factor GA-SVM model and a multi-factor GA-SVM model of the landslide were built. Moreover, the models were compared with single-factor and multi-factor SVM models of the landslide. The results show that the four models have high prediction accuracies, but the accuracies of the GA-SVM models are slightly higher than those of the SVM models, and the accuracies of the multi-factor models are slightly higher than those of the single-factor models. The accuracy of the multi-factor GA-SVM model is the highest, with the smallest RMSE of 0.0009 and the biggest RI of 0.9992.
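
The GA-SVM idea, evolving (C, γ) to maximize cross-validated accuracy, can be conveyed with a minimal real-coded genetic algorithm over (log10 C, log10 γ) of a support vector regressor. The data, search bounds, and GA operators below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Illustrative monitoring-style data: a rate as a smooth function of a
# single driving factor (not the landslide monitoring data itself)
X = rng.uniform(0, 6, size=(150, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=150)

def fitness(log_c, log_g):
    """Cross-validated R^2 of an SVR with C = 10**log_c, gamma = 10**log_g."""
    svr = SVR(C=10.0 ** log_c, gamma=10.0 ** log_g)
    return cross_val_score(svr, X, y, cv=3, scoring="r2").mean()

# Minimal GA: truncation selection, blend crossover, Gaussian mutation
lo, hi = np.array([-1.0, -3.0]), np.array([3.0, 1.0])   # log10 bounds
pop = rng.uniform(lo, hi, size=(12, 2))
for _ in range(8):
    scores = np.array([fitness(c, g) for c, g in pop])
    parents = pop[np.argsort(scores)[::-1][:6]]         # keep the best half
    kids = []
    for _ in range(6):
        p1, p2 = parents[rng.integers(0, 6, size=2)]
        child = (p1 + p2) / 2 + 0.2 * rng.normal(size=2)
        kids.append(np.clip(child, lo, hi))
    pop = np.vstack([parents, kids])
scores = np.array([fitness(c, g) for c, g in pop])
best = pop[np.argmax(scores)]
print(best, scores.max())
```

A grid search would also work here; the GA's advantage, which the paper relies on, is that it scales to more parameters and non-smooth fitness landscapes.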

  12. Ionospheric Delay Compensation Using a Scale Factor Based on an Altitude of a Receiver

    NASA Technical Reports Server (NTRS)

    Zhao, Hui (Inventor); Savoy, John (Inventor)

    2014-01-01

    In one embodiment, a method for ionospheric delay compensation is provided. The method includes determining an ionospheric delay based on a signal having propagated from the navigation satellite to a location below the ionosphere. A scale factor can be applied to the ionospheric delay, wherein the scale factor corresponds to a ratio of an ionospheric delay in the vertical direction based on an altitude of the satellite navigation system receiver. Compensation can be applied based on the ionospheric delay.
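
The patent language is abstract, so the following is only a toy slab-ionosphere reading of the idea (the shell heights and the linear column fraction are invented for illustration, not the claimed algorithm): the delay computed for a location below the ionosphere is scaled by the fraction of the vertical ionospheric column that still lies above the receiver's altitude.

```python
import numpy as np

def altitude_scale_factor(h_rx_km, h_bottom_km=250.0, h_top_km=450.0):
    """Toy slab model: fraction of a uniform ionospheric layer between
    h_bottom and h_top that remains above a receiver at altitude h_rx."""
    remaining = np.clip(h_top_km - np.maximum(h_rx_km, h_bottom_km), 0.0, None)
    return remaining / (h_top_km - h_bottom_km)

delay_below_iono_m = 4.0   # delay for a signal path below the ionosphere
for h in (0.0, 350.0, 600.0):
    print(h, delay_below_iono_m * altitude_scale_factor(h))
```

A ground receiver keeps the full delay, a receiver inside the layer a proportional part, and a receiver above the layer none; real electron-density profiles are of course not uniform, so an implementation would replace the linear fraction with an integrated profile.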

  13. An improved bias correction method of daily rainfall data using a sliding window technique for climate change impact assessment

    NASA Astrophysics Data System (ADS)

    Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.

    2018-01-01

Regional climate models (RCMs) are used to downscale the coarse resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from the observed climatological data, and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods, and is tested on six watersheds in different climatic zones of India for assessing the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data was used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed for evaluation of the different bias correction methods. The analysis suggested that the proposed method effectively corrects the daily bias in rainfall compared to using monthly factors. Methods such as local intensity scaling, modified power transformation and distribution mapping, which adjust the wet-day frequencies, performed better than the methods that do not adjust wet-day frequencies. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of the observed data, with NSE values above 0.81 over most parts of India.
Hydrological simulations forced with the bias-corrected rainfall (distribution mapping and modified power transformation methods using the proposed daily correction factors) were similar to those forced with the IMD rainfall. The results demonstrate that the methods and the time scales used for bias correction of RCM rainfall data have a large impact on the accuracy of the daily rainfall and consequently on the simulated streamflow. The analysis suggests that distribution mapping with daily correction factors is preferable for adjusting RCM rainfall data, irrespective of season or climate zone, for realistic simulation of streamflow.
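
The sliding-window idea can be sketched as daily quantile mapping in which, for each calendar day, the correction is derived from all values within a ±15-day window pooled across years. This is a simplified reading of the approach, run on synthetic gamma-distributed rainfall rather than the IMD/CCAM data:

```python
import numpy as np

def sliding_window_qmap(model, obs, window=15):
    """Daily quantile-mapping bias correction with a sliding window.
    model, obs: arrays of shape (n_years, 365) of daily rainfall.
    For each calendar day, the model-to-obs quantile map is built from
    all values within +/- `window` days across all years."""
    n_years, n_days = model.shape
    corrected = np.empty_like(model, dtype=float)
    for d in range(n_days):
        idx = np.arange(d - window, d + window + 1) % n_days  # wrap the year
        m_sorted = np.sort(model[:, idx].ravel())
        o_sorted = np.sort(obs[:, idx].ravel())
        # Map each model value through the model CDF to the obs quantile
        ranks = np.searchsorted(m_sorted, model[:, d], side="right") / m_sorted.size
        corrected[:, d] = np.quantile(o_sorted, np.clip(ranks, 0.0, 1.0))
    return corrected

rng = np.random.default_rng(5)
obs = rng.gamma(2.0, 2.0, size=(10, 365))     # "observed" daily rainfall
model = rng.gamma(2.0, 3.0, size=(10, 365))   # wet-biased "RCM" rainfall
corrected = sliding_window_qmap(model, obs)
print(model.mean(), corrected.mean(), obs.mean())
```

Pooling a window rather than a whole calendar month gives each day its own correction while still leaving enough samples to estimate the quantile map, which is the core of the paper's argument against monthly factors.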

  14. Analysis of Risk Factors for Postoperative Morbidity in Perforated Peptic Ulcer

    PubMed Central

    Kim, Jae-Myung; Jeong, Sang-Ho; Park, Soon-Tae; Choi, Sang-Kyung; Hong, Soon-Chan; Jung, Eun-Jung; Ju, Young-Tae; Jeong, Chi-Young; Ha, Woo-Song

    2012-01-01

Purpose Emergency operations for perforated peptic ulcer are associated with a high incidence of postoperative complications. While several studies have investigated the impact of perioperative risk factors and underlying diseases on the postoperative morbidity after abdominal surgery, only a few have analyzed their role in perforated peptic ulcer disease. The purpose of this study was to determine any possible associations between postoperative morbidity and comorbid disease or perioperative risk factors in perforated peptic ulcer. Materials and Methods In total, 142 consecutive patients who underwent surgery for perforated peptic ulcer at a single institution between January 2005 and October 2010 were included in this study. The clinical data concerning the patient characteristics, operative methods, and complications were collected retrospectively. Results The postoperative morbidity rate associated with perforated peptic ulcer operations was 36.6% (52/142). Univariate analysis revealed that a long operating time, the open surgical method, age (≥60), sex (female), high American Society of Anesthesiologists (ASA) score and presence of preoperative shock were significant perioperative risk factors for postoperative morbidity. Significant comorbid risk factors included hypertension, diabetes mellitus and pulmonary disease. Multivariate analysis revealed that a long operating time, the open surgical method, a high ASA score and the presence of preoperative shock were all independent risk factors for postoperative morbidity in perforated peptic ulcer. Conclusions A high ASA score, preoperative shock, open surgery and a long operating time of more than 150 minutes are high risk factors for morbidity. However, there is no association between postoperative morbidity and comorbid disease in patients with a perforated peptic ulcer. PMID:22500261

  15. Longitudinal tests of competing factor structures for the Rosenberg Self-Esteem Scale: traits, ephemeral artifacts, and stable response styles.

    PubMed

    Marsh, Herbert W; Scalas, L Francesca; Nagengast, Benjamin

    2010-06-01

Self-esteem, typically measured by the Rosenberg Self-Esteem Scale (RSE), is one of the most widely studied constructs in psychology. Nevertheless, there is broad agreement that a simple unidimensional factor model, consistent with the original design and typical application in applied research, does not provide an adequate explanation of RSE responses. However, there is no clear agreement about what alternative model is most appropriate, or even a clear rationale for how to test competing interpretations. Three alternative interpretations exist: (a) 2 substantively important trait factors (positive and negative self-esteem), (b) 1 trait factor and ephemeral method artifacts associated with positively or negatively worded items, or (c) 1 trait factor and stable response-style method factors associated with item wording. We have posited 8 alternative models and structural equation model tests based on longitudinal data (4 waves of data across 8 years with a large, representative sample of adolescents). Longitudinal models provide no support for the unidimensional model, undermine support for the 2-factor model, and clearly refute claims that wording effects are ephemeral, but they provide good support for models positing 1 substantive (self-esteem) factor and response-style method factors that are stable over time. This longitudinal methodological approach has not only resolved these long-standing issues in self-esteem research but also has broad applicability to most psychological assessments based on self-reports with a mix of positively and negatively worded items.

  16. A Gradient Taguchi Method for Engineering Optimization

    NASA Astrophysics Data System (ADS)

    Hwang, Shun-Fa; Wu, Jen-Chih; He, Rong-Song

    2017-10-01

To balance the robustness and the convergence speed of optimization, a novel hybrid algorithm consisting of the Taguchi method and the steepest descent method is proposed in this work. The Taguchi method, using orthogonal arrays, can quickly find the optimum combination of the levels of various factors, even when the number of levels and/or factors is quite large. The algorithm is applied to the inverse determination of the elastic constants of three composite plates by combining a numerical method with vibration testing. For these problems, the proposed algorithm found better elastic constants at lower computational cost. Therefore, the proposed algorithm has good robustness and fast convergence speed compared to some hybrid genetic algorithms.
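
The orthogonal-array screening step can be sketched with the L4(2^3) array and a toy separable cost. The objective and factor levels below are invented for illustration, and the steepest-descent refinement the paper couples to this step is omitted:

```python
import numpy as np

# L4(2^3) orthogonal array: 4 runs cover three two-level factors so that
# each level appears equally often with every level of the other factors.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

def objective(levels):
    """Toy separable cost to minimize (stand-in for, e.g., the misfit
    between simulated and measured vibration frequencies)."""
    a, b, c = levels
    return 1.0 * a + 2.0 * b + 3.0 * c

runs = np.array([objective(row) for row in L4])

# Main-effects analysis: for each factor, pick the level with the lower
# mean response over the orthogonal runs.
best_levels = [int(runs[L4[:, f] == 1].mean() < runs[L4[:, f] == 0].mean())
               for f in range(3)]
print(best_levels)
```

Four runs instead of the full 2^3 = 8 suffice because the array balances the factors against each other; this economy is what makes the Taguchi stage cheap even for many factors, after which a local gradient method polishes the chosen combination.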

  17. A symmetrical subtraction combined with interpolated values for eliminating scattering from fluorescence EEM data

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Liu, Xiaofei; Wang, Yutian

    2016-08-01

Parallel factor analysis is a widely used method to extract qualitative and quantitative information about the analyte of interest from fluorescence excitation-emission matrices containing unknown components. Large-amplitude scattering influences the results of parallel factor analysis. Many methods for eliminating scattering have been proposed, each with its own advantages and disadvantages. The combination of symmetrical subtraction and interpolated values is discussed; the combination refers both to the combination of results and to the combination of methods. Nine methods were used for comparison. The results show that the combination of results gives better concentration predictions for all the components.
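
The interpolated-values half of the combination can be sketched as excising the Rayleigh scattering band (where emission ≈ excitation) and refilling it by linear interpolation along each emission spectrum. The EEM, scatter ridge, and band width below are synthetic and invented for illustration:

```python
import numpy as np

ex = np.arange(250, 451, 10.0)               # excitation wavelengths (nm)
em = np.arange(260, 561, 5.0)                # emission wavelengths (nm)
true_eem = np.exp(-((em[None, :] - 420.0) / 60.0) ** 2) * np.ones((ex.size, 1))
scatter = 5.0 * np.exp(-0.5 * ((em[None, :] - ex[:, None]) / 4.0) ** 2)
measured = true_eem + scatter                # fluorescence + Rayleigh ridge

half_width = 15.0                            # scatter band half-width (nm)
corrected = measured.copy()
for i, x in enumerate(ex):
    band = np.abs(em - x) <= half_width      # points inside the scatter band
    corrected[i, band] = np.interp(em[band], em[~band], corrected[i, ~band])
```

Interpolation keeps the data array complete (which trilinear PARAFAC fitting prefers over missing values), at the cost of inventing smooth values inside the band; symmetrical subtraction attacks the same ridge differently, which is why the abstract combines the two.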

  18. Study of Fuze Structure and Reliability Design Based on the Direct Search Method

    NASA Astrophysics Data System (ADS)

    Lin, Zhang; Ning, Wang

    2017-03-01

Redundant design is one of the important methods to improve the reliability of a system, but mutual coupling of multiple factors is often involved in the design. In this study, the Direct Search Method is introduced into optimum redundancy configuration for design optimization, in which the reliability, cost, structural weight and other factors can be taken into account simultaneously, and the redundant allocation and reliability design of an aircraft critical system are computed. The results show that this method is convenient and workable, and applicable, upon appropriate modifications, to the redundancy configuration and optimization of various designs. The method has good practical value.

  19. Research on Operation Assessment Method for Energy Meter

    NASA Astrophysics Data System (ADS)

    Chen, Xiangqun; Huang, Rui; Shen, Liman; chen, Hao; Xiong, Dezhi; Xiao, Xiangqi; Liu, Mouhai; Xu, Renheng

    2018-03-01

The existing electric energy meter rotation maintenance strategy checks the meter at regular intervals and evaluates its state. It considers only the influence of time, neglecting other factors, which leads to inaccurate evaluations and wastes resources. In order to evaluate the running state of the electric energy meter in a timely manner, a method for operational evaluation of the electric energy meter is proposed. The method extracts the data in the existing data acquisition system, marketing business system and metrology production scheduling platform that bear on the state of energy meters, and classifies the influencing factors into error stability, operational reliability, potential risks and others, scored from basic tests, inspection, monitoring and family defect detection. An evaluation model then combines these scores to assess the operating state of the electric energy meter, and a corresponding rotation maintenance strategy is put forward.

  20. A Method of Evaluating Operation of Electric Energy Meter

    NASA Astrophysics Data System (ADS)

    Chen, Xiangqun; Li, Tianyang; Cao, Fei; Chu, Pengfei; Zhao, Xinwang; Huang, Rui; Liu, Liping; Zhang, Chenglin

    2018-05-01

    The existing rotation maintenance strategy for electric energy meters checks each meter at regular intervals and evaluates its state. Because it considers only the influence of time and neglects other factors, the evaluation is inaccurate and resources are wasted. In order to evaluate the running state of an electric energy meter in a timely manner, an operation evaluation method is proposed. The method extracts the factors affecting meter state from the existing data acquisition system, marketing business system and metrology production scheduling platform, and classifies them into error stability, operational reliability, potential risk and other categories, from which basic test, inspection, monitoring and family-defect-detection scores are derived. An evaluation model then combines these scores to assess the operating state of the meter, and a corresponding rotation maintenance strategy is put forward.

  1. Research on electricity consumption forecast based on mutual information and random forests algorithm

    NASA Astrophysics Data System (ADS)

    Shi, Jing; Shi, Yunli; Tan, Jian; Zhu, Lei; Li, Hu

    2018-02-01

    Traditional power forecasting models cannot efficiently take various factors into account, nor can they identify the most relevant ones. In this paper, mutual information from information theory and the random forests algorithm from artificial intelligence are introduced into medium- and long-term electricity demand prediction. Mutual information identifies highly related factors from the average mutual information between candidate variables and electricity demand; different industries may be highly associated with different variables. The random forests algorithm is then used to build a separate forecasting model for each industry from its correlated factors. Electricity consumption data for Jiangsu Province are taken as a practical example, and the method is compared with approaches that disregard mutual information and industry differences. The simulation results show that the method is sound, effective, and provides higher prediction accuracy.
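    The two-stage pipeline the abstract describes can be sketched with standard estimators; this uses synthetic data in place of the Jiangsu consumption series, which is not reproduced here, and scikit-learn's mutual-information estimator rather than whatever implementation the authors used.

```python
# Stage 1: mutual information for factor selection; stage 2: a random forest
# on the selected factors. Data and factor meanings are synthetic stand-ins.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 5))                 # candidate factors (GDP, population, ...)
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)  # demand proxy

# Stage 1: rank factors by average mutual information with demand.
mi = mutual_info_regression(X, y, random_state=0)
selected = np.argsort(mi)[-2:]              # keep the two most informative factors

# Stage 2: fit a random forest on the selected factors only.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:, selected], y)
r2 = model.score(X[:, selected], y)         # in-sample fit quality
```

    In the paper's setting, stage 1 would be rerun per industry so that each industry's model uses its own high-MI factors.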

  2. Multiple Statistical Models Based Analysis of Causative Factors and Loess Landslides in Tianshui City, China

    NASA Astrophysics Data System (ADS)

    Su, Xing; Meng, Xingmin; Ye, Weilin; Wu, Weijiang; Liu, Xingrong; Wei, Wanhong

    2018-03-01

    Tianshui City is one of the mountainous cities in Gansu Province, China, that are threatened by severe geo-hazards. Statistical probability models have been widely used in analyzing and evaluating geo-hazards such as landslides. In this research, three approaches (the Certainty Factor Method, the Weight of Evidence Method and the Information Quantity Method) were adopted to quantitatively analyze the relationship between the causative factors and the landslides. The source data used in this study include the SRTM DEM and local geological maps at a scale of 1:200,000. Twelve causative factors (altitude, slope, aspect, curvature, plan curvature, profile curvature, roughness, relief amplitude, distance to rivers, distance to faults, distance to roads, and stratum lithology) were selected for correlation analysis after a thorough investigation of the geological conditions and historical landslides. The results indicate that the outcomes of the three models are fairly consistent.
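    Of the three models named, the Weight of Evidence Method has the most compact formulation: for each class of a causative factor, positive and negative weights compare landslide density inside and outside the class. A minimal sketch on synthetic raster data (the paper's DEM-derived factors are not reproduced):

```python
# Weight-of-evidence for one class of one causative factor, on a flattened
# binary landslide inventory. Data are synthetic.
import numpy as np

def weights_of_evidence(factor_class, landslide):
    """W+ = ln(P(B|L)/P(B|~L)), W- = ln(P(~B|L)/P(~B|~L)); both arrays boolean."""
    B, L = factor_class, landslide
    p = lambda m: m.mean()                       # proportion of cells
    w_plus = np.log((p(B & L) / p(L)) / (p(B & ~L) / p(~L)))
    w_minus = np.log((p(~B & L) / p(L)) / (p(~B & ~L) / p(~L)))
    return w_plus, w_minus

rng = np.random.default_rng(1)
cls = rng.random(10_000) < 0.3                   # 30% of cells fall in this class
slide = rng.random(10_000) < np.where(cls, 0.10, 0.02)  # slides concentrated in it
wp, wm = weights_of_evidence(cls, slide)         # expect wp > 0 > wm
```

    A class positively associated with landslides yields W+ > 0 and W- < 0; summing the weights over all factors gives the susceptibility score that is then mapped.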

  3. Being Single as a Social Barrier to Access Reproductive Healthcare Services by Iranian Girls

    PubMed Central

    Kohan, Shahnaz; Mohammadi, Fatemeh; Mostafavi, Firoozeh; Gholami, Ali

    2017-01-01

    Background: Iranian single women are deprived of reproductive healthcare services, though the provision of such services to the public has increased. This study aimed to explore the experiences of Iranian single women on their access to reproductive health services. Methods: A qualitative design using a conventional content analysis method was used. Semi-structured interviews were held with 17 single women and nine health providers chosen using the purposive sampling method. Results: Data analysis resulted in the development of three categories: ‘family’s attitudes and performance about single women’s reproductive healthcare,’ ‘socio-cultural factors influencing reproductive healthcare,’ and ‘cultural factors influencing being a single woman.’ Conclusion: Cultural and contextual factors affect being a single woman in every society. Therefore, healthcare providers need to identify such factors during the designing of strategies for improving the facilitation of access to reproductive healthcare services. PMID:28812794

  4. Ion-ion dynamic structure factor of warm dense mixtures

    DOE PAGES

    Gill, N. M.; Heinonen, R. A.; Starrett, C. E.; ...

    2015-06-25

    In this study, the ion-ion dynamic structure factor of warm dense matter is determined using the recently developed pseudoatom molecular dynamics method [Starrett et al., Phys. Rev. E 91, 013104 (2015)]. The method uses density functional theory to determine ion-ion pair interaction potentials that have no free parameters. These potentials are used in classical molecular dynamics simulations. This constitutes a computationally efficient and realistic model of dense plasmas. Comparison with recently published simulations of the ion-ion dynamic structure factor and sound speed of warm dense aluminum finds good to reasonable agreement. Using this method, we make predictions of the ion-ion dynamical structure factor and sound speed of a warm dense mixture: equimolar carbon-hydrogen. This material is commonly used as an ablator in inertial confinement fusion capsules, and our results are amenable to direct experimental measurement.
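    The quantity being predicted, S(k, ω), is obtained from an MD trajectory by Fourier-transforming the time autocorrelation of the density mode ρ(k, t). A toy sketch of that post-processing step (the trajectory here is a random walk, not a pseudoatom MD run, and prefactors are ignored):

```python
# Estimate F(k,t) and S(k,omega) from a 1-D particle trajectory via the
# Wiener-Khinchin theorem. Toy data; normalisation constants are omitted.
import numpy as np

def dynamic_structure_factor(positions, k):
    """positions: (n_steps, n_particles) 1-D trajectory; k: wavenumber."""
    rho = np.exp(-1j * k * positions).sum(axis=1)        # density mode rho(k,t)
    n_steps, n_part = positions.shape
    # intermediate scattering function F(k,t): FFT-based autocorrelation
    f = np.fft.ifft(np.abs(np.fft.fft(rho, 2 * n_steps)) ** 2)[:n_steps].real
    f /= n_part * np.arange(n_steps, 0, -1)              # unbiased lag counts
    s_k_omega = np.abs(np.fft.rfft(f))                   # S(k,omega) up to constants
    return f, s_k_omega

rng = np.random.default_rng(2)
traj = np.cumsum(rng.normal(scale=0.05, size=(512, 64)), axis=0)
F, S = dynamic_structure_factor(traj, k=2.0)             # F[0] ~ static S(k) > 0
```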

  5. Exploring Task- and Student-Related Factors in the Method of Propositional Manipulation (MPM)

    ERIC Educational Resources Information Center

    Leppink, Jimmie; Broers, Nick J.; Imbos, Tjaart; van der Vleuten, Cees P. M.; Berger, Martijn P. F.

    2011-01-01

    The method of propositional manipulation (MPM) aims to help students develop conceptual understanding of statistics by guiding them into self-explaining propositions. To explore task- and student-related factors influencing students' ability to learn from MPM, twenty undergraduate students performed six learning tasks while thinking aloud. The…

  6. A Mixed-Methods Approach to Demotivating Factors among Iranian EFL Learners

    ERIC Educational Resources Information Center

    Ghonsooly, Behzad; Hassanzadeh, Tahereh; Samavarchi, Laila; Hamedi, Seyyedeh Mina

    2017-01-01

    This study used a mixed-methods approach to investigate Iranian EFL learners' attitudes towards demotivating factors which may hinder their success in a language learning course. In the quantitative phase, a sample of 337 undergraduate students from universities in Mashhad, Yazd and Gonabad completed a 34-item questionnaire. They also completed…

  7. A Structural and Correlational Analysis of Two Common Measures of Personal Epistemology

    ERIC Educational Resources Information Center

    Laster, Bonnie Bost

    2010-01-01

    Scope and Method of Study: The current inquiry is a factor analytic study which utilizes first and second order factor analytic methods to examine the internal structures of two measurements of personal epistemological beliefs: the Schommer Epistemological Questionnaire (SEQ) and Epistemic Belief Inventory (EBI). The study also examines the…

  8. Risky Business: An Ecological Analysis of Intimate Partner Violence Disclosure

    ERIC Educational Resources Information Center

    Alaggia, Ramona; Regehr, Cheryl; Jenney, Angelique

    2012-01-01

    Objective: A multistage, mixed-methods study using grounded theory with descriptive data was conducted to examine factors in disclosure of intimate partner violence (IPV). Method: In-depth interviews with individuals and focus groups were undertaken to collect data from 98 IPV survivors and service providers to identify influential factors.…

  9. Factors, Practices, and Policies Influencing Students' Upward Transfer to Baccalaureate-Degree Programs and Institutions: A Mixed Methods Analysis

    ERIC Educational Resources Information Center

    LaSota, Robin Rae

    2013-01-01

    My dissertation utilizes an explanatory, sequential mixed-methods research design to assess factors influencing community college students' transfer probability to baccalaureate-granting institutions and to present promising practices in colleges and states directed at improving upward transfer, particularly for low-income and first-generation…

  10. The Critical Success Factors Method: Its Application in a Special Library Environment.

    ERIC Educational Resources Information Center

    Borbely, Jack

    1981-01-01

    Discusses the background and theory of the Critical Success Factors (CSF) management method, as well as its application in an information center or other special library environment. CSF is viewed as a management tool that can enhance the viability of the special library within its parent organization. (FM)

  11. Construction of RFIF using VVSFs with application

    NASA Astrophysics Data System (ADS)

    Katiyar, Kuldip; Prasad, Bhagwati

    2017-10-01

    A method of variable vertical scaling factors (VVSFs) is proposed to define the recurrent fractal interpolation function (RFIF) for fitting data sets. A generalization of one of the recent methods, using an analytic approach, is presented for finding the variable vertical scaling factors. An application to the reconstruction of an EEG signal is also given.
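    For orientation, the classical (non-recurrent) fractal interpolation construction with a variable scaling factor d_i per interval uses affine maps w_i(x, y) = (a_i x + e_i, c_i x + d_i y + f_i), each constrained to send the whole data span onto the i-th interval. The sketch below shows only that standard construction, not the paper's recurrent variant or its analytic choice of the d_i:

```python
# Coefficients of the affine IFS for a fractal interpolation function with
# per-interval vertical scaling factors d. Classical construction only.
import numpy as np

def ifs_maps(x, y, d):
    """Each map must satisfy w_i(x0,y0)=(x_{i-1},y_{i-1}), w_i(xN,yN)=(x_i,y_i)."""
    x0, xN, y0, yN = x[0], x[-1], y[0], y[-1]
    maps = []
    for i in range(1, len(x)):
        a = (x[i] - x[i - 1]) / (xN - x0)
        e = x[i - 1] - a * x0
        c = (y[i] - y[i - 1] - d[i - 1] * (yN - y0)) / (xN - x0)
        f = y[i - 1] - c * x0 - d[i - 1] * y0
        maps.append((a, e, c, d[i - 1], f))
    return maps

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.5, 2.0])
d = [0.3, -0.2, 0.4]                    # variable vertical scaling factors
maps = ifs_maps(x, y, d)                # attractor of this IFS interpolates (x, y)
```

    The endpoint constraints pin the attractor to the data points, while each d_i controls the roughness of the graph over its interval, which is what makes VVSFs useful for signal fitting.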

  12. Some Factors That Affecting the Performance of Mathematics Teachers in Junior High School in Medan

    ERIC Educational Resources Information Center

    Manullang, Martua; Rajagukguk, Waminton

    2016-01-01

    This research examines the direct and indirect effects of organizational knowledge on achievement motivation, decision making, organizational commitment, and the performance of mathematics teachers in junior high schools in Medan. The research method is a method of…

  13. Analyzing the Validity of the Adult-Adolescent Parenting Inventory for Low-Income Populations

    ERIC Educational Resources Information Center

    Lawson, Michael A.; Alameda-Lawson, Tania; Byrnes, Edward

    2017-01-01

    Objectives: The purpose of this study was to examine the construct and predictive validity of the Adult-Adolescent Parenting Inventory (AAPI-2). Methods: The validity of the AAPI-2 was evaluated using multiple statistical methods, including exploratory factor analysis, confirmatory factor analysis, and latent class analysis. These analyses were…

  14. Testing Measurement Invariance in the Target Rotated Multigroup Exploratory Factor Model

    ERIC Educational Resources Information Center

    Dolan, Conor V.; Oort, Frans J.; Stoel, Reinoud D.; Wicherts, Jelte M.

    2009-01-01

    We propose a method to investigate measurement invariance in the multigroup exploratory factor model, subject to target rotation. We consider both oblique and orthogonal target rotation. This method has clear advantages over other approaches, such as the use of congruence measures. We demonstrate that the model can be implemented readily in the…

  15. Novel method for on-road emission factor measurements using a plume capture trailer.

    PubMed

    Morawska, L; Ristovski, Z D; Johnson, G R; Jayaratne, E R; Mengersen, K

    2007-01-15

    The method outlined provides for emission factor measurements to be made for unmodified vehicles driving under real world conditions at minimal cost. The method consists of a plume capture trailer towed behind a test vehicle. The trailer collects a sample of the naturally diluted plume in a 200 L conductive bag and this is delivered immediately to a mobile laboratory for subsequent analysis of particulate and gaseous emissions. The method offers low test turnaround times with the potential to complete much larger numbers of emission factor measurements than have been possible using dynamometer testing. Samples can be collected at distances up to 3 m from the exhaust pipe allowing investigation of early dilution processes. Particle size distribution measurements, as well as particle number and mass emission factor measurements, based on naturally diluted plumes are presented. A dilution profile relating the plume dilution ratio to distance from the vehicle tail pipe for a diesel passenger vehicle is also presented. Such profiles are an essential input for new mechanistic roadway air quality models.

  16. Comprehensive quantification of signal-to-noise ratio and g-factor for image-based and k-space-based parallel imaging reconstructions.

    PubMed

    Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A

    2008-10-01

    Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
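    The core of the "pseudo multiple replica" idea is to replay many synthetic noise realisations through one fixed linear reconstruction and read the noise off pixel by pixel. A toy version follows; the "reconstruction" is a bare pseudo-inverse of a random complex encoding matrix standing in for SENSE/GRAPPA, which are not reimplemented here.

```python
# Monte-Carlo noise replicas through a fixed linear reconstruction to obtain
# a per-pixel noise map and SNR. Encoding matrix and noise level are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_meas, n_pix, n_replicas = 32, 16, 2000
E = rng.normal(size=(n_meas, n_pix)) + 1j * rng.normal(size=(n_meas, n_pix))
recon = np.linalg.pinv(E)                  # fixed linear reconstruction operator

truth = np.ones(n_pix)
data = E @ truth                           # noiseless measurements
sigma = 0.1                                # noise amplitude from a 'prescan'

# Replay synthetic noise-only replicas through the same reconstruction.
noise = sigma / np.sqrt(2) * (rng.normal(size=(n_meas, n_replicas))
                              + 1j * rng.normal(size=(n_meas, n_replicas)))
images = recon @ (data[:, None] + noise)   # (n_pix, n_replicas)
pixel_noise = images.std(axis=1)           # spatially varying noise map
snr = np.abs(recon @ data) / pixel_noise   # pixel-by-pixel SNR
```

    The g-factor would then be the ratio of this SNR loss against a fully sampled reference reconstruction, with correlated coil noise drawn from the measured prescan covariance instead of the white noise used here.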

  17. Optimal parallel solution of sparse triangular systems

    NASA Technical Reports Server (NTRS)

    Alvarado, Fernando L.; Schreiber, Robert

    1990-01-01

    A method for the parallel solution of triangular sets of equations is described that is appropriate when there are many right-hand sides. By preprocessing, the method can reduce the number of parallel steps required to solve Lx = b compared to parallel forward- or back-substitution. Applications are to iterative solvers with triangular preconditioners, to structural analysis, and to power systems applications, where there may be many right-hand sides (not all available a priori). The inverse of L is represented as a product of sparse triangular factors. The problem is to find a factored representation of this inverse of L with the smallest number of factors (or partitions), subject to the requirement that no new nonzero elements be created in the formation of these inverse factors. A method from an earlier reference is shown to solve this problem. This method is improved upon by constructing a permutation of the rows and columns of L that preserves triangularity and allows for the best possible such partition. A number of practical examples and algorithmic details are presented. The parallelism attainable is illustrated by means of elimination trees and clique trees.
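    The factored-inverse idea can be shown on a small dense example: split a unit lower-triangular L into column blocks, L = P1 · P2, and apply inv(P2) · inv(P1) to each right-hand side instead of running a sequential forward solve. The two-way split below is fixed for illustration; the paper's contribution is choosing the partition (and a permutation) so that the inverse factors stay sparse with no fill.

```python
# Column-block factorization of a unit lower-triangular matrix and its use
# for applying inv(L). Fixed two-way partition; not the paper's optimal one.
import numpy as np

n = 6
rng = np.random.default_rng(4)
L = np.tril(rng.normal(size=(n, n)), k=-1) + np.eye(n)   # unit lower triangular

def column_block_factor(L, cols):
    """Identity with the selected columns replaced by the columns of L."""
    P = np.eye(len(L))
    P[:, cols] = L[:, cols]
    return P

P1 = column_block_factor(L, range(0, 3))
P2 = column_block_factor(L, range(3, 6))
assert np.allclose(P1 @ P2, L)             # L factors exactly by column blocks

b = rng.normal(size=n)
x = np.linalg.inv(P2) @ (np.linalg.inv(P1) @ b)   # apply inv(L) as two products
```

    With many right-hand sides, each factor application is a sparse matrix-vector product that parallelizes over rows, so the number of sequential steps equals the number of factors rather than n.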

  18. Comparative evaluation of perceptions of dental students to three methods of teaching in Ile-Ife, Nigeria.

    PubMed

    Esan, T A; Oziegbe, E O

    2015-12-01

    The World Health Organization in 1994 recommended that dental education should be problem based, socially and culturally relevant, and community oriented. To explore the perceptions of pre-phase II (pre-clinical II) dental students of three methods of teaching used during two academic sessions, all part IV dental students in two consecutive sessions undergoing the pre-phase II course in the Faculty of Dentistry, Obafemi Awolowo University, Ile-Ife were recruited into the study. Three different modes of teaching, namely problem-based learning (PBL), hybrid PBL and traditional teaching, were used to teach the students. A twenty-two-item anonymous questionnaire on a five-point Likert scale was administered to the students at the end of the course. Six perceived factors were extracted from the questionnaire using factor analysis. There was a statistically significant difference (p < 0.01) between the overall mean of the PBL method and the other methods of teaching. The perceived factor "communication with peers" had the highest mean score for PBL in both sessions (4.57 ± 0.58 and 4.09 ± 0.93, respectively). The PBL method was rated helpful on all six perceived factors, whereas the students perceived that the traditional method of teaching was not helpful for "interaction with tutors" and "challenge to critical thinking". The findings showed that students preferred the PBL method to the other forms of teaching. PBL enhanced the students' communication skills, was very useful as a pedagogic tool and improved their critical thinking.

  19. The Role of Psychological and Physiological Factors in Decision Making under Risk and in a Dilemma

    PubMed Central

    Fooken, Jonas; Schaffner, Markus

    2016-01-01

    Different methods for eliciting the risk attitudes of individuals often provide differing results despite a common underlying theory. A reason for such inconsistencies may be that underlying factors influence risk-taking decisions differently across methods; evaluating this conjecture requires a better understanding of those factors across methods and decision contexts. In this paper we study the differing results of two risk elicitation methods by linking estimates of risk attitudes to gender, age, and personality traits, which have been shown to be related. We also investigate the role of these factors during decision-making in a dilemma situation. For both decision contexts we additionally examine the decision-maker's physiological state during the decision, measured by heart rate variability (HRV), which we use as an indicator of emotional involvement. We found that the two elicitation methods provide different individual risk attitude measures, which is partly reflected in a gender effect that differs between the methods. Personality traits explain relatively little of the variation in risk attitudes or of the difference between methods. We also found that risk taking and the physiological state are related for one of the methods, suggesting that more emotionally involved individuals were more risk averse in the experiment. Finally, we found evidence that personality traits are connected to whether individuals made a decision in the dilemma situation, but risk attitudes and the physiological state were not indicative of the ability to decide in this context. PMID:26834591

  20. Factor Structure and Psychometric Properties of the Brief Illness Perception Questionnaire in Turkish Cancer Patients

    PubMed Central

    Karataş, Tuğba; Özen, Şükrü; Kutlutürkan, Sevinç

    2017-01-01

    Objective: The main aim of this study was to investigate the factor structure and psychometric properties of the Brief Illness Perception Questionnaire (BIPQ) in Turkish cancer patients. Methods: This methodological study involved 135 cancer patients. Statistical methods included confirmatory or exploratory factor analysis and Cronbach alpha coefficients for internal consistency. Results: The values of fit indices are within the acceptable range. The alpha coefficients for emotional illness representations, cognitive illness representations, and total scale are 0.83, 0.80, and 0.85, respectively. Conclusions: The results confirm the two-factor structure of the Turkish BIPQ and demonstrate its reliability and validity. PMID:28217734
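    The internal-consistency statistic quoted above is Cronbach's alpha, α = k/(k−1) · (1 − Σ var(item_i) / var(total score)). A small self-contained computation on synthetic item data (not the BIPQ responses):

```python
# Cronbach's alpha for an (n_respondents x k_items) score matrix.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the sum score
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(5)
trait = rng.normal(size=(300, 1))                    # shared latent trait
items = trait + 0.5 * rng.normal(size=(300, 8))      # 8 correlated items
alpha = cronbach_alpha(items)                        # high for correlated items
```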

  1. Methods of Combinatorial Optimization to Reveal Factors Affecting Gene Length

    PubMed Central

    Bolshoy, Alexander; Tatarinova, Tatiana

    2012-01-01

    In this paper we present a novel method for ranking genomes according to gene lengths. The main outcomes described in this paper are the following: the formulation of the genome ranking problem, the presentation of relevant approaches to solving it, and the demonstration of preliminary results from ordering prokaryotic genomes. Using a subset of prokaryotic genomes, we attempted to uncover factors affecting gene length. We have demonstrated that hyperthermophilic species have shorter genes than mesophilic organisms, which probably means that environmental factors affect gene length. Moreover, these preliminary results show that environmental factors group evolutionarily distant species together in the ranking. PMID:23300345

  2. Contribution of artificial intelligence to the knowledge of prognostic factors in Hodgkin's lymphoma.

    PubMed

    Buciński, Adam; Marszałł, Michał Piotr; Krysiński, Jerzy; Lemieszek, Andrzej; Załuski, Jerzy

    2010-07-01

    Hodgkin's lymphoma is one of the most curable malignancies and most patients achieve a lasting complete remission. In this study, artificial neural network (ANN) analysis was shown to provide significant factors with regard to 5-year recurrence after lymphoma treatment. Data from 114 patients treated for Hodgkin's disease were available for evaluation and comparison. A total of 31 variables were subjected to ANN analysis. The ANN approach as an advanced multivariate data processing method was shown to provide objective prognostic data. Some of these prognostic factors are consistent or even identical to the factors evaluated earlier by other statistical methods.

  3. Stress Intensity Factors of Semi-Circular Bend Specimens with Straight-Through and Chevron Notches

    NASA Astrophysics Data System (ADS)

    Ayatollahi, M. R.; Mahdavi, E.; Alborzi, M. J.; Obara, Y.

    2016-04-01

    The semi-circular bend (SCB) specimen is a useful test specimen for determining the fracture toughness of rock and geo-materials. Generally, initial cracks in rock test specimens are produced in two shapes: straight-edge cracks and chevron notches. In this study, the minimum dimensionless stress intensity factors of the SCB specimen with straight-through and chevron notches are calculated. First, using finite element analysis, a suitable relation for the dimensionless stress intensity factor of the SCB specimen with a straight-through crack is presented, based on the normalized crack length and the half-distance between supports. To evaluate the validity and accuracy of this relation, the obtained results are compared with numerical and experimental results reported in the literature. Subsequently, by performing experiments and finite element analysis of the SCB specimen with a chevron notch, the minimum dimensionless stress intensity factor of this specimen is obtained. Using the new equation for the dimensionless stress intensity factor of the SCB specimen with a straight-through crack and an analytical method, i.e., Bluhm's slice synthesis method, the minimum (critical) dimensionless stress intensity factor of chevron-notched semi-circular bend specimens is calculated. Good agreement is observed between the results of the two methods.

  4. Does experience of the 'occult' predict use of complementary medicine? Experience of, and beliefs about, both complementary medicine and ways of telling the future.

    PubMed

    Furnham, A

    2000-12-01

    This study looked at the relationship between ratings of the perceived effectiveness of 24 methods for telling the future, 39 complementary medicine (CM) therapies, and 12 specific attitude statements about science and medicine. A total of 159 participants took part. The results showed that the participants were deeply sceptical of the effectiveness of the methods for telling the future, whose ratings loaded onto meaningful and interpretable factors. Participants were much more positive about particular, but not all, specialties of complementary medicine; these ratings also yielded a meaningful factor structure. Finally, the 12 attitude-to-science/medicine statements revealed four factors: scepticism of medicine; the importance of psychological factors; patient protection; and the importance of scientific evaluation. Regression analysis showed that belief in the overall effectiveness of different ways of predicting the future was best predicted by belief in the effectiveness of the CM therapies. Although interest in the occult was associated with interest in CM, participants were able to distinguish between the two, and displayed scepticism about the effectiveness of methods of predicting the future and some CM therapies. Copyright 2000 Harcourt Publishers Ltd.

  5. Recovering hidden diagonal structures via non-negative matrix factorization with multiple constraints.

    PubMed

    Yang, Xi; Han, Guoqiang; Cai, Hongmin; Song, Yan

    2017-03-31

    Revealing data with intrinsically diagonal block structures is particularly useful for analyzing groups of highly correlated variables. Earlier research based on non-negative matrix factorization (NMF) has shown it to be effective in representing such data by decomposing the observed data into two factors, where, from a linear-algebra perspective, one factor is considered the feature matrix and the other the expansion loading. If the data are sampled from multiple independent subspaces, the loading factor possesses a diagonal structure under an ideal matrix decomposition. However, the standard NMF method and its variants have not been reported to exploit this type of data via direct estimation. To address this issue, a non-negative matrix factorization model with multiple constraints is proposed in this paper. The constraints include a sparsity norm on the feature matrix and a total-variation norm on each column of the loading matrix. The proposed model is shown to be capable of efficiently recovering diagonal block structures hidden in observed samples. An efficient numerical algorithm based on the alternating direction method of multipliers is proposed for optimizing the new model. Compared with several benchmark models, the proposed method performs robustly and effectively on simulated and real biological data.
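    The setting can be illustrated without the paper's constrained model: when samples come from independent subspaces, even plain multiplicative-update NMF tends to recover a block-structured loading on easy synthetic data. The sketch below uses Lee-Seung updates as a stand-in; it demonstrates the problem setting, not the proposed sparsity + total-variation ADMM algorithm.

```python
# Plain multiplicative-update NMF on data sampled from two independent
# subspaces, whose ideal loading matrix is block-diagonal. Synthetic data.
import numpy as np

rng = np.random.default_rng(6)
W_true = np.zeros((40, 2))
W_true[:20, 0] = rng.random(20) + 0.5      # samples 0-19 live in subspace 1
W_true[20:, 1] = rng.random(20) + 0.5      # samples 20-39 live in subspace 2
H_true = rng.random((2, 15)) + 0.1
X = W_true @ H_true

W = rng.random((40, 2)) + 0.1
H = rng.random((2, 15)) + 0.1
for _ in range(2000):                      # Lee-Seung multiplicative updates
    H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-12)

rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
dominance = W.max(axis=1) / (W.sum(axis=1) + 1e-12)   # per-sample block purity
```

    The paper's constraints push `dominance` toward exactly 1 (each sample loading on a single block) even on noisy data where plain NMF mixes the components.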

  6. Determination of effective loss factors in reduced SEA models

    NASA Astrophysics Data System (ADS)

    Chimeno Manguán, M.; Fernández de las Heras, M. J.; Roibás Millán, E.; Simón Hidalgo, F.

    2017-01-01

    The definition of Statistical Energy Analysis (SEA) models for large complex structures is highly conditioned by the classification of the structural elements into a set of coupled subsystems and the subsequent determination of the loss factors representing both the internal damping and the coupling between subsystems. The accurate definition of the complete system can lead to excessively large models as size and complexity increase, which can also raise practical issues for the experimental determination of the loss factors. This work presents a formulation of reduced SEA models for incomplete systems defined by a set of effective loss factors. The reduced SEA model provides a feasible number of subsystems for the application of the Power Injection Method (PIM). For structures of high complexity, access to some components, for instance internal equipment or panels, can be restricted; in these cases PIM cannot be used to carry out an experimental SEA analysis. New methods are presented that, in combination with the reduced SEA models, allow some of the model loss factors that could not be obtained through PIM to be defined. The methods are validated on a numerical analysis case and are also applied to an actual spacecraft structure with accessibility restrictions: a solar wing in folded configuration.
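    In idealised form, PIM injects a known power into one subsystem at a time, measures all subsystem energies, and solves the SEA power balance P = ω · Λ · E for the loss-factor matrix Λ. A minimal numerical sketch, using a simplified 3-subsystem loss matrix with made-up values (a real test replaces E with measured energies and must respect the full SEA sign/consistency conventions):

```python
# Idealised Power Injection Method: recover the loss-factor matrix from
# per-experiment injected powers and 'measured' subsystem energies.
import numpy as np

omega = 2 * np.pi * 1000.0                 # band centre frequency (rad/s)
# Assumed loss-factor matrix: diagonal = total loss, off-diagonal = -coupling.
Lam = np.array([[0.030, -0.004, -0.002],
                [-0.003, 0.025, -0.005],
                [-0.002, -0.004, 0.020]])

P = np.diag([1.0, 1.0, 1.0])               # unit power injected, one subsystem per test
E = np.linalg.solve(omega * Lam, P)        # energies each experiment would measure

Lam_est = P @ np.linalg.inv(E) / omega     # PIM estimate of the loss factors
```

    The reduced-model formulation in the paper addresses the case where some rows of E cannot be measured, so Λ can only be recovered as effective loss factors of the accessible subsystems.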

  7. Slope stability analysis using limit equilibrium method in nonlinear criterion.

    PubMed

    Lin, Hang; Zhong, Wenwen; Xiong, Wei; Tang, Wenyu

    2014-01-01

    In slope stability analysis, the limit equilibrium method is usually used to calculate the safety factor of a slope based on the Mohr-Coulomb criterion. However, the Mohr-Coulomb criterion is restricted in its description of rock mass. To overcome its shortcomings, this paper combines the Hoek-Brown criterion with the limit equilibrium method and proposes an equation for calculating the safety factor of a slope with the limit equilibrium method under the Hoek-Brown criterion through the equivalent cohesive strength and friction angle. Moreover, this paper investigates the impact of the Hoek-Brown parameters on the safety factor of the slope, which reveals a linear relation between the equivalent cohesive strength and the weakening factor D, but nonlinear relations between the equivalent cohesive strength and the Geological Strength Index (GSI), the uniaxial compressive strength of intact rock σci, and the intact-rock parameter mi. The relation between the friction angle and all Hoek-Brown parameters is nonlinear. With the increase of D, the safety factor of the slope F decreases linearly; with the increase of GSI, F increases nonlinearly; when σci is relatively small, the relation between F and σci is nonlinear, but when σci is relatively large, the relation is linear; with the increase of mi, F decreases first and then increases.
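    The parameters varied above (GSI, σci, mi, D) enter through the standard generalised Hoek-Brown relations (2002 edition), sketched below; the paper's equivalent-Mohr-Coulomb fitting and limit-equilibrium safety-factor equation are not reproduced here.

```python
# Generalised Hoek-Brown parameters and strength envelope. Standard 2002
# relations; numerical inputs are illustrative only.
import math

def hoek_brown_params(GSI, mi, D):
    """Return (mb, s, a) for the generalised Hoek-Brown criterion."""
    mb = mi * math.exp((GSI - 100.0) / (28.0 - 14.0 * D))
    s = math.exp((GSI - 100.0) / (9.0 - 3.0 * D))
    a = 0.5 + (math.exp(-GSI / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    return mb, s, a

def hb_strength(sigma3, sigma_ci, GSI, mi, D):
    """Major principal stress at failure: s1 = s3 + sci*(mb*s3/sci + s)**a."""
    mb, s, a = hoek_brown_params(GSI, mi, D)
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

# Increasing the disturbance factor D weakens the rock mass (all in MPa):
s1_intact = hb_strength(1.0, sigma_ci=50.0, GSI=60.0, mi=10.0, D=0.0)
s1_blast = hb_strength(1.0, sigma_ci=50.0, GSI=60.0, mi=10.0, D=1.0)
```

    The paper's equivalent cohesion and friction angle come from fitting a Mohr-Coulomb line to this curved envelope over the relevant confining-stress range, which is why they depend nonlinearly on GSI, σci and mi.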

  8. Slope Stability Analysis Using Limit Equilibrium Method in Nonlinear Criterion

    PubMed Central

    Lin, Hang; Zhong, Wenwen; Xiong, Wei; Tang, Wenyu

    2014-01-01

    In slope stability analysis, the limit equilibrium method is usually used to calculate the safety factor of a slope based on the Mohr-Coulomb criterion. However, the Mohr-Coulomb criterion is restricted in its description of rock mass. To overcome its shortcomings, this paper combines the Hoek-Brown criterion with the limit equilibrium method and proposes an equation for calculating the safety factor of a slope with the limit equilibrium method under the Hoek-Brown criterion through the equivalent cohesive strength and friction angle. Moreover, this paper investigates the impact of the Hoek-Brown parameters on the safety factor of the slope, which reveals a linear relation between the equivalent cohesive strength and the weakening factor D, but nonlinear relations between the equivalent cohesive strength and the Geological Strength Index (GSI), the uniaxial compressive strength of intact rock σci, and the intact-rock parameter mi. The relation between the friction angle and all Hoek-Brown parameters is nonlinear. With the increase of D, the safety factor of the slope F decreases linearly; with the increase of GSI, F increases nonlinearly; when σci is relatively small, the relation between F and σci is nonlinear, but when σci is relatively large, the relation is linear; with the increase of mi, F decreases first and then increases. PMID:25147838

  9. Bayes factors for the linear ballistic accumulator model of decision-making.

    PubMed

    Evans, Nathan J; Brown, Scott D

    2018-04-01

    Evidence accumulation models of decision-making have led to advances in several different areas of psychology. These models provide a way to integrate response time and accuracy data, and to describe performance in terms of latent cognitive processes. Testing important psychological hypotheses using cognitive models requires a method for making inferences about different versions of the models, which assume that different parameters cause the observed effects. The task of model-based inference using noisy data is difficult, and has proven especially problematic with current model selection methods based on parameter estimation. We provide a method for computing Bayes factors through Monte-Carlo integration for the linear ballistic accumulator (LBA; Brown and Heathcote, 2008), a widely used evidence accumulation model. Bayes factors are used frequently for inference with simpler statistical models, and they do not require parameter estimation. In order to overcome the computational burden of estimating Bayes factors via brute-force integration, we exploit general-purpose graphical processing units; we provide free code for this. This approach allows estimation of Bayes factors via Monte-Carlo integration within a practical time frame. We demonstrate the method using both simulated and real data. We investigate the stability of the Monte-Carlo approximation, and the LBA's inferential properties, in simulation studies.
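    Monte-Carlo estimation of a Bayes factor reduces to estimating each model's marginal likelihood as the average likelihood of the data over prior draws, then taking the ratio. A one-parameter Gaussian toy problem stands in for the LBA below, whose likelihood is far more expensive (hence the paper's GPU implementation):

```python
# Bayes factor via brute-force Monte-Carlo marginal likelihoods on a toy
# Gaussian model. The LBA likelihood is replaced by a normal likelihood.
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(loc=1.0, scale=1.0, size=50)    # generated with mean 1

def log_lik(mu, x):
    """Gaussian log-likelihood of x for each candidate mean in mu."""
    return -0.5 * np.sum((x[None, :] - mu[:, None]) ** 2 + np.log(2 * np.pi), axis=1)

def log_marginal_likelihood(prior_mean, n_draws=100_000):
    mu = rng.normal(loc=prior_mean, scale=1.0, size=n_draws)   # prior draws
    ll = log_lik(mu, data)
    m = ll.max()
    return m + np.log(np.mean(np.exp(ll - m)))    # log-mean-exp for stability

# Model 1 centres its prior near the truth; Model 2 far from it.
bf_12 = np.exp(log_marginal_likelihood(1.0) - log_marginal_likelihood(5.0))
```

    The log-mean-exp step matters in practice: raw likelihoods underflow long before the Monte-Carlo average stabilises, which is one of the numerical issues the paper's implementation has to manage at scale.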

  10. Assessing and improving health in the workplace: an integration of subjective and objective measures with the STress Assessment and Research Toolkit (St.A.R.T.) method.

    PubMed

    Panari, Chiara; Guglielmi, Dina; Ricci, Aurora; Tabanelli, Maria Carla; Violante, Francesco Saverio

    2012-09-20

    The aim of this work was to introduce a new method combining subjective and objective measures to assess psychosocial risk factors at work and improve workers' health and well-being. Most research on work-related stress relies on self-report measures, and this work represents the first methodology capable of integrating the two sources of data. The integrated method, St.A.R.T. (STress Assessment and Research Toolkit), was used to assess psychosocial risk factors and two health outcomes: a self-report questionnaire combined with a structured observational checklist was administered to 113 workers from an Italian retail company. The data showed a correlation between subjective ratings and observational-checklist ratings for psychosocial risk factors related to the work context, such as customer relationship management and customer queues. Conversely, the factors related to work content (workload and boredom) showed a discrepancy between the subjective and objective measures. Furthermore, subjective measures of psychosocial risk factors were more predictive of workers' psychological health and exhaustion than the observational ratings, although the objective measures influenced the two health outcomes in different ways. It is important to integrate self-reported assessment of stressors with objective measures for a better understanding of workers' conditions in the workplace. The method presented offers a useful way to combine the two kinds of measures and to differentiate the impact of psychosocial risk factors related to work content and context on workers' health.

  11. Comparison of calculation methods for estimating annual carbon stock change in German forests under forest management in the German greenhouse gas inventory.

    PubMed

    Röhling, Steffi; Dunger, Karsten; Kändler, Gerald; Klatt, Susann; Riedel, Thomas; Stümer, Wolfgang; Brötz, Johannes

    2016-12-01

    The German greenhouse gas inventory in the land-use change sector depends strongly on national forest inventory data. As these data were collected periodically (in 1987, 2002, 2008 and 2012), the emission time series shows several "jumps" due to biomass stock change, especially between 2001 and 2002 and between 2007 and 2008, while within each period emissions appear constant because periodic average emission factors are applied. This does not reflect inter-annual variability, which would be expected since the drivers of carbon stock change fluctuate between years. Additional data available on an annual basis should therefore be introduced into the emission inventory calculations to obtain more plausible time series. This article explores the possibility of an annual rather than periodic approach to calculating emission factors with the given data, thus smoothing the trajectory of the time series for emissions from forest biomass. Two approaches for estimating annual changes from periodic data are introduced: the so-called logging factor method and the growth factor method. The logging factor method incorporates annual logging data to project annual values from periodic values; it is less complex to implement than the growth factor method, which additionally brings growth data into the calculations. Calculation of the input variables is based on sound statistical methodology and on periodically collected data that cannot be altered, so a discontinuous trajectory of emissions over time remains even after the adjustments. It is intended to adopt this approach in German greenhouse gas reporting in order to meet the request for annually adjusted values.
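
    The allocation idea behind the logging factor method, spreading a periodic stock change over individual years in proportion to annual logging statistics, can be sketched as follows; the proportional-allocation rule and the numbers are illustrative assumptions, not the inventory's exact algorithm:

```python
def annualize(periodic_total, annual_logging):
    """Split a periodic carbon stock change across years in proportion
    to annual logging volumes (illustrative 'logging factor' allocation)."""
    total_logging = sum(annual_logging)
    return [periodic_total * v / total_logging for v in annual_logging]

# Hypothetical periodic loss of 12.0 Mt C over six years, with annual
# harvest volumes (in Mm^3) driving the year-by-year allocation:
annual = annualize(12.0, [50.0, 55.0, 60.0, 70.0, 45.0, 20.0])
```

    The annual values sum back to the periodic total, so the inventory's periodic constraint is preserved while the series gains inter-annual structure.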

  12. Applying parallel factor analysis and Tucker-3 methods on sensory and instrumental data to establish preference maps: case study on sweet corn varieties.

    PubMed

    Gere, Attila; Losó, Viktor; Györey, Annamária; Kovács, Sándor; Huzsvai, László; Nábrádi, András; Kókai, Zoltán; Sipos, László

    2014-12-01

    Traditional internal and external preference mapping methods are based on principal component analysis (PCA); parallel factor analysis (PARAFAC) and Tucker-3 methods, however, may be a better choice. To evaluate the methods, preference maps of eight sweet corn varieties were established using PARAFAC and Tucker-3, with instrumental data also integrated into the maps. The triplot created by the PARAFAC model explains better how odour is separated from texture or appearance, and how some varieties are separated from others. Internal and external preference maps were created using PARAFAC and Tucker-3 models employing both sensory (trained panel and consumers) and instrumental parameters simultaneously. Triplots of the applied three-way models have a competitive advantage over the traditional biplots of PCA-based external preference maps. The PARAFAC and Tucker-3 solutions are very similar in the interpretation of the first and third factors; the main difference lies in the second factor, which differentiated the attributes better. Consumers who prefer 'super sweet' varieties (placing great emphasis on taste) are much younger, have significantly higher incomes, and buy sweet corn products rarely (once a month). Consumers who consume sweet corn products mainly for their texture and appearance are significantly older and include a higher proportion of men. © 2014 Society of Chemical Industry.

  13. Replace-approximation method for ambiguous solutions in factor analysis of ultrasonic hepatic perfusion

    NASA Astrophysics Data System (ADS)

    Zhang, Ji; Ding, Mingyue; Yuchi, Ming; Hou, Wenguang; Ye, Huashan; Qiu, Wu

    2010-03-01

    Factor analysis is an efficient technique for the analysis of dynamic structures in medical image sequences and has recently been used in contrast-enhanced ultrasound (CEUS) of hepatic perfusion. Time-intensity curves (TICs) extracted by factor analysis can provide much more diagnostic information for radiologists and improve the diagnostic rate of focal liver lesions (FLLs). However, a major drawback of factor analysis of dynamic structures (FADS) is the nonuniqueness of the result when only the non-negativity criterion is used. In this paper, we propose a new replace-approximation method, based on apex-seeking, for ambiguous FADS solutions. Because different structures partially overlap, the factor curves are assumed to be approximately replaceable by curves present in the medical image sequence, so finding optimal curves is the key point of the technique. No matter how many structures are assumed, our method always begins by seeking apexes in the one-dimensional space onto which the original high-dimensional data are mapped. Having found two stable apexes in one-dimensional space, the method can ascertain the third, and the process continues until all structures are found. The technique was tested on two blood perfusion phantoms and compared to two variants of the apex-seeking method; it outperformed both variants on region-of-interest measurements from the phantom data. It can be applied to estimating TICs derived from CEUS images and to separating different physiological regions in hepatic perfusion.

  14. Seismic analysis for translational failure of landfills with retaining walls.

    PubMed

    Feng, Shi-Jin; Gao, Li-Ya

    2010-11-01

    In the seismic impact zone, seismic force can be a major triggering mechanism for translational failures of landfills. The scope of this paper is to develop a three-part wedge method for seismic analysis of translational failures of landfills with retaining walls, from which an approximate solution for the factor of safety can be calculated. Unlike previous conventional limit equilibrium methods, the new method is capable of revealing the effects of both the solid waste shear strength and the retaining wall on translational failures of landfills during earthquakes. Parameter studies of the developed method show that the factor of safety decreases as the seismic coefficient increases, while it increases quickly with the minimum friction angle beneath the waste mass for various horizontal seismic coefficients. Increasing the minimum friction angle beneath the waste mass appears to be more effective than any other parameter for increasing the factor of safety under the considered conditions; thus, selecting liner materials with a higher friction angle will considerably reduce the potential for translational failures of landfills during earthquakes. The factor of safety also increases gradually with the height of the retaining wall for various horizontal seismic coefficients: a higher retaining wall is beneficial to the seismic stability of the landfill, and simply ignoring the retaining wall leads to serious underestimation of the factor of safety. An approximate solution for the yield acceleration coefficient of the landfill is also presented based on the proposed method. Copyright © 2010 Elsevier Ltd. All rights reserved.
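
    The qualitative trends reported (the factor of safety falling with the seismic coefficient and rising with the friction angle) can be reproduced with the simplest pseudo-static model, a single frictional wedge on an inclined plane. This toy sketch is far simpler than the paper's three-part wedge method and is offered only as an illustration:

```python
import math

def pseudo_static_fs(phi_deg, beta_deg, kh):
    """Factor of safety of a cohesionless block on a plane inclined at beta,
    with horizontal seismic coefficient kh (pseudo-static analysis).
    Forces are expressed per unit weight of the block."""
    phi, beta = math.radians(phi_deg), math.radians(beta_deg)
    normal = math.cos(beta) - kh * math.sin(beta)   # resisting normal force
    driving = math.sin(beta) + kh * math.cos(beta)  # driving shear force
    return normal * math.tan(phi) / driving

fs_static = pseudo_static_fs(phi_deg=30, beta_deg=20, kh=0.0)
fs_seismic = pseudo_static_fs(phi_deg=30, beta_deg=20, kh=0.15)
```

    Even in this minimal model, adding a horizontal seismic coefficient lowers the factor of safety, matching the parameter study described above.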

  15. Factor Analysis via Components Analysis

    ERIC Educational Resources Information Center

    Bentler, Peter M.; de Leeuw, Jan

    2011-01-01

    When the factor analysis model holds, component loadings are linear combinations of factor loadings, and vice versa. This interrelation permits us to define new optimization criteria and estimation methods for exploratory factor analysis. Although this article is primarily conceptual in nature, an illustrative example and a small simulation show…

  16. Secondary School Students' Views of Inhibiting Factors in Seeking Counselling

    ERIC Educational Resources Information Center

    Chan, Stephanie; Quinn, Philip

    2012-01-01

    This study examines secondary school students' perceptions of inhibiting factors in seeking counselling. Responses to a questionnaire completed by 1346 secondary school students were analysed using quantitative and qualitative methods. Exploratory factor analysis highlighted that within 21 pre-defined inhibiting factors, items loaded strongly on…

  17. 21 CFR 113.100 - Processing and production records.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... critical factors specified in the scheduled process shall also be recorded. In addition, the following... preservation methods wherein critical factors such as water activity are used in conjunction with thermal... critical factors, as well as other critical factors, and results of aw determinations. (7) Other systems...

  18. The Infinitesimal Jackknife with Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  19. Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K

    2011-01-01

    To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069

  20. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
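
    The variance ratio described here is the same quantity that parameterizes classical errors-in-variables (Deming) regression, which admits a closed-form slope. The sketch below shows that related, standard estimator; it is not NASA's modified-least-squares formulation itself:

```python
def deming_slope(x, y, delta=1.0):
    """Errors-in-variables regression slope, with
    delta = var(response error) / var(measurement error)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    # Closed-form Deming slope; reduces to orthogonal regression at delta = 1.
    return ((syy - delta * sxx + ((syy - delta * sxx) ** 2
             + 4 * delta * sxy ** 2) ** 0.5) / (2 * sxy))

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 2.1, 3.9, 6.1, 7.9]   # roughly y = 2x with noise in both variables
slope = deming_slope(x, y, delta=1.0)
```

    Unlike ordinary least squares, the estimate incorporates the stated ratio of error variances, which is exactly the knowledge the abstract argues should enter the fit.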

  1. Dysmenorrhea Characteristics of Female Students of Health School and Affecting Factors and Their Knowledge and Use of Complementary and Alternative Medicine Methods.

    PubMed

    Midilli, Tulay Sagkal; Yasar, Eda; Baysal, Ebru

    2015-01-01

    The purpose of this study was to examine the menstruation and dysmenorrhea characteristics of health school students, the factors affecting dysmenorrhea, and the knowledge and use of complementary and alternative medicine (CAM) methods among students with dysmenorrhea. This is a descriptive study; analyses included numbers, percentages, means, Pearson χ² tests, and logistic regression. A total of 488 female students participated in the research, and 87.7% (n = 428) of all students experienced dysmenorrhea. A family history of dysmenorrhea and regular menstrual cycles were found to be dysmenorrhea-affecting factors (P < .05). Seven of 10 students with dysmenorrhea used CAM methods; heat application was the CAM method most commonly used, and best known, by the students for dysmenorrhea management. Students who experienced severe pain used analgesics (P < .05) and CAM methods (P < .05).

  2. [Methods of the multivariate statistical analysis of so-called polyetiological diseases using the example of coronary heart disease].

    PubMed

    Lifshits, A M

    1979-01-01

    A general characterization of multivariate statistical analysis (MSA) is given, along with methodological premises and criteria for selecting an MSA method adequate for pathoanatomic investigations of the epidemiology of multicausal diseases. The experience of using MSA with computers and standard computing programs in studies of coronary artery atherosclerosis, on material from 2060 autopsies, is described. The combined use of four MSA methods (sequential, correlational, regressional, and discriminant) made it possible to quantify the contribution of each of the eight examined risk factors to the development of atherosclerosis. The most important factors were found to be age, arterial hypertension, and heredity. Occupational hypodynamia and excess weight were more important in men, whereas diabetes mellitus was more important in women. Accounting for this combination of risk factors by MSA methods provides a more reliable prognosis of the likelihood of fatal coronary heart disease than prognosis based on the degree of coronary atherosclerosis alone.

  3. A novel edge-preserving nonnegative matrix factorization method for spectral unmixing

    NASA Astrophysics Data System (ADS)

    Bao, Wenxing; Ma, Ruishi

    2015-12-01

    Spectral unmixing is one of the key techniques for identifying and classifying materials in hyperspectral image processing. A novel robust spectral unmixing method based on nonnegative matrix factorization (NMF) is presented in this paper, using an edge-preserving function as the hypersurface cost function to be minimized. To minimize this cost function, updating rules are constructed for the end-member signature matrix and the abundance fractions, and the two are updated alternately. For evaluation, both synthetic and real data were used: synthetic data based on end-members from the USGS digital spectral library, and the AVIRIS Cuprite dataset as real data. The spectral angle distance (SAD) and abundance angle distance (AAD) were used to assess the performance of the proposed method. The experimental results show that the method achieves better results and good accuracy for spectral unmixing compared with existing methods.
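
    The basic NMF machinery underlying such methods can be sketched with the classic Lee-Seung multiplicative updates for the plain Euclidean cost; the paper's edge-preserving hypersurface cost and its tailored update rules are not reproduced here:

```python
import random

def matmul(A, B):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, r, iters=500, eps=1e-9):
    """Factor nonnegative V (m x n) into W (m x r) and H (r x n) by
    minimizing ||V - WH||^2 with Lee-Seung multiplicative updates."""
    random.seed(0)
    m, n = len(V), len(V[0])
    W = [[random.random() for _ in range(r)] for _ in range(m)]
    H = [[random.random() for _ in range(n)] for _ in range(r)]
    for _ in range(iters):
        WtV = matmul(transpose(W), V)
        WtWH = matmul(transpose(W), matmul(W, H))
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(n)]
             for i in range(r)]
        VHt = matmul(V, transpose(H))
        WHHt = matmul(W, matmul(H, transpose(H)))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(r)]
             for i in range(m)]
    return W, H

# A rank-2 nonnegative matrix is recovered almost exactly with r = 2:
V = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [1.0, 1.0, 1.0]]
W, H = nmf(V, r=2)
WH = matmul(W, H)
err = sum((V[i][j] - WH[i][j]) ** 2 for i in range(3) for j in range(3))
```

    The multiplicative form keeps W and H nonnegative throughout, which is the property spectral unmixing relies on for physically meaningful end-members and abundances.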

  4. Teaching learning methods of an entrepreneurship curriculum.

    PubMed

    Esmi, Keramat; Marzoughi, Rahmatallah; Torkzadeh, Jafar

    2015-10-01

    One of the most significant elements of entrepreneurship curriculum design is the teaching-learning method, which plays a key role in studies of such curricula. It is the teaching method, the systematic, organized and logical way of delivering lessons, that should be consistent with entrepreneurship goals and contents, and should be developed according to the learners' needs. The current study therefore aimed to introduce appropriate, modern, and effective methods of teaching entrepreneurship and to validate them. This is a mixed-methods study of the sequential exploratory type, conducted in two stages: (a) developing teaching methods for an entrepreneurship curriculum, and (b) validating the developed framework. Data were collected through triangulation (study of documents, investigation of the theoretical background and the literature, and semi-structured interviews with key experts). Since the literature on this topic is very rich and the views of the key experts are extensive, directed and summative content analysis was used. In the second stage, the qualitative credibility of the research findings was established using qualitative validation criteria (credibility, confirmability, and transferability) and various techniques, and a reliability test was applied to the qualitative component. Quantitative validation of the developed framework was conducted using exploratory and confirmatory factor analysis and Cronbach's alpha. The data were gathered by distributing a three-aspect questionnaire (direct-presentation, interactive, and practical-operational teaching methods) with 29 items among 90 curriculum scholars; the target population was selected by means of purposive sampling and a representative sample.
    Exploratory factor analysis showed that a three-factor structure appropriately describes the elements of teaching-learning methods of an entrepreneurship curriculum. The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.72, and Bartlett's test was significant at the 0.0001 level. Except for the internship element, all items had factor loadings higher than 0.3. The results of confirmatory factor analysis confirmed the appropriateness of the model, and the criteria for qualitative accreditation were acceptable. The developed model can help instructors select an appropriate method of entrepreneurship teaching and verify that the teaching is on the right path. The model is comprehensive, covering the effective teaching methods in entrepreneurship education, and is grounded in the qualities, conditions, and requirements of higher education institutions in the Iranian cultural environment.

  5. Pier and contraction scour prediction in cohesive soils at selected bridges in Illinois

    USGS Publications Warehouse

    Straub, Timothy D.; Over, Thomas M.

    2010-01-01

    This report presents the results of testing the Scour Rate In Cohesive Soils-Erosion Function Apparatus (SRICOS-EFA) method for estimating scour depth in cohesive soils at 15 bridges in Illinois. The SRICOS-EFA method for complex pier and contraction scour in cohesive soils has two primary components: calculation of the maximum contraction and pier scour (Zmax), and an integrated approach that considers a time factor, soil properties, and continued interaction between contraction and pier scour (SRICOS runs). The SRICOS-EFA results were compared to scour predictions for non-cohesive soils based on Hydraulic Engineering Circular No. 18 (HEC-18). On average, the HEC-18 method predicted greater scour depths than the SRICOS-EFA method. A reduction factor was determined for each HEC-18 result to make it match the maximum of three types of SRICOS run results. The unconfined compressive strength (Qu) of the soil was then matched with the reduction factor, the results were ranked in order of increasing Qu, and reduction factors were grouped by Qu and applied to each bridge site and soil. These results, and comparison with the SRICOS Zmax calculation, show that fewer than half of the reduction-factor method values were the lowest estimate of scour, whereas the Zmax method values were the lowest estimate for over half. A tiered approach to predicting pier and contraction scour was developed, with four levels numbered in order of complexity; the fourth level is a full SRICOS-EFA analysis. Levels 1 and 2 involve the reduction factors and the Zmax calculation and can be completed without EFA data; Level 3 requires some surrogate EFA data; and Levels 3 and 4 require streamflow as input to SRICOS. Estimation techniques for both surrogate EFA data and streamflow data were developed.

  6. The Brain-Derived Neurotrophic Factor Val66Met Polymorphism, Delivery Method, Birth Weight, and Night Sleep Duration as Determinants of Obesity in Vietnamese Children of Primary School Age.

    PubMed

    Tuyet, Le Thi; Nhung, Bui Thi; Dao, Duong Thi Anh; Hanh, Nguyen Thi Hong; Tuyen, Le Danh; Binh, Tran Quang; Thuc, Vu Thi Minh

    2017-10-01

    Obesity is a complex disease involving both environmental and genetic factors in its pathogenesis. Several studies have identified multiple obesity-associated loci in many populations, but their contribution to obesity in the Vietnamese population, especially in children, is not fully described. This study investigated the association of obesity with the Val66Met polymorphism in the brain-derived neurotrophic factor (BDNF) gene, delivery method, birth weight, and lifestyle factors in Vietnamese primary school children. A case-control study was conducted on 559 children aged 6-11 years (278 obese cases and 281 normal controls). Obesity was classified using both the International Obesity Task Force (IOTF, 2000) and the World Health Organization (WHO, 2007) criteria. Lifestyle factors, birth delivery method, and birth weight of the children were reported by parents. The BDNF genotype was analyzed using the polymerase chain reaction-restriction fragment length polymorphism method. Associations were evaluated by multivariate logistic regression and cross-validated by the Bayesian model averaging method. The most significant independent factors for obesity were delivery method (cesarean section vs. vaginal delivery, β = 0.56, p = 0.007), birth weight (>3500 to <4000 g vs. 2500-3500 g, β = 0.52, p = 0.035; ≥4000 g vs. 2500-3500 g, β = 1.06, p = 0.015), night sleep duration (<8 h/day vs. ≥8 h/day, β = 0.99, p < 0.0001), and the BDNF Val66Met polymorphism (AA and GG vs. AG, β = 0.38, p = 0.039). The study suggests significant associations of delivery method, birth weight, night sleep duration, and the BDNF Val66Met polymorphism with obesity in Vietnamese primary school children.

  7. Determination of Parachute Joint Factors using Seam and Joint Testing

    NASA Technical Reports Server (NTRS)

    Mollmann, Catherine

    2015-01-01

    This paper details the methodology for determining the joint factor for all parachute components. The method has been successfully implemented on the Capsule Parachute Assembly System (CPAS) for the NASA Orion crew module to determine the margin of safety for each component under peak loads. Also discussed are the concepts behind the joint factor and what drives the loss of material strength at joints. The joint factor is defined as a "loss in joint strength...relative to the basic material strength" that occurs when "textiles are connected to each other or to metals." During the CPAS engineering development phase, a conservative joint factor of 0.80 was assumed for each parachute component. To refine this factor and eliminate excess conservatism, a seam and joint testing program was implemented as part of the structural validation. This program split each of the parachute structural joints into discrete tensile tests designed to duplicate the loading of each joint. Breaking-strength data collected from destructive pull testing was then used to calculate the joint factor in the form of an efficiency. Joint efficiency, the percentage of the base material strength that remains after degradation due to sewing or interaction with other components, is used interchangeably with joint factor in this paper. Parachute materials vary in type (mainly cord, tape, webbing, and cloth), which requires different test fixtures and joint-sample construction methods; this paper defines guidelines for designing and testing samples based on materials and test goals. Using the test methodology and analysis approach detailed here, the minimum joint factor for each parachute component can be formulated. The joint factors can then be used to calculate the design factor and margin of safety for that component, a critical part of the design verification process.
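
    The bookkeeping from tested joint efficiency to margin of safety can be written out in a few lines. The numbers and the design-factor convention below are illustrative assumptions, not CPAS values:

```python
def joint_factor(joint_break_strength, base_material_strength):
    """Joint efficiency: fraction of the base material strength remaining
    after degradation (sewing, interaction with other components)."""
    return joint_break_strength / base_material_strength

def margin_of_safety(base_strength, jf, design_factor, limit_load):
    """MS = allowable / (design factor * limit load) - 1; positive passes."""
    return (base_strength * jf) / (design_factor * limit_load) - 1.0

# Hypothetical pull-test result: a 1000-lb webbing breaks at 880 lb when sewn.
jf = joint_factor(joint_break_strength=880.0, base_material_strength=1000.0)
ms = margin_of_safety(base_strength=1000.0, jf=jf, design_factor=1.6,
                      limit_load=400.0)
```

    A measured efficiency of 0.88 would relax the conservative 0.80 assumption mentioned above, directly increasing the computed margin.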

  8. Poster — Thur Eve — 03: Application of the non-negative matrix factorization technique to [11C]-DTBZ dynamic PET data for the early detection of Parkinson's disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Dong-Chang; Jans, Hans; McEwan, Sandy

    2014-08-15

    In this work, a class of non-negative matrix factorization (NMF) technique known as alternating non-negative least squares, combined with the projected gradient method, is used to analyze twenty-five [11C]-DTBZ dynamic PET/CT brain data sets. For each subject, a two-factor model is assumed, and two factors representing the striatum (factor 1) and the non-striatum (factor 2) tissues are extracted using the proposed NMF technique and the commercially available factor analysis software "Pixies". The extracted factor 1 and 2 curves represent the binding site of the radiotracer and describe the uptake and clearance of the radiotracer by soft tissues in the brain, respectively. The proposed NMF technique uses prior information about the dynamic data to obtain sample time-activity curves representing the striatum and the non-striatum tissues; these curves are then used for "warm" starting the optimization. Factor solutions from the two methods are compared graphically and quantitatively. In healthy subjects, radiotracer uptake by factors 1 and 2 is approximately 35-40% and 60-65%, respectively. The solutions are also used to develop a factor-based metric for the detection of early, untreated Parkinson's disease; the metric stratifies healthy subjects from suspected Parkinson's patients (based on the graphical method). The analysis shows that both techniques produce comparable results with similar computational time. The "semi-automatic" approach used by the NMF technique allows clinicians to manually set a starting condition for "warm" starting the optimization, facilitating control and efficient interaction with the data.

  9. Statistical Determination of Rainfall-Runoff Erosivity Indices for Single Storms in the Chinese Loess Plateau

    PubMed Central

    Zheng, Mingguo; Chen, Xiaoan

    2015-01-01

    Correlation analysis is popular in erosion- and earth-related studies; however, few studies compare correlations on the basis of statistical testing, which should be conducted to determine the statistical significance of observed sample differences. This study aims to statistically determine the erosivity index of single storms in the Chinese Loess Plateau, which requires comparing a large number of dependent correlations between rainfall-runoff factors and soil loss. Data observed at four gauging stations and five runoff experimental plots are presented. Based on Meng's test, which is widely used for comparing correlations between a dependent variable and a set of independent variables, two methods are proposed. The first removes factors that are poorly correlated with soil loss in a stepwise way, while the second performs pairwise comparisons adjusted using the Bonferroni correction. Among 12 rainfall factors, I30 (the maximum 30-minute rainfall intensity) is suggested for use as the rainfall erosivity index, although I30 is as strongly correlated with soil loss as I20, EI10 (the product of the rainfall kinetic energy E and I10), EI20 and EI30 are. Runoff depth (total runoff volume normalized to drainage area) is more strongly correlated with soil loss than all other examined rainfall-runoff factors, including I30, peak discharge and many combined factors. Moreover, sediment concentrations of major sediment-producing events are independent of all examined rainfall-runoff factors; as a result, introducing additional factors adds little to the prediction accuracy of the single factor of runoff depth. Hence, runoff depth should be the best erosivity index at scales from plots to watersheds. Our findings can facilitate predictions of soil erosion in the Loess Plateau, and our methods provide a valuable tool for selecting a predictor from among a number of variables in terms of correlations. PMID:25781173
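
    Meng, Rosenthal and Rubin's (1992) z-test for two dependent correlations that share a variable, the test the study builds on, can be sketched as follows. This is a sketch of the commonly cited formula; consult the original paper before relying on it:

```python
import math

def meng_z(r1, r2, rx, n):
    """Meng-Rosenthal-Rubin z for comparing dependent correlations
    r1 = corr(x1, y) and r2 = corr(x2, y), where rx = corr(x1, x2)
    and n is the sample size."""
    z1, z2 = math.atanh(r1), math.atanh(r2)      # Fisher z-transforms
    rbar2 = (r1 ** 2 + r2 ** 2) / 2.0
    f = min((1.0 - rx) / (2.0 * (1.0 - rbar2)), 1.0)   # f is capped at 1
    h = (1.0 - f * rbar2) / (1.0 - rbar2)
    return (z1 - z2) * math.sqrt((n - 3.0) / (2.0 * (1.0 - rx) * h))

z = meng_z(r1=0.5, r2=0.3, rx=0.2, n=103)
```

    A |z| above 1.96 would indicate a significant difference at the 5% level; applying such tests pairwise is what motivates the Bonferroni correction used in the study's second method.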


  11. A multifactorial analysis of obesity as CVD risk factor: Use of neural network based methods in a nutrigenetics context

    PubMed Central

    2010-01-01

Background Obesity is a multifactorial trait and an independent risk factor for cardiovascular disease (CVD). The aim of the current work is to study the complex etiology underlying obesity and to identify genetic variations and/or nutrition-related factors that contribute to its variability. To this end, a set of more than 2300 white subjects who participated in a nutrigenetics study was used. For each subject, a total of 63 factors were measured, describing genetic variants related to CVD (24 in total), gender, and nutrition (38 in total), e.g. average daily intake of calories and cholesterol. Each subject was categorized according to body mass index (BMI) as normal (BMI ≤ 25) or overweight (BMI > 25). Two artificial neural network (ANN) based methods were designed and used for the analysis of the available data: i) a multi-layer feed-forward ANN combined with a parameter decreasing method (PDM-ANN), and ii) a multi-layer feed-forward ANN trained by a hybrid method (GA-ANN) combining genetic algorithms with the popular back-propagation training algorithm. Results PDM-ANN and GA-ANN were comparatively assessed in terms of their ability to identify, among the initial 63 variables describing genetic variations, nutrition and gender, the most important factors for classifying a subject into one of the BMI-related classes: normal and overweight. The methods were designed and evaluated using appropriate training and testing sets provided by 3-fold cross-validation (3-CV) resampling. Classification accuracy, sensitivity, specificity and area under the receiver operating characteristic curve were used to evaluate the resulting predictive ANN models. The most parsimonious set of factors was obtained by the GA-ANN method and included gender, six genetic variations and 18 nutrition-related variables. The corresponding predictive model had a mean accuracy of 61.46% on the 3-CV testing sets.
Conclusions The ANN based methods revealed factors that interactively contribute to the obesity trait and provided predictive models with promising generalization ability. In general, the results showed that ANNs and their hybrids can provide useful tools for the study of complex traits in the context of nutrigenetics. PMID:20825661

  12. Using a latent variable model with non-constant factor loadings to examine PM2.5 constituents related to secondary inorganic aerosols.

    PubMed

    Zhang, Zhenzhen; O'Neill, Marie S; Sánchez, Brisa N

    2016-04-01

Factor analysis is a commonly used method of modelling correlated multivariate exposure data. Typically, the measurement model is assumed to have constant factor loadings. However, from our preliminary analyses of the Environmental Protection Agency's (EPA's) PM2.5 fine speciation data, we have observed that the factor loadings for four constituents change considerably in stratified analyses. Since invariance of factor loadings is a prerequisite for valid comparison of the underlying latent variables, we propose a factor model with non-constant factor loadings that change over time and space, modelled using P-splines penalized via the generalized cross-validation (GCV) criterion. The model is implemented using the Expectation-Maximization (EM) algorithm, and we select the multiple spline smoothing parameters by minimizing the GCV criterion with Newton's method during each iteration of the EM algorithm. The algorithm is applied to a one-factor model that includes four constituents. Through bootstrap confidence bands, we find that the factor loading for total nitrate changes across seasons and geographic regions.

  13. Temporally controlled release of multiple growth factors from a self-assembling peptide hydrogel

    NASA Astrophysics Data System (ADS)

    Bruggeman, Kiara F.; Rodriguez, Alexandra L.; Parish, Clare L.; Williams, Richard J.; Nisbet, David R.

    2016-09-01

Protein growth factors have demonstrated great potential for tissue repair, but their inherent instability and large size prevent meaningful presentation to biologically protected nervous tissue. Here, we create a nanofibrous network from a self-assembling peptide (SAP) hydrogel to carry and stabilize the growth factors. We significantly reduced growth factor degradation, increasing their lifespan by over 40 times. To control the temporal release profile, we covalently attached polysaccharide chitosan molecules to the growth factor to increase its interactions with the hydrogel nanofibers, achieving a 4 h delay and demonstrating the potential of this method to provide temporally controlled growth factor delivery. We also describe release-rate-based analysis to examine growth factor delivery in more detail than standard cumulative release profiles allow, and show that the chitosan attachment method provided a more consistent release profile, with a 60% reduction in fluctuations. To prove the potential of this system as a complex growth factor delivery platform, we demonstrate for the first time temporally distinct release of multiple growth factors from a single tissue-specific SAP hydrogel: a significant goal in regenerative medicine.

  14. Comparison of three methods for evaluation of work postures in a truck assembly plant.

    PubMed

    Zare, Mohsen; Biau, Sophie; Brunet, Rene; Roquelaure, Yves

    2017-11-01

This study compared the results of three risk assessment tools (self-reported questionnaire, observational tool, direct measurement method) for the upper limbs and back in a truck assembly plant at two cycle times (11 and 8 min). The weighted Kappa factor showed fair agreement between the observational and direct measurement methods for the arm (0.39) and back (0.47). The weighted Kappa factor for these methods was poor for the neck (0) and wrist (0), but the observed proportional agreement (Po) was 0.78 for the neck and 0.83 for the wrist. The weighted Kappa factor between the questionnaire and direct measurement showed poor or slight agreement (0) for the different body segments at both cycle times. The results revealed moderate agreement between the observational tool and the direct measurement method, and poor agreement between the self-reported questionnaire and direct measurement. Practitioner Summary: This study provides risk exposure measurement by different common ergonomic methods in the field. The results help to develop valid measurements and improve exposure evaluation. Hence, ergonomists and practitioners should apply these methods with caution, or at least be aware of their limitations and sources of error.
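The two agreement statistics used above can be computed in a few lines; the three-category coding and the two raters' assignments below are invented for illustration:

```python
import numpy as np

def weighted_kappa(a, b, n_cat, weights="linear"):
    """Cohen's weighted kappa between two raters' category assignments
    (0..n_cat-1), with linear or quadratic disagreement weights."""
    a, b = np.asarray(a), np.asarray(b)
    conf = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):          # joint (rater A, rater B) frequency table
        conf[i, j] += 1
    conf /= conf.sum()
    idx = np.arange(n_cat)
    w = np.abs(idx[:, None] - idx[None, :]).astype(float)
    if weights == "quadratic":
        w = w**2
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0))  # chance table
    return 1.0 - (w * conf).sum() / (w * expected).sum()

def observed_agreement(a, b):
    """Observed proportional agreement Po: fraction of exact matches."""
    return float(np.mean(np.asarray(a) == np.asarray(b)))

# Hypothetical posture categories from an observational tool vs direct measurement
obs    = [0, 0, 1, 1, 2, 2, 0, 1, 2, 2]
direct = [0, 1, 1, 1, 2, 2, 0, 1, 2, 1]
po = observed_agreement(obs, direct)
kappa = weighted_kappa(obs, direct, 3)
```

A high Po with a low kappa, as reported for the neck and wrist, can occur when one category dominates, so that chance agreement is already high.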

  15. Missing in space: an evaluation of imputation methods for missing data in spatial analysis of risk factors for type II diabetes.

    PubMed

    Baker, Jannah; White, Nicole; Mengersen, Kerrie

    2014-11-20

Spatial analysis is increasingly important for identifying modifiable geographic risk factors for disease. However, spatial health data from surveys are often incomplete, ranging from missing data for only a few variables to missing data for many variables. For spatial analyses of health outcomes, selection of an appropriate imputation method is critical in order to produce the most accurate inferences. We present a cross-validation approach to select between three imputation methods for health survey data with correlated lifestyle covariates, using, as a case study, type II diabetes mellitus (DM II) risk across 71 Queensland Local Government Areas (LGAs). We compare the accuracy of mean imputation to imputation using multivariate normal and conditional autoregressive prior distributions. The most appropriate imputation method depends upon the application and is not necessarily the most complex one; in this application, mean imputation was selected as the most accurate method. Selecting an appropriate imputation method for health survey data, after accounting for spatial correlation and correlation between covariates, allows more complete analysis of geographic risk factors for disease with more confidence in the results to inform public policy decision-making.
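The cross-validation idea, scoring an imputation method by holding out observed entries and measuring reconstruction error, can be sketched as follows for the simplest of the three methods (mean imputation); the data and masking scheme are invented:

```python
import numpy as np

def mean_imputation_rmse(x, mask_frac=0.2, seed=0):
    """Hold out a random fraction of observed entries, impute them with
    column means, and return RMSE on the held-out entries (lower = better)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    holdout = rng.random(x.shape) < mask_frac   # entries to hide
    x_miss = x.copy()
    x_miss[holdout] = np.nan
    col_means = np.nanmean(x_miss, axis=0)      # means of remaining data
    imputed = np.where(np.isnan(x_miss), col_means, x_miss)
    return float(np.sqrt(np.mean((imputed[holdout] - x[holdout]) ** 2)))

# Illustrative covariate matrix (200 survey units, 3 lifestyle covariates)
rng = np.random.default_rng(1)
data = rng.normal(size=(200, 3))
rmse = mean_imputation_rmse(data)
```

Model-based imputations (e.g. a multivariate normal model) would be scored the same way, and the method with the lowest held-out error selected.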

  16. FY17 Status Report on the Initial EPP Finite Element Analysis of Grade 91 Steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messner, M. C.; Sham, T. -L.

This report describes a modification to the elastic-perfectly plastic (EPP) strain limits design method to account for cyclic softening in Gr. 91 steel. The report demonstrates that the unmodified EPP strain limits method described in the current ASME code case is not conservative for materials with substantial cyclic softening behavior, such as Gr. 91 steel. However, the EPP strain limits method can be modified to be conservative for softening materials by using softened isochronous stress-strain curves in place of the standard curves developed from unsoftened creep experiments. The report provides softened curves derived from inelastic material simulations, along with factors describing the transformation of unsoftened curves to a softened state. Furthermore, the report outlines a method for deriving these factors directly from creep/fatigue tests. If the material softening saturates, the proposed EPP strain limits method can be further simplified, providing a methodology based on temperature-dependent softening factors that could be implemented in an ASME code case allowing the use of the EPP strain limits method with Gr. 91. Finally, the report demonstrates the conservatism of the modified method when applied to inelastic simulation results and two-bar experiments.

  17. Illustrated structural application of universal first-order reliability method

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1994-01-01

    The general application of the proposed first-order reliability method was achieved through the universal normalization of engineering probability distribution data. The method superimposes prevailing deterministic techniques and practices on the first-order reliability method to surmount deficiencies of the deterministic method and provide benefits of reliability techniques and predictions. A reliability design factor is derived from the reliability criterion to satisfy a specified reliability and is analogous to the deterministic safety factor. Its application is numerically illustrated on several practical structural design and verification cases with interesting results and insights. Two concepts of reliability selection criteria are suggested. Though the method was developed to support affordable structures for access to space, the method should also be applicable for most high-performance air and surface transportation systems.

  18. Contextual factors affecting autonomy for patients in Iranian hospitals: A qualitative study

    PubMed Central

    Ebrahimi, Hossein; Sadeghian, Efat; Seyedfatemi, Naeimeh; Mohammadi, Eesa; Crowley, Maureen

    2016-01-01

Background: Consideration of patient autonomy is an essential element in individualized, patient-centered, ethical care. Internal and external factors associated with patient autonomy are related to culture, and it is not clear what they are in Iran. The aim of this study was to explore contextual factors affecting the autonomy of patients in Iranian hospitals. Materials and Methods: This was a qualitative study using conventional content analysis methods. Thirty-four participants (23 patients, 9 nurses, and 2 doctors) from three Iranian teaching hospitals, selected using purposive sampling, participated in semi-structured interviews. Unstructured observation and field notes were other methods for data collection. The data were subjected to qualitative content analysis and analyzed using the MAXQDA-10 software. Results: Five categories and sixteen subcategories were identified. The five main categories related to patient autonomy were: Intrapersonal factors, physical health status, supportive family and friends, communication style, and organizational constraints. Conclusions: In summary, this study uncovered contextual factors that the care team, managers, and planners in the health field should target in order to improve patient autonomy in Iranian hospitals. PMID:27186203

  19. Adaptive multi-view clustering based on nonnegative matrix factorization and pairwise co-regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Tianzhen; Wang, Xiumei; Gao, Xinbo

    2018-04-01

Nowadays, many datasets are represented by multiple views, which usually contain both shared and complementary information. Multi-view clustering methods integrate the information from all views to obtain better clustering results. Nonnegative matrix factorization has become an essential and popular tool in clustering methods because of its interpretability. However, existing nonnegative matrix factorization based multi-view clustering algorithms do not consider the disagreement between views and neglect the fact that different views contribute differently to the data distribution. In this paper, we propose a new multi-view clustering method, named adaptive multi-view clustering based on nonnegative matrix factorization and pairwise co-regularization. The proposed algorithm obtains a parts-based representation of the multi-view data by nonnegative matrix factorization. Pairwise co-regularization is then used to measure the disagreement between views. Only one parameter is needed to automatically learn the weight of each view according to its contribution to the data distribution. Experimental results show that the proposed algorithm outperforms several state-of-the-art algorithms for multi-view clustering.
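As a rough illustration of the parts-based factorization this method builds on, here is plain multiplicative-update NMF on a single view; the multi-view version would add a pairwise co-regularization term penalizing disagreement between the views' coefficient matrices (the data, rank and iteration count are arbitrary):

```python
import numpy as np

def nmf(V, k, iters=200, eps=1e-9, seed=0):
    """Multiplicative-update NMF: V (m x n) ~= W (m x k) @ H (k x n),
    with W, H kept nonnegative throughout."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis ("parts")
    return W, H

rng = np.random.default_rng(1)
V = rng.random((20, 15))            # one illustrative nonnegative view
W, H = nmf(V, 4)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In a two-view setting, a co-regularization term such as ||H1 - H2||^2 would be added to each view's objective so the views agree on the clustering structure.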

  20. Reinforcing mechanism of anchors in slopes: a numerical comparison of results of LEM and FEM

    NASA Astrophysics Data System (ADS)

    Cai, Fei; Ugai, Keizo

    2003-06-01

This paper reports the limitation of the conventional Bishop's simplified method for calculating the safety factor of slopes stabilized with anchors, and proposes a new approach for incorporating the reinforcing effect of anchors into the safety factor. The reinforcing effect of anchors can be explained as an additional shearing resistance on the slip surface. A three-dimensional shear strength reduction finite element method (SSRFEM), in which soil-anchor interactions were simulated by three-dimensional zero-thickness elasto-plastic interface elements, was used to calculate the safety factor of slopes stabilized with anchors and to verify the reinforcing mechanism of anchors. The results of SSRFEM were compared with those of the conventional and proposed approaches for Bishop's simplified method for various orientations, positions and spacings of anchors, and for various shear strengths of the soil-grouted body interfaces. For the safety factor, the proposed approach agreed better with SSRFEM than the conventional approach did. The additional shearing resistance can explain the influence of the orientation, position and spacing of anchors, and of the shear strength of the soil-grouted body interfaces, on the safety factor of slopes stabilized with anchors.

  1. Configurations of Common Childhood Psychosocial Risk Factors

    ERIC Educational Resources Information Center

    Copeland, William; Shanahan, Lilly; Costello, E. Jane; Angold, Adrian

    2009-01-01

    Background: Co-occurrence of psychosocial risk factors is commonplace, but little is known about psychiatrically-predictive configurations of psychosocial risk factors. Methods: Latent class analysis (LCA) was applied to 17 putative psychosocial risk factors in a representative population sample of 920 children ages 9 to 17. The resultant class…

  2. 48 CFR 16.104 - Factors in selecting contract types.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Factors in selecting... CONTRACTING METHODS AND CONTRACT TYPES TYPES OF CONTRACTS Selecting Contract Types 16.104 Factors in selecting contract types. There are many factors that the contracting officer should consider in selecting and...

  3. 48 CFR 16.104 - Factors in selecting contract types.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Factors in selecting... CONTRACTING METHODS AND CONTRACT TYPES TYPES OF CONTRACTS Selecting Contract Types 16.104 Factors in selecting contract types. There are many factors that the contracting officer should consider in selecting and...

  4. A GRAPHICAL DIAGNOSTIC METHOD FOR ASSESSING THE ROTATION IN FACTOR ANALYTICAL MODELS OF ATMOSPHERIC POLLUTION. (R831078)

    EPA Science Inventory

Factor analytic tools, such as principal component analysis (PCA) and positive matrix factorization (PMF), suffer from rotational ambiguity in the results: different solutions (factors) provide equally good fits to the measured data. The PMF model imposes non-negativity of both...

  5. Redefining the WISC-R: Implications for Professional Practice and Public Policy.

    ERIC Educational Resources Information Center

    Macmann, Gregg M.; Barnett, David W.

    1992-01-01

    The factor structure of the Wechsler Intelligence Scale for Children (Revised) was examined in the standardization sample using new methods of factor analysis. The substantial overlap across factors was most parsimoniously represented by a single general factor. Implications for public policy regarding the purposes and outcomes of special…

  6. Testing for measurement invariance and latent mean differences across methods: interesting incremental information from multitrait-multimethod studies

    PubMed Central

    Geiser, Christian; Burns, G. Leonard; Servera, Mateu

    2014-01-01

Models of confirmatory factor analysis (CFA) are frequently applied to examine the convergent validity of scores obtained from multiple raters or methods in so-called multitrait-multimethod (MTMM) investigations. We show that interesting incremental information about method effects can be gained from including mean structures and tests of measurement invariance (MI) across methods in MTMM models. We present a modeling framework for testing MI in the first step of a CFA-MTMM analysis. We also discuss the relevance of MI in the context of four more complex CFA-MTMM models with method factors. We focus on three recently developed multiple-indicator CFA-MTMM models for structurally different methods [the correlated traits-correlated (methods – 1), latent difference, and latent means models; Geiser et al., 2014a; Pohl and Steyer, 2010; Pohl et al., 2008] and one model for interchangeable methods (Eid et al., 2008). We demonstrate that some of these models require or imply MI by definition for a proper interpretation of trait or method factors, whereas others do not, and explain why MI may or may not be required in each model. We show that in the model for interchangeable methods, testing for MI is critical for determining whether methods can truly be seen as interchangeable. We illustrate the theoretical issues in an empirical application to an MTMM study of attention deficit and hyperactivity disorder (ADHD) with mother, father, and teacher ratings as methods. PMID:25400603

  7. Knowledge of risk factors and early detection methods and practices towards breast cancer among nurses in Indira Gandhi Medical College, Shimla, Himachal Pradesh, India.

    PubMed

    Fotedar, Vikas; Seam, Rajeev K; Gupta, Manoj K; Gupta, Manish; Vats, Siddharth; Verma, Sunita

    2013-01-01

Breast cancer is an increasing health problem in India. Screening for early detection should lead to a reduction in mortality from the disease. It is known that motivation by nurses influences the uptake of screening methods by women. This study aimed to investigate knowledge of breast cancer risk factors and early detection methods, and the practice of screening, among nurses in Indira Gandhi Medical College, Shimla, Himachal Pradesh. A cross-sectional study was conducted using a self-administered questionnaire to assess knowledge of breast cancer risk factors and early detection methods, and the practice of screening methods, among 457 nurses working in the college. Data were analysed using SPSS version 16, with the chi-square test used as the test of significance. The response rate of the study was 94.9%. Average knowledge of breast cancer risk factors across the study population was 49%: 10.5% of nurses had poor knowledge, 25.2% good knowledge, 45% very good knowledge and 16.3% excellent knowledge of breast cancer risk factors and early detection methods. The knowledge level was significantly higher among BSc nurses than among nurses with a Diploma. 54% of participants reportedly practice breast self-examination (BSE) at least once every year. Less than one-third reported that they had had a clinical breast examination (CBE) within the past year, and only 7% had ever had a mammogram before this study. Results from this study suggest that frequent continuing medical education programmes on breast cancer at the institutional level are desirable.

  8. Simple method for quantifying microbiologically assisted chloramine decay in drinking water.

    PubMed

    Sathasivan, Arumugam; Fisher, Ian; Kastl, George

    2005-07-15

In a chloraminated drinking water distribution system, monochloramine decays due to chemical and microbiological reactions. For modeling and operational control purposes, it is necessary to know the relative contribution of each type of reaction, but there was no method to quantify these contributions separately. A simple method was developed to do so. It compares the monochloramine decay rates of processed (0.2 µm filtered, or microbiologically inhibited by adding 100 µg/L of silver as silver nitrate) and unprocessed samples under controlled temperature conditions. The term microbial decay factor (Fm) was defined and derived from this method to characterize the relative contribution of microbiologically assisted monochloramine decay to the total monochloramine decay observed in bulk water. Fm is the ratio between the microbiologically assisted monochloramine decay and the chemical decay of a given water sample measured at 20 °C. One possible use of the method is illustrated, in which a service reservoir's bulk and inlet waters were sampled twice and analyzed for both the traditional indicators and the microbial decay factor. The microbial decay factor values alone indicated that more microbiologically assisted monochloramine decay was occurring in one bulk water than in the other. In contrast, traditional nitrification indicators failed to show any difference. Further analysis showed that the microbial decay factor is more sensitive and that it alone can provide an early warning.
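The decay-factor arithmetic can be sketched as follows, assuming first-order monochloramine decay; the time points, concentrations and rate constants are synthetic, chosen only to make the ratio easy to check:

```python
import numpy as np

def decay_rate(t_hours, conc):
    """First-order decay coefficient k from a log-linear fit:
    ln C(t) = ln C0 - k t."""
    slope, _ = np.polyfit(t_hours, np.log(conc), 1)
    return -slope

def microbial_decay_factor(t, conc_unprocessed, conc_processed):
    """Fm = (total decay - chemical decay) / chemical decay.
    The processed (filtered or silver-inhibited) sample reveals the
    chemical-only rate; the unprocessed sample gives the total rate."""
    k_total = decay_rate(t, conc_unprocessed)
    k_chem = decay_rate(t, conc_processed)
    return (k_total - k_chem) / k_chem

t = np.array([0.0, 24.0, 48.0, 72.0])        # hours
conc_processed = 2.0 * np.exp(-0.01 * t)     # chemical decay only, k = 0.01/h
conc_unprocessed = 2.0 * np.exp(-0.03 * t)   # chemical + microbial, k = 0.03/h
fm = microbial_decay_factor(t, conc_unprocessed, conc_processed)
```

With these synthetic rates, the microbiologically assisted portion is 0.02/h against a chemical rate of 0.01/h, so Fm = 2.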

  9. More efficient parameter estimates for factor analysis of ordinal variables by ridge generalized least squares.

    PubMed

    Yuan, Ke-Hai; Jiang, Ge; Cheng, Ying

    2017-11-01

    Data in psychology are often collected using Likert-type scales, and it has been shown that factor analysis of Likert-type data is better performed on the polychoric correlation matrix than on the product-moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real-data example indicates that estimates by ridge GLS are 9-20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich-type standard errors following the ridge GLS methods also perform reasonably well. © 2017 The British Psychological Society.

  10. Adaptive truncation of matrix decompositions and efficient estimation of NMR relaxation distributions

    NASA Astrophysics Data System (ADS)

    Teal, Paul D.; Eccles, Craig

    2015-04-01

The two most successful methods of estimating the distribution of nuclear magnetic resonance relaxation times from two-dimensional data are data compression followed by application of the Butler-Reeds-Dawson algorithm, and a primal-dual interior point method using preconditioned conjugate gradients. Both of these methods have previously been presented using a truncated singular value decomposition of the matrices representing the exponential kernel. In this paper it is shown that other matrix factorizations are applicable to each of these algorithms, and that these illustrate the different fundamental principles behind the operation of the algorithms. These are the rank-revealing QR (RRQR) factorization and the LDL factorization with diagonal pivoting, also known as the Bunch-Kaufman-Parlett factorization. It is shown that both algorithms can be improved by adapting the truncation as the optimization process progresses, improving the accuracy as the optimal value is approached. A variation on the interior point method, viz. the use of a barrier function instead of the primal-dual approach, is found to offer considerable improvement in terms of speed and reliability. A third type of algorithm, related to the fast iterative shrinkage-thresholding algorithm (FISTA), is applied to the problem. This method can be efficiently formulated without the use of a matrix decomposition.
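The kernel-compression step common to both algorithms can be sketched with a truncated SVD of a one-dimensional exponential kernel; the grid sizes and truncation tolerance below are arbitrary illustrative choices:

```python
import numpy as np

t = np.logspace(-4, 1, 200)                # acquisition times (s), illustrative
T2 = np.logspace(-3, 1, 100)               # candidate relaxation times (s)
K = np.exp(-t[:, None] / T2[None, :])      # exponential kernel, 200 x 100

U, s, Vt = np.linalg.svd(K, full_matrices=False)
r = int(np.sum(s > 1e-6 * s[0]))           # truncation rank at a fixed tolerance
K_r = (U[:, :r] * s[:r]) @ Vt[:r]          # rank-r approximation of the kernel

rel_err = np.linalg.norm(K - K_r) / np.linalg.norm(K)
# Measured decays d would then be compressed as U[:, :r].T @ d,
# reducing 200 time samples to r coefficients before inversion.
```

The singular values of such smooth exponential kernels decay very rapidly, which is why aggressive truncation (and, per the paper, adaptive truncation as the optimization proceeds) loses little information.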

  11. A Participatory Method to Identify Root Determinants of Health: The Heart of the Matter

    PubMed Central

    Barnidge, Ellen; Baker, Elizabeth A.; Motton, Freda; Rose, Frank; Fitzgerald, Teresa

    2010-01-01

    Background Co-learning is one of the core principles of community-based participatory research (CBPR). Often, it is difficult to engage community members beyond those involved in the formal partnership in co-learning processes. However, to understand and address locally relevant root factors of health, it is essential to engage the broader community in participatory dialogues around these factors. Objective This article provides a glimpse into how using a photo-elicitation process allowed a community–academic partnership to engage community members in a participatory dialogue about root factors influencing health. The article details the decision to use photo-elicitation and describes the photo-elicitation method. Method Similar to a focus group process, photo-elicitation uses photographs and questions to prompt reflection and dialogue. Used in conjunction with an economic development framework, this method allows participants to discuss underlying, or root, community processes and structures that influence health. Conclusion Photo-elicitation is one way to engage community members in a participatory dialogue that stimulates action around root factors of health. To use this method successfully within a CBPR approach, it is important to build on existing relationships of trust among community and academic partners and create opportunities for community partners to determine the issues for discussion. PMID:20364079

  12. Storage and computationally efficient permutations of factorized covariance and square-root information arrays

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method to permute vector stored Upper triangular Diagonal factorized covariance and vector stored upper triangular Square Root Information arrays is presented. The method involves cyclic permutation of the rows and columns of the arrays and retriangularization with fast (slow) Givens rotations (reflections). Minimal computation is performed, and a one dimensional scratch array is required. To make the method efficient for large arrays on a virtual memory machine, computations are arranged so as to avoid expensive paging faults. This method is potentially important for processing large volumes of radio metric data in the Deep Space Network.
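A small numerical sketch of the permute-and-retriangularize idea: reorder the variables (columns) of an upper-triangular factor, then restore triangularity. Here `np.linalg.qr` stands in for the paper's sweep of fast Givens rotations, and the cyclic ordering and array size are illustrative:

```python
import numpy as np

def permute_triangular(R, new_order):
    """Reorder the variables of an upper-triangular factor R and
    retriangularize; QR plays the role of the Givens-rotation sweep."""
    _, R_new = np.linalg.qr(R[:, new_order])
    return R_new

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5))
R = np.linalg.qr(A)[1]            # start from an upper-triangular factor
order = [1, 2, 3, 4, 0]           # cyclic shift of the variables
R2 = permute_triangular(R, order)

# The permuted factor must represent the same (permuted) Gram matrix:
gram_perm = (R.T @ R)[np.ix_(order, order)]
```

The check below confirms that R2 is again upper triangular and that R2ᵀR2 equals the consistently permuted RᵀR, which is the invariant the covariance/information arrays must preserve.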

  13. Intermediate boundary conditions for LOD, ADI and approximate factorization methods

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.

    1985-01-01

A general approach to determining the correct intermediate boundary conditions for dimensional splitting methods is presented. The intermediate solution U* is viewed as a second-order accurate approximation to a modified equation. Deriving the modified equation and using the relationship between this equation and the original equation allows us to determine the correct boundary conditions for U*. This technique is illustrated by applying it to locally one-dimensional (LOD) and alternating direction implicit (ADI) methods for the heat equation in two and three space dimensions. The approximate factorization method is considered in slightly more generality.

  14. Multigrid and Krylov Subspace Methods for the Discrete Stokes Equations

    NASA Technical Reports Server (NTRS)

    Elman, Howard C.

    1996-01-01

    Discretization of the Stokes equations produces a symmetric indefinite system of linear equations. For stable discretizations, a variety of numerical methods have been proposed that have rates of convergence independent of the mesh size used in the discretization. In this paper, we compare the performance of four such methods: variants of the Uzawa, preconditioned conjugate gradient, preconditioned conjugate residual, and multigrid methods, for solving several two-dimensional model problems. The results indicate that where it is applicable, multigrid with smoothing based on incomplete factorization is more efficient than the other methods, but typically by no more than a factor of two. The conjugate residual method has the advantage of being both independent of iteration parameters and widely applicable.
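A toy version of the Uzawa iteration mentioned above, applied to a tiny saddle-point system with the same block structure as the discrete Stokes equations; the matrices and right-hand side are invented for illustration, not taken from the paper's model problems:

```python
import numpy as np

# Saddle-point system  [A B^T; B 0] [u; p] = [f; g]
# A: SPD "velocity" block, B: full-row-rank "divergence" constraint
A = np.diag([2.0, 2.0, 2.0, 2.0])
B = np.array([[1.0, 0.0, -1.0, 0.0],
              [0.0, 1.0, 0.0, -1.0]])
f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.zeros(2)

def uzawa(A, B, f, g, alpha=0.5, iters=100):
    """Classical Uzawa iteration: solve for u with p frozen, then take a
    gradient step on p along the constraint residual B u - g."""
    p = np.zeros(B.shape[0])
    for _ in range(iters):
        u = np.linalg.solve(A, f - B.T @ p)   # velocity update
        p = p + alpha * (B @ u - g)           # pressure (multiplier) update
    return u, p

u, p = uzawa(A, B, f, g)
```

Convergence requires the step alpha to be small relative to the Schur complement B A⁻¹ Bᵀ; preconditioning that Schur complement is what the faster methods compared in the paper effectively do.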

  15. New method: calculation of magnification factor from an intracardiac marker.

    PubMed

    Cha, S D; Incarvito, J; Maranhao, V

    1983-01-01

In order to calculate a magnification factor (MF), an intracardiac marker (a pigtail catheter with markers) was evaluated using a new formula and correlated with the conventional grid method. By applying the Pythagorean theorem and trigonometry, a new formula was developed (formula; see text). In an experimental study, the MF obtained from the intracardiac markers was 0.71 +/- 0.15 (mean +/- SD) and that from the grid method was 0.72 +/- 0.15, with a correlation coefficient of 0.96. In the patient study, the MF from the intracardiac markers was 0.77 +/- 0.06 and that from the grid method was 0.77 +/- 0.05. We conclude that this new method is simple and that its results are comparable to those of the conventional grid method at mid-chest level.

  16. Radial line method for rear-view mirror distortion detection

    NASA Astrophysics Data System (ADS)

    Rahmah, Fitri; Kusumawardhani, Apriani; Setijono, Heru; Hatta, Agus M.; Irwansyah, .

    2015-01-01

An image of an object can be distorted by a defect in a mirror. The rear-view mirror is an important component for vehicle safety, and one of its standard parameters is the distortion factor. This paper presents a radial line method for distortion detection in rear-view mirrors. The rear-view mirror was tested for distortion using a system consisting of a webcam sensor and an image-processing unit. In the image-processing unit, the image captured by the webcam was pre-processed using smoothing and sharpening techniques, and a radial line method was then used to determine the distortion factor. It was successfully demonstrated that the radial line method can be used to determine the distortion factor. This detection system is useful for implementation in, for example, the Indonesian automotive component industry, where manual inspection is still used.

  17. Parallel/Vector Integration Methods for Dynamical Astronomy

    NASA Astrophysics Data System (ADS)

    Fukushima, T.

    Progress of parallel/vector computers has driven us to develop numerical integrators that exploit their computational power to the full while remaining independent of the size of the system to be integrated. Unfortunately, parallel versions of Runge-Kutta-type integrators are known to be rather inefficient. Recently we developed a parallel version of the extrapolation method (Ito and Fukushima 1997), which allows variable timesteps and still gives an acceleration factor of 3-4 for general problems, while the vector-mode usage of the Picard-Chebyshev method (Fukushima 1997a, 1997b) can lead to an acceleration factor on the order of 1000 for smooth problems such as planetary/satellite orbit integration. The success of the multiple-correction PECE mode of the time-symmetric implicit Hermitian integrator (Kokubo 1998) seems to highlight Milankar's so-called "pipelined predictor-corrector method", which is expected to yield an acceleration factor of 3-4. We review these directions and discuss future prospects.

  18. The Method of Space-time Conservation Element and Solution Element: Development of a New Implicit Solver

    NASA Technical Reports Server (NTRS)

    Chang, S. C.; Wang, X. Y.; Chow, C. Y.; Himansu, A.

    1995-01-01

    The method of space-time conservation element and solution element is a nontraditional numerical method designed from a physicist's perspective, i.e., its development is based more on physics than numerics. It uses only the simplest approximation techniques and yet is capable of generating nearly perfect solutions for a 2-D shock reflection problem used by Helen Yee and others. In addition to providing an overall view of the new method, we introduce a new concept in the design of implicit schemes, and use it to construct a highly accurate solver for a convection-diffusion equation. It is shown that, in the inviscid case, this new scheme becomes explicit and its amplification factors are identical to those of the Leapfrog scheme. On the other hand, in the pure diffusion case, its principal amplification factor becomes the amplification factor of the Crank-Nicolson scheme.

  19. Characterization of primary standards for use in the HPLC analysis of the procyanidin content of cocoa and chocolate containing products.

    PubMed

    Hurst, William J; Stanley, Bruce; Glinski, Jan A; Davey, Matthew; Payne, Mark J; Stuart, David A

    2009-10-15

    This report describes the characterization of a series of commercially available procyanidin standards ranging from dimers (DP = 2) to decamers (DP = 10) for the determination of procyanidins from cocoa and chocolate. Using a combination of HPLC with fluorescence detection and MALDI-TOF mass spectrometry, the purity of each standard was determined, and these data were used to determine relative response factors. These response factors were compared with other response factors obtained from published methods. Data comparing the procyanidin analysis of a commercially available US dark chocolate calculated using each of the calibration methods indicate divergent results and demonstrate that previous methods may significantly underreport the procyanidins in cocoa-containing products. These results have far-reaching implications because the previous calibration methods have been used to develop data for a variety of scientific reports, including food databases and clinical studies.

  20. First Order Reliability Application and Verification Methods for Semistatic Structures

    NASA Technical Reports Server (NTRS)

    Verderaime, Vincent

    1994-01-01

    Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises high strength materials performance. A reliability method is proposed which combines first order reliability principles with deterministic design variables and conventional test technique to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and with the pace of semistatic structural designs.

  1. The Relation between Factor Score Estimates, Image Scores, and Principal Component Scores

    ERIC Educational Resources Information Center

    Velicer, Wayne F.

    1976-01-01

    Investigates the relation between factor score estimates, principal component scores, and image scores. The three methods compared are maximum likelihood factor analysis, principal component analysis, and a variant of rescaled image analysis. (RC)

  2. Job and Work Evaluation: A Literature Review.

    ERIC Educational Resources Information Center

    Heneman, Robert L.

    2003-01-01

    Describes advantages and disadvantages of work evaluation methods: ranking, market pricing, banding, classification, single-factor, competency, point-factor, and factor comparison. Compares work evaluation perspectives: traditional, realist, market advocate, strategist, organizational development, social reality, contingency theory, competency,…

  3. Two-factor theory – at the intersection of health care management and patient satisfaction

    PubMed Central

    Bohm, Josef

    2012-01-01

    Using data obtained from the 2004 Joint Canadian/United States Survey of Health, an analytic model using principles derived from Herzberg’s motivational hygiene theory was developed for evaluating patient satisfaction with health care. The analysis sought to determine whether survey variables associated with consumer satisfaction act as Herzberg factors and contribute to survey participants’ self-reported levels of health care satisfaction. To validate the technique, data from the survey were analyzed using logistic regression methods and then compared with results obtained from the two-factor model. The findings indicate a high degree of correlation between the two methods. The two-factor analytical methodology offers advantages due to its ability to identify whether a factor assumes a motivational or hygienic role and assesses the influence of a factor within select populations. Its ease of use makes this methodology well suited for assessment of multidimensional variables. PMID:23055755

  4. Two-factor theory - at the intersection of health care management and patient satisfaction.

    PubMed

    Bohm, Josef

    2012-01-01

    Using data obtained from the 2004 Joint Canadian/United States Survey of Health, an analytic model using principles derived from Herzberg's motivational hygiene theory was developed for evaluating patient satisfaction with health care. The analysis sought to determine whether survey variables associated with consumer satisfaction act as Herzberg factors and contribute to survey participants' self-reported levels of health care satisfaction. To validate the technique, data from the survey were analyzed using logistic regression methods and then compared with results obtained from the two-factor model. The findings indicate a high degree of correlation between the two methods. The two-factor analytical methodology offers advantages due to its ability to identify whether a factor assumes a motivational or hygienic role and assesses the influence of a factor within select populations. Its ease of use makes this methodology well suited for assessment of multidimensional variables.

  5. Heat transfer and fluid flow analysis of self-healing in metallic materials

    NASA Astrophysics Data System (ADS)

    Martínez Lucci, J.; Amano, R. S.; Rohatgi, P. K.

    2017-03-01

    This paper explores imparting self-healing characteristics to metal matrices, similar to those observed in biological systems and those being developed for polymeric materials. To impart self-healing properties to metal matrices, a liquid healing method was investigated; the method consists of a container filled with a low-melting alloy, acting as a healing agent, embedded in a high-melting metal matrix. When the matrix cracks, self-healing is achieved by melting the healing agent, allowing the liquid metal to flow into the crack. Upon cooling, the healing agent solidifies and seals the crack. The objective of this research is to investigate the fluid flow and heat transfer needed to impart the self-healing property to metal matrices. In this study, a dimensionless healing factor, which may help predict the possibility of healing, is proposed. The healing factor is defined as the ratio of the viscous forces and the contact area of liquid metal and solid, which prevent flow, to the volume expansion, density, and velocity of the liquid metal, gravity, and crack size and orientation, which promote flow. The factor incorporates the parameters that control the self-healing mechanism. It was observed that for lower values of the healing factor the liquid flows, while for higher values the liquid remains in the container and healing does not occur. To validate and identify the critical range of the healing factor, experiments and simulations were performed for selected combinations of healing agents and metal matrices. The simulations were performed for three-dimensional models using the commercial software Ansys-Fluent. Three experimental methods of synthesis of self-healing composites were used. The first method consisted of creating a hole in the matrices into which the liquid healing agent was poured. The second method used micro tubes containing the healing agent, and the third incorporated micro balloons containing the healing agent in the matrix. The observed critical range of the healing factor is between 407 and 495; healing was observed in the matrices only for healing factor values below 407.

  6. Species delimitation using Bayes factors: simulations and application to the Sceloporus scalaris species group (Squamata: Phrynosomatidae).

    PubMed

    Grummer, Jared A; Bryson, Robert W; Reeder, Tod W

    2014-03-01

    Current molecular methods of species delimitation are limited by the types of species delimitation models and scenarios that can be tested. Bayes factors allow for more flexibility in testing non-nested species delimitation models and hypotheses of individual assignment to alternative lineages. Here, we examined the efficacy of Bayes factors in delimiting species through simulations and empirical data from the Sceloporus scalaris species group. Marginal-likelihood scores of competing species delimitation models, from which Bayes factor values were compared, were estimated with four different methods: harmonic mean estimation (HME), smoothed harmonic mean estimation (sHME), path-sampling/thermodynamic integration (PS), and stepping-stone (SS) analysis. We also performed model selection using a posterior simulation-based analog of the Akaike information criterion through Markov chain Monte Carlo analysis (AICM). Bayes factor species delimitation results from the empirical data were then compared with results from the reversible-jump MCMC (rjMCMC) coalescent-based species delimitation method Bayesian Phylogenetics and Phylogeography (BP&P). Simulation results show that HME and sHME perform poorly compared with PS and SS marginal-likelihood estimators when identifying the true species delimitation model. Furthermore, Bayes factor delimitation (BFD) of species showed improved performance when species limits are tested by reassigning individuals between species, as opposed to either lumping or splitting lineages. In the empirical data, BFD through PS and SS analyses, as well as the rjMCMC method, each provide support for the recognition of all scalaris group taxa as independent evolutionary lineages. Bayes factor species delimitation and BP&P also support the recognition of three previously undescribed lineages. 
In both simulated and empirical data sets, harmonic and smoothed harmonic mean marginal-likelihood estimators provided much higher marginal-likelihood estimates than PS and SS estimators. The AICM displayed poor repeatability in both simulated and empirical data sets, and produced inconsistent model rankings across replicate runs with the empirical data. Our results suggest that species delimitation through the use of Bayes factors with marginal-likelihood estimates via PS or SS analyses provide a useful and complementary alternative to existing species delimitation methods.
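The instability of the harmonic mean estimator (HME) reported above is easy to see on a toy model where the marginal likelihood is known in closed form. The sketch below is an illustrative HME on a conjugate normal model; the model, priors, and sample size are assumptions for demonstration, not the paper's phylogenetic setting:

```python
# Toy harmonic-mean estimate of a marginal likelihood, checked against the
# analytic answer: one observation y ~ N(theta, 1), prior theta ~ N(0, 0.25).
# Then marginally y ~ N(0, 1.25), and the posterior for y = 1 is N(0.2, 0.2).
import math, random

random.seed(1)
y, lik_var, prior_var = 1.0, 1.0, 0.25

# Analytic log marginal likelihood: y ~ N(0, lik_var + prior_var)
m_var = lik_var + prior_var
log_m_exact = -0.5 * math.log(2 * math.pi * m_var) - y * y / (2 * m_var)

# Exact posterior for this conjugate model
post_var = 1.0 / (1.0 / prior_var + 1.0 / lik_var)
post_mean = post_var * y / lik_var

def log_lik(theta):
    return -0.5 * math.log(2 * math.pi * lik_var) - (y - theta) ** 2 / (2 * lik_var)

# HME: 1/m is approximated by the posterior average of 1/L(theta),
# computed in log space with a log-sum-exp for stability.
n = 40000
neg_logs = [-log_lik(random.gauss(post_mean, math.sqrt(post_var)))
            for _ in range(n)]
mx = max(neg_logs)
log_m_hme = -(mx + math.log(sum(math.exp(v - mx) for v in neg_logs) / n))

print(log_m_exact, log_m_hme)
```

In this deliberately well-behaved case the HME lands close to the analytic value; in realistic models the reciprocal likelihood often has infinite variance, which is why path-sampling and stepping-stone estimators are preferred in the abstract above.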

  7. Calibration of BCR-ABL1 mRNA quantification methods using genetic reference materials is a valid strategy to report results on the international scale.

    PubMed

    Mauté, Carole; Nibourel, Olivier; Réa, Delphine; Coiteux, Valérie; Grardel, Nathalie; Preudhomme, Claude; Cayuela, Jean-Michel

    2014-09-01

    Until recently, diagnostic laboratories that wanted to report on the international scale had limited options: they had to align their BCR-ABL1 quantification methods through a sample exchange with a reference laboratory to derive a conversion factor. However, commercial methods calibrated on the World Health Organization genetic reference panel are now available. We report results from a study designed to assess the comparability of the two alignment strategies. Sixty follow-up samples from chronic myeloid leukemia patients were included. Two commercial methods calibrated on the genetic reference panel were compared to two conversion factor methods routinely used at Saint-Louis Hospital, Paris, and at Lille University Hospital. Results were matched against concordance criteria (i.e., obtaining at least two of the three following landmarks: 50, 75 and 90% of the patient samples within a 2-fold, 3-fold and 5-fold range, respectively). Out of the 60 samples, more than 32 were available for comparison. Compared to the conversion factor method, the two commercial methods were within a 2-fold, 3-fold and 5-fold range for 53 and 59%, 89 and 88%, 100 and 97%, respectively of the samples analyzed at Saint-Louis. At Lille, results were 45 and 85%, 76 and 97%, 100 and 100%, respectively. Agreements between methods were observed in the four comparisons performed. Our data show that the two commercial methods selected are concordant with the conversion factor methods. This study brings the proof of principle that alignment on the international scale using the genetic reference panel is compatible with the patient sample exchange procedure. We believe that these results are particularly important for diagnostic laboratories wishing to adopt commercial methods. Copyright © 2014 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  8. A Comparison of Measurement Equivalence Methods Based on Confirmatory Factor Analysis and Item Response Theory.

    ERIC Educational Resources Information Center

    Flowers, Claudia P.; Raju, Nambury S.; Oshima, T. C.

    Current interest in the assessment of measurement equivalence emphasizes two methods of analysis, linear, and nonlinear procedures. This study simulated data using the graded response model to examine the performance of linear (confirmatory factor analysis or CFA) and nonlinear (item-response-theory-based differential item function or IRT-Based…

  9. Resistivity Correction Factor for the Four-Probe Method: Experiment I

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Enjoji, Hideo

    1988-05-01

    Experimental verification of the theoretically derived resistivity correction factor (RCF) is presented. Resistivity and sheet resistance measurements by the four-probe method are made on three samples: isotropic graphite, ITO film and Au film. It is indicated that the RCF can correct the apparent variations of experimental data to yield reasonable resistivities and sheet resistances.
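For the ideal case of equally spaced collinear probes on an infinite thin sheet, the four-probe sheet resistance is R_s = (π/ln 2)·(V/I); a finite sample requires a geometry-dependent correction. The sketch below applies such a correction factor; the RCF value and readings are illustrative assumptions, not values from the paper:

```python
# Four-probe sheet-resistance sketch with a resistivity correction factor.
import math

def sheet_resistance(v, i, rcf=1.0):
    """Sheet resistance (ohm/sq) from probe voltage v and current i,
    scaled by a geometry-dependent correction factor rcf."""
    return (math.pi / math.log(2)) * (v / i) * rcf

# 1 mV measured at 10 mA on an ideal infinite sheet:
rs_ideal = sheet_resistance(1e-3, 1e-2)            # ~0.4532 ohm/sq
# Same reading on a finite sample whose geometry gives RCF = 0.85 (assumed):
rs_corr = sheet_resistance(1e-3, 1e-2, rcf=0.85)   # ~0.3852 ohm/sq
print(rs_ideal, rs_corr)
```

Multiplying by the sample thickness then converts the corrected sheet resistance to a bulk resistivity, which is the quantity compared in the abstract.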

  10. A Transfer Learning Approach for Applying Matrix Factorization to Small ITS Datasets

    ERIC Educational Resources Information Center

    Voß, Lydia; Schatten, Carlotta; Mazziotti, Claudia; Schmidt-Thieme, Lars

    2015-01-01

    Machine Learning methods for Performance Prediction in Intelligent Tutoring Systems (ITS) have proven their efficacy; specific methods, e.g. Matrix Factorization (MF), however suffer from the lack of available information about new tasks or new students. In this paper we show how this problem could be solved by applying Transfer Learning (TL),…

  11. Unpacking Trauma Exposure Risk Factors and Differential Pathways of Influence: Predicting Postwar Mental Distress in Bosnian Adolescents

    ERIC Educational Resources Information Center

    Layne, Christopher M.; Olsen, Joseph A.; Baker, Aaron; Legerski, John-Paul; Isakson, Brian; Pasalic, Alma; Durakovic-Belko, Elvira; Dapo, Nermin; Campara, Nihada; Arslanagic, Berina; Saltzman, William R.; Pynoos, Robert S.

    2010-01-01

    Methods are needed for quantifying the potency and differential effects of risk factors to identify at-risk groups for theory building and intervention. Traditional methods for constructing war exposure measures are poorly suited to "unpack" differential relations between specific types of exposure and specific outcomes. This study of…

  12. The Functions and Methods of Mental Training on Competitive Sports

    NASA Astrophysics Data System (ADS)

    Xiong, Jianshe

    Mental training is a major training method in competitive sports and a main factor in athletes' skill and tactical level. By combining psychological factors with the characteristics of current competitive sports, this paper presents the functions of mental training for athletes and how to improve comprehensive psychological quality by using mental training.

  13. Method for protecting bone marrow against chemotherapeutic drugs and radiation therapy using transforming growth factor beta 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, J.R.; Ruscetti, F.W.; Wiltrout, R.

    1989-06-29

    Presented is a method for protecting hematopoietic stem cells from the myelotoxicity of chemotherapeutic drugs or radiation therapy, which comprises administering to a subject a therapeutically effective amount of transforming growth factor beta 1 for protecting bone marrow from the myelotoxicity of chemotherapeutic drugs or radiation therapy.

  14. The Effect of Missing Data Handling Methods on Goodness of Fit Indices in Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Köse, Alper

    2014-01-01

    The primary objective of this study was to examine the effect of missing data on goodness of fit statistics in confirmatory factor analysis (CFA). For this aim, four missing data handling methods; listwise deletion, full information maximum likelihood, regression imputation and expectation maximization (EM) imputation were examined in terms of…

  15. Computing the Partial Fraction Decomposition of Rational Functions with Irreducible Quadratic Factors in the Denominators

    ERIC Educational Resources Information Center

    Man, Yiu-Kwong

    2012-01-01

    In this note, a new method for computing the partial fraction decomposition of rational functions with irreducible quadratic factors in the denominators is presented. This method involves polynomial divisions and substitutions only, without having to solve for the complex roots of the irreducible quadratic polynomial or to solve a system of linear…
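As a concrete point of comparison, the classical undetermined-coefficients computation for such a decomposition can be done exactly with rational arithmetic (this is the generic textbook approach, not the note's division-and-substitution method):

```python
# Partial fractions for 1/((x - 1)(x^2 + 1)) by undetermined coefficients:
#   1/((x - 1)(x^2 + 1)) = A/(x - 1) + (B*x + C)/(x^2 + 1)
# Clearing denominators: 1 = A(x^2 + 1) + (B*x + C)(x - 1).
# Matching coefficients of x^2, x, 1 gives:
#   A + B = 0,   C - B = 0,   A - C = 1
from fractions import Fraction as F

A = F(1, 2)    # from A - (-A) = 1 after substituting B = -A, C = B
B = -A
C = B

# Verify the identity at a sample point, x = 2:
x = F(2)
lhs = F(1) / ((x - 1) * (x * x + 1))
rhs = A / (x - 1) + (B * x + C) / (x * x + 1)
print(A, B, C, lhs == rhs)
```

The irreducible quadratic factor x^2 + 1 never has to be split into complex roots, which is the same convenience the note's method aims for.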

  16. A Mixed-Method Exploration of School Organizational and Social Relationship Factors That Influence Dropout-Decision Making in a Rural High School

    ERIC Educational Resources Information Center

    Farina, Andrea J.

    2013-01-01

    This explanatory mixed-method study explored the dropout phenomenon from an ecological perspective identifying the school organizational (academics, activities, structure) and social relationship (teachers, peers) factors that most significantly influence students' decisions to leave school prior to graduation at a rural high school in south…

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacques Hugo

    Traditional engineering methods do not make provision for the integration of human considerations, while traditional human factors methods do not scale well to the complexity of large-scale nuclear power plant projects. Although the need for up-to-date human factors engineering processes and tools is recognised widely in industry, so far no formal guidance has been developed. This article proposes such a framework.

  18. A Case Study of Enabling Factors in the Technology Integration Change Process

    ERIC Educational Resources Information Center

    Hsu, Pi-Sui; Sharma, Priya

    2008-01-01

    The purpose of this qualitative case study was to analyze enabling factors in the technology integration change process in a multi-section science methods course, SCIED 408 (pseudonym), from 1997 to 2003 at a large northeastern university in the United States. We used two major data collection methods, in-depth interviewing and document reviews.…

  19. Linear Ordinary Differential Equations with Constant Coefficients. Revisiting the Impulsive Response Method Using Factorization

    ERIC Educational Resources Information Center

    Camporesi, Roberto

    2011-01-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary, we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of…
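A standard second-order example in the spirit of this approach (the paper's own treatment may differ in details) factors the operator and solves two first-order problems in succession:

```latex
\[
  y'' - (a+b)\,y' + ab\,y = f(t)
  \quad\Longleftrightarrow\quad
  (D - a)(D - b)\,y = f(t), \qquad D = \tfrac{d}{dt}.
\]
Solving the two first-order factors in succession yields the impulsive
response (for $a \neq b$) and the particular solution
\[
  g(t) = \frac{e^{at} - e^{bt}}{a - b},
  \qquad
  y_p(t) = \int_0^t g(t - s)\, f(s)\, ds ,
\]
with no distribution theory needed at any step.
```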

  20. Factors Associated with Recruitment and Screening in the Treatment for Adolescents with Depression Study (TADS)

    ERIC Educational Resources Information Center

    May, Diane E.; Hallin, Mary J.; Kratochvil, Christopher J.; Puumala, Susan E.; Smith, Lynette S.; Reinecke, Mark A.; Silva, Susan G.; Weller, Elizabeth B.; Vitiello, Benedetto; Breland-Noble, Alfiee; March, John S.

    2007-01-01

    Objective: To examine factors associated with eligibility and randomization and consider the efficiency of recruitment methods. Method: Adolescents, ages 12 to 17 years, were telephone screened (N = 2,804) followed by in-person evaluation (N = 1,088) for the Treatment for Adolescents With Depression Study. Separate logistic regression models,…

  1. A projection operator method for the analysis of magnetic neutron form factors

    NASA Astrophysics Data System (ADS)

    Kaprzyk, S.; Van Laar, B.; Maniawski, F.

    1981-03-01

    A set of projection operators in matrix form has been derived on the basis of decomposition of the spin density into a series of fully symmetrized cubic harmonics. This set of projection operators allows a formulation of the Fourier analysis of magnetic form factors in a convenient way. The presented method is capable of checking the validity of various theoretical models used for spin density analysis up to now. The general formalism is worked out in explicit form for the fcc and bcc structures and deals with that part of spin density which is contained within the sphere inscribed in the Wigner-Seitz cell. This projection operator method has been tested on the magnetic form factors of nickel and iron.

  2. Computation of Anisotropic Bi-Material Interfacial Fracture Parameters and Delamination Criteria

    NASA Technical Reports Server (NTRS)

    Chow, W-T.; Wang, L.; Atluri, S. N.

    1998-01-01

    This report documents the recent developments in methodologies for the evaluation of the integrity and durability of composite structures, including i) the establishment of a stress-intensity-factor based fracture criterion for bimaterial interfacial cracks in anisotropic materials (see Sec. 2); ii) the development of a virtual crack closure integral method for the evaluation of the mixed-mode stress intensity factors for a bimaterial interfacial crack (see Sec. 3). Analytical and numerical results show that the proposed fracture criterion is a better fracture criterion than the total energy release rate criterion in the characterization of the bimaterial interfacial cracks. The proposed virtual crack closure integral method is an efficient and accurate numerical method for the evaluation of mixed-mode stress intensity factors.

  3. Identification of atmospheric organic sources using the carbon hollow tube-gas chromatography method and factor analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cobb, G.P.; Braman, R.S.; Gilbert, R.A.

    Atmospheric organics were sampled and analyzed by using the carbon hollow tube-gas chromatography method. Chromatograms from spice mixtures, cigarettes, and ambient air were analyzed. Principal factor analysis of row-order chromatographic data produces factors which are eigenchromatograms of the components in the samples. Component sources are identified from the eigenchromatograms in all experiments, and the individual eigenchromatogram corresponding to a particular source is determined in most cases. Organic sources in ambient air and in cigarettes are identified with 87% certainty. Analysis of clove cigarettes allows determination of the relative amount of clove in different cigarettes. A new nondestructive quality control method using the hollow tube-gas chromatography analysis is discussed.

  4. General relaxation schemes in multigrid algorithms for higher order singularity methods

    NASA Technical Reports Server (NTRS)

    Oskam, B.; Fray, J. M. J.

    1981-01-01

    Relaxation schemes based on approximate and incomplete factorization techniques (AF) are described. The AF schemes allow construction of a fast multigrid method for solving integral equations of the second and first kind. The smoothing factors for integral equations of the first kind, and their comparison with similar results for equations of the second kind, are a novel item. Application of the multigrid algorithm shows convergence to the level of the truncation error of a second-order accurate panel method.

  5. Method for determining formation quality factor from seismic data

    DOEpatents

    Taner, M. Turhan; Treitel, Sven

    2005-08-16

    A method is disclosed for calculating the quality factor Q from a seismic data trace. The method includes calculating a first and a second minimum phase inverse wavelet at a first and a second time interval along the seismic data trace, synthetically dividing the first wavelet by the second wavelet, Fourier transforming the result of the synthetic division, calculating the logarithm of this quotient of Fourier transforms and determining the slope of a best fit line to the logarithm of the quotient.
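The final steps of the disclosed method (log spectral ratio, then the slope of a best-fit line) can be sketched on synthetic spectra. The sketch below is a simplified spectral-ratio analog: the true patent procedure derives the two spectra from minimum-phase inverse wavelets estimated on the trace itself, whereas here the attenuated spectra are generated directly from an assumed Q:

```python
# Spectral-ratio sketch of Q estimation.  An amplitude spectrum attenuated
# over travel time t behaves as A(f) = A0(f) * exp(-pi * f * t / Q), so the
# log ratio of spectra at two times is linear in f with slope -pi*(t2-t1)/Q.
import math

Q_true, t1, t2 = 50.0, 1.0, 1.5              # quality factor, travel times (s)
freqs = [5.0 + 2.0 * k for k in range(20)]   # analysis band, Hz

# Attenuated spectra at the two times (flat source spectrum assumed)
a1 = [math.exp(-math.pi * f * t1 / Q_true) for f in freqs]
a2 = [math.exp(-math.pi * f * t2 / Q_true) for f in freqs]

# Log spectral ratio, then least-squares slope versus frequency
logr = [math.log(x2 / x1) for x1, x2 in zip(a1, a2)]
n = len(freqs)
fm = sum(freqs) / n
rm = sum(logr) / n
slope = (sum((f - fm) * (r - rm) for f, r in zip(freqs, logr))
         / sum((f - fm) ** 2 for f in freqs))

Q_est = -math.pi * (t2 - t1) / slope
print(Q_est)  # recovers ~50
```

Fitting a line rather than reading a single frequency makes the estimate robust to noise on individual spectral samples, which is the point of the best-fit step in the claim.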

  6. Determination of car on-road black carbon and particle number emission factors and comparison between mobile and stationary measurements

    NASA Astrophysics Data System (ADS)

    Ježek, I.; Drinovec, L.; Ferrero, L.; Carriero, M.; Močnik, G.

    2015-01-01

    We have used two methods for measuring emission factors (EFs) in real driving conditions on five cars in a controlled environment: the stationary method, where the investigated vehicle drives by the stationary measurement platform and the composition of the plume is measured, and the chasing method, where a mobile measurement platform drives behind the investigated vehicle. We measured EFs of black carbon and particle number concentration. The stationary method was tested for repeatability at different speeds and on a slope. The chasing method was tested on a test track and compared to a portable emission measurement system. We further developed the data processing algorithm for both methods to improve consistency, determine the plume duration, limit the background influence and facilitate automatic processing of measurements. The comparison of emission factors determined by the two methods showed good agreement. The EFs of a single car measured with either method have a specific distribution with a characteristic value and a long tail of super emissions. Measuring EFs at different speeds or slopes did not significantly influence the EFs of different cars; hence, we propose a new description of vehicle emissions that is not tied to kinematic or engine parameters, describing the vehicle EF instead by a characteristic value and a super-emission tail.
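Plume-based EFs of this kind are commonly computed by a carbon balance: the excess pollutant mass in the plume is divided by the excess CO2 carbon mass and scaled by the fuel's carbon content. The sketch below is that generic calculation; the exact algorithm of the paper, the fuel carbon content, and all sample values are assumptions for illustration:

```python
# Carbon-balance emission-factor sketch:
#   EF_BC [g per kg fuel] = (integrated excess BC mass /
#                            integrated excess CO2 carbon mass) * fuel C content
M_C, M_CO2 = 12.0, 44.0
FUEL_C = 870.0   # g of carbon per kg of diesel fuel (typical assumed value)

def ef_black_carbon(bc_excess, co2_excess):
    """bc_excess: ug/m3 above background per 1-s sample;
    co2_excess: mg/m3 above background per 1-s sample."""
    bc = sum(bc_excess)                              # ug/m3-s of BC mass
    co2_c = sum(co2_excess) * 1000.0 * (M_C / M_CO2) # ug/m3-s of CO2 carbon
    return bc / co2_c * FUEL_C                       # g BC per kg fuel

# One hypothetical plume crossing, five one-second samples:
ef = ef_black_carbon([12.0, 40.0, 55.0, 30.0, 8.0],
                     [30.0, 90.0, 130.0, 70.0, 20.0])
print(round(ef, 3))
```

Because both integrals are taken over the same plume window, dilution between the tailpipe and the sensor cancels out, which is what makes both the stationary and the chasing geometry workable.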

  7. Intercomparison of methods for image quality characterization. II. Noise power spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobbins, James T. III; Samei, Ehsan; Ranger, Nicole T.

    Second in a two-part series comparing measurement techniques for the assessment of basic image quality metrics in digital radiography, in this paper we focus on the measurement of the image noise power spectrum (NPS). Three methods were considered: (1) a method published by Dobbins et al. [Med. Phys. 22, 1581-1593 (1995)], (2) a method published by Samei et al. [Med. Phys. 30, 608-622 (2003)], and (3) a new method sanctioned by the International Electrotechnical Commission (IEC 62220-1, 2003), developed as part of an international standard for the measurement of detective quantum efficiency. In addition to an overall comparison of the estimated NPS between the three techniques, the following factors were also evaluated for their effect on the measured NPS: horizontal versus vertical directional dependence, the use of beam-limiting apertures, beam spectrum, and computational methods of NPS analysis, including the region-of-interest (ROI) size and the method of ROI normalization. Of these factors, none was found to demonstrate a substantial impact on the amplitude of the NPS estimates (≤3.1% relative difference in NPS averaged over frequency, for each factor considered separately). Overall, the three methods agreed to within 1.6% ± 0.8% when averaged over frequencies >0.15 mm⁻¹.
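The core of any NPS estimate is a normalized periodogram of mean-subtracted ROI data. The sketch below shows a 1-D version on a deterministic test signal so the expected value is known exactly; real measurements average many 2-D ROIs, and the normalization convention here is one common choice, not necessarily the one used by the three methods compared above:

```python
# 1-D noise-power-spectrum sketch: NPS(f_k) = |DFT(I - mean)|^2 * dx / N
# for a single ROI of N samples with pixel pitch dx.
import cmath, math

def nps_1d(roi, dx=1.0):
    n = len(roi)
    mean = sum(roi) / n
    d = [v - mean for v in roi]           # detrend by mean subtraction
    spec = []
    for k in range(n):
        xk = sum(d[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        spec.append(abs(xk) ** 2 * dx / n)
    return spec

# Test signal: a unit-amplitude cosine occupying bin k0 = 1 of an N = 8 ROI.
n, k0 = 8, 1
roi = [math.cos(2 * math.pi * k0 * j / n) for j in range(n)]
nps = nps_1d(roi)
# |DFT| at bins k0 and n-k0 is N/2, so NPS there is (N/2)^2 * dx / N = N/4 = 2.0
print(nps[1], nps[7])
```

Choices such as ROI size and the detrending/normalization step are exactly the computational factors the comparison above found to have only a small effect on the final NPS amplitude.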

  8. Determination of important topographic factors for landslide mapping analysis using MLP network.

    PubMed

    Alkhasawneh, Mutasem Sh; Ngah, Umi Kalthum; Tay, Lea Tien; Mat Isa, Nor Ashidi; Al-batah, Mohammad Subhi

    2013-01-01

    Landslide is one of the natural disasters that occur in Malaysia. Topographic factors such as elevation, slope angle, slope aspect, general curvature, plan curvature, and profile curvature are considered as the main causes of landslides. In order to determine the dominant topographic factors in landslide mapping analysis, a study was conducted and presented in this paper. There are three main stages involved in this study. The first stage is the extraction of extra topographic factors. Previous landslide studies had identified mainly six topographic factors. Seven new additional factors have been proposed in this study. They are longitude curvature, tangential curvature, cross section curvature, surface area, diagonal line length, surface roughness, and rugosity. The second stage is the specification of the weight of each factor using two methods. The methods are multilayer perceptron (MLP) network classification accuracy and Zhou's algorithm. At the third stage, the factors with higher weights were used to improve the MLP performance. Out of the thirteen factors, eight factors were considered as important factors, which are surface area, longitude curvature, diagonal length, slope angle, elevation, slope aspect, rugosity, and profile curvature. The classification accuracy of multilayer perceptron neural network has increased by 3% after the elimination of five less important factors.

  9. Determination of Important Topographic Factors for Landslide Mapping Analysis Using MLP Network

    PubMed Central

    Alkhasawneh, Mutasem Sh.; Ngah, Umi Kalthum; Mat Isa, Nor Ashidi; Al-batah, Mohammad Subhi

    2013-01-01

    Landslide is one of the natural disasters that occur in Malaysia. Topographic factors such as elevation, slope angle, slope aspect, general curvature, plan curvature, and profile curvature are considered as the main causes of landslides. In order to determine the dominant topographic factors in landslide mapping analysis, a study was conducted and presented in this paper. There are three main stages involved in this study. The first stage is the extraction of extra topographic factors. Previous landslide studies had identified mainly six topographic factors. Seven new additional factors have been proposed in this study. They are longitude curvature, tangential curvature, cross section curvature, surface area, diagonal line length, surface roughness, and rugosity. The second stage is the specification of the weight of each factor using two methods. The methods are multilayer perceptron (MLP) network classification accuracy and Zhou's algorithm. At the third stage, the factors with higher weights were used to improve the MLP performance. Out of the thirteen factors, eight factors were considered as important factors, which are surface area, longitude curvature, diagonal length, slope angle, elevation, slope aspect, rugosity, and profile curvature. The classification accuracy of multilayer perceptron neural network has increased by 3% after the elimination of five less important factors. PMID:24453846

  10. A Comparison of Factor Score Estimation Methods in the Presence of Missing Data: Reliability and an Application to Nicotine Dependence

    ERIC Educational Resources Information Center

    Estabrook, Ryne; Neale, Michael

    2013-01-01

    Factor score estimation is a controversial topic in psychometrics, and the estimation of factor scores from exploratory factor models has historically received a great deal of attention. However, both confirmatory factor models and the existence of missing data have generally been ignored in this debate. This article presents a simulation study…

  11. A Comprehensive Evaluation System for Military Hospitals' Response Capability to Bio-terrorism.

    PubMed

    Wang, Hui; Jiang, Nan; Shao, Sicong; Zheng, Tao; Sun, Jianzhong

    2015-05-01

    The objective of this study is to establish a comprehensive evaluation system for military hospitals' response capability to bio-terrorism. Literature research and the Delphi method were utilized to establish the system. Questionnaires were designed and used to survey the status quo of 134 military hospitals' response capability to bio-terrorism. The survey indicated that the factor analysis method was suitable for analyzing the comprehensive evaluation system. The constructed evaluation system consisted of five first-class and 16 second-class indexes. Among them, the medical response factor was considered the most important, with a weight coefficient of 0.660, followed in turn by the emergency management factor (0.109), the emergency management consciousness factor (0.093), the hardware support factor (0.078), and the improvement factor (0.059). The constructed comprehensive assessment model and system are scientific and practical.

  12. Interrelation Between Safety Factors and Reliability

    NASA Technical Reports Server (NTRS)

    Elishakoff, Isaac; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    An evaluation was performed to establish relationships between safety factors and reliability. Results obtained show that the use of safety factors is not contradictory to the employment of probabilistic methods. In many cases the safety factors can be directly expressed by the required reliability levels. However, there is a major difference that must be emphasized: whereas safety factors are allocated in an ad hoc manner, the probabilistic approach offers a unified mathematical framework. The establishment of the interrelation between the concepts opens an avenue to specify safety factors based on reliability. In cases where there are several forms of failure, the allocation of safety factors should be based on having the same reliability associated with each failure mode. This immediately suggests that by the probabilistic methods the existing over-design or under-design can be eliminated. The report includes three parts: Part 1, Random Actual Stress and Deterministic Yield Stress; Part 2, Deterministic Actual Stress and Random Yield Stress; Part 3, Both Actual Stress and Yield Stress Are Random.
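The Part 1 case (random actual stress, deterministic yield stress) admits a closed-form link between a central safety factor and reliability. The sketch below is a hypothetical illustration of that link, not the report's own derivation; it assumes the actual stress is normally distributed with coefficient of variation `cov`:

```python
from statistics import NormalDist

def safety_factor_for_reliability(reliability: float, cov: float) -> float:
    """Central safety factor n = y / mu_S needed so that
    P(S <= y) = reliability, for stress S ~ Normal(mu_S, cov * mu_S)
    and deterministic yield stress y (the report's Part 1 setting)."""
    z = NormalDist().inv_cdf(reliability)  # standard-normal quantile
    return 1.0 + z * cov

def reliability_for_safety_factor(n: float, cov: float) -> float:
    """Inverse relation: reliability achieved by central safety factor n."""
    return NormalDist().cdf((n - 1.0) / cov)

# example: 10% coefficient of variation, target reliability 0.999
n = safety_factor_for_reliability(0.999, 0.10)
```

The round trip between the two functions makes the interrelation explicit: specifying a reliability fixes the safety factor, and vice versa.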

  13. New conversion factors between human and automatic readouts of the CDMAM phantom for CR systems

    NASA Astrophysics Data System (ADS)

    Hummel, Johann; Homolka, Peter; Osanna-Elliot, Angelika; Kaar, Marcus; Semtrus, Friedrich; Figl, Michael

    2016-03-01

    Mammography screening demands profound image quality (IQ) assessment to guarantee screening success. The European protocol for the quality control of the physical and technical aspects of mammography screening (EPQCM) suggests a contrast-detail phantom such as the CDMAM phantom to evaluate IQ. For automatic evaluation, software is provided by EUREF. As human and automatic readouts differ systematically, conversion factors were published by the official reference organisation (EUREF). Because we observed a significant difference in these factors for Computed Radiography (CR) systems, we developed an objectifying analysis software that presents the cells containing the gold disks with randomized thickness and rotation. This overcomes the otherwise inevitable learning effect that arises when observers know the position of the disks in advance. Applying this software, 45 CR systems were evaluated and the conversion factors between human and automatic readout determined. The resulting conversion factors were compared with the ones obtained from the two methods published by EUREF. We found our conversion factors to be substantially lower than those suggested by EUREF: 1.21 compared to 1.42 (EUREF EU method) and 1.62 (EUREF UK method) for the 0.1 mm disc diameter, and 1.40 compared to 1.73 (EUREF EU) and 1.83 (EUREF UK) for 0.25 mm, respectively. This can result in a dose increase of up to 90% when either of these factors is used to adjust patient dose to fulfill image quality requirements. This suggests the need for agreement on their proper application and limits the validity of the assessment methods. We therefore stress the need for clear criteria for CR systems based on appropriate studies.

  14. Overview of mycotoxin methods, present status and future needs.

    PubMed

    Gilbert, J

    1999-01-01

    This article reviews current requirements for the analysis for mycotoxins in foods and identifies legislative as well as other factors that are driving development and validation of new methods. New regulatory limits for mycotoxins and analytical quality assurance requirements for laboratories to only use validated methods are seen as major factors driving developments. Three major classes of methods are identified which serve different purposes and can be categorized as screening, official and research. In each case the present status and future needs are assessed. In addition to an overview of trends in analytical methods, some other areas of analytical quality assurance such as participation in proficiency testing and reference materials are identified.

  15. A symmetrical subtraction combined with interpolated values for eliminating scattering from fluorescence EEM data.

    PubMed

    Xu, Jing; Liu, Xiaofei; Wang, Yutian

    2016-08-05

    Parallel factor analysis is a widely used method to extract qualitative and quantitative information about the analyte of interest from a fluorescence excitation-emission matrix containing unknown components. Scattering of large amplitude influences the results of parallel factor analysis, and many methods of eliminating scattering have been proposed, each with its own advantages and disadvantages. The combination of symmetrical subtraction and interpolated values is discussed here; the combination refers both to combining results and to combining methods. Nine methods were used for comparison. The results show that the combination of results gives a better concentration prediction for all the components.
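As a rough sketch of the interpolation half of such approaches (not the authors' code), values inside the Rayleigh scattering band, where the emission wavelength is close to the excitation wavelength, can be replaced by linear interpolation along the emission axis; the 15 nm band half-width below is an assumed parameter:

```python
def remove_scatter(eem, ex_wl, em_wl, width=15.0):
    """Replace values in the Rayleigh scattering band (|em - ex| < width nm)
    of an excitation-emission matrix with values linearly interpolated
    from the band edges along the emission axis.
    eem: list of rows, one row of emission intensities per excitation
    wavelength; missing edges are treated as zero intensity."""
    out = [row[:] for row in eem]
    for i, ex in enumerate(ex_wl):
        # indices inside the scattering band for this excitation row
        band = [j for j, em in enumerate(em_wl) if abs(em - ex) < width]
        if not band:
            continue
        lo, hi = band[0] - 1, band[-1] + 1
        left = out[i][lo] if lo >= 0 else 0.0
        right = out[i][hi] if hi < len(em_wl) else 0.0
        span = len(band) + 1
        for k, j in enumerate(band, start=1):
            t = k / span
            out[i][j] = (1 - t) * left + t * right
    return out
```

A production implementation would handle first- and second-order scatter separately and could leave the band as missing data for PARAFAC instead of interpolating.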

  16. Galerkin-collocation domain decomposition method for arbitrary binary black holes

    NASA Astrophysics Data System (ADS)

    Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.

    2018-05-01

    We present a new computational framework for the double-domain Galerkin-collocation method in the context of the ADM 3+1 approach in numerical relativity. This work enables us to perform high-resolution calculations for initial data sets of two arbitrary black holes. We use the Bowen-York method for binary systems and the puncture method to solve the Hamiltonian constraint. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show convergence of our code for the conformal factor and the ADM mass. We also display features of the conformal factor for different masses, spins and linear momenta.
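The solver loop described (Newton-Raphson on the spectral-mode equations with a direct linear solve) can be sketched generically. This toy version, an illustration rather than the authors' code, uses a forward-difference Jacobian and Gaussian elimination with partial pivoting in place of the LU decomposition, applied to a two-equation system:

```python
def solve_linear(a, b):
    """Solve a x = b by Gaussian elimination with partial pivoting
    (standing in for the LU decomposition used by the code)."""
    n = len(b)
    m = [row[:] + [b_i] for row, b_i in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def newton_raphson(f, x, tol=1e-10, h=1e-7):
    """Newton-Raphson for a vector-valued residual f, with a
    forward-difference Jacobian J[r][c] = d f_r / d x_c."""
    for _ in range(50):
        fx = f(x)
        if max(abs(v) for v in fx) < tol:
            break
        jac = []
        for r in range(len(x)):
            row = []
            for c in range(len(x)):
                xp = x[:]
                xp[c] += h
                row.append((f(xp)[r] - fx[r]) / h)
            jac.append(row)
        dx = solve_linear(jac, [-v for v in fx])  # solve J dx = -F
        x = [xi + di for xi, di in zip(x, dx)]
    return x

# toy system: x^2 + y^2 = 5, x*y = 2, starting near the root (2, 1)
root = newton_raphson(lambda v: [v[0]**2 + v[1]**2 - 5, v[0]*v[1] - 2], [2.5, 0.5])
```

In the actual code the residual vector would be the projected constraint equations evaluated at the collocation points, and the unknowns the spectral coefficients of the conformal factor.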

  17. Estimation of the behavior factor of existing RC-MRF buildings

    NASA Astrophysics Data System (ADS)

    Vona, Marco; Mastroberti, Monica

    2018-01-01

    In recent years, several research groups have studied a new generation of analysis methods for seismic response assessment of existing buildings. Nevertheless, many important developments are still needed in order to define more reliable and effective assessment procedures. Moreover, regarding existing buildings, it should be highlighted that due to the low knowledge level, the linear elastic analysis is the only analysis method allowed. The same codes (such as NTC2008, EC8) consider the linear dynamic analysis with behavior factor as the reference method for the evaluation of seismic demand. This type of analysis is based on a linear-elastic structural model subject to a design spectrum, obtained by reducing the elastic spectrum through a behavior factor. The behavior factor (reduction factor or q factor in some codes) is used to reduce the elastic spectrum ordinate or the forces obtained from a linear analysis in order to take into account the non-linear structural capacities. The behavior factors should be defined based on several parameters that influence the seismic nonlinear capacity, such as mechanical materials characteristics, structural system, irregularity and design procedures. In practical applications, there is still an evident lack of detailed rules and accurate behavior factor values adequate for existing buildings. In this work, some investigations of the seismic capacity of the main existing RC-MRF building types have been carried out. In order to make a correct evaluation of the seismic force demand, actual behavior factor values coherent with force based seismic safety assessment procedure have been proposed and compared with the values reported in the Italian seismic code, NTC08.

  18. On the use of sibling recurrence risks to select environmental factors liable to interact with genetic risk factors.

    PubMed

    Kazma, Rémi; Bonaïti-Pellié, Catherine; Norris, Jill M; Génin, Emmanuelle

    2010-01-01

    Gene-environment interactions are likely to be involved in the susceptibility to multifactorial diseases but are difficult to detect. Available methods usually concentrate on some particular genetic and environmental factors. In this paper, we propose a new method to determine whether a given exposure is susceptible to interact with unknown genetic factors. Rather than focusing on a specific genetic factor, the degree of familial aggregation is used as a surrogate for genetic factors. A test comparing the recurrence risks in sibs according to the exposure of indexes is proposed and its power is studied for varying values of model parameters. The Exposed versus Unexposed Recurrence Analysis (EURECA) is valuable for common diseases with moderate familial aggregation, only when the role of exposure has been clearly outlined. Interestingly, accounting for a sibling correlation for the exposure increases the power of EURECA. An application on a sample ascertained through one index affected with type 2 diabetes is presented where gene-environment interactions involving obesity and physical inactivity are investigated. Association of obesity with type 2 diabetes is clearly evidenced and a potential interaction involving this factor is suggested in Hispanics (P=0.045), whereas a clear gene-environment interaction is evidenced involving physical inactivity only in non-Hispanic whites (P=0.028). The proposed method might be of particular interest before genetic studies to help determine the environmental risk factors that will need to be accounted for to increase the power to detect genetic risk factors and to select the most appropriate samples to genotype.
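The core comparison, recurrence risk in sibs of exposed versus unexposed indexes, can be sketched with a two-proportion z-test. This is a simplified stand-in for EURECA (counts are hypothetical, and the published test additionally models the sibling correlation of the exposure, which this sketch ignores):

```python
from math import erf, sqrt

def recurrence_risk_test(affected_exp, sibs_exp, affected_unexp, sibs_unexp):
    """Two-sided two-proportion z-test comparing sibling recurrence risks
    (affected sibs / total sibs) between exposed and unexposed indexes.
    Simplified illustration of the EURECA idea; the actual method also
    accounts for sibling correlation of the exposure."""
    p1 = affected_exp / sibs_exp
    p2 = affected_unexp / sibs_unexp
    p = (affected_exp + affected_unexp) / (sibs_exp + sibs_unexp)  # pooled
    se = sqrt(p * (1 - p) * (1 / sibs_exp + 1 / sibs_unexp))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
    return z, p_value

# hypothetical counts: 30/100 affected sibs for exposed indexes vs 15/100
z, p_value = recurrence_risk_test(30, 100, 15, 100)
```

A significant difference in recurrence by index exposure is the signal that the exposure may interact with familial (genetic) factors.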

  19. STATISTICAL ANALYSIS OF SPECTROPHOTOMETRIC DETERMINATIONS OF BORON; Estudo Estatistico de Determinacoes Espectrofotometricas de Boro

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lima, F.W.; Pagano, C.; Schneiderman, B.

    1959-07-01

    Boron can be determined quantitatively by absorption spectrophotometry of solutions of the red compound formed by the reaction of boric acid with curcumin. This reaction is affected by various factors, some of which can be detected easily in the data interpretation. Others, however, are more difficult to assess. The application of modern statistical methods to the study of the influence of these factors on the quantitative determination of boron is presented. These methods provide objective ways of establishing the significant effects of the factors involved. (auth)

  20. Influence of fundamental mode fill factor on disk laser output power and laser beam quality

    NASA Astrophysics Data System (ADS)

    Cheng, Zhiyong; Yang, Zhuo; Shao, Xichun; Li, Wei; Zhu, Mengzhen

    2017-11-01

    A three-dimensional numerical model based on the finite element method and the Fox-Li method with angular spectrum diffraction theory is developed to calculate the output power and power density distribution of a Yb:YAG disk laser. We investigate the influence of the fundamental mode fill factor (the ratio of fundamental mode size to pump spot size) on the output power and laser beam quality. Due to aspherical aberration and the soft aperture effect in the laser disk, high beam quality can be achieved only at relatively lower efficiency. The highest output power of the fundamental laser mode is influenced by the fundamental mode fill factor. We also find that the optimal mode fill factor increases with pump spot size.

  1. Analysis of the influencing factors of global energy interconnection development

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; He, Yongxiu; Ge, Sifan; Liu, Lin

    2018-04-01

    Against the background of building a global energy interconnection and achieving green, low-carbon development, and in view of a new round of energy restructuring and ongoing change in energy technology, this paper establishes an index system for the factors influencing global energy interconnection development, based on the present status of that development globally and in China. Subjective and objective weights of the influencing factors were determined separately by the analytic network process and the entropy method, and the two sets of weights were combined by additive integration to give comprehensive weights and a ranking of the factors' influence.
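The objective (entropy) half of the weighting and the additive integration can be sketched as follows; the ANP (subjective) weights are taken as given, and the 50/50 integration coefficient is an assumption, since the abstract does not state it:

```python
from math import log

def entropy_weights(matrix):
    """Objective weights via the entropy method: normalize each indicator
    column to a probability distribution over alternatives, compute its
    information entropy, and weight each indicator by its divergence
    (1 - entropy), renormalized across indicators.
    matrix: rows = alternatives, columns = indicators (positive values)."""
    n, m = len(matrix), len(matrix[0])
    k = 1.0 / log(n)
    divergence = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        probs = [v / total for v in col]
        e = -k * sum(p * log(p) for p in probs if p > 0)
        divergence.append(1.0 - e)  # high divergence -> informative indicator
    s = sum(divergence)
    return [d / s for d in divergence]

def combine_weights(subjective, objective, alpha=0.5):
    """Additive integration of subjective (e.g. ANP) and objective
    (entropy) weights; alpha = 0.5 is an assumed mixing coefficient."""
    return [alpha * ws + (1 - alpha) * wo
            for ws, wo in zip(subjective, objective)]
```

An indicator that takes the same value for every alternative carries no information and receives zero entropy weight, which is the intended behavior of the method.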

  2. [Success factors in hospital management].

    PubMed

    Heberer, M

    1998-12-01

    The hospital environment of most Western countries is currently undergoing dramatic changes. Competition among hospitals is increasing, and economic issues have become decisive factors for the allocation of medical care. Hospitals therefore require management tools to respond to these changes adequately. The balanced scorecard is a method of enabling development and implementation of a business strategy that equally respects the financial requirements, the needs of the customers, process development, and organizational learning. This method was used to derive generally valid success factors for hospital management based on an analysis of an academic hospital in Switzerland. Strategic management, the focus of medical services, customer orientation, and integration of professional groups across the hospital value chain were identified as success factors for hospital management.

  3. Using a fuzzy DEMATEL method for analyzing the factors influencing subcontractors selection

    NASA Astrophysics Data System (ADS)

    Kozik, Renata

    2016-06-01

    Subcontracting is a long-standing practice in the construction industry. This form of project organization, if managed properly, can provide better quality and reductions in project time and costs. Subcontractor selection is a multi-criterion problem and can be determined by many factors. Identifying the importance of each of them, as well as the direction of cause-effect relations between various types of factors, can improve the management process. Their values can be evaluated on the basis of available expert opinions with the application of a fuzzy multi-stage grading scale. In this paper, the fuzzy DEMATEL method is recommended for analyzing the relationships between the factors affecting subcontractor selection.
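The crisp core of DEMATEL (applied after the fuzzy expert judgments have been defuzzified, a step not shown here) can be sketched as follows; the example direct-influence matrix is hypothetical:

```python
def dematel(direct):
    """Crisp DEMATEL core: normalize the direct-influence matrix D,
    form the total-relation matrix T = D (I - D)^-1, and return
    prominence (d + r, overall importance) and relation (d - r,
    cause vs. effect) for each factor."""
    n = len(direct)
    s = max(sum(row) for row in direct)  # normalization constant
    d = [[v / s for v in row] for row in direct]
    # invert (I - D) by Gauss-Jordan elimination
    m = [[(1.0 if i == j else 0.0) - d[i][j] for j in range(n)] for i in range(n)]
    inv = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = m[col][col]
        for j in range(n):
            m[col][j] /= piv
            inv[col][j] /= piv
        for r in range(n):
            if r != col:
                f = m[r][col]
                for j in range(n):
                    m[r][j] -= f * m[col][j]
                    inv[r][j] -= f * inv[col][j]
    t = [[sum(d[i][k] * inv[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    row_sums = [sum(t[i]) for i in range(n)]                       # influence given
    col_sums = [sum(t[i][j] for i in range(n)) for j in range(n)]  # influence received
    prominence = [a + b for a, b in zip(row_sums, col_sums)]
    relation = [a - b for a, b in zip(row_sums, col_sums)]
    return prominence, relation

# hypothetical 0-4 scale judgments for three selection factors
prominence, relation = dematel([[0, 3, 2], [1, 0, 2], [1, 2, 0]])
```

Factors with positive relation values are net causes; those with negative values are net effects, which is the cause-effect direction the paper analyzes.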

  4. Multi-factor evaluation indicator method for the risk assessment of atmospheric and oceanic hazard group due to the attack of tropical cyclones

    NASA Astrophysics Data System (ADS)

    Qi, Peng; Du, Mei

    2018-06-01

    China's southeast coastal areas frequently suffer from storm surge due to the attack of tropical cyclones (TCs) every year. Hazards induced by TCs are complex, such as strong wind, huge waves, storm surge, heavy rain and floods. These atmospheric and oceanic hazards cause serious disasters and substantial economic losses. This paper, from the perspective of the hazard group, sets up a multi-factor evaluation method for the risk assessment of TC hazards using historical extreme data of the concerned atmospheric and oceanic elements. Based on the natural hazard dynamic process, the multi-factor indicator system is composed of nine natural hazard factors representing intensity and frequency, respectively. Contributing to the indicator system, in order of importance, are the maximum wind speed of TCs, attack frequency of TCs, maximum surge height, maximum wave height, frequency of gusts ≥ Scale 8, rainstorm intensity, maximum tidal range, rainstorm frequency, and sea-level rise rate. The first four factors are the most important; their weights exceed 10% in the indicator system. After normalization, all the single-hazard factors are superposed by multiplying by their weights to generate a superposed TC hazard. The multi-factor evaluation indicator method was applied to the risk assessment of the typhoon-induced atmospheric and oceanic hazard group in typhoon-prone southeast coastal cities of China.
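The normalization-and-superposition step can be illustrated with a minimal sketch; the factor names, values and two-location example are hypothetical, and in practice the abstract's nine factors and their derived weights would be used:

```python
def superposed_hazard(factors, weights):
    """Min-max normalize each single-hazard factor across locations,
    multiply by its weight, and sum to a composite TC hazard index.
    factors: dict name -> list of raw values (one per coastal location);
    weights: dict name -> weight, summing to 1."""
    names = list(factors)
    n = len(factors[names[0]])
    index = [0.0] * n
    for name in names:
        vals = factors[name]
        lo, hi = min(vals), max(vals)
        for i, v in enumerate(vals):
            norm = (v - lo) / (hi - lo) if hi > lo else 0.0
            index[i] += weights[name] * norm
    return index

# hypothetical: two locations, two of the nine factors
risk = superposed_hazard({'max_wind': [10.0, 50.0], 'surge': [1.0, 3.0]},
                         {'max_wind': 0.6, 'surge': 0.4})
```

The location scoring highest on every normalized factor receives the maximum composite index of 1.0 under this scheme.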

  5. Development and Implementation of a Coagulation Factor Testing Method Utilizing Autoverification in a High-volume Clinical Reference Laboratory Environment

    PubMed Central

    Riley, Paul W.; Gallea, Benoit; Valcour, Andre

    2017-01-01

    Background: Testing coagulation factor activities requires that multiple dilutions be assayed and analyzed to produce a single result. The slope of the line created by plotting measured factor concentration against sample dilution is evaluated to discern the presence of inhibitors giving rise to nonparallelism. Moreover, samples producing results on initial dilution falling outside the analytic measurement range of the assay must be tested at additional dilutions to produce reportable results. Methods: The complexity of this process motivated a large clinical reference laboratory to develop advanced computer algorithms with automated reflex testing rules to complete coagulation factor analysis. A method was developed for autoverification of coagulation factor activity using expert rules built on an off-the-shelf, commercially available data manager system integrated into an automated coagulation platform. Results: Here, we present an approach allowing for the autoverification and reporting of factor activity results with greatly diminished technologist effort. Conclusions: To the best of our knowledge, this is the first report of its kind providing a detailed procedure for implementation of autoverification expert rules as applied to coagulation factor activity testing. Advantages of this system include ease of training for new operators, minimization of technologist time spent, reduction of staff fatigue, minimization of unnecessary reflex tests, optimization of turnaround time, and assurance of the consistency of the testing and reporting process. PMID:28706751

  6. Investigating risk factors for slips, trips and falls in New Zealand residential construction using incident-centred and incident-independent methods.

    PubMed

    Bentley, Tim A; Hide, Sophie; Tappin, David; Moore, Dave; Legg, Stephen; Ashby, Liz; Parker, Richard

    2006-01-15

    Slip, trip and fall (STF) incidents, particularly falls from a height, are a leading cause of injury in the New Zealand residential construction industry. The most common origins of falls from a height in this sector are ladders, scaffolding and roofs, while slipping is the most frequent fall initiating event category. The study aimed to provide detailed information on construction industry STF risk factors for high-risk tasks, work equipment and environments, as identified from an earlier analysis of STF claims data, together with information to be used in the development of interventions to reduce STF risk in New Zealand residential construction. The study involved the use of both incident-centred and incident-independent methods of investigation, including detailed follow-up investigations of incidents and observations and interviews with workers on construction sites, to provide data on a wide range of risk factors. A large number of risk factors for residential construction STFs were identified, including factors related to the work environment, tasks and the use and availability of appropriate height work equipment. The different methods of investigation produced complementary information on factors related to equipment design and work organization, which underlie some of the site conditions and work practices identified as key risk factors for residential construction STFs. A conceptual systems model of residential construction STF risk is presented.

  7. Lipase, protease, and biofilm as the major virulence factors in staphylococci isolated from acne lesions.

    PubMed

    Saising, Jongkon; Singdam, Sudarat; Ongsakul, Metta; Voravuthikunchai, Supayang Piyawan

    2012-08-01

    Staphylococci cause infections in association with a number of bacterial virulence factors. Extracellular enzymes play an important role in staphylococcal pathogenesis, and biofilm is also known to be associated with virulence. In this study, 149 staphylococcal isolates from acne lesions were investigated for their virulence factors, including lipase, protease, and biofilm formation. Coagulase-negative staphylococci were demonstrated to present lipase and protease activities more often than coagulase-positive staphylococci. A microtiter plate method (quantitative) and a Congo red agar method (qualitative) were comparatively employed to assess biofilm formation. Biofilm-forming ability was detected in the coagulase-negative group (97.7% by the microtiter plate method and 84.7% by the Congo red agar method) more frequently than in coagulase-positive organisms (68.8% and 62.5%, respectively). This study clearly confirms an important role for biofilm in coagulase-negative staphylococci, which are of serious concern as infectious agents in patients with acne and implanted medical devices. The Congo red agar method proved to be an easy way to quickly detect biofilm producers. Sensitivity of the Congo red agar method was 85.54% and 68.18%, and accuracy was 84.7% and 62.5%, in coagulase-negative and coagulase-positive staphylococci, respectively, while specificity was 50% in both groups. The results clearly demonstrated that a higher percentage of coagulase-negative staphylococci isolated from acne lesions exhibited lipase and protease activities, as well as biofilm formation, than coagulase-positive staphylococci.
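The diagnostic-performance figures quoted (sensitivity, specificity and accuracy of the Congo red agar screen against the microtiter plate reference) reduce to simple ratios over a confusion table; the counts below are hypothetical, chosen only to illustrate the arithmetic:

```python
def screen_performance(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy of a qualitative screen
    (e.g. Congo red agar) against a reference method (microtiter plate).
    tp/fn: reference-positive isolates the screen did/did not detect;
    tn/fp: reference-negative isolates the screen did/did not clear."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# hypothetical counts for illustration
sens, spec, acc = screen_performance(85, 15, 50, 50)
```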

  8. Relationship between stress-related psychosocial work factors and suboptimal health among Chinese medical staff: a cross-sectional study.

    PubMed

    Liang, Ying-Zhi; Chu, Xi; Meng, Shi-Jiao; Zhang, Jie; Wu, Li-Juan; Yan, Yu-Xiang

    2018-03-06

    The study aimed to develop and validate a model to measure psychosocial factors at work among medical staff in China based on confirmatory factor analysis (CFA). The second aim of the current study was to clarify the association between stress-related psychosocial work factors and suboptimal health status. The cross-sectional study was conducted using clustered sampling method. Xuanwu Hospital, a 3A grade hospital in Beijing. Nine hundred and fourteen medical staff aged over 40 years were sampled. Seven hundred and ninety-seven valid questionnaires were collected and used for further analyses. The sample included 94% of the Han population. The Copenhagen Psychosocial Questionnaire (COPSOQ) and the Suboptimal Health Status Questionnaires-25 were used to assess the psychosocial factors at work and suboptimal health status, respectively. CFA was conducted to establish the evaluating method of COPSOQ. A multivariate logistic regression model was used to estimate the relationship between suboptimal health status and stress-related psychosocial work factors among Chinese medical staff. There was a strong correlation among the five dimensions of COPSOQ based on the first-order factor model. Then, we established two second-order factors including negative and positive psychosocial work stress factors to evaluate psychosocial factors at work, and the second-order factor model fit well. The high score in negative (OR (95% CI)=1.47 (1.34 to 1.62), P<0.001) and positive (OR (95% CI)=0.96 (0.94 to 0.98), P<0.001) psychosocial work factors increased and decreased the risk of suboptimal health, respectively. This relationship remained statistically significant after adjusting for confounders and when using different cut-offs of suboptimal health status. Among medical staff, the second-order factor model was a suitable method to evaluate the COPSOQ. 
The negative and positive psychosocial work stress factors might be risk and protective factors for suboptimal health, respectively. Moreover, negative psychosocial work stress was the factor most strongly associated with suboptimal health.

  9. Relationship between stress-related psychosocial work factors and suboptimal health among Chinese medical staff: a cross-sectional study

    PubMed Central

    Meng, Shi-Jiao; Zhang, Jie; Wu, Li-Juan; Yan, Yu-Xiang

    2018-01-01

    Objectives The study aimed to develop and validate a model to measure psychosocial factors at work among medical staff in China based on confirmatory factor analysis (CFA). The second aim of the current study was to clarify the association between stress-related psychosocial work factors and suboptimal health status. Design The cross-sectional study was conducted using clustered sampling method. Setting Xuanwu Hospital, a 3A grade hospital in Beijing. Participants Nine hundred and fourteen medical staff aged over 40 years were sampled. Seven hundred and ninety-seven valid questionnaires were collected and used for further analyses. The sample included 94% of the Han population. Main outcome measures The Copenhagen Psychosocial Questionnaire (COPSOQ) and the Suboptimal Health Status Questionnaires-25 were used to assess the psychosocial factors at work and suboptimal health status, respectively. CFA was conducted to establish the evaluating method of COPSOQ. A multivariate logistic regression model was used to estimate the relationship between suboptimal health status and stress-related psychosocial work factors among Chinese medical staff. Results There was a strong correlation among the five dimensions of COPSOQ based on the first-order factor model. Then, we established two second-order factors including negative and positive psychosocial work stress factors to evaluate psychosocial factors at work, and the second-order factor model fit well. The high score in negative (OR (95% CI)=1.47 (1.34 to 1.62), P<0.001) and positive (OR (95% CI)=0.96 (0.94 to 0.98), P<0.001) psychosocial work factors increased and decreased the risk of suboptimal health, respectively. This relationship remained statistically significant after adjusting for confounders and when using different cut-offs of suboptimal health status. Conclusions Among medical staff, the second-order factor model was a suitable method to evaluate the COPSOQ. 
The negative and positive psychosocial work stress factors might be risk and protective factors for suboptimal health, respectively. Moreover, negative psychosocial work stress was the factor most strongly associated with suboptimal health. PMID:29511008

  10. Ultimate pier and contraction scour prediction in cohesive soils at selected bridges in Illinois

    USGS Publications Warehouse

    Straub, Timothy D.; Over, Thomas M.; Domanski, Marian M.

    2013-01-01

    The Scour Rate In COhesive Soils-Erosion Function Apparatus (SRICOS-EFA) method includes an ultimate scour prediction that is the equilibrium maximum pier and contraction scour of cohesive soils over time. The purpose of this report is to present the results of testing the ultimate pier and contraction scour methods for cohesive soils on 30 bridge sites in Illinois. Comparison of the ultimate cohesive and noncohesive methods, along with the Illinois Department of Transportation (IDOT) cohesive soil reduction-factor method and measured scour are presented. Also, results of the comparison of historic IDOT laboratory and field values of unconfined compressive strength of soils (Qu) are presented. The unconfined compressive strength is used in both ultimate cohesive and reduction-factor methods, and knowing how the values from field methods compare to the laboratory methods is critical to the informed application of the methods. On average, the non-cohesive method results predict the highest amount of scour, followed by the reduction-factor method results; and the ultimate cohesive method results predict the lowest amount of scour. The 100-year scour predicted for the ultimate cohesive, noncohesive, and reduction-factor methods for each bridge site and soil are always larger than observed scour in this study, except 12% of predicted values that are all within 0.4 ft of the observed scour. The ultimate cohesive scour prediction is smaller than the non-cohesive scour prediction method for 78% of bridge sites and soils. Seventy-six percent of the ultimate cohesive predictions show a 45% or greater reduction from the non-cohesive predictions that are over 10 ft. Comparing the ultimate cohesive and reduction-factor 100-year scour predictions methods for each bridge site and soil, the scour predicted by the ultimate cohesive scour prediction method is less than the reduction-factor 100-year scour prediction method for 51% of bridge sites and soils. 
Critical shear stress remains a needed parameter in the ultimate scour prediction for cohesive soils. The unconfined soil compressive strength measured by IDOT in the laboratory was found to provide a good prediction of critical shear stress, as measured by using the erosion function apparatus in a previous study. Because laboratory Qu analyses are time-consuming and expensive, the ability of field-measured Rimac data to estimate unconfined soil strength in the critical shear–soil strength relation was tested. A regression analysis was completed using a historic IDOT dataset containing 366 data pairs of laboratory Qu and field Rimac measurements from common sites with cohesive soils. The resulting equations provide a point prediction of Qu, given any Rimac value with the 90% confidence interval. The prediction equations are not significantly different from the identity Qu = Rimac. The alternative predictions of ultimate cohesive scour presented in this study assume Qu will be estimated using Rimac measurements that include computed uncertainty. In particular, the ultimate cohesive predicted scour is greater than observed scour for the entire 90% confidence interval range for predicting Qu at the bridges and soils used in this study, with the exception of the six predicted values that are all within 0.6 ft of the observed scour.
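The regression step described above — a point prediction of Qu from a Rimac value together with an interval — can be sketched with ordinary least squares. This is a generic simple-regression prediction interval on made-up data, not the report's fitted Qu-Rimac equations; the function name and inputs are hypothetical.

```python
import numpy as np
from scipy import stats

def ols_prediction_interval(x, y, x_new, level=0.90):
    """Point prediction and prediction interval for simple OLS y = b0 + b1*x."""
    n = len(x)
    b1, b0 = np.polyfit(x, y, 1)              # slope, intercept
    resid = y - (b0 + b1 * x)
    s2 = resid @ resid / (n - 2)              # residual variance
    xbar, sxx = x.mean(), ((x - x.mean()) ** 2).sum()
    se = np.sqrt(s2 * (1 + 1 / n + (x_new - xbar) ** 2 / sxx))
    t = stats.t.ppf(0.5 + level / 2, n - 2)   # two-sided t quantile
    yhat = b0 + b1 * x_new
    return yhat, yhat - t * se, yhat + t * se
```

A prediction equation close to the identity line, as the report describes, would give `b1` near 1 and `b0` near 0 on Qu-Rimac pairs.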

  11. Multi-PSF fusion in image restoration of range-gated systems

    NASA Astrophysics Data System (ADS)

    Wang, Canjin; Sun, Tao; Wang, Tingfeng; Miao, Xikui; Wang, Rui

    2018-07-01

For the task of image restoration, an accurate estimate of the degrading PSF/kernel is the prerequisite for recovering a visually superior image. The imaging process of a range-gated imaging system in the atmosphere is associated with many factors, such as backscattering, background radiation, the diffraction limit and the vibration of the platform. On one hand, because it is difficult to construct models for all factors, the kernels from physical-model-based methods are not strictly accurate or practical. On the other hand, there are few strong edges in these images, which introduces significant errors into most image-feature-based methods. Since different methods focus on different formation factors of the kernel, their results often complement each other. We therefore propose an approach that combines the physical model with image features. With a fusion strategy based on the GCRF (Gaussian Conditional Random Fields) framework, we obtain a final kernel that is closer to the actual one. To address the difficulty of obtaining ground-truth images, we then propose a semi-data-driven fusion method in which different data sets are used to train the fusion parameters. Finally, a semi-blind restoration strategy based on the EM (Expectation Maximization) and RL (Richardson-Lucy) algorithms is proposed. Our method not only models how the laser is transferred through the atmosphere and imaged on the ICCD (Intensified CCD) plane, but also quantifies other unknown degradation factors using image-based methods, revealing how multiple kernel elements interact with each other. The experimental results demonstrate that our method achieves better performance than state-of-the-art restoration approaches.
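The RL step named in the abstract admits a compact sketch. Below is the textbook Richardson-Lucy deconvolution (multiplicative EM updates) in NumPy/SciPy — not the paper's semi-blind GCRF-fused variant; the function name and defaults are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=100, eps=1e-12):
    """Plain Richardson-Lucy deconvolution.

    Each iteration re-blurs the current estimate, compares it to the
    observation, and applies a multiplicative correction (EM update)."""
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)          # data / model mismatch
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

Given a blurred point source and the blurring PSF, the iteration concentrates flux back toward the original point while preserving total intensity.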

  12. A propensity score approach to correction for bias due to population stratification using genetic and non-genetic factors.

    PubMed

    Zhao, Huaqing; Rebbeck, Timothy R; Mitra, Nandita

    2009-12-01

Confounding due to population stratification (PS) arises when differences in both allele and disease frequencies exist in a population of mixed racial/ethnic subpopulations. Genomic control, structured association, principal components analysis (PCA), and multidimensional scaling (MDS) approaches have been proposed to address this bias using genetic markers. However, confounding due to PS can also be due to non-genetic factors. Propensity scores are widely used to address confounding in observational studies but have not been adapted to deal with PS in genetic association studies. We propose a genomic propensity score (GPS) approach to correct for bias due to PS that considers both genetic and non-genetic factors. We compare the GPS method with PCA and MDS using simulation studies. Our results show that GPS can adequately adjust and consistently correct for bias due to PS. Under no/mild, moderate, and severe PS, GPS yielded estimates with bias close to 0 (mean=-0.0044, standard error=0.0087). Under moderate or severe PS, the GPS method consistently outperforms the PCA method in terms of bias, coverage probability (CP), and type I error. Under moderate PS, the GPS method consistently outperforms the MDS method in terms of CP. PCA maintains relatively high power compared to both MDS and GPS methods under the simulated situations. GPS and MDS are comparable in terms of statistical properties such as bias, type I error, and power. The GPS method provides a novel and robust tool for obtaining less-biased estimates of genetic associations that can consider both genetic and non-genetic factors. © 2009 Wiley-Liss, Inc.
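As a sketch of the idea, a genomic propensity score can be formed as the fitted probability of subpopulation membership given genetic markers plus non-genetic covariates, and then used as an adjustment covariate. The snippet below uses a minimal hand-rolled logistic regression on simulated data; it illustrates the balancing-score construction only, not the authors' implementation, and all names are hypothetical.

```python
import numpy as np

def logistic_fit(X, y, n_iter=200, lr=0.1):
    """Simple gradient-ascent logistic regression; X includes an intercept column."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

def genomic_propensity_scores(markers, covariates, subpop):
    """Estimate P(subpopulation | markers, covariates) as a scalar balancing score."""
    X = np.column_stack([np.ones(len(subpop)), markers, covariates])
    w = logistic_fit(X, subpop)
    return 1.0 / (1.0 + np.exp(-X @ w))
```

Downstream, the score (or strata of it) would enter the disease model as a covariate, which is the usual propensity-score adjustment.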

  13. An underdamped stochastic resonance method with stable-state matching for incipient fault diagnosis of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Lei, Yaguo; Qiao, Zijian; Xu, Xuefang; Lin, Jing; Niu, Shantao

    2017-09-01

Most traditional overdamped monostable, bistable and even tristable stochastic resonance (SR) methods have three shortcomings in weak characteristic extraction: (1) their potential structures, characterized by a single stable-state type, are insufficient to match the complicated and diverse mechanical vibration signals; (2) they are vulnerable to interference from multiscale noise and depend heavily on highpass filters whose parameters are selected subjectively, possibly resulting in false detection; and (3) their rescaling factors are generally fixed as constants, thereby ignoring the synergistic effect among vibration signals, potential structures and rescaling factors. These three shortcomings have limited the enhancement ability of SR. To explore the SR potential, this paper initially investigates SR in a multistable system by calculating its output spectral amplification, further analyzes its output frequency response numerically, then examines the effect of both damping and rescaling factors on output responses and finally presents a promising underdamped SR method with stable-state matching for incipient bearing fault diagnosis. This method has three advantages: (1) the diversity of stable-state types in a multistable potential makes it easy to match various vibration signals; (2) the underdamped multistable SR, equivalent to a moving nonlinear bandpass filter dependent on the rescaling factors, is able to suppress the multiscale noise; and (3) the synergistic effect among vibration signals, potential structures and rescaling and damping factors is achieved using quantum genetic algorithms whose fitness functions are a new weighted signal-to-noise ratio (WSNR) instead of the SNR. Therefore, the proposed method is expected to possess good enhancement ability. Simulated and experimental data of rolling element bearings demonstrate its effectiveness.
The comparison results show that the proposed method is able to obtain a higher amplitude at the target frequency and a larger output WSNR, and performs better than traditional SR methods.
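The noise-induced hopping that SR methods exploit can be demonstrated with a minimal Euler-Maruyama integration of a driven underdamped bistable oscillator. This is a generic simulation, not the paper's multistable system, and every parameter value below is illustrative: a sub-threshold drive cannot escape a potential well on its own, while added noise produces the interwell transitions that carry the weak signal.

```python
import numpy as np

def underdamped_bistable(T=2000.0, dt=0.05, gamma=0.5, a=1.0, b=1.0,
                         A=0.3, f0=0.01, D=0.0, seed=0):
    """Euler-Maruyama integration of
       x'' = -gamma*x' + a*x - b*x**3 + A*cos(2*pi*f0*t) + sqrt(2*D)*xi(t)
    for the double-well potential U(x) = -a*x**2/2 + b*x**4/4; returns x(t)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x, v = -1.0, 0.0                       # start at the left well minimum
    traj = np.empty(n)
    for i in range(n):
        t = i * dt
        force = a * x - b * x ** 3 + A * np.cos(2 * np.pi * f0 * t)
        v += dt * (-gamma * v + force) + np.sqrt(2 * D * dt) * rng.standard_normal()
        x += dt * v
        traj[i] = x
    return traj
```

With `D=0` the trajectory stays in the left well (`A` is below the static escape threshold of about 0.385 for `a=b=1`); with moderate noise the particle hops between wells in sympathy with the drive.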

  14. Prevalence and factors affecting use of long acting and permanent contraceptive methods in Jinka town, Southern Ethiopia: a cross sectional study.

    PubMed

    Mekonnen, Getachew; Enquselassie, Fikre; Tesfaye, Gezahegn; Semahegn, Agumasie

    2014-01-01

In Ethiopia, knowledge of contraceptive methods is high, though the contraceptive prevalence rate is low. This study aimed to assess the prevalence and associated factors of long-acting and permanent contraceptive methods in Jinka town, southern Ethiopia. A community-based cross-sectional survey was conducted from March to April 2008 to assess the prevalence and factors affecting utilization of long-acting and permanent contraceptive methods. Eight hundred child-bearing-age women participated in the quantitative study, and 32 purposively selected focus group discussants participated in the qualitative study. Face-to-face interviews were used for data collection. Data were analyzed with SPSS version 13.0 statistical software. Descriptive statistics and logistic regression were computed to analyze the data. The prevalence of long-acting and permanent contraceptive methods was 7.3%. Three-fourths (76.1%) of the women had ever heard of implants, and implants (28, 50%) were the most widely used method. Almost two-thirds of women had the intention to use long-acting and permanent methods. The overall prevalence of long-acting and permanent contraceptive methods was low. Knowledge of contraceptives and age of women had a significant association with the use of long-acting and permanent contraceptive methods. Extensive health information should be provided.

  15. Teaching learning methods of an entrepreneurship curriculum

    PubMed Central

    ESMI, KERAMAT; MARZOUGHI, RAHMATALLAH; TORKZADEH, JAFAR

    2015-01-01

Introduction One of the most significant elements of entrepreneurship curriculum design is the teaching-learning methods, which play a key role in studies of such a curriculum. Teaching methods, as systematic, organized and logical ways of providing lessons, should be consistent with entrepreneurship goals and contents, and should also be developed according to the learners’ needs. Therefore, the current study aimed to introduce appropriate, modern, and effective methods of teaching entrepreneurship and to validate them. Methods This is a mixed-method research of a sequential exploratory kind conducted in two stages: a) developing teaching methods of the entrepreneurship curriculum, and b) validating the developed framework. Data were collected through “triangulation” (study of documents, investigation of theoretical bases and the literature, and semi-structured interviews with key experts). Since the literature on this topic is very rich, and the views of the key experts are vast, directed and summative content analysis was used. In the second stage, qualitative credibility of the research findings was obtained using qualitative validation criteria (credibility, confirmability, and transferability) and applying various techniques. In addition, to make sure that the qualitative part was reliable, a reliability test was used. Quantitative validation of the developed framework was conducted utilizing exploratory and confirmatory factor analysis and Cronbach’s alpha. The data were gathered by distributing a three-aspect questionnaire (direct-presentation, interactive, and practical-operational teaching methods) with 29 items among 90 curriculum scholars. The target population was selected by means of purposive sampling and a representative sample.
Results Results obtained from exploratory factor analysis showed that a three-factor structure is appropriate for describing the elements of the teaching-learning methods of an entrepreneurship curriculum. Moreover, the value of the Kaiser-Meyer-Olkin measure of sampling adequacy equaled 0.72 and the value of Bartlett’s test of homogeneity of variances was significant at the 0.0001 level. Except for the internship element, the rest had a factor load higher than 0.3. Also, the results of confirmatory factor analysis showed the appropriateness of the model, and the criteria for qualitative accreditation were acceptable. Conclusion The developed model can help instructors select an appropriate method of entrepreneurship teaching, and it can also make sure that the teaching is on the right path. Moreover, the model is comprehensive and includes all the effective teaching methods in entrepreneurship education. It is also based on the qualities, conditions, and requirements of Higher Education Institutions in the Iranian cultural environment. PMID:26457314

  16. Brief Symptom Inventory Factor Structure in Antisocial Adolescents: Implications for Juvenile Justice

    ERIC Educational Resources Information Center

    Whitt, Ahmed; Howard, Matthew O.

    2012-01-01

    Objectives: The Brief Symptom Inventory (BSI) is widely used in juvenile justice settings; however, little is known regarding its factor structure in antisocial youth. The authors evaluated the BSI factor structure in a state residential treatment population. Methods: 707 adolescents completed the BSI. Exploratory and confirmatory factor analyses…

  17. How Factor Analysis Can Be Used in Classification.

    ERIC Educational Resources Information Center

    Harman, Harry H.

    This is a methodological study that suggests a taxometric technique for objective classification of yeasts. It makes use of the minres method of factor analysis and groups strains of yeast according to their factor profiles. The similarities are judged in the higher-dimensional space determined by the factor analysis, but otherwise rely on the…

  18. Social and Individual Frame Factors in L2 Learning: Comparative Aspects.

    ERIC Educational Resources Information Center

    Ekstrand, Lars H.

    A large number of factors are considered in their role in second language learning. Individual factors include language aptitude, personality, attitudes and motivation, and the role of the speaker's native language. Teacher factors involve the method of instruction, the sex of the teacher, and a teacher's training and competence, while…

  19. Carbon dioxide emission factors for U.S. coal by origin and destination

    USGS Publications Warehouse

    Quick, J.C.

    2010-01-01

This paper describes a method that uses published data to calculate locally robust CO2 emission factors for U.S. coal. The method is demonstrated by calculating CO2 emission factors by coal origin (223 counties, in 1999) and destination (479 power plants, in 2005). Locally robust CO2 emission factors should improve the accuracy and verification of greenhouse gas emission measurements from individual coal-fired power plants. Based largely on the county of origin, average emission factors for U.S. lignite, subbituminous, bituminous, and anthracite coal produced during 1999 were 92.97, 91.97, 88.20, and 98.91 kg CO2/GJ (gross), respectively. However, greater variation is observed within these rank classes than between them, which limits the reliability of CO2 emission factors specified by coal rank. Emission factors calculated by destination (power plant) showed greater variation than those listed in the Emissions & Generation Resource Integrated Database (eGRID), which exhibit an unlikely uniformity that is inconsistent with the natural variation of CO2 emission factors for U.S. coal. © 2010 American Chemical Society.
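Applying an emission factor is simple arithmetic: gross heat input in GJ times kg CO2 per GJ. The sketch below uses the rank-average factors quoted in the abstract; the tonnage and heating value in the example are made-up inputs, and the function name is illustrative.

```python
# Rank-average gross emission factors from the abstract, kg CO2 per GJ (gross),
# for U.S. coal produced in 1999.
EMISSION_FACTOR = {
    "lignite": 92.97,
    "subbituminous": 91.97,
    "bituminous": 88.20,
    "anthracite": 98.91,
}

def co2_emissions_tonnes(coal_tonnes, gross_heating_value_gj_per_t, rank):
    """CO2 emitted (tonnes) from burning coal of the given rank."""
    heat_input_gj = coal_tonnes * gross_heating_value_gj_per_t
    return heat_input_gj * EMISSION_FACTOR[rank] / 1000.0  # kg -> tonnes
```

For example, 1000 t of bituminous coal at an assumed 27 GJ/t gross gives 27 000 GJ × 88.20 kg/GJ ≈ 2381 t CO2. The abstract's point is that county-level factors are more reliable than these rank averages.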

  20. Design of a transverse-flux permanent-magnet linear generator and controller for use with a free-piston stirling engine

    NASA Astrophysics Data System (ADS)

    Zheng, Jigui; Huang, Yuping; Wu, Hongxing; Zheng, Ping

    2016-07-01

Transverse-flux machines offer high efficiency and have been applied in Stirling-engine permanent-magnet synchronous linear generator systems; however, their large-scale application is restricted by a low power factor and a complex manufacturing process. A novel cylindrical, non-overlapping, transverse-flux, permanent-magnet linear motor (TFPLM) is investigated, and a structure with a high power factor and lower process complexity is developed. The impact of the magnetic leakage factor on the power factor is discussed and, using a Finite Element Analysis (FEA) model of the Stirling engine and TFPLM, an optimization method for the electromagnetic design of the TFPLM is proposed based on the magnetic leakage factor. The relation between power factor and structural parameters is investigated, and a structural-parameter optimization method is proposed that takes maximum power factor as its goal. Finally, a test bench was built, starting and generating experiments were performed, and good agreement between simulation and experiment was achieved. The power factor is improved and the process complexity is decreased. This research provides guidance for the design of high-power-factor permanent-magnet linear generators.

  1. Aquifer sensitivity to pesticide leaching: Testing a soils and hydrogeologic index method

    USGS Publications Warehouse

    Mehnert, E.; Keefer, D.A.; Dey, W.S.; Wehrmann, H.A.; Wilson, S.D.; Ray, C.

    2005-01-01

For years, researchers have sought index and other methods to predict aquifer sensitivity and vulnerability to nonpoint pesticide contamination. In 1995, an index method and map were developed to define aquifer sensitivity to pesticide leaching based on a combination of soil and hydrogeologic factors. The soil factor incorporated three soil properties: hydraulic conductivity, amount of organic matter within individual soil layers, and drainage class. These properties were obtained from a digital soil association map. The hydrogeologic factor was depth to uppermost aquifer material. To test this index method, a shallow ground water monitoring well network was designed, installed, and sampled in Illinois. The monitoring wells had a median depth of 7.6 m and were located adjacent to corn and soybean fields where the only known sources of pesticides were those used in normal agricultural production. From September 1998 through February 2001, 159 monitoring wells were sampled for 14 pesticides but no pesticide metabolites. Samples were collected and analyzed to assess the distribution of pesticide occurrence across three units of aquifer sensitivity. Pesticides were detected in 18% of all samples and nearly uniformly from samples from the three units of aquifer sensitivity. The new index method did not predict pesticide occurrence because occurrence was not dependent on the combined soil and hydrogeologic factors. However, pesticide occurrence was dependent on the tested hydrogeologic factor and was three times higher in areas where the depth to the uppermost aquifer was <6 m than in areas where the depth to the uppermost aquifer was 6 to <15 m. Copyright © 2005 National Ground Water Association.

  2. A neuro-data envelopment analysis approach for optimization of uncorrelated multiple response problems with smaller the better type controllable factors

    NASA Astrophysics Data System (ADS)

    Bashiri, Mahdi; Farshbaf-Geranmayeh, Amir; Mogouie, Hamed

    2013-11-01

In this paper, a new method is proposed to optimize a multi-response optimization problem based on the Taguchi method for processes where the controllable factors are smaller-the-better (STB)-type variables and the analyzer desires an optimal solution with a smaller amount of controllable factors. In such processes, the overall output quality of the product should be maximized while the usage of the process inputs, the controllable factors, should be minimized. Since all possible combinations of factor levels are not considered in the Taguchi method, the response values of the possible unpracticed treatments are estimated using an artificial neural network (ANN). The neural network is tuned by the central composite design (CCD) and the genetic algorithm (GA). Then data envelopment analysis (DEA) is applied to determine the efficiency of each treatment. Although the philosophy of DEA is maximization of outputs versus minimization of inputs, this issue has been neglected in previous similar studies of multi-response problems. Finally, the most efficient treatment is determined using the maximin weight model approach. The performance of the proposed method is verified in a plastic molding process. Moreover, a sensitivity analysis was performed using an efficiency-estimator neural network. The results show the efficiency of the proposed approach.
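The DEA step — scoring each treatment as weighted outputs over weighted inputs — can be sketched with the standard input-oriented CCR multiplier model solved as a linear program. This is the generic CCR formulation, not the paper's maximin-weight variant; the data and names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0 (multiplier form).

    X: (n_units, n_inputs) inputs, Y: (n_units, n_outputs) outputs.
    Maximize u . y0 subject to v . x0 = 1 and u . Yj - v . Xj <= 0 for all j."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[j0], np.zeros(m)])            # minimize -u.y0
    A_ub = np.hstack([Y, -X])                            # u.Yj - v.Xj <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[j0]])[None, :] # v.x0 = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun
```

A unit on the efficient frontier scores 1.0; dominated units score below 1.0, which is the ranking the paper's treatment selection builds on.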

  3. The statistical mechanics of complex signaling networks: nerve growth factor signaling

    NASA Astrophysics Data System (ADS)

    Brown, K. S.; Hill, C. C.; Calero, G. A.; Myers, C. R.; Lee, K. H.; Sethna, J. P.; Cerione, R. A.

    2004-10-01

    The inherent complexity of cellular signaling networks and their importance to a wide range of cellular functions necessitates the development of modeling methods that can be applied toward making predictions and highlighting the appropriate experiments to test our understanding of how these systems are designed and function. We use methods of statistical mechanics to extract useful predictions for complex cellular signaling networks. A key difficulty with signaling models is that, while significant effort is being made to experimentally measure the rate constants for individual steps in these networks, many of the parameters required to describe their behavior remain unknown or at best represent estimates. To establish the usefulness of our approach, we have applied our methods toward modeling the nerve growth factor (NGF)-induced differentiation of neuronal cells. In particular, we study the actions of NGF and mitogenic epidermal growth factor (EGF) in rat pheochromocytoma (PC12) cells. Through a network of intermediate signaling proteins, each of these growth factors stimulates extracellular regulated kinase (Erk) phosphorylation with distinct dynamical profiles. Using our modeling approach, we are able to predict the influence of specific signaling modules in determining the integrated cellular response to the two growth factors. Our methods also raise some interesting insights into the design and possible evolution of cellular systems, highlighting an inherent property of these systems that we call 'sloppiness.'

  4. A new experimental design method to optimize formulations focusing on a lubricant for hydrophilic matrix tablets.

    PubMed

    Choi, Du Hyung; Shin, Sangmun; Khoa Viet Truong, Nguyen; Jeong, Seong Hoon

    2012-09-01

A robust experimental design method was developed with the well-established response surface methodology and time series modeling to facilitate the formulation development process with magnesium stearate incorporated into hydrophilic matrix tablets. Two directional analyses and a time-oriented model were utilized to optimize the experimental responses. Evaluations of tablet gelation and drug release were conducted with two factors x₁ and x₂: a formulation factor (the amount of magnesium stearate) and a processing factor (mixing time). Moreover, different batch sizes (100 and 500 tablet batches) were also evaluated to investigate the effect of batch size. The selected input control factors were arranged in a mixture simplex lattice design with 13 experimental runs. The obtained optimal settings of magnesium stearate for gelation were 0.46 g, 2.76 min (mixing time) for a 100 tablet batch and 1.54 g, 6.51 min for a 500 tablet batch. The optimal settings for drug release were 0.33 g, 7.99 min for a 100 tablet batch and 1.54 g, 6.51 min for a 500 tablet batch. The exact ratio and mixing time of magnesium stearate could be formulated according to the resulting hydrophilic matrix tablet properties. The newly designed experimental method provided very useful information for characterizing significant factors and hence for obtaining optimum formulations, allowing for a systematic and reliable experimental design.

  5. Parametric design of pressure-relieving foot orthosis using statistics-based finite element method.

    PubMed

    Cheung, Jason Tak-Man; Zhang, Ming

    2008-04-01

Custom-molded foot orthoses are frequently prescribed in routine clinical practice to prevent or treat plantar ulcers in diabetes by reducing the peak plantar pressure. However, the design and fabrication of foot orthoses vary among clinical practitioners and manufacturers. Moreover, little information about the parametric effect of different combinations of design factors is available. As an alternative to the experimental approach, computational models of the foot and footwear can provide efficient evaluations of different combinations of structural and material design factors on plantar pressure distribution. In this study, a combined finite element and Taguchi method was used to identify the sensitivity of five design factors (arch type, insole and midsole thickness, insole and midsole stiffness) of foot orthoses on peak plantar pressure relief. From the FE predictions, the custom-molded shape was found to be the most important design factor in reducing peak plantar pressure. Besides the use of an arch-conforming foot orthosis, the insole stiffness was found to be the second most important factor for peak pressure reduction. Other design factors, such as insole thickness, midsole stiffness and midsole thickness, played less important roles in peak pressure reduction, in the given order. The statistics-based FE method was found to be an effective approach to evaluating and optimizing the design of foot orthoses.
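The Taguchi sensitivity screening used in such studies boils down to computing a signal-to-noise ratio per run and ranking factors by the spread of mean S/N across their levels. A minimal sketch in the smaller-the-better form (matching a pressure-minimization goal); the data and function names are illustrative, not from the paper.

```python
import numpy as np

def sn_smaller_is_better(responses):
    """Taguchi smaller-the-better S/N ratio per run: -10*log10(mean(y^2))."""
    responses = np.asarray(responses, dtype=float)
    return -10.0 * np.log10((responses ** 2).mean(axis=1))

def main_effect_range(levels, sn):
    """Rank a factor's importance by the range of mean S/N across its levels."""
    means = [sn[levels == lv].mean() for lv in np.unique(levels)]
    return max(means) - min(means)
```

Factors with the largest main-effect range (here, arch type and insole stiffness per the FE predictions) are the ones the design should prioritize.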

  6. Practical state of health estimation of power batteries based on Delphi method and grey relational grade analysis

    NASA Astrophysics Data System (ADS)

    Sun, Bingxiang; Jiang, Jiuchun; Zheng, Fangdan; Zhao, Wei; Liaw, Bor Yann; Ruan, Haijun; Han, Zhiqiang; Zhang, Weige

    2015-05-01

The state of health (SOH) estimation is very critical to a battery management system to ensure the safety and reliability of EV battery operation. Here, we used a unique hybrid approach to enable complex SOH estimations. The approach hybridizes the Delphi method, known for its simplicity and effectiveness in applying weighting factors for complicated decision-making, and the grey relational grade analysis (GRGA) for multi-factor optimization. Six critical factors were considered for SOH estimation: the peak power at 30% state-of-charge (SOC); the capacity; the voltage drop at 30% SOC with a C/3 pulse; the temperature rises at the end of discharge and at the end of charge at 1C, respectively; and the open-circuit voltage at the end of charge after a 1-h rest. The weighting of these factors for SOH estimation was scored by the 'experts' in the Delphi method, indicating the influencing power of each factor on SOH. The parameters for these factors expressing the battery state variations are optimized by GRGA. Eight battery cells were used to illustrate the principle and methodology of estimating the SOH by this hybrid approach, and the results were compared with those based on capacity and power capability. The contrast among different SOH estimations is discussed.
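Grey relational grade analysis itself is a short computation: deviations of each candidate sequence from a reference sequence are mapped to relational coefficients and then weight-averaged into a grade. A generic sketch (in the paper the weights come from the Delphi scoring; here they are just an argument, and all names are illustrative):

```python
import numpy as np

def grey_relational_grade(reference, candidates, weights=None, rho=0.5):
    """Grey relational grade of each candidate sequence w.r.t. a reference.

    reference: (k,) reference sequence; candidates: (n, k) comparison
    sequences, assumed pre-normalized; rho is the distinguishing coefficient."""
    delta = np.abs(candidates - reference)                 # (n, k) deviations
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)  # relational coefficients
    if weights is None:
        weights = np.full(reference.shape[0], 1.0 / reference.shape[0])
    return coeff @ weights                                 # weighted mean per candidate
```

A candidate identical to the reference gets a grade of 1; grades fall toward rho/(1+rho) as deviations grow, so ranking cells by grade tracks how closely their factor measurements follow the reference state.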

  7. A retrospective likelihood approach for efficient integration of multiple omics factors in case-control association studies.

    PubMed

    Balliu, Brunilda; Tsonaka, Roula; Boehringer, Stefan; Houwing-Duistermaat, Jeanine

    2015-03-01

    Integrative omics, the joint analysis of outcome and multiple types of omics data, such as genomics, epigenomics, and transcriptomics data, constitute a promising approach for powerful and biologically relevant association studies. These studies often employ a case-control design, and often include nonomics covariates, such as age and gender, that may modify the underlying omics risk factors. An open question is how to best integrate multiple omics and nonomics information to maximize statistical power in case-control studies that ascertain individuals based on the phenotype. Recent work on integrative omics have used prospective approaches, modeling case-control status conditional on omics, and nonomics risk factors. Compared to univariate approaches, jointly analyzing multiple risk factors with a prospective approach increases power in nonascertained cohorts. However, these prospective approaches often lose power in case-control studies. In this article, we propose a novel statistical method for integrating multiple omics and nonomics factors in case-control association studies. Our method is based on a retrospective likelihood function that models the joint distribution of omics and nonomics factors conditional on case-control status. The new method provides accurate control of Type I error rate and has increased efficiency over prospective approaches in both simulated and real data. © 2015 Wiley Periodicals, Inc.

  8. Biomaterials with persistent growth factor gradients in vivo accelerate vascularized tissue formation.

    PubMed

    Akar, Banu; Jiang, Bin; Somo, Sami I; Appel, Alyssa A; Larson, Jeffery C; Tichauer, Kenneth M; Brey, Eric M

    2015-12-01

    Gradients of soluble factors play an important role in many biological processes, including blood vessel assembly. Gradients can be studied in detail in vitro, but methods that enable the study of spatially distributed soluble factors and multi-cellular processes in vivo are limited. Here, we report on a method for the generation of persistent in vivo gradients of growth factors in a three-dimensional (3D) biomaterial system. Fibrin loaded porous poly (ethylene glycol) (PEG) scaffolds were generated using a particulate leaching method. Platelet derived growth factor BB (PDGF-BB) was encapsulated into poly (lactic-co-glycolic acid) (PLGA) microspheres which were placed distal to the tissue-material interface. PLGA provides sustained release of PDGF-BB and its diffusion through the porous structure results in gradient formation. Gradients within the scaffold were confirmed in vivo using near-infrared fluorescence imaging and gradients were present for more than 3 weeks. The diffusion of PDGF-BB was modeled and verified with in vivo imaging findings. The depth of tissue invasion and density of blood vessels formed in response to the biomaterial increased with magnitude of the gradient. This biomaterial system allows for generation of sustained growth factor gradients for the study of tissue response to gradients in vivo. Published by Elsevier Ltd.
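The gradient-formation mechanism described above — sustained release at one face diffusing through the scaffold toward the tissue interface — can be illustrated with a one-dimensional explicit finite-difference diffusion model. This is a generic sketch, not the paper's fitted transport model; every parameter value is arbitrary.

```python
import numpy as np

def diffuse_gradient(n=50, dx=0.1, D=0.01, dt=0.1, steps=2000, source=1.0):
    """Explicit finite-difference diffusion with a constant-concentration
    source at x=0 (sustained release) and an absorbing sink at the far end.
    Stability requires D*dt/dx**2 <= 0.5 (here 0.1). Returns the profile."""
    c = np.zeros(n)
    for _ in range(steps):
        c[0] = source                          # held at the release face
        lap = np.zeros(n)
        lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]
        c += D * dt / dx ** 2 * lap
        c[-1] = 0.0                            # sink at the tissue side
    return c
```

The profile decays monotonically away from the source, i.e. a persistent gradient whose steepness is set by the release rate and diffusivity — the quantity the study relates to invasion depth and vessel density.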

  9. A novel method of predicting microRNA-disease associations based on microRNA, disease, gene and environment factor networks.

    PubMed

    Peng, Wei; Lan, Wei; Zhong, Jiancheng; Wang, Jianxin; Pan, Yi

    2017-07-15

MicroRNAs have been reported to have close relationships with diseases due to their deregulation of the expression of target mRNAs. Detecting disease-related microRNAs is helpful for disease therapies. With the development of high-throughput experimental techniques, a large number of microRNAs have been sequenced. However, it is still a big challenge to identify which microRNAs are related to diseases. Recently, researchers have become interested in combining multiple sources of biological information to identify the associations between microRNAs and diseases. In this work, we propose a novel method to predict microRNA-disease associations based on four biological properties: microRNAs, diseases, genes and environment factors. Compared with previous methods, our method makes predictions not only by using the prior knowledge of associations among microRNAs, diseases, environment factors and genes, but also by using the internal relationships among these biological properties. We constructed four biological networks based on the similarity of microRNAs, diseases, environment factors and genes, respectively. Random walks were then implemented on the four networks with unequal weights. In the course of walking, associations can be inferred from neighbors in the same network, while association information is transferred from one network to another. The experimental results showed that our method achieved better prediction performance than other existing state-of-the-art methods. Copyright © 2017 Elsevier Inc. All rights reserved.
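The core primitive behind such network-based predictors can be shown on a single similarity network with the standard random-walk-with-restart iteration, reading candidate scores off the stationary vector. The paper couples four networks with unequal weights; this single-network version is only the building block, and all names are illustrative.

```python
import numpy as np

def random_walk_with_restart(W, seeds, restart=0.3, tol=1e-8, max_iter=1000):
    """Random walk with restart on a similarity network.

    W: (n, n) nonnegative similarity/adjacency matrix;
    seeds: indices of nodes with known associations."""
    P = W / W.sum(axis=0, keepdims=True)       # column-normalized transitions
    p0 = np.zeros(W.shape[0])
    p0[list(seeds)] = 1.0 / len(seeds)         # restart distribution on seeds
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1 - restart) * P @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p
```

Nodes closer to the seeds in the similarity network end up with higher stationary probability, which is the ranking used to nominate new disease-related candidates.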

  10. A technique for rapid source apportionment applied to ambient organic aerosol measurements from a thermal desorption aerosol gas chromatograph (TAG)

    DOE PAGES

    Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.; ...

    2016-11-25

    Here, we present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography–mass spectrometry methods. We specifically apply this new analysis method to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol samples' sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Then two-dimensional PMF can effectively do three-dimensional factorization on the three-dimensional TAG mass spectra data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single compound identification and integration. In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular level resolution on other bulk aerosol components commonly observed by the AMS.
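    The binning step can be sketched as follows; the array shapes and bin count are hypothetical, and the flattened row is what would become one row of the PMF input matrix.

```python
import numpy as np

# A GC-MS run as a (scans x m/z) array; scans are grouped into evenly
# spaced retention-time bins and the spectra within each bin are summed.
rng = np.random.default_rng(0)
n_scans, n_mz, n_bins = 1200, 300, 60
chromatogram = rng.random((n_scans, n_mz))

edges = np.linspace(0, n_scans, n_bins + 1).astype(int)
binned = np.array([chromatogram[edges[i]:edges[i + 1]].sum(axis=0)
                   for i in range(n_bins)])

# One sample -> one row of the PMF input matrix: the (bin, m/z) pairs are
# flattened into columns, so samples stack as rows across the data set.
row = binned.reshape(-1)
print(binned.shape, row.shape)
```

    Because the bins partition the full run, no signal is discarded, which is what lets the method use the entirety of the data set without per-compound integration.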

  11. A technique for rapid source apportionment applied to ambient organic aerosol measurements from a thermal desorption aerosol gas chromatograph (TAG)

    NASA Astrophysics Data System (ADS)

    Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.; Docherty, Kenneth S.; Jimenez, Jose L.

    2016-11-01

    We present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography-mass spectrometry methods. Here, we specifically apply this new analysis method to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol samples' sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Then two-dimensional PMF can effectively do three-dimensional factorization on the three-dimensional TAG mass spectra data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single compound identification and integration. 
In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular level resolution on other bulk aerosol components commonly observed by the AMS.

  12. A Comparison of the Sensitivity and Fecal Egg Counts of the McMaster Egg Counting and Kato-Katz Thick Smear Methods for Soil-Transmitted Helminths

    PubMed Central

    Levecke, Bruno; Behnke, Jerzy M.; Ajjampur, Sitara S. R.; Albonico, Marco; Ame, Shaali M.; Charlier, Johannes; Geiger, Stefan M.; Hoa, Nguyen T. V.; Kamwa Ngassam, Romuald I.; Kotze, Andrew C.; McCarthy, James S.; Montresor, Antonio; Periago, Maria V.; Roy, Sheela; Tchuem Tchuenté, Louis-Albert; Thach, D. T. C.; Vercruysse, Jozef

    2011-01-01

    Background The Kato-Katz thick smear (Kato-Katz) is the diagnostic method recommended for monitoring large-scale treatment programs implemented for the control of soil-transmitted helminths (STH) in public health, yet it is difficult to standardize. A promising alternative is the McMaster egg counting method (McMaster), commonly used in veterinary parasitology, but rarely so for the detection of STH in human stool. Methodology/Principal Findings The Kato-Katz and McMaster methods were compared for the detection of STH in 1,543 subjects resident in five countries across Africa, Asia and South America. The consistency of the performance of both methods in different trials, the validity of the fixed multiplication factor employed in the Kato-Katz method and the accuracy of these methods for estimating ‘true’ drug efficacies were assessed. The Kato-Katz method detected significantly more Ascaris lumbricoides infections (88.1% vs. 75.6%, p<0.001), whereas the difference in sensitivity between the two methods was non-significant for hookworm (78.3% vs. 72.4%) and Trichuris trichiura (82.6% vs. 80.3%). The sensitivity of the methods varied significantly across trials and magnitude of fecal egg counts (FEC). Quantitative comparison revealed a significant correlation (Rs >0.32) in FEC between both methods, and indicated no significant difference in FEC, except for A. lumbricoides, where the Kato-Katz resulted in significantly higher FEC (14,197 eggs per gram of stool (EPG) vs. 5,982 EPG). For the Kato-Katz, the fixed multiplication factor resulted in significantly higher FEC than the multiplication factor adjusted for mass of feces examined for A. lumbricoides (16,538 EPG vs. 15,396 EPG) and T. trichiura (1,490 EPG vs. 1,363 EPG), but not for hookworm. The McMaster provided more accurate efficacy results (absolute difference to ‘true’ drug efficacy: 1.7% vs. 4.5%). 
Conclusions/Significance The McMaster is an alternative method for monitoring large-scale treatment programs. It is a robust (accurate multiplication factor) and accurate (reliable efficacy results) method, which can be easily standardized. PMID:21695104
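    The multiplication-factor arithmetic behind the EPG comparison can be illustrated directly. The egg count and examined mass below are made up; the fixed factor of 24 corresponds to the standard 41.7 mg Kato-Katz template (1000/41.7 ≈ 24).

```python
# Eggs per gram (EPG) from a fecal egg count: fixed vs. mass-adjusted
# multiplication factor. Numbers are chosen for illustration only.
eggs_counted = 120

# Fixed factor: assumes exactly 41.7 mg of feces on the template.
fixed_factor = 24
epg_fixed = eggs_counted * fixed_factor

# Adjusted factor: uses the mass of feces actually examined (weighed).
mass_examined_mg = 45.0
epg_adjusted = eggs_counted * 1000 / mass_examined_mg

print(epg_fixed, round(epg_adjusted))   # 2880 vs 2667
```

    When the true mass on the slide exceeds the nominal 41.7 mg, the fixed factor inflates the EPG, which is the direction of bias the study reports for A. lumbricoides and T. trichiura.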

  13. Logging methods and peeling of Aspen

    Treesearch

    T. Schantz-Hansen

    1948-01-01

    The logging of forest products is influenced by many factors, including the size of the trees, density of the stand, the soundness of the trees, size of the area logged, topography and soil, weather conditions, the degree of utilization, the skill of the logger and the equipment used, the distance from market, etc. Each of these factors influences not only the method...

  14. Comparison of Factor Simplicity Indices for Dichotomous Data: DETECT R, Bentler's Simplicity Index, and the Loading Simplicity Index

    ERIC Educational Resources Information Center

    Finch, Holmes; Stage, Alan Kirk; Monahan, Patrick

    2008-01-01

    A primary assumption underlying several of the common methods for modeling item response data is unidimensionality, that is, that test items tap into only one latent trait. This assumption can be assessed in several ways, including nonlinear factor analysis and DETECT, a method based on the item conditional covariances. When multidimensionality is identified,…

  15. What Informs Practice and What Is Valued in Corporate Instructional Design? A Mixed Methods Study

    ERIC Educational Resources Information Center

    Thompson-Sellers, Ingrid N.

    2012-01-01

    This study used a two-phased explanatory mixed-methods design to explore in-depth what factors are perceived by Instructional Design and Technology (IDT) professionals as impacting instructional design practice, how these factors are valued in the field, and what differences in perspectives exist between IDT managers and non-managers. For phase 1…

  16. Enhanced factoring with a bose-einstein condensate.

    PubMed

    Sadgrove, Mark; Kumar, Sanjay; Nakagawa, Ken'ichi

    2008-10-31

    We present a novel method to realize analog sum computation with a Bose-Einstein condensate in an optical lattice potential subject to controlled phase jumps. We use the method to implement the Gauss sum algorithm for factoring numbers. By exploiting higher order quantum momentum states, we are able to improve the algorithm's accuracy beyond the limits of the usual classical implementation.
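    A classical sketch of the Gauss sum test that the condensate implements physically: for a trial factor ℓ of N, the truncated Gauss sum has unit magnitude exactly when ℓ divides N and interferes destructively otherwise. The number of terms and the acceptance threshold below are illustrative choices (too few terms can admit "ghost factors").

```python
import cmath

# Truncated Gauss sum for a trial factor l of N. If l divides N every
# phase is a multiple of 2*pi and |A| = 1; otherwise the quadratic phases
# interfere destructively and |A| stays well below 1.
def gauss_sum(N, l, terms=21):
    s = sum(cmath.exp(2j * cmath.pi * m * m * N / l) for m in range(terms))
    return abs(s) / terms

N = 15
found = [l for l in range(2, N) if gauss_sum(N, l) > 0.9]
print(found)   # [3, 5]: the nontrivial factors of 15
```

    The quantum implementation gains accuracy by exploiting higher-order momentum states, but the decision rule, factor-or-not from the magnitude of the sum, is the same.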

  17. Analysis of Social Cohesion in Health Data by Factor Analysis Method: The Ghanaian Perspective

    ERIC Educational Resources Information Center

    Saeed, Bashiru I. I.; Xicang, Zhao; Musah, A. A. I.; Abdul-Aziz, A. R.; Yawson, Alfred; Karim, Azumah

    2013-01-01

    We investigated the overall social cohesion of Ghanaians. In this study, we considered Ghanaians' involvement in their communities, their views of other people and institutions, and their level of interest in both local and national politics. The factor analysis method was employed for analysis using R…

  18. A Fresh Look at Linear Ordinary Differential Equations with Constant Coefficients. Revisiting the Impulsive Response Method Using Factorization

    ERIC Educational Resources Information Center

    Camporesi, Roberto

    2016-01-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations of any order, based on the factorization of the differential operator. The approach is elementary; we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as…
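    A minimal worked instance of the idea: factoring D² − 3D + 2 as (D − 1)(D − 2) gives the impulsive response g(t) = (e^{2t} − e^{t})/(2 − 1), and a particular solution of y″ − 3y′ + 2y = f(t) with zero initial data is the convolution g * f. The right-hand side and evaluation point are chosen for illustration.

```python
import math

# Impulsive response from the operator factorization (D - 1)(D - 2):
# g(t) = (e^{2t} - e^{t}) / (2 - 1). A particular solution of
# y'' - 3y' + 2y = f with zero initial data is the convolution g * f,
# evaluated here by the trapezoidal rule.
r1, r2 = 1.0, 2.0
g = lambda t: (math.exp(r2 * t) - math.exp(r1 * t)) / (r2 - r1)
f = lambda t: math.exp(3 * t)

def particular(t, n=2000):
    h = t / n
    vals = [g(t - s) * f(s) for s in (i * h for i in range(n + 1))]
    return h * (vals[0] / 2 + sum(vals[1:-1]) + vals[-1] / 2)

t = 1.0
exact = math.exp(3 * t) / 2 - math.exp(2 * t) + math.exp(t) / 2
print(particular(t), exact)   # numerical and closed-form values agree
```

    The closed form follows from integrating the convolution analytically; the numerical check makes the factorization argument concrete without any distribution theory.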

  19. Multimethod Assessment of Psychopathy in Relation to Factors of Internalizing and Externalizing from the Personality Assessment Inventory: The Impact of Method Variance and Suppressor Effects

    ERIC Educational Resources Information Center

    Blonigen, Daniel M.; Patrick, Christopher J.; Douglas, Kevin S.; Poythress, Norman G.; Skeem, Jennifer L.; Lilienfeld, Scott O.; Edens, John F.; Krueger, Robert F.

    2010-01-01

    Research to date has revealed divergent relations across factors of psychopathy measures with criteria of "internalizing" (INT; anxiety, depression) and "externalizing" (EXT; antisocial behavior, substance use). However, failure to account for method variance and suppressor effects has obscured the consistency of these findings…

  20. Analysis of algae growth mechanism and water bloom prediction under the effect of multi-affecting factor.

    PubMed

    Wang, Li; Wang, Xiaoyi; Jin, Xuebo; Xu, Jiping; Zhang, Huiyan; Yu, Jiabin; Sun, Qian; Gao, Chong; Wang, Lingbin

    2017-03-01

    Current methods describe the formation process of algae inaccurately and predict water blooms with low precision. In this paper, the chemical mechanism of algae growth is analyzed, and a correlation analysis of chlorophyll-a and algal density is conducted by chemical measurement. Taking into account the influence of multiple factors on algae growth and water blooms, a comprehensive prediction method combining multivariate time series analysis with intelligent models is put forward. Firstly, through the process of photosynthesis, the main factors that affect the reproduction of algae are analyzed. A compensation prediction method for multivariate time series analysis, based on a neural network and a Support Vector Machine, is put forward and combined with Kernel Principal Component Analysis to reduce the dimension of the bloom influence factors. Then, a Genetic Algorithm is applied to improve the generalization ability of the BP network and the Least Squares Support Vector Machine. Experimental results show that this method better compensates the multivariate time series prediction model and is an effective way to improve the accuracy with which algae growth is described and the precision with which water blooms are predicted.
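    The KPCA dimension-reduction step mentioned above can be sketched in a few lines; the data, kernel width and number of retained components are arbitrary stand-ins, not the paper's configuration.

```python
import numpy as np

# Minimal kernel PCA: RBF kernel matrix, double-centering in feature
# space, then projection onto the top eigenvectors. The reduced inputs Z
# would feed a downstream bloom predictor. Data are synthetic.
rng = np.random.default_rng(5)
X = rng.normal(size=(50, 6))              # 50 samples, 6 influence factors

sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * 1.0 ** 2))          # RBF kernel, width 1 (assumed)
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J                            # centered kernel matrix

vals, vecs = np.linalg.eigh(Kc)           # ascending eigenvalues
order = np.argsort(vals)[::-1][:2]        # keep the top two components
Z = vecs[:, order] * np.sqrt(np.abs(vals[order]))

print(Z.shape)                            # reduced representation
```

    Reducing correlated influence factors this way is what lets the later GA-tuned BP network and LS-SVM work on a compact, less noisy input.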

  1. Appearance of cell-adhesion factor in osteoblast proliferation and differentiation of apatite coating titanium by blast coating method.

    PubMed

    Umeda, Hirotsugu; Mano, Takamitsu; Harada, Koji; Tarannum, Ferdous; Ueyama, Yoshiya

    2017-08-01

    We have already reported that apatite coating of titanium by the blast coating (BC) method shows a higher rate of bone contact from the early stages in vivo compared to pure titanium (Ti) and apatite coating of titanium by the flame spraying (FS) method. However, the detailed mechanism by which BC results in satisfactory bone contact is still unknown. In the present study, we investigated the importance of various factors, including cell-adhesion factors, in the osteoblast proliferation and differentiation that could affect the osteoconductivity of the BC disks. A cell proliferation assay revealed that Saos-2 cells grew fastest on BC disks, and a spectrophotometric assay using a LabAssay™ ALP kit showed that ALP activity was increased in cells on BC disks compared to Ti disks and FS disks. In addition, higher expression of E-cadherin and Fibronectin was observed in cells on BC disks than on Ti disks and FS disks by relative qPCR as well as Western blotting. These results suggest that the expression of cell-adhesion factors and the proliferation and differentiation of osteoblasts might be enhanced on BC disks, which might result in higher osteoconductivity.

  2. A contracting-interval program for the Danilewski method. Ph.D. Thesis - Va. Univ.

    NASA Technical Reports Server (NTRS)

    Harris, J. D.

    1971-01-01

    The concept of contracting-interval programs is applied to finding the eigenvalues of a matrix. The development is a three-step process in which (1) a program is developed for the reduction of a matrix to Hessenberg form, (2) a program is developed for the reduction of a Hessenberg matrix to colleague form, and (3) the characteristic polynomial with interval coefficients is readily obtained from the interval of colleague matrices. This interval polynomial is then factored into quadratic factors so that the eigenvalues may be obtained. To develop a contracting-interval program for factoring this polynomial with interval coefficients it is necessary to have an iteration method which converges even in the presence of controlled rounding errors. A theorem is stated giving sufficient conditions for the convergence of Newton's method when both the function and its Jacobian cannot be evaluated exactly but errors can be made proportional to the square of the norm of the difference between the previous two iterates. This theorem is applied to prove the convergence of the generalization of the Newton-Bairstow method that is used to obtain quadratic factors of the characteristic polynomial.
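    The quadratic-factor extraction at the heart of the Newton-Bairstow generalization can be illustrated, without interval arithmetic, by the classical Bairstow iteration; the test polynomial and starting guesses below are invented.

```python
# Bairstow's method: extract a quadratic factor x^2 - r*x - s from a real
# polynomial by Newton's method on the linear remainder. Coefficients
# a[i] multiply x^i (ascending order).
def bairstow(a, r, s, tol=1e-12, maxit=100):
    n = len(a) - 1
    for _ in range(maxit):
        b = [0.0] * (n + 1)
        c = [0.0] * (n + 1)
        b[n] = a[n]
        b[n - 1] = a[n - 1] + r * b[n]
        for i in range(n - 2, -1, -1):
            b[i] = a[i] + r * b[i + 1] + s * b[i + 2]
        c[n] = b[n]
        c[n - 1] = b[n - 1] + r * c[n]
        for i in range(n - 2, 0, -1):
            c[i] = b[i] + r * c[i + 1] + s * c[i + 2]
        det = c[2] * c[2] - c[3] * c[1]
        dr = (c[3] * b[0] - c[2] * b[1]) / det
        ds = (c[1] * b[1] - c[2] * b[0]) / det
        r, s = r + dr, s + ds
        if abs(b[0]) + abs(b[1]) < tol:   # remainder ~ 0: factor found
            break
    return r, s

# p(x) = (x^2 - 3x + 2)(x^2 + x + 1) = x^4 - 2x^3 - x + 2
a = [2.0, -1.0, 0.0, -2.0, 1.0]
r, s = bairstow(a, 2.5, -1.5)
print(round(r, 6), round(s, 6))   # 3.0 -2.0, i.e. the factor x^2 - 3x + 2
```

    The thesis's contribution is a convergence theorem for exactly this kind of Newton iteration when the function and Jacobian carry controlled rounding errors; the sketch above is the error-free classical version.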

  3. Risk factors of infant anemia in the perinatal period.

    PubMed

    Hirata, Michio; Kusakawa, Isao; Ohde, Sachiko; Yamanaka, Michiko; Yoda, Hitoshi

    2017-04-01

    Infants are at particular risk of iron-deficiency anemia. We investigated changes in the blood count of the mother and infant as well as the relationship between them and the relationship between infant nutrition method and infant anemia. This retrospective cohort study included healthy neonates born between August 2011 and July 2014 at St Luke's International Hospital, Tokyo, Japan. Data from maternal blood samples obtained during late pregnancy and those of infants obtained at birth and at the age of 3, 6, and 9 months were analyzed. Using multivariate logistic regression, we investigated nutrition methods, maternal anemia, and other clinically relevant parameters that were potential risk factors for infant anemia. In total, data for 3472 infants and their mothers were analyzed. Nutrition method was the most significant risk factor for infant anemia, with risk of future anemia decreasing in the following order: exclusive breast-feeding, partial breast-feeding, and formula feeding. Furthermore, low umbilical cord blood hemoglobin led to a tendency toward anemia in the child. Infant nutrition method was the most significant factor related to anemia in late infancy. Infants with low umbilical cord blood hemoglobin are more likely to develop anemia in late infancy. © 2016 Japan Pediatric Society.
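    The multivariate logistic regression used in the study can be sketched on synthetic data; the coding of nutrition method, the assumed effect sizes and the fitting by plain gradient ascent are all illustrative stand-ins, not the paper's estimates.

```python
import numpy as np

# Synthetic version of the model: anemia risk vs. nutrition method
# (coded 0 = formula, 1 = partial, 2 = exclusive breast-feeding) and
# centered cord-blood hemoglobin. Effect sizes below are assumed.
rng = np.random.default_rng(4)
n = 3000
nutrition = rng.integers(0, 3, n)
cord_hb = rng.normal(15.0, 1.5, n)
logit = -2.0 + 0.8 * nutrition - 0.3 * (cord_hb - 15.0)
anemia = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([np.ones(n), nutrition, cord_hb - 15.0])
beta = np.zeros(3)
for _ in range(5000):                    # gradient ascent on log-likelihood
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 2e-4 * X.T @ (anemia - p)

print(np.round(beta, 2))   # breast-feeding coefficient up, cord-Hb down
```

    The recovered signs mirror the abstract's finding: risk rises along the formula → partial → exclusive ordering, and falls with higher cord-blood hemoglobin.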

  4. Combined target factor analysis and Bayesian soft-classification of interference-contaminated samples: forensic fire debris analysis.

    PubMed

    Williams, Mary R; Sigman, Michael E; Lewis, Jennifer; Pitan, Kelly McHugh

    2012-10-10

    A Bayesian soft classification method combined with target factor analysis (TFA) is described and tested for the analysis of fire debris data. The method relies on analysis of the average mass spectrum across the chromatographic profile (i.e., the total ion spectrum, TIS) from multiple samples taken from a single fire scene. A library of TIS from reference ignitable liquids with assigned ASTM classification is used as the target factors in TFA. The class-conditional distributions of correlations between the target and predicted factors for each ASTM class are represented by kernel functions and analyzed by Bayesian decision theory. The soft classification approach assists in assessing the probability that ignitable liquid residue from a specific ASTM E1618 class is present in a set of samples from a single fire scene, even in the presence of unspecified background contributions from pyrolysis products. The method is demonstrated with sample data sets and then tested on laboratory-scale burn data and large-scale field test burns. The overall performance achieved in laboratory and field tests of the method is approximately 80% correct classification of fire debris samples. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
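    The decision step can be sketched as follows: class-conditional distributions of TFA correlations are modeled with Gaussian kernel density estimates and combined by Bayes' rule. The correlation samples, bandwidth and 0.5 prior below are synthetic stand-ins for the paper's trained densities.

```python
import numpy as np

# Gaussian KDE over correlation values for two classes ("residue present"
# vs. "absent"), combined into a soft posterior probability.
def kde(x, data, h=0.05):
    return np.mean(np.exp(-0.5 * ((x - data) / h) ** 2)) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
corr_present = np.clip(rng.normal(0.90, 0.05, 200), -1, 1)
corr_absent = np.clip(rng.normal(0.50, 0.15, 200), -1, 1)

def posterior_present(r, prior=0.5):
    lp, la = kde(r, corr_present), kde(r, corr_absent)
    return prior * lp / (prior * lp + (1 - prior) * la)

print(round(posterior_present(0.92), 3), round(posterior_present(0.45), 3))
```

    Reporting a posterior rather than a hard label is what makes the classification "soft": a borderline correlation yields an intermediate probability instead of a forced class assignment.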

  5. Selection of nursing teaching strategies in mainland China: A questionnaire survey.

    PubMed

    Zhou, HouXiu; Liu, MengJie; Zeng, Jing; Zhu, JingCi

    2016-04-01

    In nursing education, the traditional lecture and direct demonstration teaching method cannot cultivate the various skills that nursing students need. How to choose a more scientific and rational teaching method is a common concern for nursing educators worldwide. To investigate the basis for selecting teaching methods among nursing teachers in mainland China, the factors affecting the selection of different teaching methods, and the application of different teaching methods in theoretical and skill-based nursing courses. Questionnaire survey. Seventy one nursing colleges from 28 provincial-level administrative regions in mainland China. Following the principle of voluntary informed consent, 262 nursing teachers were randomly selected through a nursing education network platform and a conference platform. The questionnaire contents included the basis for and the factors influencing the selection of nursing teaching methods, the participants' common teaching methods, and the teaching experience of the surveyed nursing teachers. The questionnaires were distributed through the network or conference platform, and the data were analyzed by SPSS 17.0 software. The surveyed nursing teachers selected teaching methods mainly based on the characteristics of the teaching content, the characteristics of the students, and their previous teaching experiences. The factors affecting the selection of teaching methods mainly included large class sizes, limited class time, and limited examination formats. The surveyed nursing teachers primarily used lectures to teach theory courses and the direct demonstration method to teach skills courses, and the application frequencies of these two teaching methods were significantly higher than those of other teaching methods (P=0.000). More attention should be paid to the selection of nursing teaching methods. 
Every teacher should strategically choose teaching methods before each lesson, and nursing education training focused on selecting effective teaching methods should be more extensive. Copyright © 2016. Published by Elsevier Ltd.

  6. Resistivity Correction Factor for the Four-Probe Method: Experiment II

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo

    1989-05-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F can be applied to a system consisting of a disk sample and a four-probe array. Measurements are made on isotropic graphite disks and crystalline ITO films. Factor F can correct the apparent variations of the data and lead to reasonable resistivities and sheet resistances. Here factor F is compared to other correction factors, i.e., F_ASTM and F_JIS.
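    The role of a geometry-dependent correction factor can be illustrated with the familiar four-probe sheet-resistance formula: the ideal infinite-sheet factor is π/ln 2 ≈ 4.532, while a finite disk needs a geometry-dependent F. The disk value and the measured voltage/current below are made-up placeholders.

```python
import math

# Sheet resistance from a collinear four-probe measurement: R_s = F * V/I,
# where F corrects for the sample geometry. F for an infinite thin sheet
# is pi/ln 2; the disk value below is hypothetical.
def sheet_resistance(V, I, F):
    return F * V / I

F_infinite = math.pi / math.log(2)       # ~4.532, ideal infinite sheet
F_disk = 4.22                            # placeholder for a small disk

V, I = 2.0e-3, 1.0e-3                    # measured volts, amps (invented)
print(round(sheet_resistance(V, I, F_infinite), 3),
      round(sheet_resistance(V, I, F_disk), 3))
```

    The spread between the two outputs shows why an uncorrected (infinite-sheet) factor produces the "apparent variations" the abstract says F removes.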

  7. Calibration of resistance factors needed in the LRFD design of driven piles.

    DOT National Transportation Integrated Search

    2009-05-01

    This research project presents the calibration of resistance factors for the Load and Resistance Factor Design (LRFD) method of driven : piles driven into Louisiana soils based on reliability theory. Fifty-three square Precast-Prestressed-Concrete (P...

  9. Recurrent-neural-network-based Boolean factor analysis and its application to word clustering.

    PubMed

    Frolov, Alexander A; Husek, Dusan; Polyakov, Pavel Yu

    2009-07-01

    The objective of this paper is to introduce a neural-network-based algorithm for word clustering as an extension of the neural-network-based Boolean factor analysis algorithm (Frolov, 2007). It is shown that this extended algorithm supports an even more complex model of signals that are supposed to be related to textual documents. It is hypothesized that every topic in textual data is characterized by a set of words which coherently appear in documents dedicated to a given topic. The appearance of each word in a document is coded by the activity of a particular neuron. In accordance with the Hebbian learning rule implemented in the network, sets of coherently appearing words (treated as factors) create tightly connected groups of neurons, hence revealing them as attractors of the network dynamics. The found factors are eliminated from the network memory by the Hebbian unlearning rule, facilitating the search for other factors. Topics related to the found sets of words can be identified based on the words' semantics. To make the method complete, a special technique based on a Bayesian procedure has been developed for the following purposes: first, to provide a complete description of factors in terms of component probability, and second, to enhance the accuracy of classification of signals to determine whether they contain the factor. Since it is assumed that every word may possibly contribute to several topics, the proposed method might be related to the method of fuzzy clustering. In this paper, we show that the results of Boolean factor analysis and fuzzy clustering are not contradictory, but complementary. To demonstrate the capabilities of this approach, the method is applied to two types of textual data on neural networks in two different languages. The obtained topics and corresponding words are at a good level of agreement despite the fact that identical topics in Russian and English conferences contain different sets of keywords.
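    A toy version of the Hebbian attractor idea: a set of co-occurring words is stored as an attractor by Hebbian learning and recovered from a partial cue. Vocabulary size, the factor and the threshold are invented, and the unlearning rule and Bayesian step are omitted.

```python
import numpy as np

# One "factor" is a set of words that co-occur in documents; Hebbian
# learning turns it into an attractor of the network dynamics, so a
# partial cue (a few of the words) retrieves the whole set.
n_words = 40
factor = np.zeros(n_words)
factor[:8] = 1                                  # words 0..7 co-occur

W = np.outer(factor, factor)                    # Hebbian weight matrix
np.fill_diagonal(W, 0)                          # no self-connections

cue = np.zeros(n_words)
cue[:4] = 1                                     # cue with 4 of the 8 words
state = cue.copy()
for _ in range(5):                              # attractor dynamics
    state = (W @ state > 2.0).astype(float)     # threshold update

print(np.flatnonzero(state))                    # the full word set 0..7
```

    In the paper's setting many overlapping factors are stored in one network, and found attractors are removed by Hebbian unlearning so the dynamics can settle on the remaining ones.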

  10. Motivation factors for suicidal behavior and their clinical relevance in admitted psychiatric patients.

    PubMed

    Hayashi, Naoki; Igarashi, Miyabi; Imai, Atsushi; Yoshizawa, Yuka; Asamura, Kaori; Ishikawa, Yoichi; Tokunaga, Taro; Ishimoto, Kayo; Tatebayashi, Yoshitaka; Harima, Hirohiko; Kumagai, Naoki; Ishii, Hidetoki; Okazaki, Yuji

    2017-01-01

    Suicidal behavior (SB) is a major, worldwide health concern. To date there is limited understanding of the associated motivational aspects which accompany this self-initiated conduct. To develop a method for identifying motivational features associated with SB by studying admitted psychiatric patients, and to examine their clinical relevance. By performing a factor analytic study using data obtained from a patient sample exhibiting high suicidality and a variety of SB methods, the Motivations for SB Scale (MSBS) was constructed to measure the features. Data included assessments of DSM-IV psychiatric and personality disorders, suicide intent, depressive symptomatology, overt aggression, recent life events (RLEs) and methods of SB, collated from structured interviews. Association of identified features with clinical variables was examined by correlation analyses and MANCOVA. Factor analyses elicited a 4-factor solution composed of Interpersonal-testing (IT), Interpersonal-change (IC), Self-renunciation (SR) and Self-sustenance (SS). These factors were classified according to two distinctions, namely interpersonal vs. intra-personal directedness, and the level of assumed influence by SB or the relationship to prevailing emotions. Analyses revealed meaningful links between patient features and clinical variables. Interpersonal motivations (IT and IC) were associated with overt aggression, low suicidality and RLE discord or conflict, while SR was associated with depression, high suicidality and RLE separation or death. Borderline personality disorder showed association with IC and SS. When self-strangulation was set as a reference SB method, self-cutting and overdose-taking were linked to IT and SS, respectively. The factors extracted in this study largely corresponded to factors from previous studies, implying that they may be useful in a wider clinical context.
The association of these features with SB-related factors suggests that they constitute an integral part of the process leading to SB. These results provide a base for further research into clinical strategies for patient management and therapy.

  11. Full-Information Item Bi-Factor Analysis. ONR Technical Report. [Biometric Lab Report No. 90-2.

    ERIC Educational Resources Information Center

    Gibbons, Robert D.; And Others

    A plausible "s"-factor solution for many types of psychological and educational tests is one in which there is one general factor and "s - 1" group- or method-related factors. The bi-factor solution results from the constraint that each item has a non-zero loading on the primary dimension α_j1 and at most…

  12. Factorization-based texture segmentation

    DOE PAGES

    Yuan, Jiangye; Wang, Deliang; Cheriyadat, Anil M.

    2015-06-17

    This study introduces a factorization-based approach that efficiently segments textured images. We use local spectral histograms as features, and construct an M × N feature matrix using M-dimensional feature vectors in an N-pixel image. Based on the observation that each feature can be approximated by a linear combination of several representative features, we factor the feature matrix into two matrices: one consisting of the representative features and the other containing the weights of the representative features at each pixel used for the linear combination. The factorization method is based on singular value decomposition and nonnegative matrix factorization. The method uses local spectral histograms to discriminate region appearances in a computationally efficient way and at the same time accurately localizes region boundaries. Finally, the experiments conducted on public segmentation data sets show the promise of this simple yet powerful approach.
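    A miniature version of the factorization idea, on synthetic features: estimate the number of representative features from the singular values, solve for per-pixel weights by least squares, and label each pixel by its dominant weight. For simplicity the representative features are taken as known here, whereas the paper derives them from the data via SVD and nonnegative matrix factorization.

```python
import numpy as np

# Each pixel's feature vector is (approximately) a combination of a few
# representative features, so the M x N feature matrix factors as
# (representatives) x (weights). Features below are synthetic.
rng = np.random.default_rng(0)
M, N = 12, 200
reps = rng.random((M, 2))                     # two representative features
truth = (np.arange(N) >= 100).astype(int)     # two regions of 100 pixels
weights = np.stack([1.0 - truth, truth * 1.0])
Y = reps @ weights + 0.01 * rng.normal(size=(M, N))

# Number of regions from the singular values (threshold is illustrative),
# then per-pixel weights by least squares and labels by largest weight.
sv = np.linalg.svd(Y, compute_uv=False)
rank = int(np.sum(sv > 0.1 * sv[0]))
West = np.linalg.lstsq(reps, Y, rcond=None)[0]
labels = West.argmax(axis=0)

print(rank, (labels == truth).mean())
```

    The same two-step structure (estimate the factor count, then solve for weights) is what lets the full method localize region boundaries while staying computationally cheap.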

  13. Application of factor analysis of infrared spectra for quantitative determination of beta-tricalcium phosphate in calcium hydroxylapatite.

    PubMed

    Arsenyev, P A; Trezvov, V V; Saratovskaya, N V

    1997-01-01

    This work presents a method that determines the phase composition of calcium hydroxylapatite from its infrared spectrum. The method uses factor analysis of the spectral data of a calibration set of samples to determine the minimal number of factors required to reproduce the spectra within experimental error. Multiple linear regression is applied to establish a correlation between the factor scores of the calibration standards and their properties. The regression equations can then be used to predict the property value of an unknown sample. A regression model was built for the determination of beta-tricalcium phosphate content in hydroxylapatite, and the quality of the model was estimated statistically. Applying factor analysis to the spectral data increases the accuracy of beta-tricalcium phosphate determination and extends the range of determination toward lower concentrations, while reproducibility of the results is retained.
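    A hedged sketch of that pipeline, factor scores from calibration spectra followed by regression of the property on the scores, using invented band shapes and concentrations, and SVD scores in place of the paper's factor analysis.

```python
import numpy as np

# Calibration spectra are mixtures of two synthetic bands; SVD scores
# stand in for factor scores, and a linear regression maps scores to the
# beta-TCP fraction. All band shapes and concentrations are invented.
rng = np.random.default_rng(2)
wav = np.linspace(0, 1, 120)
band_ha = np.exp(-((wav - 0.35) / 0.05) ** 2)    # "hydroxylapatite" band
band_tcp = np.exp(-((wav - 0.65) / 0.05) ** 2)   # "beta-TCP" band

conc = np.array([0.0, 0.1, 0.2, 0.3, 0.4])       # calibration beta-TCP
spectra = np.outer(1 - conc, band_ha) + np.outer(conc, band_tcp)
spectra += 0.001 * rng.normal(size=spectra.shape)

U, s, Vt = np.linalg.svd(spectra - spectra.mean(0), full_matrices=False)
scores = U[:, :2] * s[:2]                        # factor scores
A = np.column_stack([scores, np.ones(len(conc))])
coef, *_ = np.linalg.lstsq(A, conc, rcond=None)  # regression on scores

unknown = 0.75 * band_ha + 0.25 * band_tcp       # 25% beta-TCP sample
u_scores = (unknown - spectra.mean(0)) @ Vt[:2].T
pred = u_scores @ coef[:2] + coef[2]
print(round(float(pred), 3))                     # close to 0.25
```

    Regressing on factor scores rather than single absorbance bands is what buys the accuracy gain at low beta-TCP concentrations: the scores pool information across the whole spectrum.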

  14. A re-evaluation of finite-element models and stress-intensity factors for surface cracks emanating from stress concentrations

    NASA Technical Reports Server (NTRS)

    Tan, P. W.; Raju, I. S.; Shivakumar, K. N.; Newman, J. C., Jr.

    1988-01-01

    A re-evaluation of the 3-D finite-element models and methods used to analyze surface cracks at stress concentrations is presented. Previous finite-element models used by Raju and Newman for surface and corner cracks at holes were shown to have ill-shaped elements at the intersection of the hole and crack boundaries. These ill-shaped elements tended to make the model too stiff and, hence, gave lower stress-intensity factors near the hole-crack intersection than models without these elements. Improved models, without these ill-shaped elements, were developed for a surface crack at a circular hole and at a semi-circular edge notch. Stress-intensity factors were calculated by both the nodal-force and virtual-crack-closure methods. Both methods and different models gave essentially the same results. Comparisons between the previously developed stress-intensity factor equations and the results from the improved models showed good agreement except for configurations with large notch-radii-to-plate-thickness ratios. Stress-intensity factors for a semi-elliptical surface crack located at the center of a semi-circular edge notch in a plate subjected to remote tensile loading were calculated using the improved models. The ratio of crack depth to crack length ranged from 0.4 to 2; the ratio of crack depth to plate thickness ranged from 0.2 to 0.8; and the ratio of notch radius to plate thickness ranged from 1 to 3. The models had about 15,000 degrees of freedom. Stress-intensity factors were calculated using the nodal-force method.

  15. Sparsity-optimized separation of body waves and ground-roll by constructing dictionaries using tunable Q-factor wavelet transforms with different Q-factors

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Chen, Wenchao; Wang, Xiaokai; Wang, Wei

    2017-10-01

    Low-frequency oscillatory ground-roll is one of the main coherent interference waves that obscure primary reflections in land seismic data. Suppressing the ground-roll can effectively improve the signal-to-noise ratio of seismic data. Conventional suppression methods, such as high-pass and various f-k filters, usually cause waveform distortion and loss of body-wave information because of their simple cut-off operation. In this study, a sparsity-optimized separation of body waves and ground-roll, based on morphological component analysis theory, is realized by constructing dictionaries using tunable Q-factor wavelet transforms with different Q-factors. The separation model is grounded in the fact that the input seismic data are composed of low-oscillatory body waves and high-oscillatory ground-roll. Two waveform dictionaries, built with a low Q-factor and a high Q-factor respectively, are shown to sparsely represent each component on the basis of their distinct morphologies. Seismic data containing body waves and ground-roll can thus be nonlinearly decomposed into low-oscillatory and high-oscillatory components. This is a noise-attenuation approach based on the oscillatory behaviour of the signal rather than its scale or frequency. We illustrate the method using both synthetic and field shot data. Compared with results from conventional high-pass and f-k filtering, the results show the proposed method to be effective and advantageous in preserving the waveform and bandwidth of reflections.

  16. Single-Molecule Studies of Actin Assembly and Disassembly Factors

    PubMed Central

    Smith, Benjamin A.; Gelles, Jeff; Goode, Bruce L.

    2014-01-01

    The actin cytoskeleton is very dynamic and highly regulated by multiple associated proteins in vivo. Understanding how this system of proteins functions in the processes of actin network assembly and disassembly requires methods to dissect the mechanisms of activity of individual factors and of multiple factors acting in concert. The advent of single-filament and single-molecule fluorescence imaging methods has provided a powerful new approach to discovering actin-regulatory activities and obtaining direct, quantitative insights into the pathways of molecular interactions that regulate actin network architecture and dynamics. Here we describe techniques for acquisition and analysis of single-molecule data, applied to the novel challenges of studying the filament assembly and disassembly activities of actin-associated proteins in vitro. We discuss the advantages of single-molecule analysis in directly visualizing the order of molecular events, measuring the kinetic rates of filament binding and dissociation, and studying the coordination among multiple factors. The methods described here complement traditional biochemical approaches in elucidating actin-regulatory mechanisms in reconstituted filamentous networks. PMID:24630103

  17. Multiple Illuminant Colour Estimation via Statistical Inference on Factor Graphs.

    PubMed

    Mutimbu, Lawrence; Robles-Kelly, Antonio

    2016-08-31

    This paper presents a method to recover a spatially varying illuminant colour estimate from scenes lit by multiple light sources. Starting with the image formation process, we formulate the illuminant recovery problem in a statistically data-driven setting. To do this, we use a factor graph defined across the scale space of the input image. In the graph, we utilise a set of illuminant prototypes computed using a data-driven approach. As a result, our method delivers a pixel-wise illuminant colour estimate without requiring libraries or user input. The use of a factor graph also allows the illuminant estimates to be recovered through a maximum a posteriori (MAP) inference process. Moreover, we compute the probability marginals by performing a Delaunay triangulation on our factor graph. We illustrate the utility of our method for pixel-wise illuminant colour recovery on widely available datasets and compare against a number of alternatives. We also show sample colour correction results on real-world images.

  18. Shuttle user analysis (study 2.2). Volume 4: Standardized subsystem modules analysis

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The capability to analyze payloads constructed of standardized modules was provided for the planning of future mission models. An inventory of standardized module designs previously obtained was used as a starting point. Some of the conclusions and recommendations are: (1) the two growth factor synthesis methods provide logical configurations for satellite type selection; (2) the recommended method is the one that determines the growth factor as a function of the baseline subsystem weight, since it provides a larger growth factor for small subsystem weights and results in a greater overkill due to standardization; (3) the method that is not recommended is the one that depends upon a subsystem similarity selection, since care must be used in the subsystem similarity selection; (4) it is recommended that the application of standardized subsystem factors be limited to satellites with baseline dry weights between about 700 and 6,500 lbs; and (5) the standardized satellite design approach applies to satellites maintainable in orbit or retrieved for ground maintenance.

  19. Timing Calibration in PET Using a Time Alignment Probe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moses, William W.; Thompson, Christopher J.

    2006-05-05

    We evaluate the Scanwell Time Alignment Probe for performing the timing calibration for the LBNL Prostate-Specific PET Camera. We calibrate the time delay correction factors for each detector module in the camera using two methods: the Time Alignment Probe (which measures the time difference between the probe and each detector module) and the conventional method (which measures the timing difference between all module-module combinations in the camera). These correction factors, which are quantized in 2 ns steps, are compared on a module-by-module basis. The values are in excellent agreement: of the 80 correction factors, 62 agree exactly, 17 differ by 1 step, and 1 differs by 2 steps. We also measure on-time and off-time counting rates when the two sets of calibration factors are loaded into the camera and find that they agree within statistical error. We conclude that the performance using the Time Alignment Probe and conventional methods is equivalent.

  20. Application of the Risk-Based Early Warning Method in a Fracture-Karst Water Source, North China.

    PubMed

    Guo, Yongli; Wu, Qing; Li, Changsuo; Zhao, Zhenhua; Sun, Bin; He, Shiyi; Jiang, Guanghui; Zhai, Yuanzheng; Guo, Fang

    2018-03-01

    The paper proposes a risk-based early warning method that considers the characteristics of fracture-karst aquifers in North China and applies it in a super-large fracture-karst water source. Groundwater vulnerability, land-use type, water abundance, transmissivity and the spatiotemporal variation of groundwater quality were chosen as indexes of the method. The weights of the factors were obtained with the AHP method based on their relative importance, the factor maps were zoned in GIS, and the early warning map was constructed from extension theory with the help of GIS and ENVI+IDL. The early warning map fuses the five factors well: serious and tremendous warning areas are mainly located in the northwest and east, where transmissivity and groundwater pollutant loading are high or relatively high and the petroleum trend is obviously deteriorating. The early warning map shows where more attention should be paid, and the paper guides decision-makers in taking appropriate protection actions in areas of different warning levels.
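
    The AHP weighting step mentioned above can be illustrated with a small sketch. The pairwise-comparison matrix below is hypothetical, not the one used in the paper; the weights are the normalized principal eigenvector and the consistency ratio uses Saaty's random index for n = 5.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for the five warning indexes
# (vulnerability, land use, water abundance, transmissivity, quality trend).
# Entry A[i, j] is the judged relative importance of factor i over j;
# the matrix is reciprocal: A[j, i] = 1 / A[i, j].
A = np.array([
    [1,   3,   5,   3,   2],
    [1/3, 1,   3,   1,   1/2],
    [1/5, 1/3, 1,   1/3, 1/4],
    [1/3, 1,   3,   1,   1/2],
    [1/2, 2,   4,   2,   1],
], dtype=float)

# AHP weights: normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI,
# with RI = 1.12 the random index for n = 5. CR < 0.1 is acceptable.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 1.12
print("weights:", np.round(w, 3), " CR:", round(cr, 3))
```

    The weighted factor maps would then be overlaid cell-by-cell in GIS, each raster multiplied by its weight before classification into warning levels.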

  1. [Study on depressive disorder and related factors in surgical inpatients].

    PubMed

    Ge, Hong-min; Liu, Lan-fen; Han, Jian-bo

    2008-03-01

    To investigate the prevalence and possible influencing factors of depressive disorder in surgical inpatients. Two hundred and sixty-six surgical inpatients meeting the inclusion criteria were first screened with the self-rating depression scale (SDS); the subjects who screened positive, and 20% of those who screened negative, were then evaluated with the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID) as a gold standard for the diagnosis of depressive disorder. Possible influencing factors were also analyzed by experienced psychiatrists. The standard score of the SDS in the surgical inpatients was significantly higher than the Chinese norm, and the incidence of depressive disorder in the surgical inpatients was 37.2%. Univariate analysis showed that depressive disorder was associated with gender, education, economic condition, variety of disease, hospitalization duration, and treatment methods. Logistic regression analysis revealed that gender, economic condition, treatment methods and previous history were the main influencing factors. The incidence of depressive disorder in surgical inpatients is high, and it is mainly influenced by gender, economic condition, treatment methods and previous history.

  2. Strain intensity factor approach for predicting the strength of continuously reinforced metal matrix composites

    NASA Technical Reports Server (NTRS)

    Poe, C. C., Jr.

    1988-01-01

    A method was previously developed to predict the fracture toughness (stress intensity factor at failure) of composites in terms of the elastic constants and the tensile failing strain of the fibers. The method was applied to boron/aluminum composites made with various proportions of 0° and ±45° plies. Predicted values of fracture toughness were in gross error because widespread yielding of the aluminum matrix made the compliance very nonlinear. An alternate method was developed to predict the strain intensity factor at failure rather than the stress intensity factor, because the singular strain field was not affected by yielding as much as the stress field. Strengths of specimens containing crack-like slits were calculated from predicted failing strains using uniaxial stress-strain curves. Predicted strengths were in good agreement with experimental values, even for the very nonlinear laminates that contained only ±45° plies. This approach should be valid for other metal matrix composites that have continuous fibers.

  3. The World Health Organization STEPwise Approach to Noncommunicable Disease Risk-Factor Surveillance: Methods, Challenges, and Opportunities

    PubMed Central

    Guthold, Regina; Cowan, Melanie; Savin, Stefan; Bhatti, Lubna; Armstrong, Timothy; Bonita, Ruth

    2016-01-01

    Objectives. We sought to outline the framework and methods used by the World Health Organization (WHO) STEPwise approach to noncommunicable disease (NCD) surveillance (STEPS), describe the development and current status, and discuss strengths, limitations, and future directions of STEPS surveillance. Methods. STEPS is a WHO-developed, standardized but flexible framework for countries to monitor the main NCD risk factors through questionnaire assessment and physical and biochemical measurements. It is coordinated by national authorities of the implementing country. The STEPS surveys are generally household-based and interviewer-administered, with scientifically selected samples of around 5000 participants. Results. To date, 122 countries across all 6 WHO regions have completed data collection for STEPS or STEPS-aligned surveys. Conclusions. STEPS data are being used to inform NCD policies and track risk-factor trends. Future priorities include strengthening these linkages from data to action on NCDs at the country level, and continuing to develop STEPS’ capacities to enable a regular and continuous cycle of risk-factor surveillance worldwide. PMID:26696288

  4. New perspectives in the diagnostic of gingival recession.

    PubMed

    Dominiak, Marzena; Gedrange, Tomasz

    2014-01-01

    Gingival recession (GR) is a common clinical situation observed in patient populations regardless of age and ethnicity. It has been estimated that over 60% of the human population has gingival recession. It is the final effect of the interaction of multiple etiological factors, whose identification and range of influence often cannot be defined, with the result that new methods for testing and eliminating potential etiological factors are still being sought. The aim of this study is to present the etiopathogenesis of gingival recessions with regard to the analysis of morphological and functional factors. For the assessment of the bone factors, we describe a new cephalometric method for measuring the sagittal width of the bone in the central incisor area, the site where GR is most commonly observed. A review of modern treatment methods for the particular recession classes is also presented; in particular, the use of autogenous tissue substitutes, collagen matrix and primary-culture fibroblasts on a collagen net, is emphasized.

  5. Bayes and empirical Bayes methods for reduced rank regression models in matched case-control studies.

    PubMed

    Satagopan, Jaya M; Sen, Ananda; Zhou, Qin; Lan, Qing; Rothman, Nathaniel; Langseth, Hilde; Engel, Lawrence S

    2016-06-01

    Matched case-control studies are popular designs used in epidemiology for assessing the effects of exposures on binary traits. Modern studies increasingly enjoy the ability to examine a large number of exposures in a comprehensive manner. However, several risk factors often tend to be related in a nontrivial way, undermining efforts to identify the risk factors using standard analytic methods due to inflated type-I errors and possible masking of effects. Epidemiologists often use data reduction techniques by grouping the prognostic factors using a thematic approach, with themes deriving from biological considerations. We propose shrinkage-type estimators based on Bayesian penalization methods to estimate the effects of the risk factors using these themes. The properties of the estimators are examined using extensive simulations. The methodology is illustrated using data from a matched case-control study of polychlorinated biphenyls in relation to the etiology of non-Hodgkin's lymphoma. © 2015, The International Biometric Society.

  6. Satellite Power System (SPS) student participation

    NASA Technical Reports Server (NTRS)

    Ladwig, A.; David, L.

    1978-01-01

    An assessment of methods appropriate for initiating student participation in the discussion of a satellite power system (SPS) is presented. Methods that can be incorporated into the campus environment and the on-going learning experience are reported. The discussion of individual methods for student participation includes a description of the technique, followed by comments on its enhancing and limiting factors, references to situations where the method has been demonstrated, and a brief consideration of cost factors. Two categories of recommendations are presented: an outline of fourteen recommendations addressing specific activities related to student participation in the discussion of SPS, and three recommendations pertaining to student participation activities in general.

  7. Boundary conditions for the solution of compressible Navier-Stokes equations by an implicit factored method

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Smith, G. E.; Springer, G. S.; Rimon, Y.

    1983-01-01

    A method is presented for formulating the boundary conditions in implicit finite-difference form needed for obtaining solutions to the compressible Navier-Stokes equations by the Beam and Warming implicit factored method. The usefulness of the method was demonstrated (a) by establishing the boundary conditions applicable to the analysis of the flow inside an axisymmetric piston-cylinder configuration and (b) by calculating velocities and mass fractions inside the cylinder for different geometries and different operating conditions. Stability, selection of time step and grid sizes, and computer time requirements are discussed in reference to the piston-cylinder problem analyzed.

  8. Comparison of estimators of standard deviation for hydrologic time series

    USGS Publications Warehouse

    Tasker, Gary D.; Gilroy, Edward J.

    1982-01-01

    Unbiasing factors as a function of serial correlation, ρ, and sample size, n, for the sample standard deviation of a lag one autoregressive model were generated by random number simulation. Monte Carlo experiments were used to compare the performance of several alternative methods for estimating the standard deviation σ of a lag one autoregressive model in terms of bias, root mean square error, probability of underestimation, and expected opportunity design loss. Three methods provided estimates of σ which were much less biased but had greater mean square errors than the usual estimate of σ: s = [(1/(n − 1)) ∑ (xi − x̄)²]^(1/2). The three methods may be briefly characterized as (1) a method using a maximum likelihood estimate of the unbiasing factor, (2) a method using an empirical Bayes estimate of the unbiasing factor, and (3) a robust nonparametric estimate of σ suggested by Quenouille. Because s tends to underestimate σ, its use as an estimate of a model parameter results in a tendency to underdesign. If underdesign losses are considered more serious than overdesign losses, then the choice of one of the less biased methods may be wise.
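
    The underestimation of σ by s for positively autocorrelated series can be reproduced with a short Monte Carlo experiment in the spirit of the study; the parameter values (ρ = 0.6, n = 20) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_series(n, rho, sigma, rng):
    """Stationary lag-one autoregressive series with marginal std sigma."""
    e_std = sigma * np.sqrt(1 - rho ** 2)   # innovation std for stationarity
    x = np.empty(n)
    x[0] = rng.normal(0, sigma)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal(0, e_std)
    return x

n, rho, sigma, reps = 20, 0.6, 1.0, 4000
s_vals = [np.std(ar1_series(n, rho, sigma, rng), ddof=1) for _ in range(reps)]
bias = np.mean(s_vals) - sigma
print(f"E[s] ≈ {np.mean(s_vals):.3f}  (true sigma = {sigma}, bias = {bias:+.3f})")
```

    Because the serial correlation shrinks the effective sample size, the average s falls noticeably below the true σ, which is exactly the underdesign tendency the abstract warns about.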

  9. The calibration methods for Multi-Filter Rotating Shadowband Radiometer: a review

    NASA Astrophysics Data System (ADS)

    Chen, Maosi; Davis, John; Tang, Hongzhao; Ownby, Carolyn; Gao, Wei

    2013-09-01

    The continuous, over two-decade data record from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) is ideal for climate research, which requires timely and accurate information on important atmospheric components such as gases, aerosols, and clouds. Except for parameters derived from MFRSR measurement ratios, which are not impacted by calibration error, most applications require accurate calibration factor(s), angular correction, and spectral response function(s) from calibration. Although a laboratory lamp (or reference) calibration can provide all the information needed to convert the instrument readings to actual radiation, in situ calibration methods are implemented routinely (daily) to fill the gaps between lamp calibrations. In this paper, the basic structure and the data collection and pretreatment of the MFRSR are described. The laboratory lamp calibration and its limitations are summarized. The cloud screening algorithms for MFRSR data are presented. The in situ calibration methods are outlined: the standard Langley method and its variants, the ratio-Langley method, the general method, Alexandrov's comprehensive method, and Chen's multi-channel method. No single method fits all situations, because each assumes that some property, such as aerosol optical depth (AOD), total optical depth (TOD), precipitable water vapor (PWV), effective size of aerosol particles, or the Angstrom coefficient, is invariant over time; these assumptions are not universally valid, and some rarely hold. In practice, daily calibration factors derived from these methods should be smoothed to restrain error.
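
    The standard Langley method listed above fits the Beer-Lambert law in log space, ln V = ln V0 − τm, over a period when the total optical depth τ is assumed constant; the intercept extrapolated to zero airmass m gives the top-of-atmosphere calibration factor V0. A minimal sketch with synthetic readings (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic clear, stable morning: airmass from ~5 down to ~1.2,
# constant total optical depth tau (the key Langley assumption).
m = np.linspace(5.0, 1.2, 40)
v0_true, tau_true = 1.37, 0.25          # hypothetical instrument values
v = v0_true * np.exp(-tau_true * m) * (1 + rng.normal(0, 0.005, m.size))

# Beer-Lambert: ln V = ln V0 - tau * m  ->  straight-line fit in m.
slope, intercept = np.polyfit(m, np.log(v), 1)
v0_est, tau_est = np.exp(intercept), -slope
print(f"V0 = {v0_est:.3f}  tau = {tau_est:.3f}")
```

    The smoothing of daily calibration factors recommended in the abstract corresponds to averaging many such per-morning V0 estimates, since any drift in τ during a single Langley period biases the intercept.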

  10. Contraceptive Vaccines Targeting Factors Involved in Establishment of Pregnancy

    PubMed Central

    Lemons, Angela R.; Naz, Rajesh K.

    2011-01-01

    Problem Current methods of contraception lack specificity and are accompanied by serious side effects. A more specific method of contraception is needed. Contraceptive vaccines can provide most, if not all, of the desired characteristics of an ideal contraceptive. Approach This article reviews several factors involved in the establishment of pregnancy, focusing on those that are essential for successful implantation. Factors that are both essential and pregnancy-specific can provide potential targets for contraception. Conclusion Using a database search, 76 factors (cytokines/chemokines/growth factors/others) were identified that are involved in various steps of the establishment of pregnancy. Among these factors, three, namely chorionic gonadotropin (CG), leukemia inhibitory factor (LIF), and preimplantation factor (PIF), are found to be unique and exciting molecules. Human CG is a well-known pregnancy-specific protein that has undergone phase I and phase II clinical trials, in women, as a contraceptive vaccine with encouraging results. LIF and PIF are pregnancy-specific and essential for successful implantation. These molecules are intriguing and may provide viable targets for immunocontraception. A multiepitope vaccine combining factors/antigens involved in various steps of the fertilization cascade and pregnancy establishment may provide a highly immunogenic and efficacious modality for contraception in humans. PMID:21481058

  11. Comparisons of LET distributions measured in low-earth orbit using tissue-equivalent proportional counters and the position-sensitive silicon-detector telescope (RRMD-III)

    NASA Technical Reports Server (NTRS)

    Doke, T.; Hayashi, T.; Borak, T. B.; Chatterjee, A. (Principal Investigator)

    2001-01-01

    Determinations of the LET distribution, phi(L), of charged particles within a spacecraft in low-Earth orbit have been made. One method used a cylindrical tissue-equivalent proportional counter (TEPC), with the assumption that for each measured event, lineal energy, y, is equal to LET and thus phi(L) = phi(y). The other was based on the direct measurement of LETs for individual particles using a charged-particle telescope consisting of position-sensitive silicon detectors called RRMD-III. There were differences of up to a factor of 10 between estimates of phi(L) using the two methods on the same mission. This caused estimates of quality factor to vary by a factor of two between the two methods.

  12. Transport, biodegradation and isotopic fractionation of chlorinated ethenes: modeling and parameter estimation methods

    NASA Astrophysics Data System (ADS)

    Béranger, Sandra C.; Sleep, Brent E.; Lollar, Barbara Sherwood; Monteagudo, Fernando Perez

    2005-01-01

    An analytical, one-dimensional, multi-species, reactive transport model for simulating the concentrations and isotopic signatures of tetrachloroethylene (PCE) and its daughter products was developed. The simulation model was coupled to a genetic algorithm (GA) combined with a gradient-based (GB) method to estimate the first order decay coefficients and enrichment factors. In testing with synthetic data, the hybrid GA-GB method reduced the computational requirements for parameter estimation by a factor as great as 300. The isotopic signature profiles were observed to be more sensitive than the concentration profiles to estimates of both the first order decay constants and enrichment factors. Including isotopic data for parameter estimation significantly increased the GA convergence rate and slightly improved the accuracy of estimation of first order decay constants.
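
    The hybrid global-plus-gradient estimation strategy can be sketched as follows. SciPy's differential evolution stands in here for the paper's genetic algorithm, and the two-species decay chain, rate constants and noise level are illustrative assumptions rather than the paper's model.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

rng = np.random.default_rng(3)

# Synthetic problem: sequential first-order decay PCE -> TCE observed
# at several travel times; estimate both decay constants k1, k2.
t = np.linspace(0.5, 10, 12)
k1_true, k2_true, c0 = 0.8, 0.3, 1.0
pce = c0 * np.exp(-k1_true * t)
tce = c0 * k1_true / (k2_true - k1_true) * (np.exp(-k1_true * t)
                                            - np.exp(-k2_true * t))
obs_pce = pce * (1 + rng.normal(0, 0.02, t.size))
obs_tce = tce * (1 + rng.normal(0, 0.02, t.size))

def sse(params):
    """Sum of squared residuals for both species."""
    k1, k2 = params
    if abs(k2 - k1) < 1e-9:          # guard the k1 == k2 singularity
        k2 = k1 + 1e-9
    mp = c0 * np.exp(-k1 * t)
    mt = c0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
    return np.sum((mp - obs_pce) ** 2 + (mt - obs_tce) ** 2)

# Stage 1: evolutionary global search (standing in for the GA).
coarse = differential_evolution(sse, bounds=[(0.01, 2), (0.01, 2)], seed=3)
# Stage 2: gradient-based polish starting from the global solution.
fine = minimize(sse, coarse.x, bounds=[(0.01, 2), (0.01, 2)],
                method="L-BFGS-B")
k1_est, k2_est = fine.x
print(f"k1 = {k1_est:.3f}  k2 = {k2_est:.3f}")
```

    The two-stage design mirrors the paper's rationale: the population search avoids local minima of the misfit surface, and the gradient step then converges quickly once inside the right basin.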

  13. High-throughput method for the quantitation of metabolites and co-factors from homocysteine-methionine cycle for nutritional status assessment.

    PubMed

    Da Silva, Laeticia; Collino, Sebastiano; Cominetti, Ornella; Martin, Francois-Pierre; Montoliu, Ivan; Moreno, Sergio Oller; Corthesy, John; Kaput, Jim; Kussmann, Martin; Monteiro, Jacqueline Pontes; Guiraud, Seu Ping

    2016-09-01

    There is increasing interest in the profiling and quantitation of methionine pathway metabolites for health management research. Currently, several analytical approaches are required to cover metabolites and co-factors. We report the development and the validation of a method for the simultaneous detection and quantitation of 13 metabolites in red blood cells. The method, validated in a cohort of healthy human volunteers, shows a high level of accuracy and reproducibility. This high-throughput protocol provides a robust coverage of central metabolites and co-factors in one single analysis and in a high-throughput fashion. In large-scale clinical settings, the use of such an approach will significantly advance the field of nutritional research in health and disease.

  14. Empirical study on impact of demographic and economic changes on pension cost

    NASA Astrophysics Data System (ADS)

    Yusof, Shaira; Ibrahim, Rose Irnawaty

    2014-06-01

    Maintaining the same financial standard of living after retirement as before is of great importance to a retired person. The pension provider has a responsibility to ensure that employees receive sufficient benefits after retirement, and to regularly monitor the factors that cause funds to become insufficient to pay benefits to retirees. Insufficient funds may be due to increases in pension cost, and among the factors that increase pension cost are changes in mortality rates and interest rates. This study uses these two factors to determine the sensitivity of pension cost to each. Two methods, the Accrued Benefit Cost Method and the Projected Benefit Cost Method, are used to estimate pension cost. The interest rate is inversely related to pension cost, while the mortality rate is directly related to it.
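
    The direction of both sensitivities can be illustrated with a toy expected-present-value calculation. This is not either of the actuarial cost methods named above, and all figures are hypothetical; it only shows why a higher interest rate lowers the cost while lower mortality (more survivors to pay) raises it.

```python
import numpy as np

def annuity_cost(interest, annual_mortality, years=30):
    """Expected present value of a life annuity of 1 per year,
    under a flat annual mortality rate (a deliberately crude
    stand-in for a real life table)."""
    v = 1.0 / (1.0 + interest)                       # discount factor
    survival = (1.0 - annual_mortality) ** np.arange(years)
    return float(np.sum(survival * v ** np.arange(1, years + 1)))

base = annuity_cost(0.04, 0.02)
higher_interest = annuity_cost(0.06, 0.02)   # interest up -> cost down
lower_mortality = annuity_cost(0.04, 0.01)   # mortality down -> cost up
print(f"base {base:.2f}, higher interest {higher_interest:.2f}, "
      f"lower mortality {lower_mortality:.2f}")
```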

  15. Influencing factors and kinetics analysis on the leaching of iron from boron carbide waste-scrap with ultrasound-assisted method.

    PubMed

    Li, Xin; Xing, Pengfei; Du, Xinghong; Gao, Shuaibo; Chen, Chen

    2017-09-01

    In this paper, the ultrasound-assisted leaching of iron from boron carbide waste-scrap was investigated and the different influencing factors were optimized. The factors investigated were acid concentration, liquid-solid ratio, leaching temperature, and ultrasonic power and frequency. Leaching of iron with the conventional method at various temperatures was also performed. The results show maximum iron leaching ratios of 87.4% after 80 min of conventional leaching and 94.5% after 50 min of ultrasound-assisted leaching. Leaching of the waste-scrap with the conventional method fits the chemical reaction-controlled model. Leaching with ultrasound assistance fits the chemical reaction-controlled model in the first stage and the diffusion-controlled model in the second stage. Compared with the conventional method, ultrasound assistance greatly improves the iron leaching ratio, accelerates the leaching rate, shortens the leaching time and lowers the residual iron. The advantages of ultrasound-assisted leaching were also confirmed by SEM-EDS analysis and elemental analysis of the raw material and leached solid samples. Copyright © 2017 Elsevier B.V. All rights reserved.
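
    The two kinetic models named above are the classical shrinking-core rate laws: reaction control is linear in 1 − (1 − x)^(1/3) versus time, diffusion control in 1 − 3(1 − x)^(2/3) + 2(1 − x). A sketch with synthetic conversion data (generated here from the reaction-controlled law, with invented k and times) shows how the controlling regime is identified by which transform is most linear:

```python
import numpy as np

t = np.linspace(5, 50, 10)          # leaching time, min (hypothetical)
k_true = 0.004
# Synthetic conversion data following the reaction-controlled law.
x = 1 - (1 - k_true * t) ** 3
x = np.clip(x + np.random.default_rng(4).normal(0, 0.005, t.size), 0, 0.999)

g_reaction = 1 - (1 - x) ** (1 / 3)                       # reaction control
g_diffusion = 1 - 3 * (1 - x) ** (2 / 3) + 2 * (1 - x)    # diffusion control

def r_squared(g):
    """Coefficient of determination of a straight-line fit g = k*t + b."""
    k, b = np.polyfit(t, g, 1)
    resid = g - (k * t + b)
    return 1 - resid.var() / g.var()

print("reaction-controlled  R^2:", round(r_squared(g_reaction), 4))
print("diffusion-controlled R^2:", round(r_squared(g_diffusion), 4))
```

    Fitting both transforms to each leaching stage and comparing the linearity is the standard way such stage-wise model assignments (reaction control first, diffusion control later) are justified.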

  16. Estimating Cyanobacteria Community Dynamics and its Relationship with Environmental Factors

    PubMed Central

    Luo, Wenhuai; Chen, Huirong; Lei, Anping; Lu, Jun; Hu, Zhangli

    2014-01-01

    The cyanobacteria community dynamics in two eutrophic freshwater bodies (Tiegang Reservoir and Shiyan Reservoir) was studied with both a traditional microscopic counting method and a PCR-DGGE genotyping method. Results showed that cyanobacterium Phormidium tenue was the predominant species; twenty-six cyanobacteria species were identified in water samples collected from the two reservoirs, among which fourteen were identified with the morphological method and sixteen with the PCR-DGGE method. The cyanobacteria community composition analysis showed a seasonal fluctuation from July to December. The cyanobacteria population peaked in August in both reservoirs, with cell abundances of 3.78 × 10^8 cells L^-1 and 1.92 × 10^8 cells L^-1 in the Tiegang and Shiyan reservoirs, respectively. Canonical Correspondence Analysis (CCA) was applied to further investigate the correlation between cyanobacteria community dynamics and environmental factors. The result indicated that the cyanobacteria community dynamics was mostly correlated with pH, temperature and total nitrogen. This study demonstrated that data obtained from PCR-DGGE combined with a traditional morphological method could reflect cyanobacteria community dynamics and its correlation with environmental factors in eutrophic freshwater bodies. PMID:24448632

  17. The successive projection algorithm as an initialization method for brain tumor segmentation using non-negative matrix factorization.

    PubMed

    Sauwen, Nicolas; Acou, Marjan; Bharath, Halandur N; Sima, Diana M; Veraart, Jelle; Maes, Frederik; Himmelreich, Uwe; Achten, Eric; Van Huffel, Sabine

    2017-01-01

    Non-negative matrix factorization (NMF) has become a widely used tool for additive parts-based analysis in a wide range of applications. As NMF is a non-convex problem, the quality of the solution will depend on the initialization of the factor matrices. In this study, the successive projection algorithm (SPA) is proposed as an initialization method for NMF. SPA builds on convex geometry and allocates endmembers based on successive orthogonal subspace projections of the input data. SPA is a fast and reproducible method, and it aligns well with the assumptions made in near-separable NMF analyses. SPA was applied to multi-parametric magnetic resonance imaging (MRI) datasets for brain tumor segmentation using different NMF algorithms. Comparison with common initialization methods shows that SPA achieves similar segmentation quality and it is competitive in terms of convergence rate. Whereas SPA was previously applied as a direct endmember extraction tool, we have shown improved segmentation results when using SPA as an initialization method, as it allows further enhancement of the sources during the NMF iterative procedure.

  18. Method and apparatus for determining material structural integrity

    DOEpatents

    Pechersky, M.J.

    1994-01-01

    Disclosed are a nondestructive method and apparatus for determining the structural integrity of materials by combining laser vibrometry with damping analysis to determine the damping loss factor. The method comprises the steps of vibrating the area being tested over a known frequency range and measuring vibrational force and velocity vs time over the known frequency range. Vibrational velocity is preferably measured by a laser vibrometer. Measurement of the vibrational force depends on the vibration method: if an electromagnetic coil is used to vibrate a magnet secured to the area being tested, then the vibrational force is determined by the coil current. If a reciprocating transducer is used, the vibrational force is determined by a force gauge in the transducer. Using vibrational analysis, a plot of the drive point mobility of the material over the preselected frequency range is generated from the vibrational force and velocity data. Damping loss factor is derived from a plot of the drive point mobility over the preselected frequency range using the resonance dwell method and compared with a reference damping loss factor for structural integrity evaluation.
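
    A common way to extract a damping loss factor from such a frequency-response curve is the half-power bandwidth estimate η ≈ (f2 − f1)/fn, a standard alternative to the resonance dwell method used in the patent. A minimal sketch on a synthetic single-mode response (resonance frequency and loss factor are invented values):

```python
import numpy as np

fn, eta_true = 120.0, 0.03               # hypothetical resonance, loss factor
f = np.linspace(100, 140, 4001)
# Magnitude of a single-mode response with structural (hysteretic) damping.
H = 1.0 / np.abs(1 - (f / fn) ** 2 + 1j * eta_true)

# Half-power (-3 dB) points bracket the peak; their spacing over the
# peak frequency estimates the loss factor.
peak = int(np.argmax(H))
half_power = H[peak] / np.sqrt(2)
above = np.where(H >= half_power)[0]
f1, f2 = f[above[0]], f[above[-1]]
eta_est = (f2 - f1) / f[peak]
print(f"estimated loss factor: {eta_est:.4f}")
```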

  19. The crowding factor method applied to parafoveal vision

    PubMed Central

    Ghahghaei, Saeideh; Walker, Laura

    2016-01-01

    Crowding increases with eccentricity and is most readily observed in the periphery. During natural, active vision, however, central vision plays an important role. Measures of critical distance to estimate crowding are difficult in central vision, as these distances are small. Any overlap of flankers with the target may create an overlay masking confound. The crowding factor method avoids this issue by simultaneously modulating target size and flanker distance and using a ratio to compare crowded to uncrowded conditions. This method was developed and applied in the periphery (Petrov & Meleshkevich, 2011b). In this work, we apply the method to characterize crowding in parafoveal vision (<3.5 visual degrees) with spatial uncertainty. We find that eccentricity and hemifield have less impact on crowding than in the periphery, yet radial/tangential asymmetries are clearly preserved. There are considerable idiosyncratic differences observed between participants. The crowding factor method provides a powerful tool for examining crowding in central and peripheral vision, which will be useful in future studies that seek to understand visual processing under natural, active viewing conditions. PMID:27690170

  20. Evaluation of ergonomic physical risk factors in a truck manufacturing plant: case study in SCANIA Production Angers

    PubMed Central

    ZARE, Mohsen; MALINGE-OUDENOT, Agnes; HÖGLUND, Robert; BIAU, Sophie; ROQUELAURE, Yves

    2015-01-01

    The aims of this study were 1) to assess the ergonomic physical risk factors from a practitioner's viewpoint in a truck assembly plant with an in-house observational method and the NIOSH lifting equation, and 2) to compare the results of both methods and their differences. The in-house ergonomic observational method for truck assembly, i.e. the SCANIA Ergonomics Standard (SES), and the NIOSH lifting equation were applied to evaluate physical risk factors and the lifting of loads by operators. Both risk assessment approaches revealed various levels of risk, ranging from low to high. Two workstations were identified by the SES method as high risk. The NIOSH lifting index (LI) was greater than two for four lifting tasks. The results of the SES method disagreed with the NIOSH lifting equation for lifting tasks. Moreover, meaningful variations in ergonomic risk patterns were found across truck models at each workstation. These results provide a better understanding of physical ergonomic exposure from a practitioner's point of view in the automotive assembly plant. PMID:26423331
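
    The NIOSH lifting index referred to above comes from the revised NIOSH lifting equation: RWL = LC·HM·VM·DM·AM·FM·CM and LI = load/RWL. A minimal metric-form sketch follows; the frequency and coupling multipliers FM and CM, which normally come from lookup tables, are fixed assumed values here, and the task dimensions are invented.

```python
def lifting_index(load_kg, h_cm, v_cm, d_cm, a_deg, fm=0.94, cm=0.95):
    """Recommended weight limit (RWL) and lifting index (LI),
    revised NIOSH lifting equation, metric form."""
    lc = 23.0                                  # load constant, kg
    hm = 25.0 / max(h_cm, 25.0)                # horizontal multiplier
    vm = 1.0 - 0.003 * abs(v_cm - 75.0)        # vertical multiplier
    dm = 0.82 + 4.5 / max(d_cm, 25.0)          # travel-distance multiplier
    am = 1.0 - 0.0032 * a_deg                  # asymmetry multiplier
    rwl = lc * hm * vm * dm * am * fm * cm
    return rwl, load_kg / rwl

# Hypothetical assembly-line lift: 15 kg held 40 cm out, starting 50 cm
# above the floor, moved 60 cm vertically with 30 degrees of twisting.
rwl, li = lifting_index(load_kg=15.0, h_cm=40.0, v_cm=50.0, d_cm=60.0,
                        a_deg=30.0)
print(f"RWL = {rwl:.1f} kg, LI = {li:.2f}")
```

    An LI above 1 flags the task as exceeding the recommended limit; the tasks in the study with LI > 2 would call for redesign under the usual interpretation of the index.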
