Sample records for additive factors method

  1. A comparison study on detection of key geochemical variables and factors through three different types of factor analysis

    NASA Astrophysics Data System (ADS)

    Hoseinzade, Zohre; Mokhtari, Ahmad Reza

    2017-10-01

Large numbers of variables are measured to explain different phenomena, and factor analysis has been widely used to reduce the dimension of such datasets; the technique is also employed to highlight underlying factors hidden in a complex system. As geochemical studies benefit from multivariate assays, application of the method is widespread in geochemistry. However, the conventional protocols for implementing factor analysis have some drawbacks in spite of their advantages. In the present study, a geochemical dataset of 804 soil samples, collected from a mining area in central Iran during exploration for MVT-type Pb-Zn deposits, was used to compare different types of factor analysis. Routine factor analysis, sequential factor analysis, and staged factor analysis were applied to the dataset, after opening the data with the additive log-ratio (alr) transformation, to extract the mineralization factor. A comparison between these methods indicated that sequential factor analysis revealed the MVT paragenesis elements in surface samples most clearly, with nearly 50% of the variation in F1. Staged factor analysis also gave acceptable results while being easy to apply: it detected the mineralization-related elements and assigned them larger factor loadings, making the mineralization factor more pronounced.
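The alr "opening" step mentioned in the abstract is straightforward to sketch. A minimal illustration in Python with NumPy; the compositions and the choice of divisor component below are made up for illustration, not the paper's data:

```python
import numpy as np

def alr(X, divisor_col=-1):
    """Additive log-ratio (alr) transform for compositional data.

    Each row of X is a composition (parts of a whole, all positive).
    Every part is divided by a chosen divisor component and logged,
    mapping D-part compositions to D-1 unconstrained values suitable
    for factor analysis.
    """
    X = np.asarray(X, dtype=float)
    div = X[:, divisor_col][:, None]
    rest = np.delete(X, divisor_col % X.shape[1], axis=1)
    return np.log(rest / div)

# Toy composition: 3 samples x 4 "elements" summing to 1.
comp = np.array([
    [0.60, 0.20, 0.15, 0.05],
    [0.50, 0.25, 0.15, 0.10],
    [0.40, 0.30, 0.20, 0.10],
])
Z = alr(comp)          # shape (3, 3); these columns enter the factor analysis
print(Z.shape)
```

The divisor component is a modelling choice; the transformed, unconstrained columns are what a routine, sequential, or staged factor analysis would then operate on.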

  2. Calculation Method of Lateral Strengths and Ductility Factors of Constructions with Shear Walls of Different Ductility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamaguchi, Nobuyoshi; Nakao, Masato; Murakami, Masahide

    2008-07-08

For seismic design, ductility-related force modification factors are called the R factor in the U.S. Uniform Building Code, the q factor in Eurocode 8, and the Ds factor (the inverse of R) in the Japanese Building Code. These factors appear in the codes for each type of shear element. Some constructions use several types of shear walls with different ductility, especially after retrofit or re-strengthening, and in these cases engineers face a puzzle in choosing the force modification factors for the construction. To solve this problem, a new method to calculate the lateral strengths of stories for simple shear wall systems, named the 'Stiffness-Potential Energy Addition Method', is proposed in this paper. The method uses two design lateral strengths for each type of shear wall, one in the damage limit state and one in the safety limit state, and calculates the lateral strengths of stories in both limit states from these design strengths. The calculated strengths have the same quality as values obtained by the strength addition method using many steps of load-deformation data for the shear walls. A new method to calculate ductility factors, based on the new lateral-strength calculation, is also proposed; it solves the problem of obtaining ductility factors for stories with shear walls of different ductility.

  3. An Upscaling Method for Cover-Management Factor and Its Application in the Loess Plateau of China

    PubMed Central

    Zhao, Wenwu; Fu, Bojie; Qiu, Yang

    2013-01-01

The cover-management factor (C-factor) is important for studying soil erosion. In addition, it is important to use sampling plot data to estimate the regional C-factor when assessing erosion and soil conservation. Here, the loess hill and gully region in Ansai County, China, was studied to determine a method for computing the C-factor. This C-factor is used in the Universal Soil Loss Equation (USLE) at a regional scale. After upscaling the slope-scale computational equation, the C-factor for Ansai County was calculated by using the soil loss ratio, precipitation and land use/cover type. The multi-year mean C-factor for Ansai County was 0.36. The C-factor values were greater in the eastern region of the county than in the western region. In addition, the lowest C-factor values were found in the southern region of the county near its southern border. These spatial differences were consistent with the spatial distribution of the soil loss ratios across areas with different land uses. Additional research is needed to determine the effects of seasonal vegetation growth changes on the C-factor, and the C-factor upscaling uncertainties at a regional scale. PMID:24113551
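The basic upscaling idea, aggregating plot-scale soil loss ratios over the land-use classes of a region, can be hedged into a one-line sketch. All numbers below are illustrative placeholders, not the paper's Ansai County data:

```python
import numpy as np

# Hypothetical soil loss ratios (SLR) per land-use class, and the area
# fraction each class occupies; the regional C-factor is then an
# area-weighted mean of the class-level ratios.
slr  = np.array([0.45, 0.30, 0.08, 0.01])   # cropland, grassland, shrub, forest
area = np.array([0.40, 0.30, 0.20, 0.10])   # area fractions summing to 1

c_regional = float(np.sum(slr * area))
print(round(c_regional, 3))
```

In the paper the aggregation additionally weights by precipitation timing and uses mapped land use/cover; this sketch shows only the area-weighted averaging at its core.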

  5. [Comparison of three stand-level biomass estimation methods].

    PubMed

    Dong, Li Hu; Li, Feng Ri

    2016-12-01

At present, forest biomass estimation at the regional scale attracts much research attention, and developing stand-level biomass models is popular. Based on forestry inventory data for larch (Larix olgensis) plantations in Jilin Province, we used nonlinear seemingly unrelated regression (NSUR) to estimate the parameters of two additive systems of stand-level biomass equations, i.e., equations including stand variables and equations including the biomass expansion factor (Model system 1 and Model system 2), derived a constant biomass expansion factor for the larch plantations, and compared the prediction accuracy of the three stand-level biomass estimation methods. The results indicated that for the two additive systems of biomass equations, the adjusted coefficient of determination (Ra²) of the total and stem equations exceeded 0.95, and the root mean squared error (RMSE), mean prediction error (MPE) and mean absolute error (MAE) were small. The branch and foliage biomass equations performed worse than the total and stem biomass equations, with Ra² below 0.95. The prediction accuracy of the constant biomass expansion factor was lower than that of Model system 1 and Model system 2. Overall, although the stand-level biomass equation including the biomass expansion factor belongs to the volume-derived biomass estimation methods and differs in essence from the stand biomass equations including stand variables, the two approaches attained similar prediction accuracy. The constant biomass expansion factor had lower prediction accuracy and is inappropriate. In addition, to make the model parameter estimation more efficient, stand-level biomass equations should ensure additivity in a system of all tree-component biomass and total biomass equations.

  6. Introduction of risk size in the determination of uncertainty factor UFL in risk assessment

    NASA Astrophysics Data System (ADS)

    Xue, Jinling; Lu, Yun; Velasquez, Natalia; Yu, Ruozhen; Hu, Hongying; Liu, Zhengtao; Meng, Wei

    2012-09-01

The methodology for using uncertainty factors in health risk assessment has been developed over several decades. A default value is usually applied for the uncertainty factor UFL, which is used to extrapolate from the LOAEL (lowest observed adverse effect level) to the NAEL (no adverse effect level). Here, we have developed a new method that establishes a linear relationship between UFL and the additional risk level at the LOAEL based on dose-response information, a very important factor that should be carefully considered. This linear formula makes it possible to select UFL properly over the additional risk range from 5.3% to 16.2%. The results also indicate that the default value of 10 may not be conservative enough when the additional risk level at the LOAEL exceeds 16.2%. Furthermore, this novel method not only provides a flexible UFL instead of the traditional default value, but also ensures a conservative estimate of UFL with fewer errors and avoids the benchmark-response selection involved in the benchmark dose method. These advantages can improve the estimation of the extrapolation starting point in risk assessment.
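A sketch of how such a linear UFL rule might be applied in practice. The anchor points below are placeholders: only the 5.3%-16.2% validity range and the default value of 10 come from the abstract; the paper's actual coefficients are not reproduced here.

```python
def ufl_from_risk(risk, lo=(0.053, 3.3), hi=(0.162, 10.0)):
    """Linear UFL as a function of the additional risk level at the LOAEL.

    `lo` and `hi` are illustrative (risk, UFL) anchor points; the paper
    reports a linear relation valid for additional risk between 5.3% and
    16.2%, with the default UFL of 10 no longer conservative above 16.2%.
    """
    (r0, u0), (r1, u1) = lo, hi
    if not (r0 <= risk <= r1):
        raise ValueError("outside the 5.3%-16.2% validity range")
    # Linear interpolation between the two anchors.
    return u0 + (u1 - u0) * (risk - r0) / (r1 - r0)

print(round(ufl_from_risk(0.162), 1))
```

Outside the stated range the rule refuses to extrapolate, which matches the paper's caution about the default value above 16.2% additional risk.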

  7. Calibrated Bayes Factors Should Not Be Used: A Reply to Hoijtink, van Kooten, and Hulsker.

    PubMed

    Morey, Richard D; Wagenmakers, Eric-Jan; Rouder, Jeffrey N

    2016-01-01

Hoijtink, van Kooten, and Hulsker (2016) present a method for choosing the prior distribution for an analysis with Bayes factors that is based on controlling error rates, which they advocate as an alternative to our more subjective methods (Morey & Rouder, 2014; Rouder, Speckman, Sun, Morey, & Iverson, 2009; Wagenmakers, Wetzels, Borsboom, & van der Maas, 2011). We show that the method they advocate amounts to a simple significance test and that the resulting Bayes factors are not interpretable. Additionally, their method fails in common circumstances and has the potential to yield arbitrarily high Type II error rates. After critiquing their method, we outline the position on subjectivity that underlies our advocacy of Bayes factors.

  8. 21 CFR 113.100 - Processing and production records.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... critical factors specified in the scheduled process shall also be recorded. In addition, the following... preservation methods wherein critical factors such as water activity are used in conjunction with thermal... critical factors, as well as other critical factors, and results of aw determinations. (7) Other systems...

  9. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    DOEpatents

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
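A toy version of constraint-biased alternating least squares, with a simple nonnegativity clip standing in for the physically motivated constraints the patent discusses. This is a generic sketch only: real MCR-ALS implementations use proper NNLS solvers, and the patent's bias-offsetting machinery is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_als(D, k, iters=200):
    """Alternating least squares with a nonnegativity constraint.

    Factors D (m x n) as C @ S.T with C, S >= 0 by clipping each
    unconstrained least-squares update at zero -- the simplest way to
    impose the kind of physical constraint discussed in the abstract.
    """
    m, n = D.shape
    C = rng.random((m, k))
    S = rng.random((n, k))
    for _ in range(iters):
        C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0, None)
        S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0, None)
    return C, S

# Recover a small synthetic nonnegative factorization.
C0 = rng.random((20, 2))
S0 = rng.random((5, 2))
D = C0 @ S0.T
C, S = nn_als(D, 2)
err = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
print(f"relative reconstruction error: {err:.2e}")
```

When components are collinear, many (C, S) pairs fit D equally well; the clipping here is exactly the kind of mathematical bias whose effect the patented method aims to offset.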

  10. Application of Response Surface Methods To Determine Conditions for Optimal Genomic Prediction

    PubMed Central

    Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.

    2017-01-01

An epistatic genetic architecture can have a significant impact on the prediction accuracies of genomic prediction (GP) methods. Machine learning methods predict traits with epistatic genetic architectures more accurately than statistical methods based on additive mixed linear models. The differences between these types of GP methods suggest a diagnostic for revealing the genetic architectures underlying traits of interest. In addition to genetic architecture, the performance of GP methods may be influenced by the sample size of the training population, the number of QTL, and the proportion of phenotypic variability due to genotypic variability (heritability). The possible values for these factors, and the number of combinations of factor levels that influence the performance of GP methods, can be large; thus, efficient methods for identifying combinations of factor levels that produce the most accurate GPs are needed. Herein, we employ response surface methods (RSMs) to find the experimental conditions that produce the most accurate GPs. We illustrate RSM with an example of simulated doubled haploid populations and identify the combination of factors that maximizes the difference between the prediction accuracies of the best linear unbiased prediction (BLUP) and support vector machine (SVM) GP methods. The greatest impact on the response is due to the genetic architecture of the population, the heritability of the trait, and the sample size. The advantage of the SVM method over the BLUP method is greatest when epistasis is responsible for all of the genotypic variance, heritability is equal to one, and the sample size of the training population is large. However, except for values close to the maximum, most of the response surface shows little difference between the methods. We also determined that the greatest prediction accuracy for BLUP occurred when the genetic architecture consists solely of additive effects and heritability is equal to one.
PMID:28720710

  11. Reinforcing mechanism of anchors in slopes: a numerical comparison of results of LEM and FEM

    NASA Astrophysics Data System (ADS)

    Cai, Fei; Ugai, Keizo

    2003-06-01

This paper reports the limitations of the conventional Bishop's simplified method for calculating the safety factor of slopes stabilized with anchors, and proposes a new approach that accounts for the reinforcing effect of anchors on the safety factor. The reinforcing effect of anchors can be represented as an additional shearing resistance on the slip surface. A three-dimensional shear strength reduction finite element method (SSRFEM), in which soil-anchor interactions were simulated by three-dimensional zero-thickness elasto-plastic interface elements, was used to calculate the safety factor of slopes stabilized with anchors and to verify the reinforcing mechanism. The results of SSRFEM were compared with those of the conventional and proposed approaches for Bishop's simplified method for various orientations, positions, and spacings of anchors, and for various shear strengths of the soil-grouted body interface. For the safety factor, the proposed approach agreed better with SSRFEM than the conventional approach. The additional shearing resistance can explain the influence of the orientation, position, and spacing of anchors, and of the interface shear strength, on the safety factor of slopes stabilized with anchors.

  12. Method for factor analysis of GC/MS data

    DOEpatents

    Van Benthem, Mark H; Kotula, Paul G; Keenan, Michael R

    2012-09-11

    The method of the present invention provides a fast, robust, and automated multivariate statistical analysis of gas chromatography/mass spectroscopy (GC/MS) data sets. The method can involve systematic elimination of undesired, saturated peak masses to yield data that follow a linear, additive model. The cleaned data can then be subjected to a combination of PCA and orthogonal factor rotation followed by refinement with MCR-ALS to yield highly interpretable results.

  13. Constrained Response Surface Optimisation and Taguchi Methods for Precisely Atomising Spraying Process

    NASA Astrophysics Data System (ADS)

    Luangpaiboon, P.; Suwankham, Y.; Homrossukon, S.

    2010-10-01

This research presents the development of a design-of-experiments technique for quality improvement in the automotive manufacturing industry. The quality characteristic of interest is the colour shade, a key feature of a vehicle's exterior appearance. With a low percentage of first-time quality, the manufacturer has spent heavily on rework as well as on longer production times. To resolve this problem permanently, the spraying condition should be optimised precisely. Therefore, this work applies a full factorial design, multiple regression, constrained response surface optimisation methods (CRSOM), and Taguchi's method to investigate the significant factors and determine the optimum factor levels in order to improve the quality of the paint shop. Firstly, a 2^k full factorial design was employed to study the effect of five factors: the paint flow rate at the robot setting, the paint levelling agent, the paint pigment, the additive slow solvent, and the non-volatile solids at spraying of the atomising spraying machine. The colour shade responses at 15 and 45 degrees were measured with a spectrophotometer. Regression models of the colour shade at both angles were then developed from the significant factors affecting each response. Both regression models were placed into a linear programming form to maximise the colour shade subject to three main factors: the pigment, the additive solvent, and the flow rate. Finally, Taguchi's method was applied to determine the proper levels of the key variable factors to achieve the target mean colour shade; the non-volatile solids content emerged as one additional factor at this stage. The proper factor levels from both experimental design methods were then used in a confirmation experiment. The colour shades measured at both the 15- and 45-degree angles were close to the target, and the defect rate at the quality gate was reduced from 0.35 WDPV to 0.10 WDPV. This shows that the objective of the research was met and that the procedure can serve as quality improvement guidance for automotive paint shops.

  14. Factor Scores, Structure Coefficients, and Communality Coefficients

    ERIC Educational Resources Information Center

    Goodwyn, Fara

    2012-01-01

    This paper presents heuristic explanations of factor scores, structure coefficients, and communality coefficients. Common misconceptions regarding these topics are clarified. In addition, (a) the regression (b) Bartlett, (c) Anderson-Rubin, and (d) Thompson methods for calculating factor scores are reviewed. Syntax necessary to execute all four…
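The regression (Thurstone) method in (a) computes factor scores as Z R⁻¹ Λ, where Z holds the standardized observed variables, R their correlation matrix, and Λ the factor loadings. A minimal sketch with simulated data and hypothetical single-factor loadings:

```python
import numpy as np

# Simulate four observed variables with some correlation.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
X[:, 1] += X[:, 0]                      # induce correlation between vars 0 and 1

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize
R = np.corrcoef(Z, rowvar=False)           # correlation matrix

# Hypothetical loadings for a single factor (illustration only).
L = np.array([[0.8], [0.7], [0.2], [0.1]])

# Regression-method factor scores: Z @ inv(R) @ L
scores = Z @ np.linalg.inv(R) @ L
print(scores.shape)
```

The Bartlett and Anderson-Rubin methods reviewed in the paper differ only in how the loadings and uniquenesses enter this weighting; the pattern of "standardized data times a weight matrix" is common to all of them.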

  15. Intracardiac Shunting and Stroke in Children: A Systematic Review

    PubMed Central

    Dowling, Michael M.; Ikemba, Catherine M.

    2017-01-01

In adults, patent foramen ovale or other potential intracardiac shunts are established risk factors for stroke via paradoxical embolization. Stroke is less common in children, and risk factors differ. The authors examined the literature on intracardiac shunting and stroke in children, identifying the methods employed, the prevalence of detectable intracardiac shunts, associated conditions, and treatments. PubMed searches with keywords related to intracardiac shunting and stroke in children identified articles of interest. Additional articles were identified via citations in these articles or in reviews. The authors found that studies of intracardiac shunting in children with stroke are limited. No controlled studies were identified. Detection methods vary, and the prevalence of echocardiographically detectable intracardiac shunting appears lower than reported in adults and in autopsy studies. Defining the role of intracardiac shunting in pediatric stroke will require controlled studies with unified detection methods in populations stratified by additional risk factors for paradoxical embolization. Optimal treatment is unclear. PMID:21212453

  16. Identification of suitable sites for mountain ginseng cultivation using GIS and geo-temperature.

    PubMed

    Kang, Hag Mo; Choi, Soo Im; Kim, Hyun

    2016-01-01

    This study was conducted to explore an accurate site identification technique using a geographic information system (GIS) and geo-temperature (gT) for locating suitable sites for growing cultivated mountain ginseng (CMG; Panax ginseng), which is highly sensitive to the environmental conditions in which it grows. The study site was Jinan-gun, South Korea. The spatial resolution for geographic data was set at 10 m × 10 m, and the temperatures for various climatic factors influencing CMG growth were calculated by averaging the 3-year temperatures obtained from the automatic weather stations of the Korea Meteorological Administration. Identification of suitable sites for CMG cultivation was undertaken using both a conventional method and a new method, in which the gT was added as one of the most important factors for crop cultivation. The results yielded by the 2 methods were then compared. When the gT was added as an additional factor (new method), the proportion of suitable sites identified decreased by 0.4 % compared with the conventional method. However, the proportion matching real CMG cultivation sites increased by 3.5 %. Moreover, only 68.2 % corresponded with suitable sites identified using the conventional factors; i.e., 31.8 % were newly detected suitable sites. The accuracy of GIS-based identification of suitable CMG cultivation sites improved by applying the temperature factor (i.e., gT) in addition to the conventionally used factors.

  17. Investigation of High-Angle-of-Attack Maneuver-Limiting Factors. Part 1. Analysis and Simulation

    DTIC Science & Technology

    1980-12-01

useful, are not so satisfying or instructive as the more positive identification of causal factors offered by the methods developed in Reference 5 ... same methods be applied to additional high-performance fighter aircraft having widely differing high AOA handling characteristics to see if further ... predictions and the nonlinear model results were resolved. The second task involved development of methods, criteria, and an associated pilot rating scale, for

  18. A simple method for determining stress intensity factors for a crack in bi-material interface

    NASA Astrophysics Data System (ADS)

    Morioka, Yuta

Because of the violently oscillating nature of the stress and displacement fields near the crack tip, it is difficult to obtain stress intensity factors for a crack between two dissimilar media. For a crack in a homogeneous medium, it is common practice to find stress intensity factors through strain energy release rates. However, individual strain energy release rates do not exist for a bi-material interface crack, so alternative methods are needed to evaluate stress intensity factors. Several methods have been proposed in the past, but they involve mathematical complexity and sometimes require additional finite element analysis. The purpose of this research is to develop a simple method to find stress intensity factors for bi-material interface cracks. A finite-element-based projection method is proposed. It is shown that the projection method yields very accurate stress intensity factors for a crack in isotropic and anisotropic bi-material interfaces. The projection method is also compared to the displacement ratio method and the energy method proposed by other authors. Through this comparison it is found that the projection method is much simpler to apply, with accuracy comparable to that of the displacement ratio method.

  19. Percent Mammographic Density and Dense Area as Risk Factors for Breast Cancer.

    PubMed

    Rauh, C; Hack, C C; Häberle, L; Hein, A; Engel, A; Schrauder, M G; Fasching, P A; Jud, S M; Ekici, A B; Loehberg, C R; Meier-Meitinger, M; Ozan, S; Schulz-Wendtland, R; Uder, M; Hartmann, A; Wachter, D L; Beckmann, M W; Heusinger, K

    2012-08-01

Purpose: Mammographic characteristics are known to be correlated with breast cancer risk. Percent mammographic density (PMD), as assessed by computer-assisted methods, is an established risk factor for breast cancer. Along with this assessment, the absolute dense area (DA) of the breast is reported as well. The aim of this study was to assess the predictive value of DA for breast cancer risk in addition to other risk factors and in addition to PMD. Methods: We conducted a case-control study with hospital-based patients with a diagnosis of invasive breast cancer and healthy women as controls. A total of 561 patients and 376 controls with available mammographic density were included in this study. We describe the differences in the common risk factors BMI, parity, use of hormone replacement therapy (HRT) and menopausal status between cases and controls, and estimate the odds ratios for PMD and DA, adjusted for the mentioned risk factors. Furthermore, we compare the prediction models with each other to find out whether the addition of DA improves the model. Results: Mammographic density and DA were highly correlated with each other. Both variables were also correlated with the commonly known risk factors in the expected direction and strength; however, PMD (ρ = -0.56) was more strongly correlated with BMI than DA (ρ = -0.11). Women within the highest quartile of PMD had an OR of 2.12 (95% CI: 1.25-3.62). This could not be seen for the fourth quartile of DA. However, the assessment of breast cancer risk could be improved by including DA in a prediction model in addition to common risk factors and PMD. Conclusions: The inclusion of the parameter DA in a prediction model for breast cancer, in addition to established risk factors and PMD, could improve breast cancer risk assessment. As DA is measured together with PMD in the process of computer-assisted assessment of PMD, it might be considered as one additional breast cancer risk factor obtained from breast imaging.

  20. Revealing Dimensions of Thinking in Open-Ended Self-Descriptions: An Automated Meaning Extraction Method for Natural Language

    PubMed Central

    2008-01-01

A new method for extracting common themes from written text is introduced and applied to 1,165 open-ended self-descriptive narratives. Drawing on a lexical approach to personality, the most commonly used adjectives within narratives written by college students were identified using computerized text analytic tools. A factor analysis on the use of these adjectives in the self-descriptions produced a 7-factor solution consisting of psychologically meaningful dimensions. Some dimensions were unipolar (e.g., a Negativity factor, wherein most loaded items were negatively valenced adjectives); others were bipolar in that semantically opposite words clustered together (e.g., a Sociability factor, wherein terms such as shy, outgoing, reserved, and loud all loaded in the same direction). The factors exhibited modest reliability across different types of writing samples and were correlated with self-reports and behaviors consistent with the dimensions. Similar analyses with additional content words (adjectives, adverbs, nouns, and verbs) yielded additional psychological dimensions, associated with physical appearance, school, relationships, etc., in which people contextualize their self-concepts. The results suggest that the meaning extraction method is a promising strategy for determining the dimensions along which people think about themselves. PMID:18802499

  1. Combining Knowledge and Data Driven Insights for Identifying Risk Factors using Electronic Health Records

    PubMed Central

    Sun, Jimeng; Hu, Jianying; Luo, Dijun; Markatou, Marianthi; Wang, Fei; Edabollahi, Shahram; Steinhubl, Steven E.; Daar, Zahra; Stewart, Walter F.

    2012-01-01

Background: The ability to identify risk factors related to an adverse condition, e.g., a heart failure (HF) diagnosis, is very important for improving care quality and reducing cost. Existing approaches for risk factor identification are either knowledge driven (from guidelines or the literature) or data driven (from observational data). No existing method provides a model to effectively combine expert knowledge with data-driven insight for risk factor identification. Methods: We present a systematic approach to enhance known knowledge-based risk factors with additional potential risk factors derived from data. The core of our approach is a sparse regression model with regularization terms that correspond to both knowledge-driven and data-driven risk factors. Results: The approach is validated using a large dataset containing 4,644 heart failure cases and 45,981 controls. The outpatient electronic health records (EHRs) for these patients include diagnoses, medications, and lab results from 2003 to 2010. We demonstrate that the proposed method can identify complementary risk factors that are not among the existing known factors and can better predict the onset of HF. We quantitatively compare different sets of risk factors in the context of predicting onset of HF using the area under the ROC curve (AUC) as the performance metric. The combined knowledge and data risk factors significantly outperform knowledge-based risk factors alone. Furthermore, the additional risk factors were confirmed to be clinically meaningful by a cardiologist. Conclusion: We present a systematic framework for combining knowledge- and data-driven insights for risk factor identification. We demonstrate the power of this framework in the context of predicting onset of HF, where our approach can successfully identify intuitive and predictive risk factors beyond a set of known HF risk factors. PMID:23304365
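The core idea, penalizing candidate data-driven features while leaving known risk factors unpenalized, can be sketched with a proximal-gradient (ISTA) lasso. The paper's exact regularizer and EHR features are not reproduced; the data and feature indices below are synthetic:

```python
import numpy as np

def knowledge_guided_lasso(X, y, known, lam=5.0, iters=500):
    """Sparse regression that leaves known risk factors unpenalized.

    ISTA-style lasso where the soft-threshold is applied only to
    candidate, data-driven coefficients; `known` is a boolean mask over
    columns of X marking coefficients that stay unpenalized, so
    established factors remain in the model while sparse extras enter.
    """
    w = np.zeros(X.shape[1])
    lr = 1.0 / np.linalg.norm(X, 2) ** 2   # step size from the Lipschitz constant
    for _ in range(iters):
        w = w - lr * (X.T @ (X @ w - y))   # gradient step on squared loss
        free = ~known
        # Soft-threshold only the non-known coefficients.
        w[free] = np.sign(w[free]) * np.maximum(np.abs(w[free]) - lr * lam, 0.0)
    return w

# Toy data: outcome driven by a known factor (col 0) and one hidden one (col 3).
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))
y = 1.5 * X[:, 0] + 0.8 * X[:, 3] + 0.1 * rng.normal(size=200)
known = np.array([True] + [False] * 5)
w = knowledge_guided_lasso(X, y, known)
print(sorted(np.argsort(-np.abs(w))[:2].tolist()))
```

The L1 term zeroes out irrelevant candidate columns while the hidden driver survives the threshold, mirroring how the paper's model surfaces complementary risk factors on top of the known set.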

  2. Evaluation of Parallel Analysis Methods for Determining the Number of Factors

    ERIC Educational Resources Information Center

    Crawford, Aaron V.; Green, Samuel B.; Levy, Roy; Lo, Wen-Juo; Scott, Lietta; Svetina, Dubravka; Thompson, Marilyn S.

    2010-01-01

    Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) to identify the number of underlying factors. Additionally, the accuracies of the mean eigenvalue and the 95th percentile eigenvalue criteria…
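A minimal PA-PCA sketch of the comparison the study evaluates: observed eigenvalues are retained only while they exceed eigenvalues from random data of the same shape. Both the mean rule (quantile=0.5) and the 95th-percentile rule are parameterized; the example data are simulated:

```python
import numpy as np

def parallel_analysis(X, n_sims=100, quantile=0.95, seed=0):
    """PA-PCA: count components whose observed correlation-matrix
    eigenvalues exceed the chosen quantile of eigenvalues obtained from
    random normal data of the same dimensions.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        R = np.corrcoef(rng.normal(size=(n, p)), rowvar=False)
        sims[i] = np.linalg.eigvalsh(R)[::-1]
    thresh = np.quantile(sims, quantile, axis=0)
    return int(np.sum(obs > thresh))

# Two correlated blocks of three variables -> PA should suggest 2 factors.
rng = np.random.default_rng(3)
f1 = rng.normal(size=(200, 1))
f2 = rng.normal(size=(200, 1))
X = np.hstack([f1 + 0.3 * rng.normal(size=(200, 3)),
               f2 + 0.3 * rng.normal(size=(200, 3))])
print(parallel_analysis(X))
```

PA-PAF differs only in that reduced correlation matrices (communalities on the diagonal) are eigendecomposed instead of the full correlation matrices used here.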

  3. A Study of Algorithms for Covariance Structure Analysis with Specific Comparisons Using Factor Analysis.

    ERIC Educational Resources Information Center

    Lee, S. Y.; Jennrich, R. I.

    1979-01-01

A variety of algorithms for analyzing covariance structures are considered, along with two methods of estimation: maximum likelihood and weighted least squares. Comparisons are made between these algorithms and factor analysis. (Author/JKS)

  4. Influence of different factors on the destruction of films based on polylactic acid and oxidized polyethylene

    NASA Astrophysics Data System (ADS)

    Podzorova, M. V.; Tertyshnaya, Yu. V.; Pantyukhov, P. V.; Shibryaeva, L. S.; Popov, A. A.; Nikolaeva, S.

    2016-11-01

The influence of different environmental factors on the degradation of film samples based on polylactic acid and low-density polyethylene with the addition of oxidized polyethylene was studied in this work. Different methods were used to find the relationship between degradation and ultraviolet radiation, moisture, and oxygen. It was found that the addition of oxidized polyethylene, used as a model of recycled polyethylene, promotes the degradation of the blends.

  5. Approximate method of variational Bayesian matrix factorization/completion with sparse prior

    NASA Astrophysics Data System (ADS)

    Kawasumi, Ryota; Takeda, Koujin

    2018-05-01

    We derive the analytical expression of a matrix factorization/completion solution by the variational Bayes method, under the assumption that the observed matrix is originally the product of low-rank, dense and sparse matrices with additive noise. We assume the prior of the sparse matrix is a Laplace distribution, taking matrix sparsity into consideration. We then use several approximations to derive the matrix factorization/completion solution. Using our solution, we also numerically evaluate the performance of sparse matrix reconstruction in matrix factorization, and of missing-element completion in matrix completion.

  6. Method Effects on an Adaptation of the Rosenberg Self-Esteem Scale in Greek and the Role of Personality Traits.

    PubMed

    Michaelides, Michalis P; Koutsogiorgi, Chrystalla; Panayiotou, Georgia

    2016-01-01

    Rosenberg's Self-Esteem Scale is a balanced, 10-item scale designed to be unidimensional; however, research has repeatedly shown that its factorial structure is contaminated by method effects due to item wording. Beyond the substantive self-esteem factor, 2 additional factors linked to the positive and negative wording of items have been theoretically specified and empirically supported. Initial evidence has revealed systematic relations of the 2 method factors with variables expressing approach and avoidance motivation. This study assessed the fit of competing confirmatory factor analytic models for the Rosenberg Self-Esteem Scale using data from 2 samples of adult participants in Cyprus. Models that accounted for both positive and negative wording effects via 2 latent method factors had better fit compared to alternative models. Measures of experiential avoidance, social anxiety, and private self-consciousness were associated with the method factors in structural equation models. The findings highlight the need to specify models with wording effects for a more accurate representation of the scale's structure and support the hypothesis of method factors as response styles, which are associated with individual characteristics related to avoidance motivation, behavioral inhibition, and anxiety.

  7. Lipase, protease, and biofilm as the major virulence factors in staphylococci isolated from acne lesions.

    PubMed

    Saising, Jongkon; Singdam, Sudarat; Ongsakul, Metta; Voravuthikunchai, Supayang Piyawan

    2012-08-01

    Staphylococci involve infections in association with a number of bacterial virulence factors. Extracellular enzymes play an important role in staphylococcal pathogenesis. In addition, biofilm is known to be associated with their virulence. In this study, 149 staphylococcal isolates from acne lesions were investigated for their virulence factors including lipase, protease, and biofilm formation. Coagulase-negative staphylococci were demonstrated to present lipase and protease activities more often than coagulase-positive staphylococci. A microtiter plate method (quantitative method) and a Congo red agar method (qualitative method) were comparatively employed to assess biofilm formation. In addition, biofilm forming ability was commonly detected in the coagulase-negative group (97.7%, microtiter plate method and 84.7%, Congo red agar method) more frequently than in coagulase-positive organisms (68.8%, microtiter plate method and 62.5%, Congo red agar method). This study clearly confirms an important role for biofilm in coagulase-negative staphylococci, which is of serious concern as a considerable infectious agent in patients with acne and implanted medical devices. The Congo red agar method proved to be an easy method to quickly detect biofilm producers. Sensitivity of the Congo red agar method was 85.54% and 68.18% and accuracy was 84.7% and 62.5% in coagulase-negative and coagulase-positive staphylococci, respectively, while specificity was 50% in both groups. The results clearly demonstrated that a higher percentage of coagulase-negative staphylococci isolated from acne lesions exhibited lipase and protease activities, as well as biofilm formation, than coagulase-positive staphylococci.

  8. Using Bayes factors for multi-factor, biometric authentication

    NASA Astrophysics Data System (ADS)

    Giffin, A.; Skufca, J. D.; Lao, P. A.

    2015-01-01

    Multi-factor/multi-modal authentication systems are becoming the de facto industry standard. Traditional methods typically use rates that are point estimates and lack a good measure of uncertainty. Additionally, multiple factors are typically fused together in an ad hoc manner. To be consistent, as well as to establish and make proper use of uncertainties, we use a Bayesian method that updates our estimates and uncertainties as new information presents itself. Our algorithm compares competing classes (such as genuine vs. imposter) using Bayes factors (BF). The importance of this approach is that we not only accept or reject one model (class), but compare it to others to make a decision. We show using a Receiver Operating Characteristic (ROC) curve that using BF for determining class will always perform at least as well as the traditional combining of factors, such as a voting algorithm. As the uncertainty decreases, the BF result continues to exceed the traditional methods' results.
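    The class comparison described above can be sketched as a likelihood ratio under each class. A minimal sketch, assuming independent Gaussian match-score models per factor; all means, spreads, and scores are hypothetical, not the paper's models.

```python
import math

def gaussian_loglik(x, mu, sigma):
    # log density of N(mu, sigma^2) at x
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def bayes_factor(scores, genuine=(0.8, 0.1), imposter=(0.3, 0.15)):
    """BF = P(scores | genuine) / P(scores | imposter); each factor's
    match score is assumed conditionally independent given the class."""
    log_bf = sum(gaussian_loglik(s, *genuine) - gaussian_loglik(s, *imposter)
                 for s in scores)
    return math.exp(log_bf)

# two biometric factors (e.g. face + fingerprint), both near the genuine mean
bf = bayes_factor([0.75, 0.82])
accept = bf > 1.0   # evidence favours the genuine class
```

    Unlike a simple vote, the Bayes factor naturally combines evidence strengths: one highly confident factor can outweigh one weakly contrary factor.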

  9. Methods and compositions for regulating gene expression in plant cells

    NASA Technical Reports Server (NTRS)

    Dai, Shunhong (Inventor); Beachy, Roger N. (Inventor); Luis, Maria Isabel Ordiz (Inventor)

    2010-01-01

    Novel chimeric plant promoter sequences are provided, together with plant gene expression cassettes comprising such sequences. In certain preferred embodiments, the chimeric plant promoters comprise the BoxII cis element and/or derivatives thereof. In addition, novel transcription factors are provided, together with nucleic acid sequences encoding such transcription factors and plant gene expression cassettes comprising such nucleic acid sequences. In certain preferred embodiments, the novel transcription factors comprise the acidic domain, or fragments thereof, of the RF2a transcription factor. Methods for using the chimeric plant promoter sequences and novel transcription factors in regulating the expression of at least one gene of interest are provided, together with transgenic plants comprising such chimeric plant promoter sequences and novel transcription factors.

  10. Comparison of calculation methods for estimating annual carbon stock change in German forests under forest management in the German greenhouse gas inventory.

    PubMed

    Röhling, Steffi; Dunger, Karsten; Kändler, Gerald; Klatt, Susann; Riedel, Thomas; Stümer, Wolfgang; Brötz, Johannes

    2016-12-01

    The German greenhouse gas inventory in the land use change sector strongly depends on national forest inventory data. As these data were collected periodically, in 1987, 2002, 2008 and 2012, the time series of emissions shows several "jumps" due to biomass stock change, especially between 2001 and 2002 and between 2007 and 2008, while within the periods the emissions appear constant because periodic average emission factors are applied. This does not reflect inter-annual variability in the time series, which would be expected since the drivers of carbon stock changes fluctuate between years. Therefore additional data, available on an annual basis, should be introduced into the calculation of the emissions inventories in order to obtain more plausible time series. This article explores the possibility of introducing an annual rather than periodic approach to calculating emission factors with the given data, and thus smoothing the trajectory of the time series for emissions from forest biomass. Two approaches are introduced to estimate annual changes derived from periodic data: the so-called logging factor method and the growth factor method. The logging factor method incorporates annual logging data to project annual values from periodic values. This is less complex to implement than the growth factor method, which additionally incorporates growth data. Calculation of the input variables is based on sound statistical methodologies and periodically collected data that cannot be altered. Thus a discontinuous trajectory of the emissions over time remains, even after the adjustments. It is intended to adopt this approach in the German greenhouse gas reporting in order to meet the request for annually adjusted values.
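    The abstract does not specify the logging factor method in detail. One plausible reading, distributing a periodic carbon stock change over the years of the period in proportion to annually reported logging volumes, can be sketched as follows (all numbers are hypothetical, not inventory data):

```python
# Hypothetical numbers: periodic biomass carbon stock change (kt C) over a
# 5-year inventory period, distributed across years in proportion to
# annually reported logging volumes (the "logging factor" idea).
periodic_change = -500.0                  # total change for the whole period
logging = [8.0, 12.0, 10.0, 9.0, 11.0]   # annual harvest volumes (Mm^3)

total = sum(logging)
annual_change = [periodic_change * v / total for v in logging]
```

    By construction the annual values sum back to the periodic total, so the inventory total is preserved while the year-to-year trajectory follows the annual driver.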

  11. RM-DEMATEL: a new methodology to identify the key factors in PM2.5.

    PubMed

    Chen, Yafeng; Liu, Jie; Li, Yunpeng; Sadiq, Rehan; Deng, Yong

    2015-04-01

    The weather system is a relatively complex dynamic system whose factors mutually influence the PM2.5 concentration. In this paper, a new method is proposed to quantify the influence of other factors in the weather system on PM2.5 and to identify the most important factors for PM2.5 with limited resources. The relation map (RM) is used to derive the direct-relation matrix of 14 factors affecting PM2.5. The decision making trial and evaluation laboratory (DEMATEL) method is applied to calculate the causal relationships and the extent of mutual influence among the 14 factors. According to the ranking results of our proposed method, the most important key factors are sulfur dioxide (SO2) and nitrogen oxides (NO(X)). In addition, other factors, the ambient maximum temperature (T(max)), the concentration of PM10, and the wind direction (W(dir)), are important for PM2.5. The proposed method can also be applied to other environmental management systems to identify key factors.
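    The DEMATEL step follows a standard recipe: normalize the direct-relation matrix, form the total-relation matrix T = N(I - N)^-1, and rank factors by prominence. A minimal sketch with a hypothetical 3-factor matrix (not the paper's 14-factor data):

```python
import numpy as np

def dematel(direct):
    """Classic DEMATEL: normalize the direct-relation matrix, compute the
    total-relation matrix T = N (I - N)^-1, then rank factors by
    prominence (row sum + column sum of T)."""
    direct = np.asarray(direct, dtype=float)
    s = max(direct.sum(axis=1).max(), direct.sum(axis=0).max())
    N = direct / s
    T = N @ np.linalg.inv(np.eye(len(direct)) - N)
    r, c = T.sum(axis=1), T.sum(axis=0)
    prominence = r + c      # overall importance of each factor
    relation = r - c        # net cause (+) or net effect (-)
    return prominence, relation

# 3 hypothetical factors; factor 0 strongly influences the others
D = [[0, 3, 3],
     [1, 0, 2],
     [1, 1, 0]]
prom, rel = dematel(D)
most_important = int(np.argmax(prom))
```

    Factors with high prominence and positive relation are the "key causes" the method is after, which is how SO2 and NOx would surface in the paper's 14-factor system.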

  12. Factors Influencing the Research Participation of Adults with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Haas, Kaaren; Costley, Debra; Falkmer, Marita; Richdale, Amanda; Sofronoff, Kate; Falkmer, Torbjörn

    2016-01-01

    Recruiting adults with autism spectrum disorders (ASD) into research poses particular difficulties; longitudinal studies face additional challenges. This paper reports on a mixed methods study to identify factors influencing the participation in longitudinal autism research of adults with ASD, including those with an intellectual disability, and…

  13. Human parvovirus B19 infection in hemophiliacs first infused with two high-purity, virally attenuated factor VIII concentrates.

    PubMed

    Azzi, A; Ciappi, S; Zakvrzewska, K; Morfini, M; Mariani, G; Mannucci, P M

    1992-03-01

    Human parvovirus B19 can be transmitted by coagulation factor concentrates and is highly resistant to virucidal methods. To evaluate whether the additional removal of virus by chromatographic methods during the manufacture of high-purity concentrates reduces the risk of B19 transmission, we have prospectively evaluated the rate of anti-B19 seroconversion in two groups of susceptible (anti-B19 negative) hemophiliacs infused with high-purity, heated (pasteurized) or solvent-detergent-treated factor VIII concentrates. Both products infected a relatively high proportion of patients (nine of 20).

  14. Overview of mycotoxin methods, present status and future needs.

    PubMed

    Gilbert, J

    1999-01-01

    This article reviews current requirements for the analysis for mycotoxins in foods and identifies legislative as well as other factors that are driving development and validation of new methods. New regulatory limits for mycotoxins and analytical quality assurance requirements for laboratories to only use validated methods are seen as major factors driving developments. Three major classes of methods are identified which serve different purposes and can be categorized as screening, official and research. In each case the present status and future needs are assessed. In addition to an overview of trends in analytical methods, some other areas of analytical quality assurance such as participation in proficiency testing and reference materials are identified.

  15. Contraceptive Method Choice Among Young Adults: Influence of Individual and Relationship Factors.

    PubMed

    Harvey, S Marie; Oakley, Lisa P; Washburn, Isaac; Agnew, Christopher R

    2018-01-26

    Because decisions related to contraceptive behavior are often made by young adults in the context of specific relationships, the relational context likely influences use of contraceptives. Data presented here are from in-person structured interviews with 536 Black, Hispanic, and White young adults from East Los Angeles, California. We collected partner-specific relational and contraceptive data on all sexual partnerships for each individual, on four occasions, over one year. Using three-level multinomial logistic regression models, we examined individual and relationship factors predictive of contraceptive use. Results indicated that both individual and relationship factors predicted contraceptive use, but factors varied by method. Participants reporting greater perceived partner exclusivity and relationship commitment were more likely to use hormonal/long-acting methods only or a less effective method/no method versus condoms only. Those with greater participation in sexual decision making were more likely to use any method over a less effective method/no method and were more likely to use condoms only or dual methods versus a hormonal/long-acting method only. In addition, for women only, those who reported greater relationship commitment were more likely to use hormonal/long-acting methods or a less effective method/no method versus a dual method. In summary, interactive relationship qualities and dynamics (commitment and sexual decision making) significantly predicted contraceptive use.

  16. Weed control in organic rice using plastic mulch and water seeding methods in addition to cover crops

    USDA-ARS?s Scientific Manuscript database

    Weeds are a major yield limiting factor in organic rice farming and are more problematic than in conventional production systems. Water seeding is a common method of reducing weed pressure in rice fields as many weeds cannot tolerate flooded field conditions. The use of cover crops is another method...

  17. Analysis of the influencing factors of global energy interconnection development

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; He, Yongxiu; Ge, Sifan; Liu, Lin

    2018-04-01

    Against the background of building a global energy interconnection and achieving green, low-carbon development, this paper considers the new round of energy restructuring and trends in energy technology. Based on the present state of global and Chinese energy interconnection development, it establishes an index system for the factors affecting global energy interconnection development. Subjective and objective weights for these factors were computed separately, by network-level analysis and by the entropy method, and combined by the method of additive integration, which gives comprehensive weights for the influencing factors and a ranking of their influence.
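    The entropy method and the additive integration of subjective and objective weights can be sketched as follows. The decision matrix, the subjective weights, and the 0.5/0.5 mixing coefficient are all hypothetical placeholders:

```python
import numpy as np

def entropy_weights(X):
    """Objective weights from the entropy method: criteria whose values
    vary more across alternatives carry more information and hence
    receive a larger weight."""
    P = X / X.sum(axis=0)                         # column-normalized proportions
    n = X.shape[0]
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)  # entropy per criterion
    d = 1 - e                                     # degree of diversification
    return d / d.sum()

# hypothetical decision matrix: 4 alternatives x 3 influencing factors
X = np.array([[0.9, 5.0, 3.1],
              [0.1, 5.1, 3.0],
              [0.5, 5.0, 2.9],
              [0.2, 4.9, 3.0]])
w_objective = entropy_weights(X)
w_subjective = np.array([0.5, 0.3, 0.2])      # e.g. from an expert survey
w = 0.5 * w_subjective + 0.5 * w_objective    # additive integration
```

    The first factor varies most across alternatives, so the entropy method concentrates the objective weight there; additive integration then blends this with the expert judgment.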

  18. Unsupervised Bayesian linear unmixing of gene expression microarrays.

    PubMed

    Bazot, Cécile; Dobigeon, Nicolas; Tourneret, Jean-Yves; Zaas, Aimee K; Ginsburg, Geoffrey S; Hero, Alfred O

    2013-03-19

    This paper introduces a new constrained model and the corresponding algorithm, called unsupervised Bayesian linear unmixing (uBLU), to identify biological signatures from high-dimensional assays such as gene expression microarrays. The basis for uBLU is a Bayesian model in which the data samples are represented as an additive mixture of random positive gene signatures, called factors, with random positive mixing coefficients, called factor scores, that specify the relative contribution of each signature to a specific sample. The particularity of the proposed method is that uBLU constrains the factor loadings to be non-negative and the factor scores to be probability distributions over the factors. Furthermore, it also provides estimates of the number of factors. A Gibbs sampling strategy is adopted to generate random samples according to the posterior distribution of the factors, factor scores, and number of factors; these samples are then used to estimate all the unknown parameters. First, the proposed uBLU method is applied to several simulated datasets with known ground truth and compared with previous factor decomposition methods, such as principal component analysis (PCA), non-negative matrix factorization (NMF), Bayesian factor regression modeling (BFRM), and the gradient-based algorithm for general matrix factorization (GB-GMF). Second, we illustrate the application of uBLU on a real time-evolving gene expression dataset from a recent viral challenge study in which individuals were inoculated with influenza A/H3N2/Wisconsin. On both the simulated and real datasets, uBLU significantly outperforms the other factor decomposition methods. The uBLU method identifies an inflammatory component closely associated with clinical symptom scores collected during the study. Using a constrained model allows recovery of all the inflammatory genes in a single factor.
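    NMF, one of the baselines compared above, gives a small self-contained illustration of additive non-negative decomposition. This is a minimal sketch using the classic Lee-Seung multiplicative updates on synthetic data, not the uBLU algorithm itself:

```python
import numpy as np

def nmf(V, k, n_iter=1000, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H with W, H >= 0.
    Positivity is preserved because the updates only multiply by
    non-negative ratios."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# synthetic "expression" matrix built from 2 non-negative signatures
rng = np.random.default_rng(1)
W0 = rng.random((30, 2))
H0 = rng.random((2, 20))
V = W0 @ H0
W, H = nmf(V, 2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

    uBLU additionally constrains the scores to lie on the probability simplex and infers the number of factors, which plain NMF does not.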

  19. The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models

    ERIC Educational Resources Information Center

    Schoeneberger, Jason A.

    2016-01-01

    The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…

  20. An examination of the wording effect in the Rosenberg Self-Esteem Scale among culturally Chinese people.

    PubMed

    Wu, Chia-Huei

    2008-10-01

    Previous psychometric studies of the Rosenberg Self-Esteem Scale (RSES; 1965) have shown that items with positive and negative words tend to form 2 factors instead of a single factor for global self-esteem. Recent studies using confirmatory factor analysis have indicated that there is an additional method effect behind negatively worded items. However, researchers conducted these studies using Western participants. Because J. L. Farh and B. S. Cheng (1997) suggested that culturally Chinese people tend to exhibit a modesty bias in self-evaluation, especially on positively worded items, researchers may infer that a wording effect of positively worded items would be evident for culturally Chinese people. The author examined the wording effect in the RSES for culturally Chinese people by comparing different confirmatory factor models. The author analyzed data from 2 independent samples of students at the National Taiwan University (ns = 393, 441) and a national sample of juniors recruited from 140 universities and colleges in Taiwan in 2004 (n = 28,862). Results showed that in addition to a global factor for self-esteem, method effects of positively and negatively worded items should also be specified for a model fitting culturally Chinese people.

  1. Matching factorization theorems with an inverse-error weighting

    NASA Astrophysics Data System (ADS)

    Echevarria, Miguel G.; Kasemets, Tomas; Lansberg, Jean-Philippe; Pisano, Cristian; Signori, Andrea

    2018-06-01

    We propose a new fast method to match factorization theorems applicable in different kinematical regions, such as the transverse-momentum-dependent and the collinear factorization theorems in Quantum Chromodynamics. At variance with well-known approaches relying on their simple addition and subsequent subtraction of double-counted contributions, ours simply builds on their weighting using the theory uncertainties deduced from the factorization theorems themselves. This allows us to estimate the unknown complete matched cross section from an inverse-error-weighted average. The method is simple and provides an evaluation of the theoretical uncertainty of the matched cross section associated with the uncertainties from the power corrections to the factorization theorems (additional uncertainties, such as the nonperturbative ones, should be added for a proper comparison with experimental data). Its usage is illustrated with several basic examples, such as Z boson, W boson, H0 boson and Drell-Yan lepton-pair production in hadronic collisions, and compared to the state-of-the-art Collins-Soper-Sterman subtraction scheme. It is also not limited to the transverse-momentum spectrum, and can straightforwardly be extended to match any (un)polarized cross section differential in other variables, including multi-differential measurements.
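    The inverse-error-weighted average at the heart of the method is simple to state. A minimal numerical sketch; the cross-section values and uncertainties are toy numbers, not results from the paper:

```python
def matched(sigma_tmd, tmd, sigma_coll, coll):
    """Inverse-error-weighted average of two factorization-theorem
    predictions; each weight is the inverse squared theory uncertainty
    of that prediction in the given kinematical region."""
    w1, w2 = sigma_tmd**-2, sigma_coll**-2
    value = (w1 * tmd + w2 * coll) / (w1 + w2)
    error = (w1 + w2)**-0.5               # uncertainty of the weighted mean
    return value, error

# hypothetical cross sections (pb) at one qT point with toy uncertainties:
# the TMD prediction is far more precise here, so it dominates the match
value, error = matched(0.1, 10.0, 1.0, 14.0)
```

    In the region where one theorem's power corrections blow up, its weight collapses and the matched result smoothly hands over to the other theorem, with no double-counting subtraction needed.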

  2. Matching factorization theorems with an inverse-error weighting

    DOE PAGES

    Echevarria, Miguel G.; Kasemets, Tomas; Lansberg, Jean-Philippe; ...

    2018-04-03

    We propose a new fast method to match factorization theorems applicable in different kinematical regions, such as the transverse-momentum-dependent and the collinear factorization theorems in Quantum Chromodynamics. At variance with well-known approaches relying on their simple addition and subsequent subtraction of double-counted contributions, ours simply builds on their weighting using the theory uncertainties deduced from the factorization theorems themselves. This allows us to estimate the unknown complete matched cross section from an inverse-error-weighted average. The method is simple and provides an evaluation of the theoretical uncertainty of the matched cross section associated with the uncertainties from the power corrections to the factorization theorems (additional uncertainties, such as the nonperturbative ones, should be added for a proper comparison with experimental data). Its usage is illustrated with several basic examples, such as Z boson, W boson, H0 boson and Drell–Yan lepton-pair production in hadronic collisions, and compared to the state-of-the-art Collins–Soper–Sterman subtraction scheme. In conclusion, it is also not limited to the transverse-momentum spectrum, and can straightforwardly be extended to match any (un)polarized cross section differential in other variables, including multi-differential measurements.

  3. Matching factorization theorems with an inverse-error weighting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Echevarria, Miguel G.; Kasemets, Tomas; Lansberg, Jean-Philippe

    We propose a new fast method to match factorization theorems applicable in different kinematical regions, such as the transverse-momentum-dependent and the collinear factorization theorems in Quantum Chromodynamics. At variance with well-known approaches relying on their simple addition and subsequent subtraction of double-counted contributions, ours simply builds on their weighting using the theory uncertainties deduced from the factorization theorems themselves. This allows us to estimate the unknown complete matched cross section from an inverse-error-weighted average. The method is simple and provides an evaluation of the theoretical uncertainty of the matched cross section associated with the uncertainties from the power corrections to the factorization theorems (additional uncertainties, such as the nonperturbative ones, should be added for a proper comparison with experimental data). Its usage is illustrated with several basic examples, such as Z boson, W boson, H0 boson and Drell–Yan lepton-pair production in hadronic collisions, and compared to the state-of-the-art Collins–Soper–Sterman subtraction scheme. In conclusion, it is also not limited to the transverse-momentum spectrum, and can straightforwardly be extended to match any (un)polarized cross section differential in other variables, including multi-differential measurements.

  4. Anytime query-tuned kernel machine classifiers via Cholesky factorization

    NASA Technical Reports Server (NTRS)

    DeCoste, D.

    2002-01-01

    We recently demonstrated 2 to 64-fold query-time speedups of Support Vector Machine and Kernel Fisher classifiers via a new computational geometry method for anytime output bounds (DeCoste,2002). This new paper refines our approach in two key ways. First, we introduce a simple linear algebra formulation based on Cholesky factorization, yielding simpler equations and lower computational overhead. Second, this new formulation suggests new methods for achieving additional speedups, including tuning on query samples. We demonstrate effectiveness on benchmark datasets.
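    The Cholesky formulation can be illustrated on a generic kernel system: factor the Gram matrix once, then reuse the factor across queries, which is the source of the low per-query overhead. A minimal sketch, not the paper's classifier; the kernel, jitter, and data are hypothetical:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = rng.normal(size=50)

def rbf(A, B, gamma=0.5):
    # Gaussian (RBF) kernel matrix between row sets A and B
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2)

# solve K @ alpha = y once via Cholesky; the factor is reusable
K = rbf(X, X) + 1e-6 * np.eye(50)        # jitter keeps K positive definite
c, low = cho_factor(K)
alpha = cho_solve((c, low), y)           # dual coefficients

query = rng.normal(size=(1, 3))
pred = rbf(query, X) @ alpha             # each new query is just a dot product
```

    After the one-time O(n^3) factorization, each query costs only the kernel evaluations plus a matrix-vector product, which is the kind of trade-off the paper's query-time speedups build on.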

  5. Computer-generated holograms by multiple wavefront recording plane method with occlusion culling.

    PubMed

    Symeonidou, Athanasia; Blinder, David; Munteanu, Adrian; Schelkens, Peter

    2015-08-24

    We propose a novel fast method for full parallax computer-generated holograms with occlusion processing, suitable for volumetric data such as point clouds. A novel light wave propagation strategy relying on the sequential use of the wavefront recording plane method is proposed, which employs look-up tables in order to reduce the computational complexity in the calculation of the fields. Also, a novel technique for occlusion culling with little additional computation cost is introduced. Additionally, the method applies a Gaussian distribution to the individual points in order to improve visual quality. Performance tests show that for a full-parallax high-definition CGH a speedup factor of more than 2,500 compared to the ray-tracing method can be achieved without hardware acceleration.

  6. Downdating a time-varying square root information filter

    NASA Technical Reports Server (NTRS)

    Muellerschoen, Ronald J.

    1990-01-01

    A new method to efficiently downdate an estimate and covariance generated by a discrete time Square Root Information Filter (SRIF) is presented. The method combines the QR factor downdating algorithm of Gill and the decentralized SRIF algorithm of Bierman. Efficient removal of either measurements or a priori information is possible without loss of numerical integrity. Moreover, the method includes features for detecting potential numerical degradation. Performance on a 300-parameter system with 5800 data points shows that the method can be used in real time and hence is a promising tool for interactive data analysis. Additionally, updating a time-varying SRIF filter with either additional measurements or a priori information proceeds analogously.

  7. Reducing the time-lag between onset of chest pain and seeking professional medical help: a theory-based review

    PubMed Central

    2013-01-01

    Background Research suggests that there are a number of factors which can be associated with delay in a patient seeking professional help following chest pain, including demographic and social factors. These factors may have an adverse impact on the efficacy of interventions which to date have had limited success in improving patient action times. Theory-based methods of review are becoming increasingly recognised as important additions to conventional systematic review methods. They can be useful to gain additional insights into the characteristics of effective interventions by uncovering complex underlying mechanisms. Methods This paper describes the further analysis of research papers identified in a conventional systematic review of published evidence. The aim of this work was to investigate the theoretical frameworks underpinning studies exploring the issue of why people having a heart attack delay seeking professional medical help. The study used standard review methods to identify papers meeting the inclusion criterion, and carried out a synthesis of data relating to theoretical underpinnings. Results Thirty-six papers from the 53 in the original systematic review referred to a particular theoretical perspective, or contained data which related to theoretical assumptions. The most frequently mentioned theory was the self-regulatory model of illness behaviour. Papers reported the potential significance of aspects of this model including different coping mechanisms, strategies of denial and varying models of treatment seeking. Studies also drew attention to the potential role of belief systems, applied elements of attachment theory, and referred to models of maintaining integrity, ways of knowing, and the influence of gender. Conclusions The review highlights the need to examine an individual’s subjective experience of and response to health threats, and confirms the gap between knowledge and changed behaviour. 
Interventions face key challenges if they are to influence patient perceptions regarding seriousness of symptoms; varying processes of coping; and obstacles created by patient perceptions of their role and responsibilities. A theoretical approach to review of these papers provides additional insight into the assumptions underpinning interventions, and illuminates factors which may impact on their efficacy. The method thus offers a useful supplement to conventional systematic review methods. PMID:23388093

  8. Factors Influencing Hearing Aid Use in the Classroom: A Pilot Study

    ERIC Educational Resources Information Center

    Gustafson, Samantha J.; Davis, Hilary; Hornsby, Benjamin W. Y.; Bess, Fred H.

    2015-01-01

    Purpose: This pilot study examined factors influencing classroom hearing aid use in school-age children with hearing loss. Method: The research team visited classrooms of 38 children with mild-to-moderate hearing loss (Grades 1-7) on 2 typical school days, twice per day, to document hearing aid use. In addition, parents reported the number of…

  9. Source apportionment of PAH in Hamilton Harbour suspended sediments: comparison of two factor analysis methods.

    PubMed

    Sofowote, Uwayemi M; McCarry, Brian E; Marvin, Christopher H

    2008-08-15

    A total of 26 suspended sediment samples collected over a 5-year period in Hamilton Harbour, Ontario, Canada and surrounding creeks were analyzed for a suite of polycyclic aromatic hydrocarbons and sulfur heterocycles. Hamilton Harbour sediments contain relatively high levels of polycyclic aromatic compounds and heavy metals due to emissions from industrial and mobile sources. Two receptor modeling methods using factor analyses were compared to determine the profiles and relative contributions of pollution sources to the harbor; these methods are principal component analyses (PCA) with multiple linear regression analysis (MLR) and positive matrix factorization (PMF). Both methods identified four factors and gave excellent correlation coefficients between predicted and measured levels of 25 aromatic compounds; both methods predicted similar contributions from coal tar/coal combustion sources to the harbor (19 and 26%, respectively). One PCA factor was identified as contributions from vehicular emissions (61%); PMF was able to differentiate vehicular emissions into two factors, one attributed to gasoline emissions sources (28%) and the other to diesel emissions sources (24%). Overall, PMF afforded better source identification than PCA with MLR. This work constitutes one of the few examples of the application of PMF to the source apportionment of sediments; the addition of sulfur heterocycles to the analyte list greatly aided in the source identification process.
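    The PCA-with-MLR branch of the comparison can be sketched on synthetic data: extract factors from a samples-by-compounds matrix, retain those with eigenvalues above 1, and regress the total concentration on the factor scores to apportion contributions. All source profiles and concentrations below are hypothetical:

```python
import numpy as np

# Receptor-model sketch: samples x compounds matrix built from two
# hypothetical source profiles (e.g. "coal tar" vs. "vehicular").
rng = np.random.default_rng(0)
profiles = np.array([[5.0, 1.0, 0.2, 0.1],
                     [0.5, 0.3, 4.0, 3.0]])
contrib = rng.random((40, 2)) * [3.0, 5.0]        # true source strengths
C = contrib @ profiles + 0.05 * rng.normal(size=(40, 4))

# PCA step: eigen-decomposition of the correlation matrix
Z = (C - C.mean(0)) / C.std(0)
vals, vecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(vals)[::-1]
n_factors = int((vals > 1.0).sum())               # eigenvalue-greater-than-1 rule
scores = Z @ vecs[:, order[:n_factors]]

# MLR step: regress total concentration on the factor scores to
# apportion the observed total between the retained factors
total = C.sum(1)
A = np.column_stack([np.ones(len(total)), scores])
beta, *_ = np.linalg.lstsq(A, total, rcond=None)
```

    PMF differs by constraining both profiles and contributions to be non-negative and by weighting each observation with its uncertainty, which is why it can separate sources (gasoline vs. diesel) that PCA merges into one factor.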

  10. A Method of Reducing Random Drift in the Combined Signal of an Array of Inertial Sensors

    DTIC Science & Technology

    2015-09-30

    stability of the collective output, Bayard et al, US Patent 6,882,964. The prior art methods rely upon the use of Kalman filtering and averaging...including scale-factor errors, quantization effects, temperature effects, random drift, and additive noise. A comprehensive account of all of these

  11. Sufficient Forecasting Using Factor Models

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei

    2017-01-01

    We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality is first reduced via a high-dimensional (approximate) factor model implemented by principal component analysis. Using the extracted factors, we develop a novel forecasting method called the sufficient forecasting, which provides a set of sufficient predictive indices, inferred from high-dimensional predictors, to deliver additional predictive power. Projected principal component analysis is employed to enhance the accuracy of inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between the sufficient forecasting and the deep learning architecture is explicitly stated. The sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions, as well as for the estimates of the sufficient predictive indices. We further show that the natural method of running multiple regression of the target on estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that the sufficient forecasting improves upon linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537
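
A toy numerical sketch of the factor-forecasting pipeline, with hypothetical data; the paper's actual sufficient forecasting replaces the final least-squares step with sufficient-dimension-reduction projections of the factors:

```python
import numpy as np

rng = np.random.default_rng(1)
T, p, K = 200, 500, 3            # p > T: more predictors than observations
Lam = rng.normal(size=(p, K))    # loadings of a hypothetical factor model
F = rng.normal(size=(T, K))      # latent factors
X = F @ Lam.T + 0.1 * rng.normal(size=(T, p))
y = np.sin(F[:, 0]) + 0.5 * F[:, 1] + 0.05 * rng.normal(size=T)  # target

# Step 1: extract factors by PCA (via SVD of the predictor matrix)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
F_hat = U[:, :K] * s[:K]         # estimated factors, up to rotation/scale

# Step 2: forecast the target from the condensed factors (plain least
# squares here; the sufficient forecasting instead derives predictive
# indices from the factors via sufficient dimension reduction)
Z = np.column_stack([np.ones(T), F_hat])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
r2 = 1 - np.sum((y - Z @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
```

Even with 500 predictors and only 200 observations, the three estimated factors condense the cross-sectional information well enough that the linear forecast captures most of the target's variance; the residual gap is the nonlinear part that the sufficient-forecasting step is designed to recover.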

  12. Cross-Cultural Adaptation and Validation of the MPAM-R to Brazilian Portuguese and Proposal of a New Method to Calculate Factor Scores

    PubMed Central

    Albuquerque, Maicon R.; Lopes, Mariana C.; de Paula, Jonas J.; Faria, Larissa O.; Pereira, Eveline T.; da Costa, Varley T.

    2017-01-01

    In order to understand the reasons that lead individuals to practice physical activity, researchers developed the Motives for Physical Activity Measure-Revised (MPAM-R) scale. In 2010, a translation of the MPAM-R to Portuguese and its validation were performed; however, the psychometric measures were not acceptable. In addition, factor scores in some sports psychology scales are calculated as the mean of the scores on the factor's items. Nevertheless, it seems appropriate that items with higher factor loadings, extracted by factor analysis, should have greater weight in the factor score, while items with lower factor loadings should have less. The aims of the present study were to translate and validate a Brazilian Portuguese version of the MPAM-R and to investigate agreement between two methods used to calculate factor scores. Data were collected from 300 volunteers who had been involved in physical activity programs for at least 6 months. Confirmatory Factor Analysis of the 30 items indicated that the version did not fit the model. After excluding four items, the final model with 26 items showed acceptable fit measures by Exploratory Factor Analysis and conceptually supported the five factors of the original proposal. When the two methods of calculating factor scores were compared, only the “Enjoyment” and “Appearance” factors showed agreement between methods. Thus, the Portuguese version of the MPAM-R can be used in a Brazilian context, and the new proposal for calculating factor scores seems promising. PMID:28293203
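
The proposed scoring rule can be illustrated with hypothetical item responses and loadings: instead of the plain mean, each item is weighted by its factor loading, with the weights normalized to sum to one:

```python
import numpy as np

# Hypothetical factor with four items: Likert responses (1-7) and the
# loadings extracted for those items by factor analysis
responses = np.array([6.0, 5.0, 7.0, 4.0])
loadings = np.array([0.82, 0.74, 0.55, 0.40])

# Conventional score: unweighted mean of the factor's items
mean_score = responses.mean()

# Proposed score: items weighted by their factor loadings, so items that
# load more strongly on the factor contribute more to the score
weighted_score = np.sum(responses * loadings) / loadings.sum()
```

The two rules agree exactly only when all loadings are equal; otherwise the weighted score shifts toward the responses on the strongest-loading items, which is the behavior the study argues for.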

  13. The Method of Space-time Conservation Element and Solution Element: Development of a New Implicit Solver

    NASA Technical Reports Server (NTRS)

    Chang, S. C.; Wang, X. Y.; Chow, C. Y.; Himansu, A.

    1995-01-01

    The method of space-time conservation element and solution element is a nontraditional numerical method designed from a physicist's perspective, i.e., its development is based more on physics than numerics. It uses only the simplest approximation techniques and yet is capable of generating nearly perfect solutions for a 2-D shock reflection problem used by Helen Yee and others. In addition to providing an overall view of the new method, we introduce a new concept in the design of implicit schemes, and use it to construct a highly accurate solver for a convection-diffusion equation. It is shown that, in the inviscid case, this new scheme becomes explicit and its amplification factors are identical to those of the Leapfrog scheme. On the other hand, in the pure diffusion case, its principal amplification factor becomes the amplification factor of the Crank-Nicolson scheme.
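
The two amplification factors named in the abstract are standard von Neumann results and can be checked numerically; this sketch verifies those classical factors, not the CE/SE scheme itself:

```python
import numpy as np

theta = np.linspace(0.0, np.pi, 201)   # wavenumber times grid spacing

# Crank-Nicolson for pure diffusion u_t = nu*u_xx:
# g = (1 - mu(1-cos theta)) / (1 + mu(1-cos theta)), |g| <= 1 for any mu > 0
mu = 0.8                               # nu*dt/dx^2
g_cn = (1 - mu * (1 - np.cos(theta))) / (1 + mu * (1 - np.cos(theta)))
cn_max = np.max(np.abs(g_cn))

# Leapfrog for pure advection u_t + a*u_x = 0:
# g = -i*lam*sin(theta) +/- sqrt(1 - lam^2 sin^2(theta)), |g| = 1 for CFL <= 1
lam = 0.5                              # CFL number a*dt/dx
g_lf = -1j * lam * np.sin(theta) + np.sqrt(1 - (lam * np.sin(theta)) ** 2 + 0j)
lf_mod = np.abs(g_lf)
```

The check shows why the abstract's identification matters: in the inviscid limit the scheme inherits the neutrally stable (|g| = 1) Leapfrog behavior, while in the pure diffusion limit it inherits the unconditionally stable (|g| <= 1) Crank-Nicolson behavior.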

  14. Factors Leading to Membership in Professional Associations and Levels of Professional Commitment as Determined by Active and Inactive Members of Delta Pi Epsilon

    ERIC Educational Resources Information Center

    McCroskey, Stacey; O'Neil, Sharon Lund

    2010-01-01

    Purpose: This study was undertaken with grant funds provided by the Delta Pi Epsilon (DPE) Research Foundation, Inc., to assess the factors of professional commitment related to membership. Additionally, the respondents' perceptions about DPE affiliating with the National Business Education Association (NBEA) were investigated. Method: Of the…

  15. Should particle size analysis data be combined with EPA approved sampling method data in the development of AP-42 emission factors?

    USDA-ARS?s Scientific Manuscript database

    A cotton ginning industry-supported project was initiated in 2008 and completed in 2013 to collect additional data for U.S. Environmental Protection Agency’s (EPA) Compilation of Air Pollution Emission Factors (AP-42) for PM10 and PM2.5. Stack emissions were collected using particle size distributio...

  16. A novel statistical approach for identification of the master regulator transcription factor.

    PubMed

    Sikdar, Sinjini; Datta, Susmita

    2017-02-02

    Transcription factors are known to play key roles in carcinogenesis and are therefore gaining popularity as potential therapeutic targets in drug development. A 'master regulator' transcription factor often appears to control most of the regulatory activities of the other transcription factors and the associated genes. This 'master regulator' transcription factor sits at the top of the hierarchy of transcriptomic regulation. Therefore, it is important to identify and target the master regulator transcription factor for proper understanding of the associated disease process and for identifying the best therapeutic option. We present a novel two-step computational approach for identification of the master regulator transcription factor in a genome. In the first step of our method we test whether any master regulator transcription factor exists in the system, by evaluating the concordance of two ranked lists of transcription factors using a statistical measure. If the concordance measure is statistically significant, we conclude that there is a master regulator. In the second step, our method identifies the master regulator transcription factor, if one exists. In simulations, our method performs reasonably well in validating the existence of a master regulator when the number of subjects in each treatment group is reasonably large. In application to two real datasets, our method confirms the existence of master regulators and identifies biologically meaningful ones. R code for implementing our method on a sample test dataset can be found at http://www.somnathdatta.org/software . We have developed a screening method for identifying the 'master regulator' transcription factor using only gene expression data. Understanding the regulatory structure and finding the master regulator help narrow the search space for identifying biomarkers for complex diseases such as cancer. In addition to identifying the master regulator, our method provides an overview of the regulatory structure of the transcription factors that control the global gene expression profiles and, consequently, cell functioning.
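
The abstract does not name its concordance statistic, so as an illustrative stand-in the sketch below computes Kendall's tau between two hypothetical ranked lists of transcription factors; a significantly high concordance would trigger the second step, nominating the consensus top-ranked TF:

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Concordance between two rankings of the same items (no ties)."""
    n = len(rank_a)
    conc = disc = 0
    for i, j in combinations(range(n), 2):
        s = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    return (conc - disc) / (n * (n - 1) / 2)

# Two hypothetical rankings of five transcription factors, e.g. one by
# differential expression and one by regulatory-network connectivity
tfs = ["TF_A", "TF_B", "TF_C", "TF_D", "TF_E"]
rank_expr = [1, 2, 3, 4, 5]
rank_net = [1, 3, 2, 4, 5]     # largely concordant with rank_expr

# Step 1: test concordance of the two ranked lists
tau = kendall_tau(rank_expr, rank_net)

# Step 2 (if concordance is significant): the TF ranked first in both
# lists is the candidate master regulator
candidate = tfs[rank_expr.index(1)]
```

Here `tau = 0.8` (only one discordant pair out of ten), which in the real method would be compared against its null distribution before declaring that a master regulator exists.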

  17. Storage and computationally efficient permutations of factorized covariance and square-root information matrices

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method to permute vector-stored upper-triangular diagonal factorized covariance (UD) and vector stored upper-triangular square-root information filter (SRIF) arrays is presented. The method involves cyclical permutation of the rows and columns of the arrays and retriangularization with appropriate square-root-free fast Givens rotations or elementary slow Givens reflections. A minimal amount of computation is performed and only one scratch vector of size N is required, where N is the column dimension of the arrays. To make the method efficient for large SRIF arrays on a virtual memory machine, three additional scratch vectors each of size N are used to avoid expensive paging faults. The method discussed is compared with the methods and routines of Bierman's Estimation Subroutine Library (ESL).
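
A dense-matrix sketch of the core identity (the paper works in-place on vector-stored arrays with fast Givens rotations; here a full QR stands in for the retriangularization): cyclically permuting columns of a triangular square-root factor and re-triangularizing with an orthogonal transformation leaves the underlying information matrix, up to the same permutation, unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
R = np.linalg.qr(rng.normal(size=(N, N)))[1]   # upper-triangular SRIF-style factor
info = R.T @ R                                 # information matrix R^T R

# Cyclically permute a block of columns (move column 3 ahead of columns 1-2)
perm = [0, 3, 1, 2, 4, 5]
R_perm = R[:, perm]                            # triangularity is destroyed

# Retriangularize with an orthogonal transformation (dense QR here; the
# paper uses square-root-free fast Givens rotations to do this in place)
R_new = np.linalg.qr(R_perm)[1]

# R_new^T R_new equals the information matrix with rows/columns permuted,
# since Q^T Q = I drops out of (Q R_new)^T (Q R_new)
err = np.max(np.abs(R_new.T @ R_new - info[np.ix_(perm, perm)]))
```

The orthogonal factor is never stored, which is what makes the vector-stored, scratch-light implementation in the paper possible.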

  18. Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K

    2011-01-01

    To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069

  19. Factor structure of the Halstead-Reitan Neuropsychological Battery for children: a brief report supplement.

    PubMed

    Ross, Sylvia An; Allen, Daniel N; Goldstein, Gerald

    2014-01-01

    The Halstead-Reitan Neuropsychological Battery (HRNB) is the first factor-analyzed neuropsychological battery and consists of three batteries for young children, older children, and adults. Halstead's original factor analysis extracted four factors from the adult version of the battery, which were the basis for his theory of biological intelligence. These factors were called Central Integrative Field, Abstraction, Power, and Directional. Following this original analysis, Reitan's additions to the battery, and the development of the child versions of the test, factor-analytic research continued. An introduction and a review of the adult literature appear in Ross, Allen, and Goldstein (in press). In this supplemental article, factor-analytic studies of the HRNB with children are reviewed. It is concluded that factor analysis of the HRNB or Reitan-Indiana Neuropsychological Battery with children does not replicate the extensiveness of the adult literature, although there is some evidence that when the traditional battery for older children is used, the factor structure is similar to what is found in adult studies. Reitan's changes to the battery appear to have added factors, including language and sensory-perceptual factors. When other tests and scoring methods are used in addition to the core battery, differing solutions are produced.

  20. Evaluation of immunoturbidimetric rheumatoid factor method from Diagam on Abbott c8000 analyzer: comparison with immunonephelemetric method.

    PubMed

    Dupuy, Anne Marie; Hurstel, Rémy; Bargnoux, Anne Sophie; Badiou, Stéphanie; Cristol, Jean Paul

    2014-01-01

    Rheumatoid factor (RF) consists of autoantibodies, and because of its heterogeneity its determination is not easy. Currently, nephelometry and ELISA are considered reference methods. Owing to consolidation, many laboratories have fully automated turbidimetric analyzers, while dedicated nephelometric systems are not always available. Moreover, although nephelometry is more accurate, it is time-consuming and expensive, and requires a specific device, resulting in lower efficiency. Turbidimetry could therefore be an attractive alternative. The turbidimetric RF test from Diagam meets the requirements of accuracy and precision for optimal clinical use, with an acceptable measuring range, and could be an alternative for the determination of RF without the cost of a dedicated instrument, making consolidation and blood savings possible.

  1. Identification of M-CSF agonists and antagonists

    DOEpatents

    Pandit, Jayvardhan [Mystic, CT; Jancarik, Jarmila [Walnut Creek, CA; Kim, Sung-Hou [Moraga, CA; Koths, Kirston [El Cerrito, CA; Halenbeck, Robert [San Rafael, CA; Fear, Anna Lisa [Oakland, CA; Taylor, Eric [Oakland, CA; Yamamoto, Ralph [Martinez, CA; Bohm, Andrew [Armonk, NY

    2000-02-15

    The present invention is directed to methods for crystallizing macrophage colony stimulating factor. The present invention is also directed to methods for designing and producing M-CSF agonists and antagonists using information derived from the crystallographic structure of M-CSF. The invention is also directed to methods for screening M-CSF agonists and antagonists. In addition, the present invention is directed to an isolated, purified, soluble and functional M-CSF receptor.

  2. Improved scaling of temperature-accelerated dynamics using localization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shim, Yunsic; Amar, Jacques G.

    While temperature-accelerated dynamics (TAD) is a powerful method for carrying out non-equilibrium simulations of systems over extended time scales, the computational cost of serial TAD increases approximately as N^3 where N is the number of atoms. In addition, although a parallel TAD method based on domain decomposition [Y. Shim et al., Phys. Rev. B 76, 205439 (2007)] has been shown to provide significantly improved scaling, the dynamics in such an approach is only approximate while the size of activated events is limited by the spatial decomposition size. Accordingly, it is of interest to develop methods to improve the scaling of serial TAD. As a first step in understanding the factors which determine the scaling behavior, we first present results for the overall scaling of serial TAD and its components, which were obtained from simulations of Ag/Ag(100) growth and Ag/Ag(100) annealing, and compare with theoretical predictions. We then discuss two methods based on localization which may be used to address two of the primary “bottlenecks” to the scaling of serial TAD with system size. By implementing both of these methods, we find that for intermediate system-sizes, the scaling is improved by almost a factor of N^(1/2). Some additional possible methods to improve the scaling of TAD are also discussed.

  3. Improved scaling of temperature-accelerated dynamics using localization

    NASA Astrophysics Data System (ADS)

    Shim, Yunsic; Amar, Jacques G.

    2016-07-01

    While temperature-accelerated dynamics (TAD) is a powerful method for carrying out non-equilibrium simulations of systems over extended time scales, the computational cost of serial TAD increases approximately as N^3 where N is the number of atoms. In addition, although a parallel TAD method based on domain decomposition [Y. Shim et al., Phys. Rev. B 76, 205439 (2007)] has been shown to provide significantly improved scaling, the dynamics in such an approach is only approximate while the size of activated events is limited by the spatial decomposition size. Accordingly, it is of interest to develop methods to improve the scaling of serial TAD. As a first step in understanding the factors which determine the scaling behavior, we first present results for the overall scaling of serial TAD and its components, which were obtained from simulations of Ag/Ag(100) growth and Ag/Ag(100) annealing, and compare with theoretical predictions. We then discuss two methods based on localization which may be used to address two of the primary "bottlenecks" to the scaling of serial TAD with system size. By implementing both of these methods, we find that for intermediate system-sizes, the scaling is improved by almost a factor of N^(1/2). Some additional possible methods to improve the scaling of TAD are also discussed.

  4. Random phase detection in multidimensional NMR.

    PubMed

    Maciejewski, Mark W; Fenwick, Matthew; Schuyler, Adam D; Stern, Alan S; Gorbatyuk, Vitaliy; Hoch, Jeffrey C

    2011-10-04

    Despite advances in resolution accompanying the development of high-field superconducting magnets, biomolecular applications of NMR require multiple dimensions in order to resolve individual resonances, and the achievable resolution is typically limited by practical constraints on measuring time. In addition to the need for measuring long evolution times to obtain high resolution, the need to distinguish the sign of the frequency constrains the ability to shorten measuring times. Sign discrimination is typically accomplished by sampling the signal with two different receiver phases or by selecting a reference frequency outside the range of frequencies spanned by the signal and then sampling at a higher rate. In the parametrically sampled (indirect) time dimensions of multidimensional NMR experiments, either method imposes an additional factor of 2 sampling burden for each dimension. We demonstrate that by using a single detector phase at each time sample point, but randomly altering the phase for different points, the sign ambiguity that attends fixed single-phase detection is resolved. Random phase detection enables a reduction in experiment time by a factor of 2 for each indirect dimension, amounting to a factor of 8 for a four-dimensional experiment, albeit at the cost of introducing sampling artifacts. Alternatively, for fixed measuring time, random phase detection can be used to double resolution in each indirect dimension. Random phase detection is complementary to nonuniform sampling methods, and their combination offers the potential for additional benefits. In addition to applications in biomolecular NMR, random phase detection could be useful in magnetic resonance imaging and other signal processing contexts.
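
The sign-ambiguity argument can be demonstrated numerically (a toy sketch with hypothetical sampling parameters): with a fixed detector phase, a tone at +ω and one at -ω fit a single-phase record equally well, but with per-sample random phases only the true sign of the frequency fits:

```python
import numpy as np

rng = np.random.default_rng(4)
n, w0 = 64, 2 * np.pi * 0.13    # number of samples and true frequency
t = np.arange(n)

def residual(w, phases, y):
    """Least-squares misfit of a single tone at frequency w, given the
    per-sample detector phases used to record y."""
    basis = np.column_stack([np.cos(w * t - phases), np.sin(w * t - phases)])
    coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return np.linalg.norm(y - basis @ coef)

# Fixed single-phase detection: +w0 and -w0 are indistinguishable,
# since cos(-w0*t) = cos(w0*t)
phi_fixed = np.zeros(n)
y_fixed = np.cos(w0 * t - phi_fixed)
amb_pos = residual(+w0, phi_fixed, y_fixed)
amb_neg = residual(-w0, phi_fixed, y_fixed)

# Random phase detection: the phase alternates randomly between the two
# detector settings, and only the true sign of the frequency fits
phi_rand = rng.choice([0.0, np.pi / 2], size=n)
y_rand = np.cos(w0 * t - phi_rand)
res_pos = residual(+w0, phi_rand, y_rand)
res_neg = residual(-w0, phi_rand, y_rand)
```

Both `amb_pos` and `amb_neg` are essentially zero (the ambiguity), while `res_pos` is zero and `res_neg` is large: the random phases break the cosine's even symmetry, which is why a single detector phase per point suffices.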

  5. A comparative study of smart spectrophotometric methods for simultaneous determination of sitagliptin phosphate and metformin hydrochloride in their binary mixture.

    PubMed

    Lotfy, Hayam M; Mohamed, Dalia; Mowaka, Shereen

    2015-01-01

    Simple, specific, accurate and precise spectrophotometric methods were developed and validated for the simultaneous determination of the oral antidiabetic drugs sitagliptin phosphate (STG) and metformin hydrochloride (MET) in combined pharmaceutical formulations. Three methods manipulated ratio spectra, namely ratio difference (RD), ratio subtraction (RS) and a novel induced amplitude modulation (IAM) approach. The first two methods were used for the determination of STG, while MET was directly determined by measuring its absorbance at λmax 232 nm; IAM, however, was used for the simultaneous determination of both drugs. Moreover, another three methods were developed based on derivative spectroscopy followed by mathematical manipulation steps, namely amplitude factor (P-factor), amplitude subtraction (AS) and modified amplitude subtraction (MAS). In addition, the novel sample enrichment technique named spectrum addition was adopted in this work. The proposed spectrophotometric methods did not require any preliminary separation step. The accuracy, precision and linearity ranges of the proposed methods were determined. The selectivity of the developed methods was investigated by analyzing laboratory-prepared mixtures of the drugs and their combined pharmaceutical formulations. Standard deviation values were less than 1.5 in the assay of raw materials and tablets. The obtained results were statistically compared with those of a reported spectrophotometric method. The statistical comparison showed no significant difference between the proposed methods and the reported one regarding either accuracy or precision. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Syndemic Theory and HIV-Related Risk Among Young Transgender Women: The Role of Multiple, Co-Occurring Health Problems and Social Marginalization

    PubMed Central

    Brennan, Julia; Kuhns, Lisa M.; Johnson, Amy K.; Belzer, Marvin; Wilson, Erin C.

    2012-01-01

    Objectives. We assessed whether multiple psychosocial factors are additive in their relationship to sexual risk behavior and self-reported HIV status (i.e., can be characterized as a syndemic) among young transgender women and the relationship of indicators of social marginalization to psychosocial factors. Methods. Participants (n = 151) were aged 15 to 24 years and lived in Chicago or Los Angeles. We collected data on psychosocial factors (low self-esteem, polysubstance use, victimization related to transgender identity, and intimate partner violence) and social marginalization indicators (history of commercial sex work, homelessness, and incarceration) through an interviewer-administered survey. Results. Syndemic factors were positively and additively related to sexual risk behavior and self-reported HIV infection. In addition, our syndemic index was significantly related to 2 indicators of social marginalization: a history of sex work and previous incarceration. Conclusions. These findings provide evidence for a syndemic of co-occurring psychosocial and health problems in young transgender women, taking place in a context of social marginalization. PMID:22873480
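
The additive (syndemic) model described above amounts to counting co-occurring problems and letting each one add a fixed increment to risk; a minimal sketch with hypothetical factor data and effect sizes (only the sample size matches the study):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 151  # the study's sample size; everything else here is hypothetical

# Four binary psychosocial factors (1 = present): low self-esteem,
# polysubstance use, victimization, and intimate partner violence
factors = rng.integers(0, 2, size=(n, 4))

# Syndemic index: the count of co-occurring problems, 0..4
syndemic_index = factors.sum(axis=1)

# Additive model: each additional factor adds a fixed increment to the
# probability of the risk outcome (base and increment are hypothetical)
base_risk, per_factor = 0.10, 0.12
risk = base_risk + per_factor * syndemic_index  # 0.10 at index 0, 0.58 at 4
```

In the study's analysis the analogous quantity is the fitted probability of sexual risk behavior or HIV infection at each level of the syndemic index, with social marginalization indicators examined as correlates of the index itself.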

  7. Comparison of point-of-care methods for preparation of platelet concentrate (platelet-rich plasma).

    PubMed

    Weibrich, Gernot; Kleis, Wilfried K G; Streckbein, Philipp; Moergel, Maximilian; Hitzler, Walter E; Hafner, Gerd

    2012-01-01

    This study analyzed the concentrations of platelets and growth factors in platelet-rich plasma (PRP), which are likely to depend on the method used for its production. The cellular composition and growth factor content of platelet concentrates (platelet-rich plasma) produced by six different procedures were quantitatively analyzed and compared. Platelet and leukocyte counts were determined on an automatic cell counter, and analysis of growth factors was performed using enzyme-linked immunosorbent assay. The principal differences between the analyzed PRP production methods (blood bank method of intermittent flow centrifuge system/platelet apheresis and by the five point-of-care methods) and the resulting platelet concentrates were evaluated with regard to resulting platelet, leukocyte, and growth factor levels. The platelet counts in both whole blood and PRP were generally higher in women than in men; no differences were observed with regard to age. Statistical analysis of platelet-derived growth factor AB (PDGF-AB) and transforming growth factor β1 (TGF-β1) showed no differences with regard to age or gender. Platelet counts and TGF-β1 concentration correlated closely, as did platelet counts and PDGF-AB levels. There were only rare correlations between leukocyte counts and PDGF-AB levels, but comparison of leukocyte counts and PDGF-AB levels demonstrated certain parallel tendencies. TGF-β1 levels derive in substantial part from platelets and emphasize the role of leukocytes, in addition to that of platelets, as a source of growth factors in PRP. All methods of producing PRP showed high variability in platelet counts and growth factor levels. The highest growth factor levels were found in the PRP prepared using the Platelet Concentrate Collection System manufactured by Biomet 3i.

  8. Optical factors determined by the T-matrix method in turbidity measurement of absolute coagulation rate constants.

    PubMed

    Xu, Shenghua; Liu, Jie; Sun, Zhiwei

    2006-12-01

    Turbidity measurement for the absolute coagulation rate constants of suspensions has been extensively adopted because of its simplicity and easy implementation. A key factor in deriving the rate constant from experimental data is how to theoretically evaluate the so-called optical factor involved in calculating the extinction cross section of doublets formed during aggregation. In a previous paper, we have shown that compared with other theoretical approaches, the T-matrix method provides a robust solution to this problem and is effective in extending the applicability range of the turbidity methodology, as well as increasing measurement accuracy. This paper will provide a more comprehensive discussion of the physical insight for using the T-matrix method in turbidity measurement and associated technical details. In particular, the importance of ensuring the correct value for the refractive indices for colloidal particles and the surrounding medium used in the calculation is addressed, because the indices generally vary with the wavelength of the incident light. The comparison of calculated results with experiments shows that the T-matrix method can correctly calculate optical factors even for large particles, whereas other existing theories cannot. In addition, the data of the optical factor calculated by the T-matrix method for a range of particle radii and incident light wavelengths are listed.

  9. A simple and accurate method for calculation of the structure factor of interacting charged spheres.

    PubMed

    Wu, Chu; Chan, Derek Y C; Tabor, Rico F

    2014-07-15

    Calculation of the structure factor of a system of interacting charged spheres based on the Ginoza solution of the Ornstein-Zernike equation has been developed and implemented on a stand-alone spreadsheet. This facilitates direct interactive numerical and graphical comparisons between experimental structure factors with the pioneering theoretical model of Hayter-Penfold that uses the Hansen-Hayter renormalisation correction. The method is used to fit example experimental structure factors obtained from the small-angle neutron scattering of a well-characterised charged micelle system, demonstrating that this implementation, available in the supplementary information, gives identical results to the Hayter-Penfold-Hansen approach for the structure factor, S(q) and provides direct access to the pair correlation function, g(r). Additionally, the intermediate calculations and outputs can be readily accessed and modified within the familiar spreadsheet environment, along with information on the normalisation procedure. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Dose estimates for the solid waste performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rittman, P.D.

    1994-08-30

    The Solid Waste Performance Assessment calculations by PNL in 1990 were redone to incorporate changes in methods and parameters since then. The ten scenarios found in their report were reduced to three: the Post-Drilling Resident, the Post-Excavation Resident, and an All Pathways Irrigator. In addition, estimates of population dose to people along the Columbia River are also included. The attached report describes the methods and parameters used in the calculations, and derives dose factors for each scenario. In addition, waste concentrations, ground water concentrations, and river water concentrations needed to reach the performance objectives of 100 mrem/yr and 500 person-rem/yr are computed. Internal dose factors from DOE-0071 were applied when computing internal dose. External dose rate factors came from the GENII Version 1.485 software package. Dose calculations were carried out on a spreadsheet. The calculations are described in detail in the report for 63 nuclides, including 5 not presently in the GENII libraries. The spreadsheet calculations were checked by comparison with GENII, as described in Appendix D.
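
The spreadsheet-style dose calculation reduces to summing intake times a per-nuclide dose factor and comparing against the performance objective; a sketch with hypothetical nuclides and values (not taken from the report or from DOE-0071):

```python
# Hypothetical annual intakes (pCi/yr) and internal dose factors (mrem/pCi)
intake_pci = {"Cs-137": 1200.0, "Sr-90": 300.0, "I-129": 50.0}
dose_factor = {"Cs-137": 5.0e-5, "Sr-90": 1.4e-4, "I-129": 2.8e-4}

# Annual dose: sum over nuclides of intake times dose factor
annual_dose = sum(intake_pci[n] * dose_factor[n] for n in intake_pci)

# Compare against the 100 mrem/yr individual performance objective
under_limit = annual_dose <= 100.0
```

Inverting the same relation per nuclide gives the waste, groundwater, or river-water concentrations that would just reach the performance objective, which is how the report's limiting concentrations are derived.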

  11. Literature review of the benefits and obstacle of horizontal directional drilling

    NASA Astrophysics Data System (ADS)

    Norizam, M. S. Mohd; Nuzul Azam, H.; Helmi Zulhaidi, S.; Aziz, A. Abdul; Nadzrol Fadzilah, A.

    2017-11-01

    In this new era the construction industry must not only complete projects within budget, on time, and at acceptable quality and safety; stakeholders, especially the local authorities and the public, also recognise the need for sustainable construction methods so that younger generations inherit a safer world in which to live and raise their children. Horizontal Directional Drilling (HDD) is the most widely recognised trenchless method for utility installation and a preferred construction method in this age. Among its advantages, the HDD method offers less disturbance to traffic, the public, business activities and neighbourhoods; lower restoration cost; less noise and dust; and minimal import/export of construction materials. In addition, the HDD method can drill through congested utility areas with minimal cutting and in a shorter time. This paper aims to appraise the benefits and obstacles of the HDD method in the construction industry. It is an endeavour to answer the local authorities' call for an alternative method that causes less damage to roads and road furniture and draws fewer public complaints than the conventional open-cut method. In addition, the HDD method appears to be in line with sustainable development requirements, e.g. reduce, reuse, recycle. Hence, it is important to determine the benefit and obstacle factors of HDD implementation. The factors are based on a literature review conducted by the authors on the subject matter, gathered from previous studies, journals, textbooks, guidelines, magazine articles, newspaper cuttings, etc.

  12. Enhancing non-refractory aerosol apportionment from an urban industrial site through receptor modeling of complete high time-resolution aerosol mass spectra

    NASA Astrophysics Data System (ADS)

    McGuire, M. L.; Chang, R. Y.-W.; Slowik, J. G.; Jeong, C.-H.; Healy, R. M.; Lu, G.; Mihele, C.; Abbatt, J. P. D.; Brook, J. R.; Evans, G. J.

    2014-08-01

    Receptor modeling was performed on quadrupole unit mass resolution aerosol mass spectrometer (Q-AMS) sub-micron particulate matter (PM) chemical speciation measurements from Windsor, Ontario, an industrial city situated across the Detroit River from Detroit, Michigan. Aerosol and trace gas measurements were collected on board Environment Canada's Canadian Regional and Urban Investigation System for Environmental Research (CRUISER) mobile laboratory. Positive matrix factorization (PMF) was performed on the AMS full particle-phase mass spectrum (PMFFull MS) encompassing both organic and inorganic components. This approach compared to the more common method of analyzing only the organic mass spectra (PMFOrg MS). PMF of the full mass spectrum revealed that variability in the non-refractory sub-micron aerosol concentration and composition was best explained by six factors: an amine-containing factor (Amine); an ammonium sulfate- and oxygenated organic aerosol-containing factor (Sulfate-OA); an ammonium nitrate- and oxygenated organic aerosol-containing factor (Nitrate-OA); an ammonium chloride-containing factor (Chloride); a hydrocarbon-like organic aerosol (HOA) factor; and a moderately oxygenated organic aerosol factor (OOA). PMF of the organic mass spectrum revealed three factors of similar composition to some of those revealed through PMFFull MS: Amine, HOA and OOA. Including both the inorganic and organic mass proved to be a beneficial approach to analyzing the unit mass resolution AMS data for several reasons. First, it provided a method for potentially calculating more accurate sub-micron PM mass concentrations, particularly when unusual factors are present, in this case the Amine factor. As this method does not rely on a priori knowledge of chemical species, it circumvents the need for any adjustments to the traditional AMS species fragmentation patterns to account for atypical species, and can thus lead to more complete factor profiles. 
It is expected that this method would be even more useful for HR-ToF-AMS data, due to the ability to understand better the chemical nature of atypical factors from high-resolution mass spectra. Second, utilizing PMF to extract factors containing inorganic species allowed for the determination of the extent of neutralization, which could have implications for aerosol parameterization. Third, subtler differences in organic aerosol components were resolved through the incorporation of inorganic mass into the PMF matrix. The additional temporal features provided by the inorganic aerosol components allowed for the resolution of more types of oxygenated organic aerosol than could be reliably resolved from PMF of organics alone. Comparison of findings from the PMFFull MS and PMFOrg MS methods showed that for the Windsor airshed, the PMFFull MS method enabled additional conclusions to be drawn in terms of aerosol sources and chemical processes. While performing PMFOrg MS can provide important distinctions between types of organic aerosol, it is shown that including inorganic species in the PMF analysis can permit further apportionment of organics for unit mass resolution AMS mass spectra.
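A PMF solution of the kind described above is, at its core, a non-negative low-rank factorization X ≈ GF of the time-by-m/z data matrix. As a rough, unweighted sketch (true PMF weights each residual by its measurement uncertainty, e.g. in the PMF2/ME-2 solvers), scikit-learn's NMF can stand in; the matrix sizes and factor count below are hypothetical:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Synthetic "full mass spectrum" data: rows = time points, cols = m/z channels.
# Built from 3 non-negative factors (hypothetical stand-ins for e.g. HOA, OOA, Sulfate-OA).
G_true = rng.uniform(0, 1, size=(200, 3))   # factor time series
F_true = rng.uniform(0, 1, size=(3, 50))    # factor mass-spectral profiles
X = G_true @ F_true + rng.uniform(0, 0.01, size=(200, 50))

# Unweighted non-negative factorization X ~ G F as a proxy for PMF
model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)   # factor contributions over time
F = model.components_        # factor profiles over m/z
print(G.shape, F.shape)      # (200, 3) (3, 50)
```

Choosing the number of factors (six for the full spectrum, three for organics-only in this study) is done by inspecting residuals and factor interpretability across candidate solutions.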

  13. Enhancing non-refractory aerosol apportionment from an urban industrial site through receptor modelling of complete high time-resolution aerosol mass spectra

    NASA Astrophysics Data System (ADS)

    McGuire, M. L.; Chang, R. Y.-W.; Slowik, J. G.; Jeong, C.-H.; Healy, R. M.; Lu, G.; Mihele, C.; Abbatt, J. P. D.; Brook, J. R.; Evans, G. J.

    2014-02-01

    Receptor modelling was performed on quadrupole unit mass resolution aerosol mass spectrometer (Q-AMS) sub-micron particulate matter (PM) chemical speciation measurements from Windsor, Ontario, an industrial city situated across the Detroit River from Detroit, Michigan. Aerosol and trace gas measurements were collected on board Environment Canada's CRUISER mobile laboratory. Positive matrix factorization (PMF) was performed on the AMS full particle-phase mass spectrum (PMFFull MS) encompassing both organic and inorganic components. This approach was compared to the more common method of analysing only the organic mass spectra (PMFOrg MS). PMF of the full mass spectrum revealed that variability in the non-refractory sub-micron aerosol concentration and composition was best explained by six factors: an amine-containing factor (Amine); an ammonium sulphate- and oxygenated organic aerosol-containing factor (Sulphate-OA); an ammonium nitrate- and oxygenated organic aerosol-containing factor (Nitrate-OA); an ammonium chloride-containing factor (Chloride); a hydrocarbon-like organic aerosol (HOA) factor; and a moderately oxygenated organic aerosol factor (OOA). PMF of the organic mass spectrum revealed three factors of similar composition to some of those revealed through PMFFull MS: Amine, HOA and OOA. Including both the inorganic and organic mass proved to be a beneficial approach to analysing the unit mass resolution AMS data for several reasons. First, it provided a method for potentially calculating more accurate sub-micron PM mass concentrations, particularly when unusual factors are present, in this case, an Amine factor. As this method does not rely on a priori knowledge of chemical species, it circumvents the need for any adjustments to the traditional AMS species fragmentation patterns to account for atypical species, and can thus lead to more complete factor profiles.
It is expected that this method would be even more useful for HR-ToF-AMS data, due to the ability to better understand the chemical nature of atypical factors from high resolution mass spectra. Second, utilizing PMF to extract factors containing inorganic species allowed for the determination of extent of neutralization, which could have implications for aerosol parameterization. Third, subtler differences in organic aerosol components were resolved through the incorporation of inorganic mass into the PMF matrix. The additional temporal features provided by the inorganic aerosol components allowed for the resolution of more types of oxygenated organic aerosol than could be reliably resolved from PMF of organics alone. Comparison of findings from the PMFFull MS and PMFOrg MS methods showed that for the Windsor airshed, the PMFFull MS method enabled additional conclusions to be drawn in terms of aerosol sources and chemical processes. While performing PMFOrg MS can provide important distinctions between types of organic aerosol, it is shown that including inorganic species in the PMF analysis can permit further apportionment of organics for unit mass resolution AMS mass spectra.

  14. Measurement of Blood Coagulation Factor Synthesis in Cultures of Human Hepatocytes.

    PubMed

    Heinz, Stefan; Braspenning, Joris

    2015-01-01

    An important function of the liver is the synthesis and secretion of blood coagulation factors. Within the liver, hepatocytes are involved in the synthesis of most blood coagulation factors, such as fibrinogen, prothrombin, factors V, VII, IX, X, XI, and XII, as well as proteins C and S, and antithrombin, whereas liver sinusoidal endothelial cells produce factor VIII and von Willebrand factor. Here, we describe methods for the detection and quantification of most blood coagulation factors in hepatocytes in vitro. Hepatocyte cultures indeed provide a valuable tool to study blood coagulation factors. In addition, the generation and expansion of hepatocytes or hepatocyte-like cells may be used in future for cell-based therapies of liver diseases, including blood coagulation factor deficiencies.

  15. Evaluation of gene expression classification studies: factors associated with classification performance.

    PubMed

    Novianti, Putri W; Roes, Kit C B; Eijkemans, Marinus J C

    2014-01-01

    Classification methods used in microarray studies for gene expression are diverse in the way they deal with the underlying complexity of the data, as well as in the technique used to build the classification model. The MAQC II study on cancer classification problems has found that performance was affected by factors such as the classification algorithm, cross validation method, number of genes, and gene selection method. In this paper, we study the hypothesis that the disease under study significantly determines which method is optimal, and that additionally sample size, class imbalance, type of medical question (diagnostic, prognostic or treatment response), and microarray platform are potentially influential. A systematic literature review was used to extract the information from 48 published articles on non-cancer microarray classification studies. The impact of the various factors on the reported classification accuracy was analyzed through random-intercept logistic regression. The type of medical question and method of cross validation dominated the explained variation in accuracy among studies, followed by disease category and microarray platform. In total, 42% of the between study variation was explained by all the study specific and problem specific factors that we studied together.

  16. Observational methods for solar origin diagnostics of energetic protons

    NASA Astrophysics Data System (ADS)

    Miteva, Rositsa

    2017-12-01

    The aim of the present report is to outline the observational methods used to determine the solar origin - in terms of flares and coronal mass ejections (CMEs) - of the in situ observed solar energetic protons. Several widely used guidelines are given and different sources of uncertainties are summarized and discussed. In the present study, a new quality factor is proposed as a certainty check on the so-identified flare-CME pairs. In addition, the correlations between the proton peak intensity and the properties of their solar origin are evaluated as a function of the quality factor.

  17. Using the Reliability Theory for Assessing the Decision Confidence Probability for Comparative Life Cycle Assessments.

    PubMed

    Wei, Wei; Larrey-Lassalle, Pyrène; Faure, Thierry; Dumoulin, Nicolas; Roux, Philippe; Mathias, Jean-Denis

    2016-03-01

    The comparative decision-making process is widely used to identify which option (system, product, service, etc.) has the smaller environmental footprint and to provide recommendations that help stakeholders take future decisions. However, uncertainty complicates the comparison and the decision making. Probability-based decision support in LCA is a way to help stakeholders in their decision-making process. It calculates the decision confidence probability, which expresses the probability of one option having a smaller environmental impact than another. Here we apply reliability theory to approximate the decision confidence probability. We compare the traditional Monte Carlo method with a reliability method called the FORM method. The Monte Carlo method needs high computational time to calculate the decision confidence probability. The FORM method enables us to approximate the decision confidence probability with fewer simulations than the Monte Carlo method by approximating the response surface. Moreover, the FORM method calculates the associated importance factors that correspond to a sensitivity analysis in relation to the probability. The importance factors allow stakeholders to determine which factors influence their decision. Our results clearly show that the reliability method provides additional useful information to stakeholders while also reducing the computational time.
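The decision confidence probability itself is simple to estimate by brute-force Monte Carlo; FORM instead approximates the same probability from a linearization of the limit state g = impact_B − impact_A at the most probable point, needing far fewer model evaluations. A minimal Monte Carlo sketch with hypothetical lognormal impact distributions:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical impact distributions of two options (lognormal is common in LCA)
impact_a = rng.lognormal(mean=np.log(10.0), sigma=0.2, size=n)
impact_b = rng.lognormal(mean=np.log(12.0), sigma=0.2, size=n)

# Decision confidence probability: P(option A has the smaller impact)
p_a_better = np.mean(impact_a < impact_b)
print(f"P(A < B) = {p_a_better:.3f}")   # close to the analytic value of about 0.74
```

The sample size n drives both the accuracy and the cost; this is the computational burden FORM is designed to avoid.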

  18. Boosting structured additive quantile regression for longitudinal childhood obesity data.

    PubMed

    Fenske, Nora; Fahrmeir, Ludwig; Hothorn, Torsten; Rzehak, Peter; Höhle, Michael

    2013-07-25

    Childhood obesity and the investigation of its risk factors has become an important public health issue. Our work is based on and motivated by a German longitudinal study including 2,226 children with up to ten measurements on their body mass index (BMI) and risk factors from birth to the age of 10 years. We introduce boosting of structured additive quantile regression as a novel distribution-free approach for longitudinal quantile regression. The quantile-specific predictors of our model include conventional linear population effects, smooth nonlinear functional effects, varying-coefficient terms, and individual-specific effects, such as intercepts and slopes. Estimation is based on boosting, a computer intensive inference method for highly complex models. We propose a component-wise functional gradient descent boosting algorithm that allows for penalized estimation of the large variety of different effects, particularly leading to individual-specific effects shrunken toward zero. This concept allows us to flexibly estimate the nonlinear age curves of upper quantiles of the BMI distribution, both on population and on individual-specific level, adjusted for further risk factors and to detect age-varying effects of categorical risk factors. Our model approach can be regarded as the quantile regression analog of Gaussian additive mixed models (or structured additive mean regression models), and we compare both model classes with respect to our obesity data.
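The component-wise functional gradient boosting described here is implemented for R (the mboost family); a rough Python stand-in for the core idea of fitting a chosen quantile by boosting the pinball loss, on a hypothetical nonlinear age-BMI relationship:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
# Hypothetical data: BMI-like outcome with a nonlinear age effect and
# noise that grows with age (so upper quantiles diverge from the mean)
age = rng.uniform(0, 10, size=1000)
bmi = 14 + 2 * np.sqrt(age + 0.5) + rng.normal(0, 0.5 + 0.1 * age)

# Each boosting stage fits the negative gradient of the quantile (pinball) loss
q90 = GradientBoostingRegressor(loss="quantile", alpha=0.9,
                                n_estimators=200, max_depth=2, random_state=1)
q90.fit(age.reshape(-1, 1), bmi)

# Roughly 90% of observations should lie below the fitted 0.9-quantile curve
coverage = np.mean(bmi <= q90.predict(age.reshape(-1, 1)))
print(f"empirical coverage: {coverage:.2f}")
```

The paper's model additionally includes smooth structured terms and individual-specific intercepts and slopes shrunken toward zero, which tree boosting does not provide.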

  19. Estimating interaction on an additive scale between continuous determinants in a logistic regression model.

    PubMed

    Knol, Mirjam J; van der Tweel, Ingeborg; Grobbee, Diederick E; Numans, Mattijs E; Geerlings, Mirjam I

    2007-10-01

    To determine the presence of interaction in epidemiologic research, typically a product term is added to the regression model. In linear regression, the regression coefficient of the product term reflects interaction as departure from additivity. However, in logistic regression it refers to interaction as departure from multiplicativity. Rothman has argued that interaction estimated as departure from additivity better reflects biologic interaction. So far, the literature on estimating interaction on an additive scale using logistic regression has focused only on dichotomous determinants. The objective of the present study was to provide the methods to estimate interaction between continuous determinants and to illustrate these methods with a clinical example. From the existing literature we derived the formulas to quantify interaction as departure from additivity between one continuous and one dichotomous determinant and between two continuous determinants using logistic regression. Bootstrapping was used to calculate the corresponding confidence intervals. To illustrate the theory with an empirical example, data from the Utrecht Health Project were used, with age and body mass index as risk factors for elevated diastolic blood pressure. The methods and formulas presented in this article are intended to assist epidemiologists in calculating interaction on an additive scale between two variables on a certain outcome. The proposed methods are included in a spreadsheet which is freely available at: http://www.juliuscenter.nl/additive-interaction.xls.
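The usual additive-interaction measure derived from a logistic model is the relative excess risk due to interaction (RERI), with odds ratios standing in for risk ratios when the outcome is rare. A minimal sketch for two continuous determinants evaluated at chosen increments d1 and d2 (the function and values are illustrative, not the article's spreadsheet):

```python
import numpy as np

def reri_from_logit(b1, b2, b3, d1=1.0, d2=1.0):
    """RERI for a fitted logistic model
    logit(p) = b0 + b1*x1 + b2*x2 + b3*x1*x2,
    comparing a joint increment (d1 in x1 and d2 in x2) against
    each separate increment: RERI = OR11 - OR10 - OR01 + 1."""
    or11 = np.exp(b1 * d1 + b2 * d2 + b3 * d1 * d2)
    or10 = np.exp(b1 * d1)
    or01 = np.exp(b2 * d2)
    return or11 - or10 - or01 + 1.0

# Without a product term (b3 = 0) the ORs multiply exactly, yet RERI is
# nonzero: 3.0 - 2.0 - 1.5 + 1 = 0.5, i.e. departure from additivity
print(reri_from_logit(np.log(2), np.log(1.5), 0.0))
```

Confidence intervals for RERI come from bootstrapping refitted models, as the abstract describes.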

  20. Weighted minimum-norm source estimation of magnetoencephalography utilizing the temporal information of the measured data

    NASA Astrophysics Data System (ADS)

    Iwaki, Sunao; Ueno, Shoogo

    1998-06-01

    The weighted minimum-norm estimation (wMNE) is a popular method to obtain the source distribution in the human brain from magneto- and electroencephalographic measurements when detailed information about the generator profile is not available. We propose a method to reconstruct current distributions in the human brain based on the wMNE technique, with the weighting factors defined by a simplified multiple signal classification (MUSIC) prescanning. In this method, in addition to the conventional depth normalization technique, the weighting factors of the wMNE were determined from the cost values previously calculated by a simplified MUSIC scan, which incorporates the temporal information of the measured data. We performed computer simulations of this method and compared it with the conventional wMNE method. The results show that the proposed method is effective for the reconstruction of current distributions from noisy data.
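The wMNE step itself reduces to a Tikhonov-regularized weighted pseudoinverse of the lead-field matrix; the depth normalization and MUSIC-derived costs enter through the diagonal weight matrix. A minimal numpy sketch with hypothetical dimensions and random weights:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 32, 200

A = rng.standard_normal((n_sensors, n_sources))   # lead-field (forward) matrix
w = rng.uniform(0.5, 2.0, size=n_sources)         # per-source weights, e.g. depth
W = np.diag(w ** 2)                               # normalization times MUSIC costs

x_true = np.zeros(n_sources)
x_true[[10, 120]] = [1.0, -0.8]                   # two active sources
b = A @ x_true + 0.01 * rng.standard_normal(n_sensors)   # measured field

# Weighted minimum-norm solution: x = W A^T (A W A^T + lam I)^-1 b
lam = 1e-2
x_hat = W @ A.T @ np.linalg.solve(A @ W @ A.T + lam * np.eye(n_sensors), b)
print(np.linalg.norm(A @ x_hat - b) / np.linalg.norm(b))   # small: fits the data
```

Larger weights in W allow the corresponding sources to carry more of the solution, which is how a MUSIC prescan can bias the estimate toward temporally plausible locations.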

  1. Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering

    NASA Technical Reports Server (NTRS)

    Bolton, Matthew L.; Bass, Ellen J.

    2009-01-01

    Both the human factors engineering (HFE) and formal methods communities are concerned with finding and eliminating problems with safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to use model checking with HFE practices to perform formal verification of a human-interactive system. Despite the use of a seemingly simple target system, a patient controlled analgesia pump, the initial model proved to be difficult for the model checker to verify in a reasonable amount of time. This resulted in a number of model revisions that affected the HFE architectural, representativeness, and understandability goals of the effort. If formal methods are to meet the needs of the HFE community, additional modeling tools and technological developments are necessary.

  2. [Evaluation of Contextual Factors in Psychosomatic Rehabilitation].

    PubMed

    Bülau, N I; Kessemeier, F; Petermann, F; Bassler, M; Kobelt, A

    2016-12-01

    Objectives: Although individualized and ICF-oriented implementation of rehabilitation treatment requires knowledge of relevant contextual factors, there is a lack of operationalized documentation and measurement tools to evaluate these factors. Therefore, an ICF-oriented semi-structured interview was designed. Methods: 20 contextual factors were externally assessed whether they negatively affected mental functioning and participation of psychosomatic patients. Additionally, psychometric scales were applied. Results: Six relevant impairing contextual factors were identified. Contextual factors significantly correlated with psychometric scales. Patients with higher contextual impairment showed significantly higher psychological stress levels. Conclusions: Anamnesis of contextual factors at the beginning of psychosomatic rehabilitation yields important information for therapy planning. Further research on contextual factors in medical rehabilitation is needed. © Georg Thieme Verlag KG Stuttgart · New York.

  3. Research on design method of the full form ship with minimum thrust deduction factor

    NASA Astrophysics Data System (ADS)

    Zhang, Bao-ji; Miao, Ai-qin; Zhang, Zhu-xin

    2015-04-01

    In the preliminary design stage of full form ships, in order to obtain a hull form with low resistance and maximum propulsion efficiency, an optimization design program for a full form ship with the minimum thrust deduction factor has been developed, which combines potential flow theory and boundary layer theory with optimization techniques. In the optimization process, the Sequential Unconstrained Minimization Technique (SUMT) interior point method of Nonlinear Programming (NLP) was adopted, with the minimum thrust deduction factor as the objective function. An appropriate displacement is a basic constraint condition, and boundary layer separation is an additional one. The parameters of the hull form modification function are used as design variables. Finally, a numerical optimization example for the after-body lines of a 50,000 DWT product oil tanker is provided, which indicates that the propulsion efficiency was improved distinctly by this optimal design method.
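SUMT replaces a constrained minimization with a sequence of unconstrained problems in which a barrier term, weighted ever more lightly, keeps iterates feasible. A toy log-barrier sketch with scipy; the quadratic objective and linear constraint are hypothetical stand-ins for the thrust-deduction objective and displacement constraint:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):                      # smooth objective (illustrative surrogate)
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def g(x):                      # feasibility requires g(x) >= 0
    return 1.0 - x[0] - x[1]

def barrier(x, mu):            # log-barrier: infinite outside the feasible region
    gx = g(x)
    return np.inf if gx <= 0.0 else f(x) - mu * np.log(gx)

x = np.array([0.0, 0.0])       # strictly feasible start (g = 1 > 0)
mu = 1.0
for _ in range(8):             # SUMT: solve a sequence of unconstrained problems
    x = minimize(barrier, x, args=(mu,), method="Nelder-Mead").x
    mu *= 0.1                  # shrink the barrier weight each stage

print(x)                       # tends to the constrained optimum (1, 0)
```

As mu shrinks, the unconstrained minimizers trace a path from the interior toward the constrained optimum on the boundary, which is the defining behavior of an interior point method.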

  4. Vibration Testing of Electrical Cables to Quantify Loads at Tie-Down Locations

    NASA Technical Reports Server (NTRS)

    Dutson, Joseph D.

    2013-01-01

    The standard method for defining static equivalent structural load factors for components is based on Miles' equation. Unless test data are available, 5% critical damping is assumed for all components when calculating loads. Application of this method to electrical cable tie-down hardware often results in high loads, which frequently exceed the capability of typical tie-down options such as cable ties and P-clamps. Random vibration testing of electrical cables was used to better understand the factors that influence component loads: natural frequency, damping, and mass participation. An initial round of vibration testing successfully identified variables of interest, checked out the test fixture and instrumentation, and provided justification for removing some conservatism in the standard method. Additional testing is planned that will include a larger range of cable sizes for the most significant contributors to load as variables to further refine loads at cable tie-down points. Completed testing has provided justification to reduce loads at cable tie-downs by 45%, with additional refinement based on measured cable natural frequencies.
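The standard method rests on Miles' equation, which gives the RMS acceleration of a single-degree-of-freedom system driven by a flat random-vibration spectrum; a 3-sigma multiple is then taken as the static equivalent load. A sketch with hypothetical cable tie-down numbers:

```python
import math

def miles_grms(fn_hz, q, asd_g2_per_hz):
    """Miles' equation: G_rms = sqrt((pi/2) * fn * Q * ASD), where
    fn is the natural frequency [Hz], Q the amplification (Q = 1/(2*zeta)),
    and ASD the input acceleration spectral density at fn [g^2/Hz]."""
    return math.sqrt(math.pi / 2.0 * fn_hz * q * asd_g2_per_hz)

# Hypothetical tie-down point: fn = 100 Hz, 5% critical damping (Q = 10),
# 0.1 g^2/Hz input at the natural frequency
zeta = 0.05
g_rms = miles_grms(100.0, 1.0 / (2.0 * zeta), 0.1)
print(f"G_rms = {g_rms:.1f} g")   # a 3-sigma static equivalent would be 3 * G_rms
```

The formula makes clear why measured natural frequencies and damping matter: the load scales with the square root of both, so the default 5% damping assumption can be very conservative.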

  5. Use of multiple methods to determine factors affecting quality of care of patients with diabetes.

    PubMed

    Khunti, K

    1999-10-01

    The process of care of patients with diabetes is complex; however, GPs are playing a greater role in its management. Despite the research evidence, the quality of care of patients with diabetes is variable. In order to improve care, information is required on the obstacles faced by practices in improving care. Qualitative and quantitative methods can be used for formation of hypotheses and the development of survey procedures. However, to date few examples exist in general practice research on the use of multiple methods using both quantitative and qualitative techniques for hypothesis generation. We aimed to determine information on all factors that may be associated with delivery of care to patients with diabetes. Factors for consideration on delivery of diabetes care were generated by multiple qualitative methods including brainstorming with health professionals and patients, a focus group and interviews with key informants which included GPs and practice nurses. Audit data showing variations in care of patients with diabetes were used to stimulate the brainstorming session. A systematic literature search focusing on quality of care of patients with diabetes in primary care was also conducted. Fifty-four potential factors were identified by multiple methods. Twenty (37.0%) were practice-related factors, 14 (25.9%) were patient-related factors and 20 (37.0%) were organizational factors. A combination of brainstorming and the literature review identified 51 (94.4%) factors. Patients did not identify factors in addition to those identified by other methods. The complexity of delivery of care to patients with diabetes is reflected in the large number of potential factors identified in this study. This study shows the feasibility of using multiple methods for hypothesis generation. Each evaluation method provided unique data which could not otherwise be easily obtained. 
This study highlights a way of combining various traditional methods in an attempt to overcome the deficiencies and bias that may occur when using a single method. Similar methods can also be used to generate hypotheses for other exploratory research. An important responsibility of health authorities and primary care groups will be to assess the health needs of their local populations. Multiple methods could also be used to identify and commission services to meet these needs.

  6. Longitudinal Effects on Early Adolescent Language: A Twin Study

    PubMed Central

    DeThorne, Laura Segebart; Smith, Jamie Mahurin; Betancourt, Mariana Aparicio; Petrill, Stephen A.

    2016-01-01

    Purpose: We evaluated genetic and environmental contributions to individual differences in language skills during early adolescence, measured by both language sampling and standardized tests, and examined the extent to which these genetic and environmental effects are stable across time. Method: We used structural equation modeling on latent factors to estimate additive genetic, shared environmental, and nonshared environmental effects on variance in standardized language skills (i.e., Formal Language) and productive language-sample measures (i.e., Productive Language) in a sample of 527 twins across 3 time points (mean ages 10–12 years). Results: Individual differences in the Formal Language factor were influenced primarily by genetic factors at each age, whereas individual differences in the Productive Language factor were primarily due to nonshared environmental influences. For the Formal Language factor, the stability of genetic effects was high across all 3 time points. For the Productive Language factor, nonshared environmental effects showed low but statistically significant stability across adjacent time points. Conclusions: The etiology of language outcomes may differ substantially depending on assessment context. In addition, the potential mechanisms for nonshared environmental influences on language development warrant further investigation. PMID:27732720
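Full ACE estimates come from structural equation modeling as in the article, but the classical back-of-envelope decomposition is Falconer's formulas applied to MZ and DZ twin correlations. A sketch with hypothetical correlations resembling a strongly heritable trait:

```python
def falconer_ace(r_mz, r_dz):
    """Falconer's estimates of additive genetic (a2), shared-environment (c2),
    and nonshared-environment (e2) variance components from monozygotic
    and dizygotic twin correlations."""
    a2 = 2.0 * (r_mz - r_dz)   # MZ twins share ~100% of genes, DZ ~50%
    c2 = 2.0 * r_dz - r_mz     # shared environment: what the MZ excess can't explain
    e2 = 1.0 - r_mz            # nonshared environment (plus measurement error)
    return a2, c2, e2

a2, c2, e2 = falconer_ace(0.80, 0.45)
print(f"a2={a2:.2f}, c2={c2:.2f}, e2={e2:.2f}")   # a2=0.70, c2=0.10, e2=0.20
```

A pattern like the Productive Language factor (low twin correlations in both zygosity groups) would instead push most of the variance into e2.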

  7. Assessing Stream Channel Stability at Bridges in Physiographic Regions

    DOT National Transportation Integrated Search

    2006-07-01

    The objective of this study was to expand and improve a rapid channel stability assessment method developed previously by Johnson et al. to include additional factors, such as major physiographic units across the United States, a greater range of ban...

  8. A method for economic evaluation of redundancy levels for aerospace systems

    NASA Technical Reports Server (NTRS)

    Hodge, P. W.; Frumkin, B.

    1973-01-01

    The principle comprises primary cost impacts, such as operational delays, reflown missions due to aborts, procurement of equipment, and vehicle expansion to accommodate additional equipment. Economics are estimated by a criterion which is relatively insensitive to irrelevant cost factors.

  9. 14 CFR 23.621 - Casting factors.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... either magnetic particle, penetrant or other approved equivalent non-destructive inspection method; or... percent approved non-destructive inspection. When an approved quality control procedure is established and...) of this section must be applied in addition to those necessary to establish foundry quality control...

  10. Slick Science.

    ERIC Educational Resources Information Center

    Howard, Jane O.

    1989-01-01

    Some of the background factors and deleterious effects of oil spills are discussed. A classroom activity which demonstrates an oil spill, how the oil affects feathers, and one clean-up method is presented. A list of recent oil spills and three additional resources are included. (CW)

  11. The successive projection algorithm as an initialization method for brain tumor segmentation using non-negative matrix factorization.

    PubMed

    Sauwen, Nicolas; Acou, Marjan; Bharath, Halandur N; Sima, Diana M; Veraart, Jelle; Maes, Frederik; Himmelreich, Uwe; Achten, Eric; Van Huffel, Sabine

    2017-01-01

    Non-negative matrix factorization (NMF) has become a widely used tool for additive parts-based analysis in a wide range of applications. As NMF is a non-convex problem, the quality of the solution will depend on the initialization of the factor matrices. In this study, the successive projection algorithm (SPA) is proposed as an initialization method for NMF. SPA builds on convex geometry and allocates endmembers based on successive orthogonal subspace projections of the input data. SPA is a fast and reproducible method, and it aligns well with the assumptions made in near-separable NMF analyses. SPA was applied to multi-parametric magnetic resonance imaging (MRI) datasets for brain tumor segmentation using different NMF algorithms. Comparison with common initialization methods shows that SPA achieves similar segmentation quality and it is competitive in terms of convergence rate. Whereas SPA was previously applied as a direct endmember extraction tool, we have shown improved segmentation results when using SPA as an initialization method, as it allows further enhancement of the sources during the NMF iterative procedure.
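SPA itself is only a few lines of linear algebra: repeatedly take the column of largest norm, then project the data onto the orthogonal complement of that column. A minimal numpy sketch on near-separable toy data; the selected columns would then seed one NMF factor matrix:

```python
import numpy as np

def spa(X, r):
    """Successive projection algorithm: indices of r columns of X chosen
    by repeated max-norm selection and orthogonal projection."""
    R = X.astype(float).copy()
    idx = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        idx.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)   # project out the selected direction
    return idx

# Near-separable toy data: columns 0..2 are pure endmembers, the rest mixtures
rng = np.random.default_rng(3)
E = np.eye(3) * 5.0                   # scaled endmember columns
M = rng.uniform(0, 1, size=(3, 20))   # mixed columns
X = np.hstack([E, M])

print(sorted(spa(X, 3)))   # recovers the endmember columns: [0, 1, 2]
```

In the segmentation setting, the r selected columns initialize the source matrix, e.g. W0 = X[:, idx] (a hypothetical naming), before the NMF iterations refine both factors; because SPA is deterministic, the initialization is reproducible.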

  12. Crystallization of M-CSF.alpha.

    DOEpatents

    Pandit, Jayvardhan; Jancarik, Jarmila; Kim, Sung-Hou; Koths, Kirston; Halenbeck, Robert; Fear, Anna Lisa; Taylor, Eric; Yamamoto, Ralph; Bohm, Andrew

    1999-01-01

    The present invention is directed to methods for crystallizing macrophage colony stimulating factor (M-CSF) and to a crystalline M-CSF produced thereby. The present invention is also directed to methods for designing and producing M-CSF agonists and antagonists using information derived from the crystallographic structure of M-CSF. The invention is also directed to methods for screening M-CSF agonists and antagonists. In addition, the present invention is directed to an isolated, purified, soluble and functional M-CSF receptor.

  13. Boar Semen Studies

    PubMed Central

    King, G. J.; Macpherson, J. W.

    1966-01-01

    A successful method for low temperature preservation of bull semen was modified for use with boar semen. Observations were made on the effects of varying cooling rate, equilibration time, freezing rate, glycerol concentration, method of glycerol addition, packaging containers, extender pH and tonicity. Observations indicate that boar semen should be cooled and frozen at a slower rate than bull semen. Within the ranges or methods examined, the other factors had little effect on recovery of motility after freezing. PMID:4226548

  14. Modification of polymers by polymeric additives

    NASA Astrophysics Data System (ADS)

    Nesterov, A. E.; Lebedev, E. V.

    1989-08-01

    The conditions for the thermodynamic compatibility of polymers and methods for its enhancement are examined. The study of the influence of various factors on the concentration-temperature limits of compatibility, dispersion stabilisation processes, and methods for the improvement of adhesion between phases in mixtures of thermodynamically incompatible polymers is described. Questions concerning the improvement of the physicomechanical characteristics of polymer dispersions are considered. The bibliography includes 200 references.

  15. Molecularly imprinted polymers for the detection of illegal drugs and additives: a review.

    PubMed

    Xiao, Deli; Jiang, Yue; Bi, Yanping

    2018-04-04

    This review (with 154 refs.) describes the current status of using molecularly imprinted polymers (MIPs) in the extraction and quantitation of illicit drugs and additives. The review starts with an introduction to some synthesis methods (lump MIPs, spherical MIPs, surface imprinting) of MIPs using illicit drugs and additives as templates. The next section covers applications, with subsections on the detection of illegal additives in food, of doping in sports, and of illicit addictive drugs. A particular focus is directed towards current limitations and challenges: the optimization of methods for the preparation of MIPs, their applicability to aqueous samples, the leakage of template molecules, and the identification of the best balance between adsorption capacity and selectivity factor. Finally, the need for convincing characterization methods, the lack of uniform parameters for defining selectivity, and the merits and demerits of MIPs prepared using nanomaterials are addressed. Strategies are suggested to solve existing problems, and future developments are discussed with respect to a more widespread use in relevant fields. Graphical abstract: This review gives a comprehensive overview of the advances made in molecular imprinting of polymers for use in the extraction and quantitation of illicit drugs and additives. Methods for syntheses, highlighted applications, limitations and current challenges are specifically addressed.

  16. Single cell qPCR reveals that additional HAND2 and microRNA-1 facilitate the early reprogramming progress of seven-factor-induced human myocytes

    PubMed Central

    Bektik, Emre; Dennis, Adrienne; Prasanna, Prateek; Madabhushi, Anant

    2017-01-01

    The direct reprogramming of cardiac fibroblasts into induced cardiomyocyte (CM)-like cells (iCMs) holds great promise in restoring heart function. We previously found that human fibroblasts could be reprogrammed toward CM-like cells by 7 reprogramming factors; however, iCM reprogramming in human fibroblasts is both more difficult and more time-intensive than that in mouse cells. In this study, we investigated if additional reprogramming factors could quantitatively and/or qualitatively improve 7-factor-mediated human iCM reprogramming by single-cell quantitative PCR. We first validated 46 pairs of TaqMan® primers/probes that had sufficient efficiency and sensitivity to detect significant differences in gene expression between individual H9 human embryonic stem cell (ESC)-differentiated CMs (H9CMs) and human fibroblasts. The expression profile of these 46 genes revealed an improved reprogramming in 12-week iCMs compared to 4-week iCMs reprogrammed by 7 factors, indicating a prolonged stochastic phase during human iCM reprogramming. Although no single additional reprogramming factor yielded a greater number of iCMs, our single-cell qPCR revealed that additional HAND2 or microRNA-1 could facilitate the silencing of fibroblast genes and yield a better degree of reprogramming in more reprogrammed iCMs. Notably, the more HAND2 was expressed, the more highly cardiac genes were activated in 7Fs+HAND2-reprogrammed iCMs. In conclusion, HAND2 and microRNA-1 could help 7 factors to facilitate the early progress of iCM-reprogramming from human fibroblasts. Our study provides valuable information to further optimize a method of direct iCM-reprogramming in human cells. PMID:28796841

  17. Single cell qPCR reveals that additional HAND2 and microRNA-1 facilitate the early reprogramming progress of seven-factor-induced human myocytes.

    PubMed

    Bektik, Emre; Dennis, Adrienne; Prasanna, Prateek; Madabhushi, Anant; Fu, Ji-Dong

    2017-01-01

    The direct reprogramming of cardiac fibroblasts into induced cardiomyocyte (CM)-like cells (iCMs) holds great promise in restoring heart function. We previously found that human fibroblasts could be reprogrammed toward CM-like cells by 7 reprogramming factors; however, iCM reprogramming in human fibroblasts is both more difficult and more time-intensive than that in mouse cells. In this study, we investigated whether additional reprogramming factors could quantitatively and/or qualitatively improve 7-factor-mediated human iCM reprogramming by single-cell quantitative PCR. We first validated 46 pairs of TaqMan® primers/probes that had sufficient efficiency and sensitivity to detect the significant difference of gene expression between individual H9 human embryonic stem cell (ESC)-differentiated CMs (H9CMs) and human fibroblasts. The expression profile of these 46 genes revealed an improved reprogramming in 12-week iCMs compared to 4-week iCMs reprogrammed by 7 factors, indicating a prolonged stochastic phase during human iCM reprogramming. Although no single additional reprogramming factor yielded a greater number of iCMs, our single-cell qPCR revealed that additional HAND2 or microRNA-1 could facilitate the silencing of fibroblast genes and yield a better degree of reprogramming in more reprogrammed iCMs. Notably, the more HAND2 was expressed, the more highly cardiac genes were activated in 7Fs+HAND2-reprogrammed iCMs. In conclusion, HAND2 and microRNA-1 could help 7 factors to facilitate the early progress of iCM-reprogramming from human fibroblasts. Our study provides valuable information to further optimize a method of direct iCM-reprogramming in human cells.

  18. Taguchi method of experimental design in materials education

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.

    1993-01-01

    Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
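
The orthogonal-array and signal-to-noise bookkeeping behind a Taguchi analysis fits in a few lines. The sketch below is illustrative only: the L4 orthogonal array is standard, but the replicated responses and the "larger is better" criterion are hypothetical assumptions, not material from the course described.

```python
import math

# L4 orthogonal array: 4 trials x 3 two-level factors (levels coded 0/1).
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# Hypothetical replicated responses for each trial (e.g. material strength).
responses = [(41.0, 43.0), (48.0, 47.0), (44.0, 46.0), (52.0, 50.0)]

def sn_larger_is_better(ys):
    """Taguchi signal-to-noise ratio (dB) for a 'larger is better' response."""
    return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / len(ys))

sn = [sn_larger_is_better(ys) for ys in responses]

# Main effect of each factor: mean S/N at level 1 minus mean S/N at level 0.
for f in range(3):
    lvl0 = [s for row, s in zip(L4, sn) if row[f] == 0]
    lvl1 = [s for row, s in zip(L4, sn) if row[f] == 1]
    effect = sum(lvl1) / len(lvl1) - sum(lvl0) / len(lvl0)
    print(f"factor {f}: effect on S/N = {effect:+.3f} dB")
```

Because L4 accommodates at most three two-level factors in four trials, it also illustrates the "zero degrees of freedom" caveat: using all three columns leaves none for estimating error or interactions.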

  19. Testing all six person-oriented principles in dynamic factor analysis.

    PubMed

    Molenaar, Peter C M

    2010-05-01

    All six person-oriented principles identified by Sterba and Bauer's Keynote Article can be tested by means of dynamic factor analysis in its current form. In particular, it is shown how complex interactions and interindividual differences/intraindividual change can be tested in this way. In addition, the necessity of using single-subject methods in the analysis of developmental processes is emphasized, and attention is drawn to the possibility of optimally treating developmental psychopathology by means of new computational techniques that can be integrated with dynamic factor analysis.

  20. Simulation of tropical cyclone activity over the western North Pacific based on CMIP5 models

    NASA Astrophysics Data System (ADS)

    Shen, Haibo; Zhou, Weican; Zhao, Haikun

    2017-09-01

    Based on the Coupled Model Intercomparison Project 5 (CMIP5) models, tropical cyclone (TC) activity in the summers of 1965-2005 over the western North Pacific (WNP) is simulated by a TC dynamical downscaling system. In consideration of the diversity among climate models, Bayesian model averaging (BMA) and equal-weighted model averaging (EMA) methods are applied to produce ensemble large-scale environmental factors from the CMIP5 model outputs. The environmental factors generated by the BMA and EMA methods are compared, as are the corresponding TC simulations by the downscaling system. Results indicate that the BMA method shows a significant advantage over the EMA. In addition, the impact of model selection on the BMA method is examined. For each factor, the ten models with better performance are selected from the 30 CMIP5 models and BMA is then conducted on this subset. The resulting ensemble environmental factors and simulated TC activity are similar to the results from the 30-model BMA, which verifies that the BMA method assigns each model in the ensemble a weight according to that model's predictive skill. Thus, the presence of poorly performing models does not substantially degrade the BMA effectiveness, and the ensemble outcomes are improved. Finally, based on the BMA method and the downscaling system, we analyze the sensitivity of TC activity to three important environmental factors, i.e., sea surface temperature (SST), large-scale steering flow, and vertical wind shear. Among the three factors, SST and large-scale steering flow greatly affect TC tracks, while the average intensity distribution is sensitive to all three environmental factors. Moreover, SST and vertical wind shear jointly play a critical role in the inter-annual variability of TC lifetime maximum intensity and the frequency of intense TCs.
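
The contrast between skill-based BMA weights and equal EMA weights can be sketched in a few lines. This is a minimal illustration under strong assumptions: a fixed Gaussian error model with a known sigma, and made-up observations and model predictions; the study's actual BMA (e.g. with estimated variances) is more elaborate.

```python
import math

# Hypothetical training-period observations and three models' predictions.
obs = [28.1, 28.4, 27.9, 28.6, 28.2]
preds = {
    "modelA": [28.0, 28.5, 27.8, 28.7, 28.1],
    "modelB": [28.6, 29.0, 28.4, 29.1, 28.8],
    "modelC": [27.5, 27.8, 27.2, 28.0, 27.6],
}

def gaussian_log_lik(y, yhat, sigma=0.3):
    """Log-likelihood of observations under a fixed Gaussian error model."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (a - b)**2 / (2 * sigma**2) for a, b in zip(y, yhat))

loglik = {m: gaussian_log_lik(obs, p) for m, p in preds.items()}
mx = max(loglik.values())                       # subtract max for stability
w_unnorm = {m: math.exp(l - mx) for m, l in loglik.items()}
z = sum(w_unnorm.values())
bma_w = {m: w / z for m, w in w_unnorm.items()}  # skill-based BMA weights
ema_w = {m: 1.0 / len(preds) for m in preds}     # equal EMA weights

# Ensemble factor: weighted combination of the model outputs.
bma = [sum(bma_w[m] * preds[m][t] for m in preds) for t in range(len(obs))]
```

With these toy numbers the closest model receives the largest weight, so a poorly performing member barely affects the BMA ensemble, which is the behavior the abstract reports.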

  1. Factors Influencing Relapse-Free Survival in Merkel Cell Carcinoma of the Lower Limb-A Review of 60 Cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poulsen, Michael, E-mail: michael_poulsen@health.qld.gov.a; Round, Caroline; Keller, Jacqui

    2010-02-01

    Purpose: Factors affecting relapse-free survival (RFS) in patients with Merkel cell carcinoma (MCC) of the lower limb were reviewed. Methods and Materials: The records of 60 patients from 1986 to 2005 with a diagnosis of MCC of the lower limb or buttock were retrospectively reviewed. The patients were treated with curative intent with surgery, radiation, or chemotherapy. Results: The 5-year overall survival, disease-specific survival, and RFS were 53%, 61%, and 20%, respectively. Factors influencing RFS were analyzed using univariate analysis. It appeared that recurrent disease worsened RFS (p = 0.03) and the addition of any radiotherapy improved RFS (p < 0.001), as did radiotherapy to the inguinal nodes (p = 0.01) or primary site and inguinal nodes (p = 0.003). Age, surgical margins, and stage were not statistically significant. On multivariate analysis, the only significant factor was the addition of radiotherapy (hazard ratio = 0.51, p = 0.03). Conclusion: The addition of radiotherapy improves RFS compared with surgery alone. Elective treatment should be given to the inguinal nodes to reduce the risk of relapse.
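
Survival fractions such as the 5-year RFS above are conventionally obtained with the Kaplan-Meier product-limit estimator; a minimal sketch with hypothetical follow-up data (not the study's 60 patients) is:

```python
def kaplan_meier(times, events):
    """Product-limit survival curve.
    times: follow-up duration; events: 1 = relapse observed, 0 = censored.
    Returns a list of (time, surviving fraction) at each event time."""
    pairs = sorted(zip(times, events))
    at_risk = len(pairs)
    surv, curve = 1.0, []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        d = sum(e for tt, e in pairs if tt == t)    # relapses at time t
        n = sum(1 for tt, _ in pairs if tt == t)    # leaving the risk set at t
        if d > 0:
            surv *= 1.0 - d / at_risk
            curve.append((t, surv))
        at_risk -= n
        i += n
    return curve

# Hypothetical follow-up (months); the third patient is censored at 10 months.
curve = kaplan_meier([5, 10, 10, 15], [1, 1, 0, 1])
```

Univariate comparisons like those in the abstract would then contrast such curves between groups (e.g. with vs. without radiotherapy), typically via a log-rank test.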

  2. Optical Analog to Electromagnetically Induced Transparency in Cascaded Ring-Resonator Systems.

    PubMed

    Wang, Yonghua; Zheng, Hua; Xue, Chenyang; Zhang, Wendong

    2016-07-25

    The analogue of electromagnetically induced transparency in optical systems has shown great potential in slow-light and sensing applications. Here, we experimentally demonstrated a coupled-resonator-induced transparency system with three cascaded ring resonators in a silicon chip. The structure was modeled using the transfer matrix method. The influences of various parameters, including the coupling ratio of the couplers, the waveguide loss, and the additional loss of the couplers, on the transmission characteristics and group index have been investigated theoretically and numerically in detail. The transmission characteristics of the system were measured by the vertical grating coupling method. The enhanced quality factor reached 1.22 × 10⁵. In addition, we further tested the temperature performance of the device. The results provide a new method for the manipulation of light in highly integrated optical circuits and sensing applications.
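
For a single all-pass ring, the transfer-matrix treatment mentioned above reduces to a closed-form transmission expression; the sketch below uses the textbook all-pass formula, and the coupling and loss values are assumptions for illustration, not the paper's fitted parameters.

```python
import cmath
import math

def allpass_transmission(phi, t=0.98, a=0.99):
    """Power transmission of a single all-pass ring resonator.
    phi: round-trip phase; t: self-coupling coefficient of the coupler;
    a: single-pass amplitude factor (waveguide + coupler loss)."""
    e = a * cmath.exp(1j * phi)
    return abs((t - e) / (1 - t * e)) ** 2

# Sweep one free spectral range; the resonance dip sits at phi = 0.
phis = [2 * math.pi * k / 1000 - math.pi for k in range(1001)]
T = [allpass_transmission(p) for p in phis]
```

Cascading rings, as in the paper's three-ring device, amounts to multiplying the corresponding 2x2 transfer matrices of couplers and waveguide sections; the single-ring case already shows the resonance dip that the coupled system reshapes into an induced-transparency peak.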

  3. Fast HPLC-DAD quantification of nine polyphenols in honey by using second-order calibration method based on trilinear decomposition algorithm.

    PubMed

    Zhang, Xiao-Hua; Wu, Hai-Long; Wang, Jian-Yao; Tu, De-Zhu; Kang, Chao; Zhao, Juan; Chen, Yao; Miu, Xiao-Xia; Yu, Ru-Qin

    2013-05-01

    This paper describes the use of second-order calibration for the development of an HPLC-DAD method to quantify nine polyphenols in five kinds of honey samples. The sample treatment procedure was simplified effectively relative to traditional approaches. Baseline drift was also overcome by regarding the drift as additional factor(s), alongside the analytes of interest, in the mathematical model. The contents of polyphenols obtained by the alternating trilinear decomposition (ATLD) method have been successfully used to distinguish different types of honey. The method shows good linearity (r > 0.99), speed (t < 7.60 min) and accuracy, making it a promising routine strategy for the identification and quantification of polyphenols in complex matrices. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Eco-Material Selection for Auto Bodies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayyas, Ahmad T; Omar, Mohammed; Hayajneh, Mohammed T.

    In recent decades, the majority of automakers have started to include lightweight materials in their vehicles to meet strict environmental regulations and to improve the fuel efficiency of their vehicles. As a result, eco-material selection for vehicles has emerged as a new discipline under design for environment. This chapter summarizes methods of eco-material selection for automotive applications, with emphasis on auto bodies. A set of metrics for eco-material selection that takes into account economic, environmental and social factors is developed using numerical and qualitative methods. These metrics cover products' environmental impact, functionality and manufacturability, in addition to economic and societal factors.

  5. Learning to perform ear reconstruction.

    PubMed

    Wilkes, Gordon H

    2009-08-01

    Learning how to perform ear reconstruction is very difficult. There are no standardized teaching methods. This has resulted in many ear reconstructions being suboptimal. Learning requires a major commitment by the surgeon. Factors to be seriously considered by those considering performing this surgery are (1) commitment, (2) aptitude, (3) training methods available, (4) surgical skills and experience, and (5) additional equipment needs. Unless all these factors are addressed in a surgeon's decision to perform this form of reconstruction, the end result will be compromised, and patient care will not be optimized. It is hoped that considering these factors and following this approach will result in a higher quality of aesthetic result. The future of ear reconstruction lies in the use of advanced digital technologies and tissue engineering. Copyright Thieme Medical Publishers.

  6. [Difficulty influence factors of dental caries clinical treatment].

    PubMed

    Xuedong, Zhou; Junqi, Ling; Jingping, Liang; Jiyao, Li; Lei, Cheng; Qing, Yu; Yumei, Niu; Bin, Guo; Hui, Chen

    2017-02-01

    Dental caries is a major disease that severely threatens human oral health, characterized by high incidence, a low rate of treatment, and a high rate of retreatment. At present, restorative treatment remains the main method of caries treatment. With the development of Minimally Invasive Cosmetic Dentistry (MICD), the reasonable application of various treatment technologies, the maximum preservation of tooth tissues, and the maximization of treatment effects have become problems that call for immediate solutions in dental clinics. In addition, there still exist a large number of old restorations that need standard retreatment. Here, difficulty influence factors of dental caries clinical treatment, such as systemic and oral factors, individual caries susceptibility, treatment technologies and materials, retreatment methods for old restorations, and technique sensitivity, are analyzed, and corresponding processing strategies are put forward.

  7. Appearance of cell-adhesion factor in osteoblast proliferation and differentiation of apatite coating titanium by blast coating method.

    PubMed

    Umeda, Hirotsugu; Mano, Takamitsu; Harada, Koji; Tarannum, Ferdous; Ueyama, Yoshiya

    2017-08-01

    We have already reported that apatite coating of titanium by the blast coating (BC) method shows a higher rate of bone contact from the early stages in vivo, compared with pure titanium (Ti) and apatite coating of titanium by the flame spraying (FS) method. However, the detailed mechanism by which BC results in satisfactory bone contact is still unknown. In the present study, we investigated the importance of various factors, including cell-adhesion factors, in osteoblast proliferation and differentiation that could affect the osteoconductivity of the BC disks. Cell proliferation assays revealed that Saos-2 cells grew fastest on BC disks, and a spectrophotometric method using a LabAssay™ ALP kit showed that ALP activity was increased in cells on BC disks compared with Ti disks and FS disks. In addition, higher expression of E-cadherin and Fibronectin was observed in cells on BC disks than on Ti disks and FS disks by relative qPCR as well as Western blotting. These results suggest that the expression of cell-adhesion factors and the proliferation and differentiation of osteoblasts might be enhanced on BC disks, which might result in higher osteoconductivity.

  8. Biological and analytical variations of 16 parameters related to coagulation screening tests and the activity of coagulation factors.

    PubMed

    Chen, Qian; Shou, Weiling; Wu, Wei; Guo, Ye; Zhang, Yujuan; Huang, Chunmei; Cui, Wei

    2015-04-01

    To accurately estimate longitudinal changes in individuals, it is important to take into consideration the biological variability of the measurement. The few studies available on the biological variations of coagulation parameters are mostly outdated. We confirmed the published results using modern, fully automated methods and added data for additional coagulation parameters. At 8:00 am, 12:00 pm, and 4:00 pm on days 1, 3, and 5, venous blood was collected from 31 healthy volunteers. A total of 16 parameters related to coagulation screening tests, as well as the activity of coagulation factors, were analyzed; these included prothrombin time, fibrinogen (Fbg), activated partial thromboplastin time, thrombin time, international normalized ratio, prothrombin time activity, activated partial thromboplastin time ratio, fibrin(-ogen) degradation products, as well as the activity of factor II, factor V, factor VII, factor VIII, factor IX, and factor X. All intraindividual coefficient of variation (CVI) values for the parameters of the screening tests (except Fbg) were less than 5%. Conversely, the CVI values for the activity of coagulation factors were all greater than 5%. In addition, we calculated the reference change value to determine whether a significant difference exists between two test results from the same individual.
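
The reference change value mentioned at the end follows a standard formula combining analytical and intraindividual variation; a sketch with assumed coefficients of variation (not the study's measured values):

```python
import math

def reference_change_value(cv_analytical, cv_intraindividual, z=1.96):
    """Two-sided reference change value (%) at ~95% probability:
    RCV = sqrt(2) * z * sqrt(CVa^2 + CVi^2), with CVs given in percent."""
    return math.sqrt(2) * z * math.hypot(cv_analytical, cv_intraindividual)

# Hypothetical CVs (%) for a coagulation factor activity assay.
rcv = reference_change_value(cv_analytical=3.0, cv_intraindividual=8.0)
print(f"RCV = {rcv:.1f}%")
```

Two serial results from the same individual would then be flagged as significantly different only when they differ by more than the computed RCV.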

  9. Health-related quality of life and related factors of military police officers

    PubMed Central

    2014-01-01

    Purpose The present study aimed to determine the effect of demographic characteristics, occupation, anthropometric indices, and leisure-time physical activity levels on coronary risk and health-related quality of life among military police officers from the State of Santa Catarina, Brazil. Methods The sample included 165 military police officers who fulfilled the study’s inclusion criteria. The International Physical Activity Questionnaire and the Short Form Health Survey were used, in addition to a spreadsheet of socio-demographic, occupational and anthropometric data. Statistical analyses were performed using descriptive analysis followed by Spearman correlation and multiple linear regression analysis using the backward method. Results The waist-to-height ratio was identified as a risk factor for low health-related quality of life. In addition, the conicity index, fat percentage, years of service in the military police, minutes of work per day and leisure-time physical activity levels were identified as risk factors for coronary disease among police officers. Conclusions These findings suggest that the Military Police Department should adopt an institutional policy that allows police officers to practice regular physical activity in order to maintain and improve their physical fitness, health, job performance, and quality of life. PMID:24766910

  10. Strategies for Controlled Delivery of Biologics for Cartilage Repair

    PubMed Central

    Lam, Johnny; Lu, Steven; Kasper, F. Kurtis; Mikos, Antonios G.

    2014-01-01

    The delivery of biologics is an important component in the treatment of osteoarthritis and the functional restoration of articular cartilage. Numerous factors have been implicated in the cartilage repair process, but the uncontrolled delivery of these factors may not only reduce their full reparative potential but can also cause unwanted morphological effects. It is therefore imperative to consider the type of biologic to be delivered, the method of delivery, and the temporal as well as spatial presentation of the biologic to achieve the desired effect in cartilage repair. Additionally, the delivery of a single factor may not be sufficient in guiding neo-tissue formation, motivating recent research towards the delivery of multiple factors. This review will discuss the roles of various biologics involved in cartilage repair and the different methods of delivery for appropriate healing responses. A number of spatiotemporal strategies will then be emphasized for the controlled delivery of single and multiple bioactive factors in both in vitro and in vivo cartilage tissue engineering applications. PMID:24993610

  11. Bayesian CP Factorization of Incomplete Tensors with Automatic Rank Determination.

    PubMed

    Zhao, Qibin; Zhang, Liqing; Cichocki, Andrzej

    2015-09-01

    CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion through explicitly capturing the multilinear latent factors. The existing CP algorithms require the tensor rank to be manually specified; however, the determination of tensor rank remains a challenging problem, especially for CP rank. In addition, existing approaches do not take into account uncertainty information of latent factors or of missing entries. To address these issues, we formulate CP factorization using a hierarchical probabilistic model and employ a fully Bayesian treatment by incorporating a sparsity-inducing prior over multiple latent factors and the appropriate hyperpriors over all hyperparameters, resulting in automatic rank determination. To learn the model, we develop an efficient deterministic Bayesian inference algorithm, which scales linearly with data size. Our method is characterized as a tuning parameter-free approach, which can effectively infer underlying multilinear factors with a low-rank constraint, while also providing predictive distributions over missing entries. Extensive simulations on synthetic data illustrate the intrinsic capability of our method to recover the ground-truth CP rank and prevent the overfitting problem, even when a large proportion of entries is missing. Moreover, the results from real-world applications, including image inpainting and facial image synthesis, demonstrate that our method outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.

  12. Intercomparison of methods for image quality characterization. II. Noise power spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobbins, James T. III; Samei, Ehsan; Ranger, Nicole T.

    Second in a two-part series comparing measurement techniques for the assessment of basic image quality metrics in digital radiography, in this paper we focus on the measurement of the image noise power spectrum (NPS). Three methods were considered: (1) a method published by Dobbins et al. [Med. Phys. 22, 1581-1593 (1995)], (2) a method published by Samei et al. [Med. Phys. 30, 608-622 (2003)], and (3) a new method sanctioned by the International Electrotechnical Commission (IEC 62220-1, 2003), developed as part of an international standard for the measurement of detective quantum efficiency. In addition to an overall comparison of the estimated NPS between the three techniques, the following factors were also evaluated for their effect on the measured NPS: horizontal versus vertical directional dependence, the use of beam-limiting apertures, beam spectrum, and computational methods of NPS analysis, including the region-of-interest (ROI) size and the method of ROI normalization. Of these factors, none was found to demonstrate a substantial impact on the amplitude of the NPS estimates (≤3.1% relative difference in NPS averaged over frequency, for each factor considered separately). Overall, the three methods agreed to within 1.6% ± 0.8% when averaged over frequencies >0.15 mm⁻¹.
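
At its core, an NPS estimate is a normalized squared Fourier transform of detrended noise ROIs. The one-dimensional, single-realization sketch below conveys the idea; the pixel pitch, the synthetic noise, and the simple mean detrending are assumptions, whereas the measurement methods compared in the paper average many two-dimensional ROIs and differ precisely in such processing details.

```python
import cmath
import math
import random

def nps_1d(samples, pixel_pitch=0.1):
    """Single-realization 1-D noise power spectrum estimate (periodogram):
    NPS(f_k) = (pitch / N) * |DFT(detrended samples)|_k^2, for k < N/2."""
    n = len(samples)
    mean = sum(samples) / n
    detr = [s - mean for s in samples]          # remove the DC (mean) term
    nps = []
    for k in range(n // 2):
        X = sum(detr[m] * cmath.exp(-2j * math.pi * k * m / n)
                for m in range(n))
        nps.append(pixel_pitch / n * abs(X) ** 2)
    return nps

# Synthetic flat-field ROI: mean signal 100, Gaussian noise of sigma 2.
random.seed(0)
roi = [100 + random.gauss(0, 2) for _ in range(64)]
spectrum = nps_1d(roi)
```

For white noise the spectrum is flat on average; correlated detector noise would instead shape it, which is what the compared methods quantify.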

  13. A novel second-order standard addition analytical method based on data processing with multidimensional partial least-squares and residual bilinearization.

    PubMed

    Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C

    2009-10-05

    In the presence of analyte-background interactions and a significant background signal, both second-order multivariate calibration and standard addition are required for successful analyte quantitation achieving the second-order advantage. This report discusses a modified second-order standard addition method, in which the test data matrix is subtracted from the standard addition matrices, and quantitation proceeds via the classical external calibration procedure. It is shown that this novel data processing method allows one to apply not only parallel factor analysis (PARAFAC) and multivariate curve resolution-alternating least-squares (MCR-ALS), but also the recently introduced and more flexible partial least-squares (PLS) models coupled to residual bilinearization (RBL). In particular, the multidimensional variant N-PLS/RBL is shown to produce the best analytical results. The comparison is carried out with the aid of a set of simulated data, as well as two experimental data sets: one aimed at the determination of salicylate in human serum in the presence of naproxen as an additional interferent, and the second one devoted to the analysis of danofloxacin in human serum in the presence of salicylate.
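
For reference, the classical (zeroth-order) standard addition method that these second-order variants generalize estimates the analyte concentration by extrapolating the spiked-response line to its x-intercept; a sketch with hypothetical readings:

```python
def standard_addition(added, signal):
    """Least-squares line through (added concentration, signal) points;
    the analyte concentration in the test sample is the magnitude of the
    x-intercept, i.e. intercept / slope."""
    n = len(added)
    mx = sum(added) / n
    my = sum(signal) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(added, signal))
             / sum((x - mx) ** 2 for x in added))
    intercept = my - slope * mx
    return intercept / slope

# Hypothetical spiked-sample readings: 0, 5, 10, 15 ug/L added.
conc = standard_addition([0, 5, 10, 15], [2.0, 3.0, 4.0, 5.0])
```

The second-order schemes in the abstract replace each scalar signal with a full data matrix, which is what lets PARAFAC, MCR-ALS, or N-PLS/RBL retain the second-order advantage in the presence of interferents.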

  14. Determination of important topographic factors for landslide mapping analysis using MLP network.

    PubMed

    Alkhasawneh, Mutasem Sh; Ngah, Umi Kalthum; Tay, Lea Tien; Mat Isa, Nor Ashidi; Al-batah, Mohammad Subhi

    2013-01-01

    Landslide is one of the natural disasters that occur in Malaysia. Topographic factors such as elevation, slope angle, slope aspect, general curvature, plan curvature, and profile curvature are considered as the main causes of landslides. In order to determine the dominant topographic factors in landslide mapping analysis, a study was conducted and presented in this paper. There are three main stages involved in this study. The first stage is the extraction of extra topographic factors. Previous landslide studies had identified mainly six topographic factors. Seven new additional factors have been proposed in this study. They are longitude curvature, tangential curvature, cross section curvature, surface area, diagonal line length, surface roughness, and rugosity. The second stage is the specification of the weight of each factor using two methods. The methods are multilayer perceptron (MLP) network classification accuracy and Zhou's algorithm. At the third stage, the factors with higher weights were used to improve the MLP performance. Out of the thirteen factors, eight factors were considered as important factors, which are surface area, longitude curvature, diagonal length, slope angle, elevation, slope aspect, rugosity, and profile curvature. The classification accuracy of multilayer perceptron neural network has increased by 3% after the elimination of five less important factors.
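
Ranking input factors by the magnitude of learned weights can be illustrated with a toy linear classifier. This is not Zhou's algorithm or the paper's MLP: the three synthetic "factors" (only the first of which drives the label) and the single-layer perceptron are assumptions for illustration.

```python
import random

random.seed(1)

# Synthetic data: 3 candidate factors; only factor 0 determines the label,
# with a margin so the classes are cleanly separable.
X = []
for _ in range(200):
    x0 = random.choice([-1.0, 1.0]) * random.uniform(0.5, 1.0)  # informative
    X.append([x0, random.uniform(-1, 1), random.uniform(-1, 1)])  # + 2 noise
y = [1 if row[0] > 0 else 0 for row in X]

# Train a single-layer perceptron.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1
for _ in range(50):
    for row, target in zip(X, y):
        pred = 1 if sum(wi * xi for wi, xi in zip(w, row)) + b > 0 else 0
        err = target - pred
        w = [wi + lr * err * xi for wi, xi in zip(w, row)]
        b += lr * err

# Rank factors by absolute weight; low-ranked factors are candidates to drop.
ranking = sorted(range(3), key=lambda f: abs(w[f]), reverse=True)
acc = sum((1 if sum(wi * xi for wi, xi in zip(w, row)) + b > 0 else 0) == t
          for row, t in zip(X, y)) / len(y)
```

Dropping the low-weight factors and retraining mirrors, in miniature, the paper's elimination of five less important topographic factors before the final MLP.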

  15. Determination of Important Topographic Factors for Landslide Mapping Analysis Using MLP Network

    PubMed Central

    Alkhasawneh, Mutasem Sh.; Ngah, Umi Kalthum; Mat Isa, Nor Ashidi; Al-batah, Mohammad Subhi

    2013-01-01

    Landslide is one of the natural disasters that occur in Malaysia. Topographic factors such as elevation, slope angle, slope aspect, general curvature, plan curvature, and profile curvature are considered as the main causes of landslides. In order to determine the dominant topographic factors in landslide mapping analysis, a study was conducted and presented in this paper. There are three main stages involved in this study. The first stage is the extraction of extra topographic factors. Previous landslide studies had identified mainly six topographic factors. Seven new additional factors have been proposed in this study. They are longitude curvature, tangential curvature, cross section curvature, surface area, diagonal line length, surface roughness, and rugosity. The second stage is the specification of the weight of each factor using two methods. The methods are multilayer perceptron (MLP) network classification accuracy and Zhou's algorithm. At the third stage, the factors with higher weights were used to improve the MLP performance. Out of the thirteen factors, eight factors were considered as important factors, which are surface area, longitude curvature, diagonal length, slope angle, elevation, slope aspect, rugosity, and profile curvature. The classification accuracy of multilayer perceptron neural network has increased by 3% after the elimination of five less important factors. PMID:24453846

  16. Enhanced Component Performance Study. Emergency Diesel Generators 1998–2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2014-11-01

    This report presents an enhanced performance evaluation of emergency diesel generators (EDGs) at U.S. commercial nuclear power plants. This report evaluates component performance over time using Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES) data from 1998 through 2013 and maintenance unavailability (UA) performance data using Mitigating Systems Performance Index (MSPI) Basis Document data from 2002 through 2013. The objective is to present an analysis of factors that could influence the system and component trends, in addition to annual performance trends of failure rates and probabilities. The factors analyzed for the EDG component are the differences in failures between all demands and actual unplanned engineered safety feature (ESF) demands, differences among manufacturers, and differences among EDG ratings. Statistical analyses of these differences are performed, and the results show whether pooling is acceptable across these factors. In addition, engineering analyses were performed with respect to time period and failure mode. The factors analyzed are: sub-component, failure cause, detection method, recovery, manufacturer, and EDG rating.

  17. SOCIAL-ECOLOGICAL RESILIENCE AND ADAPTATION ON THE EASTERN SHORE OF THE CHESAPEAKE BAY

    EPA Science Inventory

    It is expected that this research will yield methods for operationalizing and assessing the presence of factors of resilience in social-ecological systems, as well as further understanding on the relationship between vulnerability, adaptation and resilience. In addition, it...

  18. Test Assembly Implications for Providing Reliable and Valid Subscores

    ERIC Educational Resources Information Center

    Lee, Minji K.; Sweeney, Kevin; Melican, Gerald J.

    2017-01-01

    This study investigates the relationships among factor correlations, inter-item correlations, and the reliability estimates of subscores, providing a guideline with respect to psychometric properties of useful subscores. In addition, it compares subscore estimation methods with respect to reliability and distinctness. The subscore estimation…

  19. Sample pre-concentration with high enrichment factors at a fixed location in paper-based microfluidic devices.

    PubMed

    Yeh, Shih-Hao; Chou, Kuang-Hua; Yang, Ruey-Jen

    2016-03-07

    The lack of sensitivity is a major problem among microfluidic paper-based analytical devices (μPADs) for early disease detection and diagnosis. Accordingly, the present study presents a method for improving the enrichment factor of low-concentration biomarkers by using shallow paper-based channels realized through a double-sided wax-printing process. In addition, the enrichment factor is further enhanced by exploiting the ion concentration polarization (ICP) effect on the cathodic side of the nanoporous membrane, in which a stationary sample plug is obtained. The occurrence of ICP on the shallow-channel μPAD is confirmed by measuring the current-voltage response as the external voltage is increased from 0 to 210 V (or the field strength from 0 to 1.05 × 10⁴ V m⁻¹) over 600 s. In addition, to the best of our knowledge, the electroosmotic flow (EOF) speed on the μPAD fabricated with a wax-channel is measured for the first time using a current monitoring method. The experimental results show that for a fluorescein sample, the concentration factor is increased from 130-fold in a conventional full-thickness paper channel to 944-fold in the proposed shallow channel. Furthermore, for a fluorescein isothiocyanate-labeled bovine serum albumin (FITC-BSA) sample, the proposed shallow-channel μPAD achieves an 835-fold improvement in the concentration factor. The concentration technique presented here provides a novel strategy for enhancing the detection sensitivity of μPAD applications.

  20. An Improved Multi-Sensor Fusion Navigation Algorithm Based on the Factor Graph

    PubMed Central

    Zeng, Qinghua; Chen, Weina; Liu, Jianye; Wang, Huizhe

    2017-01-01

    An integrated navigation system coupled with additional sensors can be used in Micro Unmanned Aerial Vehicle (MUAV) applications because the multi-sensor information is redundant and complementary, which can markedly improve system accuracy. How to deal with the information gathered from different sensors efficiently is an important problem. The fact that different sensors provide measurements asynchronously may complicate the processing of these measurements. In addition, the output signals of some sensors appear to have a non-linear character. In order to incorporate these measurements and calculate a navigation solution in real time, a multi-sensor fusion algorithm based on the factor graph is proposed. The global optimum solution is factorized according to the chain structure of the factor graph, which allows for a more general form of the conditional probability density. This converts the fusion problem into one of connecting factors defined by these measurements to the graph, without considering the relationship between the sensor update frequency and the fusion period. An experimental MUAV system has been built and some experiments have been performed to prove the effectiveness of the proposed method. PMID:28335570
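
The simplest instance of attaching measurement factors to a graph is a single scalar state constrained by two Gaussian factors, for which the MAP estimate is the precision-weighted mean; the sensor values and variances below are assumptions for illustration, not the paper's MUAV data.

```python
# Each factor constrains the scalar state x with a measurement z and a
# variance var (hypothetical values for two asynchronous sensors).
factors = [
    {"z": 10.2, "var": 0.04},   # e.g. a GNSS position fix
    {"z": 10.0, "var": 0.01},   # e.g. an INS-derived position estimate
]

# MAP estimate under independent Gaussian factors: precision-weighted mean.
precision = sum(1.0 / f["var"] for f in factors)
x_map = sum(f["z"] / f["var"] for f in factors) / precision
fused_var = 1.0 / precision     # fused estimate is tighter than either input
```

Because each measurement simply adds a factor to the graph whenever it arrives, asynchronous update rates pose no structural problem, which is the property the abstract highlights.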

  1. Combinatorial programming of human neuronal progenitors using magnetically-guided stoichiometric mRNA delivery.

    PubMed

    Azimi, Sayyed M; Sheridan, Steven D; Ghannad-Rezaie, Mostafa; Eimon, Peter M; Yanik, Mehmet Fatih

    2018-05-01

    Identification of optimal transcription-factor expression patterns to direct cellular differentiation along a desired pathway presents significant challenges. We demonstrate massively combinatorial screening of temporally-varying mRNA transcription factors to direct differentiation of neural progenitor cells using a dynamically-reconfigurable magnetically-guided spotting technology for localizing mRNA, enabling experiments on millimetre-sized spots. In addition, we present a time-interleaved delivery method that dramatically reduces fluctuations in the delivered transcription-factor copy numbers per cell. We screened combinatorial and temporal delivery of a pool of midbrain-specific transcription factors to augment the generation of dopaminergic neurons. We show that the combinatorial delivery of LMX1A, FOXA2 and PITX3 is highly effective in generating dopaminergic neurons from midbrain progenitors. We show that LMX1A significantly increases TH expression levels when delivered to neural progenitor cells either during proliferation or after induction of neural differentiation, while FOXA2 and PITX3 increase expression only when delivered prior to induction, demonstrating temporal dependence of factor addition. © 2018, Azimi et al.

  2. An Improved Multi-Sensor Fusion Navigation Algorithm Based on the Factor Graph.

    PubMed

    Zeng, Qinghua; Chen, Weina; Liu, Jianye; Wang, Huizhe

    2017-03-21

    An integrated navigation system coupled with additional sensors can be used in Micro Unmanned Aerial Vehicle (MUAV) applications because the multi-sensor information is redundant and complementary, which can markedly improve system accuracy. How to deal with the information gathered from different sensors efficiently is an important problem. The fact that different sensors provide measurements asynchronously may complicate the processing of these measurements. In addition, the output signals of some sensors appear to have a non-linear character. In order to incorporate these measurements and calculate a navigation solution in real time, a multi-sensor fusion algorithm based on the factor graph is proposed. The global optimum solution is factorized according to the chain structure of the factor graph, which allows for a more general form of the conditional probability density. This converts the fusion problem into connecting the factors defined by these measurements to the graph, without considering the relationship between the sensor update frequency and the fusion period. An experimental MUAV system has been built and some experiments have been performed to prove the effectiveness of the proposed method.
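
The chain-structured factor-graph fusion described above can be illustrated with a toy linear least-squares version: each measurement, whatever its sensor or arrival time, simply adds a factor to the graph, and the estimate is recovered by solving the stacked weighted system. All states, measurements, and noise levels below are invented for illustration; the paper's implementation handles non-linear factors and real MUAV sensors.

```python
import numpy as np

# States: scalar position at 4 fusion epochs. Each measurement becomes a
# factor (a weighted residual) attached to the graph, regardless of which
# sensor produced it or when it arrived.
n = 4
rows, rhs, w = [], [], []

def add_factor(coeffs, value, sigma):
    """Append one factor: coeffs @ x ~ value, with standard deviation sigma."""
    rows.append(coeffs); rhs.append(value); w.append(1.0 / sigma)

# Odometry-style factors between consecutive states (x[k+1] - x[k] = 1.0)
for k in range(n - 1):
    c = np.zeros(n); c[k], c[k + 1] = -1.0, 1.0
    add_factor(c, 1.0, 0.1)

# Two asynchronous absolute-position fixes (e.g. GNSS) on states 0 and 3
c = np.zeros(n); c[0] = 1.0; add_factor(c, 0.05, 0.5)
c = np.zeros(n); c[3] = 1.0; add_factor(c, 3.10, 0.5)

# Maximum a posteriori estimate = weighted least squares over all factors
A = np.array(rows) * np.array(w)[:, None]
b = np.array(rhs) * np.array(w)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 3))
```

Because every measurement enters only as an extra row, sensors running at different rates never need to be synchronized to a common fusion period.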

  3. A Simple Method to Reduce both Lactic Acid and Ammonium Production in Industrial Animal Cell Culture

    PubMed Central

    Freund, Nathaniel W.; Croughan, Matthew S.

    2018-01-01

    Fed-batch animal cell culture is the most common method for commercial production of recombinant proteins. However, higher cell densities in these platforms are still limited due to factors such as excessive ammonium production, lactic acid production, nutrient limitation, and/or hyperosmotic stress related to nutrient feeds and base additions to control pH. To partly overcome these factors, we investigated a simple method to reduce both ammonium and lactic acid production—termed Lactate Supplementation and Adaptation (LSA) technology—through the use of CHO cells adapted to a lactate-supplemented medium. Using this simple method, we achieved a reduction of nearly 100% in lactic acid production with a simultaneous 50% reduction in ammonium production in batch shaker flask cultures. In subsequent fed-batch bioreactor cultures, lactic acid production and base addition were both reduced eight-fold. Viable cell densities of 35 million cells per mL and integral viable cell days of 273 million cell-days per mL were achieved, both among the highest currently reported for a fed-batch animal cell culture. Investigating the benefits of LSA technology in animal cell culture is worthy of further consideration and may lead to process conditions more favorable for advanced industrial applications. PMID:29382079

  4. A Simple Method to Reduce both Lactic Acid and Ammonium Production in Industrial Animal Cell Culture.

    PubMed

    Freund, Nathaniel W; Croughan, Matthew S

    2018-01-28

    Fed-batch animal cell culture is the most common method for commercial production of recombinant proteins. However, higher cell densities in these platforms are still limited due to factors such as excessive ammonium production, lactic acid production, nutrient limitation, and/or hyperosmotic stress related to nutrient feeds and base additions to control pH. To partly overcome these factors, we investigated a simple method to reduce both ammonium and lactic acid production-termed Lactate Supplementation and Adaptation (LSA) technology-through the use of CHO cells adapted to a lactate-supplemented medium. Using this simple method, we achieved a reduction of nearly 100% in lactic acid production with a simultaneous 50% reduction in ammonium production in batch shaker flask cultures. In subsequent fed-batch bioreactor cultures, lactic acid production and base addition were both reduced eight-fold. Viable cell densities of 35 million cells per mL and integral viable cell days of 273 million cell-days per mL were achieved, both among the highest currently reported for a fed-batch animal cell culture. Investigating the benefits of LSA technology in animal cell culture is worthy of further consideration and may lead to process conditions more favorable for advanced industrial applications.

  5. An appraisal of statistical procedures used in derivation of reference intervals.

    PubMed

    Ichihara, Kiyoshi; Boyd, James C

    2010-11-01

    When conducting studies to derive reference intervals (RIs), various statistical procedures are commonly applied at each step, from the planning stages to final computation of RIs. Determination of the necessary sample size is an important consideration, and evaluation of at least 400 individuals in each subgroup has been recommended to establish reliable common RIs in multicenter studies. Multiple regression analysis allows identification of the most important factors contributing to variation in test results, while accounting for possible confounding relationships among these factors. Of the various approaches proposed for judging the necessity of partitioning reference values, nested analysis of variance (ANOVA) is the likely method of choice owing to its ability to handle multiple groups and to adjust for multiple factors. Box-Cox power transformation often has been used to transform data to a Gaussian distribution for parametric computation of RIs. However, this transformation occasionally fails. Therefore, the non-parametric method, based on determination of the 2.5th and 97.5th percentiles after sorting the data, has been recommended for general use. The performance of the Box-Cox transformation can be improved by introducing an additional parameter representing the origin of transformation. In simulations, the confidence intervals (CIs) of reference limits (RLs) calculated by the parametric method were narrower than those calculated by the non-parametric approach. However, the margin of difference was rather small owing to additional variability in parametrically-determined RLs introduced by estimation of parameters for the Box-Cox transformation. The parametric calculation method may have an advantage over the non-parametric method in allowing identification and exclusion of extreme values during RI computation.
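
The recommended non-parametric computation is easy to state concretely: sort the reference values and take the 2.5th and 97.5th percentiles. A minimal sketch on simulated data follows; the log-normal sample and its parameters are illustrative only, not a real analyte.

```python
import numpy as np

rng = np.random.default_rng(7)
# Simulated reference values from 400 individuals (log-normal, since many
# analytes are right-skewed); the distribution parameters are invented.
values = rng.lognormal(mean=1.0, sigma=0.25, size=400)

# Non-parametric reference interval: sort the data and take the 2.5th and
# 97.5th percentiles, as recommended for general use.
lower, upper = np.percentile(np.sort(values), [2.5, 97.5])
print(round(lower, 2), round(upper, 2))
```

A parametric interval would instead Box-Cox-transform the data (optionally with the extra origin parameter the abstract mentions), compute mean ± 1.96 SD, and back-transform the limits.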

  6. Using multi-criteria decision making for selection of the optimal strategy for municipal solid waste management.

    PubMed

    Jovanovic, Sasa; Savic, Slobodan; Jovicic, Nebojsa; Boskovic, Goran; Djordjevic, Zorica

    2016-09-01

    Multi-criteria decision making (MCDM) is a relatively new tool for decision makers who deal with numerous and often contradictory factors during their decision making process. This paper presents a procedure to choose the optimal municipal solid waste (MSW) management system for the area of the city of Kragujevac (Republic of Serbia) based on the MCDM method. Two methods of multiple attribute decision making, SAW (simple additive weighting) and TOPSIS (technique for order preference by similarity to ideal solution), were used to compare the proposed waste management strategies (WMS). Each of the created strategies was simulated using the software package IWM2. Total values for eight chosen parameters were calculated for all the strategies. The contribution of each of the six waste treatment options was valorized. The SAW analysis was used to obtain the sum characteristics for all the waste management treatment strategies, and they were ranked accordingly. The TOPSIS method was used to calculate the relative closeness factors to the ideal solution for all the alternatives. Then, the proposed strategies were ranked in the form of tables and diagrams based on both MCDM methods. As shown in this paper, the results were in good agreement, which additionally confirmed and facilitated the choice of the optimal MSW management strategy. © The Author(s) 2016.
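
Both ranking methods named above are short enough to sketch. The decision matrix, weights, and benefit-type criteria below are invented; the study's eight IWM2-derived parameters would take their place.

```python
import numpy as np

# Decision matrix: rows = candidate waste-management strategies,
# columns = criteria (all benefit-type here for simplicity; values invented).
X = np.array([[7., 9., 6.],
              [8., 6., 8.],
              [6., 8., 7.]])
w = np.array([0.5, 0.3, 0.2])          # criteria weights (assumed)

# SAW: normalize each criterion by its maximum, then take the weighted sum.
saw = (X / X.max(axis=0)) @ w

# TOPSIS: vector-normalize, weight, then measure distance to the
# ideal (best) and anti-ideal (worst) points.
V = X / np.linalg.norm(X, axis=0) * w
ideal, anti = V.max(axis=0), V.min(axis=0)
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
topsis = d_neg / (d_pos + d_neg)       # relative closeness factor in [0, 1]

# Rank strategies (best first) under each method
print(saw.argsort()[::-1], topsis.argsort()[::-1])
```

Agreement between the two rankings, as reported in the paper, increases confidence in the selected strategy.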

  7. A novel diagnosis method for a Hall plates-based rotary encoder with a magnetic concentrator.

    PubMed

    Meng, Bumin; Wang, Yaonan; Sun, Wei; Yuan, Xiaofang

    2014-07-31

    In the last few years, rotary encoders based on two-dimensional complementary metal-oxide-semiconductor (CMOS) Hall plates with a magnetic concentrator have been developed to measure contactless absolute angle. There are various error factors influencing the measuring accuracy, which are difficult to locate after assembly of the encoder. In this paper, a model-based rapid diagnosis method is presented. Based on an analysis of the error mechanism, an error model is built to minimize the residual angle error and to quantify the error factors. Additionally, a modified particle swarm optimization (PSO) algorithm is used to reduce the computational cost. The simulation and experimental results show that this diagnosis method is able to quantify the causes of the error and to reduce the number of iterations significantly.
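
The modified PSO used to fit the error model is not given in the abstract; a plain PSO minimizing a stand-in residual function shows the general mechanism. The objective, swarm parameters, and target values are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual(p):
    # Stand-in objective: squared residual between modelled and 'observed'
    # angle error; the real error model is the one built in the paper.
    return np.sum((p - np.array([0.3, -0.7])) ** 2)

# Plain PSO (the paper uses a modified variant to cut iterations).
n, dim, iters = 20, 2, 100
x = rng.uniform(-1, 1, (n, dim)); v = np.zeros((n, dim))
pbest = x.copy(); pval = np.array([residual(p) for p in x])
g = pbest[pval.argmin()].copy()        # global best

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = x + v
    f = np.array([residual(p) for p in x])
    better = f < pval
    pbest[better], pval[better] = x[better], f[better]
    g = pbest[pval.argmin()].copy()

print(np.round(g, 3))
```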

  8. Multidimensional QoE of Multiview Video and Selectable Audio IP Transmission

    PubMed Central

    Nunome, Toshiro; Ishida, Takuya

    2015-01-01

    We evaluate QoE of multiview video and selectable audio (MVV-SA), in which users can switch not only video but also audio according to a viewpoint change request, transmitted over IP networks by a subjective experiment. The evaluation is performed by the semantic differential (SD) method with 13 adjective pairs. In the subjective experiment, we ask assessors to evaluate 40 stimuli which consist of two kinds of UDP load traffic, two kinds of fixed additional delay, five kinds of playout buffering time, and selectable or unselectable audio (i.e., MVV-SA or the previous MVV-A). As a result, MVV-SA gives higher presence to the user than MVV-A and then enhances QoE. In addition, we employ factor analysis for subjective assessment results to clarify the component factors of QoE. We then find that three major factors affect QoE in MVV-SA. PMID:26106640

  9. Adherence to nutritional therapy in obese adolescents; a review.

    PubMed

    França, Silvana Lima Guimarães; Sahade, Viviane; Nunes, Mônica; Adan, Luis F

    2013-01-01

    Considering the controversies existent on the subject, the aim of this review is to discuss adherence to diet in obese adolescents. The selection of articles was made in the SCOPUS, COCHRANE, APA Psyc Net, SciELO, LILACS, CAPES Journals, PUBMED/MEDLINE and GOOGLE ACADEMIC databases. Studies published between 2002 and 2012 were selected. There was a lack of conceptual discussion about adherence to diet in obesity in the child-youth context, in addition to scarcity of data on adherence to diet itself in obese adolescents and the methods of evaluating it. Lastly, multiple interdependent factors were found which both facilitated and hindered adherence to diet among obese adolescents. The majority of these factors belong to the socioeconomic and cultural dimension; cognitive and psychological factors and those associated with health services and professionals were also noted. Copyright © AULA MEDICA EDICIONES 2013. Published by AULA MEDICA. All rights reserved.

  10. Employment and the associated impact on quality of life in people diagnosed with schizophrenia.

    PubMed

    Bouwmans, Clazien; de Sonneville, Caroline; Mulder, Cornelis L; Hakkaart-van Roijen, Leona

    2015-01-01

    A systematic review was conducted to assess the employment rate of people with schizophrenia. Additionally, information from the selected studies concerning factors associated with employment and health-related quality of life (HRQoL) was examined. Employment rates ranged from 4% to 50.4%. The studies differed considerably in design, patient settings, and methods of recruitment. The most frequently reported factors associated with employment were negative and cognitive symptoms, age of onset, and duration and course of the disease. Individual characteristics associated with unemployment were older age, lower education, and sex (female). Additionally, environmental factors, eg, the availability of welfare benefits and vocational support programs, seemed to play a role. Generally, being employed was positively associated with HRQoL. However, the causal direction of this association remained unclear, as studies on the bidirectional relationship between employment and HRQoL were lacking.

  11. Recovery of failed solid-state anaerobic digesters.

    PubMed

    Yang, Liangcheng; Ge, Xumeng; Li, Yebo

    2016-08-01

    This study examined the performance of three methods for recovering failed solid-state anaerobic digesters. The 9-L digesters, which were fed with corn stover, failed at a feedstock/inoculum (F/I) ratio of 10 with negligible methane yields. To recover the systems, inoculum was added to bring the F/I ratio to 4. Inoculum was either added to the top of a failed digester, injected into it, or well-mixed with the existing feedstock. Digesters using the top-addition and injection methods resumed quickly and achieved peak yields in 10 days, while digesters using the well-mixed method recovered slowly but showed 50% higher peak yields. Overall, these methods recovered 30-40% methane from failed digesters. The well-mixed method showed the highest methane yield, followed by the injection and top-addition methods. Recovered digesters outperformed digesters that had been run at a constant F/I ratio of 4. Slow mass transfer and slow growth of microbes were believed to be the major limiting factors for recovery. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Assessment of composite motif discovery methods.

    PubMed

    Klepper, Kjetil; Sandve, Geir K; Abul, Osman; Johansen, Jostein; Drablos, Finn

    2008-02-26

    Computational discovery of regulatory elements is an important area of bioinformatics research and more than a hundred motif discovery methods have been published. Traditionally, most of these methods have addressed the problem of single motif discovery - discovering binding motifs for individual transcription factors. In higher organisms, however, transcription factors usually act in combination with nearby bound factors to induce specific regulatory behaviours. Hence, recent focus has shifted from single motifs to the discovery of sets of motifs bound by multiple cooperating transcription factors, so called composite motifs or cis-regulatory modules. Given the large number and diversity of methods available, independent assessment of methods becomes important. Although there have been several benchmark studies of single motif discovery, no similar studies have previously been conducted concerning composite motif discovery. We have developed a benchmarking framework for composite motif discovery and used it to evaluate the performance of eight published module discovery tools. Benchmark datasets were constructed based on real genomic sequences containing experimentally verified regulatory modules, and the module discovery programs were asked to predict both the locations of these modules and to specify the single motifs involved. To aid the programs in their search, we provided position weight matrices corresponding to the binding motifs of the transcription factors involved. In addition, selections of decoy matrices were mixed with the genuine matrices on one dataset to test the response of programs to varying levels of noise. Although some of the methods tested tended to score somewhat better than others overall, there were still large variations between individual datasets and no single method performed consistently better than the rest in all situations. 
The variation in performance on individual datasets also shows that the new benchmark datasets represent a suitable variety of challenges to most methods for module discovery.
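
A minimal sketch of the kind of matrix-based scoring the benchmark supplies to the tools: sliding a position weight matrix along a sequence and reporting windows above a threshold. The matrix, sequence, and threshold here are invented; real matrices would come from curated motif databases.

```python
import numpy as np

# Toy position weight matrix (log-odds scores) for a length-4 motif;
# rows are A, C, G, T. Values are illustrative only.
pwm = np.array([[ 1.2, -0.5, -1.0,  0.8],   # A
                [-0.7,  1.0, -0.5, -1.2],   # C
                [-0.5, -0.8,  1.1, -0.6],   # G
                [-1.0, -0.9, -0.9,  0.9]])  # T
idx = {b: i for i, b in enumerate("ACGT")}

def scan(seq, pwm, threshold=2.0):
    """Return (position, score) for windows scoring above the threshold."""
    L = pwm.shape[1]
    hits = []
    for i in range(len(seq) - L + 1):
        score = sum(pwm[idx[b], j] for j, b in enumerate(seq[i:i + L]))
        if score >= threshold:
            hits.append((i, round(score, 2)))
    return hits

print(scan("TTACGATTTTACGT", pwm))
```

Composite-motif tools layer spacing and co-occurrence constraints for several such matrices on top of this basic scan; the decoy-matrix experiment in the benchmark tests how well they tolerate irrelevant matrices in the input set.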

  13. Detecting and correcting the bias of unmeasured factors using perturbation analysis: a data-mining approach.

    PubMed

    Lee, Wen-Chung

    2014-02-05

    The randomized controlled study is the gold-standard research method in biomedicine. In contrast, the validity of a (nonrandomized) observational study is often questioned because of unknown/unmeasured factors, which may have confounding and/or effect-modifying potential. In this paper, the author proposes a perturbation test to detect the bias of unmeasured factors and a perturbation adjustment to correct for such bias. The proposed method circumvents the problem of measuring unknowns by collecting the perturbations of unmeasured factors instead. Specifically, a perturbation is a variable that is readily available (or can be measured easily) and is potentially associated, though perhaps only very weakly, with unmeasured factors. The author conducted extensive computer simulations to provide a proof of concept. Computer simulations show that, as the number of perturbation variables increases from data mining, the power of the perturbation test increased progressively, up to nearly 100%. In addition, after the perturbation adjustment, the bias decreased progressively, down to nearly 0%. The data-mining perturbation analysis described here is recommended for use in detecting and correcting the bias of unmeasured factors in observational studies.
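
The idea can be demonstrated in simulation, much as the author does: an unmeasured factor confounds the exposure-outcome relation, and adjusting for many weak "perturbation" proxies of that factor shrinks the bias. All coefficients below are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5000, 30
u = rng.normal(size=n)                    # unmeasured confounding factor
x = 0.8 * u + rng.normal(size=n)          # exposure, confounded by u
y = 0.5 * u + rng.normal(size=n)          # outcome: no true effect of x

# Perturbations: many easily measured variables, each only weakly
# associated with the unmeasured factor.
P = 0.2 * u[:, None] + rng.normal(size=(n, k))

def x_coef(outcome, predictors):
    """Least-squares coefficient on x (first predictor column)."""
    beta, *_ = np.linalg.lstsq(
        np.column_stack([np.ones(n), predictors]), outcome, rcond=None)
    return beta[1]

crude    = x_coef(y, x[:, None])                  # biased by u
adjusted = x_coef(y, np.column_stack([x, P]))     # proxies absorb part of u
print(round(crude, 3), round(adjusted, 3))
```

With more (or stronger) perturbation variables the adjusted coefficient moves progressively closer to the true null effect, mirroring the simulation results reported in the abstract.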

  14. Factorization in large-scale many-body calculations

    DOE PAGES

    Johnson, Calvin W.; Ormand, W. Erich; Krastev, Plamen G.

    2013-08-07

    One approach for solving interacting many-fermion systems is the configuration-interaction method, also sometimes called the interacting shell model, where one finds eigenvalues of the Hamiltonian in a many-body basis of Slater determinants (antisymmetrized products of single-particle wavefunctions). The resulting Hamiltonian matrix is typically very sparse, but for large systems the nonzero matrix elements can nonetheless require terabytes or more of storage. An alternate algorithm, applicable to a broad class of systems with symmetry, in our case rotational invariance, is to exactly factorize both the basis and the interaction using additive/multiplicative quantum numbers; such an algorithm recreates the many-body matrix elements on the fly and can reduce the storage requirements by an order of magnitude or more. Here, we discuss factorization in general and introduce a novel, generalized factorization method, essentially a ‘double-factorization’ which speeds up basis generation and set-up of required arrays. Although we emphasize techniques, we also place factorization in the context of a specific (unpublished) configuration-interaction code, BIGSTICK, which runs both on serial and parallel machines, and discuss the savings in memory due to factorization.
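
The additive-quantum-number factorization can be illustrated with a toy two-sector basis: group one sector's states by their (additive) quantum number, then build the combined basis by lookup instead of enumerating and filtering the full product space. Labels and quantum numbers here are invented; BIGSTICK's actual scheme is far more elaborate.

```python
from collections import defaultdict

# Toy sector states: (label, jz) for a 'proton' and a 'neutron' subspace.
protons  = [("p0", -1), ("p1", 0), ("p2", 1)]
neutrons = [("n0", -1), ("n1", 0), ("n2", 1), ("n3", 2)]
target_jz = 1   # additive quantum number of the combined many-body basis

# Group one sector by its jz so combined states are found by lookup,
# never forming (and storing) the full product basis.
by_jz = defaultdict(list)
for label, jz in neutrons:
    by_jz[jz].append(label)

basis = [(p, n) for p, pjz in protons for n in by_jz[target_jz - pjz]]
print(basis)
```

Because each combined state is reconstructed from its sector pieces, matrix elements can likewise be rebuilt on the fly from small sector-level arrays, which is the source of the memory savings the paper reports.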

  15. Study on Commercialization of Biogasification Systems in Ishikari Bay New Port Area - Proposal of Estimation Method of Collectable Amount of Food Waste by using Binary Logit Model -

    NASA Astrophysics Data System (ADS)

    Watanabe, Sho; Furuichi, Toru; Ishii, Kazuei

    This study proposed an estimation method for the collectable amount of food waste considering the food waste generators' cooperation ratio and the amount of food waste generation, and clarified the factors influencing the collectable amount of food waste. In our method, the cooperation ratio was calculated by using the binary logit model, which is often used for multiple-choice problems in traffic research. In order to develop a more precise binary logit model, the factors influencing the cooperation ratio were extracted by a questionnaire survey of food waste generators' intentions, and a preference investigation was then conducted as the second step. As a result, the collectable amount of food waste was estimated to be 72 [t/day] in the Ishikari Bay new port area under the current collection system by using our method. In addition, the most critical factor influencing the collectable amount of food waste was the treatment fee for households, and the permitted degree of improper-material mixing for retail trade and restaurant businesses.
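
A binary logit cooperation model of the kind described reduces to a logistic function of the explanatory factors. The coefficients, fee values, and generation amount below are placeholders, not the paper's fitted values.

```python
import math

# Binary logit: cooperation probability as a function of factors such as
# the treatment fee (coefficients b0, b_fee are illustrative, not fitted).
def cooperation_prob(fee, b0=1.5, b_fee=-0.08):
    u = b0 + b_fee * fee                  # utility of choosing to cooperate
    return 1.0 / (1.0 + math.exp(-u))     # logistic choice probability

generation = 120.0                        # food-waste generation [t/day], assumed
for fee in (0, 10, 30):
    p = cooperation_prob(fee)
    # collectable amount = cooperation ratio x generation
    print(fee, round(p, 3), round(p * generation, 1))
```

Raising the fee lowers the cooperation probability and hence the collectable amount, which is the sensitivity the study identifies for households.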

  16. A Fatigue Life Prediction Method Based on Strain Intensity Factor

    PubMed Central

    Zhang, Wei; Liu, Huili; Wang, Qiang; He, Jingjing

    2017-01-01

    In this paper, a strain-intensity-factor-based method is proposed to calculate the fatigue crack growth under the fully reversed loading condition. A theoretical analysis is conducted in detail to demonstrate that the strain intensity factor is likely to be a better driving parameter correlated with the fatigue crack growth rate than the stress intensity factor (SIF), especially for some metallic materials (such as 316 austenitic stainless steel) in the low cycle fatigue region with negative stress ratios R (typically R = −1). For fully reversed cyclic loading, the constitutive relation between stress and strain should follow the cyclic stress-strain curve rather than the monotonic one (it is a nonlinear function even within the elastic region). Based on that, a transformation algorithm between the SIF and the strain intensity factor is developed, and the fatigue crack growth rate testing data of 316 austenitic stainless steel and AZ31 magnesium alloy are employed to validate the proposed model. It is clearly observed that the scatter band width of crack growth rate vs. strain intensity factor is narrower than that vs. the SIF for different load ranges (which indicates that the strain intensity factor is a better parameter than the stress intensity factor under the fully reversed load condition). It is also shown that the crack growth rate is not uniquely determined by the SIF range even under the same R, but is also influenced by the maximum loading. Additionally, the fatigue life data (strain-life curve) of smooth cylindrical specimens are also used for further comparison, where a modified Paris equation and the equivalent initial flaw size (EIFS) are involved. The results of the proposed method have a better agreement with the experimental data compared to the stress intensity factor based method. 
Overall, the strain intensity factor method shows a fairly good ability in calculating the fatigue crack propagation, especially for the fully reversed cyclic loading condition. PMID:28773049
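
The modified Paris equation with an equivalent initial flaw size can be sketched generically: integrate a Paris-type growth law from the EIFS to a critical crack size, with an intensity-factor range as the driving parameter. The sketch below uses the ordinary stress form; the paper substitutes the strain intensity factor. Constants, geometry factor, and loading are illustrative only.

```python
import math

# Paris-type growth law da/dN = C * (dK)^m, integrated in cycle blocks.
C, m = 1e-11, 3.0          # material constants (assumed units: m, MPa*sqrt(m))
d_sigma = 200.0            # fully reversed stress range [MPa] (assumed)
Y = 1.12                   # geometry factor for a surface crack (assumed)
a, a_crit = 0.0005, 0.01   # EIFS-like initial and critical crack size [m]

cycles, block = 0, 1000
while a < a_crit:
    dK = Y * d_sigma * math.sqrt(math.pi * a)   # intensity-factor range
    a += C * dK ** m * block                    # growth over one block
    cycles += block

print(cycles)   # predicted fatigue life in cycles
```

Replacing `dK` with a strain intensity factor range computed from the cyclic stress-strain curve is the substitution the paper argues narrows the scatter for fully reversed loading.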

  17. USING DOSE ADDITION TO ESTIMATE CUMULATIVE RISKS FROM EXPOSURES TO MULTIPLE CHEMICALS

    EPA Science Inventory

    The Food Quality Protection Act (FQPA) of 1996 requires the EPA to consider the cumulative risk from exposure to multiple chemicals that have a common mechanism of toxicity. Three methods, hazard index (HI), point-of-departure index (PODI), and toxicity equivalence factor (TEF), ...
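
Of the three methods named, the hazard index is the simplest to state: under dose addition, exposures are summed after scaling each by its reference dose. Chemical names, exposures, and reference doses below are invented for illustration.

```python
# Hazard index under dose addition: sum of exposure / reference-dose ratios.
# All values are invented placeholders in mg/kg/day.
exposures = {"chem_a": 0.002, "chem_b": 0.010, "chem_c": 0.0005}
rfd       = {"chem_a": 0.010, "chem_b": 0.050, "chem_c": 0.0100}

hi = sum(exposures[c] / rfd[c] for c in exposures)
print(round(hi, 3))   # HI > 1 would flag a potential cumulative concern
```

PODI and TEF approaches follow the same additive logic but scale by points of departure or by potency relative to an index chemical, respectively.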

  18. A fast marching algorithm for the factored eikonal equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Treister, Eran, E-mail: erantreister@gmail.com; Haber, Eldad, E-mail: haber@math.ubc.ca; Department of Mathematics, The University of British Columbia, Vancouver, BC

    The eikonal equation is instrumental in many applications in several fields ranging from computer vision to geoscience. This equation can be efficiently solved using the iterative Fast Sweeping (FS) methods and the direct Fast Marching (FM) methods. However, when used for a point source, the original eikonal equation is known to yield inaccurate numerical solutions, because of a singularity at the source. In this case, the factored eikonal equation is often preferred, and is known to yield a more accurate numerical solution. One application that requires the solution of the eikonal equation for point sources is travel time tomography. This inverse problem may be formulated using the eikonal equation as a forward problem. While this problem has been solved using FS in the past, the more recent choice for applying it involves FM methods because of the efficiency in which sensitivities can be obtained using them. However, while several FS methods are available for solving the factored equation, the FM method is available only for the original eikonal equation. In this paper we develop a Fast Marching algorithm for the factored eikonal equation, using both first and second order finite-difference schemes. Our algorithm follows the same lines as the original FM algorithm and requires the same computational effort. In addition, we show how to obtain sensitivities using this FM method and apply travel time tomography, formulated as an inverse factored eikonal equation. Numerical results in two and three dimensions show that our algorithm solves the factored eikonal equation efficiently, and demonstrate the achieved accuracy for computing the travel time. We also demonstrate a recovery of a 2D and 3D heterogeneous medium by travel time tomography using the eikonal equation for forward modeling and inversion by Gauss–Newton.
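
The factored form can be stated compactly. This is the standard formulation, sketched here from general knowledge rather than taken verbatim from the paper: the travel time $T$ is written as a product $T = T_0\,\tau$, where $T_0$ is the analytically known travel time in a constant reference medium with the slowness of the source point, so the source singularity is carried by $T_0$ and the unknown factor $\tau$ is smooth near the source.

```latex
\[
  |\nabla T(\mathbf{x})| = s(\mathbf{x}), \qquad
  T = T_0\,\tau, \qquad
  T_0(\mathbf{x}) = s(\mathbf{x}_0)\,\lVert \mathbf{x} - \mathbf{x}_0 \rVert
\]
\[
  \bigl| T_0\,\nabla\tau + \tau\,\nabla T_0 \bigr| = s(\mathbf{x})
\]
```

The Fast Marching algorithm of the paper then solves the second equation for $\tau$ on a grid, using first- or second-order upwind finite differences, with the same causal ordering as the original FM method.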

  19. The effects of deterioration and technological levels on pollutant emission factors for gasoline light-duty trucks.

    PubMed

    Zhang, Qingyu; Fan, Juwang; Yang, Weidong; Chen, Bixin; Zhang, Lijuan; Liu, Jiaoyu; Wang, Jingling; Zhou, Chunyao; Chen, Xuan

    2017-07-01

    Vehicle deterioration and technological change influence emission factors (EFs). In this study, the impacts of vehicle deterioration and emission standards on EFs of regulated pollutants (carbon monoxide [CO], hydrocarbon [HC], and nitrogen oxides [NOx]) for gasoline light-duty trucks (LDTs) were investigated according to the inspection and maintenance (I/M) data using a chassis dynamometer method. Pollutant EFs for LDTs markedly varied with accumulated mileages and emission standards, and the trends of EFs are associated with accumulated mileages. In addition, the study also found that in most cases, the median EFs of CO, HC, and NOx are higher than those of basic EFs in the International Vehicle Emissions (IVE) model; therefore, the present study provides correction factors for the IVE model relative to the corresponding emission standards and mileages. Currently, vehicle emissions are great contributors to air pollution in cities, especially in developing countries. Emission factors play a key role in creating emission inventories and estimating emissions. Deterioration, represented by vehicle age and accumulated mileage, and changes of emission standards markedly influence emission factors. In addition, the results provide correction factors for implementation of the IVE model at the regional level.

  20. Improved Conjugate Gradient Bundle Adjustment of Dunhuang Wall Painting Images

    NASA Astrophysics Data System (ADS)

    Hu, K.; Huang, X.; You, H.

    2017-09-01

    Bundle adjustment with additional parameters is identified as a critical step for precise orthoimage generation and 3D reconstruction of Dunhuang wall paintings. Due to the introduction of self-calibration parameters and quasi-planar constraints, the structure of the coefficient matrix of the reduced normal equation is banded-bordered, making the solving process of bundle adjustment complex. In this paper, the Conjugate Gradient Bundle Adjustment (CGBA) method is deduced by calculus of variations. A preconditioning method based on improved incomplete Cholesky factorization is adopted to reduce the condition number of the coefficient matrix, as well as to accelerate the iteration rate of CGBA. Both theoretical analysis and experimental comparison with the conventional method indicate that the proposed method can effectively conquer the ill-conditioned problem of the normal equation and considerably improve the calculation efficiency of bundle adjustment with additional parameters, while maintaining the actual accuracy.
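
The core numerical step, conjugate gradients with a preconditioner applied to the reduced normal equations, can be sketched with a diagonal (Jacobi) preconditioner standing in for the paper's improved incomplete Cholesky factorization. The small symmetric positive definite system is illustrative only.

```python
import numpy as np

# Preconditioned conjugate gradient on a small SPD system.
def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r                 # apply the (diagonal) preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r             # preconditioned residual
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4., 1., 0.], [1., 3., 1.], [0., 1., 5.]])   # stand-in normal matrix
b = np.array([1., 2., 3.])
x = pcg(A, b, M_inv=1.0 / np.diag(A))
print(np.round(x, 6))
```

A better preconditioner (such as the incomplete Cholesky variant in the paper) lowers the condition number of the preconditioned system and so cuts the iteration count further, which matters for the large banded-bordered matrices of self-calibrating bundle adjustment.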

  1. Fingerprinting of music scores

    NASA Astrophysics Data System (ADS)

    Irons, Jonathan; Schmucker, Martin

    2004-06-01

    Publishers of sheet music are generally reluctant to distribute their content via the Internet. Although the advantages of online sheet music distribution are numerous, the potential risk of Intellectual Property Rights (IPR) infringement, e.g. illegal online distribution, discourages any propensity to innovate. While active protection techniques only deter external risk factors, additional technology is necessary to adequately treat further risk factors. For several media types, including music scores, watermarking technology has been developed, which embeds information in data by suitable data modifications. Furthermore, fingerprinting or perceptual hashing methods have been developed and are being applied, especially for audio. These methods allow the identification of content without prior modifications. In this article we motivate the development of watermarking and fingerprinting technologies for sheet music. Starting from potential limitations of watermarking methods, we explain why fingerprinting methods are important for sheet music and address potential applications. Finally, we introduce a concept for fingerprinting of sheet music.

  2. Development and Implementation of a Coagulation Factor Testing Method Utilizing Autoverification in a High-volume Clinical Reference Laboratory Environment

    PubMed Central

    Riley, Paul W.; Gallea, Benoit; Valcour, Andre

    2017-01-01

    Background: Testing coagulation factor activities requires that multiple dilutions be assayed and analyzed to produce a single result. The slope of the line created by plotting measured factor concentration against sample dilution is evaluated to discern the presence of inhibitors giving rise to nonparallelism. Moreover, samples producing results on initial dilution falling outside the analytic measurement range of the assay must be tested at additional dilutions to produce reportable results. Methods: The complexity of this process has motivated a large clinical reference laboratory to develop advanced computer algorithms with automated reflex testing rules to complete coagulation factor analysis. A method was developed for autoverification of coagulation factor activity using expert rules developed on an off-the-shelf, commercially available data manager system integrated into an automated coagulation platform. Results: Here, we present an approach allowing for the autoverification and reporting of factor activity results with greatly diminished technologist effort. Conclusions: To the best of our knowledge, this is the first report of its kind providing a detailed procedure for implementation of autoverification expert rules as applied to coagulation factor activity testing. Advantages of this system include ease of training for new operators, minimization of technologist time spent, reduction of staff fatigue, minimization of unnecessary reflex tests, optimization of turnaround time, and assurance of the consistency of the testing and reporting process. PMID:28706751
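
A sketch of the parallelism check underlying such reflex rules: regress dilution-corrected activity against log dilution and flag a slope large enough to suggest an inhibitor. The data and the threshold are invented, not the laboratory's actual rules.

```python
import numpy as np

# Each coagulation factor result comes from several dilutions; if the
# dilution-corrected activity drifts with dilution (non-zero slope),
# an inhibitor causing nonparallelism is suspected. Values are invented.
dilutions = np.array([1, 2, 4, 8], dtype=float)
activity  = np.array([52.0, 60.0, 71.0, 85.0])   # % activity, dilution-corrected

# Fit activity against log2(dilution); flag if the slope exceeds a tolerance.
slope, intercept = np.polyfit(np.log2(dilutions), activity, 1)
flag = abs(slope) > 5.0        # reflex-test threshold (assumed)
print(round(slope, 2), "reflex to inhibitor work-up" if flag else "autoverify")
```

In an autoverification system this kind of rule fires automatically in the data manager, so only flagged samples reach a technologist.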

  3. Using logic model methods in systematic review synthesis: describing complex pathways in referral management interventions

    PubMed Central

    2014-01-01

    Background There is increasing interest in innovative methods to carry out systematic reviews of complex interventions. Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond that obtained via conventional review methods. Methods This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature. The potential value of the model produced was explored with stakeholders. Results The review identified 295 papers that met the inclusion criteria. The papers consisted of 141 intervention studies and 154 non-intervention quantitative and qualitative articles. A logic model was systematically built from these studies. The model outlines interventions, short term outcomes, moderating and mediating factors and long term demand management outcomes and impacts. Interventions were grouped into typologies of practitioner education, process change, system change, and patient intervention. Short-term outcomes identified that may result from these interventions were changed physician or patient knowledge, beliefs or attitudes and also interventions related to changed doctor-patient interaction. A range of factors which may influence whether these outcomes lead to long term change were detailed. Demand management outcomes and intended impacts included content of referral, rate of referral, and doctor or patient satisfaction. Conclusions The logic model details evidence and assumptions underpinning the complex pathway from interventions to demand management impact. The method offers a useful addition to systematic review methodologies. Trial registration number PROSPERO registration number: CRD42013004037. PMID:24885751

  4. Potential capabilities for compression of information of certain data processing systems

    NASA Technical Reports Server (NTRS)

    Khodarev, Y. K.; Yevdokimov, V. P.; Pokras, V. M.

    1974-01-01

    This article studies a generalized block diagram of a data collection and processing system of a spacecraft in which a number of sensors or outputs of scientific instruments are cyclically interrogated by a commutator; methods of writing the supplementary information in a frame, using the example of a hypothetical telemetry system; and the influence of the statistics of the number of active channels in a frame on the frame compression factor. Separating the data compression factor of the spacecraft collection and processing system into two parts, as done in this work, allows the compression factor of an active frame to be determined not only from the statistics of channel activity in the telemetry frame, but also from the method of introducing the additional address and time information into each frame.

  5. Novel Selective Detection Method of Tumor Angiogenesis Factors Using Living Nano-Robots.

    PubMed

    Al-Fandi, Mohamed; Alshraiedeh, Nida; Owies, Rami; Alshdaifat, Hala; Al-Mahaseneh, Omamah; Al-Tall, Khadijah; Alawneh, Rawan

    2017-07-14

    This paper reports a novel self-detection method for tumor cells using living nano-robots. These living robots are a nonpathogenic strain of E. coli bacteria equipped with naturally synthesized bio-nano-sensory systems that have an affinity to vascular endothelial growth factor (VEGF), an angiogenic factor overexpressed by cancer cells. The VEGF affinity/chemotaxis was assessed using several assays, including the capillary chemotaxis assay, the chemotaxis assay on soft agar, and the chemotaxis assay on solid agar. In addition, a microfluidic device was developed to detect tumor cells through the overexpressed VEGF. Various experiments on the sensing characteristics of the nano-robots showed a strong response toward VEGF. Thus, a new paradigm of selective targeting therapies for cancer can be advanced using swimming E. coli as self-navigating miniaturized robots as well as drug-delivery vehicles.

  6. Gene Ranking of RNA-Seq Data via Discriminant Non-Negative Matrix Factorization.

    PubMed

    Jia, Zhilong; Zhang, Xiang; Guan, Naiyang; Bo, Xiaochen; Barnes, Michael R; Luo, Zhigang

    2015-01-01

    RNA-sequencing is rapidly becoming the method of choice for studying the full complexity of transcriptomes; however, with increasing dimensionality, accurate gene ranking is becoming increasingly challenging. This paper proposes an accurate and sensitive gene ranking method that implements discriminant non-negative matrix factorization (DNMF) for RNA-seq data. To the best of our knowledge, this is the first work to explore the utility of DNMF for gene ranking. Incorporating Fisher's discriminant criterion and setting the reduced dimension to two, DNMF learns two factors to approximate the original gene expression data, abstracting the up-regulated or down-regulated metagene by using the sample label information. The first factor denotes each gene's weights on the two metagenes, expressing each metagene as an additive combination of all genes, while the second factor represents the expression values of the two metagenes. In the gene ranking stage, all the genes are ranked in descending order according to the differential values of the metagene weights. Leveraging the nature of NMF and Fisher's criterion, DNMF can robustly boost gene ranking performance. Area Under the Curve analysis of differential expression on two benchmarking tests of four RNA-seq data sets with similar phenotypes showed that the proposed DNMF-based gene ranking method outperforms other widely used methods. Moreover, Gene Set Enrichment Analysis also showed that DNMF outperforms the others. DNMF is also computationally efficient, substantially outperforming all other benchmarked methods. Consequently, we suggest DNMF is an effective method for the analysis of differential gene expression and gene ranking for RNA-seq data.
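
    As a rough sketch of the ranking idea, plain (non-discriminant) rank-2 NMF can stand in for DNMF on synthetic data; the matrix sizes, iteration count, and Frobenius-norm multiplicative updates below are assumptions for illustration, and the Fisher-criterion term of DNMF is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def nmf(V, rank=2, iters=500, eps=1e-9):
        """Plain multiplicative-update NMF (Frobenius norm)."""
        n, m = V.shape
        W = rng.random((n, rank)) + eps
        H = rng.random((rank, m)) + eps
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    # Synthetic matrix: 6 genes x 8 samples, two phenotype groups of 4 samples.
    V = np.ones((6, 8))
    V[0, 4:] += 5.0          # gene 0 up-regulated in group 2
    V[1, :4] += 5.0          # gene 1 up-regulated in group 1
    V += 0.05 * rng.random((6, 8))

    W, H = nmf(V)
    Wn = W / W.sum(axis=0)               # remove arbitrary column scaling of W
    scores = np.abs(Wn[:, 0] - Wn[:, 1]) # differential metagene weight per gene
    ranking = np.argsort(-scores)        # descending: top-ranked genes first
    ```

    After normalizing the columns of W, the genes with the largest differential metagene weights rise to the top of the ranking, mirroring the descending-sequence step described above.
    
    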

  7. A mixture model with a reference-based automatic selection of components for disease classification from protein and/or gene expression levels

    PubMed Central

    2011-01-01

    Background Bioinformatics data analysis often uses a linear mixture model representing samples as an additive mixture of components. Properly constrained blind matrix factorization methods extract those components using mixture samples only. However, automatic selection of the extracted components to be retained for classification analysis remains an open issue. Results The method proposed here is applied to well-studied protein and genomic datasets of ovarian, prostate and colon cancers to extract components for disease prediction. It achieves average sensitivities of 96.2% (sd = 2.7%), 97.6% (sd = 2.8%) and 90.8% (sd = 5.5%) and average specificities of 93.6% (sd = 4.1%), 99% (sd = 2.2%) and 79.4% (sd = 9.8%) in 100 independent two-fold cross-validations. Conclusions We propose an additive mixture model of a sample for feature extraction using, in principle, sparseness-constrained factorization on a sample-by-sample basis. In contrast, existing methods factorize the complete dataset simultaneously. The sample model is composed of a reference sample representing control and/or case (disease) groups and a test sample. Each sample is decomposed into two or more components that are selected automatically (without using label information) as control specific, case specific or not differentially expressed (neutral). The number of components is determined by cross-validation. Automatic assignment of features (m/z ratios or genes) to a particular component is based on thresholds estimated from each sample directly. Due to the locality of decomposition, the strength of the expression of each feature across samples can vary, yet features will still be allocated to the related disease- and/or control-specific component. Since label information is not used in the selection process, case- and control-specific components can be used for classification, which is not the case with standard factorization methods. Moreover, the component selected by the proposed method as disease specific can be interpreted as a sub-mode and retained for further analysis to identify potential biomarkers. In contrast to standard matrix factorization methods, this can be achieved on a sample (experiment)-by-sample basis. Postulating one or more components with indifferent features enables their removal from the disease- and control-specific components on a sample-by-sample basis. This yields selected components with reduced complexity and, generally, increased prediction accuracy. PMID:22208882

  8. Predictors of language service availability in U.S. hospitals

    PubMed Central

    Schiaffino, Melody K.; Al-Amin, Mona; Schumacher, Jessica R.

    2014-01-01

    Background: Hispanics comprise 17% of the total U.S. population, surpassing African-Americans as the largest minority group. Linguistically, almost 60 million people speak a language other than English. This language diversity can create barriers and additional burden and risk when seeking health services. Patients with Limited English Proficiency (LEP) for example, have been shown to experience a disproportionate risk of poor health outcomes, making the provision of Language Services (LS) in healthcare facilities critical. Research on the determinants of LS adoption has focused more on overall cultural competence and internal managerial decision-making than on measuring LS adoption as a process outcome influenced by contextual or external factors. The current investigation examines the relationship between state policy, service area factors, and hospital characteristics on hospital LS adoption. Methods: We employ a cross-sectional analysis of survey data from a national sample of hospitals in the American Hospital Association (AHA) database for 2011 (N= 4876) to analyze hospital characteristics and outcomes, augmented with additional population data from the American Community Survey (ACS) to estimate language diversity in the hospital service area. Additional data from the National Health Law Program (NHeLP) facilitated the state level Medicaid reimbursement factor. Results: Only 64% of hospitals offered LS. Hospitals that adopted LS were more likely to be not-for-profit, in areas with higher than average language diversity, larger, and urban. Hospitals in above average language diverse counties had more than 2-fold greater odds of adopting LS than less language diverse areas [Adjusted Odds Ratio (AOR): 2.26, P< 0.01]. Further, hospitals with a strategic orientation toward diversity had nearly 2-fold greater odds of adopting LS (AOR: 1.90, P< 0.001). 
    Conclusion: Our findings support the importance of structural and contextual factors as they relate to healthcare delivery. Healthcare organizations must address the needs of the population they serve and align their efforts internally. Current financial incentives, including Medicaid reimbursement funds, do not appear to influence adoption of LS, suggesting that further alignment of incentives is needed. Organizational and system-level factors have a place in disparities research and warrant further analysis; additional spatial methods could enhance our understanding of population factors critical to system-level health services research. PMID:25337600
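
    For intuition about the reported quantities: the paper's adjusted odds ratios come from regression models, but an unadjusted odds ratio from a 2x2 table illustrates what is being estimated; the counts below are hypothetical, not AHA data.

    ```python
    import math

    # Hypothetical 2x2 table (illustrative counts only):
    # rows: above-/below-average language diversity; cols: adopted LS / did not
    a, b = 1200, 400    # diverse counties:      adopted, not adopted
    c, d = 1900, 1376   # less diverse counties: adopted, not adopted

    odds_ratio = (a * d) / (b * c)

    # 95% confidence interval on the log-odds scale (Woolf's method)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se)
    ```

    An odds ratio above 1 with a confidence interval excluding 1 would correspond to the kind of association reported in the abstract (e.g., the roughly 2-fold greater odds for language-diverse counties).
    
    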

  9. Dereverberation and denoising based on generalized spectral subtraction by multi-channel LMS algorithm using a small-scale microphone array

    NASA Astrophysics Data System (ADS)

    Wang, Longbiao; Odani, Kyohei; Kai, Atsuhiko

    2012-12-01

    A blind dereverberation method based on power spectral subtraction (SS) using a multi-channel least mean squares algorithm was previously proposed to suppress reverberant speech without additive noise. The results of isolated word speech recognition experiments showed that this method achieved significant improvements over conventional cepstral mean normalization (CMN) in a reverberant environment. In this paper, we propose a blind dereverberation method based on generalized spectral subtraction (GSS), which has been shown to be effective for noise reduction, instead of power SS. Furthermore, we extend the missing feature theory (MFT), which was initially proposed to enhance robustness against additive noise, to dereverberation. A one-stage dereverberation and denoising method based on GSS is presented to simultaneously suppress both the additive noise and the nonstationary multiplicative noise (reverberation). The proposed dereverberation method based on GSS with MFT is evaluated on a large vocabulary continuous speech recognition task. When the additive noise was absent, the dereverberation method based on GSS with MFT using only 2 microphones achieved relative word error reduction rates of 11.4% and 32.6% compared to the dereverberation method based on power SS and the conventional CMN, respectively. For reverberant and noisy speech, the dereverberation and denoising method based on GSS achieved a relative word error reduction rate of 12.8% compared to the conventional CMN with a GSS-based additive noise reduction method. We also analyze the factors affecting compensation parameter estimation for the SS-based dereverberation method, such as the number of channels (microphones), the length of reverberation to be suppressed, and the length of the utterance used for parameter estimation. The experimental results showed that the SS-based method is robust in a variety of reverberant environments for both isolated and continuous speech recognition and under various parameter estimation conditions.
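
    The subtraction rule at the core of GSS can be sketched as follows; the exponent, over-subtraction factor, and flooring constant are illustrative defaults (gamma = 2 recovers ordinary power SS), and the toy "late reverberation" here is a crude scaled copy of preceding frames rather than the paper's multi-channel LMS estimate.

    ```python
    import numpy as np

    def gss_suppress(spec, interference, gamma=1.0, alpha=1.0, beta=0.01):
        """Generalized spectral subtraction on per-frame spectra.

        spec:         observed spectrum frames, shape (frames, bins)
        interference: estimated interference (late reverberation and/or
                      additive noise) per frame, same shape
        gamma: spectral exponent (gamma=2 gives ordinary power SS)
        alpha: over-subtraction factor
        beta:  spectral floor, prevents negative/zero components
        """
        x = np.maximum(spec, 1e-12) ** gamma
        n = np.maximum(interference, 1e-12) ** gamma
        s = x - alpha * n
        s = np.maximum(s, beta * x)       # flooring
        return s ** (1.0 / gamma)

    # Toy frames: direct sound plus a crude "late reverberation" estimate.
    rng = np.random.default_rng(1)
    clean = rng.random((4, 8)) + 0.5
    reverb = 0.3 * np.roll(clean, 1, axis=0)
    observed = clean + reverb
    enhanced = gss_suppress(observed, reverb)
    ```

    With an accurate interference estimate the subtraction recovers the direct-path spectrum; in practice the flooring constant trades residual reverberation against musical-noise artifacts.
    
    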

  10. AZTEC: A parallel iterative package for solving linear systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, S.A.; Shadid, J.N.; Tuminaro, R.S.

    1996-12-31

    We describe a parallel linear system package, AZTEC. The package incorporates a number of parallel iterative methods (e.g. GMRES, biCGSTAB, CGS, TFQMR) and preconditioners (e.g. Jacobi, Gauss-Seidel, polynomial, domain decomposition with LU or ILU within subdomains). Additionally, AZTEC allows for the reuse of previous preconditioning factorizations within Newton schemes for nonlinear methods. Currently, a number of different users are using this package to solve a variety of PDE applications.
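
    As a minimal illustration of the simplest preconditioner family AZTEC offers, the classical point-Jacobi iteration can be written directly; the small diagonally dominant system below is illustrative and unrelated to AZTEC's distributed data structures.

    ```python
    import numpy as np

    def jacobi(A, b, tol=1e-10, max_iter=500):
        """Classical Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k).

        Converges when A is strictly diagonally dominant.
        """
        D = np.diag(A)
        R = A - np.diagflat(D)
        x = np.zeros_like(b, dtype=float)
        for _ in range(max_iter):
            x_new = (b - R @ x) / D
            if np.linalg.norm(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 5.0, 2.0],
                  [0.0, 2.0, 6.0]])
    b = np.array([1.0, 2.0, 3.0])
    x = jacobi(A, b)
    ```

    In packages like AZTEC such stationary iterations are more commonly used as preconditioners for Krylov methods (GMRES, CGS, TFQMR) than as standalone solvers.
    
    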

  11. Estimating the impact of grouping misclassification on risk prediction when using the relative potency factors method to assess mixtures risk

    EPA Science Inventory

    Environmental health risk assessments of chemical mixtures that rely on component approaches often begin by grouping the chemicals of concern according to toxicological similarity. Approaches that assume dose addition typically are used for groups of similarly-acting chemicals an...

  12. Applicability of Herzberg's Motivator-Hygiene Theory in Studying Academic Motivation.

    ERIC Educational Resources Information Center

    Magoon, Robert A.; James, Aaron

    1978-01-01

    Forty-one community college students were asked to recall one college-related event which made them feel good and one which made them feel bad, and provide additional information about each. Results were analyzed using Herzberg's methods to identify factors related to student motivation, as "satisfiers" or "dissatisfiers" and…

  13. The Social Physique Anxiety Scale: an example of the potential consequence of negatively worded items in factorial validity studies.

    PubMed

    Motl, R W; Conroy, D E; Horan, P M

    2000-01-01

    Social physique anxiety (SPA) based on Hart, Leary, and Rejeski's (1989) Social Physique Anxiety Scale (SPAS) was originally conceptualized to be a unidimensional construct. Empirical evidence on the factorial validity of the SPAS has been contradictory, yielding both one- and two-factor models. The two-factor model, which consists of separate factors associated with positively and negatively worded items, has stimulated an ongoing debate about the dimensionality and content of the SPAS. The present study employed confirmatory factor analysis (CFA) to examine whether the two-factor solution to the 12-item SPAS was substantively meaningful or a methodological artifact. Results of the CFAs, which were performed on responses from four different samples (Eklund, Kelley, and Wilson, 1997; Eklund, Mack, and Hart, 1996), supported the existence of a single substantive SPA factor underlying responses to the 12-item SPAS. There were, in addition, method effects associated with the negatively worded items that could be modeled to achieve good fit. Therefore, it was concluded that a single substantive factor and a non-substantive method effect primarily related to the negatively worded items best represented the 12-item SPAS.

  14. Confirmatory factor analysis of the Child Health Questionnaire-Parent Form 50 in a predominantly minority sample.

    PubMed

    Hepner, Kimberly A; Sechrest, Lee

    2002-12-01

    The Child Health Questionnaire-Parent Form 50 (CHQ-PF50; Landgraf JM et al., The CHQ User's Manual. Boston, MA: The Health Institute, New England Medical Centre, 1996) appears to be a useful method of assessing children's health. The CHQ-PF50 is designed to measure general functional status and well-being and is available in several versions to suit the needs of the health researcher. Several publications have reported favorably on the psychometric properties of the CHQ. Landgraf et al. reported the results of an exploratory factor analysis at the scale level that provided evidence for a two-factor structure representing physical and psychosocial dimensions of health. In order to cross-validate and extend these results, a confirmatory factor analysis was conducted with an independent sample of generally healthy, predominantly minority children. Results of the analysis indicate that a two-factor model provides a good fit to the data, confirming previous exploratory analyses with this questionnaire. One additional method factor seems likely because of the substantial similarity of three of the scales, but that does not affect the substantive two-factor interpretation overall.

  15. Kinetics of Hydrogen Abstraction and Addition Reactions of 3-Hexene by ȮH Radicals.

    PubMed

    Yang, Feiyu; Deng, Fuquan; Pan, Youshun; Zhang, Yingjia; Tang, Chenglong; Huang, Zuohua

    2017-03-09

    Rate coefficients of H atom abstraction and H atom addition reactions of 3-hexene by hydroxyl radicals were determined using both conventional transition-state theory and canonical variational transition-state theory, with the potential energy surface (PES) evaluated at the CCSD(T)/CBS//BHandHLYP/6-311G(d,p) level and quantum mechanical effects corrected by compound methods including the one-dimensional Wigner method, the multidimensional zero-curvature tunneling method, and the small-curvature tunneling method. Results reveal that approximately 70% of the overall H atom abstractions occur at the allylic site via both direct and indirect channels. The indirect channel, containing two van der Waals prereactive complexes, exhibits a rate coefficient two times larger than the direct one. The OH addition reaction also involves two van der Waals complexes, and its submerged barrier results in negative-temperature-coefficient behavior at low temperatures. The OH addition pathway dominates only at temperatures below 450 K, whereas the H atom abstraction reactions dominate overwhelmingly at temperatures over 1000 K. All of the rate coefficients, calculated with an uncertainty of a factor of 5, were fitted to a quasi-Arrhenius formula. Analyses of the PES, the minimum reaction path, and the activation Gibbs free energy were also performed in this study.
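
    The quasi-Arrhenius fitting step, k(T) = A T^n exp(-E/(RT)), reduces to linear least squares in log space, since ln k = ln A + n ln T - E/(RT); the rate data below are generated from assumed parameters, not the paper's computed coefficients.

    ```python
    import numpy as np

    R = 8.314  # gas constant, J mol^-1 K^-1

    # Synthetic rate coefficients from assumed (illustrative) parameters.
    A_true, n_true, E_true = 1.0e6, 1.5, 2.0e4
    T = np.linspace(300.0, 2000.0, 30)
    k = A_true * T**n_true * np.exp(-E_true / (R * T))

    # ln k is linear in the unknowns (ln A, n, E):
    X = np.column_stack([np.ones_like(T), np.log(T), -1.0 / (R * T)])
    coef, *_ = np.linalg.lstsq(X, np.log(k), rcond=None)
    A_fit, n_fit, E_fit = np.exp(coef[0]), coef[1], coef[2]
    ```

    With real tabulated rate coefficients the same three-parameter fit yields the modified-Arrhenius expressions commonly supplied to kinetic mechanisms.
    
    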

  16. Enhancing the estimation of blood pressure using pulse arrival time and two confounding factors.

    PubMed

    Baek, Hyun Jae; Kim, Ko Keun; Kim, Jung Soo; Lee, Boreom; Park, Kwang Suk

    2010-02-01

    A new method of blood pressure (BP) estimation using multiple regression with pulse arrival time (PAT) and two confounding factors was evaluated in clinical and unconstrained monitoring situations. For the first analysis with clinical data, electrocardiogram (ECG), photoplethysmogram (PPG) and invasive BP signals were obtained by a conventional patient monitoring device during surgery. In the second analysis, ECG, PPG and non-invasive BP were measured using systems developed to obtain data under conditions in which the subject was not constrained. To enhance the performance of BP estimation methods, heart rate (HR) and arterial stiffness were considered as confounding factors in regression analysis. The PAT and HR were easily extracted from ECG and PPG signals. For arterial stiffness, the duration from the maximum derivative point to the maximum of the dicrotic notch in the PPG signal, a parameter called TDB, was employed. In two experiments that normally cause BP variation, the correlation between measured BP and the estimated BP was investigated. Multiple-regression analysis with the two confounding factors improved correlation coefficients for diastolic blood pressure and systolic blood pressure to acceptable confidence levels, compared to existing methods that consider PAT only. In addition, reproducibility for the proposed method was determined using constructed test sets. Our results demonstrate that non-invasive, non-intrusive BP estimation can be obtained using methods that can be applied in both clinical and daily healthcare situations.
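
    The multiple-regression step (BP regressed on PAT plus the HR and TDB confounding factors) is ordinary least squares; the data below are simulated under assumed coefficients and carry no physiological meaning.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 200

    # Simulated predictors: pulse arrival time (s), heart rate (bpm), and
    # TDB, the PPG-derived arterial-stiffness surrogate (s).
    pat = rng.normal(0.25, 0.02, n)
    hr = rng.normal(70.0, 8.0, n)
    tdb = rng.normal(0.30, 0.03, n)

    # Simulated systolic BP with an assumed linear dependence plus noise
    # (shorter PAT -> higher BP, hence the negative PAT coefficient).
    sbp = (120.0 - 200.0 * (pat - 0.25) + 0.3 * (hr - 70.0)
           - 50.0 * (tdb - 0.30) + rng.normal(0.0, 2.0, n))

    # Multiple regression: intercept + PAT + two confounding factors.
    X = np.column_stack([np.ones(n), pat, hr, tdb])
    beta, *_ = np.linalg.lstsq(X, sbp, rcond=None)
    pred = X @ beta
    r = np.corrcoef(pred, sbp)[0, 1]
    ```

    The correlation r between predicted and reference BP is the figure of merit the abstract reports improving once HR and TDB are added to the PAT-only model.
    
    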

  17. Systematic evaluation of implementation fidelity of complex interventions in health and social care

    PubMed Central

    2010-01-01

    Background Evaluation of an implementation process and its fidelity can give insight into the 'black box' of interventions. However, a lack of standardized methods for studying fidelity and the implementation process has been reported, which might be one reason why few prior studies in the field of health service research have systematically evaluated interventions' implementation processes. The aim of this project is to systematically evaluate the implementation fidelity of complex interventions in health and social care, along with possible factors influencing fidelity. Methods A modified version of The Conceptual Framework for Implementation Fidelity will be used as the conceptual model for the evaluation. The modification adds two moderating factors: context and recruitment. A systematic evaluation process was developed. The multiple case study method is used to investigate the implementation of three complex health service interventions. Each case will be investigated in depth and longitudinally, using both quantitative and qualitative methods. Discussion This study is the first attempt to empirically test The Conceptual Framework for Implementation Fidelity. The study can highlight mechanisms and factors of importance when implementing complex interventions. In particular, the role of the moderating factors in implementation fidelity can be clarified. Trial Registration Supported Employment, SE, among people with severe mental illness -- a randomized controlled trial: NCT00960024. PMID:20815872

  18. Distinguishing Error from Chaos in Ecological Time Series

    NASA Astrophysics Data System (ADS)

    Sugihara, George; Grenfell, Bryan; May, Robert M.

    1990-11-01

    Over the years, there has been much discussion about the relative importance of environmental and biological factors in regulating natural populations. Often it is thought that environmental factors are associated with stochastic fluctuations in population density, and biological ones with deterministic regulation. We revisit these ideas in the light of recent work on chaos and nonlinear systems. We show that completely deterministic regulatory factors can lead to apparently random fluctuations in population density, and we then develop a new method (that can be applied to limited data sets) to make practical distinctions between apparently noisy dynamics produced by low-dimensional chaos and population variation that in fact derives from random (high-dimensional) noise, such as environmental stochasticity or sampling error. To show its practical use, the method is first applied to models where the dynamics are known. We then apply the method to several sets of real data, including newly analysed data on the incidence of measles in the United Kingdom. Here the additional problems of secular trends and spatial effects are explored. In particular, we find that on a city-by-city scale measles exhibits low-dimensional chaos (as has previously been found for measles in New York City), whereas on a larger, country-wide scale the dynamics appear as a noisy two-year cycle. In addition to shedding light on the basic dynamics of some nonlinear biological systems, this work dramatizes how the scale on which data is collected and analysed can affect the conclusions drawn.
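
    The heart of the method, forecast skill that decays with prediction horizon for low-dimensional chaos but stays flat for uncorrelated noise, can be sketched with a one-dimensional nearest-neighbor predictor on the logistic map; this is a simplified stand-in for the authors' full procedure, with illustrative series length and horizons.

    ```python
    import numpy as np

    def forecast_skill(series, horizon, split=0.5):
        """Nearest-neighbor forecasting: for each point in the second half,
        find its closest analogue in the first half (the 'library') and use
        the analogue's value `horizon` steps ahead as the prediction.
        Returns the correlation between predictions and observations."""
        n = len(series)
        lib_end = int(n * split)
        library = series[:lib_end - horizon]
        preds, obs = [], []
        for t in range(lib_end, n - horizon):
            j = int(np.argmin(np.abs(library - series[t])))
            preds.append(series[j + horizon])
            obs.append(series[t + horizon])
        return float(np.corrcoef(preds, obs)[0, 1])

    # Deterministic chaos: the fully chaotic logistic map.
    x = np.empty(1000)
    x[0] = 0.2
    for i in range(999):
        x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

    skill_short = forecast_skill(x, horizon=1)
    skill_long = forecast_skill(x, horizon=7)
    ```

    For a purely random series the same skill is uniformly low at every horizon; it is this contrast in how skill falls off with horizon that distinguishes chaos from noise.
    
    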

  19. Finite element techniques applied to cracks interacting with selected singularities

    NASA Technical Reports Server (NTRS)

    Conway, J. C.

    1975-01-01

    The finite-element method for computing the extensional stress-intensity factor for cracks approaching selected singularities of varied geometry is described. Stress-intensity factors are generated using both displacement and J-integral techniques, and numerical results are compared to those obtained experimentally in a photoelastic investigation. The selected singularities considered are a colinear crack, a circular penetration, and a notched circular penetration. Results indicate that singularities greatly influence the crack-tip stress-intensity factor as the crack approaches the singularity. In addition, the degree of influence can be regulated by varying the overall geometry of the singularity. Local changes in singularity geometry have little effect on the stress-intensity factor for the cases investigated.

  20. A quasi-likelihood approach to non-negative matrix factorization

    PubMed Central

    Devarajan, Karthik; Cheung, Vincent C.K.

    2017-01-01

    A unified approach to non-negative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proven using the Expectation-Maximization algorithm. In addition, a measure to evaluate the goodness-of-fit of the resulting factorization is described. The proposed methods allow modeling of non-linear effects via appropriate link functions and are illustrated using an application in biomedical signal processing. PMID:27348511

  1. Stress-intensity factors and crack-opening displacements for round compact specimens. [fracture toughness of metallic materials

    NASA Technical Reports Server (NTRS)

    Newman, J. C., Jr.

    1979-01-01

    A two-dimensional boundary-collocation stress analysis was used to analyze various round compact specimens. The influence of the round external boundary and of pin-loaded holes on stress intensity factors and crack opening displacements was determined as a function of crack-length-to-specimen-width ratios. A wide-range equation for the stress intensity factors was developed. Equations for crack-surface displacements and load-point displacements were also developed. In addition, stress intensity factors were calculated from compliance methods to demonstrate that load-displacement records must be made at the loading points and not along the crack line for crack-length-to-specimen-width ratios less than about 0.4.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.

    Here, we present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography-mass spectrometry methods. We specifically apply this new analysis method to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol samples' sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Two-dimensional PMF can then effectively perform three-dimensional factorization on the three-dimensional TAG mass spectra data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single compound identification and integration. In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular level resolution on other bulk aerosol components commonly observed by the AMS.
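
    The binning step can be sketched as follows: each sample's scans are summed into evenly spaced retention-time bins, and flattening the binned spectra yields that sample's row of the PMF input matrix; the scan count, bin width, and m/z dimensions below are illustrative.

    ```python
    import numpy as np

    def bin_chromatogram(ret_times, spectra, t_min, t_max, bin_width):
        """Sum mass spectra into evenly spaced retention-time bins.

        ret_times: (scans,) retention time of each scan
        spectra:   (scans, mz_channels) mass spectrum at each scan
        Returns a (n_bins, mz_channels) array; flattening it gives one
        row of the PMF input matrix for this sample.
        """
        edges = np.arange(t_min, t_max + bin_width, bin_width)
        n_bins = len(edges) - 1
        binned = np.zeros((n_bins, spectra.shape[1]))
        idx = np.clip(np.digitize(ret_times, edges) - 1, 0, n_bins - 1)
        for i, spec in zip(idx, spectra):
            binned[i] += spec
        return binned

    # Toy sample: 100 scans over 0-10 min, 5 m/z channels, 1-min bins.
    rng = np.random.default_rng(3)
    times = np.linspace(0.0, 10.0, 100, endpoint=False)
    spectra = rng.random((100, 5))
    row = bin_chromatogram(times, spectra, 0.0, 10.0, 1.0).ravel()
    ```

    Stacking one such row per sample produces the samples-by-bins matrix that two-dimensional PMF then factorizes; retention-time shift correction, as noted above, must precede binning when shifts exceed the bin width.
    
    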

  3. [Acceptance and rejection of vasectomy in rural males].

    PubMed

    García Moreno, Juan; Solano Sainos, Luis Miguel

    2005-01-01

    One problem in the rural population is the gap between contraceptive coverage and scant masculine participation, which could be due to lack of information or to other sociocultural factors. We investigated, in two stages, the characteristics or profile of the sexual and reproductive behavior of males: an exploratory study by means of focus groups to determine their relevant motivations and characteristics, and subsequently a structured questionnaire to ascertain the magnitude of the factors explored. The population corresponded to rural hospital medical service zones in seven ethnic groups of the Mexican Republic and included men who accepted and who rejected vasectomy. The profile of males who accepted vasectomy showed that there exists an unsatisfied demand for contraceptive protection and the desire not to have additional children; in addition, we found that the decision to accept vasectomy is determined to a greater extent by reasons other than information on the contraceptive method. The substantial proportion of men who were non-users of contraceptive methods yet accepted vasectomy suggested that information on contraception would be the most consistent reason; nonetheless, this information was not considered sufficient and timely. Thus, an adverse economic situation, or a condition related to the couple such as the health of or love for the female partner, are the weightier reasons for deciding to accept vasectomy, while the fear of poor sexual performance is the most powerful factor for rejection of vasectomy. Masculine participation in family planning is a factor that conditions contraceptive coverage and its respective benefits. The profile of the male who accepts vasectomy aids in identifying candidates for the procedure and in reducing unsatisfied demand. Greater diffusion of information on vasectomy as a contraceptive method, stronger links between male needs and vasectomy, and maintained or increased access to family planning are required.

  4. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization.

    PubMed

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate.
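The abstract does not spell out the exact update rule, but the general recipe, ordered-subsets updates whose relaxation grows with a power of the iteration number, interleaved with a total-variation step, can be sketched on a toy linear system. The power-law schedule, its cap, and all parameter values below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear system A @ x_true = b, standing in for the CBCT forward projection.
n = 64
A = rng.normal(size=(4 * n, n))
x_true = np.zeros(n)
x_true[20:44] = 1.0                      # piecewise-constant object, TV-friendly
b = A @ x_true

def tv_grad(x, eps=1e-8):
    """Smoothed subgradient of the 1-D total variation sum |x[i+1] - x[i]|."""
    d = np.diff(x)
    s = d / np.sqrt(d * d + eps)
    g = np.zeros_like(x)
    g[:-1] -= s
    g[1:] += s
    return g

def os_power_tv(A, b, n_subsets=8, n_iters=20, lam=1.0, power=0.5, tv_step=0.01):
    """Ordered-subsets updates with an assumed power-law relaxation factor,
    interleaved with a small TV-minimization step."""
    m, n = A.shape
    x = np.zeros(n)
    subsets = np.array_split(rng.permutation(m), n_subsets)
    for k in range(n_iters):
        relax = min(lam * (k + 1) ** power, 1.5)   # power-factor acceleration, capped
        for S in subsets:
            As, bs = A[S], b[S]
            x += relax * As.T @ (bs - As @ x) / np.linalg.norm(As, 2) ** 2
        x -= tv_step * tv_grad(x)                  # TV-minimization step
    return x

x_rec = os_power_tv(A, b)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))  # small relative error
```

Cycling over subsets gives many cheap updates per pass over the data, which is where the early-iteration speed-up of OS methods comes from; the growing relaxation factor pushes that further.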

  5. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization

    PubMed Central

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate. PMID:27073853

  6. Using Structured Additive Regression Models to Estimate Risk Factors of Malaria: Analysis of 2010 Malawi Malaria Indicator Survey Data

    PubMed Central

    Chirombo, James; Lowe, Rachel; Kazembe, Lawrence

    2014-01-01

Background After years of implementing Roll Back Malaria (RBM) interventions, the changing landscape of malaria in terms of risk factors and spatial pattern has not been fully investigated. This paper uses the 2010 malaria indicator survey data to investigate if known malaria risk factors remain relevant after many years of interventions. Methods We adopted a structured additive logistic regression model that allowed for spatial correlation, to more realistically estimate malaria risk factors. Our model included child and household level covariates, as well as climatic and environmental factors. Continuous variables were modelled by assuming second-order random walk priors, while spatial correlation was specified as a Markov random field prior, with fixed effects assigned diffuse priors. Inference was fully Bayesian, resulting in an under-five malaria risk map for Malawi. Results Malaria risk increased with increasing age of the child. With respect to socio-economic factors, the greater the household wealth, the lower the malaria prevalence. A general decline in malaria risk was observed as altitude increased. Minimum temperatures and average total rainfall in the three months preceding the survey did not show a strong association with disease risk. Conclusions The structured additive regression model offered a flexible extension to standard regression models by enabling simultaneous modelling of possible nonlinear effects of continuous covariates, spatial correlation and heterogeneity, while estimating usual fixed effects of categorical and continuous observed variables. Our results confirmed that malaria epidemiology is a complex interaction of biotic and abiotic factors at the individual, household, and community levels, and that risk factors are still relevant many years after extensive implementation of RBM activities. PMID:24991915
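A concrete ingredient of such structured additive models is the second-order random walk (RW2) prior assigned to continuous covariates. Its precision (structure) matrix is built from the second-difference operator, so the prior penalises curvature but leaves constants and linear trends unpenalised; a minimal sketch:

```python
import numpy as np

def rw2_precision(n):
    """Precision (structure) matrix of a second-order random walk prior on n
    equally spaced values: K = D.T @ D, with D the second-difference operator,
    so x.T @ K @ x = sum of (x[i] - 2*x[i+1] + x[i+2])**2."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D.T @ D

K = rw2_precision(10)
# The RW2 prior penalises curvature only: constants and straight lines lie in
# the null space, so K is rank-deficient by 2.
print(np.linalg.matrix_rank(K))             # 8
print(np.allclose(K @ np.ones(10), 0))      # True
print(np.allclose(K @ np.arange(10.0), 0))  # True
```

The rank deficiency is why such priors are called "intrinsic": they are improper, and sum-to-zero or similar constraints are typically added during Bayesian inference.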

  7. Adaptive conversion of a high-order mode beam into a near-diffraction-limited beam.

    PubMed

    Zhao, Haichuan; Wang, Xiaolin; Ma, Haotong; Zhou, Pu; Ma, Yanxing; Xu, Xiaojun; Zhao, Yijun

    2011-08-01

We present a new method for efficiently transforming a high-order mode beam into a nearly Gaussian beam with much higher beam quality. The method is based on modulation of the phases of different lobes by a stochastic parallel gradient descent (SPGD) algorithm and on coherent addition after phase flattening. We demonstrate the method by transforming an LP11 mode into a nearly Gaussian beam. The experimental results reveal that the power in the diffraction-limited bucket in the far field is increased by more than a factor of 1.5.
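Stochastic parallel gradient descent itself is simple to sketch: perturb all phases at once, measure the metric on both sides of the perturbation, and step in proportion to the measured difference. The toy below optimizes the phases of a few idealized lobes against a power-in-the-bucket-style metric; the metric, lobe count, and gain settings are illustrative assumptions, not the authors' experimental values:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 4  # number of unit-amplitude lobes in the toy model (hypothetical)

def metric(phases):
    """Proxy for far-field power in the bucket: coherent sum of the lobes,
    normalized so that perfectly co-phased lobes give 1."""
    return abs(np.exp(1j * phases).sum()) ** 2 / N ** 2

def spgd(phases, gain=1.5, perturb=0.1, n_steps=2000):
    """Two-sided SPGD: apply +delta and -delta, use the metric difference
    as a stochastic gradient estimate along delta."""
    phases = phases.copy()
    for _ in range(n_steps):
        delta = perturb * rng.choice([-1.0, 1.0], size=N)
        j_plus = metric(phases + delta)
        j_minus = metric(phases - delta)
        phases += gain * (j_plus - j_minus) * delta
    return phases

phi0 = rng.uniform(-np.pi, np.pi, size=N)
phi_opt = spgd(phi0)
print(metric(phi_opt))  # close to 1 after convergence
```

SPGD needs only a single scalar photodetector signal, no wavefront sensor, which is why it suits coherent-addition setups like the one described.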

  8. That's why I take my ONS. Means-end chain as a novel approach to elucidate the personally relevant factors driving ONS consumption in nutritionally frail elderly users.

    PubMed

    den Uijl, Louise C; Kremer, Stefanie; Jager, Gerry; van der Stelt, Annelies J; de Graaf, Cees; Gibson, Peter; Godfrey, James; Lawlor, J Ben

    2015-06-01

    Oral nutritional supplements (ONS) are a recommended form of nutritional intervention for older malnourished persons when a 'food first' approach and/or food fortification prove ineffective. The efficacy of ONS will depend on, amongst other factors, whether persons do, or do not, consume their prescribed amount. Factors influencing ONS consumption can be product, context, or person related. Whereas product and context have received some attention, little is known about the person factors driving ONS consumption. In addition, the relative importance of the product, context, and person factors to ONS consumption is not known. Using the means-end chain (MEC) method, the current study elucidated personally relevant factors (product, context, and person factors) related to ONS consumption in two groups of older nutritionally frail ONS users: community-dwelling persons and care home residents with mainly somatic disorders. To our knowledge, the current work is the first to apply the MEC method to study older nutritionally frail ONS users. Forty ONS users (n = 20 per group) were recruited via healthcare professionals. The level of frailty was assessed using the FRAIL scale. Both groups were interviewed for 30 to 45 minutes using the soft laddering technique. The laddering data were analysed using LadderUX software™. The MEC method appeared to work well in both groups. The majority of the participants took ONS on their doctor's or dietician's prescription as they trusted their advice. The community-dwelling group took ONS to prolong their independence, whereas the care home group reported values that related more to small improvements in quality of life. In addition, care home residents perceived themselves as dependent on their caregiver for their ONS arrangements, whereas this dependence was not reported by community-dwelling persons. 
Key insights from this work will enable doctors and dieticians to customize their nutritional interventions to ONS users' personal needs and thus positively impact health outcomes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Contextual influences on health worker motivation in district hospitals in Kenya

    PubMed Central

    Mbindyo, Patrick; Gilson, Lucy; Blaauw, Duane; English, Mike

    2009-01-01

Background Organizational factors are considered to be an important influence on health workers' uptake of interventions that improve their practices. These are additionally influenced by factors operating at individual and broader health system levels. We sought to explore contextual influences on worker motivation, a factor that may modify the effect of an intervention aimed at changing clinical practices in Kenyan hospitals. Methods Franco et al.'s model of motivational influences (Health sector reform and public sector health worker motivation: a conceptual framework. Soc Sci Med. 2002;54:1255–66) was used to frame the study. Qualitative methods, including individual in-depth interviews, small-group interviews and focus group discussions, were used to gather data from 185 health workers during one-week visits to each of eight district hospitals. Data were collected prior to a planned intervention aiming to implement new practice guidelines and improve quality of care. Additionally, on-site observations of routine health worker behaviour in the study sites were used to inform analyses. Results Study settings are likely to have important influences on worker motivation. Effective management at hospital level may create an enabling working environment modifying the impact of resource shortfalls. Supportive leadership may foster good working relationships between cadres, improve motivation through provision of local incentives and appropriately handle workers' expectations in terms of promotions, performance appraisal processes, and good communication. Such organisational attributes may counteract de-motivating factors at a national level, such as poor schemes of service, and enhance personally motivating factors such as the desire to maintain professional standards. Conclusion Motivation is likely to influence powerfully any attempts to change or improve health worker and hospital practices. 
Some factors influencing motivation may themselves be influenced by the processes chosen to implement change. PMID:19627590

  10. Optimization of Selective Laser Melting by Evaluation Method of Multiple Quality Characteristics

    NASA Astrophysics Data System (ADS)

    Khaimovich, A. I.; Stepanenko, I. S.; Smelov, V. G.

    2018-01-01

This article describes the application of the Taguchi method to the selective laser melting of a combustion chamber sector, using numerical and physical experiments to achieve minimum thermal deformation. The aim was to produce a quality part with a minimum number of numerical experiments. For the study, the following optimization parameters (independent factors) were chosen: the laser beam power and velocity, and two factors for compensating the effect of residual thermal stresses: the scale factor of the preliminary correction of the part geometry and the number of additional reinforcing elements. We used an orthogonal plan of 9 experiments with each factor varied at three levels (L9). As quality criteria, the distortion values for 9 zones of the combustion chamber and the maximum strength of the chamber material were chosen. Since the quality parameters are multidirectional, a grey relational analysis was used to solve the optimization problem for multiple quality parameters. Using the parameters obtained, the combustion chamber segments of the gas turbine engine were manufactured.
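Grey relational analysis, used above to fold multidirectional quality criteria into a single grade per experiment, follows a standard recipe: min-max normalize each response (larger-the-better or smaller-the-better), convert deviations from the ideal into grey relational coefficients, then average them. The response values below are made up for illustration; only the formulas follow the standard method:

```python
import numpy as np

def grey_relational_grade(y, larger_better, zeta=0.5):
    """Grey relational analysis for multi-response optimisation.
    y: (n_experiments, n_responses); larger_better: one bool per response;
    zeta: distinguishing coefficient, conventionally 0.5."""
    y = np.asarray(y, dtype=float)
    norm = np.empty_like(y)
    for j in range(y.shape[1]):
        lo, hi = y[:, j].min(), y[:, j].max()
        if larger_better[j]:
            norm[:, j] = (y[:, j] - lo) / (hi - lo)   # larger-the-better
        else:
            norm[:, j] = (hi - y[:, j]) / (hi - lo)   # smaller-the-better
    delta = 1.0 - norm                                # deviation from the ideal
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)                         # grey relational grade

# Hypothetical L9-style results: two distortion responses (smaller better)
# and one strength response (larger better).
y = [[0.30, 0.25, 410], [0.22, 0.28, 430], [0.18, 0.20, 400],
     [0.35, 0.33, 445], [0.15, 0.18, 420], [0.28, 0.22, 435],
     [0.20, 0.30, 415], [0.25, 0.15, 425], [0.32, 0.27, 405]]
grades = grey_relational_grade(y, larger_better=[False, False, True])
print(int(np.argmax(grades)))  # → 4: best parameter combination for this data
```

The run with the highest grade identifies the best factor-level combination; factor effects are then read off by averaging grades per level, as in ordinary Taguchi analysis.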

  11. Assessing and Understanding Trail Degradation: Results from Big South Fork National River and Recreational Area

    USGS Publications Warehouse

    Marion, J.L.; Olive, N.

    2006-01-01

    This report describes results from a comprehensive assessment of resource conditions on a large (24%) sample of the trail system within Big South Fork National River and Recreational Area (BSF). Components include research to develop state-of-knowledge trail impact assessment and monitoring methods, application of survey methods to BSF trails, analysis and summary of results, and recommendations for trail management decision making and future monitoring. Findings reveal a trail system with some substantial degradation, particularly soil erosion, which additionally threatens water quality in areas adjacent to streams and rivers. Factors that contribute to or influence these problems are analyzed and described. Principal among these are trail design factors (trail topographic position, soil texture, grade and slope alignment angle), use-related factors (type and amount of use), and maintenance factors (water drainage). Recommendations are offered to assist managers in improving the sustainability of the trails system to accommodate visitation while enhancing natural resource protection.

  12. An adaptive beamforming method for ultrasound imaging based on the mean-to-standard-deviation factor.

    PubMed

    Wang, Yuanguo; Zheng, Chichao; Peng, Hu; Chen, Qiang

    2018-06-12

The beamforming performance has a large impact on image quality in ultrasound imaging. Previously, several adaptive weighting factors, including the coherence factor (CF) and the generalized coherence factor (GCF), have been proposed to improve image resolution and contrast. In this paper, we propose a new adaptive weighting factor for ultrasound imaging, called the signal mean-to-standard-deviation factor (SMSF). SMSF is defined as the mean-to-standard-deviation ratio of the aperture data and is used to weight the output of the delay-and-sum (DAS) beamformer before image formation. Moreover, we develop a robust SMSF (RSMSF) by extending the SMSF to the spatial frequency domain using an altered spectrum of the aperture data. In addition, a square neighborhood average is applied to the RSMSF to offer a smoother square neighborhood RSMSF (SN-RSMSF) value. We compared our methods with DAS, CF, and GCF using simulated and experimental synthetic aperture data sets. The quantitative results show that SMSF results in an 82% lower full width at half-maximum (FWHM) but a 12% lower contrast ratio (CR) compared with CF. Moreover, the SN-RSMSF leads to 15% and 10% improvement, on average, in FWHM and CR compared with GCF while maintaining the speckle quality. This demonstrates that the proposed methods can effectively improve the image resolution and contrast. Copyright © 2018 Elsevier B.V. All rights reserved.
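The core SMSF weighting is easy to sketch from its definition in the abstract: compute the mean-to-standard-deviation ratio across the delayed aperture channels for each pixel and multiply the DAS sum by it. This is a bare-bones illustration, without the spectral RSMSF or neighborhood-averaging extensions:

```python
import numpy as np

def das_smsf(aperture_data, eps=1e-12):
    """aperture_data: (n_pixels, n_elements) array of delayed channel samples.
    Returns the DAS output weighted by the signal mean-to-standard-deviation
    factor (SMSF), as sketched from the abstract's definition."""
    mean = aperture_data.mean(axis=1)
    std = aperture_data.std(axis=1)
    smsf = np.abs(mean) / (std + eps)   # coherence-like weight: high when channels agree
    das = aperture_data.sum(axis=1)
    return smsf * das

# Coherent (aligned) vs incoherent (random) aperture data: the SMSF weight
# strongly suppresses the incoherent pixel relative to plain DAS.
rng = np.random.default_rng(2)
coherent = np.ones((1, 64)) + 0.1 * rng.normal(size=(1, 64))
incoherent = rng.normal(size=(1, 64))
print(das_smsf(coherent)[0] > abs(das_smsf(incoherent)[0]))  # True
```

Like CF, the weight approaches zero where channel signals are uncorrelated (off-axis clutter) and is large where they align (true echoes), which is what sharpens the point spread function.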

  13. Polymerase Chain Reaction/Rapid Methods Are Gaining a Foothold in Developing Countries.

    PubMed

    Ragheb, Suzan Mohammed; Jimenez, Luis

Detection of microbial contamination in pharmaceutical raw materials and finished products is a critical factor to guarantee their safety, stability, and potency. Rapid microbiological methods, such as polymerase chain reaction, have been widely applied to clinical and food quality control analysis. However, polymerase chain reaction applications to pharmaceutical quality control have been rather slow and sporadic. Successful implementation of these methods in pharmaceutical companies in developing countries requires important considerations to provide sensitive and robust assays that will comply with good manufacturing practices. In recent years, several publications have encouraged the application of molecular techniques in the microbiological assessment of pharmaceuticals. One of these techniques is polymerase chain reaction (PCR). The successful application of PCR in the pharmaceutical industry in developing countries is governed by several factors and requirements. These factors include the setting up of a PCR laboratory and the choice of appropriate equipment and reagents. In addition, the presence of well-trained analysts and the establishment of quality control and quality assurance programs are important requirements. Pharmaceutical firms should take these factors into account to improve the chances of regulatory acceptance and wide application of this technique. © PDA, Inc. 2014.

  14. Augmenting matrix factorization technique with the combination of tags and genres

    NASA Astrophysics Data System (ADS)

    Ma, Tinghuai; Suo, Xiafei; Zhou, Jinjuan; Tang, Meili; Guan, Donghai; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2016-11-01

Recommender systems play an important role in our daily life and are becoming popular tools for users to find what they are really interested in. Matrix factorization methods, which are popular recommendation methods, have gained much attention in recent years. With the rapid growth of the Internet, lots of information has been created, like social network information, tags and so on. Along with these, several matrix factorization approaches have been proposed that incorporate the personalized information of users or items. However, except for ratings, most of the matrix factorization models have utilized only one kind of information to understand users' interests. Considering the sparsity of information, in this paper, we investigate the combination of different kinds of information, like tags and genres, to reveal users' interests accurately. With regard to the generalization of genres, a constraint is added when genres are utilized to find users' similar "soulmates". In addition, an item regularizer is considered, based on the latent semantic indexing (LSI) method with the item tags. Our experiments are conducted on two real datasets: the Movielens dataset and the Douban dataset. The experimental results demonstrate that the combination of tags and genres is really helpful to reveal users' interests.
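The abstract does not give the exact objective, but the general shape, rating-based matrix factorization with an item regularizer that pulls tag-similar items' latent vectors together, can be sketched with plain SGD. All hyperparameters and the toy data below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def train_mf(ratings, n_users, n_items, item_sim, k=8, lr=0.02,
             reg=0.05, reg_sim=0.05, n_epochs=500):
    """Matrix factorization trained by SGD on (user, item, rating) triples.
    item_sim maps an item to [(similar_item, similarity)] pairs; the extra
    regularizer pulls latent vectors of tag-similar items together
    (an LSI-style similarity is assumed as the source of these pairs)."""
    P = 0.1 * rng.normal(size=(n_users, k))   # user latent factors
    Q = 0.1 * rng.normal(size=(n_items, k))   # item latent factors
    for _ in range(n_epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            grad_sim = sum(s * (Q[i] - Q[j]) for j, s in item_sim.get(i, []))
            Q[i] += lr * (err * P[u] - reg * Q[i] - reg_sim * grad_sim)
    return P, Q

# Tiny synthetic example: 3 users, 3 items; items 1 and 2 share tags.
ratings = [(0, 0, 5.0), (0, 1, 2.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 3.0)]
item_sim = {1: [(2, 0.8)], 2: [(1, 0.8)]}
P, Q = train_mf(ratings, 3, 3, item_sim)
print(float(P[0] @ Q[0]))  # close to the observed rating of 5.0
```

The similarity regularizer is what lets sparse tag/genre side information shape the latent space even for items with few ratings.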

  15. Molecular dynamics force-field refinement against quasi-elastic neutron scattering data

    DOE PAGES

    Borreguero Calvo, Jose M.; Lynch, Vickie E.

    2015-11-23

Quasi-elastic neutron scattering (QENS) is one of the experimental techniques of choice for probing the dynamics at length and time scales that are also in the realm of full-atom molecular dynamics (MD) simulations. This overlap enables extension of current fitting methods that use time-independent equilibrium measurements to new methods fitting against dynamics data. We present an algorithm that fits simulation-derived incoherent dynamical structure factors against QENS data probing the diffusive dynamics of the system. We showcase the difficulties inherent to this type of fitting problem, namely, the disparity between simulation and experiment environment, as well as limitations in the simulation due to incomplete sampling of phase space. We discuss a methodology to overcome these difficulties and apply it to a set of full-atom MD simulations for the purpose of refining the force-field parameter governing the activation energy of methyl rotation in the octa-methyl polyhedral oligomeric silsesquioxane molecule. Our optimal simulated activation energy agrees with the experimentally derived value up to a 5% difference, well within experimental error. We believe the method will find applicability to other types of diffusive motions and other representations of the system, such as coarse-grained models, where empirical fitting is essential. In addition, the refinement method can be extended to the coherent dynamic structure factor with no additional effort.
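The refinement loop can be caricatured with a one-parameter model: methyl rotation is thermally activated, so the quasi-elastic line width follows an Arrhenius law in the activation energy, and fitting synthetic "experimental" widths recovers that parameter. In the actual workflow each candidate value requires a fresh MD run and a simulated dynamic structure factor; the cheap model call below merely stands in for that expensive step, and all numbers are illustrative:

```python
import numpy as np

kB = 0.0083145  # Boltzmann constant in kJ/(mol*K)

def hwhm(ea, T, gamma0=1000.0):
    """Arrhenius model for the quasi-elastic half-width (arbitrary units):
    the methyl-rotation rate, hence the HWHM, scales as exp(-Ea / kB*T)."""
    return gamma0 * np.exp(-ea / (kB * T))

# Synthetic "experimental" widths at several temperatures, Ea_true = 9 kJ/mol,
# with 3% multiplicative noise.
T = np.array([100.0, 150.0, 200.0, 250.0, 300.0])
rng = np.random.default_rng(4)
data = hwhm(9.0, T) * (1 + 0.03 * rng.normal(size=T.size))

# Refinement: scan the force-field parameter and minimise the relative chi-square.
grid = np.linspace(5.0, 13.0, 161)
chi2 = [((hwhm(ea, T) - data) ** 2 / data ** 2).sum() for ea in grid]
ea_fit = grid[int(np.argmin(chi2))]
print(round(float(ea_fit), 2))  # recovered close to the true 9 kJ/mol
```

The paper's contribution is handling everything this toy hides: instrument resolution, environment mismatch, and the sampling noise of the simulated structure factor itself.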

  16. Determination of small-field correction factors for cylindrical ionization chambers using a semiempirical method

    NASA Astrophysics Data System (ADS)

    Park, Kwangwoo; Bak, Jino; Park, Sungho; Choi, Wonhoon; Park, Suk Won

    2016-02-01

A semiempirical method based on the averaging effect of the sensitive volumes of different air-filled ionization chambers (ICs) was employed to approximate the correction factors for beam quality produced from the difference in the sizes of the reference field and small fields. We measured the output factors using several cylindrical ICs and calculated the correction factors using a mathematical method similar to deconvolution; in the method, we modeled the variable and inhomogeneous energy fluence function within the chamber cavity. The parameters of the modeled function and the correction factors were determined by solving a developed system of equations as well as on the basis of the measurement data and the geometry of the chambers. Further, Monte Carlo (MC) computations were performed using the Monaco® treatment planning system to validate the proposed method. The determined correction factors (k_{Q_msr,Q}^{f_smf,f_ref}) were comparable to the values derived from the MC computations performed using Monaco®. For example, for a 6 MV photon beam and a field size of 1 × 1 cm², k_{Q_msr,Q}^{f_smf,f_ref} was calculated to be 1.125 for a PTW 31010 chamber and 1.022 for a PTW 31016 chamber. On the other hand, the k_{Q_msr,Q}^{f_smf,f_ref} values determined from the MC computations were 1.121 and 1.031, respectively; the difference between the proposed method and the MC computation is less than 2%. In addition, we determined the k_{Q_msr,Q}^{f_smf,f_ref} values for PTW 30013, PTW 31010, PTW 31016, IBA FC23-C, and IBA CC13 chambers. We devised a method for determining k_{Q_msr,Q}^{f_smf,f_ref} from both the measurement of the output factors and model-based mathematical computation. The proposed method can be useful in cases where MC simulation is not applicable in clinical settings.
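The correction factor itself has a simple definition: the true (e.g. MC-computed) dose output ratio between the small and reference fields, divided by the chamber-reading output ratio. A sketch of that definition, with reading values chosen purely for illustration so that they reproduce the abstract's PTW 31010 example:

```python
def small_field_correction(d_smf, d_ref, m_smf, m_ref):
    """k_{Q_msr,Q}^{f_smf,f_ref}: dose output ratio divided by the
    chamber-reading output ratio (standard small-field definition)."""
    return (d_smf / d_ref) / (m_smf / m_ref)

# Illustrative numbers: in a 1 x 1 cm2 field the chamber under-responds
# (volume averaging), so its reading ratio is smaller than the dose ratio
# and k comes out above 1, matching the abstract's 1.125 for PTW 31010.
print(round(small_field_correction(0.675, 1.0, 0.600, 1.0), 3))  # → 1.125
```

A larger sensitive volume averages over more of the steep small-field dose profile, which is why the bigger PTW 31010 needs a larger correction than the compact PTW 31016.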

  17. Using logic model methods in systematic review synthesis: describing complex pathways in referral management interventions.

    PubMed

    Baxter, Susan K; Blank, Lindsay; Woods, Helen Buckley; Payne, Nick; Rimmer, Melanie; Goyder, Elizabeth

    2014-05-10

    There is increasing interest in innovative methods to carry out systematic reviews of complex interventions. Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond that obtained via conventional review methods. This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature. The potential value of the model produced was explored with stakeholders. The review identified 295 papers that met the inclusion criteria. The papers consisted of 141 intervention studies and 154 non-intervention quantitative and qualitative articles. A logic model was systematically built from these studies. The model outlines interventions, short term outcomes, moderating and mediating factors and long term demand management outcomes and impacts. Interventions were grouped into typologies of practitioner education, process change, system change, and patient intervention. Short-term outcomes identified that may result from these interventions were changed physician or patient knowledge, beliefs or attitudes and also interventions related to changed doctor-patient interaction. A range of factors which may influence whether these outcomes lead to long term change were detailed. Demand management outcomes and intended impacts included content of referral, rate of referral, and doctor or patient satisfaction. The logic model details evidence and assumptions underpinning the complex pathway from interventions to demand management impact. The method offers a useful addition to systematic review methodologies. PROSPERO registration number: CRD42013004037.

  18. Nanoscale determination of the mass enhancement factor in the lightly doped bulk insulator lead selenide.

    PubMed

    Zeljkovic, Ilija; Scipioni, Kane L; Walkup, Daniel; Okada, Yoshinori; Zhou, Wenwen; Sankar, R; Chang, Guoqing; Wang, Yung Jui; Lin, Hsin; Bansil, Arun; Chou, Fangcheng; Wang, Ziqiang; Madhavan, Vidya

    2015-03-27

    Bismuth chalcogenides and lead telluride/selenide alloys exhibit exceptional thermoelectric properties that could be harnessed for power generation and device applications. Since phonons play a significant role in achieving these desired properties, quantifying the interaction between phonons and electrons, which is encoded in the Eliashberg function of a material, is of immense importance. However, its precise extraction has in part been limited due to the lack of local experimental probes. Here we construct a method to directly extract the Eliashberg function using Landau level spectroscopy, and demonstrate its applicability to lightly doped thermoelectric bulk insulator PbSe. In addition to its high energy resolution only limited by thermal broadening, this novel experimental method could be used to detect variations in mass enhancement factor at the nanoscale level. This opens up a new pathway for investigating the local effects of doping and strain on the mass enhancement factor.

  19. Novel Selective Detection Method of Tumor Angiogenesis Factors Using Living Nano-Robots

    PubMed Central

    Alshraiedeh, Nida; Owies, Rami; Alshdaifat, Hala; Al-Mahaseneh, Omamah; Al-Tall, Khadijah; Alawneh, Rawan

    2017-01-01

This paper reports a novel self-detection method for tumor cells using living nano-robots. These living robots are a nonpathogenic strain of E. coli bacteria equipped with naturally synthesized bio-nano-sensory systems that have an affinity to vascular endothelial growth factor (VEGF), an angiogenic factor overexpressed by cancer cells. The VEGF-affinity/chemotaxis was assessed using several assays, including the capillary chemotaxis assay, chemotaxis assay on soft agar, and chemotaxis assay on solid agar. In addition, a microfluidic device was developed to possibly discover tumor cells through the overexpressed VEGF. Various experiments to study the sensing characteristics of the nano-robots showed a strong response toward VEGF. Thus, a new paradigm of selective targeting therapies for cancer can be advanced using swimming E. coli as self-navigating miniaturized robots as well as drug-delivery vehicles. PMID:28708066

  20. Determinants of Interest Rates on Corporate Bonds of Mining Enterprises

    NASA Astrophysics Data System (ADS)

    Ranosz, Robert

    2017-09-01

This article is devoted to the determinants of interest rates on corporate bonds of mining enterprises. The study includes a comparison between the cost of foreign capital as resulting from the issue of debt instruments in different sectors of the economy in relation to the mining industry. The article also depicts the correlation between the rating scores published by the three largest rating agencies: S&P, Moody's, and Fitch. The test was based on simple statistical methods. The analysis performed indicated that there is a dependency between the factors listed and the amount of interest rates on corporate bonds of global mining enterprises. The most significant factors include the rating level and the period for which the given series of bonds was issued. Additionally, it is not without significance whether the given bond has additional options. Pursuant to the obtained results, it should be recognized that in order to reduce the interest rate on bonds, mining enterprises should pay particular attention to the rating and attempt to include additional options in issued bonds. Such additional options may comprise, for example, an ability to exchange bonds for shares or raw materials.

  1. Use of experimental design in the investigation of stir bar sorptive extraction followed by ultra-high-performance liquid chromatography-tandem mass spectrometry for the analysis of explosives in water samples.

    PubMed

    Schramm, Sébastien; Vailhen, Dominique; Bridoux, Maxime Cyril

    2016-02-12

A method for the sensitive quantification of trace amounts of organic explosives in water samples was developed by using stir bar sorptive extraction (SBSE) followed by liquid desorption and ultra-high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS). The proposed method was developed and optimized using a statistical design of experiment approach. Use of experimental designs allowed a complete study of 10 factors and 8 analytes, including nitro-aromatics, amino-nitro-aromatics and nitric esters. The liquid desorption study was performed using a full factorial experimental design followed by a kinetic study. Four different variables were tested here: the liquid desorption mode (stirring or sonication), the chemical nature of the stir bar (PDMS or PDMS-PEG), the composition of the liquid desorption phase and, finally, the volume of solvent used for the liquid desorption. On the other hand, the SBSE extraction study was performed using a Doehlert design. SBSE extraction conditions such as extraction time profiles, sample volume, modifier addition, and acetic acid addition were examined. After optimization of the experimental parameters, sensitivity was improved by a factor of 5-30, depending on the compound studied, due to the enrichment factors reached using the SBSE method. Limits of detection were in the ng/L level for all analytes studied. Reproducibility of the extraction with different stir bars was close to the reproducibility of the analytical method (RSD between 4 and 16%). Extractions in various water sample matrices (spring, mineral and underground water) have shown similar enrichment compared to ultrapure water, revealing very low matrix effects. Copyright © 2016 Elsevier B.V. All rights reserved.
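The full factorial design used for the liquid-desorption study is straightforward to enumerate: with four variables at two levels each, every combination of levels is one experimental run. The desorption mode and stir-bar levels below are those named in the abstract; the solvent-composition and volume levels are hypothetical stand-ins:

```python
from itertools import product

# The four liquid-desorption variables from the abstract, at two levels each.
factors = {
    "desorption_mode": ["stirring", "sonication"],
    "stir_bar": ["PDMS", "PDMS-PEG"],
    "solvent_mix": ["ACN/H2O 50:50", "ACN 100"],  # hypothetical levels
    "solvent_volume_uL": [100, 200],              # hypothetical levels
}

# Full 2^4 factorial design: every combination of levels is one run.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))  # → 16 runs

def enrichment_factor(c_extract, c_sample):
    """EF = analyte concentration in the desorption extract / in the sample."""
    return c_extract / c_sample

print(enrichment_factor(12.0, 2.0))  # → 6.0
```

A full factorial resolves all main effects and interactions at the cost of exponential run count, which is why the subsequent SBSE optimization switched to a more economical Doehlert design.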

  2. Personality Factors in the Long Life Family Study

    PubMed Central

    2013-01-01

    Objectives. To evaluate personality profiles of Long Life Family Study participants relative to population norms and offspring of centenarians from the New England Centenarian Study. Method. Personality domains of agreeableness, conscientiousness, extraversion, neuroticism, and openness were assessed with the NEO Five-Factor Inventory in 4,937 participants from the Long Life Family Study (mean age 70 years). A linear mixed model of age and gender was implemented adjusting for other covariates. Results. A significant age trend was found in all five personality domains. On average, the offspring generation of long-lived families scored low in neuroticism, high in extraversion, and within average values for the other three domains. Older participants tended to score higher in neuroticism and lower in the other domains compared with younger participants, but the estimated scores generally remained within average population values. No significant differences were found between long-lived family members and their spouses. Discussion. Personality factors and more specifically low neuroticism and high extraversion may be important for achieving extreme old age. In addition, personality scores of family members were not significantly different from those of their spouses, suggesting that environmental factors may play a significant role in addition to genetic factors. PMID:23275497

  3. Consolidation & Factors Influencing Sintering Process in Polymer Powder Based Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Sagar, M. B.; Elangovan, K.

    2017-08-01

Additive Manufacturing (AM) is a two-decade-old technology in which parts are built layer by layer directly from a CAD template. Over the years, AM techniques have been changing the way parts are fabricated, aiming at enhanced intricacy and custom-made features. Commercially, polymers, metals, ceramics and metal-polymer composites are in practice; polymers in particular have raised expectations for AM, which is considered a kind of next industrial revolution. The growing trend in polymer applications motivated this study of their feasibility and properties. Laser sintering, heat sintering and inhibition sintering are the most successful AM techniques for polymers but have seen the least application. The presentation takes up selective sintering of powder polymers and lists commercially available polymer materials. Significant factors for effective processing and analytical approaches to assess them are discussed.

  4. Observation procedure, observer gender, and behavior valence as determinants of sampling error in a behavior assessment analogue

    PubMed Central

    Farkas, Gary M.; Tharp, Roland G.

    1980-01-01

    Several factors thought to influence the representativeness of behavioral assessment data were examined in an analogue study using a multifactorial design. Systematic and unsystematic methods of observing group behavior were investigated using 18 male and 18 female observers. Additionally, valence properties of the observed behaviors were inspected. Observers' assessments of a videotape were compared to a criterion code that defined the population of behaviors. Results indicated that systematic observation procedures were more accurate than unsystematic procedures, though this factor interacted with gender of observer and valence of behavior. Additionally, males tended to sample more representatively than females. A third finding indicated that the negatively valenced behavior was overestimated, whereas the neutral and positively valenced behaviors were accurately assessed. PMID:16795631

  5. A Two-Factor Model Better Explains Heterogeneity in Negative Symptoms: Evidence from the Positive and Negative Syndrome Scale.

    PubMed

    Jang, Seon-Kyeong; Choi, Hye-Im; Park, Soohyun; Jaekal, Eunju; Lee, Ga-Young; Cho, Young Il; Choi, Kee-Hong

    2016-01-01

    Acknowledging separable factors underlying negative symptoms may lead to better understanding and treatment of negative symptoms in individuals with schizophrenia. The current study aimed to test whether the negative symptoms factor (NSF) of the Positive and Negative Syndrome Scale (PANSS) would be better represented by expressive and experiential deficit factors, rather than by a single factor model, using confirmatory factor analysis (CFA). Two hundred and twenty individuals with schizophrenia spectrum disorders completed the PANSS; subsamples additionally completed the Brief Negative Symptom Scale (BNSS) and the Motivation and Pleasure Scale-Self-Report (MAP-SR). CFA results indicated that the two-factor model fit the data better than the one-factor model; however, latent variables were closely correlated. The two-factor model's fit was significantly improved by accounting for correlated residuals between N2 (emotional withdrawal) and N6 (lack of spontaneity and flow of conversation), and between N4 (passive social withdrawal) and G16 (active social avoidance), possibly reflecting common method variance. The two NSF factors exhibited differential patterns of correlation with subdomains of the BNSS and MAP-SR. These results suggest that the PANSS NSF would be better represented by a two-factor model than by a single-factor one, and support the two-factor model's adequate criterion-related validity. Common method variance among several items may be a potential source of measurement error under a two-factor model of the PANSS NSF.

  6. Sustaining innovations in complex healthcare environments: A multiple-case study of rapid response teams

    PubMed Central

    Stolldorf, Deonni P; Havens, Donna S.; Jones, Cheryl B

    2015-01-01

    Objectives Rapid response teams (RRTs) are one innovation previously deployed in U.S. hospitals with the goal of improving the quality of care. Sustaining rapid response teams is important to achieve the desired implementation outcomes, reduce the risk of program investment losses, and prevent employee disillusionment and dissatisfaction. This study sought to examine factors that do and do not support the sustainability of RRTs. Methods The study was conceptually guided by an adapted version of the Planning Model of Sustainability. A multiple-case study was conducted using a purposive sample of two hospitals with high RRT sustainability scores and two hospitals with low RRT sustainability scores. Data collection methods included: (a) a hospital questionnaire that was completed by a nurse administrator at each hospital; (b) semi-structured interviews with leaders, RRT members, and those activating RRT calls; and (c) review of internal documents. Quantitative data were analyzed using descriptive statistics; qualitative data were analyzed using content analysis. Results Few descriptive differences were found between hospitals. However, there were notable differences in the operationalization of certain factors between high- and low-sustainability hospitals. Additional sustainability factors other than those captured by the Planning Model of Sustainability were also identified. Conclusions The sustainability of rapid response teams is optimized through effective operationalization of organizational and project design and implementation factors. Two additional factors—individual and team characteristics—should be included in the Planning Model of Sustainability and considered as potential facilitators (or inhibitors) of RRT sustainability. PMID:26756725

  7. Testing the Intervention Effect in Single-Case Experiments: A Monte Carlo Simulation Study

    ERIC Educational Resources Information Center

    Heyvaert, Mieke; Moeyaert, Mariola; Verkempynck, Paul; Van den Noortgate, Wim; Vervloet, Marlies; Ugille, Maaike; Onghena, Patrick

    2017-01-01

    This article reports on a Monte Carlo simulation study, evaluating two approaches for testing the intervention effect in replicated randomized AB designs: two-level hierarchical linear modeling (HLM) and using the additive method to combine randomization test "p" values (RTcombiP). Four factors were manipulated: mean intervention effect,…

  8. Estimating the impact of grouping misclassification on risk prediction when using the relative potency factors method to assess mixtures risk -Presentation

    EPA Science Inventory

    Environmental health risk assessments of chemical mixtures that rely on component approaches often begin by grouping the chemicals of concern according to toxicological similarity. Approaches that assume dose addition typically are used for groups of similarly-acting chemicals an...

  9. Battery condenser system total particulate emission factors and rates for cotton gins: Method 17

    USDA-ARS?s Scientific Manuscript database

    This manuscript is part of a series of manuscripts that characterize cotton gin emissions from the standpoint of stack sampling. The impetus behind this project was the urgent need to collect additional cotton gin emissions data to address current regulatory issues. A key component of this study was...

  10. Simplified in vitro refolding and purification of recombinant human granulocyte colony stimulating factor using protein folding cation exchange chromatography.

    PubMed

    Vemula, Sandeep; Dedaniya, Akshay; Thunuguntla, Rahul; Mallu, Maheswara Reddy; Parupudi, Pavani; Ronda, Srinivasa Reddy

    2015-01-30

    Protein folding-strong cation exchange chromatography (PF-SCX) has been employed for efficient refolding with simultaneous purification of recombinant human granulocyte colony stimulating factor (rhG-CSF). To acquire a soluble form of renatured and purified rhG-CSF, various chromatographic conditions, including the mobile phase composition and pH, were evaluated. Additionally, the effects of additives such as urea, amino acids, polyols, sugars, oxidizing agents and their combinations were also investigated. Under the optimal conditions, rhG-CSF was effectively solubilized, refolded and simultaneously purified by SCX in a single step. The experimental results using a ribose (2.0M) and arginine (0.6M) combination were found to be satisfactory, with mass yield, purity and specific activity of 71%, ≥99% and 2.6×10(8)IU/mg, respectively. Through this investigation, we concluded that the SCX refolding method is more efficient than conventional methods and has immense potential for the large-scale production of purified rhG-CSF. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. On non-negative matrix factorization algorithms for signal-dependent noise with application to electromyography data

    PubMed Central

    Devarajan, Karthik; Cheung, Vincent C.K.

    2017-01-01

    Non-negative matrix factorization (NMF) by the multiplicative updates algorithm is a powerful machine learning method for decomposing a high-dimensional nonnegative matrix V into two nonnegative matrices, W and H where V ~ WH. It has been successfully applied in the analysis and interpretation of large-scale data arising in neuroscience, computational biology and natural language processing, among other areas. A distinctive feature of NMF is its nonnegativity constraints that allow only additive linear combinations of the data, thus enabling it to learn parts that have distinct physical representations in reality. In this paper, we describe an information-theoretic approach to NMF for signal-dependent noise based on the generalized inverse Gaussian model. Specifically, we propose three novel algorithms in this setting, each based on multiplicative updates and prove monotonicity of updates using the EM algorithm. In addition, we develop algorithm-specific measures to evaluate their goodness-of-fit on data. Our methods are demonstrated using experimental data from electromyography studies as well as simulated data in the extraction of muscle synergies, and compared with existing algorithms for signal-dependent noise. PMID:24684448
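    The multiplicative-update scheme described above can be sketched in a few lines. The following fragment implements the classic Lee-Seung Euclidean updates as an illustration; the paper's generalized-inverse-Gaussian variants for signal-dependent noise are not reproduced, and the toy matrix is hypothetical.

```python
import numpy as np

def nmf_multiplicative(V, rank, iters=500, seed=0, eps=1e-9):
    """Classic Lee-Seung multiplicative updates minimizing ||V - WH||_F^2."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W held fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H held fixed
    return W, H

# toy nonnegative matrix of exact rank 2 (hypothetical data)
V = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])
W, H = nmf_multiplicative(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)   # relative reconstruction error
```

    Because the updates are multiplicative, W and H stay nonnegative throughout, which is exactly the property that lets NMF learn additive, parts-based representations.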

  12. Thinking beyond Opisthorchis viverrini for risk of cholangiocarcinoma in the lower Mekong region: a systematic review and meta-analysis.

    PubMed

    Steele, Jennifer A; Richter, Carsten H; Echaubard, Pierre; Saenna, Parichat; Stout, Virginia; Sithithaworn, Paiboon; Wilcox, Bruce A

    2018-05-17

    Cholangiocarcinoma (CCA) is a fatal bile duct cancer associated with infection by the liver fluke, Opisthorchis viverrini, in the lower Mekong region. Numerous public health interventions have focused on reducing exposure to O. viverrini, but incidence of CCA in the region remains high. While this may indicate the inefficacy of public health interventions due to complex social and cultural factors, it may further indicate other risk factors or interactions with the parasite are important in pathogenesis of CCA. This systematic review aims to provide a comprehensive analysis of described risk factors for CCA in addition to O. viverrini to guide future integrative interventions. We searched five international and seven Thai research databases to identify studies relevant to risk factors for CCA in the lower Mekong region. Selected studies were assessed for risk of bias and quality in terms of study design, population, CCA diagnostic methods, and statistical methods. The final 18 included studies reported numerous risk factors which were grouped into behaviors, socioeconomics, diet, genetics, gender, immune response, other infections, and treatment for O. viverrini. Seventeen risk factors were reported by two or more studies and were assessed with random effects models during meta-analysis. This meta-analysis indicates that the combination of alcohol and smoking (OR = 11.1, 95% CI: 5.63-21.92, P <  0.0001) is most significantly associated with increased risk for CCA and is an even greater risk factor than O. viverrini exposure. This analysis also suggests that family history of cancer, consumption of raw cyprinoid fish, consumption of high nitrate foods, and praziquantel treatment are associated with significantly increased risk. These risk factors may have complex relationships with the host, parasite, or pathogenesis of CCA, and many of these risk factors were found to interact with each other in one or more studies. 
Our findings suggest that a complex variety of risk factors in addition to O. viverrini infection should be addressed in future public health interventions to reduce CCA in affected regions. In particular, smoking and alcohol use, dietary patterns, and socioeconomic factors should be considered when developing intervention programs to reduce CCA.
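    The random-effects pooling used in such a meta-analysis can be sketched with the DerSimonian-Laird estimator; the three studies' odds ratios and 95% confidence intervals below are hypothetical, not values from the review.

```python
import math

def random_effects_pool(ors, ci_los, ci_his):
    """DerSimonian-Laird random-effects pooling of odds ratios.
    SEs of the log-OR are recovered from the reported 95% CIs."""
    y = [math.log(o) for o in ors]                        # log odds ratios
    se = [(math.log(h) - math.log(l)) / (2 * 1.96) for l, h in zip(ci_los, ci_his)]
    w = [1 / s**2 for s in se]                            # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    Q = sum(wi * (yi - ybar)**2 for wi, yi in zip(w, y))  # heterogeneity statistic
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)               # between-study variance
    wr = [1 / (s**2 + tau2) for s in se]                  # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    se_mu = math.sqrt(1 / sum(wr))
    return math.exp(mu), math.exp(mu - 1.96 * se_mu), math.exp(mu + 1.96 * se_mu)

# three hypothetical studies of one risk factor
pooled, lo, hi = random_effects_pool([2.0, 3.0, 1.5], [1.2, 1.8, 0.9], [3.3, 5.0, 2.5])
```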

  13. Exploring key factors in online shopping with a hybrid model.

    PubMed

    Chen, Hsiao-Ming; Wu, Chia-Huei; Tsai, Sang-Bing; Yu, Jian; Wang, Jiangtao; Zheng, Yuxiang

    2016-01-01

    Nowadays, the web increasingly influences retail sales. An in-depth analysis of consumer decision-making in the context of e-business has become an important issue for internet vendors. However, the factors affecting e-business are complicated and intertwined. To stimulate online sales, understanding the key influential factors and the causal relationships among them is important. To gain more insight into this issue, this paper introduces a hybrid method, which combines the Decision Making Trial and Evaluation Laboratory (DEMATEL) with the analytic network process, called the DANP method, to find the factors that most strongly drive online business. In the DEMATEL analysis, the causal graph showed that the "online service" dimension has the highest degree of direct impact on the other dimensions; internet vendors are therefore advised to make strong efforts on service quality throughout the online shopping process. In addition, the study adopted DANP to measure the importance of the key factors, among which "transaction security" proved to be the most important criterion. Hence, transaction security should be treated with top priority to boost online business. With the DANP approach, comprehensive information can be visualized so that decision makers can focus on the root causes and develop effective actions.
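    The DEMATEL step itself can be sketched numerically. The direct-influence matrix below is hypothetical (three assumed dimensions), since the paper's survey data are not reproduced; the computation is the standard one: normalize the direct-relation matrix, form the total-relation matrix T = D(I - D)^-1, and read cause/effect roles from the row and column sums.

```python
import numpy as np

# Hypothetical 3-dimension direct-influence matrix A (0-4 influence scores);
# row/column labels are assumed, e.g. service, security, product quality.
A = np.array([[0.0, 3.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 2.0, 0.0]])

s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
D = A / s                                   # normalized direct-relation matrix
T = D @ np.linalg.inv(np.eye(3) - D)        # total-relation matrix T = D(I - D)^-1
r = T.sum(axis=1)                           # influence given (row sums)
c = T.sum(axis=0)                           # influence received (column sums)
prominence = r + c                          # overall importance of each factor
relation = r - c                            # net cause (+) or net effect (-)
cause_factor = int(np.argmax(relation))     # strongest driving factor
```

    The (prominence, relation) pairs are what a DEMATEL causal graph plots: factors with positive relation are drivers, those with negative relation are outcomes.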

  14. Simple and accurate quantification of BTEX in ambient air by SPME and GC-MS.

    PubMed

    Baimatova, Nassiba; Kenessov, Bulat; Koziel, Jacek A; Carlsen, Lars; Bektassov, Marat; Demyanenko, Olga P

    2016-07-01

    Benzene, toluene, ethylbenzene and xylenes (BTEX) comprise one of the most ubiquitous and hazardous groups of ambient air pollutants of concern. Application of standard analytical methods for quantification of BTEX is limited by the complexity of sampling and sample preparation equipment, and by budget requirements. Methods based on SPME represent a simpler alternative, but still require complex calibration procedures. The objective of this research was to develop a simpler, low-budget, and accurate method for quantification of BTEX in ambient air based on SPME and GC-MS. Standard 20-mL headspace vials were used for field air sampling and calibration. To avoid the challenges of obtaining and working with 'zero' air, slope factors of external standard calibration were determined using standard addition and inherently polluted lab air. For the polydimethylsiloxane (PDMS) fiber, differences between the slope factors of calibration plots obtained using lab and outdoor air were below 14%. The PDMS fiber provided higher precision during calibration, while the use of the Carboxen/PDMS fiber resulted in lower detection limits for benzene and toluene. To provide sufficient accuracy, the use of 20-mL vials requires triplicate sampling and analysis. The method was successfully applied to the analysis of 108 ambient air samples from Almaty, Kazakhstan. Average concentrations of benzene, toluene, ethylbenzene and o-xylene were 53, 57, 11 and 14 µg m(-3), respectively. The developed method can be modified for further quantification of a wider range of volatile organic compounds in air. In addition, the new method is amenable to automation. Copyright © 2016 Elsevier B.V. All rights reserved.
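    The standard-addition idea exploited above can be sketched as a linear fit whose negative x-intercept recovers the native concentration in the unspiked matrix; the spike levels and detector responses below are hypothetical, not the paper's data.

```python
import numpy as np

# Standard-addition calibration sketch (hypothetical numbers):
# detector peak areas after spiking polluted lab air with known benzene amounts.
added = np.array([0.0, 10.0, 20.0, 40.0])        # spiked amount, ug/m3
area  = np.array([150.0, 250.0, 350.0, 550.0])   # detector response

slope, intercept = np.polyfit(added, area, 1)    # linear calibration fit
c0 = intercept / slope                           # native concentration = -x intercept
```

    The fitted slope is the "slope factor" used for external calibration, and c0 is the concentration already present in the matrix before spiking.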

  15. A new algorithm for real-time optimal dispatch of active and reactive power generation retaining nonlinearity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, L.; Rao, N.D.

    1983-04-01

    This paper presents a new method for optimal dispatch of real and reactive power generation which is based on a cartesian coordinate formulation of the economic dispatch problem and a reclassification of the state and control variables associated with generator buses. The voltage and power at these buses are classified as parametric and functional inequality constraints, and are handled by a reduced gradient technique and a penalty factor approach, respectively. The advantage of this classification is the reduction in the size of the equality constraint model, leading to a smaller storage requirement. The rectangular coordinate formulation results in an exact equality constraint model in which the coefficient matrix is real, sparse, diagonally dominant, smaller in size, and need be computed and factorized only once in each gradient step. In addition, Lagrangian multipliers are calculated using a new efficient procedure. A natural outcome of these features is a solution of the economic dispatch problem faster than other methods available to date in the literature. Rapid and reliable convergence is an additional desirable characteristic of the method. Digital simulation results are presented on several IEEE test systems to illustrate the range of application of the method vis-à-vis the popular Dommel-Tinney (DT) procedure. It is found that the proposed method is more reliable, 3-4 times faster and requires 20-30 percent less storage compared to the DT algorithm, while being just as general. Thus, owing to its exactness, robust mathematical model and lower computational requirements, the method developed in the paper is shown to be a practically feasible algorithm for on-line optimal power dispatch.
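    The economic dispatch core that such algorithms solve can be illustrated with a textbook lambda-iteration (equal incremental cost) sketch on two hypothetical quadratic-cost generators; this is a stand-in for the dispatch subproblem, not the paper's reduced-gradient algorithm.

```python
# Lambda-iteration economic dispatch sketch. Cost of unit i: C_i = a_i*P_i^2 + b_i*P_i.
a = [0.01, 0.02]       # hypothetical cost coefficients
b = [2.0, 1.5]
demand = 100.0         # MW to be supplied

lo, hi = 0.0, 10.0     # bisection bounds on the incremental cost (lambda)
for _ in range(60):
    lam = 0.5 * (lo + hi)
    # at the optimum all units run at equal incremental cost:
    # dC_i/dP_i = 2*a_i*P_i + b_i = lam  =>  P_i = (lam - b_i) / (2*a_i)
    P = [(lam - bi) / (2 * ai) for ai, bi in zip(a, b)]
    if sum(P) < demand:
        lo = lam       # generation short of demand: raise lambda
    else:
        hi = lam       # surplus: lower lambda
```

    At convergence, lam plays the role of the Lagrangian multiplier on the power-balance equality constraint, which is the quantity the paper's procedure computes efficiently.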

  16. Heat conduction in double-walled carbon nanotubes with intertube additional carbon atoms.

    PubMed

    Cui, Liu; Feng, Yanhui; Tan, Peng; Zhang, Xinxin

    2015-07-07

    Heat conduction of double-walled carbon nanotubes (DWCNTs) with intertube additional carbon atoms was investigated for the first time using a molecular dynamics method. By analyzing the phonon vibrational density of states (VDOS), we revealed that the intertube additional atoms weaken the heat conduction along the tube axis. Moreover, the phonon participation ratio (PR) demonstrates that the heat transfer in DWCNTs is dominated by low-frequency modes. The added atoms cause the mode weight factor (MWF) of the outer tube to decrease and that of the inner tube to increase, which implies a lower thermal conductivity. The effects of temperature, tube length, and the number and distribution of added atoms were studied. Furthermore, an orthogonal array testing strategy was designed to identify the most important structural factor. The trends of thermal conductivity with temperature and length for DWCNTs with added atoms are similar to those of bare tubes. In addition, thermal conductivity decreases with an increasing number of added atoms, more evidently when atom addition is concentrated at a few cross-sections rather than distributed uniformly along the tube length. Simultaneously, the number of added atoms at each cross-section has a considerably stronger impact than the tube length and the density of the cross-sections chosen for atom addition.

  17. An aerial survey method to estimate sea otter abundance

    USGS Publications Warehouse

    Bodkin, James L.; Udevitz, Mark S.; Garner, Gerald W.; Amstrup, Steven C.; Laake, Jeffrey L.; Manly, Bryan F.J.; McDonald, Lyman L.; Robertson, Donna G.

    1999-01-01

    Sea otters (Enhydra lutris) occur in shallow coastal habitats and can be highly visible on the sea surface. They generally rest in groups and their detection depends on factors that include sea conditions, viewing platform, observer technique and skill, distance, habitat and group size. While visible on the surface, they are difficult to see while diving and may dive in response to an approaching survey platform. We developed and tested an aerial survey method that uses intensive searches within portions of strip transects to adjust for availability and sightability biases. Correction factors are estimated independently for each survey and observer. In tests of our method using shore-based observers, we estimated detection probabilities of 0.52-0.72 in standard strip-transects and 0.96 in intensive searches. We used the survey method in Prince William Sound, Alaska to estimate a sea otter population size of 9,092 (SE = 1422). The new method represents an improvement over various aspects of previous methods, but additional development and testing will be required prior to its broad application.
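    The bias-adjustment logic can be sketched numerically. The counts below are hypothetical, and the estimator is deliberately simplified: the detection probability is taken as the ratio of the standard-pass count to the intensive-search count over the same area, whereas the actual survey estimates correction factors per survey and observer with associated standard errors.

```python
# Detection-corrected abundance sketch (hypothetical counts, not the survey data).
seen_standard = 31      # otters counted in the standard strip-transect pass
seen_intensive = 50     # otters counted in the intensive search of the same strips
p_hat = seen_standard / seen_intensive     # estimated detection probability

raw_count = 182         # total standard-transect count over all surveyed strips
area_fraction = 0.04    # fraction of the study area covered by the strips
n_hat = raw_count / p_hat / area_fraction  # corrected population estimate
```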

  18. Calculation of light delay for coupled microrings by FDTD technique and Padé approximation.

    PubMed

    Huang, Yong-Zhen; Yang, Yue-De

    2009-11-01

    The Padé approximation with Baker's algorithm is compared with the least-squares Prony method and the generalized pencil-of-functions (GPOF) method for calculating mode frequencies and mode Q factors for coupled optical microdisks by FDTD technique. Comparisons of intensity spectra and the corresponding mode frequencies and Q factors show that the Padé approximation can yield more stable results than the Prony and the GPOF methods, especially the intensity spectrum. The results of the Prony method and the GPOF method are greatly influenced by the selected number of resonant modes, which need to be optimized during the data processing, in addition to the length of the time response signal. Furthermore, the Padé approximation is applied to calculate light delay for embedded microring resonators from complex transmission spectra obtained by the Padé approximation from a FDTD output. The Prony and the GPOF methods cannot be applied to calculate the transmission spectra, because the transmission signal obtained by the FDTD simulation cannot be expressed as a sum of damped complex exponentials.
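    As a concrete illustration of the exponential-fitting step that Prony-type methods perform on an FDTD time signal, the sketch below implements a minimal least-squares Prony fit; the two-mode test signal is synthetic, and the Padé and GPOF variants compared in the paper are not reproduced here.

```python
import numpy as np

def prony(x, M):
    """Least-squares Prony fit of M damped complex exponentials x[n] ~ sum c_k z_k^n."""
    N = len(x)
    # linear-prediction step: x[n] = -sum_{m=1..M} a_m x[n-m]
    A = np.column_stack([x[M - m: N - m] for m in range(1, M + 1)])
    a, *_ = np.linalg.lstsq(A, -x[M:N], rcond=None)
    z = np.roots(np.concatenate(([1.0], a)))        # poles: damping + frequency
    V = np.vander(z, N, increasing=True).T          # columns are z_k^n
    c, *_ = np.linalg.lstsq(V, x, rcond=None)       # complex amplitudes
    return z, c

# noiseless two-mode test signal
n = np.arange(60)
z_true = np.array([0.95 * np.exp(1j * 0.4), 0.90 * np.exp(1j * 1.1)])
x = 1.0 * z_true[0]**n + 0.5 * z_true[1]**n
z_est, c_est = prony(x, 2)
```

    Each recovered pole z_k encodes a mode: its angle gives the mode frequency and its magnitude the damping, from which a Q factor follows; the choice of M is exactly the "selected number of resonant modes" that the abstract notes must be optimized.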

  19. Differentiation of organic and non-organic winter wheat cultivars from a controlled field trial by crystallization patterns.

    PubMed

    Kahl, Johannes; Busscher, Nicolaas; Mergardt, Gaby; Mäder, Paul; Torp, Torfinn; Ploeger, Angelika

    2015-01-01

    There is a need for authentication tools in order to verify the existing certification system. Recently, markers for analytical authentication of organic products were evaluated. Herein, crystallization with additives is described as an interesting fingerprint approach which needs further evidence, based on a standardized method and well-documented sample origin. The fingerprint of wheat cultivars from a controlled field trial is generated from structure analysis variables of crystal patterns. Method performance was tested with respect to factors such as crystallization chamber, day of experiment and region of interest of the patterns. Two different organic treatments and two different treatments of the non-organic regime can be grouped together in each of three consecutive seasons. When the k-nearest-neighbor classification method was applied, approximately 84% of Runal samples and 95% of Titlis samples were classified correctly into organic and non-organic origin using cross-validation. Crystallization with additives offers an interesting complementary fingerprint method for organic wheat samples. When the method is applied to winter wheat from the DOK trial, organically and non-organically treated samples can be differentiated significantly based on pattern recognition. Therefore, crystallization with additives seems to be a promising tool in organic wheat authentication. © 2014 Society of Chemical Industry.
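    The k-nearest-neighbor classification with cross-validation can be sketched with a leave-one-out loop; the 2-D points below are synthetic stand-ins for the crystal-pattern structure variables, and the cluster labels are hypothetical.

```python
import numpy as np

def knn_loo_accuracy(X, y, k=3):
    """Leave-one-out accuracy of a k-nearest-neighbor classifier (Euclidean)."""
    correct = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                         # exclude the held-out sample
        nearest = np.argsort(d)[:k]
        votes = np.bincount(y[nearest])       # majority vote among neighbors
        correct += int(np.argmax(votes) == y[i])
    return correct / len(X)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (30, 2)),    # "organic" cluster (synthetic)
               rng.normal(2.5, 0.5, (30, 2))])   # "non-organic" cluster (synthetic)
y = np.array([0] * 30 + [1] * 30)
acc = knn_loo_accuracy(X, y, k=5)
```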

  20. Novel approach for calculating the charge carrier mobility and Hall factor for semiconductor materials

    NASA Astrophysics Data System (ADS)

    Colibaba, G. V.

    2018-06-01

    The additive Matthiessen's rule is the simplest and most widely used rule for rapid experimental characterization and modeling of charge carrier mobility. However, the error when using this rule can exceed 40%, and the contribution of assumed additional scattering channels, inferred from the difference between experimental data and results calculated with this rule, can be misestimated by several times. In this study, a universal semi-additive equation is proposed for the total mobility and the Hall factor. It is applicable to any number of scattering mechanisms, accounts for the energy dependence of the relaxation time, and its error is 10-20 times lower than that of Matthiessen's rule. Calculations with an accuracy of 99% are demonstrated for materials with polar-optical phonon, acoustic phonon (via the piezoelectric potential), ionized impurity, and neutral impurity scattering. The proposed method is extended to deformation potential, dislocation, localized defect, alloy potential, and dipole scattering, for nondegenerate and partially degenerate materials.
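    For reference, Matthiessen's rule itself simply adds inverse mobilities (scattering rates). The sketch below shows the combination for three hypothetical channels; the paper's point is that this form ignores the energy dependence of the relaxation time, which is why it can err by more than 40%.

```python
# Matthiessen's rule sketch: 1/mu_total = sum_i 1/mu_i.
# Hypothetical per-channel mobilities in cm^2/(V s).
mu_phonon  = 1200.0     # polar-optical phonon scattering
mu_ionized = 800.0      # ionized-impurity scattering
mu_neutral = 5000.0     # neutral-impurity scattering

mu_total = 1.0 / (1.0 / mu_phonon + 1.0 / mu_ionized + 1.0 / mu_neutral)
# the combined mobility is always below the weakest single channel
```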

  1. Analysis on the restriction factors of the green building scale promotion based on DEMATEL

    NASA Astrophysics Data System (ADS)

    Wenxia, Hong; Zhenyao, Jiang; Zhao, Yang

    2017-03-01

    In order to promote the large-scale development of green building in our country, the DEMATEL method was used to classify the influence factors of green building development into three parts: the green building market, green technology, and the macro economy. Through the DEMATEL model, the interaction mechanism of each part was analyzed. The mutual influence degree of each barrier factor affecting green building promotion was quantitatively analyzed, and the key factors for the development of green building in China were determined. In addition, some implementation strategies for promoting green building at scale in our country were put forward. This research provides important reference and practical value for making policies on green building promotion.

  2. Rosenberg Self-Esteem Scale: Method Effects, Factorial Structure and Scale Invariance Across Migrant Child and Urban Child Populations in China.

    PubMed

    Wu, Yang; Zuo, Bin; Wen, Fangfang; Yan, Lei

    2017-01-01

    Using confirmatory factor analyses, this study examined the method effects on a Chinese version of the Rosenberg Self-Esteem Scale (RSES; Rosenberg, 1965 ) in a sample of migrant and urban children in China. In all, 982 children completed the RSES, and 9 models and 9 corresponding variants were specified and tested. The results indicated that the method effects are associated with both positively and negatively worded items and that Item 8 should be treated as a positively worded item. Additionally, the method effects models were invariant across migrant and urban children in China.

  3. 14 CFR 1203.406 - Additional classification factors.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... PROGRAM Guides for Original Classification § 1203.406 Additional classification factors. In determining the appropriate classification category, the following additional factors should be considered: (a...

  4. The Effects of Sugars on the Biofilm Formation of Escherichia coli 185p on Stainless Steel and Polyethylene Terephthalate Surfaces in a Laboratory Model.

    PubMed

    Khangholi, Mahdi; Jamalli, Ailar

    2016-09-01

    Bacteria utilize various methods to protect themselves from adverse environmental conditions. One such method involves biofilm formation; however, this formation depends on many factors. The type and concentration of substances such as sugars present in an environment can facilitate biofilm formation. First, the physico-chemical properties of the bacteria and the target surface were studied via the MATS and contact angle measurement methods. Additionally, adhesion to different surfaces in the presence of various concentrations of sugars was compared in order to evaluate the effect of these factors on the biofilm formation of Escherichia coli, which represents a major food contaminant. Results showed that the presence of sugars had no effect on the bacterial growth rate; all three concentrations of sugars were hydrophilic and demonstrated a high affinity toward binding to the surfaces. The impact of sugars and other factors on biofilm formation can vary depending on the type of bacteria present.

  5. Scaling Laws Applied to a Modal Formulation of the Aeroservoelastic Equations

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.

    2002-01-01

    A method of scaling is described that easily converts the aeroelastic equations of motion of a full-sized aircraft into ones of a wind-tunnel model. To implement the method, a set of rules is provided for the conversion process involving matrix operations with scale factors. In addition, a technique for analytically incorporating a spring mounting system into the aeroelastic equations is also presented. As an example problem, a finite element model of a full-sized aircraft is introduced from the High Speed Research (HSR) program to exercise the scaling method. With a set of scale factor values, a brief outline is given of a procedure to generate the first-order aeroservoelastic analytical model representing the wind-tunnel model. To verify the scaling process as applied to the example problem, the root-locus patterns from the full-sized vehicle and the wind-tunnel model are compared to see if the root magnitudes scale with the frequency scale factor value. Selected time-history results are given from a numerical simulation of an active-controlled wind-tunnel model to demonstrate the utility of the scaling process.
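    The root-magnitude check described above can be illustrated on a generic two-degree-of-freedom structural model (a hypothetical stand-in, not the HSR finite element model): scaling the mass matrix by s_m and the stiffness matrix by s_k scales every natural frequency by sqrt(s_k / s_m), the frequency scale factor.

```python
import numpy as np

# Hypothetical "full-scale" 2-DOF mass and stiffness matrices.
M = np.diag([2.0, 1.0])
K = np.array([[300.0, -100.0],
              [-100.0, 100.0]])

def nat_freqs(M, K):
    """Undamped natural frequencies from the generalized eigenproblem K v = w^2 M v."""
    lam = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(lam.real))

s_m, s_k = 0.01, 0.25             # hypothetical full-to-model scale factors
f_full = nat_freqs(M, K)
f_model = nat_freqs(s_m * M, s_k * K)
ratio = f_model / f_full          # sqrt(s_k / s_m) = 5 for every mode
```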

  6. The significance of oral streptococci in patients with pneumonia with risk factors for aspiration: the bacterial floral analysis of 16S ribosomal RNA gene using bronchoalveolar lavage fluid.

    PubMed

    Akata, Kentaro; Yatera, Kazuhiro; Yamasaki, Kei; Kawanami, Toshinori; Naito, Keisuke; Noguchi, Shingo; Fukuda, Kazumasa; Ishimoto, Hiroshi; Taniguchi, Hatsumi; Mukae, Hiroshi

    2016-05-11

    Aspiration pneumonia has attracted growing interest in an aging population. Anaerobes are important pathogens; however, the etiology of aspiration pneumonia is not fully understood. In addition, the relationships between patient clinical characteristics and the causative pathogens in pneumonia patients with aspiration risk factors are unclear. To evaluate the relationship between the clinical characteristics of pneumonia patients with risk factors for aspiration and the bacterial flora in bronchoalveolar lavage fluid (BALF), bacterial floral analysis of the 16S ribosomal RNA gene was applied in addition to cultivation methods in BALF samples. From April 2010 to February 2014, BALF samples were obtained from the affected lesions of pneumonia via bronchoscopy and were evaluated by bacterial floral analysis of the 16S rRNA gene in addition to cultivation methods in patients with community-acquired pneumonia (CAP) and healthcare-associated pneumonia (HCAP). Factors associated with aspiration risks in these patients were analyzed. A total of 177 (CAP 83, HCAP 94) patients were enrolled. According to the results of the bacterial floral analysis, the detection rate of oral streptococci as the most frequently detected bacterial phylotypes in BALF was significantly higher in patients with aspiration risks (31.0 %) than in patients without aspiration risks (14.7 %) (P = 0.009). In addition, the percentages of oral streptococci in each BALF sample were significantly higher in patients with aspiration risks (26.6 ± 32.0 %) than in patients without aspiration risks (13.8 ± 25.3 %) (P = 0.002). A multiple linear regression analysis showed that an Eastern Cooperative Oncology Group (ECOG) performance status (PS) of ≥3, the presence of comorbidities, and a history of pneumonia within the previous year were significantly associated with the detection of oral streptococci in BALF. 
Bacterial floral analysis of the 16S rRNA gene revealed that oral streptococci were the most frequently detected bacterial phylotypes in BALF samples from CAP and HCAP patients with aspiration risks, especially in those with a poor ECOG-PS or a history of pneumonia.

  7. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    PubMed Central

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Kuncic, Zdenka; Keall, Paul J.

    2014-01-01

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. 
In addition, SNR values were found to increase with decreasing RMSE values of projection angular gaps with strong correlations (r ≈ −0.7) regardless of the reconstruction algorithm used. Conclusions: Based on the authors’ results, displacement-based binning methods, better reconstruction algorithms, and the acquisition of even projection angular views are the most important factors to consider for improving thoracic 4D-CBCT image quality. In view of the practical issues with displacement-based binning and the fact that projection angular spacing is not currently directly controllable, development of better reconstruction algorithms represents the most effective strategy for improving image quality in thoracic 4D-CBCT for IGRT applications at the current stage. PMID:24694143
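    The reported correlation between SNR and the RMSE of projection angular gaps can be sketched as a simple Pearson-correlation check; the per-bin values below are illustrative stand-ins, not the authors' measurements:

```python
import numpy as np

# Hypothetical per-bin measurements (illustrative only): RMSE of projection
# angular gaps (degrees) and the corresponding reconstructed-image SNR.
rmse_gap = np.array([2.1, 3.5, 4.8, 6.2, 7.9, 9.4])
snr = np.array([18.2, 16.9, 15.1, 13.4, 12.0, 10.8])

# Pearson correlation coefficient; the paper reports r ~ -0.7 between
# SNR and RMSE angular gap (larger gaps -> lower SNR).
r = np.corrcoef(rmse_gap, snr)[0, 1]
print(round(r, 3))
```

    With real per-bin data the correlation is weaker than in this toy example, but the sign and interpretation are the same: uneven angular sampling degrades SNR.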

  8. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.

    PubMed

    Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-09-13

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
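    A minimal sketch of the generalised cross-correlation idea with a coherence-based, regularised prefilter; the weighting form and the regularisation constant `eps` are illustrative assumptions, not the paper's exact expressions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
true_delay = 25                         # delay between sensors, in samples

s = rng.standard_normal(n)              # broadband leak-like source
x1 = s + 0.1 * rng.standard_normal(n)                       # sensor 1
x2 = np.roll(s, true_delay) + 0.1 * rng.standard_normal(n)  # sensor 2 (delayed)

# Single-block spectra (a real estimator would average over segments).
X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
G12 = X1 * np.conj(X2)                  # cross-spectral density
G11, G22 = np.abs(X1) ** 2, np.abs(X2) ** 2

# Coherence-based ML-style prefilter with a regularisation factor `eps`
# guarding against spectral estimation error (illustrative form).
eps = 1e-2
coh2 = np.abs(G12) ** 2 / (G11 * G22 + eps)
w = coh2 / (np.abs(G12) * (1.0 - coh2) + eps)

cc = np.fft.irfft(w * G12, n)           # generalised cross-correlation
lag = int(np.argmax(cc))
if lag > n // 2:
    lag -= n                            # wrap circular index to signed lag
est_delay = -lag                        # peak occurs at -delay in this convention
print(est_delay)
```

    With two sensors bracketing a leak on a uniform pipe of length L and wave speed v, the delay estimate τ locates the leak at roughly (L − vτ)/2 from the first sensor; the paper's contribution includes extending this formula to connected pipes of different types.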

  9. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter

    PubMed Central

    Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-01-01

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected. PMID:28902154

  10. Data and methods for studying commercial motor vehicle driver fatigue, highway safety and long-term driver health.

    PubMed

    Stern, Hal S; Blower, Daniel; Cohen, Michael L; Czeisler, Charles A; Dinges, David F; Greenhouse, Joel B; Guo, Feng; Hanowski, Richard J; Hartenbaum, Natalie P; Krueger, Gerald P; Mallis, Melissa M; Pain, Richard F; Rizzo, Matthew; Sinha, Esha; Small, Dylan S; Stuart, Elizabeth A; Wegman, David H

    2018-03-09

    This article summarizes the recommendations on data and methodology issues for studying commercial motor vehicle driver fatigue of a National Academies of Sciences, Engineering, and Medicine study. A framework is provided that identifies the various factors affecting driver fatigue and relating driver fatigue to crash risk and long-term driver health. The relevant factors include characteristics of the driver, vehicle, carrier and environment. Limitations of existing data are considered and potential sources of additional data described. Statistical methods that can be used to improve understanding of the relevant relationships from observational data are also described. The recommendations for enhanced data collection and the use of modern statistical methods for causal inference have the potential to enhance our understanding of the relationship of fatigue to highway safety and to long-term driver health. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Consistent approach to describing aircraft HIRF protection

    NASA Technical Reports Server (NTRS)

    Rimbey, P. R.; Walen, D. B.

    1995-01-01

    The high intensity radiated fields (HIRF) certification process as currently implemented comprises an inconsistent combination of factors that tend to emphasize worst-case scenarios in assessing commercial airplane certification requirements. By examining these factors, which include the process definition, the external HIRF environment, aircraft coupling and the corresponding internal fields, and methods of measuring equipment susceptibilities, an approach to appraising airplane vulnerability to HIRF is proposed. This approach utilizes technically based criteria to evaluate the nature of the threat, including the probability of encountering the external HIRF environment. No single test or analytic method comprehensively addresses the full HIRF threat frequency spectrum, so additional tools such as statistical methods must be adopted to arrive at more realistic requirements that reflect commercial aircraft vulnerability to the HIRF threat. Test and analytic data are provided to support the conclusions of this report. This work was performed under NASA contract NAS1-19360, Task 52.

  12. Double temporal sparsity based accelerated reconstruction of compressively sensed resting-state fMRI.

    PubMed

    Aggarwal, Priya; Gupta, Anubha

    2017-12-01

    A number of reconstruction methods have been proposed recently for accelerated functional Magnetic Resonance Imaging (fMRI) data collection. However, existing methods suffer from greater artifacts at high acceleration factors. This paper addresses the issue of accelerating fMRI collection via undersampled k-space measurements combined with a method based on l1-l1 norm constraints, wherein the first l1-norm sparsity is imposed on the voxel time series (temporal data) in the transformed domain and the second l1-norm sparsity on the successive differences of the same temporal data. Hence, we name the proposed method the Double Temporal Sparsity based Reconstruction (DTSR) method. The robustness of the proposed DTSR method has been thoroughly evaluated both at the subject level and at the group level on real fMRI data. Results are presented at various acceleration factors. Quantitative analysis in terms of Peak Signal-to-Noise Ratio (PSNR) and other metrics, and qualitative analysis in terms of reproducibility of brain Resting State Networks (RSNs), demonstrate that the proposed method is accurate and robust. In addition, the proposed DTSR method preserves brain networks that are important for studying fMRI data. Compared to existing methods, the DTSR method shows promising potential with an improvement of 10-12 dB in PSNR at acceleration factors up to 3.5 on resting-state fMRI data. Simulation results on real data demonstrate that the DTSR method can be used to acquire accelerated fMRI with accurate detection of RSNs. Copyright © 2017 Elsevier Ltd. All rights reserved.
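    The l1-l1 objective described in the abstract can be written down directly. Below is a toy evaluation for a single voxel time series, with a random stand-in for the undersampled measurement operator and hypothetical regularisation weights:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 64                               # time points for one voxel
x = rng.standard_normal(T)           # candidate temporal signal
A = rng.standard_normal((32, T))     # toy stand-in for the undersampling operator
y = A @ x + 0.01 * rng.standard_normal(32)

lam1, lam2 = 0.1, 0.1                # hypothetical regularisation weights

def dtsr_cost(x, A, y, lam1, lam2):
    """l1-l1 objective in the spirit of DTSR: data fidelity plus sparsity
    of the temporal signal and of its successive differences."""
    fidelity = 0.5 * np.sum((A @ x - y) ** 2)
    temporal_l1 = np.sum(np.abs(x))       # sparsity in the (here: identity) transform domain
    diff_l1 = np.sum(np.abs(np.diff(x)))  # sparsity of successive differences
    return fidelity + lam1 * temporal_l1 + lam2 * diff_l1

print(dtsr_cost(x, A, y, lam1, lam2))
```

    The actual reconstruction minimises this kind of objective over all voxels with a sparsifying transform in place of the identity; the sketch only shows how the two l1 terms enter.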

  13. Who uses firearms as a means of suicide? A population study exploring firearm accessibility and method choice

    PubMed Central

    Klieve, Helen; Sveticic, Jerneja; De Leo, Diego

    2009-01-01

    Background The 1996 Australian National Firearms Agreement introduced strict access limitations. However, reports on the effectiveness of the new legislation are conflicting. This study, accessing all cases of suicide 1997-2004, explores factors which may impact on the choice of firearms as a suicide method, including current licence possession and previous history of legal access. Methods Detailed information on all Queensland suicides (1997-2004) was obtained from the Queensland Suicide Register, with additional details of firearm licence history accessed from the Firearm Registry (Queensland Police Service). Cases were compared against licence history and method choice (firearms or other method). Odds ratios (OR) assessed the risk of firearms suicide and suicide by any method against licence history. A logistic regression was undertaken to identify factors significant in those most likely to use firearms in suicide. Results The rate of suicide using firearms in those with a current licence (10.92 per 100,000) far exceeded the rate in those with no licence history (1.03 per 100,000). Those with a licence history had a far higher rate of suicide (30.41 per 100,000) compared to that of all suicides (15.39 per 100,000). Additionally, a history of firearms licence (current or past) was found to more than double the risk of suicide by any means (OR = 2.09, P < 0.001). The group at highest risk of selecting firearms as a suicide method was older males from rural locations. Conclusion Accessibility and familiarity with firearms represent critical elements in determining the choice of method. Further licensing restrictions and the implementation of more stringent secure storage requirements are likely to reduce the overall familiarity with firearms in the community and contribute to reductions in rates of suicide. PMID:19778414
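    The central odds-ratio calculation can be reproduced from any 2x2 table; the counts below are hypothetical, chosen only to illustrate the arithmetic, not the Queensland data:

```python
import math

# Hypothetical 2x2 counts (not the paper's data): suicide vs. no suicide,
# by firearm-licence history.
a, b = 40, 1960      # licence history: cases, non-cases
c, d = 160, 16340    # no licence history: cases, non-cases

# Odds ratio with a Wald 95% confidence interval on the log scale.
or_ = (a * d) / (b * c)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

    An OR near 2 with a confidence interval excluding 1, as in the study, indicates that licence history roughly doubles the odds of suicide by any means.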

  14. Design of experiments for amino acid extraction from tobacco leaves and their subsequent determination by capillary zone electrophoresis.

    PubMed

    Hodek, Ondřej; Křížek, Tomáš; Coufal, Pavel; Ryšlavá, Helena

    2017-03-01

    In this study, we optimized a method for the determination of free amino acids in Nicotiana tabacum leaves. Capillary electrophoresis with a contactless conductivity detector was used for the separation of 20 proteinogenic amino acids in an acidic background electrolyte. Subsequently, the conditions of extraction with HCl were optimized for the highest extraction yield of the amino acids, because sample treatment of plant materials brings some specific challenges. A central composite face-centered design with a fractional factorial core was used in order to evaluate the significance of selected factors (HCl volume, HCl concentration, sonication, shaking) on the extraction process. In addition, the composite design helped us to find the optimal values for each factor using the response surface method. The limits of detection and limits of quantification for the 20 proteinogenic amino acids were found to be in the order of 10⁻⁵ and 10⁻⁴ mol l⁻¹, respectively. Addition of acetonitrile to the sample was tested as a method commonly used to decrease limits of detection. Ambiguous results of this experiment pointed out some features of plant extract samples, which often require specific approaches. Suitability of the method for metabolomic studies was tested by analysis of a real sample, in which all amino acids, except for L-methionine and L-cysteine, were successfully detected. The optimized extraction process together with the capillary electrophoresis method can be used for the determination of proteinogenic amino acids in plant materials. The resulting inexpensive, simple, and robust method is well suited for various metabolomic studies in plants. As such, the method represents a valuable tool for research and practical application in the fields of biology, biochemistry, and agriculture.
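    A face-centred central composite design like the one used here is easy to enumerate in coded units; the half-fraction rule and the number of centre points below are illustrative assumptions, not the study's exact design:

```python
from itertools import product

def face_centered_ccd(k, n_center=3, fraction=None):
    """Face-centred central composite design in coded units (-1, 0, +1).
    `fraction`: optional predicate keeping a fractional factorial core."""
    corners = list(product((-1, 1), repeat=k))
    if fraction is not None:
        corners = [pt for pt in corners if fraction(pt)]
    axial = []
    for i in range(k):
        for lvl in (-1, 1):
            pt = [0] * k
            pt[i] = lvl                 # face-centred: axial points at +/-1
            axial.append(tuple(pt))
    center = [(0,) * k] * n_center
    return corners + axial + center

# Four coded factors as in the study: HCl volume, HCl concentration,
# sonication, shaking (all levels here are illustrative).
half = lambda pt: pt[0] * pt[1] * pt[2] * pt[3] == 1   # 2^(4-1) half-fraction
design = face_centered_ccd(4, n_center=3, fraction=half)
print(len(design))   # 8 corner + 8 axial + 3 center = 19 runs
```

    Each coded run is then mapped to physical factor levels, and a quadratic response-surface model is fitted to the measured extraction yields.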

  15. [Use of physical factors in the complex therapy of patients with diabetic angio- and polyneuropathies of the lower extremities].

    PubMed

    Shablinskaia, N B

    2002-01-01

    Results are presented for the treatment of 110 patients with diabetes mellitus (61 male and 49 female) presenting with angio- and polyneuropathies of the lower extremities. Seventy patients were administered, in addition to drug therapy, physiotherapeutic treatments such as amplipulse therapy, darsonvalization, and laser therapy; forty patients received medicamentous therapy only. Clinical findings and laboratory investigations demonstrated the expediency of employing physiotherapeutic methods in the treatment of this pathology.

  16. Hybrid PV/diesel solar power system design using multi-level factor analysis optimization

    NASA Astrophysics Data System (ADS)

    Drake, Joshua P.

    Solar power systems represent a large area of interest across a spectrum of organizations at a global level. It was determined that a clear understanding of current state-of-the-art software and design methods, as well as optimization methods, could be used to improve the design methodology. Solar power design literature was researched for an in-depth understanding of solar power system design methods and algorithms. Multiple software packages for the design and optimization of solar power systems were analyzed for a critical understanding of their design workflow. In addition, several methods of optimization were studied, including brute force, Pareto analysis, Monte Carlo, linear and nonlinear programming, and multi-way factor analysis. Factor analysis was selected as the most efficient optimization method for engineering design as it applied to solar power system design. The solar power design algorithms, software workflow analysis, and factor analysis optimization were combined to develop a solar power system design optimization software package called FireDrake. This software was used for the design of multiple solar power systems in conjunction with an energy audit case study performed in seven Tibetan refugee camps located in Mainpat, India. A report of solar system designs for the camps, as well as a proposed schedule for future installations, was generated. It was determined that there were several improvements that could be made to the state of the art in modern solar power system design, though the complexity of current applications is significant.

  17. Factors influencing the results of faculty evaluation in Isfahan University of Medical Sciences

    PubMed Central

    Kamali, Farahnaz; Yamani, Nikoo; Changiz, Tahereh; Zoubin, Fatemeh

    2018-01-01

    OBJECTIVE: This study aimed to explore factors influencing the results of faculty member evaluation from the viewpoints of faculty members affiliated with Isfahan University of Medical Sciences, Isfahan, Iran. MATERIALS AND METHODS: This qualitative study was done using a conventional content analysis method. Participants were faculty members of Isfahan University of Medical Sciences who, considering maximum variation in sampling, were chosen with a purposive sampling method. Semi-structured interviews were held with 11 faculty members until data saturation was reached. The interviews were transcribed verbatim and analyzed with the conventional content analysis method for theme development. Further, the MAXQDA software was used for data management. RESULTS: The data analysis led to the development of two main themes, namely, “characteristics of the educational system” and “characteristics of the faculty member evaluation system.” The first main theme consists of three categories, i.e. “characteristics of influential people in evaluation,” “features of the courses,” and “background characteristics.” The other theme has the following as its categories: “evaluation methods,” “evaluation tools,” “evaluation process,” and “application of evaluation results.” Each category has its own subcategories. CONCLUSIONS: Many factors affect the evaluation of faculty members that should be taken into account by educational policymakers for improving the quality of the educational process. In addition to the factors that directly influence the educational system, methodological problems in the evaluation system need special attention. PMID:29417073

  18. Boosting specificity of MEG artifact removal by weighted support vector machine.

    PubMed

    Duan, Fang; Phothisonothai, Montri; Kikuchi, Mitsuru; Yoshimura, Yuko; Minabe, Yoshio; Watanabe, Kastumi; Aihara, Kazuyuki

    2013-01-01

    An automatic artifact removal method for magnetoencephalography (MEG) is presented in this paper. The proposed method is based on independent component analysis (ICA) and the support vector machine (SVM). However, unlike previous studies, we consider two factors that influence performance. First, the class imbalance among the independent components (ICs) of MEG is handled by a weighted SVM. Second, instead of simply setting a fixed weight for each class, a re-weighting scheme is used to preserve useful MEG ICs. Experimental results on a manually marked MEG dataset showed that the proposed method could correctly distinguish the artifacts from the MEG ICs. Meanwhile, 99.72% ± 0.67 of MEG ICs were preserved. The classification accuracy was 97.91% ± 1.39. In addition, the method was found not to be sensitive to individual differences: cross-validation (leave-one-subject-out) results showed an averaged accuracy of 97.41% ± 2.14.
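    A weighted SVM of the kind described can be sketched with scikit-learn's `class_weight` parameter; the toy IC features, class sizes, and weight values below are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-in for IC features: many "brain" ICs (class 0), few artifact
# ICs (class 1) -- the class imbalance the paper addresses with a weighted SVM.
X_brain = rng.normal(0.0, 1.0, size=(95, 2))
X_artifact = rng.normal(3.0, 1.0, size=(5, 2))
X = np.vstack([X_brain, X_artifact])
y = np.array([0] * 95 + [1] * 5)

# Heavier penalty on the minority (artifact) class; shifting weight back
# toward class 0 instead favours preserving brain ICs, which is the idea
# behind the paper's re-weighting scheme.
clf = SVC(kernel="rbf", class_weight={0: 1.0, 1: 19.0}).fit(X, y)
print(clf.score(X, y))
```

    In practice the weights would be tuned (or re-weighted iteratively) to trade artifact detection against the fraction of brain ICs preserved.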

  19. Two-dimensional frequency-domain acoustic full-waveform inversion with rugged topography

    NASA Astrophysics Data System (ADS)

    Zhang, Qian-Jiang; Dai, Shi-Kun; Chen, Long-Wei; Li, Kun; Zhao, Dong-Dong; Huang, Xing-Xing

    2015-09-01

    We studied finite-element-method-based two-dimensional frequency-domain acoustic FWI under rugged topography conditions. The exponential attenuation boundary condition suitable for rugged topography is proposed to solve the cutoff boundary problem as well as to consider the requirement of using the same subdivision grid in joint multifrequency inversion. The proposed method introduces the attenuation factor, and by adjusting it, acoustic waves are sufficiently attenuated in the attenuation layer to minimize the cutoff boundary effect. Based on the law of exponential attenuation, expressions for computing the attenuation factor and the thickness of attenuation layers are derived for different frequencies. In multifrequency-domain FWI, the conjugate gradient method is used to solve equations in the Gauss-Newton algorithm and thus minimize the computation cost in calculating the Hessian matrix. In addition, the effect of initial model selection and frequency combination on FWI is analyzed. Examples using numerical simulations and FWI calculations are used to verify the efficiency of the proposed method.
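    The exponential attenuation law lends itself to a short design calculation per frequency; the one-wavelength layer thickness and the target residual amplitude below are illustrative assumptions, not the paper's derived expressions:

```python
import numpy as np

c = 1500.0                             # acoustic velocity (m/s), illustrative
freqs = np.array([5.0, 10.0, 20.0])    # joint-inversion frequencies (Hz)
R = 1e-3                               # target residual amplitude at the outer edge

# Exponential attenuation law A(x) = A0 * exp(-alpha * x): choose the layer
# thickness as one wavelength per frequency, then solve for the attenuation
# factor alpha so the wave decays to R across the layer.
wavelength = c / freqs
thickness = wavelength
alpha = -np.log(R) / thickness

for f, L, a in zip(freqs, thickness, alpha):
    print(f"{f:5.1f} Hz: layer {L:7.1f} m, alpha {a:.5f} 1/m")
```

    Higher frequencies get thinner layers and larger attenuation factors, which is what allows the same subdivision grid to be reused across frequencies in joint inversion.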

  20. How and Why Do IT Entrepreneurs Leave Their Salaried Employment to Start a SME? A Mixed Methods Research Design

    NASA Astrophysics Data System (ADS)

    Mourmant, Gaëtan

    This method paper addresses an untapped but important type of IT turnover: IT entrepreneurship. We seek to develop a mixed methods research (MMR) design to understand the factors and processes that influence turnover behavior of prospective (nascent) IT entrepreneurs. To do this, we review two prior streams of research: the entrepreneurship literature and IT employee turnover. We incorporate the results of this literature review into a conceptual framework describing how the relevant factors leading to entrepreneurial and turnover behavior change over time, either gradually or suddenly, in response to specific events. In addition, we also contribute to the research by arguing that mixed methods research (MMR) is appropriate to bridge the gap between the entrepreneurship literature and the IT turnover literature. A third important contribution is the design of the MMR, combining a longitudinal approach with a retrospective approach, a qualitative with a quantitative approach, and the exploratory design with the triangulation design [1]. Finally, we discuss practical implications for IT managers and IT entrepreneurs.

  1. Injury of the Inferior Alveolar Nerve during Implant Placement: a Literature Review

    PubMed Central

    Wang, Hom-Lay; Sabalys, Gintautas

    2011-01-01

    ABSTRACT Objectives The purpose of the present article was to review aetiological factors, mechanism, clinical symptoms, and diagnostic methods, as well as to create treatment guidelines for the management of inferior alveolar nerve injury during dental implant placement. Material and Methods Literature was selected through a search of the PubMed, Embase and Cochrane electronic databases. The keywords used for the search were inferior alveolar nerve injury, inferior alveolar nerve injuries, inferior alveolar nerve injury implant, inferior alveolar nerve damage, inferior alveolar nerve paresthesia and inferior alveolar nerve repair. The search was restricted to English language articles, published from 1972 to November 2010. Additionally, a manual search in the major anatomy, dental implant, periodontal and oral surgery journals and books was performed. The publications were selected to include clinical, human anatomy and physiology studies. Results In total 136 literature sources were obtained and reviewed. Aetiological factors of inferior alveolar nerve injury, risk factors, mechanism, clinical sensory nerve examination methods, clinical symptoms and treatment were discussed. Guidelines were created to illustrate the methods used to prevent and manage inferior alveolar nerve injury before or after dental implant placement. Conclusions Damage to the inferior alveolar nerve during dental implant placement can be a serious complication. Clinicians should recognise and exclude aetiological factors leading to nerve injury. Proper presurgical planning and timely diagnosis and treatment are key to avoiding the need to manage sensory nerve disturbances. PMID:24421983

  2. Factors influencing the results of faculty evaluation in Isfahan University of Medical Sciences.

    PubMed

    Kamali, Farahnaz; Yamani, Nikoo; Changiz, Tahereh; Zoubin, Fatemeh

    2018-01-01

    This study aimed to explore factors influencing the results of faculty member evaluation from the viewpoints of faculty members affiliated with Isfahan University of Medical Sciences, Isfahan, Iran. This qualitative study was done using a conventional content analysis method. Participants were faculty members of Isfahan University of Medical Sciences who, considering maximum variation in sampling, were chosen with a purposive sampling method. Semi-structured interviews were held with 11 faculty members until data saturation was reached. The interviews were transcribed verbatim and analyzed with the conventional content analysis method for theme development. Further, the MAXQDA software was used for data management. The data analysis led to the development of two main themes, namely, "characteristics of the educational system" and "characteristics of the faculty member evaluation system." The first main theme consists of three categories, i.e. "characteristics of influential people in evaluation," "features of the courses," and "background characteristics." The other theme has the following as its categories: "evaluation methods," "evaluation tools," "evaluation process," and "application of evaluation results." Each category has its own subcategories. Many factors affect the evaluation of faculty members that should be taken into account by educational policymakers for improving the quality of the educational process. In addition to the factors that directly influence the educational system, methodological problems in the evaluation system need special attention.

  3. Deformable image registration with content mismatch: a demons variant to account for added material and surgical devices in the target image

    NASA Astrophysics Data System (ADS)

    Nithiananthan, S.; Uneri, A.; Schafer, S.; Mirota, D.; Otake, Y.; Stayman, J. W.; Zbijewski, W.; Khanna, A. J.; Reh, D. D.; Gallia, G. L.; Siewerdsen, J. H.

    2013-03-01

    Fast, accurate, deformable image registration is an important aspect of image-guided interventions. Among the factors that can confound registration is the presence of additional material in the intraoperative image - e.g., contrast bolus or a surgical implant - that was not present in the prior image. Existing deformable registration methods generally fail to account for tissue excised between image acquisitions and typically simply "move" voxels within the images with no ability to account for tissue that is removed or introduced between scans. We present a variant of the Demons algorithm to accommodate such content mismatch. The approach combines segmentation of mismatched content with deformable registration featuring an extra pseudo-spatial dimension representing a reservoir from which material can be drawn into the registered image. Previous work tested the registration method in the presence of tissue excision ("missing tissue"). The current paper tests the method in the presence of additional material in the target image and presents a general method by which either missing or additional material can be accommodated. The method was tested in phantom studies, simulations, and cadaver models in the context of intraoperative cone-beam CT with three examples of content mismatch: a variable-diameter bolus (contrast injection); surgical device (rod), and additional material (bone cement). Registration accuracy was assessed in terms of difference images and normalized cross correlation (NCC). We identify the difficulties that traditional registration algorithms encounter when faced with content mismatch and evaluate the ability of the proposed method to overcome these challenges.
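    Registration accuracy assessed via normalised cross correlation (NCC), as in this study, reduces to a short computation; the image pairs below are synthetic stand-ins, not the phantom or cadaver data:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two same-size images; 1 means
    perfect linear agreement, values near 0 mean no agreement."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

rng = np.random.default_rng(0)
fixed = rng.standard_normal((64, 64))
registered = fixed + 0.1 * rng.standard_normal((64, 64))   # well-aligned result
misaligned = np.roll(fixed, 5, axis=0)                     # content shifted by 5 rows

print(round(ncc(fixed, registered), 3))   # near 1: good registration
print(round(ncc(fixed, misaligned), 3))   # near 0: mismatch
```

    Difference images complement NCC by localising where residual mismatch (e.g. around added material) remains after registration.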

  4. Hydrogen environment embrittlement

    NASA Technical Reports Server (NTRS)

    Gray, H. R.

    1972-01-01

    Hydrogen embrittlement is classified into three types: internal reversible hydrogen embrittlement, hydrogen reaction embrittlement, and hydrogen environment embrittlement. Characteristics of and materials embrittled by these types of hydrogen embrittlement are discussed. Hydrogen environment embrittlement is reviewed in detail. Factors involved in standardizing test methods for detecting the occurrence of and evaluating the severity of hydrogen environment embrittlement are considered. The effect of test technique, hydrogen pressure, purity, strain rate, stress concentration factor, and test temperature are discussed. Additional research is required to determine whether hydrogen environment embrittlement and internal reversible hydrogen embrittlement are similar or distinct types of embrittlement.

  5. Reptile hematology.

    PubMed

    Sykes, John M; Klaphake, Eric

    2015-01-01

    The basic principles of hematology used in mammalian medicine can be applied to reptiles. The appearances of the blood cells are significantly different from those seen in most mammals, and vary with taxa and staining method used. Many causes for abnormalities of the reptilian hemogram are similar to those for mammals, although additional factors such as venipuncture site, season, hibernation status, captivity status, and environmental factors can also affect values, making interpretation of hematologic results challenging. Values in an individual should be compared with reference ranges specific to that species, gender, and environmental conditions when available.

  6. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    NASA Astrophysics Data System (ADS)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media has been proved possible with high resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase only spatial light modulator (SLM) and thereby the focusing quality can be improved. The correction phase is often found by global searching algorithms, among which Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environment. However, the convergence of GA slows down gradually with the progression of optimization, causing the improvement factor of optimization to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor with the same number of iterations comparing with the conventional all segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially among each group of segments. The final correction phase mask is formed by applying correction phases of all interleaved groups together on the SLM. The ISC method has been proved significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality is improved as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demand of dynamic ranges of detection devices. The proposed method holds potential in applications, such as high-resolution imaging in deep tissue.
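    The interleaved grouping at the heart of the ISC method is a simple modulo partition of the SLM phase segments; the segment and group counts below are tiny illustrative values:

```python
import numpy as np

n_segments = 16          # SLM phase segments (tiny for illustration)
n_groups = 4             # number of interleaved groups

idx = np.arange(n_segments)
# Interleaved grouping: segment i belongs to group i mod n_groups, so each
# group's segments are spread evenly across the SLM rather than clustered.
groups = [idx[idx % n_groups == g] for g in range(n_groups)]

for g, members in enumerate(groups):
    print(g, members.tolist())
# A GA would then optimise the phases of one group at a time, and the final
# correction mask applies all groups' phases together on the SLM.
```

    Because each group covers the whole aperture sparsely, sequential per-group optimisation keeps converging where whole-aperture GA optimisation plateaus.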

  7. Direct system parameter identification of mechanical structures with application to modal analysis

    NASA Technical Reports Server (NTRS)

    Leuridan, J. M.; Brown, D. L.; Allemang, R. J.

    1982-01-01

    In this paper a method is described to estimate mechanical structure characteristics in terms of mass, stiffness and damping matrices using measured force input and response data. The estimated matrices can be used to calculate a consistent set of damped natural frequencies and damping values, mode shapes and modal scale factors for the structure. The proposed technique is attractive as an experimental modal analysis method since the estimation of the matrices does not require previous estimation of frequency responses and since the method can be used, without any additional complications, for multiple force input structure testing.
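
    The identification idea can be sketched as an ordinary least-squares problem: with measured displacement, velocity, acceleration and force histories, the relation f = M·a + C·v + K·x is linear in the unknown matrices. The Python sketch below recovers M, C and K from synthetic noiseless data; it is a minimal illustration of direct parameter identification under made-up matrices, not a reproduction of the paper's algorithm or test data.

```python
import numpy as np

def identify_mck(x, v, a, f):
    """Estimate mass, damping and stiffness matrices from displacement x,
    velocity v, acceleration a and force f, each shaped (n_dof, n_samples),
    by least squares on  f = M a + C v + K x."""
    n = x.shape[0]
    Z = np.vstack([a, v, x])                            # (3n, T) regressors
    theta, *_ = np.linalg.lstsq(Z.T, f.T, rcond=None)   # solves Z.T @ theta = f.T
    MCK = theta.T                                       # (n, 3n) = [M C K]
    return MCK[:, :n], MCK[:, n:2 * n], MCK[:, 2 * n:]

# Synthetic 2-DOF system (hypothetical values, for illustration only)
rng = np.random.default_rng(0)
M = np.diag([2.0, 1.0])
C = np.array([[0.4, -0.1], [-0.1, 0.3]])
K = np.array([[50.0, -20.0], [-20.0, 30.0]])
T = 200
x, v, a = rng.standard_normal((3, 2, T))
f = M @ a + C @ v + K @ x                               # synthetic "measurements"
M_est, C_est, K_est = identify_mck(x, v, a, f)
```

    With noise-free synthetic data the estimates reproduce the matrices exactly; the eigensolution of the recovered quadratic pencil then yields the damped natural frequencies, damping values and mode shapes mentioned in the abstract.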

  8. Service Discovery Oriented Management System Construction Method

    NASA Astrophysics Data System (ADS)

    Li, Huawei; Ren, Ying

    2017-10-01

    To address the lack of a uniform method for designing service quality management systems in large-scale, complex service environments, this paper proposes a construction method for a distributed, service-discovery-oriented management system. Three measurement functions are proposed to compute nearest neighbor user similarity at different levels. To address the low efficiency of current service quality management systems, three solutions are proposed to improve system efficiency. Finally, the key technologies of a distributed service quality management system based on service discovery are summarized through quantitative experiments in which factors are added and removed.
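
    The abstract names three similarity functions without defining them. As illustrative stand-ins (not the paper's functions), three common choices for nearest-neighbor user similarity over rating vectors are sketched below: cosine similarity, Pearson correlation, and Jaccard similarity of the rated-item sets.

```python
import numpy as np

def cosine_sim(u, v):
    """Cosine of the angle between two rating vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def pearson_sim(u, v):
    """Pearson correlation between two rating vectors."""
    return float(np.corrcoef(u, v)[0, 1])

def jaccard_sim(u, v):
    """Overlap of the sets of items each user has rated (nonzero entries)."""
    a, b = set(np.flatnonzero(u)), set(np.flatnonzero(v))
    return len(a & b) / len(a | b)

# Two hypothetical users' ratings over four items (0 = unrated)
u = np.array([5.0, 3.0, 0.0, 1.0])
v = np.array([4.0, 0.0, 0.0, 1.0])
```

    Cosine and Pearson compare rating values, while Jaccard compares only which items were rated; computing similarity "at different levels", as the abstract puts it, plausibly corresponds to mixing such value-level and set-level measures.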

  9. Preparation and application of a tyre-based activated carbon solid phase extraction of heavy metals in wastewater samples

    NASA Astrophysics Data System (ADS)

    Dimpe, K. Mogolodi; Ngila, J. C.; Nomngongo, Philiswa N.

    2018-06-01

    In this paper, a tyre-based activated carbon solid phase extraction (SPE) method was successfully developed for the simultaneous preconcentration of metal ions in model and real water samples before their determination by flame atomic absorption spectrometry (FAAS). The carbon was activated chemically, and the tyre-based activated carbon was used as the sorbent for solid phase extraction. The prepared activated carbon was characterized using scanning electron microscopy (SEM), Brunauer-Emmett-Teller (BET) surface area analysis, and Fourier transform infrared (FTIR) spectroscopy. Moreover, optimization of the proposed method was performed by a two-level full factorial design (FFD). The FFD was chosen in order to fully investigate the effects of the experimental variables (pH, eluent concentration and sample flow rate) that significantly influence the preconcentration procedure. In this model, individual factors are considered along with their interactions. In addition, modelling of the experiments allowed simultaneous variation of all experimental factors investigated and reduced the required time and number of experimental runs, which consequently reduced the overall costs. Under optimized conditions, the limits of detection and quantification (LOD and LOQ) ranged from 0.66 to 2.12 μg L-1 and from 1.78 to 5.34 μg L-1, respectively, and an enrichment factor of 25 was obtained. The developed SPE/FAAS method was validated using CWW-TM-A and CWW-TM-B wastewater standard reference materials (SRMs). The procedure proved accurate, with satisfactory recoveries ranging from 92 to 99%. The precision (repeatability) was lower than 4% in terms of the relative standard deviation (%RSD). The developed method proved capable of being used in the routine analysis of heavy metals in domestic and industrial wastewater samples. In addition, the developed method can be used as a final step in the wastewater treatment process (before discharge to rivers) in order to keep water bodies free from toxic metals.
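
    A two-level full factorial design simply enumerates every combination of the low and high level of each variable. The sketch below builds such a design for the three variables the study optimized; the low/high levels are placeholders, not the values used in the paper.

```python
from itertools import product

# Hypothetical low/high levels for the three optimized variables
factors = {
    "pH": (2.0, 8.0),
    "eluent_conc_mol_L": (0.5, 2.0),
    "flow_rate_mL_min": (1.0, 5.0),
}

# 2^3 = 8 runs covering every low/high combination, which is what lets
# main effects and all factor interactions be estimated from one campaign.
design = [dict(zip(factors, combo)) for combo in product(*factors.values())]
```

    With k factors the design needs 2^k runs, which is why full factorial designs are practical here (k = 3) but are usually replaced by fractional designs when k grows.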

  10. Statistical Determination of Rainfall-Runoff Erosivity Indices for Single Storms in the Chinese Loess Plateau

    PubMed Central

    Zheng, Mingguo; Chen, Xiaoan

    2015-01-01

    Correlation analysis is popular in erosion- and earth-related studies; however, few studies compare correlations on the basis of statistical testing, which should be conducted to determine the statistical significance of an observed sample difference. This study aims to statistically determine the erosivity index of single storms in the Chinese Loess Plateau, which requires comparison of a large number of dependent correlations between rainfall-runoff factors and soil loss. Data observed at four gauging stations and five runoff experimental plots were presented. Based on Meng’s test, which is widely used for comparing correlations between a dependent variable and a set of independent variables, two methods were proposed. The first method removes factors that are poorly correlated with soil loss from consideration in a stepwise way, while the second method performs pairwise comparisons that are adjusted using the Bonferroni correction. Among 12 rainfall factors, I30 (the maximum 30-minute rainfall intensity) has been suggested for use as the rainfall erosivity index, although I30 is as strongly correlated with soil loss as I20, EI10 (the product of the rainfall kinetic energy, E, and I10), EI20 and EI30 are. Runoff depth (total runoff volume normalized to drainage area) is more strongly correlated with soil loss than all other examined rainfall-runoff factors, including I30, peak discharge and many combined factors. Moreover, sediment concentrations of major sediment-producing events are independent of all examined rainfall-runoff factors. As a result, introducing additional factors adds little to the prediction accuracy of the single factor of runoff depth. Hence, runoff depth should be the best erosivity index at scales from plots to watersheds. Our findings can facilitate predictions of soil erosion in the Loess Plateau. Our methods provide a valuable tool for selecting a predictor from among a number of candidate variables on the basis of their correlations. 
PMID:25781173
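
    The second method in the paper, pairwise comparison of dependent correlations with a Bonferroni adjustment, can be sketched with the Meng-Rosenthal-Rubin (1992) z statistic for two correlations that share the dependent variable (here, soil loss). All numeric values below are made up for illustration; they are not taken from the study.

```python
import math
from statistics import NormalDist

def meng_z(r1, r2, rx, n):
    """Meng-Rosenthal-Rubin z test for two dependent correlations
    r1 = corr(x1, y) and r2 = corr(x2, y), where rx = corr(x1, x2)
    and n is the sample size."""
    rbar2 = (r1 ** 2 + r2 ** 2) / 2
    f = min((1 - rx) / (2 * (1 - rbar2)), 1.0)
    h = (1 - f * rbar2) / (1 - rbar2)
    z1, z2 = math.atanh(r1), math.atanh(r2)          # Fisher transforms
    return (z1 - z2) * math.sqrt((n - 3) / (2 * (1 - rx) * h))

# With k candidate factors there are m = k*(k-1)/2 pairwise tests, so the
# two-sided critical value is taken at level alpha/m (Bonferroni).
alpha, k = 0.05, 12
m = k * (k - 1) // 2
z_crit = NormalDist().inv_cdf(1 - alpha / (2 * m))
z = meng_z(r1=0.80, r2=0.62, rx=0.55, n=120)         # hypothetical correlations
differ = abs(z) > z_crit
```

    Two factors are declared "equally correlated" with soil loss, as in the paper's comparison of I30 with I20 and the EI indices, when the Bonferroni-adjusted test fails to reject equality.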

  11. Statistical determination of rainfall-runoff erosivity indices for single storms in the Chinese Loess Plateau.

    PubMed

    Zheng, Mingguo; Chen, Xiaoan

    2015-01-01

    Correlation analysis is popular in erosion- and earth-related studies; however, few studies compare correlations on the basis of statistical testing, which should be conducted to determine the statistical significance of an observed sample difference. This study aims to statistically determine the erosivity index of single storms in the Chinese Loess Plateau, which requires comparison of a large number of dependent correlations between rainfall-runoff factors and soil loss. Data observed at four gauging stations and five runoff experimental plots were presented. Based on Meng's test, which is widely used for comparing correlations between a dependent variable and a set of independent variables, two methods were proposed. The first method removes factors that are poorly correlated with soil loss from consideration in a stepwise way, while the second method performs pairwise comparisons that are adjusted using the Bonferroni correction. Among 12 rainfall factors, I30 (the maximum 30-minute rainfall intensity) has been suggested for use as the rainfall erosivity index, although I30 is as strongly correlated with soil loss as I20, EI10 (the product of the rainfall kinetic energy, E, and I10), EI20 and EI30 are. Runoff depth (total runoff volume normalized to drainage area) is more strongly correlated with soil loss than all other examined rainfall-runoff factors, including I30, peak discharge and many combined factors. Moreover, sediment concentrations of major sediment-producing events are independent of all examined rainfall-runoff factors. As a result, introducing additional factors adds little to the prediction accuracy of the single factor of runoff depth. Hence, runoff depth should be the best erosivity index at scales from plots to watersheds. Our findings can facilitate predictions of soil erosion in the Loess Plateau. Our methods provide a valuable tool for selecting a predictor from among a number of candidate variables on the basis of their correlations.

  12. Factors affecting quality of life in Hungarian adults with epilepsy: A comparison of four psychiatric instruments.

    PubMed

    Kováts, Daniella; Császár, Noémi; Haller, József; Juhos, Vera; Sallay, Viola; Békés, Judit; Kelemen, Anna; Fabó, Dániel; Rásonyi, György; Folyovich, András; Kurimay, Tamás

    2017-09-01

    We investigated the impact of 19 factors on quality of life in Hungarian patients with epilepsy. Wellbeing was evaluated with several inventories to investigate the impact of the factors in more detail. A cross-sectional study was performed in 170 patients. Wellbeing was evaluated with the WHO-5 Well-being Index (WHOQOL-5), the Diener Satisfaction with Life Scale (SwLS), and the Quality of Life in Epilepsy-31 Questionnaire (Qolie-31). We investigated their association with demographic characteristics, general health status, epilepsy, and its treatment. The impact of these factors on illness perception (Illness Perception Questionnaire, IPQ) was also studied. The four measures were highly significantly correlated with one another. In addition, the predictive power of the factors, as evaluated by multiple regression, was comparable across the four inventories. The factors explained 52%, 41%, 63% and 46% of the variance in WHOQOL-5, SwLS, Qolie-31, and IPQ scores, respectively. However, associations with particular factors were instrument-specific. The WHOQOL-5 was associated with factors indicative of general health. SwLS scores were associated with health-related and several demographic factors. Neither showed associations with epilepsy-related factors. All four categories of factors were associated with Qolie-31 and IPQ scores. Factors had an additive impact on IPQ, but not on Qolie-31. Our findings reveal interactions between the method of life-quality assessment and the factors identified as influencing life quality. This appears to be the first study to analyse the factors that influence illness perception in epilepsy patients, and it suggests that the IPQ may become a valuable tool in epilepsy research. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Menopausal Hot Flashes and Carotid Intima Media Thickness among Midlife Women

    PubMed Central

    Thurston, Rebecca C.; Chang, Yuefang; Barinas-Mitchell, Emma; Jennings, J. Richard; Landsittel, Doug P.; Santoro, Nanette; von Känel, Roland; Matthews, Karen A.

    2016-01-01

    Background and Purpose There has been a longstanding interest in the role of menopause and its correlates in the development of cardiovascular disease (CVD) in women. Menopausal hot flashes are experienced by most midlife women; emerging data link hot flashes to CVD risk indicators. We tested whether hot flashes, measured via state-of-the-art physiologic methods, were associated with greater subclinical atherosclerosis as assessed by carotid ultrasound. We considered the role of CVD risk factors and estradiol concentrations in these associations. Methods 295 nonsmoking women free of clinical CVD underwent ambulatory physiologic hot flash assessments; a blood draw; and carotid ultrasound measurement of IMT and plaque. Associations between hot flashes and subclinical atherosclerosis were tested in regression models controlling for CVD risk factors and estradiol. Results More frequent physiologic hot flashes were associated with higher carotid intima media thickness [IMT; for each additional hot flash: beta (standard error)=.004(.001), p=.0001; reported hot flash: beta (standard error)=.008(.002), p=.002, multivariable] and plaque [e.g., for each additional hot flash, odds ratio (95% confidence interval) plaque index ≥2=1.07(1.003–1.14, p=.04), relative to no plaque, multivariable] among women reporting daily hot flashes; associations were not accounted for by CVD risk factors or by estradiol. Among women reporting hot flashes, hot flashes accounted for more variance in IMT than most CVD risk factors. Conclusions Among women reporting daily hot flashes, frequent hot flashes may provide information about a woman’s vascular status beyond standard CVD risk factors and estradiol. Frequent hot flashes may mark a vulnerable vascular phenotype among midlife women. PMID:27834746

  14. On Aethalometer measurement uncertainties and an instrument correction factor for the Arctic

    NASA Astrophysics Data System (ADS)

    Backman, John; Schmeisser, Lauren; Virkkula, Aki; Ogren, John A.; Asmi, Eija; Starkweather, Sandra; Sharma, Sangeeta; Eleftheriadis, Konstantinos; Uttal, Taneil; Jefferson, Anne; Bergin, Michael; Makshtas, Alexander; Tunved, Peter; Fiebig, Markus

    2017-12-01

    Several types of filter-based instruments are used to estimate aerosol light absorption coefficients. Two significant results are presented based on Aethalometer measurements at six Arctic stations from 2012 to 2014. First, an alternative method of post-processing the Aethalometer data is presented, which reduces measurement noise and lowers the detection limit of the instrument more effectively than boxcar averaging. The biggest benefit of this approach can be achieved if instrument drift is minimised. Moreover, by using an attenuation threshold criterion for data post-processing, the relative uncertainty from the electronic noise of the instrument is kept constant. This approach results in a time series with a variable collection time (Δt) but with a constant relative uncertainty with regard to electronic noise in the instrument. An additional advantage of this method is that the detection limit of the instrument will be lowered at small aerosol concentrations at the expense of temporal resolution, whereas there is little to no loss in temporal resolution at high aerosol concentrations (>2.1-6.7 Mm-1 as measured by the Aethalometers). At high aerosol concentrations, minimising the detection limit of the instrument is less critical. Additionally, utilising co-located filter-based absorption photometers, a correction factor is presented for the Arctic that can be used in Aethalometer corrections available in the literature. A correction factor of 3.45 was calculated for low-elevation Arctic stations. This correction factor harmonises Aethalometer attenuation coefficients with light absorption coefficients as measured by the co-located light absorption photometers. Using one correction factor for Arctic Aethalometers has the advantage that measurements between stations become more inter-comparable.
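
    The attenuation-threshold idea can be sketched in a few lines: instead of averaging over fixed-length windows, each averaging window is closed once the filter attenuation has changed by a fixed amount, so every derived value carries the same attenuation change and hence the same relative electronic-noise contribution. The threshold and the synthetic attenuation series below are placeholders, not the paper's values.

```python
import numpy as np

def threshold_average(time, atn, d_atn):
    """Variable-collection-time averaging: close each window when the
    attenuation has changed by at least d_atn, and report the mean
    attenuation rate over that window."""
    out = []                  # (t_start, t_end, mean attenuation rate)
    i0 = 0
    for i in range(1, len(atn)):
        if abs(atn[i] - atn[i0]) >= d_atn:
            out.append((time[i0], time[i],
                        (atn[i] - atn[i0]) / (time[i] - time[i0])))
            i0 = i
    return out

t = np.arange(100.0)
# Low concentration: attenuation grows slowly -> long windows (coarse dt);
# high concentration: fast growth -> short windows (fine dt).
slow = threshold_average(t, 0.012 * t, d_atn=0.1)
fast = threshold_average(t, 0.060 * t, d_atn=0.1)
```

    The high-concentration series yields many more (shorter) windows than the low-concentration one, reproducing the trade-off described in the abstract: temporal resolution is preserved at high loadings and sacrificed for a lower detection limit at low loadings.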

  15. Spatial Characteristics and Driving Factors of Provincial Wastewater Discharge in China.

    PubMed

    Chen, Kunlun; Liu, Xiaoqiong; Ding, Lei; Huang, Gengzhi; Li, Zhigang

    2016-12-09

    Given the increasing pressure on the water environment, this study aims to clarify the overall status of wastewater discharge in China, including the spatio-temporal distribution characteristics of wastewater discharge and its driving factors, so as to provide a reference for developing "emission reduction" strategies in China and to discuss regional sustainable development and resource and environment policies. We utilized the Exploratory Spatial Data Analysis (ESDA) method to analyze the spatio-temporal distribution of total wastewater discharge across 31 provinces in China from 2002 to 2013. We then identified and classified the driving factors of wastewater discharge using the Logarithmic Mean Divisia Index (LMDI) method. Results indicate that: (1) total wastewater discharge increased steadily with socio-economic development, at an average growth rate of 5.3% per year; domestic wastewater is the main source of total wastewater discharge, and its volume exceeds that of industrial wastewater. The ESDA results show clear spatial differences among provinces: provinces with high wastewater discharge are mainly the developed coastal provinces such as Jiangsu and Guangdong, while provinces with low wastewater discharge, together with their surrounding areas, are mainly the less developed provinces of Northwest China; (2) the dominant factors affecting wastewater discharge are the economy and technological advance; the secondary factor is the efficiency of resource utilization, whose effect is unstable; population plays a less important role. The dominant driving factors across the 31 provinces fall into three types: two-factor dominant, three-factor leading, and four-factor antagonistic. In addition, proposals for reducing wastewater discharge are provided on the basis of these three types.
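
    The LMDI step can be sketched for a single region. For a quantity expressed as a product of factors, the additive LMDI contribution of each factor weights its log-change by the logarithmic mean of the start and end totals, and the contributions sum exactly to the total change. The factor set below (population, GDP per capita, wastewater per unit GDP) and all numbers are hypothetical stand-ins, not the paper's decomposition.

```python
import math

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi_additive(factors0, factors1):
    """Additive LMDI decomposition of the change in V = product of factors.
    Returns one contribution per factor; the contributions sum exactly to
    V1 - V0 (single-region sketch; the paper aggregates 31 provinces)."""
    v0, v1 = math.prod(factors0), math.prod(factors1)
    w = logmean(v1, v0)
    return [w * math.log(f1 / f0) for f0, f1 in zip(factors0, factors1)]

# Hypothetical: population, GDP per capita, wastewater intensity of GDP
f2002 = (1.28e9, 1.2e4, 3.0e-5)
f2013 = (1.36e9, 4.3e4, 1.1e-5)
effects = lmdi_additive(f2002, f2013)
```

    The exact additivity (no residual term) is the reason LMDI is preferred over Laspeyres-style index decomposition in studies like this one.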

  16. Risk Factor Detection as a Metric of STARHS Performance for HIV Incidence Surveillance Among Female Sex Workers in Kigali, Rwanda

    PubMed Central

    Braunstein, Sarah L; van de Wijgert, Janneke H; Vyankandondera, Joseph; Kestelyn, Evelyne; Ntirushwa, Justin; Nash, Denis

    2012-01-01

    Background: The epidemiologic utility of STARHS hinges not only on producing accurate estimates of HIV incidence, but also on identifying risk factors for recent HIV infection. Methods: As part of an HIV seroincidence study, 800 Rwandan female sex workers (FSW) were HIV tested, with those testing positive further tested by BED-CEIA (BED) and AxSYM Avidity Index (Ax-AI) assays. A sample of HIV-negative (N=397) FSW were followed prospectively for HIV seroconversion. We compared estimates of risk factors for: 1) prevalent HIV infection; 2) recently acquired HIV infection (RI) based on three different STARHS classifications (BED alone, Ax-AI alone, BED/Ax-AI combined); and 3) prospectively observed seroconversion. Results: There was mixed agreement in risk factors between methods. HSV-2 coinfection and recent STI treatment were associated with both prevalent HIV infection and all three measures of recent infection. A number of risk factors were associated only with prevalent infection, including widowhood, history of forced sex, regular alcohol consumption, prior imprisonment, and current breastfeeding. Number of sex partners in the last 3 months was associated with recent infection based on BED/Ax-AI combined, but not other STARHS-based recent infection outcomes or prevalent infection. Risk factor estimates for prospectively observed seroconversion differed in magnitude and direction from those for recent infection via STARHS. Conclusions: Differences in risk factor estimates by each method could reflect true differences in risk factors between the prevalent, recently, or newly infected populations, the effect of study interventions (among those followed prospectively), or assay misclassification. Similar investigations in other populations/settings are needed to further establish the epidemiologic utility of STARHS for identifying risk factors, in addition to incidence rate estimation. PMID:23056162

  17. Compressed sensing for rapid late gadolinium enhanced imaging of the left atrium: A preliminary study.

    PubMed

    Kamesh Iyer, Srikant; Tasdizen, Tolga; Burgon, Nathan; Kholmovski, Eugene; Marrouche, Nassir; Adluru, Ganesh; DiBella, Edward

    2016-09-01

    Current late gadolinium enhancement (LGE) imaging of left atrial (LA) scar or fibrosis is relatively slow and requires 5-15 min to acquire an undersampled (R=1.7) 3D navigated dataset. The GeneRalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) based parallel imaging method is the current clinical standard for accelerating 3D LGE imaging of the LA and permits an acceleration factor of ~R=1.7. Two compressed sensing (CS) methods have been developed to achieve higher acceleration factors: a patch-based collaborative filtering technique tested with acceleration factor R~3, and a technique that uses a 3D radial stack-of-stars acquisition pattern (R~1.8) with a 3D total variation constraint. The long reconstruction time of these CS methods makes them unwieldy to use, especially the patch-based collaborative filtering technique. In addition, the effect of CS techniques on the quantification of the percentage of scar/fibrosis is not known. We sought to develop a practical compressed sensing method for imaging the LA at high acceleration factors. In order to develop a clinically viable method with a short reconstruction time, a Split Bregman (SB) reconstruction method with 3D total variation (TV) constraints was developed and implemented. The method was tested on 8 atrial fibrillation patients (4 pre-ablation and 4 post-ablation datasets). Blur metric, normalized mean squared error and peak signal-to-noise ratio were used as metrics to analyze the quality of the reconstructed images. Quantification of the extent of LGE was performed on the undersampled images and compared with the fully sampled images. Quantification of scar from post-ablation datasets and of fibrosis from pre-ablation datasets showed that acceleration factors up to R~3.5 gave good 3D LGE images of the LA wall, using a 3D TV constraint and constrained SB methods. This corresponds to halving the scan time compared to currently used GRAPPA methods. Reconstruction of 3D LGE images using the SB method was over 20 times faster than standard gradient descent methods. Copyright © 2016 Elsevier Inc. All rights reserved.
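
    The role of the total-variation constraint can be illustrated with a much smaller stand-in: plain gradient descent on a smoothed TV objective, 0.5*||u - f||^2 + lam*TV_eps(u), applied to a noisy 2D image. This toy is not the paper's Split Bregman solver and does not operate on undersampled 3D k-space data; it only shows how a TV penalty suppresses noise while keeping edges, and all parameters are illustrative.

```python
import numpy as np

def tv_denoise(f, lam=0.1, eps=1e-2, step=0.1, n_iter=300):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum(sqrt(|grad u|^2 + eps)),
    a smoothed total-variation objective (illustrative stand-in for the
    Split Bregman TV solver used in the paper)."""
    u = f.copy()
    for _ in range(n_iter):
        gx = np.diff(u, axis=0, append=u[-1:, :])      # forward differences
        gy = np.diff(u, axis=1, append=u[:, -1:])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / mag, gy / mag                    # normalized gradient
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        u -= step * ((u - f) - lam * div)              # data term + TV term
    return u

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                                # a bright square
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
```

    Split Bregman reaches the same kind of TV-regularized solution far faster by splitting the gradient variable out and alternating cheap sub-problems, which is the source of the 20-fold reconstruction speed-up reported above.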

  18. Distance dependence in photoinduced intramolecular electron transfer. Additional remarks and calculations

    NASA Astrophysics Data System (ADS)

    Larsson, Sven; Volosov, Andrey

    1987-12-01

    Rate constants for photoinduced intramolecular electron transfer are calculated for four of the molecules studied by Hush et al. The electronic factor is obtained in quantum chemical calculations using the CNDO/S method. The results agree reasonably well with experiments for the forward reaction. Possible reasons for the disagreement for the charge recombination process are offered.

  19. Middle School Students' Perceptions of Effective Motivation and Preparation Factors for High-Stakes Tests

    ERIC Educational Resources Information Center

    Hoffman, Lynn M.; Nottis, Katharyn E. K.

    2008-01-01

    This mixed-methods study examines young adolescents' perceptions of strategies implemented before a state-mandated "high-stakes" test. Survey results for Grade 8 students (N = 215) are analyzed by sex, academic group, and preparation team. Letters to the principal are reviewed for convergence and additional themes. Although students were most…

  20. Battery condenser system PM10 emission factors and rates for cotton gins: Method 201A PM10 sizing cyclones

    USDA-ARS?s Scientific Manuscript database

    This manuscript is part of a series of manuscripts that characterize cotton gin emissions from the standpoint of stack sampling. The impetus behind this project was the urgent need to collect additional cotton gin emissions data to address current regulatory issues. A key component of this study ...

  1. Ozone dosing alters the biological potential and therapeutic outcomes of plasma rich in growth factors.

    PubMed

    Anitua, E; Zalduendo, M M; Troya, M; Orive, G

    2015-04-01

    Until now, ozone has been used in a rather empirical way. This in-vitro study investigates, for the first time, whether different ozone treatments of plasma rich in growth factors (PRGF) alter the biological properties and outcomes of this autologous platelet-rich plasma. Human plasma rich in growth factors was treated with ozone using one of the following protocols: a continuous-flow method, or a syringe method in which constant volumes of ozone and PRGF were mixed. In both cases, ozone was added before, during and after the addition of calcium chloride. Three ozone concentrations within the therapeutic range (20, 40 and 80 μg/mL) were tested. Fibrin clot properties, growth factor content and the proliferative effect on primary osteoblasts and gingival fibroblasts were evaluated. Ozone treatment of PRGF using the continuous-flow protocol impaired formation of the fibrin scaffold, drastically reduced the levels of growth factors and significantly decreased the proliferative potential of PRGF on primary osteoblasts and gingival fibroblasts. In contrast, treatment of PRGF with ozone using the syringe method, before, during and after the coagulation process, did not alter the biological outcomes of the autologous therapy. These findings suggest that the ozone dose and the way that ozone combines with PRGF may alter the biological potential and therapeutic outcomes of PRGF. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. Use of reflectance spectrophotometry and colorimetry in a general linear model for the determination of the age of bruises.

    PubMed

    Hughes, Vanessa K; Langlois, Neil E I

    2010-12-01

    Bruises can have medicolegal significance, such that the age of a bruise may be an important issue. This study sought to determine whether colorimetry or reflectance spectrophotometry could be employed to objectively estimate the age of bruises. Based on a previously described method, reflectance spectrophotometric scans were obtained from bruises using a Cary 100 Bio spectrophotometer fitted with a fibre-optic reflectance probe. Measurements were taken from the bruise and a control area. Software was used to calculate the first derivative at 490 and 480 nm; the proportion of oxygenated hemoglobin was calculated using an isobestic point method, and a software application converted the scan data into colorimetry data. In addition, data were recorded on factors that might be associated with the determination of the age of a bruise: subject age, subject sex, degree of trauma, bruise size, skin color, body build, and depth of bruise. From 147 subjects, 233 reflectance spectrophotometry scans were obtained for analysis. The ages of the bruises ranged from 0.5 to 231.5 h. A General Linear Model analysis method was used. This revealed that colorimetric measurement of the yellowness of a bruise accounted for 13% of the variation in bruise age. By incorporating the other recorded data (as above), yellowness could predict up to 32% of the age of a bruise, implying that 68% of the variation was dependent on other factors. However, critical appraisal of the model revealed that the colorimetry method of determining the age of a bruise was affected by skin tone and required a measure of the proportion of oxygenated hemoglobin, which is obtained by spectrophotometric methods. Using spectrophotometry, the first derivative at 490 nm alone accounted for 18% of the bruise age estimate. When additional factors (subject sex, bruise depth and oxygenation of hemoglobin) were included in the General Linear Model, this increased to 31%, implying that 69% of the variation was dependent on other factors. This indicates that spectrophotometry would be of more use than colorimetry for assessing the age of bruises, but the spectrophotometric method used needs to be refined to provide useful data regarding the estimated age of a bruise. Such refinements might include the use of multiple readings or a comprehensive mathematical model of the optics of skin.
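
    The "accounted for 13%" statements above are statements about R², the fraction of variance in bruise age explained by a linear model. A minimal sketch of that calculation, on synthetic data rather than the study's measurements (the coefficients and noise level below are invented):

```python
import numpy as np

def r_squared(X, y):
    """Fraction of variance in y explained by an ordinary least-squares
    linear model on the columns of X (with intercept)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Synthetic illustration: "age" driven partly by a yellowness measure and
# a depth covariate, plus large unexplained variation.
rng = np.random.default_rng(0)
n = 147
yellowness = rng.uniform(0, 1, n)
depth = rng.uniform(0, 1, n)
age = 40 * yellowness + 25 * depth + rng.normal(0, 30, n)

r2_single = r_squared(yellowness[:, None], age)
r2_multi = r_squared(np.column_stack([yellowness, depth]), age)
```

    In-sample R² can only grow as predictors are added, which matches the pattern in the study: the first derivative at 490 nm alone explained 18%, rising to 31% once sex, bruise depth and hemoglobin oxygenation were included.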

  3. Factors affecting the overcrowding in outpatient healthcare

    PubMed Central

    Bahadori, Mohammadkarim; Teymourzadeh, Ehsan; Ravangard, Ramin; Raadabadi, Mehdi

    2017-01-01

    Background: The expansion of outpatient services and the desire to provide more outpatient care than inpatient care create problems such as overcrowding in outpatient clinics. Given the importance of overcrowding in outpatient clinics, this qualitative study aimed to determine the factors influencing overcrowding in the specialty and subspecialty clinic of a teaching hospital. Materials and Methods: This was a qualitative study conducted in the specialty and subspecialty clinic of a hospital using the content analysis method in the period of January to March 2014. The study population was all managers and heads of the outpatient wards. The studied sample consisted of 22 managers of the clinic wards who were selected using the purposive sampling method. The required data were collected using semi-structured interviews and analyzed using conventional content analysis and the MAXQDA 10.0 software. Results: Three themes were identified as the main factors affecting overcrowding: internal positive factors, internal negative factors, and external factors. Conclusions: Despite the efforts made to eliminate overcrowding, reduce waiting times and increase patients' access to services, the problem of overcrowding has remained unresolved. Strategies such as clarifying the clinic's working processes for staff and patients, clarifying the relationships between the clinic and other wards (especially the emergency department), and using a simple triage system on the patients' arrival at the clinic are recommended. PMID:28546986

  4. Effects of additional data on Bayesian clustering.

    PubMed

    Yamazaki, Keisuke

    2017-10-01

    Hierarchical probabilistic models, such as mixture models, are used for cluster analysis. These models have two types of variables: observable and latent. In cluster analysis, the latent variable is estimated, and it is expected that additional information will improve the accuracy of the estimation of the latent variable. Many proposed learning methods are able to use additional data; these include semi-supervised learning and transfer learning. However, from a statistical point of view, a complex probabilistic model that encompasses both the initial and additional data might be less accurate due to having a higher-dimensional parameter. The present paper presents a theoretical analysis of the accuracy of such a model and clarifies which factor has the greatest effect on its accuracy, the advantages of obtaining additional data, and the disadvantages of increasing the complexity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Determination of rivaroxaban in patient's plasma samples by anti-Xa chromogenic test associated to High Performance Liquid Chromatography tandem Mass Spectrometry (HPLC-MS/MS).

    PubMed

    Derogis, Priscilla Bento Matos; Sanches, Livia Rentas; de Aranda, Valdir Fernandes; Colombini, Marjorie Paris; Mangueira, Cristóvão Luis Pitangueira; Katz, Marcelo; Faulhaber, Adriana Caschera Leme; Mendes, Claudio Ernesto Albers; Ferreira, Carlos Eduardo Dos Santos; França, Carolina Nunes; Guerra, João Carlos de Campos

    2017-01-01

    Rivaroxaban is an oral direct factor Xa inhibitor, therapeutically indicated in the treatment of thromboembolic diseases. As with other new oral anticoagulants, routine monitoring of rivaroxaban is not necessary, but it is important in some clinical circumstances. In our study, a high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method was validated to measure rivaroxaban plasma concentrations. Our method used a simple sample preparation (protein precipitation) and a fast chromatographic run. A precise and accurate method was developed, with a linear range from 2 to 500 ng/mL and a lower limit of quantification of 4 pg on column. The new method was compared to a reference method (anti-factor Xa activity), and the two presented a good correlation (r = 0.98, p < 0.001). In addition, we validated hemolytic, icteric and lipemic plasma samples for rivaroxaban measurement by HPLC-MS/MS without interference. The chromogenic and HPLC-MS/MS methods were highly correlated and should be used as clinical tools for drug monitoring. The method was applied successfully in a group of 49 real-life patients, which allowed an accurate determination of rivaroxaban at peak and trough levels.

  6. Using a generalized additive model with autoregressive terms to study the effects of daily temperature on mortality

    PubMed Central

    2012-01-01

    Background The Generalized Additive Model (GAM) provides a flexible and effective technique for modelling nonlinear time series in studies of the health effects of environmental factors. However, GAM assumes that errors are mutually independent, while time series can be correlated at adjacent time points. Here, a GAM with autoregressive terms (GAMAR) is introduced to fill this gap. Methods Parameters in GAMAR are estimated by maximum partial likelihood using a modified Newton's method, and the difference between GAM and GAMAR is demonstrated using two simulation studies and a real data example. The generalized additive mixed model (GAMM) is also compared to GAMAR in simulation study 1. Results In the simulation studies, the bias of the mean estimates from GAM and GAMAR is similar, but GAMAR has better coverage and smaller relative error. While the results from GAMM are similar to those from GAMAR, the estimation procedure of GAMM is much slower. In the case study, the Pearson residuals from the GAM are correlated, while those from GAMAR are quite close to white noise. In addition, the estimates of the temperature effects differ between GAM and GAMAR. Conclusions GAMAR incorporates both explanatory variables and AR terms, so it can quantify the nonlinear impact of environmental factors on health outcomes as well as the serial correlation between observations. It can be a useful tool in environmental epidemiological studies. PMID:23110601
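    The core idea, that adding an autoregressive term whitens correlated residuals, can be illustrated with a minimal two-step sketch: an ordinary least-squares fit plus a lagged-residual regressor, standing in for the paper's partial-likelihood estimator. All data below are simulated and the setup is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)

# AR(1) errors: e_t = 0.6 * e_{t-1} + w_t
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal()
y = 2.0 + 1.5 * x + e

def lag1_autocorr(r):
    r = r - r.mean()
    return float(r[1:] @ r[:-1] / (r @ r))

# Plain linear fit (stand-in for a GAM with independent errors):
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
rho_gam = lag1_autocorr(resid)

# Refit with the lagged residual as an extra regressor (AR-term analogue):
Xa = np.column_stack([np.ones(n - 1), x[1:], resid[:-1]])
beta_a = np.linalg.lstsq(Xa, y[1:], rcond=None)[0]
resid_a = y[1:] - Xa @ beta_a
rho_gamar = lag1_autocorr(resid_a)

print(rho_gam, rho_gamar)  # lag-1 residual autocorrelation drops toward white noise
```

With serially correlated errors, the plain fit leaves strongly autocorrelated residuals, while the AR-augmented fit leaves residuals much closer to white noise, mirroring the Pearson-residual comparison reported in the case study.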

  7. Test methods for environment-assisted cracking

    NASA Astrophysics Data System (ADS)

    Turnbull, A.

    1992-03-01

    Test methods for assessing environment-assisted cracking of metals in aqueous solution are described; their advantages and disadvantages are examined, and the interrelationship between results from different test methods is discussed. Differences in susceptibility to cracking occasionally observed between the various mechanical test methods often arise from variation in environmental parameters between the test conditions and from the lack of adequate specification, monitoring, and control of environmental variables. Time is also a significant factor when comparing results from short-term tests with long-exposure tests. In addition to these factors, intrinsic differences in the important mechanical variables, such as strain rate, associated with the various mechanical test methods can change the apparent sensitivity of the material to stress corrosion cracking. The increasing economic pressure for more accelerated testing is in conflict with the characteristic time dependence of corrosion processes. Unreliable results may be inevitable in some cases, but improved understanding of mechanisms and the development of mechanistically based models of environment-assisted cracking that incorporate the key mechanical, material, and environmental variables can provide the framework for a more realistic interpretation of short-term data.

  8. The ACCE method: an approach for obtaining quantitative or qualitative estimates of residual confounding that includes unmeasured confounding

    PubMed Central

    Smith, Eric G.

    2015-01-01

    Background: Nonrandomized studies typically cannot account for confounding from unmeasured factors. Method: A method is presented that exploits the recently identified phenomenon of "confounding amplification" to produce, in principle, a quantitative estimate of the total residual confounding resulting from both measured and unmeasured factors. Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable (or variables) that substantially predicts treatment exposure. Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results: Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method's requirements and assumptions are met. Previously published data are used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, it appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations: Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method's estimates (although bootstrapping is one plausible approach). Conclusions: To this author's knowledge, it has not previously been suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is straightforward. Its routine usefulness, however, has not yet been established, nor has the method been fully validated. Rapid further investigation of this novel method is clearly indicated, given the potential value of its quantitative or qualitative output. PMID:25580226

  9. A methodology for calculating transport emissions in cities with limited traffic data: Case study of diesel particulates and black carbon emissions in Murmansk.

    PubMed

    Kholod, N; Evans, M; Gusev, E; Yu, S; Malyshev, V; Tretyakova, S; Barinov, A

    2016-03-15

    This paper presents a methodology for calculating exhaust emissions from on-road transport in cities with low-quality traffic data and outdated vehicle registries. The methodology consists of data collection approaches and emission calculation methods. For data collection, the paper suggests using the video survey and parking lot survey methods developed for the International Vehicle Emissions model. Additional sources of information include data from the largest transportation companies, vehicle inspection stations, and official vehicle registries. The paper suggests using the European Computer Programme to Calculate Emissions from Road Transport (COPERT) 4 model to calculate emissions, especially in countries that have implemented European emission standards. Where available, local emission factors should be used instead of the default COPERT emission factors. The paper also suggests additional steps for calculating emissions from diesel vehicles only. We applied this methodology to calculate black carbon emissions from diesel on-road vehicles in Murmansk, Russia. The results show that diesel vehicles emitted 11.7 tons of black carbon in 2014. The main factors determining the level of emissions are the structure of the vehicle fleet and the level of vehicle emission controls. Vehicles without emission controls emit about 55% of black carbon emissions. Copyright © 2015 Elsevier B.V. All rights reserved.
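    The bottom-up structure of such an inventory is simple to sketch: total emissions are the sum over vehicle categories of fleet size x annual mileage x emission factor. The categories, counts, and gram-per-kilometre factors below are invented placeholders, not actual COPERT 4 values or Murmansk data.

```python
# Bottom-up exhaust-emission estimate: E = sum_k N_k * M_k * EF_k,
# with N_k the fleet size, M_k annual mileage (km/vehicle), and EF_k an
# emission factor (g/km). All values are made-up placeholders.
fleet = {
    # category: (vehicle count, annual km per vehicle, BC g/km)
    "diesel_car_no_control": (5000, 15000, 0.08),
    "diesel_car_euro4":      (8000, 15000, 0.02),
    "diesel_bus_no_control": (300, 60000, 0.50),
    "diesel_bus_euro4":      (200, 60000, 0.10),
}

def total_emissions_tons(fleet):
    grams = sum(n * km * ef for n, km, ef in fleet.values())
    return grams / 1e6  # grams -> metric tons

print(round(total_emissions_tons(fleet), 1))
```

Swapping in locally measured emission factors for the defaults, as the paper recommends, only changes the EF_k column of such a table.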

  10. Provincial variation of carbon emissions from bituminous coal: Influence of inertinite and other factors

    USGS Publications Warehouse

    Quick, J.C.; Brill, T.

    2002-01-01

    We observe a 1.3 kg C/net GJ variation of carbon emissions due to inertinite abundance in some commercially available bituminous coal. An additional 0.9 kg C/net GJ variation of carbon emissions is expected due to the extent of coalification through the bituminous rank stages. Each percentage of sulfur in bituminous coal reduces carbon emissions by about 0.08 kg C/net GJ. Other factors, such as mineral content, liptinite abundance and individual macerals, also influence carbon emissions, but their quantitative effect is less certain. The large range of carbon emissions within the bituminous rank class suggests that rank-specific carbon emission factors are provincial rather than global. Although carbon emission factors that better account for this provincial variation might be calculated, we show that the data used for this calculation may vary according to the methods used to sample and analyze coal. Provincial variation of carbon emissions and the use of different coal sampling and analytical methods complicate the verification of national greenhouse gas inventories. Published by Elsevier Science B.V.
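    The sulfur correction quoted above is a linear adjustment and can be sketched directly; the 25.8 kg C/net GJ base value below is a hypothetical rank-specific factor chosen only for illustration, while the 0.08 per-percent-sulfur reduction comes from the abstract.

```python
# Illustrative adjustment of a coal carbon emission factor (kg C per net GJ)
# for sulfur content, using the ~0.08 kg C/net GJ per percent sulfur figure
# quoted above. The 25.8 base value is a hypothetical bituminous figure.
def adjusted_cef(base_kgC_per_GJ, sulfur_pct, per_pct_reduction=0.08):
    return base_kgC_per_GJ - per_pct_reduction * sulfur_pct

base = 25.8                        # hypothetical rank-specific factor
print(adjusted_cef(base, 2.5))     # 2.5% sulfur coal
```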

  11. Pathophysiology, risk factors, and screening methods for prediabetes in women with polycystic ovary syndrome

    PubMed Central

    Gourgari, Evgenia; Spanakis, Elias; Dobs, Adrian Sandra

    2016-01-01

    Polycystic ovary syndrome (PCOS) is a syndrome associated with insulin resistance (IR), obesity, infertility, and increased cardiometabolic risk. This is a descriptive review of several mechanisms that can explain the IR among women with PCOS, other risk factors for the development of diabetes, and the screening methods used for the detection of glucose intolerance in women with PCOS. A few mechanisms can explain IR in women with PCOS, such as obesity, insulin receptor signaling defects, and inhibition of insulin-mediated glucose uptake in adipocytes. Women with PCOS have additional risk factors for the development of glucose intolerance, such as a family history of diabetes, use of oral contraceptives, anovulation, and age. The Androgen Society in 2007 and the Endocrine Society in 2013 recommended using the oral glucose tolerance test as a screening tool for abnormal glucose tolerance in all women with PCOS. The approach to the detection of glucose intolerance among women with PCOS varies among health care providers. Large prospective studies are still needed for the development of guidelines with strong evidence. When assessing the risk of future diabetes in women with PCOS, it is important to take into account the method used for screening as well as other risk factors that these women might have. PMID:27570464

  12. Pathophysiology, risk factors, and screening methods for prediabetes in women with polycystic ovary syndrome.

    PubMed

    Gourgari, Evgenia; Spanakis, Elias; Dobs, Adrian Sandra

    2016-01-01

    Polycystic ovary syndrome (PCOS) is a syndrome associated with insulin resistance (IR), obesity, infertility, and increased cardiometabolic risk. This is a descriptive review of several mechanisms that can explain the IR among women with PCOS, other risk factors for the development of diabetes, and the screening methods used for the detection of glucose intolerance in women with PCOS. A few mechanisms can explain IR in women with PCOS, such as obesity, insulin receptor signaling defects, and inhibition of insulin-mediated glucose uptake in adipocytes. Women with PCOS have additional risk factors for the development of glucose intolerance, such as a family history of diabetes, use of oral contraceptives, anovulation, and age. The Androgen Society in 2007 and the Endocrine Society in 2013 recommended using the oral glucose tolerance test as a screening tool for abnormal glucose tolerance in all women with PCOS. The approach to the detection of glucose intolerance among women with PCOS varies among health care providers. Large prospective studies are still needed for the development of guidelines with strong evidence. When assessing the risk of future diabetes in women with PCOS, it is important to take into account the method used for screening as well as other risk factors that these women might have.

  13. Continued investigation of solid propulsion economics. Task 1B: Large solid rocket motor case fabrication methods - Supplement process complexity factor cost technique

    NASA Technical Reports Server (NTRS)

    Baird, J.

    1967-01-01

    This supplement to Task 1B, Large Solid Rocket Motor Case Fabrication Methods, supplies additional supporting cost data and discusses in detail the methodology that was applied to the task. For the case elements studied, cost was found to be directly proportional to the Process Complexity Factor (PCF). The PCF was obtained for each element by identifying unit processes that are common to the elements and their alternative manufacturing routes, assigning a weight to each unit process, and summing the weighted counts. In three instances of actual manufacture, the actual cost per pound equaled the cost estimate based on PCF per pound, but this supplement recognizes that the methodology is of limited, rather than general, application.
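    The PCF computation described above is a weighted count and can be sketched directly; the unit-process names, weights, and route below are invented for illustration, not taken from the report.

```python
# Toy Process Complexity Factor: identify unit processes, weight each one,
# and sum the weighted counts. Names and weights are hypothetical.
weights = {"forming": 3, "welding": 5, "heat_treat": 4, "inspection": 1}

def pcf(process_counts, weights):
    return sum(weights[p] * n for p, n in process_counts.items())

route_a = {"forming": 2, "welding": 4, "inspection": 6}  # hypothetical route
print(pcf(route_a, weights))  # cost is modeled as proportional to this value
```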

  14. Automatic segmentation of brain MRI in high-dimensional local and non-local feature space based on sparse representation.

    PubMed

    Khalilzadeh, Mohammad Mahdi; Fatemizadeh, Emad; Behnam, Hamid

    2013-06-01

    Automatic extraction of the varying regions of magnetic resonance images is required as a prior step in a diagnostic intelligent system. A sparse representation and high-dimensional features are obtained from a learned dictionary. Classification is performed by computing the reconstruction error of each pixel both locally and non-locally. The results acquired from real and simulated images are superior to those of the best MRI segmentation methods with regard to stability. In addition, exact segmentation is achieved through a formula combining the distance and sparsity factors, and segmentation is performed automatically by using the sparsity factor in unsupervised clustering methods, whose results have been improved. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Leading for the long haul: a mixed-method evaluation of the Sustainment Leadership Scale (SLS).

    PubMed

    Ehrhart, Mark G; Torres, Elisa M; Green, Amy E; Trott, Elise M; Willging, Cathleen E; Moullin, Joanna C; Aarons, Gregory A

    2018-01-19

    Despite our progress in understanding the organizational context for implementation, and specifically the role of leadership in implementation, the role of leadership in sustainment has received little attention. This paper took a mixed-method approach to examine leadership during the sustainment phase of the Exploration, Preparation, Implementation, Sustainment (EPIS) framework. Utilizing the Implementation Leadership Scale as a foundation, we sought to develop a short, practical measure of sustainment leadership that can be used for both applied and research purposes. Data for this study were collected as part of a larger mixed-method study of the sustainment of an evidence-based intervention, SafeCare®. Quantitative data were collected from 157 providers using web-based surveys. Confirmatory factor analysis was used to examine the factor structure of the Sustainment Leadership Scale (SLS). Qualitative data were collected from 95 providers who participated in one of 15 focus groups. A framework approach guided qualitative data analysis. Mixed-method integration was also utilized to examine the convergence of the quantitative and qualitative findings. Confirmatory factor analysis supported the a priori higher-order factor structure of the SLS, with subscales indicating a single higher-order sustainment leadership factor. The SLS demonstrated excellent internal consistency reliability. Qualitative analyses offered support for the dimensions of sustainment leadership captured by the quantitative measure, in addition to uncovering a fifth possible factor, available leadership. This study found qualitative and quantitative support for the pragmatic SLS measure. The SLS can be used to assess first-level leaders, to understand how staff perceive leadership during sustainment, and to suggest areas where leaders could direct more attention in order to increase the likelihood that evidence-based interventions (EBIs) are institutionalized into the normal functioning of the organization.

  16. Exploration of the factor structure of the Kirton Adaption-Innovation Inventory using bootstrapping estimation.

    PubMed

    Im, Subin; Min, Soonhong

    2013-04-01

    Exploratory factor analyses of the Kirton Adaption-Innovation Inventory (KAI), which serves to measure individual cognitive styles, generally indicate three factors: sufficiency of originality, efficiency, and rule/group conformity. In contrast, a 2005 study by Im and Hu using confirmatory factor analysis supported a four-factor structure, dividing the sufficiency of originality dimension into two subdimensions, idea generation and preference for change. This study extends Im and Hu's (2005) study of a derived version of the KAI by providing additional evidence of the four-factor structure. Specifically, the authors test the robustness of the parameter estimates to violation of the normality assumptions in the sample using bootstrap methods. A bias-corrected confidence interval bootstrapping procedure, conducted on a sample of 356 participants (members of the Arkansas Household Research Panel, with middle SES and an average age of 55.6 yr., SD = 13.9), showed that the four-factor model with two subdimensions of sufficiency of originality fits the data significantly better than the three-factor model under non-normality conditions.
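    The bootstrap machinery underlying such robustness checks is straightforward to sketch. The example below uses a plain percentile bootstrap (simpler than the bias-corrected interval used in the study) on simulated data with the panel's reported sample size and age statistics; it is an illustration of the resampling idea, not of the KAI model fitting.

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulated data matching the reported n, mean age, and SD (illustrative only).
sample = rng.normal(loc=55.6, scale=13.9, size=356)

def bootstrap_ci(data, stat=np.mean, n_boot=2000, alpha=0.05, rng=rng):
    # Resample with replacement, re-estimate the statistic each time, and
    # take the empirical alpha/2 and 1 - alpha/2 percentiles.
    boots = [stat(rng.choice(data, size=len(data), replace=True))
             for _ in range(n_boot)]
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

lo, hi = bootstrap_ci(sample)
print(lo, hi)  # interval brackets the sample mean
```

The same resampling loop works for any statistic (a factor loading, a fit index) by swapping the `stat` argument; the bias-corrected variant adjusts the percentile cutoffs using the bootstrap distribution's skew.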

  17. Using the theory of planned behavior to determine factors influencing processed foods consumption behavior

    PubMed Central

    Kim, Og Yeon; Shim, Soonmi

    2014-01-01

    BACKGROUND/OBJECTIVES The purpose of this study was to identify how the level of information affected intention, using the Theory of Planned Behavior. SUBJECTS/METHODS The study surveyed visitors to diverse community centers and shopping malls in Seoul, yielding N = 209 datasets. To compare processed food consumption behavior, we divided the sample into two groups based on the level of information about food additives (whether respondents felt that information on food additives was sufficient or not). We analyzed differences between the sufficient-information and insufficient-information groups in attitudes toward food additives and toward purchasing processed foods, subjective norms, perceived behavioral control, and behavioral intentions toward processed foods. RESULTS The results confirmed that more than 78% of respondents thought information on food additives was insufficient. However, the group who felt information was sufficient had more positive attitudes about consuming processed foods, and stronger behavioral intentions, than the group who thought it was inadequate. This study found that people who consider themselves to have sufficient information on food additives tend to have more positive attitudes toward processed foods and a stronger intention to consume them. CONCLUSIONS This study suggests an increasing need for nutrition education on the appropriate use of processed foods. Designing useful nutrition education requires a good understanding of the factors that influence processed food consumption. PMID:24944779

  18. High-order interactions observed in multi-task intrinsic networks are dominant indicators of aberrant brain function in schizophrenia

    PubMed Central

    Plis, Sergey M; Sui, Jing; Lane, Terran; Roy, Sushmita; Clark, Vincent P; Potluru, Vamsi K; Huster, Rene J; Michael, Andrew; Sponheim, Scott R; Weisend, Michael P; Calhoun, Vince D

    2013-01-01

    Identifying the complex activity relationships present in rich, modern neuroimaging data sets remains a key challenge for neuroscience. The problem is hard because (a) the underlying spatial and temporal networks may be nonlinear and multivariate and (b) the observed data may be driven by numerous latent factors. Further, modern experiments often produce data sets containing multiple stimulus contexts or tasks processed by the same subjects. Fusing such multi-session data sets may reveal additional structure, but raises further statistical challenges. We present a novel analysis method for extracting complex activity networks from such multifaceted imaging data sets. Compared to previous methods, we choose a new point in the trade-off space, sacrificing detailed generative probability models and explicit latent variable inference in order to achieve robust estimation of multivariate, nonlinear group factors (“network clusters”). We apply our method to identify relationships of task-specific intrinsic networks in schizophrenia patients and control subjects from a large fMRI study. After identifying network-clusters characterized by within- and between-task interactions, we find significant differences between patient and control groups in interaction strength among networks. Our results are consistent with known findings of brain regions exhibiting deviations in schizophrenic patients. However, we also find high-order, nonlinear interactions that discriminate groups but that are not detected by linear, pair-wise methods. We additionally identify high-order relationships that provide new insights into schizophrenia but that have not been found by traditional univariate or second-order methods. Overall, our approach can identify key relationships that are missed by existing analysis methods, without losing the ability to find relationships that are known to be important. PMID:23876245

  19. Inertial mass sensing with low Q-factor vibrating microcantilevers

    NASA Astrophysics Data System (ADS)

    Adhikari, S.

    2017-10-01

    Mass sensing using micromechanical cantilever oscillators has been established as a promising approach. The scientific principle underpinning this technique is the shift in the resonance frequency caused by the additional mass in the dynamic system. This approach relies on the fact that the Q-factor of the underlying oscillator is high enough so that it does not significantly affect the resonance frequencies. We consider the case when the Q-factor is low to the extent that the effect of damping is prominent. It is shown that the mass sensing can be achieved using a shift in the damping factor. We prove that the shift in the damping factor is of the same order as that of the resonance frequency. Based on this crucial observation, three new approaches have been proposed, namely, (a) mass sensing using frequency shifts in the complex plane, (b) mass sensing from damped free vibration response in the time domain, and (c) mass sensing from the steady-state response in the frequency domain. Explicit closed-form expressions relating absorbed mass with changes in the measured dynamic properties have been derived. The rationale behind each new method has been explained using non-dimensional graphical illustrations. The new mass sensing approaches using damped dynamic characteristics can expand the current horizon of micromechanical sensing by incorporating a wide range of additional measurements.
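    The key observation, that the relative shift in the damping factor from an absorbed mass is of the same order as the relative frequency shift, follows directly from the single-degree-of-freedom relations omega_n = sqrt(k/m) and zeta = c / (2 sqrt(k m)): both scale as 1/sqrt(m). The sketch below checks this numerically; the stiffness, mass, and damping values are illustrative, not taken from any particular cantilever.

```python
import math

k = 1.0        # stiffness (N/m), illustrative
m = 1.0e-12    # cantilever mass (kg), illustrative
c = 2.0e-8     # damping coefficient, illustrative
dm = 1.0e-15   # absorbed mass (kg)

def omega_n(k, m):
    return math.sqrt(k / m)              # undamped natural frequency

def zeta(k, m, c):
    return c / (2.0 * math.sqrt(k * m))  # damping factor

w0, w1 = omega_n(k, m), omega_n(k, m + dm)
z0, z1 = zeta(k, m, c), zeta(k, m + dm, c)

rel_freq_shift = (w0 - w1) / w0
rel_damp_shift = (z0 - z1) / z0

print(rel_freq_shift, rel_damp_shift)  # both approximately dm / (2 * m)
```

To first order both relative shifts equal dm / (2 m), which is why a measured change in damping carries the same mass information as the usual resonance-frequency shift.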

  20. A technique for rapid source apportionment applied to ambient organic aerosol measurements from a thermal desorption aerosol gas chromatograph (TAG)

    DOE PAGES

    Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.; ...

    2016-11-25

    Here, we present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography–mass spectrometry methods. We specifically apply this new analysis method to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol samples' sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Then two-dimensional PMF can effectively do three-dimensional factorization on the three-dimensional TAG mass spectra data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single compound identification and integration. In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular level resolution on other bulk aerosol components commonly observed by the AMS.
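    The binning-plus-factorization idea can be sketched on toy data: samples x (time-bin, m/z) features are flattened into a two-dimensional matrix and factored into nonnegative source profiles and strengths. A multiplicative-update NMF stands in here for PMF (which additionally weights residuals by measurement uncertainty); all dimensions and "sources" are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_bins, n_mz = 40, 6, 5

# Two synthetic "sources", each a fixed chromatogram/mass-spectrum profile,
# mixed with sample-varying strengths plus a little noise.
profiles = rng.random((2, n_bins * n_mz))     # flattened (bin, m/z) profiles
strengths = rng.random((n_samples, 2))        # per-sample source strengths
X = strengths @ profiles + 0.01 * rng.random((n_samples, n_bins * n_mz))

def nmf(X, k, n_iter=500, eps=1e-9):
    # Lee–Seung multiplicative updates for X ~ W @ H with W, H >= 0.
    W = rng.random((X.shape[0], k)) + eps
    H = rng.random((k, X.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(X, 2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(err)  # relative reconstruction error of the 2-factor solution
```

Rows of H play the role of the binned source profiles and columns of W the source time series; reshaping each row of H back to (n_bins, n_mz) recovers the "three-dimensional" structure the abstract describes.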

  1. A technique for rapid source apportionment applied to ambient organic aerosol measurements from a thermal desorption aerosol gas chromatograph (TAG)

    NASA Astrophysics Data System (ADS)

    Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.; Docherty, Kenneth S.; Jimenez, Jose L.

    2016-11-01

    We present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography-mass spectrometry methods. Here, we specifically apply this new analysis method to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol samples' sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Then two-dimensional PMF can effectively do three-dimensional factorization on the three-dimensional TAG mass spectra data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single compound identification and integration. 
In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular level resolution on other bulk aerosol components commonly observed by the AMS.

  2. Effects of test method and participant musical training on preference ratings of stimuli with different reverberation times.

    PubMed

    Lawless, Martin S; Vigeant, Michelle C

    2017-10-01

    Selecting an appropriate listening test design for concert hall research depends on several factors, including listening test method and participant critical-listening experience. Although expert listeners afford more reliable data, their perceptions may not be broadly representative. The present paper contains two studies that examined the validity and reliability of the data obtained from two listening test methods, a successive and a comparative method, and two types of participants, musicians and non-musicians. Participants rated their overall preference of auralizations generated from eight concert hall conditions with a range of reverberation times (0.0-7.2 s). Study 1, with 34 participants, assessed the two methods. The comparative method yielded similar results and reliability as the successive method. Additionally, the comparative method was rated as less difficult and more preferable. For study 2, an additional 37 participants rated the stimuli using the comparative method only. An analysis of variance of the responses from both studies revealed that musicians are better than non-musicians at discerning their preferences across stimuli. This result was confirmed with a k-means clustering analysis on the entire dataset that revealed five preference groups. Four groups exhibited clear preferences to the stimuli, while the fifth group, predominantly comprising non-musicians, demonstrated no clear preference.

  3. The potential of IGF-1 and TGFbeta1 for promoting "adult" articular cartilage repair: an in vitro study.

    PubMed

    Davies, Lindsay C; Blain, Emma J; Gilbert, Sophie J; Caterson, Bruce; Duance, Victor C

    2008-07-01

    Research into articular cartilage repair, a tissue unable to regenerate spontaneously once injured, has focused on the generation of a biomechanically functional repair tissue with the characteristics of hyaline cartilage. This study was undertaken to provide insight into how to improve ex vivo chondrocyte amplification without cellular dedifferentiation, for cell-based methods of cartilage repair. We investigated the effects of insulin-like growth factor 1 (IGF-1) and transforming growth factor beta 1 (TGFbeta1) on cell proliferation and on the de novo synthesis of sulfated glycosaminoglycans and collagen in chondrocytes isolated from skeletally mature bovine articular cartilage, whilst maintaining their chondrocytic phenotype. Here we demonstrate that mature differentiated chondrocytes respond to growth factor stimulation by promoting de novo synthesis of matrix macromolecules. Additionally, stimulation with IGF-1 or TGFbeta1 induced receptor expression in chondrocytes. We conclude that IGF-1 and TGFbeta1, in addition to their autoregulatory effects, have differential effects on each other when used in combination. This may be mediated by regulation of receptor expression or by endogenous factors; these findings offer further options for improving strategies for the repair of cartilage defects.

  4. Patterns and biases in climate change research on amphibians and reptiles: a systematic review.

    PubMed

    Winter, Maiken; Fiedler, Wolfgang; Hochachka, Wesley M; Koehncke, Arnulf; Meiri, Shai; De la Riva, Ignacio

    2016-09-01

    Climate change probably has severe impacts on animal populations, but demonstrating a causal link can be difficult because of potential influences by additional factors. Assessing global impacts of climate change effects may also be hampered by narrow taxonomic and geographical research foci. We review studies on the effects of climate change on populations of amphibians and reptiles to assess climate change effects and potential biases associated with the body of work that has been conducted within the last decade. We use data from 104 studies regarding the effect of climate on 313 species, from 464 species-study combinations. Climate change effects were reported in 65% of studies. Climate change was identified as causing population declines or range restrictions in half of the cases. The probability of identifying an effect of climate change varied among regions, taxa and research methods. Climatic effects were equally prevalent in studies exclusively investigating climate factors (more than 50% of studies) and in studies including additional factors, thus bolstering confidence in the results of studies exclusively examining effects of climate change. Our analyses reveal biases with respect to geography, taxonomy and research question, making global conclusions impossible. Additional research should focus on under-represented regions, taxa and questions. Conservation and climate policy should consider the documented harm climate change causes reptiles and amphibians.

  5. Input-current shaped ac to dc converters

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The problem of achieving near unity power factor while supplying power to a dc load from a single phase ac source of power is examined. Power processors for this application must perform three functions: input current shaping, energy storage, and output voltage regulation. The methods available for performing each of these three functions are reviewed. Input current shaping methods are either active or passive, with the active methods divided into buck-like and boost-like techniques. In addition to large reactances, energy storage methods include resonant filters, active filters, and active storage schemes. Fast voltage regulation can be achieved by post regulation or by supplementing the current shaping topology with an extra switch. The discussion concludes with indications of which methods are best suited for particular applications.
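
As a worked illustration of the unity-power-factor goal discussed above, the sketch below computes power factor from synchronized voltage and current samples as average real power divided by the product of the RMS values; the waveforms are synthetic, not taken from the report.

```python
import math

def power_factor(voltage, current):
    """Power factor = average real power / (Vrms * Irms).

    voltage, current: equal-length lists of synchronized samples
    covering an integer number of line cycles.
    """
    n = len(voltage)
    p_avg = sum(v * i for v, i in zip(voltage, current)) / n
    v_rms = math.sqrt(sum(v * v for v in voltage) / n)
    i_rms = math.sqrt(sum(i * i for i in current) / n)
    return p_avg / (v_rms * i_rms)

# Sinusoidal voltage with a current lagging by 60 degrees:
# displacement factor cos(60 deg) = 0.5, so PF comes out at 0.5.
N = 1000
v = [math.sin(2 * math.pi * k / N) for k in range(N)]
i = [math.sin(2 * math.pi * k / N - math.pi / 3) for k in range(N)]
print(round(power_factor(v, i), 3))  # 0.5
```

An input-current shaper aims to drive this value toward 1 by forcing the drawn current to track the voltage waveform.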

  6. Enhanced regeneration potential of mobilized dental pulp stem cells from immature teeth.

    PubMed

    Nakayama, H; Iohara, K; Hayashi, Y; Okuwa, Y; Kurita, K; Nakashima, M

    2017-07-01

    We have previously demonstrated that dental pulp stem cells (DPSCs) isolated from mature teeth by the granulocyte colony-stimulating factor (G-CSF)-induced mobilization method can enhance angiogenesis/vasculogenesis and improve pulp regeneration when compared with colony-derived DPSCs. However, the efficacy of this method in immature teeth at the root-formative stage has never been investigated. Therefore, the aim of this study was to examine the stemness, biological characteristics, and regeneration potential of mobilized DPSCs compared with colony-derived DPSCs from immature teeth. Mobilized DPSCs isolated from immature teeth were compared to colony-derived DPSCs using methods including flow cytometry, migration assays, mRNA expression of angiogenic/neurotrophic factors, and induced differentiation assays. They were also compared with respect to the trophic effects of their secretomes. Regeneration potential was further compared in an ectopic tooth transplantation model. Mobilized DPSCs had higher migration ability and expressed more angiogenic/neurotrophic factors than colony-derived DPSCs. The mobilized DPSC secretome produced a higher stimulatory effect on migration, immunomodulation, anti-apoptosis, endothelial differentiation, and neurite extension. In addition, vascularization and pulp regeneration potential were higher in mobilized DPSCs than in colony-derived DPSCs. The G-CSF-induced mobilization method thus enhances the regeneration potential of DPSCs from immature teeth. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. Nanoscale determination of the mass enhancement factor in the lightly doped bulk insulator lead selenide

    DOE PAGES

    Zeljkovic, Ilija; Scipioni, Kane L.; Walkup, Daniel; ...

    2015-03-27

    Bismuth chalcogenides and lead telluride/selenide alloys exhibit exceptional thermoelectric properties that could be harnessed for power generation and device applications. Since phonons play a significant role in achieving these desired properties, quantifying the interaction between phonons and electrons, which is encoded in the Eliashberg function of a material, is of immense importance. However, its precise extraction has in part been limited due to the lack of local experimental probes. Here we construct a method to directly extract the Eliashberg function using Landau level spectroscopy, and demonstrate its applicability to lightly doped thermoelectric bulk insulator PbSe. In addition to its high energy resolution, limited only by thermal broadening, this novel experimental method could be used to detect variations in the mass enhancement factor at the nanoscale level. Finally, this opens up a new pathway for investigating the local effects of doping and strain on the mass enhancement factor.

  8. Electrostatic separation for recycling waste printed circuit board: a study on external factor and a robust design for optimization.

    PubMed

    Hou, Shibing; Wu, Jiang; Qin, Yufei; Xu, Zhenming

    2010-07-01

    Electrostatic separation is an effective and environmentally friendly method for recycling waste printed circuit board (PCB) by several kinds of electrostatic separators. However, some notable problems have been detected in its applications and cannot be efficiently resolved by optimizing the separation process. Instead of the separator itself, these problems are mainly caused by external factors such as the nonconductive powder (NP) and the superficial moisture of the feeding granule mixture, and they ultimately lead to inefficient separation. In the present research, the impacts of these external factors were investigated and a robust design was built to optimize the process and to weaken the adverse impact. The most robust parameter setting (25 kV, 80 rpm) was identified from the experimental design. In addition, some theoretical methods, including cyclone separation, were presented to eliminate these problems substantially. This will contribute to efficient electrostatic separation of waste PCB and enable notable progress toward industrial applications.

  9. New thermochemical parameter for describing solvent effects on IR stretching vibration frequencies. Communication 2. Assessment of cooperativity effects.

    PubMed

    Solomonov, Boris N; Varfolomeev, Mikhail A; Novikov, Vladimir B; Klimovitskii, Alexander E

    2006-05-15

    Solvent effects on O-H stretching vibration frequency of methanol in hydrogen bond complexes with different bases, CH3OH...B, have been investigated by FTIR spectroscopy. Using chloroform as a solvent results in strengthening of CH3OH...B hydrogen bonding due to cooperativity between CH3OH...B and Cl3CH...CH3OH bonds. A method is proposed for quantifying the hydrogen bond cooperativity effect. The determined cooperativity factors take into account all specific interactions of the solute in proton-donor solvents. In addition, a method of estimation of cooperativity factors Ab and AOX in system (CH3OH)2...B is proposed. It is demonstrated that in such systems, the cooperativity factor of the OH...B bond decreases and that of the OH...O bond increases with increasing the acceptor strength of the base B. The obtained results are in a good agreement with the data obtained previously from matrix-isolation FTIR spectroscopy.

  10. [Evaluation of cross-calibration of (123)I-MIBG H/M ratio, with the IDW scatter correction method, on different gamma camera systems].

    PubMed

    Kittaka, Daisuke; Takase, Tadashi; Akiyama, Masayuki; Nakazawa, Yasuo; Shinozuka, Akira; Shirai, Muneaki

    2011-01-01

    The (123)I-MIBG heart-to-mediastinum activity ratio (H/M) is commonly used as an indicator of relative myocardial (123)I-MIBG uptake. H/M ratios reflect myocardial sympathetic nerve function, making the ratio a useful parameter to assess regional myocardial sympathetic denervation in various cardiac diseases. However, H/M ratio values differ by site, gamma camera system, position and size of the region of interest (ROI), and collimator. In addition to these factors, the 529 keV scatter component may also affect the (123)I-MIBG H/M ratio. In this study, we examined whether H/M ratios correlate between two different gamma camera systems and sought a cross-calibration formula for the H/M ratio. Moreover, we assessed the feasibility of the (123)I Dual Window (IDW) method, a scatter correction method, and compared H/M ratios with and without it. The H/M ratio displayed a good correlation between the two gamma camera systems. Additionally, we were able to create a new H/M calculation formula. These results indicated that the IDW method is a useful scatter correction method for calculating (123)I-MIBG H/M ratios.
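
The H/M computation with a dual-window scatter correction can be sketched as follows. The window widths, ROI count densities, and the width-ratio scaling used here are illustrative assumptions for a simplified dual-window scheme, not the actual IDW acquisition parameters of the study.

```python
def scatter_corrected_counts(main_counts, sub_counts, main_width, sub_width):
    """Estimate scatter in the main (photopeak) window from counts in a
    sub-window, scaled by the ratio of window widths, and subtract it
    (a simplified dual-window scheme; real IDW settings differ by site)."""
    scatter = sub_counts * (main_width / sub_width)
    return max(main_counts - scatter, 0.0)

def hm_ratio(heart, mediastinum):
    """Heart-to-mediastinum count-density ratio."""
    return heart / mediastinum

# Illustrative ROI count densities (counts/pixel), not measured data.
heart_main, heart_sub = 120.0, 15.0
med_main, med_sub = 60.0, 12.0
main_w, sub_w = 30.0, 10.0  # energy window widths in keV (assumed values)

h = scatter_corrected_counts(heart_main, heart_sub, main_w, sub_w)
m = scatter_corrected_counts(med_main, med_sub, main_w, sub_w)
print(round(hm_ratio(h, m), 2))
```

Because scatter contributes proportionally more to the low-count mediastinal ROI, correction typically raises the H/M ratio relative to the uncorrected value.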

  11. Methods and options in vitro dialyzability; benefits and limitations.

    PubMed

    Sandberg, Ann-Sofie

    2005-11-01

    In vitro dialyzability methods involve a two-step digestion process simulating the gastric and intestinal phases, and dialysis through a semi-permeable membrane with a selected molecular weight cut-off. Dialyzable iron/zinc is used as an estimate of the available mineral. Final pH adjustment and use of a strict time schedule were found to be critical factors for standardization. In addition, the selected cut-off of the dialysis membrane and the method used for iron and zinc determination influence the results. For screening purposes, simple solubility or dialyzability methods seem preferable to the more sophisticated computer-controlled gastrointestinal model, which is likely more valuable in studies of different transit times and sites of dialyzability. In vitro solubility/dialyzability methods correlate in most cases with human absorption studies in ranking iron and zinc availability from different meals. Exceptions may be that the effects of milk, certain proteins, tea, and organic acids cannot be predicted. The dialyzability methods exclude iron bound to large molecules, which in some cases is available, and include iron bound to small molecules, which is not always available. In vitro experiments based on solubility/dialyzability are tools to understand factors that may affect subsequent mineral absorption.

  12. Next Steps in Bayesian Structural Equation Models: Comments on, Variations of, and Extensions to Muthen and Asparouhov (2012)

    ERIC Educational Resources Information Center

    Rindskopf, David

    2012-01-01

    Muthen and Asparouhov (2012) made a strong case for the advantages of Bayesian methodology in factor analysis and structural equation models. I show additional extensions and adaptations of their methods and show how non-Bayesians can take advantage of many (though not all) of these advantages by using interval restrictions on parameters. By…

  13. Horizontal directional drilling: a green and sustainable technology for site remediation.

    PubMed

    Lubrecht, Michael D

    2012-03-06

    Sustainability has become an important factor in the selection of remedies to clean up contaminated sites. Horizontal directional drilling (HDD) is a relatively new drilling technology that has been successfully adapted to site remediation. In addition to the benefits that HDD provides for the logistics of site cleanup, it also delivers sustainability advantages, compared to alternative construction methods.

  14. Children and Pesticides: New Approach to Considering Risk Is Partly in Place. Report to Congressional Requesters.

    ERIC Educational Resources Information Center

    Heinrich, Janet

    The Food Quality Protection Act of 1996 (FQPA) requires that the Environmental Protection Agency (EPA), which regulates the use of pesticides at the federal level, reevaluate the amounts of pesticide residues allowed on or in food. The EPA immediately began efforts to consider the additional safety factor for children, using available methods and…

  15. Finding Culture Change in the Second Factor: Stability and Change in Cultural Consensus and Residual Agreement

    ERIC Educational Resources Information Center

    Dressler, William W.; Balieiro, Mauro C.; dos Santos, José Ernesto

    2015-01-01

    This article reports the replication after 10 years of cultural consensus analyses in four cultural domains in the city of Ribeirão Preto, Brazil. Additionally, two methods for evaluating residual agreement are applied to the data, and a new technique for evaluating how cultural knowledge is represented by residual agreement is introduced. We…

  16. Crack Propagation Calculations for Optical Fibers under Static Bending and Tensile Loads Using Continuum Damage Mechanics

    PubMed Central

    Chen, Yunxia; Cui, Yuxuan; Gong, Wenjun

    2017-01-01

    Static fatigue behavior is the main failure mode of optical fibers applied in sensors. In this paper, a computational framework based on continuum damage mechanics (CDM) is presented to calculate the crack propagation process and failure time of optical fibers subjected to static bending and tensile loads. For this purpose, the static fatigue crack propagation in the glass core of the optical fiber is studied. Combining a finite element method (FEM), we use the continuum damage mechanics for the glass core to calculate the crack propagation path and corresponding failure time. In addition, three factors including bending radius, tensile force and optical fiber diameter are investigated to find their impacts on the crack propagation process and failure time of the optical fiber under concerned situations. Finally, experiments are conducted and the results verify the correctness of the simulation calculation. It is believed that the proposed method could give a straightforward description of the crack propagation path in the inner glass core. Additionally, the predicted crack propagation time of the optical fiber with different factors can provide effective suggestions for improving the long-term usage of optical fibers. PMID:29140284
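
A minimal sketch of the damage-integration idea behind such CDM calculations, assuming a Kachanov-type scalar damage law with purely illustrative parameters (the paper couples CDM with FEM and uses its own material model for the glass core):

```python
def cdm_failure_time(stress, A, n, dt=1.0, d_init=0.0):
    """Integrate a Kachanov-type scalar damage law
        dD/dt = (stress / (A * (1 - D)))**n
    with forward Euler until D reaches 1 (failure); returns the failure time."""
    d, t = d_init, 0.0
    while d < 1.0:
        rate = (stress / (A * (1.0 - d))) ** n
        d += rate * dt
        t += dt
        if t > 1e9:  # guard against non-converging parameter choices
            raise RuntimeError("no failure within time horizon")
    return t

# Higher applied stress should shorten the predicted failure time.
t_low = cdm_failure_time(stress=50.0, A=100.0, n=4.0, dt=0.5)
t_high = cdm_failure_time(stress=80.0, A=100.0, n=4.0, dt=0.5)
print(t_high < t_low)  # True
```

In the paper's framework the stress at each material point comes from the FEM solution under the bending and tensile loads, and the damage field localizes into the propagating crack path.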

  17. Profiles of eight working mothers who practiced exclusive breastfeeding in Depok, Indonesia.

    PubMed

    Februhartanty, Judhiastuty; Wibowo, Yulianti; Fahmida, Umi; Roshita, Airin

    2012-02-01

    Exclusive breastfeeding practice is generally low because of multifaceted factors internally within mothers themselves and also the surroundings. In addition, studies have consistently found that maternal employment outside the home is related to shorter duration of exclusive breastfeeding. With all these challenges, it is interesting that there are some mothers who manage to exclusively breastfeed their infants. Therefore, this report aims at exploring the characteristics of working mothers who are able to practice exclusive breastfeeding. The original study population was non-working and working mothers who have infants around 1 to 6 months old. The study design is an observational study with a mixed methods approach using a quantitative study (survey) and qualitative methods (in-depth interview) in sequential order. In addition, in-depth interviews with family members, midwives, supervisors at work, and community health workers were also included to accomplish a holistic picture of the situation. The study concludes that self-efficacy and confidence of the breastfeeding mothers characterize the practice of exclusive breastfeeding. Good knowledge that was acquired way before the mothers got pregnant suggests a predisposing factor to the current state of confidence. Home support from the father enhances the decision to sustain breastfeeding.

  18. Analyzing key performance indicators (KPIs) for E-commerce and Internet marketing of elderly products: a review.

    PubMed

    Tsai, Yuan-Cheng; Cheng, Yu-Tien

    2012-01-01

    With the transformation of its population structure and economic environment, Taiwan is rapidly becoming an aging society. There is a growing need for elderly products, and therefore the operation of web shops that sell elderly products is important. In an era which values performance management, searching for key performance indicators (KPIs) helps to reveal whether the goals of a web shop are achieved. In the current study, researchers adopted the constructs of the Balanced Scorecard (BSC) to evaluate web shop performance. Additionally, the Delphi method, along with questionnaires, was used to develop 29 indicators. Finally, the decision making trial and evaluation laboratory (DEMATEL) method assisted in identifying the level of importance of the constructs, in which "internal process" ranked top, followed by "learning and growth", "customer", and "financial". "Internal process" was the key construct that impacted other factors, while "customer" was an important construct affected by other factors. By understanding the influences and relationships among the constructs, enterprises can conduct additional monitoring and management to achieve functions of prevention, continuous improvement, and innovation in order to shape their core competence. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
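
The DEMATEL step can be sketched as follows; the 3x3 direct-influence matrix is hypothetical, standing in for the expert ratings gathered over the study's constructs.

```python
import numpy as np

def dematel(direct):
    """DEMATEL: from a direct-influence matrix, compute the total-relation
    matrix T = N (I - N)^-1 (N = normalized direct matrix), then
    prominence (D + R) and relation (D - R) for each construct."""
    direct = np.asarray(direct, dtype=float)
    # Normalize by the largest row/column sum (standard DEMATEL scaling).
    s = max(direct.sum(axis=1).max(), direct.sum(axis=0).max())
    n_mat = direct / s
    t = n_mat @ np.linalg.inv(np.eye(len(direct)) - n_mat)
    d = t.sum(axis=1)  # total influence given by each construct
    r = t.sum(axis=0)  # total influence received by each construct
    return d + r, d - r  # prominence, relation

# Hypothetical 3x3 direct-influence matrix (0-4 expert ratings).
direct = [[0, 3, 2],
          [1, 0, 3],
          [2, 1, 0]]
prominence, relation = dematel(direct)
print(prominence.argmax())  # index of the most prominent construct
```

A positive relation value marks a cause-type construct (like "internal process" in the study), a negative one an effect-type construct (like "customer").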

  19. Crack Propagation Calculations for Optical Fibers under Static Bending and Tensile Loads Using Continuum Damage Mechanics.

    PubMed

    Chen, Yunxia; Cui, Yuxuan; Gong, Wenjun

    2017-11-15

    Static fatigue behavior is the main failure mode of optical fibers applied in sensors. In this paper, a computational framework based on continuum damage mechanics (CDM) is presented to calculate the crack propagation process and failure time of optical fibers subjected to static bending and tensile loads. For this purpose, the static fatigue crack propagation in the glass core of the optical fiber is studied. Combining a finite element method (FEM), we use the continuum damage mechanics for the glass core to calculate the crack propagation path and corresponding failure time. In addition, three factors including bending radius, tensile force and optical fiber diameter are investigated to find their impacts on the crack propagation process and failure time of the optical fiber under concerned situations. Finally, experiments are conducted and the results verify the correctness of the simulation calculation. It is believed that the proposed method could give a straightforward description of the crack propagation path in the inner glass core. Additionally, the predicted crack propagation time of the optical fiber with different factors can provide effective suggestions for improving the long-term usage of optical fibers.

  20. Ranking factors affecting emissions of GHG from incubated agricultural soils.

    PubMed

    García-Marco, S; Ravella, S R; Chadwick, D; Vallejo, A; Gregory, A S; Cárdenas, L M

    2014-07-01

    Agriculture significantly contributes to global greenhouse gas (GHG) emissions and there is a need to develop effective mitigation strategies. The efficacy of methods to reduce GHG fluxes from agricultural soils can be affected by a range of interacting management and environmental factors. Uniquely, we used the Taguchi experimental design methodology to rank the relative importance of six factors known to affect the emission of GHG from soil: nitrate (NO3−) addition, carbon quality (labile and non-labile C), soil temperature, water-filled pore space (WFPS) and extent of soil compaction. Grassland soil was incubated in jars where selected factors, considered at two or three amounts within the experimental range, were combined in an orthogonal array to determine the importance and interactions between factors with an L16 design, comprising 16 experimental units. Within this L16 design, 216 combinations of the full factorial experimental design were represented. Headspace nitrous oxide (N2O), methane (CH4) and carbon dioxide (CO2) concentrations were measured and used to calculate fluxes. Results found for the relative influence of factors (WFPS and NO3− addition were the main factors affecting N2O fluxes, whilst glucose, NO3− and soil temperature were the main factors affecting CO2 and CH4 fluxes) were consistent with those already well documented. Interactions between factors were also studied and results showed that factors with little individual influence became more influential in combination. The proposed methodology offers new possibilities for GHG researchers to study interactions between influential factors and address the optimized sets of conditions to reduce GHG emissions in agro-ecosystems, while reducing the number of experimental units required compared with conventional experimental procedures that adjust one variable at a time.
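
A minimal sketch of the Taguchi-style ranking idea, computing each factor's main-effect range (analysis of means) over an orthogonal set of runs; the run data below are invented for illustration, not the incubation measurements.

```python
def rank_factors(levels, responses):
    """Rank factors by main-effect range (Taguchi-style analysis of means).

    levels: dict mapping factor name -> list of level settings, one per run
    responses: list of measured responses, one per run
    Returns factor names sorted by the range of their level means, largest first.
    """
    effect = {}
    for factor, settings in levels.items():
        means = {}
        for lv, y in zip(settings, responses):
            means.setdefault(lv, []).append(y)
        level_means = [sum(v) / len(v) for v in means.values()]
        effect[factor] = max(level_means) - min(level_means)
    return sorted(effect, key=effect.get, reverse=True)

# Hypothetical 4-run fragment: WFPS dominates the response, temperature is weak.
levels = {"WFPS": [40, 40, 80, 80], "temp": [10, 25, 10, 25]}
n2o_flux = [1.0, 1.2, 5.0, 5.4]  # illustrative N2O fluxes
print(rank_factors(levels, n2o_flux))  # ['WFPS', 'temp']
```

An orthogonal array lets each factor's level means be compared fairly, because every level of one factor is paired equally often with every level of the others.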

  1. Ranking factors affecting emissions of GHG from incubated agricultural soils

    PubMed Central

    García-Marco, S; Ravella, S R; Chadwick, D; Vallejo, A; Gregory, A S; Cárdenas, L M

    2014-01-01

    Agriculture significantly contributes to global greenhouse gas (GHG) emissions and there is a need to develop effective mitigation strategies. The efficacy of methods to reduce GHG fluxes from agricultural soils can be affected by a range of interacting management and environmental factors. Uniquely, we used the Taguchi experimental design methodology to rank the relative importance of six factors known to affect the emission of GHG from soil: nitrate (NO3−) addition, carbon quality (labile and non-labile C), soil temperature, water-filled pore space (WFPS) and extent of soil compaction. Grassland soil was incubated in jars where selected factors, considered at two or three amounts within the experimental range, were combined in an orthogonal array to determine the importance and interactions between factors with a L16 design, comprising 16 experimental units. Within this L16 design, 216 combinations of the full factorial experimental design were represented. Headspace nitrous oxide (N2O), methane (CH4) and carbon dioxide (CO2) concentrations were measured and used to calculate fluxes. Results found for the relative influence of factors (WFPS and NO3− addition were the main factors affecting N2O fluxes, whilst glucose, NO3− and soil temperature were the main factors affecting CO2 and CH4 fluxes) were consistent with those already well documented. Interactions between factors were also studied and results showed that factors with little individual influence became more influential in combination. The proposed methodology offers new possibilities for GHG researchers to study interactions between influential factors and address the optimized sets of conditions to reduce GHG emissions in agro-ecosystems, while reducing the number of experimental units required compared with conventional experimental procedures that adjust one variable at a time. PMID:25177207

  2. Gasification Characteristics and Kinetics of Coke with Chlorine Addition

    NASA Astrophysics Data System (ADS)

    Wang, Cui; Zhang, Jianliang; Jiao, Kexin; Liu, Zhengjian; Chou, Kuochih

    2017-10-01

    The gasification process of metallurgical coke with 0, 1.122, 3.190, and 7.132 wt pct chlorine was investigated by the thermogravimetric method from ambient temperature to 1593 K (1320 °C) in purified CO2 atmosphere. The variations in the temperature parameters, with Ti decreasing gradually with increasing chlorine while Tf and Tmax first decrease and then increase but remain in an overall downward trend, indicated that the coke gasification process was catalyzed by the chlorine addition. The kinetic model of chlorine-containing coke gasification was then obtained by determining the average apparent activation energy, the optimal reaction model, and the pre-exponential factor. The average apparent activation energies were 182.962, 118.525, 139.632, and 111.953 kJ/mol, respectively, following the same decreasing trend as the temperature parameters obtained by the thermogravimetric method, which again demonstrated that the coke gasification process was catalyzed by chlorine. The optimal kinetic model describing the gasification process of chlorine-containing coke was the Šesták-Berggren model obtained using Málek's method, and the pre-exponential factors were 6.688 × 10⁵, 2.786 × 10³, 1.782 × 10⁴, and 1.324 × 10³ min⁻¹, respectively. The predictions of chlorine-containing coke gasification from the Šesták-Berggren model fitted the experimental data well.
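
Using the reported activation energies and pre-exponential factors, the Šesták-Berggren rate expression can be evaluated as below; the exponents m and n are placeholders, since the fitted model exponents are not given in the abstract.

```python
import math

R_GAS = 8.314  # gas constant, J/(mol K)

def sb_rate(alpha, temp_k, ea_j_mol, pre_exp, m=0.5, n=1.0):
    """Sestak-Berggren rate: d(alpha)/dt = A exp(-Ea/RT) alpha^m (1-alpha)^n.
    alpha is the conversion degree; m and n here are illustrative values."""
    k = pre_exp * math.exp(-ea_j_mol / (R_GAS * temp_k))
    return k * alpha ** m * (1.0 - alpha) ** n

# Chlorine addition lowered Ea from ~183 to ~112 kJ/mol; at 1100 K and the
# same conversion, the predicted rate for the chlorinated coke is higher,
# consistent with the catalytic effect and the lower onset temperature Ti.
r_plain = sb_rate(0.3, 1100.0, 182962.0, 6.688e5)
r_cl = sb_rate(0.3, 1100.0, 111953.0, 1.324e3)
print(r_cl > r_plain)  # True
```

Note the compensation between A and Ea: the lower activation energy wins out at the lower temperatures where gasification begins, which is where the catalytic effect shows up in the TG curves.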

  3. Protocol Improvements for Low Concentration DNA-Based Bioaerosol Sampling and Analysis

    PubMed Central

    Ng, Chun Kiat; Miller, Dana; Cao, Bin

    2015-01-01

    Introduction As bioaerosol research attracts increasing attention, there is a need for additional efforts that focus on method development to deal with different environmental samples. Bioaerosol environmental samples typically have very low biomass concentrations in the air, which often leaves researchers with limited options in choosing the downstream analysis steps, especially when culture-independent methods are intended. Objectives This study investigates the impacts of three important factors that can influence the performance of culture-independent DNA-based analysis of the bioaerosol environmental samples examined in this study. The factors are: 1) enhanced high temperature sonication during DNA extraction; 2) effect of sampling duration on DNA recoverability; and 3) an alternative method for concentrating composite samples. In this study, DNA extracted from samples was analysed using the Qubit fluorometer (for direct total DNA measurement) and quantitative polymerase chain reaction (qPCR). Results and Findings The findings suggest that additional lysis from high temperature sonication is crucial: DNA yields from both high and low biomass samples increased by up to 600% when the protocol included 30-min sonication at 65°C. Long air sampling durations on a filter medium were shown to have a negative impact on DNA recoverability, with up to 98% of DNA lost over a 20-h sampling period. Pooling DNA from separate samples during extraction was proven to be feasible, with margins of error below 30%. PMID:26619279

  4. The Bioactivity of Cartilage Extracellular Matrix in Articular Cartilage Regeneration

    PubMed Central

    Sutherland, Amanda J.; Converse, Gabriel L.; Hopkins, Richard A.; Detamore, Michael S.

    2014-01-01

    Cartilage matrix is a particularly promising acellular material for cartilage regeneration given the evidence supporting its chondroinductive character. The ‘raw materials’ of cartilage matrix can serve as building blocks and signals for enhanced tissue regeneration. These matrices can be created by chemical or physical methods: physical methods disrupt cellular membranes and nuclei but may not fully remove all cell components and DNA, whereas chemical methods, when combined with physical methods, are particularly effective in fully decellularizing such materials. Critical endpoints include no detectable residual DNA or immunogenic antigens. It is important to first delineate between the sources of the cartilage matrix, i.e., derived from matrix produced by cells in vitro or from native tissue, and then to further characterize the cartilage matrix based on the processing method, i.e., decellularization or devitalization. With these distinctions, four types of cartilage matrices exist: decellularized native cartilage (DCC), devitalized native cartilage (DVC), decellularized cell derived matrix (DCCM), and devitalized cell derived matrix (DVCM). Delivery of cartilage matrix may be a straightforward approach without the need for additional cells or growth factors. Without additional biological additives, cartilage matrix may be attractive from a regulatory and commercialization standpoint. Source and delivery method are important considerations for clinical translation. Only one currently marketed cartilage matrix medical device is decellularized, although trends in filed patents suggest additional decellularized products may be available in the future. To choose the most relevant source and processing for cartilage matrix, qualifying testing needs to include targeting the desired application, optimizing delivery of the material, identifying relevant FDA regulations, assessing the availability of raw materials, and assessing the immunogenic properties of the product. PMID:25044502

  5. Prevalence, associated factors and heritabilities of metabolic syndrome and its individual components in African Americans: the Jackson Heart Study

    PubMed Central

    Khan, Rumana J; Gebreab, Samson Y; Sims, Mario; Riestra, Pia; Xu, Ruihua; Davis, Sharon K

    2015-01-01

    Objective Both environmental and genetic factors play important roles in the development of metabolic syndrome (MetS). Studies about its associated factors and genetic contribution in African Americans (AA) are sparse. Our aim was to report the prevalence, associated factors and heritability estimates of MetS and its components in AA men and women. Participants and setting Data of this cross-sectional study come from the large community-based Jackson Heart Study (JHS). We analysed a total of 5227 participants, of whom 1636 from 281 families were part of a family study subset of JHS. Methods Participants were classified as having MetS according to the Adult Treatment Panel III criteria. Multiple logistic regression analysis was performed to isolate independently associated factors of MetS (n=5227). Heritability was estimated from the family study subset using variance component methods (n=1636). Results About 27% of men and 40% of women had MetS. For men, factors associated with having MetS were older age, lower physical activity, higher body mass index, and higher homocysteine and adiponectin levels (p<0.05 for all). For women, in addition to all of these, lower education, current smoking and higher stress were also significant (p<0.05 for all). After adjusting for covariates, the heritability of MetS was 32% (p<0.001). Heritability ranged from 14 to 45% among its individual components. Relatively higher heritability was estimated for waist circumference (45%), high density lipoprotein-cholesterol (43%) and triglycerides (42%). Heritability of systolic blood pressure (BP), diastolic BP and fasting blood glucose was 16%, 15% and 14%, respectively. Conclusions Stress and low education were associated with having MetS in AA women, but not in men. Higher heritability estimates for lipids and waist circumference support the hypothesis that lipid metabolism plays a central role in the development of MetS and encourage additional efforts to identify the underlying susceptibility genes for this syndrome in AA. PMID:26525420

  6. Genetic Factors in Tendon Injury: A Systematic Review of the Literature

    PubMed Central

    Vaughn, Natalie H.; Stepanyan, Hayk; Gallo, Robert A.; Dhawan, Aman

    2017-01-01

    Background: Tendon injury such as tendinopathy or rupture is common and has multiple etiologies, including both intrinsic and extrinsic factors. The genetic influence on susceptibility to tendon injury is not well understood. Purpose: To analyze the published literature regarding genetic factors associated with tendon injury. Study Design: Systematic review; Level of evidence, 3. Methods: A systematic review of published literature was performed in concordance with the Preferred Reporting Items of Systematic Reviews and Meta-analysis (PRISMA) guidelines to identify current evidence for genetic predisposition to tendon injury. PubMed, Ovid, and ScienceDirect databases were searched. Studies were included for review if they specifically addressed genetic factors and tendon injuries in humans. Reviews, animal studies, or studies evaluating the influence of posttranscription factors and modifications (eg, proteins) were excluded. Results: Overall, 460 studies were available for initial review. After application of inclusion and exclusion criteria, 11 articles were ultimately included for qualitative synthesis. Upon screening of references of these 11 articles, an additional 15 studies were included in the final review, for a total of 26 studies. The genetic factors with the strongest evidence of association with tendon injury were those involving type V collagen A1, tenascin-C, matrix metalloproteinase–3, and estrogen-related receptor beta. Conclusion: The published literature is limited to relatively homogenous populations, with only level 3 and level 4 data. Additional research is needed to make further conclusions about the genetic factors involved in tendon injury. PMID:28856171

  7. Risk factors and clinical indicators for the development of biliary strictures post liver transplant: Significance of bilirubin

    PubMed Central

    Forrest, Elizabeth Ann; Reiling, Janske; Lipka, Geraldine; Fawcett, Jonathan

    2017-01-01

    AIM To identify risk factors associated with the formation of biliary strictures post liver transplantation over a 10-year period in Queensland. METHODS Data on liver donors and recipients in Queensland between 2005 and 2014 were obtained from an electronic patient data system. In addition, intra-operative and post-operative characteristics were collected and a logistic regression analysis was performed to evaluate their association with the development of biliary strictures. RESULTS Of 296 liver transplants performed, 285 (96.3%) were from brain dead donors. Biliary strictures developed in 45 (15.2%) recipients. Anastomotic stricture formation (n = 25, 48.1%) was the commonest complication, with 14 (58.3%) of these occurring within 6 mo of transplant. A percutaneous approach or endoscopic retrograde cholangiography was used to treat 17 (37.8%) patients with biliary strictures. Biliary reconstruction was initially or ultimately required in 22 (48.9%) patients. In recipients developing biliary strictures, bilirubin was significantly increased within the first post-operative week (Day 7 total bilirubin 74 μmol/L vs 49 μmol/L, P = 0.012). In both univariate and multivariate regression analysis, Day 7 total bilirubin > 55 μmol/L was associated with the development of biliary strictures. In addition, hepatic artery thrombosis and primary sclerosing cholangitis were identified as independent risk factors. CONCLUSION In addition to known risk factors, bilirubin levels in the early post-operative period could be used as a clinical indicator for biliary stricture formation. PMID:29312864

  8. Quantitative methods for analysing cumulative effects on fish migration success: a review.

    PubMed

    Johnson, J E; Patterson, D A; Martins, E G; Cooke, S J; Hinch, S G

    2012-07-01

    It is often recognized, but seldom addressed, that a quantitative assessment of the cumulative effects, both additive and non-additive, of multiple stressors on fish survival would provide a more realistic representation of the factors that influence fish migration. This review presents a compilation of analytical methods applied to a well-studied fish migration, a more general review of quantitative multivariable methods, and a synthesis on how to apply new analytical techniques in fish migration studies. A compilation of adult migration papers from Fraser River sockeye salmon Oncorhynchus nerka revealed a limited number of multivariable methods being applied and the sub-optimal reliance on univariable methods for multivariable problems. The literature review of fisheries science, general biology and medicine identified a large number of alternative methods for dealing with cumulative effects, with a limited number of techniques being used in fish migration studies. An evaluation of the different methods revealed that certain classes of multivariable analyses will probably prove useful in future assessments of cumulative effects on fish migration. This overview and evaluation of quantitative methods gathered from the disparate fields should serve as a primer for anyone seeking to quantify cumulative effects on fish migration survival. © 2012 The Authors. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.

  9. Extraction of polycyclic aromatic hydrocarbons and organochlorine pesticides from soils: a comparison between Soxhlet extraction, microwave-assisted extraction and accelerated solvent extraction techniques.

    PubMed

    Wang, Wentao; Meng, Bingjun; Lu, Xiaoxia; Liu, Yu; Tao, Shu

    2007-10-29

    Methods for the simultaneous extraction of polycyclic aromatic hydrocarbons (PAHs) and organochlorine pesticides (OCPs) from soils using Soxhlet extraction, microwave-assisted extraction (MAE) and accelerated solvent extraction (ASE) were established, and the extraction efficiencies of the three methods were systematically compared in terms of procedural blank, limits of detection and quantification, method recovery and reproducibility, method chromatogram and other factors. In addition, soils with different total organic carbon (TOC) contents were used to test the extraction efficiencies of the three methods. The results showed that the values obtained in this study were comparable with those reported by other studies. In some respects, such as method recovery and reproducibility, there were no significant differences among the three methods for the extraction of PAHs and OCPs; in other respects, such as procedural blank and limits of detection and quantification, there were significant differences. Overall, ASE had the best extraction efficiency compared with MAE and Soxhlet extraction, and the extraction efficiencies of MAE and Soxhlet extraction were comparable to each other, depending on properties such as the TOC content of the studied soil. Considering other factors such as solvent consumption and extraction time, ASE and MAE are preferable to Soxhlet extraction.

  10. Improved apparatus for measuring hydraulic conductivity at low water content

    USGS Publications Warehouse

    Nimmo, J.R.; Akstin, K.C.; Mello, K.A.

    1992-01-01

    A modification of the steady-state centrifuge method for unsaturated hydraulic conductivity (K) measurement improves the range and adjustability of this method. The modified apparatus allows mechanical adjustment to vary the measured K by a factor of 360. In addition, the use of different flow-regulation ceramic materials can give a total K range covering about six orders of magnitude. The range extension afforded has led to the lowest steady-state K measurement to date, for a sandy soil of the Delhi series (Typic Xeropsamment). -from Authors

  11. Location Modification Factors for Potential Dose Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, Sandra F.; Barnett, J. Matthew

    2017-01-01

    A Department of Energy facility must comply with the National Emission Standard for Hazardous Air Pollutants for radioactive air emissions. The standard is an effective dose of less than 0.1 mSv yr⁻¹ to the maximum public receptor. Additionally, a lower dose level may be assigned to a specific emission point in a State issued permit. A method to efficiently estimate the expected dose for future emissions is described. This method is most appropriately applied to a research facility with several emission points with generally low emission levels of numerous isotopes.

  12. An assessment of predominant causal factors of pilot deviations that contribute to runway incursions

    NASA Astrophysics Data System (ADS)

    Campbell, Denado M.

    The aim of this study was to identify predominant causal factors of pilot deviations in runway incursions over a two-year period. Runway incursion reports were obtained from NASA's Aviation Safety Reporting System (ASRS), and a qualitative method was used in which each report was classified and coded to one or more specific causal factors. The causal factors used were substantiated by research from the Aircraft Owners and Pilots Association showing that these causal factors were the most common in runway incursion incidents and accidents. An additional causal factor was also utilized to determine the significance of pilot training in relation to runway incursions. From the reports examined, it was found that miscommunication and situational awareness have the greatest impact on pilots and are most often the major causes of runway incursions. These data can be used to assist airports, airlines, and the FAA in understanding trends in pilot deviations and finding solutions for specific problem areas in runway incursion incidents.

  13. Aetiology of Oral Cancer in the Sudan

    PubMed Central

    2013-01-01

    ABSTRACT Objectives To review the risk factors that have been linked to the aetiology of oral cancer in the Sudan. There have been numerous reports of an increase in the incidence of oral cancer from various parts of the world. A recent trend toward a rising incidence of oral cancer in the absence of the well-established risk factors has raised concern. Although there are inconsistent data on incidence and demographic factors, studies suggest that the physiologic response to risk factors by men and women varies in different populations. Material and Methods This review principally examines 33 publications devoted to the aetiology of oral cancer in the Sudan, in addition to some risk factors that are commonly practiced in the Sudan. Results The risk factors for oral cancer examined in these studies include tobacco use (smoked and smokeless), alcohol consumption, occupational risk, familial risk, immune deficits, virus infection and genetic factors. Conclusions Toombak use and infection with high-risk Human Papilloma Virus (HPV) were extensively investigated and linked to the aetiology of oral cancer in the Sudan. PMID:24422031

  14. 3D Parallel Multigrid Methods for Real-Time Fluid Simulation

    NASA Astrophysics Data System (ADS)

    Wan, Feifei; Yin, Yong; Zhang, Suiyu

    2018-03-01

    The multigrid method is widely used in fluid simulation because of its strong convergence. Besides accuracy, computational efficiency is an important factor to consider in order to enable real-time fluid simulation in computer graphics. For this problem, we compared the performance of the Algebraic Multigrid and the Geometric Multigrid in the V-Cycle and Full-Cycle schemes respectively, and analyzed the convergence and speed of the different methods. All calculations in this paper are performed with parallel computing on the GPU. Finally, we experiment with 3D grids at each scale and report the exact experimental results.
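
    As a concrete illustration of the geometric multigrid V-cycle scheme discussed (a minimal CPU sketch, not the paper's 3D GPU code), here is a 1D Poisson V-cycle with a damped-Jacobi smoother, full-weighting restriction, and linear-interpolation prolongation; the grid size and sweep counts are arbitrary choices.

```python
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2/3):
    """Damped Jacobi sweeps for -u'' = f on interior points (zero Dirichlet BCs)."""
    for _ in range(sweeps):
        left = np.concatenate(([0.0], u[:-1]))
        right = np.concatenate((u[1:], [0.0]))
        u = (1 - omega) * u + omega * 0.5 * (left + right + h * h * f)
    return u

def residual(u, f, h):
    left = np.concatenate(([0.0], u[:-1]))
    right = np.concatenate((u[1:], [0.0]))
    return f - (2 * u - left - right) / (h * h)

def restrict(r):
    """Full-weighting restriction, fine (2m+1 points) -> coarse (m points)."""
    return 0.25 * (r[0:-2:2] + 2 * r[1::2] + r[2::2])

def prolong(e):
    """Linear interpolation, coarse (m points) -> fine (2m+1 points)."""
    u = np.zeros(2 * len(e) + 1)
    u[1::2] = e
    u[2:-1:2] = 0.5 * (e[:-1] + e[1:])
    u[0], u[-1] = 0.5 * e[0], 0.5 * e[-1]
    return u

def v_cycle(u, f, h):
    if len(u) <= 3:  # coarsest level: solve the tridiagonal system directly
        A = (2 * np.eye(len(u)) - np.eye(len(u), k=1) - np.eye(len(u), k=-1)) / (h * h)
        return np.linalg.solve(A, f)
    u = smooth(u, f, h)                        # pre-smoothing
    e = v_cycle(np.zeros((len(u) - 1) // 2), restrict(residual(u, f, h)), 2 * h)
    return smooth(u + prolong(e), f, h)        # coarse-grid correction + post-smoothing

n, h = 63, 1.0 / 64
x = np.arange(1, n + 1) * h
f = np.pi**2 * np.sin(np.pi * x)               # manufactured RHS; exact u = sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
```

    After a handful of V-cycles the algebraic error is driven well below the O(h²) discretization error, which is the behavior that makes multigrid attractive for real-time use.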

  15. Off-Line Quality Control In Integrated Circuit Fabrication Using Experimental Design

    NASA Astrophysics Data System (ADS)

    Phadke, M. S.; Kackar, R. N.; Speeney, D. V.; Grieco, M. J.

    1987-04-01

    Off-line quality control is a systematic method of optimizing production processes and product designs. It is widely used in Japan to produce high quality products at low cost. The method was introduced to us by Professor Genichi Taguchi who is a Deming-award winner and a former Director of the Japanese Academy of Quality. In this paper we will i) describe the off-line quality control method, and ii) document our efforts to optimize the process for forming contact windows in 3.5 μm CMOS circuits fabricated in the Murray Hill Integrated Circuit Design Capability Laboratory. In the fabrication of integrated circuits it is critically important to produce contact windows of size very near the target dimension. Windows which are too small or too large lead to loss of yield. The off-line quality control method has improved both the process quality and productivity. The variance of the window size has been reduced by a factor of four. Also, processing time for window photolithography has been substantially reduced. The key steps of off-line quality control are: i) Identify important manipulatable process factors and their potential working levels. ii) Perform fractional factorial experiments on the process using orthogonal array designs. iii) Analyze the resulting data to determine the optimum operating levels of the factors. Both the process mean and the process variance are considered in this analysis. iv) Conduct an additional experiment to verify that the new factor levels indeed give an improvement.
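
    The main-effects analysis behind an orthogonal-array experiment (step iii above) can be sketched in a few lines. The L4 array is a standard design; the response numbers are invented for illustration and are not the paper's window-photolithography data.

```python
import numpy as np

# L4 orthogonal array: 4 runs covering 3 two-level factors (levels coded 0/1).
# In every pair of columns, each level combination appears equally often.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

# Hypothetical mean responses for the 4 runs -- illustrative numbers only.
y = np.array([3.0, 3.4, 2.4, 2.8])

# Main effect of a factor = mean response at level 1 minus mean at level 0.
effects = np.array([y[L4[:, j] == 1].mean() - y[L4[:, j] == 0].mean()
                    for j in range(3)])
# Here factor 0 lowers the response by 0.6, factor 1 raises it by 0.4,
# and factor 2 has no average effect.
```

    A full Taguchi analysis would repeat this for a signal-to-noise statistic as well as the mean, since both the process mean and variance are considered.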

  16. Development and Implementation of a Coagulation Factor Testing Method Utilizing Autoverification in a High-volume Clinical Reference Laboratory Environment.

    PubMed

    Riley, Paul W; Gallea, Benoit; Valcour, Andre

    2017-01-01

    Testing coagulation factor activities requires that multiple dilutions be assayed and analyzed to produce a single result. The slope of the line created by plotting measured factor concentration against sample dilution is evaluated to discern the presence of inhibitors giving rise to nonparallelism. Moreover, samples producing results on initial dilution falling outside the analytic measurement range of the assay must be tested at additional dilutions to produce reportable results. The complexity of this process has motivated a large clinical reference laboratory to develop advanced computer algorithms with automated reflex testing rules to complete coagulation factor analysis. A method was developed for autoverification of coagulation factor activity using expert rules developed on an off-the-shelf, commercially available data manager system integrated into an automated coagulation platform. Here, we present an approach allowing for the autoverification and reporting of factor activity results with greatly diminished technologist effort. To the best of our knowledge, this is the first report of its kind providing a detailed procedure for the implementation of autoverification expert rules as applied to coagulation factor activity testing. Advantages of this system include ease of training for new operators, minimization of technologist time spent, reduction of staff fatigue, minimization of unnecessary reflex tests, optimization of turnaround time, and assurance of the consistency of the testing and reporting process.
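
    A minimal sketch of the kind of dilution-parallelism rule such a system might encode (hypothetical threshold and numbers, not the laboratory's actual expert rules): each dilution's result is referred back to the neat sample, and a systematic trend across dilutions flags possible inhibition.

```python
import numpy as np

def flag_nonparallel(dilutions, corrected_activity, tol=0.10):
    """corrected_activity: factor activity (% of normal) referred back to the
    neat sample at each dilution. Without an inhibitor these values should
    agree; a slope of the normalized values against log2(dilution) beyond
    `tol` suggests inhibitor-style nonparallelism."""
    rel = np.asarray(corrected_activity, dtype=float)
    rel = rel / rel.mean()
    slope = np.polyfit(np.log2(dilutions), rel, 1)[0]
    return abs(slope) > tol

dilutions = [10, 20, 40]
ok_sample = [98, 101, 99]    # consistent across dilutions: not flagged
inhibited = [40, 55, 78]     # apparent activity rises with dilution: flagged
```

    A flagged sample would fail autoverification and reflex to further dilutions or manual review rather than being reported automatically.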

  17. Highly Reproducible Label Free Quantitative Proteomic Analysis of RNA Polymerase Complexes*

    PubMed Central

    Mosley, Amber L.; Sardiu, Mihaela E.; Pattenden, Samantha G.; Workman, Jerry L.; Florens, Laurence; Washburn, Michael P.

    2011-01-01

    The use of quantitative proteomics methods to study protein complexes has the potential to provide in-depth information on the abundance of different protein components as well as their modification state in various cellular conditions. To interrogate protein complex quantitation using shotgun proteomic methods, we have focused on the analysis of protein complexes using label-free multidimensional protein identification technology and studied the reproducibility of biological replicates. For these studies, we focused on three highly related and essential multi-protein enzymes, RNA polymerase I, II, and III from Saccharomyces cerevisiae. We found that label-free quantitation using spectral counting is highly reproducible at the protein and peptide level when analyzing RNA polymerase I, II, and III. In addition, we show that peptide sampling does not follow a random sampling model, and we show the need for advanced computational models to predict peptide detection probabilities. In order to address these issues, we used the APEX protocol to model the expected peptide detectability based on whole cell lysate acquired using the same multidimensional protein identification technology analysis used for the protein complexes. Neither method was able to predict the peptide sampling levels that we observed using replicate multidimensional protein identification technology analyses. In addition to the analysis of the RNA polymerase complexes, our analysis provides quantitative information about several RNAP associated proteins including the RNAPII elongation factor complexes DSIF and TFIIF. Our data shows that DSIF and TFIIF are the most highly enriched RNAP accessory factors in Rpb3-TAP purifications and demonstrate our ability to measure low level associated protein abundance across biological replicates. In addition, our quantitative data supports a model in which DSIF and TFIIF interact with RNAPII in a dynamic fashion in agreement with previously published reports. PMID:21048197
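
    One common way to turn spectral counts into relative abundances is the normalized spectral abundance factor (NSAF), which scales each protein's counts by its length and normalizes across the run. The sketch below illustrates the formula; the counts are placeholders, with lengths chosen to resemble large, medium, and small subunits.

```python
def nsaf(counts, lengths):
    """NSAF_i = (SpC_i / L_i) / sum_j (SpC_j / L_j), so values sum to 1."""
    saf = [c / l for c, l in zip(counts, lengths)]
    total = sum(saf)
    return [s / total for s in saf]

# Hypothetical spectral counts and protein lengths (residues).
counts = [120, 80, 45]
lengths = [1733, 1224, 318]   # e.g. Rpb1-, Rpb2-, Rpb3-sized proteins
abundances = nsaf(counts, lengths)
# The length correction credits short proteins for their lower peptide yield,
# so the smallest protein here ends up with the largest abundance share.
```
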

  18. Development and Evaluation of the Brief Sexual Openness Scale—A Construal Level Theory Based Approach

    PubMed Central

    Chen, Xinguang; Wang, Yan; Li, Fang; Gong, Jie; Yan, Yaqiong

    2015-01-01

    Obtaining reliable and valid data on sensitive questions represents a longstanding challenge for public health, particularly HIV research. To overcome the challenge, we assessed a construal level theory (CLT)-based novel method. The method was previously established and pilot-tested using the Brief Sexual Openness Scale (BSOS). This scale consists of five items assessing attitudes toward premarital sex, multiple sexual partners, homosexuality, extramarital sex, and commercial sex, all rated on a standard 5-point Likert scale. In addition to self-assessment, the participants were asked to assess rural residents, urban residents, and foreigners. The self-assessment plus the assessment of the three other groups were all used as subconstructs of one latent construct: sexual openness. The method was validated with data from 1,132 rural-to-urban migrants (mean age = 32.5, SD = 7.9; 49.6% female) recruited in China. Consistent with CLT, the Cronbach alpha of the BSOS as a conventional tool increased with social distance, from .81 for self-assessment to .97 for assessing foreigners. In addition to a satisfactory fit of the data to a one-factor model (CFI = .94, TLI = .93, RMSEA = .08), a common factor was separated from the four perspective factors (i.e., migrants’ self-perspective and their perspectives of rural residents, urban residents and foreigners) through a trifactor modeling analysis (CFI = .95, TLI = .94, RMSEA = .08). Relative to its conventional form, the CLT-based BSOS was more reliable (alpha: .96 vs .81) and valid in predicting sexual desire, frequency of dating, age of first sex, multiple sexual partners and STD history. This novel technique can be used to assess sexual openness, and possibly other sensitive questions among Chinese domestic migrants. PMID:26308336
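
    Cronbach's alpha, the reliability coefficient reported for the BSOS, is straightforward to compute. Here is a small sketch on simulated Likert-style data (synthetic indicators of one latent trait, not the study's migrant sample).

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 1))                      # one underlying trait
items = latent + rng.normal(scale=0.5, size=(500, 5))   # five noisy indicators
alpha = cronbach_alpha(items)                           # high internal consistency
```

    With five indicators that share most of their variance through the latent trait, alpha comes out high, mirroring how the scale's alpha rose as the shared (common-factor) variance increased with social distance.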

  19. Measurement bias detection with Kronecker product restricted models for multivariate longitudinal data: an illustration with health-related quality of life data from thirteen measurement occasions

    PubMed Central

    Verdam, Mathilde G. E.; Oort, Frans J.

    2014-01-01

    Highlights Application of Kronecker product to construct parsimonious structural equation models for multivariate longitudinal data. A method for the investigation of measurement bias with Kronecker product restricted models. Application of these methods to health-related quality of life data from bone metastasis patients, collected at 13 consecutive measurement occasions. The use of curves to facilitate substantive interpretation of apparent measurement bias. Assessment of change in common factor means, after accounting for apparent measurement bias. Longitudinal measurement invariance is usually investigated with a longitudinal factor model (LFM). However, with multiple measurement occasions, the number of parameters to be estimated increases with a multiple of the number of measurement occasions. To guard against too low ratios of numbers of subjects and numbers of parameters, we can use Kronecker product restrictions to model the multivariate longitudinal structure of the data. These restrictions can be imposed on all parameter matrices, including measurement invariance restrictions on factor loadings and intercepts. The resulting models are parsimonious and have attractive interpretation, but require different methods for the investigation of measurement bias. Specifically, additional parameter matrices are introduced to accommodate possible violations of measurement invariance. These additional matrices consist of measurement bias parameters that are either fixed at zero or free to be estimated. In cases of measurement bias, it is also possible to model the bias over time, e.g., with linear or non-linear curves. Measurement bias detection with Kronecker product restricted models will be illustrated with multivariate longitudinal data from 682 bone metastasis patients whose health-related quality of life (HRQL) was measured at 13 consecutive weeks. PMID:25295016
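
    The parsimony argument can be made concrete with a toy model-implied covariance built as a Kronecker product of a small variable-level covariance and an occasion-level covariance. The sizes and matrices below are invented for illustration (4 variables crossed with the paper's 13 occasions).

```python
import numpy as np

n_var, n_occ = 4, 13

# Variable-level covariance (compound symmetry) and an AR(1)-style
# occasion-level covariance -- both positive definite.
S_var = 0.5 * np.ones((n_var, n_var)) + 0.5 * np.eye(n_var)
lags = np.abs(np.subtract.outer(np.arange(n_occ), np.arange(n_occ)))
S_occ = 0.7 ** lags

Sigma = np.kron(S_occ, S_var)   # 52 x 52 model-implied covariance

# Free parameters: an unrestricted 52x52 symmetric matrix has
# 52 * 53 / 2 = 1378 unique entries, while the Kronecker structure needs
# only 4*5/2 + 13*14/2 - 1 = 100 (one parameter is lost to the scale
# indeterminacy between the two factors).
```

    This is why the Kronecker restriction keeps the ratio of subjects to parameters manageable even with 13 measurement occasions.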

  20. Measurement bias detection with Kronecker product restricted models for multivariate longitudinal data: an illustration with health-related quality of life data from thirteen measurement occasions.

    PubMed

    Verdam, Mathilde G E; Oort, Frans J

    2014-01-01

    Application of Kronecker product to construct parsimonious structural equation models for multivariate longitudinal data. A method for the investigation of measurement bias with Kronecker product restricted models. Application of these methods to health-related quality of life data from bone metastasis patients, collected at 13 consecutive measurement occasions. The use of curves to facilitate substantive interpretation of apparent measurement bias. Assessment of change in common factor means, after accounting for apparent measurement bias. Longitudinal measurement invariance is usually investigated with a longitudinal factor model (LFM). However, with multiple measurement occasions, the number of parameters to be estimated increases with a multiple of the number of measurement occasions. To guard against too low ratios of numbers of subjects and numbers of parameters, we can use Kronecker product restrictions to model the multivariate longitudinal structure of the data. These restrictions can be imposed on all parameter matrices, including measurement invariance restrictions on factor loadings and intercepts. The resulting models are parsimonious and have attractive interpretation, but require different methods for the investigation of measurement bias. Specifically, additional parameter matrices are introduced to accommodate possible violations of measurement invariance. These additional matrices consist of measurement bias parameters that are either fixed at zero or free to be estimated. In cases of measurement bias, it is also possible to model the bias over time, e.g., with linear or non-linear curves. Measurement bias detection with Kronecker product restricted models will be illustrated with multivariate longitudinal data from 682 bone metastasis patients whose health-related quality of life (HRQL) was measured at 13 consecutive weeks.

  1. Strange nucleon electromagnetic form factors from lattice QCD

    NASA Astrophysics Data System (ADS)

    Alexandrou, C.; Constantinou, M.; Hadjiyiannakou, K.; Jansen, K.; Kallidonis, C.; Koutsou, G.; Avilés-Casco, A. Vaquero

    2018-05-01

    We evaluate the strange nucleon electromagnetic form factors using an ensemble of gauge configurations generated with two degenerate maximally twisted mass clover-improved fermions with mass tuned to approximately reproduce the physical pion mass. In addition, we present results for the disconnected light quark contributions to the nucleon electromagnetic form factors. Improved stochastic methods are employed leading to high-precision results. The momentum dependence of the disconnected contributions is fitted using the model-independent z-expansion. We extract the magnetic moment and the electric and magnetic radii of the proton and neutron by including both connected and disconnected contributions. We find that the disconnected light quark contributions to both electric and magnetic form factors are nonzero and at the few percent level as compared to the connected. The strange form factors are also at the percent level but more noisy yielding statistical errors that are typically within one standard deviation from a zero value.
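
    The z-expansion mentioned above maps the cut Q² plane into the unit disk and fits the form factor as a short polynomial in z. The sketch below applies it to a dipole stand-in for a form factor; the dipole shape, the cut location, and the truncation order are illustrative assumptions, not the paper's lattice data.

```python
import numpy as np

t_cut = 4 * 0.140**2   # two-pion threshold 4*m_pi^2 (GeV^2), m_pi ~ 0.140 GeV

def z_of_Q2(Q2, t_cut=t_cut):
    """Conformal map of spacelike Q^2 into |z| < 1."""
    a, b = np.sqrt(t_cut + Q2), np.sqrt(t_cut)
    return (a - b) / (a + b)

Q2 = np.linspace(0.0, 1.0, 12)
G = 1.0 / (1.0 + Q2 / 0.71) ** 2      # dipole stand-in for "measured" data

coeffs = np.polynomial.polynomial.polyfit(z_of_Q2(Q2), G, 3)  # truncate at z^3
G_fit = np.polynomial.polynomial.polyval(z_of_Q2(Q2), coeffs)

# radius from the slope at Q2 = 0: <r^2> = -6 dG/dQ2|_0, and since
# dz/dQ2|_0 = 1/(4 t_cut), this is -6 * a1 / (4 t_cut)
r2 = -6 * coeffs[1] / (4 * t_cut)     # ~ 12/0.71 GeV^-2 for the dipole
```

    The appeal of the method is that the truncation order, not a functional ansatz like the dipole, controls the model dependence of the extracted radius.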

  2. Effects of secondary loudspeaker properties on broadband feedforward active duct noise control.

    PubMed

    Chan, Yum-Ji; Huang, Lixi; Lam, James

    2013-07-01

    The dependence of the performance of feedforward active duct noise control on secondary loudspeaker parameters is investigated. Noise reduction performance can be improved if the force factor of the secondary loudspeaker is higher. For example, a broadband noise reduction improvement of up to 1.6 dB is predicted by increasing the force factor by 50%. In addition, a secondary loudspeaker with a larger force factor was found experimentally to give quicker convergence of the adaptive algorithm. In simulations, noise reduction with an adaptive algorithm is improved by using a secondary loudspeaker with a heavier moving mass. It is predicted that an extra broadband noise reduction of more than 7 dB can be gained using an adaptive filter if the force factor, moving mass and coil inductance of a commercially available loudspeaker are doubled. Methods to increase the force factor beyond those of commercially available loudspeakers are proposed.

  3. The improved z-scan technique: potentialities of the additional right-angle scattering channel and the input polarization control

    NASA Astrophysics Data System (ADS)

    Volchkov, S. S.; Yuvchenko, S. A.; Zimnyakov, D. A.

    2018-04-01

    The theoretical possibility of retrieving additional information on the dielectric properties of nanoparticle material from single scattering in suspensions was studied. We demonstrate a method for recovering the dielectric function of the material in the fundamental absorption band using closed-aperture z-scanning with simultaneous Rayleigh scattering intensity measurements and polarization control of the input laser beam. The possibility of recovering the form factor of non-spherical particles, or the anisotropic nonlinear sensitivity of sphere-like particles, was also shown.

  4. [Critical of the additive model of the randomized controlled trial].

    PubMed

    Boussageon, Rémy; Gueyffier, François; Bejan-Angoulvant, Theodora; Felden-Dominiak, Géraldine

    2008-01-01

    Randomized, double-blind, placebo-controlled clinical trials are currently the best way to demonstrate the clinical effectiveness of drugs. Their methodology relies on the method of difference (John Stuart Mill), through which the observed difference between two groups (drug vs placebo) can be attributed to the pharmacological effect of the drug being tested. However, this additive model can be questioned in the event of statistical interactions between the pharmacological and the placebo effects. Evidence in different domains has shown that the placebo effect can influence the effect of the active principle. This article evaluates the methodological, clinical and epistemological consequences of this phenomenon. Topics treated include extrapolating results, accounting for heterogeneous results, demonstrating the existence of several factors in the placebo effect, the necessity of taking these factors into account for given symptoms or pathologies, and the problem of the "specific" effect.

  5. A synthesis of studies of access point density as a risk factor for road accidents.

    PubMed

    Elvik, Rune

    2017-10-01

    Studies of the relationship between access point density (number of access points, or driveways, per kilometre of road) and accident frequency or rate (number of accidents per unit of exposure) have consistently found that accident rate increases when access point density increases. This paper presents a formal synthesis of the findings of these studies. It was found that the addition of one access point per kilometre of road is associated with an increase of 4% in the expected number of accidents, controlling for traffic volume. Although studies consistently indicate an increase in accident rate as access point density increases, the size of the increase varies substantially between studies. In addition to reviewing studies of access point density as a risk factor, the paper discusses some issues related to formally synthesising regression coefficients by applying the inverse-variance method of meta-analysis. Copyright © 2017 Elsevier Ltd. All rights reserved.
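
    Inverse-variance pooling of regression coefficients, as used in the synthesis, weights each study's estimate by the reciprocal of its squared standard error. The per-study numbers below are invented, chosen only to land near the paper's roughly 4% figure.

```python
import numpy as np

def inverse_variance_pool(estimates, std_errs):
    """Fixed-effect inverse-variance pooling of study-level coefficients."""
    w = 1.0 / np.asarray(std_errs, dtype=float) ** 2
    est = np.asarray(estimates, dtype=float)
    pooled = np.sum(w * est) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))   # always tighter than any single study
    return pooled, pooled_se

# Hypothetical per-study coefficients: change in log accident count
# per additional access point per kilometre.
b = [0.030, 0.055, 0.042]
se = [0.010, 0.020, 0.015]
pooled, pooled_se = inverse_variance_pool(b, se)
pct_increase = 100 * (np.exp(pooled) - 1)   # percent change in expected accidents
```

    Exponentiating the pooled log-coefficient converts it into the percent increase in expected accidents per added access point; a random-effects variant would widen the pooled standard error to reflect the between-study heterogeneity the paper notes.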

  6. Challenging ocular image recognition

    NASA Astrophysics Data System (ADS)

    Pauca, V. Paúl; Forkin, Michael; Xu, Xiao; Plemmons, Robert; Ross, Arun A.

    2011-06-01

    Ocular recognition is a new area of biometric investigation targeted at overcoming the limitations of iris recognition performance in the presence of non-ideal data. There are several advantages for increasing the area beyond the iris, yet there are also key issues that must be addressed such as size of the ocular region, factors affecting performance, and appropriate corpora to study these factors in isolation. In this paper, we explore and identify some of these issues with the goal of better defining parameters for ocular recognition. An empirical study is performed where iris recognition methods are contrasted with texture and point operators on existing iris and face datasets. The experimental results show a dramatic recognition performance gain when additional features are considered in the presence of poor quality iris data, offering strong evidence for extending interest beyond the iris. The experiments also highlight the need for the direct collection of additional ocular imagery.

  7. Simultaneous determination of binary mixture of amlodipine besylate and atenolol based on dual wavelengths

    NASA Astrophysics Data System (ADS)

    Lamie, Nesrine T.

    2015-10-01

    Four accurate, precise, and sensitive spectrophotometric methods are developed for the simultaneous determination of a binary mixture of amlodipine besylate (AM) and atenolol (AT). AM is determined at its λmax 360 nm (0D), while atenolol can be determined by four different methods. Method (A) is absorption factor (AF). Method (B) is the new ratio difference method (RD), which measures the difference in amplitudes between 210 and 226 nm. Method (C) is a novel constant center spectrophotometric method (CC). Method (D) is mean centering of the ratio spectra (MCR) at 284 nm. The methods are tested by analyzing synthetic mixtures of the cited drugs and are applied to their commercial pharmaceutical preparation. The validity of the results is assessed by applying the standard addition technique. The results obtained are found to agree statistically with those obtained by official methods, showing no significant difference with respect to accuracy and precision.

  8. Reptile Hematology.

    PubMed

    Sykes, John M; Klaphake, Eric

    2015-09-01

    The basic principles of hematology used in mammalian medicine can be applied to reptiles. The appearances of the blood cells are significantly different from those seen in most mammals, and vary with taxa and staining method used. Many causes for abnormalities of the reptilian hemogram are similar to those for mammals, although additional factors such as venipuncture site, season, hibernation status, captivity status, and environmental factors can also affect values, making interpretation of hematologic results challenging. Values in an individual should be compared with reference ranges specific to that species, gender, and environmental conditions when available. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Supercritical carbon dioxide extracted extracellular matrix material from adipose tissue.

    PubMed

    Wang, Jun Kit; Luo, Baiwen; Guneta, Vipra; Li, Liang; Foo, Selin Ee Min; Dai, Yun; Tan, Timothy Thatt Yang; Tan, Nguan Soon; Choong, Cleo; Wong, Marcus Thien Chong

    2017-06-01

    Adipose tissue is a rich source of extracellular matrix (ECM) material that can be isolated by delipidating and decellularizing the tissue. However, the current delipidation and decellularization methods either involve tedious and lengthy processes or require toxic chemicals, which may result in the elimination of vital proteins and growth factors found in the ECM. Hence, an alternative delipidation and decellularization method for adipose tissue was developed using supercritical carbon dioxide (SC-CO2) that eliminates the need of any harsh chemicals and also reduces the amount of processing time required. The resultant SC-CO2-treated ECM material showed an absence of nuclear content but the preservation of key proteins such as collagen Type I, collagen Type III, collagen Type IV, elastin, fibronectin and laminin. In addition, other biological factors such as glycosaminoglycans (GAGs) and growth factors such as basic fibroblast growth factor (bFGF) and vascular endothelial growth factor (VEGF) were also retained. Subsequently, the resulting SC-CO2-treated ECM material was used as a bioactive coating on tissue culture plastic (TCP). Four different cell types including adipose tissue-derived mesenchymal stem cells (ASCs), human umbilical vein endothelial cells (HUVECs), immortalized human keratinocyte (HaCaT) cells and human monocytic leukemia cells (THP-1) were used in this study to show that the SC-CO2-treated ECM coating can be potentially used for various biomedical applications. The SC-CO2-treated ECM material showed improved cell-material interactions for all cell types tested. In addition, an in vitro scratch wound assay using HaCaT cells showed that the presence of SC-CO2-treated ECM material enhanced keratinocyte migration, whilst the in vitro cellular studies using THP-1-derived macrophages showed that the SC-CO2-treated ECM material did not evoke pro-inflammatory responses from the THP-1-derived macrophages. Overall, this study shows the efficacy of the SC-CO2 method for delipidation and decellularization of adipose tissue whilst retaining its ECM, and its subsequent utilization as a bioactive surface coating material for soft tissue engineering, angiogenesis and wound healing applications. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Application of factorial designs to study factors involved in the determination of aldehydes present in beer by on-fiber derivatization in combination with gas chromatography and mass spectrometry.

    PubMed

    Carrillo, Génesis; Bravo, Adriana; Zufall, Carsten

    2011-05-11

    With the aim of studying the factors involved in the on-fiber derivatization of Strecker aldehydes, furfural, and (E)-2-nonenal with O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine in beer, factorial designs were applied. The effects of temperature, time, and NaCl addition on the analytes' derivatization/extraction efficiency were studied through a 2^3 factorial randomized-block design; all of the factors and their interactions were significant at the 95% confidence level for most of the analytes. The effect of temperature and its interactions separated the analytes into two groups. However, a single sampling condition was selected that optimized the response for most aldehydes. The resulting method, combining on-fiber derivatization with gas chromatography-mass spectrometry, was validated. Limits of detection were between 0.015 and 1.60 μg/L, and relative standard deviations were between 1.1 and 12.2%. The efficacy of the internal standardization method was confirmed by recovery percentages (73-117%). The method was applied to the determination of aldehydes in fresh beer and after storage at 28 °C.
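
    The effect estimates from a 2^3 factorial design can be sketched as follows. The full-factorial design matrix is standard; the response values are invented for illustration and are not the beer-aldehyde data.

```python
import numpy as np
from itertools import product

# Full 2^3 design in coded units (-1/+1) for, e.g., temperature, time, NaCl.
X = np.array(list(product([-1, 1], repeat=3)), dtype=float)

# Hypothetical derivatization/extraction responses (arbitrary peak-area units).
y = np.array([10., 14., 11., 16., 13., 18., 15., 21.])

def effect(contrast, y):
    """Effect = mean response at +1 minus mean response at -1 of a contrast."""
    return y[contrast > 0].mean() - y[contrast < 0].mean()

main = [effect(X[:, j], y) for j in range(3)]   # three main effects
inter_01 = effect(X[:, 0] * X[:, 1], y)         # two-factor interaction 0x1
```

    Interaction contrasts are just elementwise products of the factor columns, which is what lets a single 8-run design estimate all main effects and interactions; a randomized-block version would add a block column analyzed the same way.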

  11. Optimization of squalene produced from crude palm oil waste

    NASA Astrophysics Data System (ADS)

    Wandira, Irda; Legowo, Evita H.; Widiputri, Diah I.

    2017-01-01

Squalene is a hydrocarbon originally and still mostly extracted from shark liver oil. Due to environmental concerns over shark hunting, there have been efforts to extract squalene from alternative sources, such as Palm Fatty Acid Distillate (PFAD), one of the wastes of crude palm oil (CPO) processing. Previous research has shown that squalene can be extracted from PFAD using a saponification process followed by liquid-liquid extraction, although the method had yet to be optimized to maximize the amount of squalene extracted from PFAD. In this work, both stages of the extraction method were optimized: saponification and liquid-liquid extraction (LLE). The factors examined in the saponification step were KOH concentration and saponification duration, while in the LLE step they were the volumes of distilled water and dichloromethane. The optimum percentage of squalene content in the extract (24.08%) was achieved by saponifying the PFAD with 50% w/v KOH for 60 minutes and subjecting the saponified PFAD to LLE using 100 ml of distilled water along with three additions of fresh dichloromethane, 75 ml each; these conditions constitute the optimized squalene extraction method.

12. Determination of pKa values of some antipsychotic drugs by HPLC--correlations with the Kamlet and Taft solvatochromic parameters and HPLC analysis in dosage forms.

    PubMed

    Sanli, Senem; Akmese, Bediha; Altun, Yuksel

    2013-01-01

In this study, ionization constant (pKa) values were determined for four ionizable drugs, namely risperidone (RI), clozapine (CL), olanzapine (OL), and sertindole (SE), by using the dependence of the retention factor on the pH of the mobile phase. The effect of the mobile phase composition on the pKa was studied by measuring the pKa in different acetonitrile-water mixtures with an HPLC-UV method. To explain the variation of the pKa values over the whole composition range studied, the quasi-lattice quasi-chemical theory of preferential solvation was applied. The pKa values of the drugs were correlated with the Kamlet and Taft solvatochromic parameters. Kamlet and Taft's general equation was reduced to two terms by using combined factor analysis and target factor analysis in these mixtures: the independent term and the hydrogen-bond donating ability α. The HPLC-UV method was successfully applied to the determination of RI, OL, and SE in pharmaceutical dosage forms, with CL chosen as an internal standard. Additionally, the repeatability, reproducibility, selectivity, precision, and accuracy of the method in all media were investigated and calculated.
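The retention-factor route to pKa can be sketched with the usual sigmoidal model: k(pH) = k_HA·(1-f) + k_A·f with f = 10^(pH-pKa)/(1+10^(pH-pKa)). For a trial pKa the two limiting retention factors enter linearly, so they can be solved by ordinary least squares and the pKa chosen by grid search. This is a generic sketch, not the authors' fitting code; the data below are synthetic.

```python
# Extract an apparent pKa from retention factor k versus mobile-phase pH.
# Model: k = k_HA*(1-f) + k_A*f, f = 10**(pH-pKa) / (1 + 10**(pH-pKa)).
def fit_pka(ph, k, grid=None):
    grid = grid or [x / 100 for x in range(200, 1101)]   # trial pKa 2.00..11.00
    best = (float("inf"), None)
    for pka in grid:
        f = [10 ** (p - pka) / (1 + 10 ** (p - pka)) for p in ph]
        # normal equations for k = a*(1-f) + b*f (a = k_HA, b = k_A)
        s11 = sum((1 - x) ** 2 for x in f)
        s12 = sum((1 - x) * x for x in f)
        s22 = sum(x * x for x in f)
        t1 = sum((1 - x) * y for x, y in zip(f, k))
        t2 = sum(x * y for x, y in zip(f, k))
        det = s11 * s22 - s12 * s12
        if abs(det) < 1e-12:
            continue
        a = (t1 * s22 - t2 * s12) / det
        b = (t2 * s11 - t1 * s12) / det
        rss = sum((a * (1 - x) + b * x - y) ** 2 for x, y in zip(f, k))
        if rss < best[0]:
            best = (rss, pka)
    return best[1]

# synthetic check: data generated with pKa = 7.50, k_HA = 8.0, k_A = 2.0
ph = [4, 5, 6, 6.5, 7, 7.5, 8, 8.5, 9, 10]
true = [8.0 * (1 - w) + 2.0 * w
        for w in (10 ** (p - 7.5) / (1 + 10 ** (p - 7.5)) for p in ph)]
print(fit_pka(ph, true))  # → 7.5
```

In practice the recovered value is an apparent pKa for each acetonitrile-water composition, which is what the solvatochromic correlations then use.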

  13. Analytical method development of nifedipine and its degradants binary mixture using high performance liquid chromatography through a quality by design approach

    NASA Astrophysics Data System (ADS)

    Choiri, S.; Ainurofiq, A.; Ratri, R.; Zulmi, M. U.

    2018-03-01

Nifedipine (NIF) is a photo-labile drug that degrades readily when exposed to sunlight. This research aimed to develop an effective, efficient, and validated analytical method for NIF and its degradants using high-performance liquid chromatography, implemented through a quality by design approach. A 2² full factorial design with a curvature (center point) was applied to optimize the analytical conditions for NIF and its degradants. Mobile phase composition (MPC) and flow rate (FR) were the factors examined for their effect on system suitability parameters. The selected condition was validated by cross-validation using a leave-one-out technique. Altering the MPC significantly affected retention time. Furthermore, an increase in FR reduced the tailing factor. In addition, the interaction of both factors increased the theoretical plates and the resolution of NIF and its degradants. The selected analytical conditions for NIF and its degradants were validated over the range of 1-16 µg/mL, showing good linearity, precision, and accuracy, and were efficient, with an analysis time within 10 min.

  14. Direct purification of pectinase from mango (Mangifera Indica Cv. Chokanan) peel using a PEG/salt-based Aqueous Two Phase System.

    PubMed

    Mehrnoush, Amid; Sarker, Md Zaidul Islam; Mustafa, Shuhaimi; Yazid, Abdul Manap Mohd

    2011-10-10

An Aqueous Two-Phase System (ATPS) was employed for the first time for the separation and purification of pectinase from mango (Mangifera Indica Cv. Chokanan) peel. The effects of different parameters such as the molecular weight of the polymer (polyethylene glycol, 2,000-10,000), potassium phosphate composition (12-20%, w/w), system pH (6-9), and addition of different concentrations of neutral salts (0-8%, w/w) on the partition behavior of pectinase were investigated. The partition coefficient of the enzyme decreased with increasing PEG molecular weight. Additionally, the phase composition showed a significant effect on the purification factor and yield of the enzyme. Optimum conditions for purification of pectinase from mango peel were achieved in a 14% PEG 4000-14% potassium phosphate system with 3% (w/w) NaCl addition at pH 7.0. With this system, the purification factor of pectinase was increased to 13.2 with a high yield (97.6%). Thus, this study shows that ATPS can be an inexpensive and effective method for partitioning of pectinase from mango peel.

  15. Computational Thermochemistry: Scale Factor Databases and Scale Factors for Vibrational Frequencies Obtained from Electronic Model Chemistries.

    PubMed

    Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G

    2010-09-14

Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect-of-diatomic-differential-overlap approximation, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
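The optimization the abstract describes has a simple core: choose λ minimizing Σᵢ(λ·ωᵢᶜᵃˡᶜ − ωᵢʳᵉᶠ)², which gives the closed form λ = Σωᶜᵃˡᶜωʳᵉᶠ / Σ(ωᶜᵃˡᶜ)². A minimal sketch, with made-up frequencies rather than values from the paper's databases:

```python
# Least-squares scale factor for computed vibrational frequencies:
# lambda = sum(omega_calc * omega_ref) / sum(omega_calc**2).
def optimal_scale_factor(calc, ref):
    return sum(c * r for c, r in zip(calc, ref)) / sum(c * c for c in calc)

calc = [3100.0, 1650.0, 1200.0, 980.0]   # harmonic frequencies from a model chemistry (cm^-1)
ref  = [2950.0, 1600.0, 1170.0, 960.0]   # reference fundamentals (cm^-1), illustrative
lam = optimal_scale_factor(calc, ref)
scaled = [lam * c for c in calc]
print(lam)
```

The same recipe applies per property (harmonics, fundamentals, zero-point energies), which is why fixed ratios between the resulting λ values let one scale factor generate the others.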

  16. A new method for water quality assessment: by harmony degree equation.

    PubMed

    Zuo, Qiting; Han, Chunhui; Liu, Jing; Ma, Junxia

    2018-02-22

Water quality assessment is an important basic task in the development, utilization, management, and protection of water resources, and also a prerequisite for water safety. In this paper, the harmony degree equation (HDE) was introduced into water quality assessment research, and a new assessment method based on the HDE was proposed: water quality assessment by harmony degree equation (WQA-HDE). First, the calculation steps and ideas of this method are described in detail; then, this method and several other important water quality assessment methods (the single factor assessment method, the mean-type comprehensive index assessment method, and the multi-level gray correlation assessment method) were used to assess the water quality of the Shaying River (the largest tributary of the Huaihe in China). For this purpose, a 2-year (2013-2014) dataset of nine water quality variables covering seven monitoring sites, approximately 189 observations in all, was used to compare and analyze the characteristics and advantages of the new method. The results showed that the calculation steps of WQA-HDE are similar to those of the comprehensive assessment method, and WQA-HDE is more operational compared with other water quality assessment methods. In addition, the new method shows good flexibility through the setting of the judgment criterion value HD₀ of water quality: when HD₀ = 0.8, the results are closer to reality and more reliable. In particular, when HD₀ = 1, the results of WQA-HDE are consistent with the single factor assessment method, both methods being subject to the most stringent "one vote veto" judgment condition. Thus, WQA-HDE is a composite method that combines single factor assessment and comprehensive assessment. This research not only broadens the theoretical method system of harmony theory but also promotes the unification of water quality assessment methods, and it can serve as a reference for other comprehensive assessments.
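The two classical baselines the paper compares against can be sketched as follows: the single factor method grades a sample by its worst variable ("one vote veto"), while a mean-type comprehensive index averages per-variable indices. The class limits and concentrations below are invented for illustration, not Chinese surface-water standards or Shaying River data.

```python
# Single factor assessment vs. a mean-type comprehensive index (sketch).
LIMITS = {  # hypothetical upper concentration limit per quality class (mg/L)
    "COD":  [15, 20, 30, 40],
    "NH3N": [0.5, 1.0, 1.5, 2.0],
}
CLASSES = ["I", "II", "III", "IV", "V"]  # V = exceeds all listed limits

def variable_class(name, value):
    for cls, limit in zip(CLASSES, LIMITS[name]):
        if value <= limit:
            return cls
    return CLASSES[-1]

def single_factor(sample):
    # worst (highest-index) class across all variables decides the grade
    return max(CLASSES.index(variable_class(n, v)) for n, v in sample.items())

def mean_index(sample):
    # average of value / class-III-limit ratios, a simple comprehensive index
    return sum(v / LIMITS[n][2] for n, v in sample.items()) / len(sample)

sample = {"COD": 25.0, "NH3N": 1.8}
print(CLASSES[single_factor(sample)], round(mean_index(sample), 3))
```

WQA-HDE sits between these extremes: its HD₀ threshold tunes how strongly a single bad variable can veto an otherwise acceptable average.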

  17. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.

Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. 
In addition, SNR values were found to increase with decreasing RMSE values of projection angular gaps, with strong correlations (r ≈ −0.7) regardless of the reconstruction algorithm used. Conclusions: Based on the authors' results, displacement-based binning methods, better reconstruction algorithms, and the acquisition of evenly spaced projection angular views are the most important factors to consider for improving thoracic 4D-CBCT image quality. In view of the practical issues with displacement-based binning and the fact that projection angular spacing is not currently directly controllable, development of better reconstruction algorithms represents the most effective strategy for improving image quality in thoracic 4D-CBCT for IGRT applications at the current stage.
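The angular-spacing statistics the analysis correlates against can be sketched as follows. One plausible reading of "RMSE projection angular spacing" (an assumption, not the paper's exact definition) is the root-mean-square deviation of each gap from the uniform spacing; the angle list below is illustrative, not from the patient datasets.

```python
import math

# Max / mean / RMSE of angular gaps for one respiratory bin's projections.
def angular_gap_stats(angles_deg):
    a = sorted(x % 360.0 for x in angles_deg)
    gaps = [b - c for b, c in zip(a[1:], a[:-1])]
    gaps.append(360.0 - a[-1] + a[0])            # wrap-around gap
    n = len(gaps)
    mean = sum(gaps) / n                          # always 360/n by construction
    rmse = math.sqrt(sum((g - mean) ** 2 for g in gaps) / n)
    return max(gaps), mean, rmse

print(angular_gap_stats([0, 90, 180, 270]))   # perfectly even: RMSE 0
print(angular_gap_stats([0, 10, 20, 200]))    # clustered: large max gap
```

Under this reading, phase or displacement binning that clusters projections raises the max gap and the RMSE, which is the quantity reported to anticorrelate with SNR.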

  18. Network selection, Information filtering and Scalable computation

    NASA Astrophysics Data System (ADS)

    Ye, Changqing

This dissertation explores two application scenarios of sparsity pursuit methods on large-scale data sets. The first scenario is classification and regression for high-dimensional structured data, where predictors correspond to nodes of a given directed graph. This arises, for instance, in identification of disease genes for Parkinson's disease from a network of candidate genes. In such a situation, the directed graph describes dependencies among the genes, where the directions of edges represent certain causal effects. Key to high-dimensional structured classification and regression is how to utilize the dependencies among predictors specified by the directions of the graph. In this dissertation, we develop a novel method that fully takes such dependencies into account, formulated through certain nonlinear constraints. We apply the proposed method to two applications: feature selection in large-margin binary classification and in linear regression. We implement the proposed method through difference convex programming for the cost function and constraints. Finally, theoretical and numerical analyses suggest that the proposed method achieves the desired objectives. An application to disease gene identification is presented. The second application scenario is personalized information filtering, which extracts the information specifically relevant to a user, predicting his/her preference over a large number of items based on the opinions of users who think alike or on item content. This problem is cast into the framework of regression and classification, where we introduce novel partial latent models to integrate additional user-specific and content-specific predictors for higher predictive accuracy. In particular, we factorize a user-over-item preference matrix into a product of two matrices, one representing users' preferences and the other items' preferences by users. 
We then propose a likelihood method to seek a sparsest latent factorization, from a class of over-complete factorizations, possibly with a high percentage of missing values. This promotes additional sparsity beyond rank reduction. Computationally, we design methods based on a "decomposition and combination" strategy to break large-scale optimization into many small subproblems that are solved in a recursive and parallel manner. On this basis, we implement the proposed methods through multi-platform shared-memory parallel programming, and through Mahout, a library for scalable machine learning and data mining, for MapReduce computation. For example, our methods are scalable to a dataset consisting of three billion observations on a single machine with sufficient memory, with good timings. Both theoretical and numerical investigations show that the proposed methods exhibit significant improvement in accuracy over state-of-the-art scalable methods.
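The basic building block, factorizing a sparse user-over-item preference matrix from observed entries only, can be sketched with plain stochastic gradient descent. This is a generic baseline (not the dissertation's sparsity-penalized likelihood method or its Mahout implementation), with invented ratings:

```python
import math
import random

# Factorize a sparse ratings matrix as U @ V^T by SGD on observed entries.
def factorize(observed, n_users, n_items, rank=2, lr=0.02, reg=0.02,
              epochs=1000, seed=0):
    rng = random.Random(seed)
    U = [[rng.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(n_users)]
    V = [[rng.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(n_items)]
    for _ in range(epochs):
        for (u, i), r in observed.items():
            pred = sum(U[u][k] * V[i][k] for k in range(rank))
            err = r - pred
            for k in range(rank):
                uk, vk = U[u][k], V[i][k]
                U[u][k] += lr * (err * vk - reg * uk)   # L2-regularized update
                V[i][k] += lr * (err * uk - reg * vk)
    return U, V

def rmse(observed, U, V):
    rank = len(U[0])
    se = sum((r - sum(U[u][k] * V[i][k] for k in range(rank))) ** 2
             for (u, i), r in observed.items())
    return math.sqrt(se / len(observed))

# toy 3-user x 3-item example with missing entries
ratings = {(0, 0): 5, (0, 1): 3, (1, 0): 4, (2, 1): 1, (2, 2): 5, (1, 2): 4}
U, V = factorize(ratings, n_users=3, n_items=3)
print(rmse(ratings, U, V))
```

The dissertation's contribution replaces the L2 penalty with a sparsity-pursuit penalty over an over-complete factorization and distributes these per-entry updates across subproblems; the update loop above is only the shared skeleton.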

  19. Multi-Unmanned Aerial Vehicle (UAV) Cooperative Fault Detection Employing Differential Global Positioning (DGPS), Inertial and Vision Sensors.

    PubMed

    Heredia, Guillermo; Caballero, Fernando; Maza, Iván; Merino, Luis; Viguria, Antidio; Ollero, Aníbal

    2009-01-01

This paper presents a method to increase the reliability of Unmanned Aerial Vehicle (UAV) sensor Fault Detection and Identification (FDI) in a multi-UAV context. Differential Global Positioning System (DGPS) and inertial sensors are used for sensor FDI in each UAV. The method uses additional position estimations that augment each UAV's individual FDI system. These additional estimations are obtained using images of the same planar scene taken from two different UAVs. Since the accuracy and noise level of the estimation depend on several factors, dynamic replanning of the multi-UAV team can be used to obtain a better estimation in the case of faults caused by slow-growing errors of absolute position estimation that cannot be detected by local FDI in the UAVs. Experimental results with data from two real UAVs are also presented.

  20. Analytical Expressions for the Mixed-Order Kinetics Parameters of TL Glow Peaks Based on the two Heating Rates Method.

    PubMed

    Maghrabi, Mufeed; Al-Abdullah, Tariq; Khattari, Ziad

    2018-03-24

The two heating rates method (originally developed for first-order glow peaks) was used for the first time to evaluate the activation energy (E) from glow peaks obeying mixed-order (MO) kinetics. The derived expression for E has an insignificant additional term (on the scale of a few meV) when compared with the first-order case. Hence, the original expression for E from the two heating rates method can be used with excellent accuracy in the case of MO glow peaks. In addition, we derived a simple analytical expression for the MO parameter. The present procedure has the advantage that the MO parameter can now be evaluated using an analytical expression instead of the graphical representation between the geometrical factor and the MO parameter given by the existing peak shape methods. The applicability of the derived expressions for real samples was demonstrated for the glow curve of a Li₂B₄O₇:Mn single crystal. The obtained parameters compare very well with those obtained by glow curve fitting and with the available published data.
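For the first-order case that the abstract takes as its baseline, the two heating rates formula is E = k_B·(T₁T₂/(T₂−T₁))·ln(β₂T₁²/(β₁T₂²)), where T₁, T₂ are the peak temperatures at heating rates β₁ < β₂. The round trip below generates peak temperatures from an assumed E and frequency factor s (invented values, not the Li₂B₄O₇:Mn data) and recovers E:

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def peak_temperature(E, s, beta, lo=300.0, hi=900.0):
    """Solve the first-order peak condition beta*E/(k_B*T^2) = s*exp(-E/(k_B*T))
    for T by bisection; assumes the root lies in [lo, hi]."""
    def f(T):
        return s * math.exp(-E / (K_B * T)) - beta * E / (K_B * T * T)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:       # escape term already dominates: peak is below mid
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def activation_energy(T1, beta1, T2, beta2):
    """Two heating rates estimate of E (eV)."""
    return K_B * (T1 * T2 / (T2 - T1)) * math.log(beta2 * T1**2 / (beta1 * T2**2))

E_true, s = 1.0, 1e12          # eV and 1/s, illustrative
T1 = peak_temperature(E_true, s, beta=1.0)
T2 = peak_temperature(E_true, s, beta=4.0)
E_est = activation_energy(T1, beta1=1.0, T2=T2, beta2=4.0)
print(T1, T2, E_est)
```

The paper's result is that applying the same formula to mixed-order peaks introduces only a few-meV correction, so this first-order estimate carries over with excellent accuracy.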

  1. Which sociodemographic factors are important on smoking behaviour of high school students? The contribution of classification and regression tree methodology in a broad epidemiological survey

    PubMed Central

    Özge, C; Toros, F; Bayramkaya, E; Çamdeviren, H; Şaşmaz, T

    2006-01-01

Background The purpose of this study is to evaluate the most important sociodemographic factors affecting the smoking status of high school students using a broad randomised epidemiological survey. Methods Using an in-class, self-administered questionnaire about their sociodemographic variables and smoking behaviour, a representative sample of 3304 students in total, from the preparatory, 9th, 10th, and 11th grades of 22 randomly selected schools in Mersin, was evaluated, and discriminative factors were determined using appropriate statistics. In addition to binary logistic regression analysis, the study evaluated the combined effects of these factors using classification and regression tree methodology, as a newer statistical method. Results The data showed that 38% of the students reported lifetime smoking and 16.9% reported current smoking, with male predominance and increasing prevalence by age. Second-hand smoking was reported at a frequency of 74.3%, with fathers predominating (56.6%). The significantly important factors affecting current smoking in these age groups were increased household size, late birth rank, certain school types, low academic performance, increased second-hand smoking, and stress (especially reported as separation from a close friend or violence at home). Classification and regression tree methodology showed the importance of some neglected sociodemographic factors, with good classification capacity. Conclusions It was concluded that, being closely related to sociocultural factors, smoking was a common problem in this young population, generating an important academic and social burden in young people's lives; with increasing data about this behaviour and the use of new statistical methods, effective coping strategies could be composed. PMID:16891446
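The core step that classification and regression tree (CART) methodology repeats recursively is choosing, over all variables and cut points, the binary split that most reduces Gini impurity. A minimal sketch of that step (not the authors' software; rows, columns, and labels are invented, not the Mersin survey data):

```python
# Find the single best binary split by Gini impurity decrease.
def gini(labels):
    """Gini impurity of a set of 0/1 labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2 * p * (1 - p)

def best_split(rows, labels):
    parent, n = gini(labels), len(labels)
    best = (0.0, None, None)                 # (impurity decrease, variable, cut)
    for j in range(len(rows[0])):
        for cut in sorted({r[j] for r in rows}):
            left = [y for r, y in zip(rows, labels) if r[j] <= cut]
            right = [y for r, y in zip(rows, labels) if r[j] > cut]
            if not left or not right:
                continue
            child = (len(left) * gini(left) + len(right) * gini(right)) / n
            if parent - child > best[0]:
                best = (parent - child, j, cut)
    return best

# hypothetical rows: (household size, birth rank); label 1 = current smoker
rows = [(3, 1), (4, 2), (6, 3), (7, 4), (5, 1), (8, 3)]
labels = [0, 0, 1, 1, 0, 1]
print(best_split(rows, labels))  # → (0.5, 0, 5)
```

A full CART run applies this split finder recursively to each child node and then prunes, which is how combined effects such as "large household and low academic performance" surface as paths in the tree.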

  2. Factors Influencing the Job Satisfaction of Health System Employees in Tabriz, Iran

    PubMed Central

    Bagheri, Shokoufe; Kousha, Ahmad; Janati, Ali; Asghari-Jafarabadi, Mohammad

    2012-01-01

Background: Employees can be counseled on how they feel about their job. If any particular dimension of their job is causing them dissatisfaction, they can be assisted to change it appropriately. In this study, we investigated the factors affecting job satisfaction from the perspective of employees working in the health system, and thereby obtained a quantitative measure of job satisfaction. Methods: Using eight focus group discussions (n=70), factors affecting job satisfaction of the employees were discussed. The factors identified from the literature review were categorized in four groups: structural and managerial, social, the work itself, and environmental and welfare factors. Results: The findings confirmed the significance of structural and managerial, social, work-itself, and environmental and welfare factors in the level of job satisfaction. In addition, a new factor related to individual characteristics, such as employee personal characteristics and development, was identified. Conclusion: In order to improve the quality and productivity of work, besides the structural and managerial, social, work-itself, and environmental and welfare factors, policy makers should take into account the individual characteristics of the employee as a factor affecting job satisfaction. PMID:24688933

  3. Estimation of the electric plasma membrane potential difference in yeast with fluorescent dyes: comparative study of methods.

    PubMed

    Peña, Antonio; Sánchez, Norma Silvia; Calahorra, Martha

    2010-10-01

Different methods to estimate the plasma membrane potential difference (PMP) of yeast cells with fluorescent monitors were compared. The validity of the methods was tested by the fluorescence difference with or without glucose, and its decrease upon the addition of 10 mM KCl. Low CaCl₂ concentrations avoid binding of the dye to the cell surface, and low CCCP concentrations avoid its accumulation by mitochondria. Lower concentrations of Ba²⁺ produce a similar effect to Ca²⁺, without producing the fluorescence changes derived from its transport. Fluorescence changes that do not take into account binding of the dyes to the cells and accumulation by mitochondria are overshadowed by the dye's distribution between this organelle and the cytoplasm. Other factors, such as yeast starvation, the dye used, the parameters of the fluorescence changes, as well as buffers and incubation times, were analyzed. An additional approach to measure the actual or relative values of PMP, by determining the accumulation of the dye, is presented.

  4. Modifications to the Patient Rule-Induction Method that utilize non-additive combinations of genetic and environmental effects to define partitions that predict ischemic heart disease.

    PubMed

    Dyson, Greg; Frikke-Schmidt, Ruth; Nordestgaard, Børge G; Tybjaerg-Hansen, Anne; Sing, Charles F

    2009-05-01

    This article extends the Patient Rule-Induction Method (PRIM) for modeling cumulative incidence of disease developed by Dyson et al. (Genet Epidemiol 31:515-527) to include the simultaneous consideration of non-additive combinations of predictor variables, a significance test of each combination, an adjustment for multiple testing and a confidence interval for the estimate of the cumulative incidence of disease in each partition. We employ the partitioning algorithm component of the Combinatorial Partitioning Method to construct combinations of predictors, permutation testing to assess the significance of each combination, theoretical arguments for incorporating a multiple testing adjustment and bootstrap resampling to produce the confidence intervals. An illustration of this revised PRIM utilizing a sample of 2,258 European male participants from the Copenhagen City Heart Study is presented that assesses the utility of genetic variants in predicting the presence of ischemic heart disease beyond the established risk factors.

  5. Modifications to the Patient Rule-Induction Method that utilize non-additive combinations of genetic and environmental effects to define partitions that predict ischemic heart disease

    PubMed Central

    Dyson, Greg; Frikke-Schmidt, Ruth; Nordestgaard, Børge G.; Tybjærg-Hansen, Anne; Sing, Charles F.

    2009-01-01

    This paper extends the Patient Rule-Induction Method (PRIM) for modeling cumulative incidence of disease developed by Dyson et al. (2007) to include the simultaneous consideration of non-additive combinations of predictor variables, a significance test of each combination, an adjustment for multiple testing and a confidence interval for the estimate of the cumulative incidence of disease in each partition. We employ the partitioning algorithm component of the Combinatorial Partitioning Method (CPM) to construct combinations of predictors, permutation testing to assess the significance of each combination, theoretical arguments for incorporating a multiple testing adjustment and bootstrap resampling to produce the confidence intervals. An illustration of this revised PRIM utilizing a sample of 2258 European male participants from the Copenhagen City Heart Study is presented that assesses the utility of genetic variants in predicting the presence of ischemic heart disease beyond the established risk factors. PMID:19025787

  6. A method for assessing the intrinsic value and management potentials of geomorphosites

    NASA Astrophysics Data System (ADS)

    Reynard, Emmanuel; Amandine, Perret; Marco, Buchmann; Jonathan, Bussard; Lucien, Grangier; Simon, Martin

    2014-05-01

In 2007, we proposed a method for assessing the scientific and additional values of geomorphosites (Reynard et al., 2007). The evaluation methodology was divided in two steps: the evaluation of the scientific value of pre-selected sites, based on several criteria (rareness, integrity, representativeness, interest for reconstructing the regional morphogenesis), and the assessment of a set of so-called additional values (aesthetic, economic, ecological, and cultural). The method has proved to be quite robust and easy to use. The tests carried out in several geomorphological contexts allowed us to improve the implementation process of the method by refining the criteria used to assess the various values of selected sites. Nevertheless, two main problems remained unsolved: (1) the selection of sites was not clear and not really systematic; (2) some additional values, in particular the economic value, were difficult to assess, and others not considered in the method could be evaluated (e.g. the educational value of sites). These issues motivated a series of modifications of the method that are presented in this poster. First of all, the assessment procedure was divided in two main steps: (1) the evaluation of the intrinsic value, in two parts (the scientific and additional values, the latter limited to three kinds of values: cultural, ecological, aesthetic); (2) the documentation of the present use and management of the site, also divided in two parts: the sensitivity of the site (allowing us to assess the need for protection), and a series of factors influencing the (tourist) use of the site (visit conditions, educational interest, economic value). Secondly, a procedure was developed to select the potential geomorphosites, that is, the sites worth assessing with the evaluation method. 
The method was then tested in four regions in the Swiss and French Alps: the Chablais area (Switzerland, France), the Hérens valley (Switzerland), the Moesano valley (Switzerland), where a project of national park is in preparation, and the Gruyère - Pays-d'Enhaut Regional Nature Park (Switzerland). The main conclusion of the research is that even if full objectivity in the evaluation process is difficult to reach, transparency is essential at three stages: (1) the selection of potential geomorphosites: it is important to develop criteria and a method for establishing a list of potential geomorphosites; in this study, we propose to carry out the selection by crossing two dimensions: a spatial one (the selection should reflect the regional geo(morpho)diversity) and a temporal one (the selection should allow reconstructing the regional geomorphological history); (2) the assessment of the intrinsic value of the selected geomorphosites, through the establishment of clear criteria for carrying out the evaluation; (3) the development of a clear management strategy oriented to the protection and tourist promotion of the sites and based on precise documentation of management potentials and needs, according to the assessment objectives. Reference: Reynard E., Fontana G., Kozlik L., Scapozza C. (2007). A method for assessing the scientific and additional values of geomorphosites, Geogr. Helv. 62(3), 148-158.

  7. Finite-nuclear-size contribution to the g factor of a bound electron: Higher-order effects

    NASA Astrophysics Data System (ADS)

    Karshenboim, Savely G.; Ivanov, Vladimir G.

    2018-02-01

A precision comparison of theory and experiments on the g factor of an electron bound in a hydrogenlike ion with a spinless nucleus requires a detailed account of finite-nuclear-size contributions. While the relativistic corrections to the leading finite-size contribution are known, the higher-order effects need an additional consideration. Two results are presented in the paper. One is on the anomalous-magnetic-moment correction to the finite-size effects and the other is due to higher-order effects in Zα·m·R_N. We also present here a method to relate the contributions to the g factor of a bound electron in a hydrogenlike atom to its energy within a nonrelativistic approach.

  8. Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor.

    PubMed

    Hoang, Toan Minh; Baek, Na Rae; Cho, Se Woon; Kim, Ki Wan; Park, Kang Ryoung

    2017-10-28

Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition processing is a difficult challenge. Therefore, we propose a method to overcome various illumination problems, particularly severe shadows, by using a fuzzy system and line segment detector algorithms to obtain better road lane detection results with a visible light camera sensor. Experimental results on three open databases, the Caltech dataset, the Santiago Lanes dataset (SLD), and the Road Marking dataset, showed that our method outperformed conventional lane detection methods.

  9. Vulnerability to burnout within the nursing workforce-The role of personality and interpersonal behaviour.

    PubMed

    Geuens, Nina; Van Bogaert, Peter; Franck, Erik

    2017-12-01

    To study the combination of personality and interpersonal behaviour of staff nurses in general hospitals in relation to burnout and its separate dimensions. More research on the individual factors contributing to the development of burnout is needed to improve the risk profile of nursing staff. Therefore, a combination of Leary's interpersonal circumplex model, which depicts the interpersonal behaviour trait domain, and the five-factor model was considered in the study at hand. A cross-sectional research method was applied using self-report questionnaires. A total of 880 Belgian general hospital nurses were invited to participate in the study. Data were collected from November 2012-July 2013. The questionnaire consisted of three validated self-report instruments: the NEO five-factor inventory, the Dutch Interpersonal Behaviour Scale and the Maslach Burnout Inventory. Of the 880 nurses invited to participate, 587 (67%) returned the questionnaire. Sex, neuroticism, submissive-friendly behaviour, dominant-friendly behaviour and vector length were found to be predictive factors for emotional exhaustion. For depersonalisation, sex, neuroticism, conscientiousness, friendly behaviour, submissive-friendly behaviour, dominant-hostile behaviour and vector length were predictive factors. Finally, personal accomplishment was determined by neuroticism, openness, conscientiousness, and hostile behaviour. This study confirmed the influence of the Big Five personality factors on the separate dimensions of burnout. Interpersonal behaviour made a significant contribution to the predictive capacity of the regression models of all three dimensions of burnout. Additional longitudinal research is required to confirm the causal relationship between these individual factors and burnout. The results of this study can help to achieve a better understanding of which vulnerabilities an individual prevention programme for burnout should target. 
In addition, hospitals could use assessment instruments to identify nurses who are prone to burnout and thus would benefit from additional support or stress reduction programmes. © 2017 John Wiley & Sons Ltd.

  10. Chemoselective reductive nucleophilic addition to tertiary amides, secondary amides, and N-methoxyamides.

    PubMed

    Nakajima, Minami; Oda, Yukiko; Wada, Takamasa; Minamikawa, Ryo; Shirokane, Kenji; Sato, Takaaki; Chida, Noritaka

    2014-12-22

    As the complexity of targeted molecules increases in modern organic synthesis, chemoselectivity is recognized as an important factor in the development of new methodologies. Chemoselective nucleophilic addition to amide carbonyl centers is a challenge because classical methods require harsh reaction conditions to overcome the poor electrophilicity of the amide carbonyl group. We have successfully developed a reductive nucleophilic addition of mild nucleophiles to tertiary amides, secondary amides, and N-methoxyamides that uses the Schwartz reagent [Cp2 ZrHCl]. The reaction took place in a highly chemoselective fashion in the presence of a variety of sensitive functional groups, such as methyl esters, which conventionally require protection prior to nucleophilic addition. The reaction will be applicable to the concise synthesis of complex natural alkaloids from readily available amide groups. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Human factors with nonhumans - Factors that affect computer-task performance

    NASA Technical Reports Server (NTRS)

    Washburn, David A.

    1992-01-01

    There are two general strategies that may be employed for 'doing human factors research with nonhuman animals'. First, one may use the methods of traditional human factors investigations to examine the nonhuman animal-to-machine interface. Alternatively, one might use performance by nonhuman animals as a surrogate for or model of performance by a human operator. Each of these approaches is illustrated with data in the present review. Chronic ambient noise was found to have a significant but inconsequential effect on computer-task performance by rhesus monkeys (Macaca mulatta). Additional data supported the generality of findings such as these to humans, showing that rhesus monkeys are appropriate models of human psychomotor performance. It is argued that ultimately the interface between comparative psychology and technology will depend on the coordinated use of both strategies of investigation.

  12. Imputing Observed Blood Pressure for Antihypertensive Treatment: Impact on Population and Genetic Analyses

    PubMed Central

    2014-01-01

BACKGROUND Elevated blood pressure (BP), a heritable risk factor for many age-related disorders, is commonly investigated in population and genetic studies, but antihypertensive use can confound study results. Routine methods to adjust for antihypertensives may not sufficiently account for newer treatment protocols (i.e., combination or multiple drug therapy) found in contemporary cohorts. METHODS We refined an existing method to impute unmedicated BP in individuals on antihypertensives by incorporating new treatment trends. We assessed BP and antihypertensive use in male twins (n = 1,237) from the Vietnam Era Twin Study of Aging: 36% reported antihypertensive use; 52% of those treated were on multiple drugs. RESULTS Estimated heritability was 0.43 (95% confidence interval (CI) = 0.20–0.50) and 0.44 (95% CI = 0.22–0.61) for measured systolic BP (SBP) and diastolic BP (DBP), respectively. We imputed BP for antihypertensive use by 3 approaches: (i) addition of a fixed value of 10/5 mm Hg to measured SBP/DBP; (ii) incremented addition of mm Hg to BP based on the number of medications; and (iii) a refined approach adding mm Hg based on antihypertensive drug class and ethnicity. The imputations did not significantly affect the estimated heritability of BP. However, use of our most refined imputation method, as well as the other methods, resulted in significantly increased phenotypic correlations between BP and body mass index, a trait known to be correlated with BP. CONCLUSIONS This study highlights the potential usefulness of applying a representative adjustment for medication use, such as by considering drug class, ethnicity, and the combination of drugs, when assessing the relationship between BP and risk factors. PMID:24532572
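The fixed-addition approach (i) and the medication-count approach (ii) can be sketched in a few lines. The per-drug increments used in approach (ii) below are illustrative placeholders, not the study's calibrated values:

```python
def impute_bp_fixed(sbp, dbp, treated):
    # Approach (i): add a fixed 10/5 mm Hg to measured SBP/DBP when treated
    return (sbp + 10, dbp + 5) if treated else (sbp, dbp)

def impute_bp_by_count(sbp, dbp, n_drugs, per_drug=(10, 5)):
    # Approach (ii): incremented addition based on the number of
    # antihypertensive drugs; the per_drug step here is hypothetical
    return (sbp + per_drug[0] * n_drugs, dbp + per_drug[1] * n_drugs)
```

Approach (iii) would replace the flat `per_drug` step with a lookup keyed on drug class and ethnicity.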

  13. Comparing alternative methods for holding virgin honey bee queens for one week in mailing cages before mating.

    PubMed

    Bigio, Gianluigi; Grüter, Christoph; Ratnieks, Francis L W

    2012-01-01

In beekeeping, queen honey bees are often temporarily kept alive in cages. We determined the survival of newly-emerged virgin honey bee queens every day for seven days in an experiment that simultaneously investigated three factors: queen cage type (wooden three-hole or plastic), attendant workers (present or absent) and food type (sugar candy, honey, or both). Ten queens were tested in each of the 12 combinations. Queens were reared using standard beekeeping methods (Doolittle/grafting) and emerged from their cells into vials held in an incubator at 34 °C. All 12 combinations gave high survival (90 or 100%) for three days but only one method (wooden cage, with attendants, honey) gave 100% survival to day seven. Factors affecting queen survival were analysed. Across all combinations, attendant bees significantly increased survival (from 18% to 53%, p<0.001). In addition, there was an interaction between food type and cage type (p<0.001) with the honey and plastic cage combination giving reduced survival. An additional group of queens was reared and held for seven days using the best method, and then directly introduced using smoke into queenless nucleus colonies that had been dequeened five days previously. Acceptance was high (80%, 8/10) showing that this combination is also suitable for preparing queens for introduction into colonies. Having a simple method for keeping newly-emerged virgin queens alive in cages for one week and acceptable for introduction into queenless colonies will be useful in honey bee breeding. In particular, it facilitates the screening of many queens for genetic or phenotypic characteristics when only a small proportion meets the desired criteria. These can then be introduced into queenless hives for natural mating or insemination, both of which take place when queens are one week old.

  15. Determination of rivaroxaban in patient’s plasma samples by anti-Xa chromogenic test associated to High Performance Liquid Chromatography tandem Mass Spectrometry (HPLC-MS/MS)

    PubMed Central

    Derogis, Priscilla Bento Matos; Sanches, Livia Rentas; de Aranda, Valdir Fernandes; Colombini, Marjorie Paris; Mangueira, Cristóvão Luis Pitangueira; Katz, Marcelo; Faulhaber, Adriana Caschera Leme; Mendes, Claudio Ernesto Albers; Ferreira, Carlos Eduardo dos Santos; França, Carolina Nunes; Guerra, João Carlos de Campos

    2017-01-01

Rivaroxaban is an oral direct factor Xa inhibitor, therapeutically indicated in the treatment of thromboembolic diseases. As with other new oral anticoagulants, routine monitoring of rivaroxaban is not necessary, but it is important in some clinical circumstances. In our study, a high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method was validated to measure the plasma concentration of rivaroxaban. Our method used a simple sample preparation by protein precipitation and a fast chromatographic run. The method developed was precise and accurate, with a linear range from 2 to 500 ng/mL and a lower limit of quantification of 4 pg on column. The new method was compared to a reference method (anti-factor Xa activity), and the two presented a good correlation (r = 0.98, p < 0.001). In addition, we validated hemolytic, icteric and lipemic plasma samples for rivaroxaban measurement by HPLC-MS/MS without interference. The chromogenic and HPLC-MS/MS methods were highly correlated and can be used as clinical tools for drug monitoring. The method was applied successfully in a group of 49 real-life patients, allowing accurate determination of rivaroxaban at peak and trough levels. PMID:28170419

  16. A retrospective chart review to identify perinatal factors associated with food allergies

    PubMed Central

    2012-01-01

Background Gut flora are important immunomodulators that may be disrupted in individuals with atopic conditions. Probiotic bacteria have been suggested as therapeutic modalities to mitigate or prevent food allergic manifestations. We wished to investigate whether perinatal factors known to disrupt gut flora increase the risk of IgE-mediated food allergies. Methods Birth records obtained from 192 healthy children and 99 children diagnosed with food allergies were reviewed retrospectively. Data pertaining to delivery method, perinatal antibiotic exposure, neonatal nursery environment, and maternal variables were recorded. Logistic regression analysis was used to assess the association between variables of interest and subsequent food allergy diagnosis. Results Retrospective investigation did not find perinatal antibiotics, NICU admission, or cesarean section to be associated with increased risk of food allergy diagnosis. However, an association between food allergy diagnosis and male gender (66 vs. 33; p=0.02) was apparent in this cohort. Additionally, increasing maternal age at delivery was significantly associated with food allergy diagnosis during childhood (OR, 1.05; 95% CI, 1.017 to 1.105; p=0.005). Conclusions Gut flora are potent immunomodulators, but their overall contribution to immune maturation remains to be elucidated. The interplay among immunologic, genetic, and environmental factors underlying food allergy development needs to be clarified further before probiotic therapeutic interventions can routinely be recommended for prevention or mitigation of food allergies. Such interventions may be well suited to male infants and to infants born to older mothers. PMID:23078601

  17. Developing a robust methodology for assessing the value of weather/climate services

    NASA Astrophysics Data System (ADS)

    Krijnen, Justin; Golding, Nicola; Buontempo, Carlo

    2016-04-01

Increasingly, scientists involved in providing weather and climate services are expected to demonstrate the value of their work for end users in order to justify the costs of developing and delivering these services. This talk will outline different approaches that can be used to assess the socio-economic benefits of weather and climate services, including, among others, willingness to pay and avoided costs. The advantages and limitations of these methods will be discussed, and relevant case studies will be used to illustrate each approach. The choice of valuation method may be influenced by different factors, such as resource and time constraints and the end purposes of the study. In addition, there are important methodological differences which will affect the value assessed. For instance, the ultimate value of a weather/climate forecast to a decision-maker will depend not only on forecast accuracy but also on other factors, such as how the forecast is communicated to and consequently interpreted by the end-user. Thus, excluding these additional factors may result in inaccurate socio-economic value estimates. In order to reduce the inaccuracies in this valuation process, we propose an approach that assesses how the initial weather/climate forecast information can be incorporated within the value chain of a given sector, taking into account value gains and losses at each stage of the delivery process. By this we aim to depict more accurately the socio-economic benefits of a weather/climate forecast to decision-makers.

  18. Spatial Data Mining for Estimating Cover Management Factor of Universal Soil Loss Equation

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Lin, T. C.; Chiang, S. H.; Chen, W. W.

    2016-12-01

The Universal Soil Loss Equation (USLE) is a widely used mathematical model that describes long-term soil erosion processes. Among the six soil erosion risk factors of USLE, the cover-management factor (C-factor) is related to land-cover/land-use. The value of the C-factor ranges from 0.001 to 1, so it alone can cause a thousandfold difference in a soil erosion analysis using USLE. Traditional methods for estimating the USLE C-factor include in situ experiments, soil physical parameter models, USLE look-up tables with land use maps, and regression models between vegetation indices and C-factors. However, these methods are either difficult or too expensive to implement over large areas, and the values of C-factor obtained with them cannot be updated frequently. To address this issue, this research developed a spatial data mining approach to estimate the values of the C-factor from assorted spatial datasets for a multi-temporal (2004 to 2008) annual soil loss analysis of a reservoir watershed in northern Taiwan. The idea is to establish the relationship between the USLE C-factor and spatial data consisting of vegetation indices and texture features extracted from satellite images, soil and geology attributes, a digital elevation model, road and river distribution, etc. A decision tree classifier was used to rank influential conditional attributes in the preliminary data mining. Then, factor simplification and separation were considered to optimize the model, and the random forest classifier was used to analyze 9 simplified factor groups. Experimental results indicate that the overall accuracy of the data mining model is about 79% with a kappa value of 0.76. The estimated soil erosion amounts in 2004-2008 according to the data mining results are about 50.39-74.57 ton/ha-year after applying the sediment delivery ratio and correction coefficient. Compared with estimates calculated using C-factors from look-up tables, the soil erosion values estimated with C-factors generated from spatial data mining are in closer agreement with the values published by the watershed administration authority.
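The thousandfold leverage of the C-factor follows directly from the multiplicative form of USLE; a minimal sketch (the factor values below are arbitrary examples, not from the study):

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: A = R * K * LS * C * P.

    A  : annual soil loss (e.g. ton/ha-year)
    R  : rainfall erosivity; K: soil erodibility;
    LS : slope length/steepness; C: cover-management (0.001-1);
    P  : support practice factor.
    """
    return R * K * LS * C * P

# Varying C alone over its full range changes the estimate a thousandfold:
low = usle_soil_loss(300, 0.3, 1.2, 0.001, 1.0)
high = usle_soil_loss(300, 0.3, 1.2, 1.0, 1.0)
```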

  19. Supplier Selection Using Weighted Utility Additive Method

    NASA Astrophysics Data System (ADS)

    Karande, Prasad; Chakraborty, Shankar

    2015-10-01

Supplier selection is a multi-criteria decision-making (MCDM) problem which mainly involves evaluating a number of available suppliers according to a set of common criteria in order to choose the best one to meet the organizational needs. For any manufacturing or service organization, selecting the right upstream suppliers is a key success factor that will significantly reduce purchasing cost, increase downstream customer satisfaction and improve competitive ability. Past researchers have attempted to solve the supplier selection problem employing different MCDM techniques which involve active participation of the decision makers in the decision-making process. This paper deals with the application of the weighted utility additive (WUTA) method for solving supplier selection problems. The WUTA method, an extension of the utility additive approach, is based on ordinal regression and consists of building a piecewise linear additive decision model from a preference structure using linear programming (LP). It adopts the preference disaggregation principle and addresses decision-making activities through operational models which need implicit preferences in the form of a preorder of reference alternatives, or a subset of these alternatives, present in the process. The preferential preorder provided by the decision maker is used as a restriction of an LP problem whose objective function is the minimization of the sum of the errors associated with the ranking of each alternative. Based on a given reference ranking of alternatives, one or more additive utility functions are derived. Using these utility functions, the weighted utilities for individual criterion values are combined into an overall weighted utility for a given alternative. The WUTA method, having a sound mathematical background, can provide an accurate ranking of the candidate suppliers and choose the best one to fulfill the organizational requirements. Two real-life examples are presented to demonstrate its applicability and appropriateness in solving supplier selection problems.
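The final aggregation step described above, combining weighted piecewise-linear marginal utilities into an overall utility, can be sketched as follows. In WUTA the utility functions come out of the LP step, which is omitted here, so the breakpoints and weights below are purely illustrative:

```python
def piecewise_linear(breaks, utils):
    """Piecewise-linear marginal utility through (breakpoint, utility) pairs."""
    def u(x):
        if x <= breaks[0]:
            return utils[0]
        if x >= breaks[-1]:
            return utils[-1]
        for b0, u0, b1, u1 in zip(breaks, utils, breaks[1:], utils[1:]):
            if b0 <= x <= b1:
                # linear interpolation within the segment [b0, b1]
                return u0 + (u1 - u0) * (x - b0) / (b1 - b0)
    return u

def overall_weighted_utility(values, utility_fns, weights):
    """Combine weighted per-criterion utilities into one overall utility."""
    return sum(w * f(v) for w, f, v in zip(weights, utility_fns, values))
```

Suppliers would then be ranked by their overall weighted utility, best first.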

  20. A Longitudinal Study in Adults with Sequential Bilateral Cochlear Implants: Time Course for Individual Ear and Bilateral Performance

    ERIC Educational Resources Information Center

    Reeder, Ruth M.; Firszt, Jill B.; Holden, Laura K.; Strube, Michael J.

    2014-01-01

    Purpose: The purpose of this study was to examine the rate of progress in the 2nd implanted ear as it relates to the 1st implanted ear and to bilateral performance in adult sequential cochlear implant recipients. In addition, this study aimed to identify factors that contribute to patient outcomes. Method: The authors performed a prospective…

  1. The Development and Validation of an End-User Satisfaction Measure in a Student Laptop Environment

    ERIC Educational Resources Information Center

    Kim, Sung; Meng, Juan; Kalinowski, Jon; Shin, Dooyoung

    2014-01-01

    The purpose of this paper is to present the development and validation of a measurement model for student user satisfaction in a laptop environment. Using a "quasi Delphi" method in addition to contributions from prior research we used EFA and CFA (LISREL) to identify a five factor (14 item) measurement model that best fit the data. The…

  2. Modeling and Recovery of Iron (Fe) from Red Mud by Coal Reduction

    NASA Astrophysics Data System (ADS)

    Zhao, Xiancong; Li, Hongxu; Wang, Lei; Zhang, Lifeng

Recovery of Fe from red mud has been studied using statistically designed experiments. The effects of three factors, namely reduction temperature, reduction time, and proportion of additive, on the recovery of Fe have been investigated. Experiments were carried out using orthogonal central composite design and factorial design methods. A model has been obtained through variance analysis at a 92.5% confidence level.

  3. Predictive Factor Analysis of Response-Adapted Radiation Therapy for Chemotherapy-Sensitive Pediatric Hodgkin Lymphoma: Analysis of the Children's Oncology Group AHOD 0031 Trial

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charpentier, Anne-Marie; Friedman, Debra L.; Wolden, Suzanne

Purpose: To evaluate whether clinical risk factors could further distinguish children with intermediate-risk Hodgkin lymphoma (HL) with rapid early and complete anatomic response (RER/CR) who benefit significantly from involved-field RT (IFRT) from those who do not, and thereby aid refinement of treatment selection. Methods and Materials: Children with intermediate-risk HL treated on the Children's Oncology Group AHOD 0031 trial who achieved RER/CR with 4 cycles of chemotherapy, and who were randomized to 21-Gy IFRT or no additional therapy (n=716), were the subject of this study. Recursive partitioning analysis was used to identify factors associated with clinically and statistically significant improvement in event-free survival (EFS) after randomization to IFRT. Bootstrap sampling was used to evaluate the robustness of the findings. Results: Although most RER/CR patients did not benefit significantly from IFRT, those with a combination of anemia and bulky limited-stage disease (n=190) had significantly better 4-year EFS with the addition of IFRT (89.3% vs 77.9% without IFRT; P=.019); this benefit was consistently reproduced in bootstrap analyses and after adjusting for other prognostic factors. Conclusion: Although most patients achieving RER/CR had favorable outcomes with 4 cycles of chemotherapy alone, those children with initial bulky stage I/II disease and anemia had significantly better EFS with the addition of IFRT as part of combined-modality therapy. Further work evaluating the interaction of clinical and biologic factors and imaging response is needed to further optimize and refine treatment selection.

  4. Acute Diarrheal Syndromic Surveillance

    PubMed Central

    Kam, H.J.; Choi, S.; Cho, J.P.; Min, Y.G.; Park, R.W.

    2010-01-01

Objective In an effort to identify and characterize the environmental factors that affect the number of patients with acute diarrheal (AD) syndrome, we developed and tested two regional surveillance models including holiday and weather information in addition to visitor records, at emergency medical facilities in the Seoul metropolitan area of Korea. Methods With 1,328,686 emergency department visitor records from the National Emergency Department Information System (NEDIS) and the holiday and weather information, two seasonal ARIMA models were constructed: (1) a simple model (with total patient numbers only) and (2) an environmental factor-added model. The stationary R-squared was utilized as an in-sample goodness-of-fit statistic for the constructed models, and the cumulative mean of the Mean Absolute Percentage Error (MAPE) was used to measure post-sample forecast accuracy over the next 1 month. Results The ARIMA(1,0,1)(0,1,1)7 model (seasonal period of 7 days) resulted in an adequate fit for the daily number of AD patient visits over 12 months in both cases. Among the various features, the total number of patient visits was selected as a commonly influential independent variable. Additionally, for the environmental factor-added model, holidays and daily precipitation were selected as features that statistically significantly affected model fitting. Stationary R-squared values ranged from 0.651 to 0.828 (simple) and from 0.805 to 0.844 (environmental factor-added), with p<0.05. In terms of prediction, the MAPE values ranged within 0.090-0.120 and 0.089-0.114, respectively. Conclusion The environmental factor-added model yielded better MAPE values. Holiday and weather information appear to be crucial for the construction of an accurate syndromic surveillance model for AD, in addition to the visitor and assessment records. PMID:23616829
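The post-sample accuracy measure used above, MAPE, is straightforward to compute; a minimal sketch:

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error between observed daily counts and
    model forecasts (lower is better); actual values must be nonzero."""
    assert len(actual) == len(forecast)
    return sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)
```

The study's "cumulative mean" of MAPE would simply apply this over successively longer forecast horizons.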

  5. Integrating the Ergonomics Techniques with Multi Criteria Decision Making as a New Approach for Risk Management: An Assessment of Repetitive Tasks -Entropy Case Study.

    PubMed

    Khandan, Mohammad; Nili, Majid; Koohpaei, Alireza; Mosaferchi, Saeedeh

    2016-01-01

Nowadays, decision makers in occupational health need to analyze a huge amount of data and consider many conflicting evaluation criteria and sub-criteria. Therefore, an ergonomic evaluation of the work environment aimed at controlling occupational disorders can be treated as a Multi Criteria Decision Making (MCDM) problem. In this study, the ergonomic risk factors which may influence health were evaluated in a manufacturing company in 2014, and the entropy method was then applied to prioritize the different risk factors. This descriptive-analytical study covered 13 tasks performed by the 240 employees working in the seven halls of an opal manufacturing plant. Required information was gathered with a demographic questionnaire and the Assessment of Repetitive Tasks (ART) method for repetitive task assessment. In addition, entropy was used to prioritize the risk factors based on ergonomic control needs. The total exposure score calculated with the ART method was 30.07 ± 12.43. Data analysis illustrated that 179 cases (74.6% of tasks) were at a high level of risk and 13.8% were at a medium level of risk. The ART-entropy results revealed that, based on the weighted factors, the highest value belonged to the grip factor and the lowest values to neck and hand posture and to duration. Given limited financial resources, it seems that MCDM could be used successfully in many challenging situations, such as setting control procedures and priorities. Other MCDM methods are recommended for evaluating and prioritizing ergonomic problems.
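The entropy weighting step can be sketched as follows: each risk factor (column) receives a weight proportional to how much its scores vary across tasks (rows). This is the standard Shannon entropy weighting formulation; the abstract does not give the study's exact normalization, so this is an assumption:

```python
import math

def entropy_weights(matrix):
    """Shannon entropy weights for an m x n decision matrix
    (rows = tasks, columns = risk factors). A factor that discriminates
    more between tasks gets a larger weight."""
    m, n = len(matrix), len(matrix[0])
    divergence = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]                      # column share
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        divergence.append(1 - e)                          # degree of diversity
    s = sum(divergence)
    return [d / s for d in divergence]
```

A uniform column (every task scored the same) gets weight 0; all weights sum to 1.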

  6. A quantitative method for evaluating alternatives. [aid to decision making

    NASA Technical Reports Server (NTRS)

    Forthofer, M. J.

    1981-01-01

When faced with choosing between alternatives, people tend to use a number of criteria (often subjective, rather than objective) to decide which is the best alternative for them given their unique situation. The subjectivity inherent in the decision-making process can be reduced by the definition and use of a quantitative method for evaluating alternatives. This type of method can help decision makers achieve a degree of uniformity and completeness in the evaluation process, as well as an increased sensitivity to the factors involved. Additional side effects are better documentation and visibility of the rationale behind the resulting decisions. General guidelines for defining a quantitative method are presented, and a particular method (called 'hierarchical weighted average') is defined and applied to the evaluation of design alternatives for a hypothetical computer system capability.
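The flat core of such a scheme is easy to sketch; in the hierarchical variant, each group's score would itself be a weighted average of its sub-criteria. The criteria scores and weights below are hypothetical illustrations, not values from the report:

```python
def weighted_average(scores, weights):
    """Weighted average of criterion scores for one alternative."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def rank_alternatives(alternatives, weights):
    """Rank alternatives (name -> list of criterion scores) by their
    weighted score, best first."""
    scored = {name: weighted_average(s, weights)
              for name, s in alternatives.items()}
    return sorted(scored, key=scored.get, reverse=True)
```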

  7. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
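The core correction is simple once the reliability ratio (lambda) is known: the observed slope is divided by lambda. A sketch estimating lambda from duplicate measurements under a classical mean-zero measurement error model; this mirrors the standard approach, not necessarily the exact estimator in the authors' software:

```python
def reliability_ratio(pairs):
    """Estimate lambda = var(true) / var(observed) from duplicate
    measurements (x1, x2) of the risk factor in a reliability study.
    The error variance is estimated from within-pair differences."""
    n = len(pairs)
    xs = [x for pair in pairs for x in pair]
    mean = sum(xs) / len(xs)
    var_obs = sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)
    var_err = sum((a - b) ** 2 for a, b in pairs) / (2 * n)
    return max(0.0, (var_obs - var_err) / var_obs)

def corrected_slope(observed_slope, lam):
    """Regression dilution correction: beta_true ~ beta_observed / lambda."""
    return observed_slope / lam
```

With no measurement error, lambda is 1 and the slope is unchanged; the noisier the risk factor, the smaller lambda and the larger the correction.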

  8. Heart rate calculation from ensemble brain wave using wavelet and Teager-Kaiser energy operator.

    PubMed

    Srinivasan, Jayaraman; Adithya, V

    2015-01-01

Electroencephalogram (EEG) signals are contaminated by artifacts from various sources, such as the electro-oculogram (EOG), electromyogram (EMG), electrocardiogram (ECG), movement, and line interference. The relatively high electrical energy of cardiac activity causes ECG artifacts in the EEG, and the general approach in EEG signal processing is to remove the ECG signal. In this paper, we introduce an automated method to extract the ECG signal from the EEG using the wavelet transform and the Teager-Kaiser energy operator for R-peak enhancement and detection. From the detected R-peaks, the heart rate (HR) is calculated for clinical diagnosis. To check the efficiency of our method, we compared the HR calculated from an ECG signal recorded synchronously with the EEG. The proposed method yields a mean error of 1.4% for the heart rate and 1.7% for the mean R-R interval. The results illustrate that the proposed method can be used to extract the ECG from a single-channel EEG in clinical applications such as stress analysis, fatigue estimation, and sleep-stage classification in multi-modal systems. In addition, this method eliminates the need for an additional synchronous ECG when extracting the ECG from the EEG signal.
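The Teager-Kaiser energy operator referenced above has a simple discrete form, psi[n] = x[n]^2 - x[n-1]*x[n+1], which emphasizes sharp, high-energy deflections such as R-peaks. A sketch of the operator and the subsequent HR calculation from detected R-peaks (the wavelet stage and peak detection are omitted):

```python
def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator:
    psi[n] = x[n]^2 - x[n-1] * x[n+1] (output is 2 samples shorter)."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

def heart_rate_bpm(r_peak_samples, fs):
    """Mean heart rate (beats/min) from R-peak sample indices at
    sampling rate fs (Hz), via the mean R-R interval."""
    rr = [(b - a) / fs for a, b in zip(r_peak_samples, r_peak_samples[1:])]
    return 60.0 / (sum(rr) / len(rr))
```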

  9. Box-Counting Method of 2D Neuronal Image: Method Modification and Quantitative Analysis Demonstrated on Images from the Monkey and Human Brain.

    PubMed

    Rajković, Nemanja; Krstonošić, Bojana; Milošević, Nebojša

    2017-01-01

This study calls attention to the difference between the traditional box-counting method and its modification. The appropriate scaling factor, the influence of image size and resolution, and image rotation, as well as different image presentations, are shown on a sample of asymmetrical neurons from the monkey dentate nucleus. The standard box-counting method and its modification were then evaluated on a sample of 2D neuronal images from the human neostriatum. In addition, three box dimensions (which estimate the space-filling property, the shape, complexity, and irregularity of the dendritic tree) were used to evaluate differences in the morphology of type III aspiny neurons between two parts of the neostriatum.
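A generic box-counting estimator for a 2D point set, a sketch of the standard method rather than the authors' specific modification: occupied boxes are counted at several box sizes, and the dimension is the negated least-squares slope of log N(s) versus log s:

```python
import math

def box_count_dimension(points, sizes):
    """Box-counting dimension of a set of 2D (x, y) points."""
    logs, logn = [], []
    for s in sizes:
        # set of grid cells of side s that contain at least one point
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        logs.append(math.log(s))
        logn.append(math.log(len(boxes)))
    # least-squares slope of log N(s) vs log s; D is its negation
    n = len(sizes)
    mx, my = sum(logs) / n, sum(logn) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(logs, logn))
             / sum((a - mx) ** 2 for a in logs))
    return -slope
```

A uniformly filled square should yield a dimension near 2, while a dendritic outline falls between 1 and 2.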

  10. Comprehensive human transcription factor binding site map for combinatory binding motifs discovery.

    PubMed

    Müller-Molina, Arnoldo J; Schöler, Hans R; Araúzo-Bravo, Marcos J

    2012-01-01

Knowing the map between transcription factors (TFs) and their binding sites is essential to reverse engineering the regulation process. Only about 10%-20% of transcription factor binding motifs (TFBMs) have been reported, and this lack of data hinders our understanding of gene regulation. To address this drawback, we propose a computational method that exploits previously unused TF properties to discover the missing TFBMs and their sites in all human gene promoters. The method starts by predicting a dictionary of regulatory "DNA words." From this dictionary, it distills 4098 novel predictions. To disclose the crosstalk between motifs, an additional algorithm extracts TF combinatorial binding patterns, creating a collection of TF regulatory syntactic rules. Using these rules, we narrowed down a list of 504 novel motifs that appear frequently in syntax patterns. We tested the predictions against 509 known motifs, confirming that our system can reliably predict ab initio motifs with an accuracy of 81%, far higher than previous approaches. We found that, on average, 90% of the discovered combinatorial binding patterns target at least 10 genes, suggesting that supplementary regulatory mechanisms are required to control smaller gene sets independently. Additionally, we discovered that the new TFBMs and their combinatorial patterns convey biological meaning, targeting TFs and genes related to developmental functions. Thus, among all the possible targets available in the genome, TFs tend to regulate other TFs and genes involved in developmental functions. We provide a comprehensive resource for regulation analysis that includes a dictionary of "DNA words," newly predicted motifs, and their corresponding combinatorial patterns. Combinatorial patterns are a useful filter for discovering TFBMs that play a major role in orchestrating other factors and thus are likely to lock/unlock cellular functional clusters.

  11. Comprehensive Human Transcription Factor Binding Site Map for Combinatory Binding Motifs Discovery

    PubMed Central

    Müller-Molina, Arnoldo J.; Schöler, Hans R.; Araúzo-Bravo, Marcos J.

    2012-01-01

    To know the map between transcription factors (TFs) and their binding sites is essential to reverse engineer the regulation process. Only about 10%–20% of the transcription factor binding motifs (TFBMs) have been reported. This lack of data hinders understanding of gene regulation. To address this drawback, we propose a computational method that exploits previously unused TF properties to discover the missing TFBMs and their sites in all human gene promoters. The method starts by predicting a dictionary of regulatory “DNA words.” From this dictionary, it distills 4098 novel predictions. To disclose the crosstalk between motifs, an additional algorithm extracts TF combinatorial binding patterns, creating a collection of TF regulatory syntactic rules. Using these rules, we narrowed down a list of 504 novel motifs that appear frequently in syntax patterns. We tested the predictions against 509 known motifs, confirming that our system can reliably predict ab initio motifs with an accuracy of 81%, far higher than previous approaches. We found that on average, 90% of the discovered combinatorial binding patterns target at least 10 genes, suggesting that supplementary regulatory mechanisms are required to control smaller gene sets independently. Additionally, we discovered that the new TFBMs and their combinatorial patterns convey biological meaning, targeting TFs and genes related to developmental functions. Thus, among all the possible targets available in the genome, the TFs tend to regulate other TFs and genes involved in developmental functions. We provide a comprehensive resource for regulation analysis that includes a dictionary of “DNA words,” newly predicted motifs and their corresponding combinatorial patterns. Combinatorial patterns are a useful filter to discover TFBMs that play a major role in orchestrating other factors and thus are likely to lock/unlock cellular functional clusters. PMID:23209563
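    The "DNA words" dictionary step described in this record can be illustrated with a toy k-mer counter; the sequences, k, and recurrence threshold below are invented for illustration and are not the paper's actual algorithm:

```python
from collections import Counter

def kmer_dictionary(promoters, k=6, min_count=2):
    """Count k-mers across promoter sequences and keep the recurrent ones
    as a toy dictionary of candidate 'DNA words' (illustrative only)."""
    counts = Counter()
    for seq in promoters:
        s = seq.upper()
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    return {w: c for w, c in counts.items() if c >= min_count}

# hypothetical promoter fragments sharing one recurrent 7-mer
promoters = ["TTGACGTCAAG", "CCGACGTCATT", "GACGTCA"]
words = kmer_dictionary(promoters, k=7, min_count=2)  # {"GACGTCA": 3}
```

    A real pipeline would additionally score words against a background model before treating them as candidate motifs.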

  12. A Bayesian Framework to Account for Complex Non-Genetic Factors in Gene Expression Levels Greatly Increases Power in eQTL Studies

    PubMed Central

    Durbin, Richard; Winn, John

    2010-01-01

    Gene expression measurements are influenced by a wide range of factors, such as the state of the cell, experimental conditions and variants in the sequence of regulatory regions. To understand the effect of a variable of interest, such as the genotype of a locus, it is important to account for variation that is due to confounding causes. Here, we present VBQTL, a probabilistic approach for mapping expression quantitative trait loci (eQTLs) that jointly models contributions from genotype as well as known and hidden confounding factors. VBQTL is implemented within an efficient and flexible inference framework, making it fast and tractable on large-scale problems. We compare the performance of VBQTL with alternative methods for dealing with confounding variability on eQTL mapping datasets from simulations, yeast, mouse, and human. Employing Bayesian complexity control and joint modelling is shown to yield more precise estimates of the contributions of different confounding factors, resulting in additional associations to measured transcript levels compared to alternative approaches. We present a threefold larger collection of cis eQTLs than previously found in a whole-genome eQTL scan of an outbred human population. Altogether, 27% of the tested probes show a significant genetic association in cis, and we validate that the additional eQTLs are likely to be real by replicating them in different sets of individuals. Our method is the next step in the analysis of high-dimensional phenotype data, and its application has revealed insights into genetic regulation of gene expression by demonstrating more abundant cis-acting eQTLs in human than previously shown. Our software is freely available online at http://www.sanger.ac.uk/resources/software/peer/. PMID:20463871
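    The confounder-correction idea behind VBQTL can be illustrated in a much-simplified form: residualize expression on a single known covariate before association testing. The data and the single-covariate least-squares fit below are toys; the actual method is a joint Bayesian model of genotype plus known and hidden factors.

```python
def residualize(y, x):
    """Remove the least-squares fit of a single known confounder x from
    expression values y (a toy stand-in for joint confounder modelling)."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    return [yi - (my + beta * (xi - mx)) for xi, yi in zip(x, y)]

batch = [0.0, 0.0, 1.0, 1.0]      # known confounder (e.g. batch label)
expr = [1.0, 1.2, 2.1, 1.9]       # expression shifted upward by the batch
resid = residualize(expr, batch)  # batch effect removed, mean-zero residuals
```

    Genotype association would then be tested against `resid` rather than the raw expression values.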

  13. Preformulation considerations for controlled release dosage forms. Part III. Candidate form selection using numerical weighting and scoring.

    PubMed

    Chrzanowski, Frank

    2008-01-01

    Two numerical methods, Decision Analysis (DA) and Potential Problem Analysis (PPA), are presented as alternative selection methods to the logical method presented in Part I. In DA, properties are weighted and outcomes are scored. The weighted scores for each candidate are totaled and final selection is based on the totals. Higher scores indicate better candidates. In PPA, potential problems are assigned a seriousness factor and test outcomes are used to define the probability of occurrence. The seriousness-probability products are totaled and forms with minimal scores are preferred. DA and PPA have never been compared to the logical-elimination method. Additional data were available for two forms of McN-5707 to provide complete preformulation data for five candidate forms. Weight and seriousness factors (independent variables) were obtained from a survey of experienced formulators. Scores and probabilities (dependent variables) were provided independently by Preformulation. The rankings of the five candidate forms, best to worst, were similar for all three methods. These results validate the applicability of DA and PPA for candidate form selection. DA and PPA are particularly applicable in cases where there are many candidate forms and where each form has some degree of unfavorable properties.
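    The DA and PPA scoring rules described above reduce to simple weighted sums; a minimal sketch, with hypothetical weights, scores, and form names:

```python
def decision_analysis(weights, scores):
    """DA: total of weight x score per property for each form; higher is better."""
    return {form: sum(weights[p] * s[p] for p in weights)
            for form, s in scores.items()}

def potential_problem_analysis(seriousness, probabilities):
    """PPA: total of seriousness x probability per problem; lower is better."""
    return {form: sum(seriousness[p] * pr[p] for p in seriousness)
            for form, pr in probabilities.items()}

weights = {"stability": 5, "solubility": 3}
scores = {"form_A": {"stability": 8, "solubility": 6},
          "form_B": {"stability": 6, "solubility": 9}}
da = decision_analysis(weights, scores)             # form_A: 58, form_B: 57

seriousness = {"hygroscopicity": 4}
probabilities = {"form_A": {"hygroscopicity": 0.5},
                 "form_B": {"hygroscopicity": 0.2}}
ppa = potential_problem_analysis(seriousness, probabilities)
best_da = max(da, key=da.get)    # DA picks the highest total
best_ppa = min(ppa, key=ppa.get) # PPA picks the lowest total
```

    Note that with different weightings the two methods need not agree, which is why the paper compares their rankings against the logical-elimination method.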

  14. Tracing the conformational changes in BSA using FRET with environmentally-sensitive squaraine probes

    NASA Astrophysics Data System (ADS)

    Govor, Iryna V.; Tatarets, Anatoliy L.; Obukhova, Olena M.; Terpetschnig, Ewald A.; Gellerman, Gary; Patsenker, Leonid D.

    2016-06-01

    A new potential method of detecting the conformational changes in hydrophobic proteins such as bovine serum albumin (BSA) is introduced. The method is based on the change in the Förster resonance energy transfer (FRET) efficiency between protein-sensitive fluorescent probes. As compared to conventional FRET-based methods, in this new approach the donor and acceptor dyes are not covalently linked to protein molecules. Performance of the new method is demonstrated using the protein-sensitive squaraine probes Square-634 (donor) and Square-685 (acceptor) to detect the urea-induced conformational changes of BSA. The FRET efficiency between these probes can be considered a more sensitive parameter for tracing protein unfolding than the changes in fluorescence intensity of each of these probes. Addition of urea followed by BSA unfolding causes a noticeable decrease in the emission intensities of these probes (a factor of 5.6 for Square-634 and 3.0 for Square-685), while the FRET efficiency changes by a factor of up to 17. Compared to the conventional method, the new approach is therefore demonstrated to be a more sensitive way to detect the conformational changes in BSA.
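    The apparent FRET efficiency that such probe pairs report can be estimated from donor quenching via the standard relation E = 1 − F_DA/F_D; the intensities below are invented for illustration and are not the study's measurements:

```python
def fret_efficiency(f_donor_alone, f_donor_with_acceptor):
    """Apparent FRET efficiency from donor quenching: E = 1 - F_DA / F_D."""
    return 1.0 - f_donor_with_acceptor / f_donor_alone

E_folded = fret_efficiency(100.0, 40.0)    # strong transfer while folded
E_unfolded = fret_efficiency(100.0, 90.0)  # weak transfer after unfolding
```

    A large relative change in E between the folded and unfolded states is what makes the efficiency a more sensitive unfolding readout than either intensity alone.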

  15. Investigation of Chemical Exchange at Intermediate Exchange Rates using a Combination of Chemical Exchange Saturation Transfer (CEST) and Spin-Locking methods (CESTrho)

    PubMed Central

    Kogan, Feliks; Singh, Anup; Cai, Keija; Haris, Mohammad; Hariharan, Hari; Reddy, Ravinder

    2011-01-01

    Proton exchange imaging is important as it allows for visualization and quantification of the distribution of specific metabolites with conventional MRI. Current exchange mediated MRI methods suffer from poor contrast as well as confounding factors that influence exchange rates. In this study we developed a new method to measure proton exchange which combines chemical exchange saturation transfer (CEST) and T1ρ magnetization preparation methods (CESTrho). We demonstrated that this new CESTrho sequence can detect proton exchange in the slow to intermediate exchange regimes. It has a linear dependence on proton concentration which allows it to be used to quantitatively measure changes in metabolite concentration. Additionally, the magnetization scheme of this new method can be customized to make it insensitive to changes in exchange rate while retaining its dependency on solute concentration. Finally, we showed the feasibility of using CESTrho in vivo. This sequence is able to detect proton exchange at intermediate exchange rates and is unaffected by the confounding factors that influence proton exchange rates thus making it ideal for the measurement of metabolites with exchangeable protons in this exchange regime. PMID:22009759

  16. Investigation of chemical exchange at intermediate exchange rates using a combination of chemical exchange saturation transfer (CEST) and spin-locking methods (CESTrho).

    PubMed

    Kogan, Feliks; Singh, Anup; Cai, Keija; Haris, Mohammad; Hariharan, Hari; Reddy, Ravinder

    2012-07-01

    Proton exchange imaging is important as it allows for visualization and quantification of the distribution of specific metabolites with conventional MRI. Current exchange mediated MRI methods suffer from poor contrast as well as confounding factors that influence exchange rates. In this study we developed a new method to measure proton exchange which combines chemical exchange saturation transfer and T1ρ magnetization preparation methods (CESTrho). We demonstrated that this new CESTrho sequence can detect proton exchange in the slow to intermediate exchange regimes. It has a linear dependence on proton concentration which allows it to be used to quantitatively measure changes in metabolite concentration. Additionally, the magnetization scheme of this new method can be customized to make it insensitive to changes in exchange rate while retaining its dependency on solute concentration. Finally, we showed the feasibility of using CESTrho in vivo. This sequence is able to detect proton exchange at intermediate exchange rates and is unaffected by the confounding factors that influence proton exchange rates thus making it ideal for the measurement of metabolites with exchangeable protons in this exchange regime. Copyright © 2011 Wiley Periodicals, Inc.

  17. A method of self-pursued boundary value on a body and the Magnus effect calculated with this method

    NASA Astrophysics Data System (ADS)

    Yoshino, Fumio; Hayashi, Tatsuo; Waka, Ryoji

    1991-03-01

    A computational method, designated 'SPB', is proposed for the automatic determination of the stream function Phi on an arbitrarily profiled body without recourse to empirical factors. The method is applied to the case of a rotating, circular cross-section cylinder in a uniform shear flow, and the results obtained are compared with those of both the method in which the value of Phi is fixed on the body and the conventional empirical method; on this basis it is established that the SPB method is very efficient and applicable to both steady and unsteady flows. The SPB method, in addition to yielding the aerodynamic forces acting on a cylinder, shows that the Magnus effect lift force decreases as the velocity gradient of the shear flow increases while the cylinder's rotational speed is kept constant.

  18. Factors influencing unsafe behaviors and accidents on construction sites: a review.

    PubMed

    Khosravi, Yahya; Asilian-Mahabadi, Hassan; Hajizadeh, Ebrahim; Hassanzadeh-Rangi, Narmin; Bastani, Hamid; Behzadan, Amir H

    2014-01-01

    Construction is a hazardous occupation due to the unique nature of the activities involved and the repetitiveness of several field behaviors. The aim of this methodological and theoretical review is to explore the empirical factors influencing unsafe behaviors and accidents on construction sites. In this work, results and findings from 56 related previous studies were investigated. These studies were categorized based on their design, type, methods of data collection, analytical methods, variables, and key findings. A qualitative content analysis procedure was used to extract variables, themes, and factors. In addition, all studies were reviewed to determine the quality rating and to evaluate the strength of the evidence provided. The content analysis identified 8 main categories: (a) society, (b) organization, (c) project management, (d) supervision, (e) contractor, (f) site condition, (g) work group, and (h) individual characteristics. The review highlighted the importance of the more distal factors (e.g., society, organization, and project management), which may reduce the likelihood of unsafe behaviors and accidents by improving site conditions and individual characteristics (the proximal factors). Further research is necessary to provide a better understanding of the links between unsafe behavior theories and empirical findings, challenge theoretical assumptions, develop new applied theories, and make stronger recommendations.

  19. Constraining the 7Be(p,γ)8B S-factor with the new precise 7Be solar neutrino flux from Borexino

    NASA Astrophysics Data System (ADS)

    Takács, M. P.; Bemmerer, D.; Junghans, A. R.; Zuber, K.

    2018-02-01

    Among the solar fusion reactions, the rate of the 7Be(p,γ)8B reaction is one of the most difficult to determine. In a number of previous experiments, its astrophysical S-factor has been measured at E = 0.1-2.5 MeV centre-of-mass energy. However, no experimental data are available below 0.1 MeV. Thus, an extrapolation to solar energies is necessary, resulting in significant uncertainty for the extrapolated S-factor. On the other hand, the measured solar neutrino fluxes are now very precise. Therefore, the problem of the S-factor determination is turned around here: using the measured 7Be and 8B neutrino fluxes and the Standard Solar Model, the 7Be(p,γ)8B astrophysical S-factor is determined at the solar Gamow peak. In addition, the 3He(α,γ)7Be S-factor is redetermined with a similar method.
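    The relationship between the astrophysical S-factor and the cross-section, which makes the S-factor the natural quantity to extrapolate, can be sketched with the standard parameterization σ(E) = (S(E)/E)·exp(−2πη) and the common approximation 2πη ≈ 31.29·Z1·Z2·√(μ/E) for E in keV and reduced mass μ in amu. The S-factor value below is only an illustrative order of magnitude, not the paper's result:

```python
import math

def cross_section_barn(S_keV_barn, E_keV, Z1, Z2, mu_amu):
    """sigma(E) = (S(E)/E) * exp(-2*pi*eta); the Sommerfeld factor uses the
    common approximation 2*pi*eta ~= 31.29 * Z1 * Z2 * sqrt(mu / E[keV])."""
    two_pi_eta = 31.29 * Z1 * Z2 * math.sqrt(mu_amu / E_keV)
    return (S_keV_barn / E_keV) * math.exp(-two_pi_eta)

# 7Be(p,gamma)8B: Z1=4, Z2=1, reduced mass ~ 7/8 amu.
# S ~ 0.021 keV b (~21 eV b) is an illustrative low-energy value.
sigma = cross_section_barn(0.021, 500.0, 4, 1, 7.0 / 8.0)  # sub-microbarn
```

    The exponential Coulomb suppression is why direct measurements below 0.1 MeV are so difficult, and why the neutrino-flux route taken in this record is attractive.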

  20. Bioinformatics Identification of Modules of Transcription Factor Binding Sites in Alzheimer's Disease-Related Genes by In Silico Promoter Analysis and Microarrays

    PubMed Central

    Augustin, Regina; Lichtenthaler, Stefan F.; Greeff, Michael; Hansen, Jens; Wurst, Wolfgang; Trümbach, Dietrich

    2011-01-01

    The molecular mechanisms and genetic risk factors underlying Alzheimer's disease (AD) pathogenesis are only partly understood. To identify new factors, which may contribute to AD, different approaches are taken including proteomics, genetics, and functional genomics. Here, we used a bioinformatics approach and found that distinct AD-related genes share modules of transcription factor binding sites, suggesting a transcriptional coregulation. To detect additional coregulated genes, which may potentially contribute to AD, we established a new bioinformatics workflow with known multivariate methods like support vector machines, biclustering, and predicted transcription factor binding site modules by using in silico analysis and over 400 expression arrays from human and mouse. Two significant modules are composed of three transcription factor families: CTCF, SP1F, and EGRF/ZBPF, which are conserved between human and mouse APP promoter sequences. The specific combination of in silico promoter and multivariate analysis can identify regulation mechanisms of genes involved in multifactorial diseases. PMID:21559189

  1. Effects of Environmental Factors on Soluble Expression of a Humanized Anti-TNF-α scFv Antibody in Escherichia coli

    PubMed Central

    Sina, Mohammad; Farajzadeh, Davoud; Dastmalchi, Siavoush

    2015-01-01

    Purpose: The bacterial cultivation conditions for obtaining anti-TNF-α single chain variable fragment (scFv) antibody as a soluble product in E. coli were investigated. Methods: To avoid the production of inclusion bodies, the effects of lactose, IPTG, incubation time, temperature, shaking protocol, medium additives (Mg+2, sucrose), pH, and osmotic and heat shocks were examined. Samples from bacterial growth conditions with promising soluble expression of GST-hD2 scFv were affinity purified and quantified by SDS-PAGE and image processing for further evaluation. Results: The results showed that cultivation in LB medium under induction by low concentrations of lactose and incubation at 10 °C led to partial solubilization of the expressed anti-TNF-α scFv (GST-hD2). Other variables which showed a promising increase in soluble expression of GST-hD2 were osmotic shock and the addition of magnesium chloride. Furthermore, the addition of sucrose to the medium suppressed the expression of scFv completely. Another finding was that the addition of sorbitol decreased the growth rate of the bacteria. Conclusion: It can be concluded that a low cultivation temperature with a low amount of inducer and a long incubation time, or the addition of magnesium chloride, are the most effective of the environmental factors studied for obtaining maximum solubilization of the GST-hD2 recombinant protein. PMID:26819916

  2. Practical security and privacy attacks against biometric hashing using sparse recovery

    NASA Astrophysics Data System (ADS)

    Topcu, Berkay; Karabat, Cagatay; Azadmanesh, Matin; Erdogan, Hakan

    2016-12-01

    Biometric hashing is a cancelable biometric verification method that has received research interest recently. This method can be considered a two-factor authentication method which combines a personal password (or secret key) with a biometric to obtain a secure binary template used for authentication. We present novel practical security and privacy attacks against biometric hashing when the attacker is assumed to know the user's password, in order to quantify the additional protection due to biometrics when the password is compromised. We present four methods that can reconstruct a biometric feature and/or the image from a hash, and one method which can find the closest biometric data (i.e., face image) from a database. Two of the reconstruction methods are based on 1-bit compressed sensing signal reconstruction, for which the data acquisition scenario is very similar to biometric hashing. Previous literature introduced simple attack methods, but we show that a higher level of security threat can be achieved using compressed sensing recovery techniques. In addition, we present privacy attacks which reconstruct a biometric image that resembles the original image. We quantify the performance of the attacks using detection error tradeoff curves and equal error rates under advanced attack scenarios. We show that conventional biometric hashing methods suffer from serious security and privacy leaks under practical attacks, and we believe more advanced hash generation methods are necessary to avoid these attacks.
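    The two-factor construction attacked in this record can be sketched as a password-seeded random projection of the biometric feature vector followed by thresholding. This is a toy version of the general biohashing idea, not the specific scheme analyzed in the paper; the feature vector and password are hypothetical:

```python
import random

def biohash(features, password, n_bits=8):
    """Toy biometric hash: password-seeded random projections of the
    feature vector, each thresholded at 0 to give one template bit."""
    rng = random.Random(password)  # the password acts as the second factor
    bits = []
    for _ in range(n_bits):
        proj = [rng.uniform(-1.0, 1.0) for _ in features]
        dot = sum(p * f for p, f in zip(proj, features))
        bits.append(1 if dot >= 0 else 0)
    return bits

face = [0.2, -0.5, 0.9, 0.1]  # hypothetical biometric feature vector
h1 = biohash(face, "secret")
h2 = biohash(face, "secret")  # same biometric + password -> same template
```

    The attacks in the record exploit exactly this structure: once the password (and hence the projection matrix) is known, the binary template becomes a set of 1-bit linear measurements of the biometric, amenable to compressed-sensing recovery.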

  3. A deconvolution extraction method for 2D multi-object fibre spectroscopy based on the regularized least-squares QR-factorization algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Jian; Yin, Qian; Guo, Ping; Luo, A.-li

    2014-09-01

    This paper presents an efficient method for the extraction of astronomical spectra from two-dimensional (2D) multifibre spectrographs based on the regularized least-squares QR-factorization (LSQR) algorithm. We address two issues: we propose a modified Gaussian point spread function (PSF) for modelling the 2D PSF from multi-emission-line gas-discharge lamp images (arc images), and we develop an efficient deconvolution method to extract spectra in real circumstances. The proposed modified 2D Gaussian PSF model can fit various types of 2D PSFs, including different radial distortion angles and ellipticities. We adopt the regularized LSQR algorithm to solve the sparse linear equations constructed from the sparse convolution matrix, which we designate the deconvolution spectrum extraction method. Furthermore, we implement a parallelized LSQR algorithm based on graphics processing unit programming in the Compute Unified Device Architecture to accelerate the computational processing. Experimental results illustrate that the proposed extraction method can greatly reduce the computational cost and memory use of the deconvolution method and, consequently, increase its efficiency and practicability. In addition, the proposed extraction method has a stronger noise tolerance than other methods, such as the boxcar (aperture) extraction and profile extraction methods. Finally, we present an analysis of the sensitivity of the extraction results to the radius and full width at half-maximum of the 2D PSF.

  4. Experiment study on RC frame retrofitted by the external structure

    NASA Astrophysics Data System (ADS)

    Liu, Chunyang; Shi, Junji; Hiroshi, Kuramoto; Taguchi, Takashi; Kamiya, Takashi

    2016-09-01

    A new retrofitting method is proposed herein for reinforced concrete (RC) structures through the attachment of an external structure. The external structure consists of a fiber concrete encased steel frame, a connection slab and transverse beams, and is connected to the existing structure through the connection slab and transverse beams. Pseudo-static experiments were carried out on one unretrofitted specimen and three retrofitted frame specimens. The characteristics, including failure mode, crack pattern, hysteresis loop behavior, and the strain-displacement relationship of the concrete slab, are demonstrated. The results show that the load carrying capacity is noticeably increased, and that the extension length of the slab and the number of columns within the external frame are important factors influencing the working performance of the existing structure. In addition, the displacement difference between the existing structure and the outer structure was caused mainly by three factors: shear deformation of the slab, extraction of the transverse beams, and drift of the conjunction part between the slab and the existing frame. Furthermore, the total deformation determined by the first two factors accounted for approximately 80% of the damage; therefore, these factors should be carefully considered in engineering practice to enhance the effects of this new retrofitting method.

  5. Gene-Environment Interactions in Cardiovascular Disease

    PubMed Central

    Flowers, Elena; Froelicher, Erika Sivarajan; Aouizerat, Bradley E.

    2011-01-01

    Background Historically, models to describe disease were exclusively nature-based or nurture-based. Current theoretical models for complex conditions such as cardiovascular disease acknowledge the importance of both biologic and non-biologic contributors to disease. A critical feature is the occurrence of interactions between numerous risk factors for disease. The interaction between genetic (i.e., biologic, nature) and environmental (i.e., non-biologic, nurture) causes of disease is an important mechanism for understanding both the etiology and the public health impact of cardiovascular disease. Objectives The purpose of this paper is to describe the theoretical underpinnings of gene-environment interactions, models of interaction, methods for studying gene-environment interactions, and the related concept of interactions between epigenetic mechanisms and the environment. Discussion Advances in methods for measurement of genetic predictors of disease have enabled an increasingly comprehensive understanding of the causes of disease. In order to fully describe the effects of genetic predictors of disease, it is necessary to place genetic predictors within the context of known environmental risk factors. The additive or multiplicative effect of the interaction between genetic and environmental risk factors is often greater than the contribution of either risk factor alone. PMID:21684212
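    The additive and multiplicative interaction scales mentioned above can be made concrete with two standard measures: the relative excess risk due to interaction (RERI) and the multiplicative interaction ratio. The risk values below are hypothetical:

```python
def interaction_measures(r00, r10, r01, r11):
    """Gene-environment interaction on two scales, from absolute risks in the
    four genotype x exposure groups. RERI > 0 means the joint relative risk
    exceeds the additive expectation; ratio > 1 exceeds the multiplicative one."""
    rr10, rr01, rr11 = r10 / r00, r01 / r00, r11 / r00
    reri = rr11 - rr10 - rr01 + 1.0       # additive-scale excess
    mult_ratio = rr11 / (rr10 * rr01)     # multiplicative-scale ratio
    return reri, mult_ratio

# hypothetical risks: baseline, genotype only, exposure only, both
reri, mult = interaction_measures(r00=0.01, r10=0.02, r01=0.03, r11=0.09)
```

    Here the joint relative risk (9) exceeds both the additive expectation (2 + 3 − 1 = 4) and the multiplicative one (2 × 3 = 6), illustrating the paper's point that the interaction can exceed either factor's contribution alone.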

  6. Collaborative Learning in Higher Education: Evoking Positive Interdependence

    PubMed Central

    Scager, Karin; Boonstra, Johannes; Peeters, Ton; Vulperhorst, Jonne; Wiegant, Fred

    2016-01-01

    Collaborative learning is a widely used instructional method, but the learning potential of this instructional method is often underused in practice. Therefore, the importance of various factors underlying effective collaborative learning should be determined. In the current study, five different life sciences undergraduate courses with successful collaborative-learning results were selected. This study focuses on factors that increased the effectiveness of collaboration in these courses, according to the students. Nine focus group interviews were conducted and analyzed. Results show that factors evoking effective collaboration were student autonomy and self-regulatory behavior, combined with a challenging, open, and complex group task that required the students to create something new and original. The design factors of these courses fostered a sense of responsibility and of shared ownership of both the collaborative process and the end product of the group assignment. In addition, students reported the absence of any free riders in these group assignments. Interestingly, it was observed that students seemed to value their sense of achievement, their learning processes, and the products they were working on more than their grades. It is concluded that collaborative learning in higher education should be designed using challenging and relevant tasks that build shared ownership with students. PMID:27909019

  7. Effect of Ferrous Additives on Magnesia Stone Hydration

    NASA Astrophysics Data System (ADS)

    Zimich, V.

    2017-11-01

    The article deals with the modification of the magnesia binder with additives containing two- and three-valent iron cations, which can be embedded in the chloromagnesium stone structure and also increase the strength from 60 MPa for the additive-free stone to 80 MPa, raise the water resistance from 0.58 for the plain stone to 0.8, and reduce the hygroscopicity from 8% in the additive-free stone to 2% in the modified chloromagnesium stone. It is proposed to use iron hydroxide sol as an additive in quantities of up to 1% of the weight of the binder. The studies were carried out using modern analysis methods: differential thermal and X-ray phase analysis. The structure was studied with an electron microscope equipped with an X-ray microanalyzer. A two-factor design of experiments was used, which allowed mathematical models to be constructed characterizing the influence of variable factors, such as the density of the gauging solution and the amount of sol in the binder, on the basic properties of the magnesia stone. The result of the research was a magnesia stone with the claimed properties, formed from minerals characteristic of magnesian materials as well as additionally formed amachenite and goethite. It has been established that a highly active iron hydroxide sol, whose ion sizes are commensurate with those of magnesium ions, is actively incorporated into the structure of magnesium pentahydroxychloride and magnesium hydroxide, changing the habit of the crystals, compacting the structure of the stone and changing its hygroscopicity.

  8. Confirmatory factor analysis of the PTSD Checklist and the Clinician-Administered PTSD Scale in disaster workers exposed to the World Trade Center Ground Zero.

    PubMed

    Palmieri, Patrick A; Weathers, Frank W; Difede, JoAnn; King, Daniel W

    2007-05-01

    Although posttraumatic stress disorder (PTSD) factor analytic research has yielded little support for the DSM-IV 3-factor model of reexperiencing, avoidance, and hyperarousal symptoms, no clear consensus regarding alternative models has emerged. One possible explanation is differential instrumentation across studies. In the present study, the authors used confirmatory factor analysis to compare a self-report measure, the PTSD Checklist (PCL), and a structured clinical interview, the Clinician-Administered PTSD Scale (CAPS), in 2,960 utility workers exposed to the World Trade Center Ground Zero site. Although two 4-factor models fit adequately for each measure, the latent structure of the PCL was slightly better represented by correlated reexperiencing, avoidance, dysphoria, and hyperarousal factors, whereas that of the CAPS was slightly better represented by correlated reexperiencing, avoidance, emotional numbing, and hyperarousal factors. After accounting for method variance, the model specifying dysphoria as a distinct factor achieved slightly better fit. Patterns of correlations with external variables provided additional support for the dysphoria model. Implications regarding the underlying structure of PTSD are discussed.

  9. Spatial Characteristics and Driving Factors of Provincial Wastewater Discharge in China

    PubMed Central

    Chen, Kunlun; Liu, Xiaoqiong; Ding, Lei; Huang, Gengzhi; Li, Zhigang

    2016-01-01

    Based on the increasing pressure on the water environment, this study aims to clarify the overall status of wastewater discharge in China, including the spatio-temporal distribution characteristics of wastewater discharge and its driving factors, so as to provide a reference for developing “emission reduction” strategies in China and to discuss regional sustainable development and resource environment policies. We utilized the Exploratory Spatial Data Analysis (ESDA) method to analyze the characteristics of the spatio-temporal distribution of the total wastewater discharge among 31 provinces in China from 2002 to 2013. We then examined the driving factors affecting wastewater discharge through the Logarithmic Mean Divisia Index (LMDI) method and classified those driving factors. Results indicate that: (1) the total wastewater discharge steadily increased with social and economic development, at an average growth rate of 5.3% per year; domestic wastewater is the main source of the total discharge, exceeding industrial wastewater discharge. The ESDA method reveals many spatial differences in wastewater discharge among provinces: provinces with high wastewater discharge are mainly the developed coastal provinces, such as Jiangsu and Guangdong, while provinces (and their surrounding areas) with low wastewater discharge are mainly the undeveloped ones in Northwest China; (2) the dominant factors affecting wastewater discharge are the economy and technological advances; the secondary factor is the efficiency of resource utilization, whose effect is unstable; population plays a less important role in wastewater discharge. The dominant driving factors among the 31 provinces fall into three types: two-factor dominant, three-factor leading and four-factor antagonistic. In addition, proposals aimed at reducing wastewater discharge are provided on the basis of these three types. PMID:27941698
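    The LMDI decomposition used in this record can be sketched for the simplest two-factor identity, wastewater = population × per-capita discharge; in the additive form, the two effects sum exactly to the total change. The numbers below are illustrative, not the study's data:

```python
import math

def logmean(a, b):
    """Logarithmic mean, the LMDI weighting function."""
    return a if a == b else (a - b) / math.log(a / b)

def lmdi_two_factor(w0, wT, p0, pT):
    """Additive LMDI for W = P * (W/P): split the change in wastewater W
    into a population effect and a per-capita intensity effect."""
    L = logmean(wT, w0)
    pop_effect = L * math.log(pT / p0)
    intensity_effect = L * math.log((wT / pT) / (w0 / p0))
    return pop_effect, intensity_effect

# illustrative: discharge rises 100 -> 150 while population rises 50 -> 55
pop_eff, int_eff = lmdi_two_factor(w0=100.0, wT=150.0, p0=50.0, pT=55.0)
```

    The study's four-factor variant works the same way, with one logarithmic term per factor in the Kaya-style identity; the perfect-decomposition property (effects summing to the total change with no residual) is the reason LMDI is preferred.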

  10. Pre-test probability of obstructive coronary stenosis in patients undergoing coronary CT angiography: Comparative performance of the Modified Diamond-Forrester algorithm versus methods incorporating cardiovascular risk factors.

    PubMed

    Ferreira, António Miguel; Marques, Hugo; Tralhão, António; Santos, Miguel Borges; Santos, Ana Rita; Cardoso, Gonçalo; Dores, Hélder; Carvalho, Maria Salomé; Madeira, Sérgio; Machado, Francisco Pereira; Cardim, Nuno; de Araújo Gonçalves, Pedro

    2016-11-01

    Current guidelines recommend the use of the Modified Diamond-Forrester (MDF) method to assess the pre-test likelihood of obstructive coronary artery disease (CAD). We aimed to compare the performance of the MDF method with two contemporary algorithms derived from multicenter trials that additionally incorporate cardiovascular risk factors: the calculator-based 'CAD Consortium 2' method, and the integer-based CONFIRM score. We assessed 1069 consecutive patients without known CAD undergoing coronary CT angiography (CCTA) for stable chest pain. Obstructive CAD was defined as the presence of coronary stenosis ≥50% on 64-slice dual-source CT. The three methods were assessed for calibration, discrimination, net reclassification, and changes in proposed downstream testing based upon calculated pre-test likelihoods. The observed prevalence of obstructive CAD was 13.8% (n=147). Overestimations of the likelihood of obstructive CAD were 140.1%, 9.8%, and 18.8%, respectively, for the MDF, CAD Consortium 2 and CONFIRM methods. The CAD Consortium 2 showed greater discriminative power than the MDF method, with a C-statistic of 0.73 vs. 0.70 (p<0.001), while the CONFIRM score did not (C-statistic 0.71, p=0.492). Reclassification of pre-test likelihood using the 'CAD Consortium 2' or CONFIRM scores resulted in a net reclassification improvement of 0.19 and 0.18, respectively, which would change the diagnostic strategy in approximately half of the patients. Newer risk factor-encompassing models allow for a more precise estimation of pre-test probabilities of obstructive CAD than the guideline-recommended MDF method. Adoption of these scores may improve disease prediction and change the diagnostic pathway in a significant proportion of patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
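    The net reclassification improvement (NRI) reported above can be computed from counts of patients moved up or down a risk category by the new score, tallied separately in those with and without obstructive CAD. The counts below are hypothetical, not the study's data:

```python
def net_reclassification_improvement(events, nonevents):
    """NRI = (ups - downs)/n among events + (downs - ups)/n among nonevents,
    where up/down count moves between risk categories under the new score."""
    up_e, down_e, n_e = events
    up_n, down_n, n_n = nonevents
    return (up_e - down_e) / n_e + (down_n - up_n) / n_n

# hypothetical counts: (moved up, moved down, total patients)
nri = net_reclassification_improvement(events=(40, 10, 147),
                                       nonevents=(90, 180, 922))
```

    A positive NRI means the new score tends to raise estimated risk in patients who have the disease and lower it in those who do not, which is the sense in which the CAD Consortium 2 and CONFIRM scores improved on the MDF method.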

  11. Contribution of population growth to per capita income and sectoral output growth in Japan, 1880-1970.

    PubMed

    Yamaguchi, M; Kennedy, G

    1984-09-01

    The authors measured the positive and negative contributions of population and labor force growth to the growth of per capita income and sectoral output in Japan in the 1880-1970 period. A 2-sector growth accounting model that treats population and labor growth as separate variables was used. 3 alternative methods were used: the Residual method, the Verdoorn method, and the factor augmenting rate method. The total contribution of population cum labor growth to per capita income growth tended to be negative in the 1880-1930 period and positive in the 1930-40 and 1950-70 periods. Over the 1880-1970 period as a whole, population cum labor growth made a positive contribution to per capita income growth under the Residual method (0.35%/year), the factor augmenting rate method (0.29%/year), and the Verdoorn method (0.01%/year). In addition, population cum labor growth contributed positively to sectoral output growth. The average contribution to agricultural output growth ranged from 1.03%/year (Verdoorn) to 1.46%/year (factor augmenting rate), while the average contribution to nonagricultural output growth ranged from 1.22%/year (Verdoorn) to 1.60%/year (Residual). Although these results are dependent on the model used, the fact that all 3 methods yielded consistent results suggests that population cum labor growth did make a positive contribution to per capita income and sectoral output growth in Japan. These findings imply that in economies where the rate of technical change in agricultural and nonagricultural sectors exceeds population growth, policies that reduce agricultural elasticities may be preferable; on the other hand, policies that reduce agricultural elasticities are to be avoided in economies with low rates of technical change. Moreover, in the early stages of economic development, policies that increase agricultural income and price elasticities should be considered.
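    The Residual method referred to above is a growth-accounting decomposition: output growth is split into the contributions of factor inputs, and the unexplained remainder is attributed to technical change. The paper's 2-sector model is more elaborate; the following is only the one-sector identity it builds on, with hypothetical shares and growth rates:

```python
def residual_method(g_output, g_capital, g_labor, capital_share):
    """Growth not explained by factor accumulation (the 'residual'),
    assuming output elasticities equal to factor income shares."""
    labor_share = 1.0 - capital_share
    return g_output - capital_share * g_capital - labor_share * g_labor
```

    With 5% output growth, 4% capital growth, 1% labor growth and a capital share of 0.3, the residual is 3.1%/year; the labor term (1 - capital_share) * g_labor is the kind of population cum labor contribution the study isolates.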

  12. Oxygen tension level and human viral infections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morinet, Frédéric, E-mail: frederic.morinet@sls.aphp.fr; Université Denis Diderot, Sorbonne Paris Cité, Paris; Casetti, Luana

    2013-09-15

    The role of oxygen tension level is a well-known phenomenon that has been studied in oncology and radiotherapy for about 60 years. Oxygen tension may inhibit or stimulate propagation of viruses in vitro as well as in vivo. In turn, modulating oxygen metabolism may constitute a novel approach to treat viral infections as an adjuvant therapy. The major transcription factor that responds to oxygen tension level is hypoxia-inducible factor-1 alpha (HIF-1α). Down-regulating the expression of HIF-1α is a possible method in the treatment of chronic viral infections such as human immunodeficiency virus infection, chronic hepatitis B and C viral infections and Kaposi sarcoma, in addition to classic chemotherapy. The aim of this review is to provide an update on the influence of oxygen tension level in human viral infections and to evoke possible new therapeutic strategies regarding this environmental condition. - Highlights: • Oxygen tension level regulates viral replication in vitro and possibly in vivo. • Hypoxia-inducible factor 1 (HIF-1α) is the principal factor involved in the response to oxygen tension level. • HIF-1α upregulates gene expression, for example, of HIV, JC and Kaposi sarcoma viruses. • In addition to classical chemotherapy, inhibition of HIF-1α may constitute a new track to treat human viral infections.

  13. Epidemiology and disease control in everyday beef practice.

    PubMed

    Larson, R L

    2008-08-01

    It is important for food animal veterinarians to understand the interaction among animals, pathogens, and the environment in order to implement herd-specific biosecurity plans. Animal factors such as the number of immunologically protected individuals influence the number of individuals that a potential pathogen is able to infect, as well as the speed of spread through a population. Pathogens differ in their virulence and contagiousness. In addition, pathogens have various methods of transmission that impact how they interact with a host population. A cattle population's environment includes its housing type, animal density, air quality, and exposure to mud or dust and other health antagonists such as parasites and stress; these environmental factors influence the innate immunity of a herd by their impact on immunosuppression. In addition, a herd's environment also dictates the "animal flow" or contact and mixing patterns of potentially infectious and susceptible animals. Biosecurity is the attempt to keep infectious agents away from a herd, state, or country, and to control the spread of infectious agents within a herd. Infectious agents (bacteria, viruses, or parasites) alone are seldom able to cause disease in cattle without contributing factors from other infectious agents and/or the cattle's environment. Therefore, to develop biosecurity plans for infectious disease in cattle, veterinarians must consider the pathogen, as well as environmental and animal factors.

  14. Opportunities to improve the conversion of food waste to lactate: Fine-tuning secondary factors.

    PubMed

    RedCorn, Raymond; Engelberth, Abigail S

    2017-11-01

    Extensive research has demonstrated the potential for bioconversion of food waste to lactate, with major emphasis on adjusting temperature, pH, and loading rate of the fermentation. Each of these factors has a significant effect on lactate production; however, additional secondary factors have received little attention. Here we investigate three additional factors where opportunities exist for process improvement: freezing of samples during storage, discontinuous pH control, and holdover of fermentation broth between fermentations. Freezing samples prior to fermentation was shown to reduce the production rate of lactate by 8%, indicating freeze-thaw should be avoided in experiments. Prior work indicated a trade-off in pH control strategies, where discontinuous pH control correlated with higher lactate accumulation while continuous pH control correlated with higher production rate. Here we demonstrate that continuous pH control can achieve both higher lactate accumulation and higher production rate. Finally, holding over fermentation broth was shown to be a simple method to improve production rate (by 18%) at high food waste loading rates (>140 g volatile solids L⁻¹) but resulted in lower lactate accumulation (by 17%). The results inform continued process improvements within the waste treatment of food waste through fermentation to lactic acid.

  15. Bandwidth and Fidelity on the NEO-Five Factor Inventory: Replicability and Reliability of Saucier’s (1998) Item Cluster Subcomponents

    PubMed Central

    Chapman, Benjamin P.

    2012-01-01

    Many users of the NEO-Five Factor Inventory (NEO-FFI; Costa & McCrae, 1992) are unaware that Saucier (1998) developed item cluster subcomponents for each broad domain of the instrument similar to the facets of the Revised NEO Personality Inventory (Costa & McCrae, 1992). In this study, I examined the following: the replicability of the subcomponents in young adult university and middle-aged community samples; whether item keying accounted for additional covariance among items; subcomponent correlations with a measure of socially desirable responding; subcomponent reliabilities; and subcomponent discriminant validity with respect to age-relevant criterion items expected to reflect varying associations with broad and narrow traits. Confirmatory factor analyses revealed that all subcomponents were recoverable across samples and that the addition of method factors representing positive and negative item keying improved model fit. The subcomponents correlated no more with a measure of socially desirable responding than their parent domains and showed good average reliability. Correlations with criterion items suggested that subcomponents may prove useful in specifying which elements of NEO-FFI domains are more or less related to variables of interest. I discuss their use for enhancing the precision of findings obtained with NEO-FFI domain scores. PMID:17437386

  16. Modeling per capita state health expenditure variation: state-level characteristics matter.

    PubMed

    Cuckler, Gigi; Sisko, Andrea

    2013-01-01

    In this paper, we describe the methods underlying the econometric model developed by the Office of the Actuary in the Centers for Medicare & Medicaid Services, to explain differences in per capita total personal health care spending by state, as described in Cuckler et al. (2011). Additionally, we discuss many alternative model specifications to provide additional insights for valid interpretation of the model. We study per capita personal health care spending as measured by the State Health Expenditures, by State of Residence for 1991-2009, produced by the Centers for Medicare & Medicaid Services' Office of the Actuary. State-level demographic, health status, economic, and health economy characteristics were gathered from a variety of U.S. government sources, such as the Census Bureau, Bureau of Economic Analysis, the Centers for Disease Control, the American Hospital Association, and HealthLeaders-InterStudy. State-specific factors, such as income, health care capacity, and the share of elderly residents, are important factors in explaining the level of per capita personal health care spending variation among states over time. However, the slow-moving nature of health spending per capita and close relationships among state-level factors create inefficiencies in modeling this variation, likely resulting in incorrectly estimated standard errors. In addition, we find that both pooled and fixed effects models primarily capture cross-sectional variation rather than period-specific variation.

  17. Patterns and biases in climate change research on amphibians and reptiles: a systematic review

    PubMed Central

    2016-01-01

    Climate change probably has severe impacts on animal populations, but demonstrating a causal link can be difficult because of potential influences by additional factors. Assessing global impacts of climate change effects may also be hampered by narrow taxonomic and geographical research foci. We review studies on the effects of climate change on populations of amphibians and reptiles to assess climate change effects and potential biases associated with the body of work that has been conducted within the last decade. We use data from 104 studies regarding the effect of climate on 313 species, from 464 species–study combinations. Climate change effects were reported in 65% of studies. Climate change was identified as causing population declines or range restrictions in half of the cases. The probability of identifying an effect of climate change varied among regions, taxa and research methods. Climatic effects were equally prevalent in studies exclusively investigating climate factors (more than 50% of studies) and in studies including additional factors, thus bolstering confidence in the results of studies exclusively examining effects of climate change. Our analyses reveal biases with respect to geography, taxonomy and research question, making global conclusions impossible. Additional research should focus on under-represented regions, taxa and questions. Conservation and climate policy should consider the documented harm climate change causes reptiles and amphibians. PMID:27703684

  18. Modeling Per Capita State Health Expenditure Variation: State-Level Characteristics Matter

    PubMed Central

    Cuckler, Gigi; Sisko, Andrea

    2013-01-01

    Objective In this paper, we describe the methods underlying the econometric model developed by the Office of the Actuary in the Centers for Medicare & Medicaid Services, to explain differences in per capita total personal health care spending by state, as described in Cuckler et al. (2011). Additionally, we discuss many alternative model specifications to provide additional insights for valid interpretation of the model. Data Source We study per capita personal health care spending as measured by the State Health Expenditures, by State of Residence for 1991–2009, produced by the Centers for Medicare & Medicaid Services’ Office of the Actuary. State-level demographic, health status, economic, and health economy characteristics were gathered from a variety of U.S. government sources, such as the Census Bureau, Bureau of Economic Analysis, the Centers for Disease Control, the American Hospital Association, and HealthLeaders-InterStudy. Principal Findings State-specific factors, such as income, health care capacity, and the share of elderly residents, are important factors in explaining the level of per capita personal health care spending variation among states over time. However, the slow-moving nature of health spending per capita and close relationships among state-level factors create inefficiencies in modeling this variation, likely resulting in incorrectly estimated standard errors. In addition, we find that both pooled and fixed effects models primarily capture cross-sectional variation rather than period-specific variation. PMID:24834363

  19. Contextual Factors Related to Implementation and Reach of a Pragmatic Multisite Trial – The My Own Health Report (MOHR) Study

    PubMed Central

    Balasubramanian, Bijal A.; Heurtin-Roberts, Suzanne; Krasny, Sarah; Rohweder, Catherine; Fair, Kayla; Olmos, Tanya; Stange, Kurt C.; Gorin, Sherri Sheinfeld

    2018-01-01

    Background Contextual factors relevant to health care improvement studies are important for translating findings to other settings; however, these are rarely collected systematically and reported. This study articulates a prospective method for assessing contextual factors and describes factors related to implementation and patient reach of a pragmatic multisite trial conducted in nine primary care practices. Methods In a qualitative case-series, contextual factors were assessed from the My Own Health Report (MOHR) study, focused on systematically conducting health risk assessments and goal setting for unhealthy behaviors and behavioral health in primary care. Data were collected prospectively at baseline, mid-point, and end of intervention using a template that guided conduct of interviews and observations at practice sites. A multidisciplinary team used an iterative process to summarize themes describing contextual factors related to intervention implementation and patient reach, calculated by dividing the number of patients who completed the MOHR assessment by the number of patients offered MOHR. Results Contextual factors operational both within and external to the practice environment influenced implementation and patient reach over time. These included practice members’ motivations towards the MOHR intervention, practice staff capacity to take on additional responsibilities for implementation, practice information system capacity, external resources to support quality improvement, linkages with community resources, and fit of implementation strategy to patient populations. Conclusions Systematic assessment of contextual factors throughout implementation of quality improvement initiatives is needed to meaningfully interpret findings and translate lessons learned to other health care settings. Thus, knowledge of contextual factors is essential for scaling up of effective improvement strategies. PMID:28484066

  20. A Real-Time Robust Method to Detect BeiDou GEO/IGSO Orbital Maneuvers

    PubMed Central

    Huang, Guanwen; Qin, Zhiwei; Zhang, Qin; Wang, Le; Yan, Xingyuan; Fan, Lihong; Wang, Xiaolei

    2017-01-01

    The frequent maneuvering of BeiDou Geostationary Orbit (GEO) and Inclined Geosynchronous Orbit (IGSO) satellites affects the availability of real-time orbits and decreases the accuracy and performance of positioning, navigation and timing (PNT) services. BeiDou satellite maneuver information cannot be obtained by common users. The broadcast ephemeris, updated only hourly, is the sole indicator of satellite health status, which can easily render observations ineffective; identification errors of satellite abnormity also sometimes appear in the broadcast ephemeris. This study presents a real-time, robust detection method for satellite orbital maneuvers with high frequency and high reliability. Using the broadcast ephemeris and pseudo-range observations, a time discrimination factor and a satellite identification factor were defined and used to detect, in real time, the start time and the pseudo-random noise code (PRN) of satellites undergoing orbital maneuvers. Data from a Multi-GNSS Experiment (MGEX) were collected and analyzed. The results show that the start time and the PRN of a maneuvering satellite could be detected accurately in real time. In addition, abnormal start times and satellite abnormities caused by non-maneuver factors could also be detected using the proposed method. The new method not only improves the utilization of observations for users, with the data effective for about 92 min, but also promotes the reliability of real-time PNT services. PMID:29186058
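    The study's time discrimination factor is defined from broadcast ephemeris and pseudo-range data whose formulas are not reproduced here, but the underlying idea — flagging the epoch at which a residual series jumps out of its noise band — can be sketched with a generic robust (MAD-based) threshold. The residual values and the threshold k below are purely illustrative:

```python
import statistics

def detect_jump(residuals, k=6.0):
    """Return the index of the first epoch whose residual deviates from the
    series median by more than k robust standard deviations (MAD-based),
    or None if no jump is found."""
    med = statistics.median(residuals)
    mad = statistics.median(abs(r - med) for r in residuals) or 1e-9
    sigma = 1.4826 * mad  # scales MAD to a std-dev equivalent for Gaussian noise
    for i, r in enumerate(residuals):
        if abs(r - med) > k * sigma:
            return i
    return None
```

    Using the median and MAD rather than mean and standard deviation keeps the noise estimate from being inflated by the maneuver itself.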

  1. Prediction of soil organic carbon partition coefficients by soil column liquid chromatography.

    PubMed

    Guo, Rongbo; Liang, Xinmiao; Chen, Jiping; Wu, Wenzhong; Zhang, Qing; Martens, Dieter; Kettrup, Antonius

    2004-04-30

    To avoid the limitations of the widely used methods for predicting soil organic carbon partition coefficients (KOC) from hydrophobic parameters, e.g., the n-octanol/water partition coefficients (KOW) and reversed-phase high performance liquid chromatographic (RP-HPLC) retention factors, the soil column liquid chromatographic (SCLC) method was developed for KOC prediction. Real soils were used as the packing materials of RP-HPLC columns, and the correlations between the retention factors of organic compounds on soil columns (ksoil) and KOC measured by the batch equilibrium method were studied. Good correlations were achieved between ksoil and KOC for three types of soils with different properties. All squared correlation coefficients (R²) of the linear regressions between log ksoil and log KOC were higher than 0.89, with standard deviations of less than 0.21. In addition, the prediction of KOC from KOW and from the RP-HPLC retention factors on a cyanopropyl (CN) stationary phase (kCN) was comparatively evaluated for the three types of soils. The results show that the prediction of KOC from kCN and KOW is only applicable to some specific types of soils. The results obtained in the present study proved that the SCLC method is appropriate for KOC prediction for different types of soils; however, the applicability of using hydrophobic parameters to predict KOC largely depends on the properties of the soil concerned.
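    The linear regression between log ksoil and log KOC that underlies the reported correlations is an ordinary least-squares fit in log space. A sketch follows; the retention factors and KOC values are made up to illustrate the calculation and are not the paper's measured soils:

```python
import math

def fit_loglog(ksoil, koc):
    """Least-squares fit of log10(KOC) = a + b * log10(ksoil);
    returns the intercept a and slope b."""
    xs = [math.log10(k) for k in ksoil]
    ys = [math.log10(k) for k in koc]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def predict_koc(a, b, ksoil_new):
    """Back-transform the log-space prediction to a KOC value."""
    return 10 ** (a + b * math.log10(ksoil_new))
```

    Once a and b are calibrated for a soil column, predict_koc estimates KOC for a new compound directly from its measured retention factor.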

  2. Adaptation Method for Overall and Local Performances of Gas Turbine Engine Model

    NASA Astrophysics Data System (ADS)

    Kim, Sangjo; Kim, Kuisoon; Son, Changmin

    2018-04-01

    An adaptation method was proposed to improve the modeling accuracy of the overall and local performance of a gas turbine engine. The adaptation method was divided into two steps. First, the overall performance parameters such as engine thrust, thermal efficiency, and pressure ratio were adapted by calibrating compressor maps, and second, the local performance parameters such as the temperature at component intersections and shaft speed were adjusted by additional adaptation factors. An optimization technique was used to find the correlation equation of adaptation factors for the compressor performance maps. The multi-island genetic algorithm (MIGA) was employed in the present optimization. The correlations of the local adaptation factors were generated based on the difference between the first adapted engine model and performance test data. The proposed adaptation method was applied to a low-bypass-ratio turbofan engine of 12,000 lb thrust. The gas turbine engine model was generated and validated based on performance test data in the sea-level static condition. In the flight condition at 20,000 ft and Mach 0.9, the adapted engine model showed improved prediction of engine thrust (an overall performance parameter), reducing the difference from 14.5 to 3.3%. Moreover, there was further improvement in the low-pressure turbine exit temperature (a local performance parameter), as the difference was reduced from 3.2 to 0.4%.

  3. A Real-Time Robust Method to Detect BeiDou GEO/IGSO Orbital Maneuvers.

    PubMed

    Huang, Guanwen; Qin, Zhiwei; Zhang, Qin; Wang, Le; Yan, Xingyuan; Fan, Lihong; Wang, Xiaolei

    2017-11-29

    The frequent maneuvering of BeiDou Geostationary Orbit (GEO) and Inclined Geosynchronous Orbit (IGSO) satellites affects the availability of real-time orbits and decreases the accuracy and performance of positioning, navigation and timing (PNT) services. BeiDou satellite maneuver information cannot be obtained by common users. The broadcast ephemeris, updated only hourly, is the sole indicator of satellite health status, which can easily render observations ineffective; identification errors of satellite abnormity also sometimes appear in the broadcast ephemeris. This study presents a real-time, robust detection method for satellite orbital maneuvers with high frequency and high reliability. Using the broadcast ephemeris and pseudo-range observations, a time discrimination factor and a satellite identification factor were defined and used to detect, in real time, the start time and the pseudo-random noise code (PRN) of satellites undergoing orbital maneuvers. Data from a Multi-GNSS Experiment (MGEX) were collected and analyzed. The results show that the start time and the PRN of a maneuvering satellite could be detected accurately in real time. In addition, abnormal start times and satellite abnormities caused by non-maneuver factors could also be detected using the proposed method. The new method not only improves the utilization of observations for users, with the data effective for about 92 min, but also promotes the reliability of real-time PNT services.

  4. An Analysis of the Impact of Valve Closure Time on the Course of Water Hammer

    NASA Astrophysics Data System (ADS)

    Kodura, Apoloniusz

    2016-06-01

    The knowledge of transient flow in pressure pipelines is very important for the design and description of pressure networks. The water hammer is the most common example of transient flow in pressure pipelines. During this phenomenon, the transformation of kinetic energy into pressure energy causes significant changes in pressure, which can lead to serious problems in the management of pressure networks. The phenomenon is very complex, and a large number of different factors influence its course. In the case of a water hammer caused by valve closing, the characteristic of gate closure is one of the most important factors. However, this factor is rarely investigated. In this paper, the results of physical experiments with water hammer in steel and PE pipelines are described and analyzed. For each water hammer, characteristics of pressure change and valve closing were recorded. The measurements were compared with the results of calculations performed by common methods used by engineers - Michaud's equation and Wood and Jones's method. The comparison revealed very significant differences between the results of calculations and the results of experiments. In addition, it was shown that the characteristic of butterfly valve closure has a significant influence on water hammer, which should be taken into account in analyzing this phenomenon. Comparison of the results of experiments with the results of calculations may lead to new, improved calculation methods and to new methods to describe transient flow.
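    The hand methods compared above start from two classical closed-form estimates: Joukowsky's formula for direct water hammer (closure faster than the pressure wave's round trip 2L/a) and Michaud's formula for slower closure. Wood and Jones's graphical method is not reproduced here, and the pipe parameters below are illustrative only:

```python
G = 9.81  # gravitational acceleration, m/s^2

def surge_head(L, v, a, T):
    """Pressure-surge head (m of water column) for a valve closing in time T.
    L: pipe length [m], v: initial flow velocity [m/s], a: wave speed [m/s].
    Uses Joukowsky for fast closure (T <= 2L/a), Michaud's formula otherwise."""
    phase = 2 * L / a  # pipeline period: the wave's round-trip travel time
    if T <= phase:
        return a * v / G          # Joukowsky: direct water hammer
    return 2 * L * v / (G * T)    # Michaud: indirect (slow) closure
```

    The discontinuity at T = 2L/a is why the closure characteristic matters so much: lengthening the effective closure time beyond the pipeline period sharply reduces the predicted surge, whereas the experiments above show the shape of the butterfly-valve closure curve also plays a role these simple formulas ignore.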

  5. Prevalent Inhibitors in Hemophilia B Subjects Enrolled in the Universal Data Collection Database

    PubMed Central

    Puetz, John; Soucie, J. Michael; Kempton, Christine L.; Monahan, Paul E.

    2015-01-01

    Summary Background Several risk factors for inhibitors have recently been described for hemophilia A. It has been assumed that similar risk factors are also relevant for hemophilia B, but there are limited data to confirm this notion. Objectives To determine the prevalence of and risk factors associated with inhibitors in hemophilia B. Methods The database of the Universal Data Collection (UDC) project of the Centers for Disease Control for the years 1998-2011 was queried to determine the prevalence of inhibitors in hemophilia B subjects. In addition, disease severity, race/ethnicity, age, factor exposure, and prophylaxis usage were evaluated to determine their impact on inhibitor prevalence. Results Of the 3800 male subjects with hemophilia B enrolled in the UDC database, 75 (2%) were determined to have an inhibitor at some point during the study period. Severe disease (OR 13.1, 95% CI 6.2-27.7), black race (OR 2.2, 95% CI 1.2-4.1), and age less than 11 (OR 2.5, 95% CI 1.5-4.0) were found to be significantly associated with having an inhibitor. There were insufficient data to determine whether type of factor used and prophylaxis were associated with inhibitors. Conclusions Inhibitors in hemophilia B are much less prevalent than in hemophilia A, especially in patients with mild disease. Similar factors associated with inhibitors in hemophilia A also seem to be present for hemophilia B. The information collected by this large surveillance project did not permit evaluation of potential risk factors related to treatment approaches and exposures, and additional studies will be required. PMID:23855900

  6. Gravity gradient preprocessing at the GOCE HPF

    NASA Astrophysics Data System (ADS)

    Bouman, J.; Rispens, S.; Gruber, T.; Schrama, E.; Visser, P.; Tscherning, C. C.; Veicherts, M.

    2009-04-01

    One of the products derived from the GOCE observations are the gravity gradients. These gravity gradients are provided in the Gradiometer Reference Frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. In order to use these gravity gradients for application in Earth sciences and gravity field analysis, additional pre-processing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information and error assessment. The temporal gravity gradient corrections consist of tidal and non-tidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and it is robust, as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low degree gravity field model as well as gravity gradient scale factors. Both methods allow gravity gradient scale factors to be estimated down to the 10⁻³ level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10⁻² level with this method.
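    The outlier screening described above — a robust local-median high-pass filter followed by a median-absolute-deviation error assessment — can be sketched as follows. The window length and threshold k are illustrative choices, not the mission's operational settings:

```python
import statistics

def screen_outliers(series, window=11, k=5.0):
    """Subtract a running local median (a robust high-pass filter that
    suppresses the 1/f error), then flag samples whose residual exceeds
    k times the median absolute deviation of the residuals."""
    half = window // 2
    resid = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        resid.append(series[i] - statistics.median(series[lo:hi]))
    mad = statistics.median(abs(r) for r in resid) or 1e-12
    return [i for i, r in enumerate(resid) if abs(r) > k * mad]
```

    Because both the local median and the MAD are insensitive to the outliers themselves, a single spike does not distort the noise estimate used to detect it.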

  7. Preprocessing of gravity gradients at the GOCE high-level processing facility

    NASA Astrophysics Data System (ADS)

    Bouman, Johannes; Rispens, Sietse; Gruber, Thomas; Koop, Radboud; Schrama, Ernst; Visser, Pieter; Tscherning, Carl Christian; Veicherts, Martin

    2009-07-01

    One of the products derived from the gravity field and steady-state ocean circulation explorer (GOCE) observations are the gravity gradients. These gravity gradients are provided in the gradiometer reference frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. To use these gravity gradients for application in Earth sciences and gravity field analysis, additional preprocessing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information and error assessment. The temporal gravity gradient corrections consist of tidal and nontidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection, the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and it is robust, as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low-degree gravity field model as well as gravity gradient scale factors. Both methods allow gravity gradient scale factors to be estimated down to the 10⁻³ level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10⁻² level with this method.

  8. Psychosocial factors are independent risk factors for the development of Type 2 diabetes in Japanese workers with impaired fasting glucose and/or impaired glucose tolerance1

    PubMed Central

    Toshihiro, M; Saito, K; Takikawa, S; Takebe, N; Onoda, T; Satoh, J

    2008-01-01

    Aims We prospectively studied Japanese workers with impaired fasting glucose (IFG) and/or impaired glucose tolerance (IGT) and analysed possible risk factors for diabetes, including psychosocial factors such as stress. Methods The participants were 128 male Japanese company employees (mean age, 49.3 ± 5.9 years) with IFG and/or IGT diagnosed by oral glucose tolerance test (OGTT). Participants were prospectively studied for 5 years with annual OGTTs. The Kaplan–Meier method and Cox's proportional hazard model were used to analyse the incidence of diabetes and the factors affecting glucose tolerance, including anthropometric, biochemical and social–psychological factors. Results Of 128 participants, 36 (28.1%) developed diabetes and 39 (30.5%) returned to normal glucose tolerance (NGT) during a mean follow-up of 3.2 years. Independent risk factors for diabetes were night duty [hazard ratio (HR) = 5.48, P = 0.002], higher fasting plasma glucose (FPG) levels within 6.1–6.9 mmol/l (HR = 1.05, P = 0.031), stress (HR = 3.81, P = 0.037) and administrative position (HR = 12.70, P = 0.045), while independent factors associated with recovery were lower FPG levels (HR = 0.94, P = 0.017), being a white-collar worker (HR = 0.34, P = 0.033), non-smoking (HR = 0.31, P = 0.040) and lower serum alanine aminotransferase (ALT) levels (HR = 0.97, P = 0.042). Conclusions In addition to FPG levels at baseline, psychosocial factors (night duty, stress and administrative position) are risk factors for Type 2 diabetes, while being a white-collar worker, a non-smoker and lower serum ALT levels are factors associated with return to NGT in Japanese workers with IFG and/or IGT. PMID:19046200
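    The Kaplan–Meier method used above estimates the probability of remaining diabetes-free in the presence of censoring: at each event time, the running survival probability is multiplied by (1 − events/at-risk). A minimal sketch with made-up follow-up data (not the study's 128 participants):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. times: follow-up duration per subject;
    events: 1 if the endpoint (e.g. diabetes) occurred, 0 if censored.
    Returns [(time, S(t))] at each time where an event occurred."""
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if d:
            surv *= 1 - d / at_risk          # multiply in this time's hazard
            curve.append((t, surv))
        at_risk -= sum(1 for ti in times if ti == t)  # events and censorings leave
    return curve
```

    Cox's proportional hazards model, also used in the study to obtain the reported hazard ratios, extends this by modeling how covariates such as night duty or stress scale the hazard; that fitting step is substantially more involved and is not sketched here.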

  9. Effect of ensiling moist field bean (Vicia faba), pea (Pisum sativum) and lupine (Lupinus spp.) grains on the contents of alkaloids, oligosaccharides and tannins.

    PubMed

    Gefrom, A; Ott, E M; Hoedtke, S; Zeyner, A

    2013-12-01

    Ensiling legume grain may be an inexpensive and ecologically interesting method to produce a high-protein feed of local origin. Because maturation is typically patchy, the seeds are best harvested and ensiled in a moist condition, so developing a method for preserving legume grains harvested before maturation by lactic acid fermentation would have several advantages. Under laboratory conditions, crushed legume seeds of beans, peas and lupines with a high moisture content of 35% were ensiled with different additives (molasses and lactic acid bacteria). To characterize the final silages, the contents of proximate nutrients and antinutritional factors (alkaloids, oligosaccharides, tannins) were analysed. The addition of lactic acid bacteria ensured a fast and pronounced lactic acid production and decreased the contents of undesired fermentation products such as ethanol. The additional use of molasses for ensilage did not provide a remarkable further benefit. Excluding sugar and starch, the contents of proximate nutrients were not remarkably altered after ensiling. As an overall effect, lactic acid fermentation reduced tannins and oligosaccharides; it can be supposed that, after breakdown of the complex molecules, the oligosaccharides acted as a source of fermentable carbohydrates. A relevant reduction of alkaloids did not occur. The lactic acid fermentation of legume grains can be recommended as an appropriate method for conservation. With respect to its economic advantages, and compared with methods of chemical preservation, the lactic acid fermentation of legume grains under anaerobic conditions is an environmentally compliant procedure and therefore also an option for organic farming. © 2012 Blackwell Verlag GmbH.

  10. Comparative study on the selectivity of various spectrophotometric techniques for the determination of binary mixture of fenbendazole and rafoxanide.

    PubMed

    Saad, Ahmed S; Attia, Ali K; Alaraki, Manal S; Elzanfaly, Eman S

    2015-11-05

    Five different spectrophotometric methods were applied for the simultaneous determination of fenbendazole and rafoxanide in their binary mixture; namely, first derivative, derivative ratio, ratio difference, dual wavelength and H-point standard addition spectrophotometry. Different factors affecting each of the applied methods were studied, and the selectivity of the methods was compared. The methods were validated as per the ICH guidelines, and good accuracy, specificity and precision were proven within the concentration range of 5-50 μg/mL for both drugs. Statistical analysis using one-way ANOVA showed no significant differences among the proposed methods for the determination of the two drugs. The proposed methods successfully determined both drugs in laboratory-prepared and commercially available binary mixtures, and were found applicable for routine analysis in quality control laboratories. Copyright © 2015 Elsevier B.V. All rights reserved.
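
    The common idea behind standard-addition techniques is to spike the sample with known amounts of analyte and extrapolate the signal-versus-spike line back to zero. The sketch below shows the simple conventional standard-addition calculation (not the H-point variant used in the paper), with hypothetical absorbance readings:

```python
def standard_addition(added, signals):
    """Estimate the analyte concentration in the original sample from a
    standard-addition series: fit signal = a + b * added by least squares,
    then take the x-intercept magnitude, C0 = a / b."""
    n = len(added)
    mx = sum(added) / n
    my = sum(signals) / n
    sxx = sum((x - mx) ** 2 for x in added)
    sxy = sum((x - mx) * (y - my) for x, y in zip(added, signals))
    b = sxy / sxx          # slope (sensitivity)
    a = my - b * mx        # intercept (signal of the unspiked sample)
    return a / b

# hypothetical spikes (µg/mL) and absorbances; the data imply C0 = 10 µg/mL
added = [0, 5, 10, 20]
signal = [0.20, 0.30, 0.40, 0.60]   # 0.02 absorbance units per µg/mL
print(round(standard_addition(added, signal), 2))  # → 10.0
```

    In practice the spikes and readings come from the calibration experiment; the least-squares fit is the only computation involved.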

  11. Food additives and preschool children.

    PubMed

    Martyn, Danika M; McNulty, Breige A; Nugent, Anne P; Gibney, Michael J

    2013-02-01

    Food additives have been used throughout history to perform specific functions in foods. A comprehensive framework of legislation is in place within Europe to control the use of additives in the food supply and ensure they pose no risk to human health. Further to this, exposure assessments are regularly carried out to monitor population intakes and verify that they do not exceed acceptable daily intakes. Young children may have a higher dietary exposure to chemicals than adults due to a combination of rapid growth rates and distinct food intake patterns; for this reason, exposure assessments are particularly important in this age group. This paper reviews the use of additives and exposure assessment methods and examines the factors that affect dietary exposure in young children. One of the most widely investigated unfavourable health effects associated with food additive intake in preschool-aged children is the suggestion of adverse behavioural effects. Research examining this relationship has reported a variety of responses, with many studies noting an increase in hyperactivity as reported by parents but not when assessed by objective examiners. This review examines the experimental approaches used in such studies and suggests that efforts are needed to standardise objective methods of measuring behaviour in preschool children. Further to this, a more holistic approach to examining food additive intakes by preschool children is advisable, in which overall exposure is considered rather than focusing solely on behavioural effects, and in which intakes of food additives other than food colours are also examined.

  12. [Primary culture of human normal epithelial cells].

    PubMed

    Tang, Yu; Xu, Wenji; Guo, Wanbei; Xie, Ming; Fang, Huilong; Chen, Chen; Zhou, Jun

    2017-11-28

    The traditional primary culture methods for human normal epithelial cells have the disadvantages of low activity of the cultured cells, a low cultivation rate and complicated operation. To solve these problems, researchers have studied the culture process of human normal primary epithelial cells extensively. In this paper, we mainly introduce methods used in the separation and purification of human normal epithelial cells, such as the tissue separation method, enzyme digestion method, mechanical brushing method, red blood cell lysis method, and Percoll density gradient separation method. We also review methods used in culture and subculture, including serum-free medium combined with low mass fraction serum, mouse tail collagen coating, and glass culture bottles combined with plastic culture dishes. The biological characteristics of human normal epithelial cells and the methods of immunocytochemical staining and trypan blue exclusion are described. Moreover, the factors affecting culture are summarized: aseptic operation, the conditions of the extracellular environment during culture, the number of differential adhesion steps, and the selection and dosage of additives.

  13. In-house validation of a liquid chromatography-tandem mass spectrometry method for the determination of selective androgen receptor modulators (SARMS) in bovine urine.

    PubMed

    Schmidt, Kathrin S; Mankertz, Joachim

    2018-06-01

    A sensitive and robust LC-MS/MS method allowing the rapid screening and confirmation of selective androgen receptor modulators in bovine urine was developed and successfully validated according to Commission Decision 2002/657/EC, chapter 3.1.3 'alternative validation', by applying a matrix-comprehensive in-house validation concept. The confirmation of the analytes in the validation samples was achieved both on the basis of the MRM ion ratios as laid down in Commission Decision 2002/657/EC and by comparison of their enhanced product ion (EPI) spectra with a reference mass spectral library by making use of the QTRAP technology. Here, in addition to the MRM survey scan, EPI spectra were generated in a data-dependent way according to an information-dependent acquisition criterion. Moreover, stability studies of the analytes in solution and in matrix according to an isochronous approach proved the stability of the analytes in solution and in matrix for at least the duration of the validation study. To identify factors that have a significant influence on the test method in routine analysis, a factorial effect analysis was performed. To this end, factors considered to be relevant for the method in routine analysis (e.g. operator, storage duration of the extracts before measurement, different cartridge lots and different hydrolysis conditions) were systematically varied on two levels. The examination of the extent to which these factors influence the measurement results of the individual analytes showed that none of the validation factors exerts a significant influence on the measurement results.

  14. Growth of saprotrophic fungi and bacteria in soil.

    PubMed

    Rousk, Johannes; Bååth, Erland

    2011-10-01

    Bacterial and fungal growth rate measurements are sensitive variables to detect changes in environmental conditions. However, while considerable progress has been made in methods to assess the species composition and biomass of fungi and bacteria, information about growth rates remains surprisingly rudimentary. We review the recent history of approaches to assess bacterial and fungal growth rates, leading up to current methods, especially focusing on leucine/thymidine incorporation to estimate bacterial growth and acetate incorporation into ergosterol to estimate fungal growth. We present the underlying assumptions for these methods, compare estimates of turnover times for fungi and bacteria based on them, and discuss issues, including for example elusive conversion factors. We review what the application of fungal and bacterial growth rate methods has revealed regarding the influence of the environmental factors of temperature, moisture (including drying/rewetting), pH, as well as the influence of substrate additions, the presence of plants and toxins. We highlight experiments exploring the competitive and facilitative interaction between bacteria and fungi enabled using growth rate methods. Finally, we predict that growth methods will be an important complement to molecular approaches to elucidate fungal and bacterial ecology, and we identify methodological concerns and how they should be addressed. © 2011 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.

  15. Photo diagnosis of early pre cancer (LSIL) in genital tissue

    NASA Astrophysics Data System (ADS)

    Vaitkuviene, A.; Andersen-Engels, S.; Auksorius, E.; Bendsoe, N.; Gavriushin, V.; Gustafsson, U.; Oyama, J.; Palsson, S.; Soto Thompson, M.; Stenram, U.; Svanberg, K.; Viliunas, V.; De Weert, M. J.

    2005-11-01

    Persistent infections are recognized as an oncogenic factor, and sexually transmitted diseases (STDs) are common concomitant conditions in early precancerous genital tract lesions. The aim of this study was simple optical detection of early, regressive precancer of the cervix. Hereditary immunosuppression is most likely a risk factor for cervical cancer development. Light-induced fluorescence point monitoring was fitted to live cervical tissue diagnostics in 42 patients. Human papillomavirus DNA in the cervix was tested by means of the Hybrid Capture II method. Ultraviolet (337 nm) laser-excited fluorescence spectra of the live cervical tissue were analyzed by the principal component (PC) regression method and by a spectral decomposition method. The PC regression method best discriminated the pathology group "CIN I and inflammation" (AUC = 75%), related to fluorescence emission in the short-wavelength region. The spectral decomposition method suggested a few possible fluorophores in the long-wavelength region. Excitation of the live cervix with 398 nm light produced a sharp, selective enhancement of spectral intensity above 600 nm for high-grade cervical lesions. Conclusion: PC analysis of UV (337 nm) excitation fluorescence spectra provides information related to local immunity and low-grade cervical lesions. The addition of shorter and longer wavelengths is promising for progress in multi-wavelength LIF point monitoring for cervical precancer diagnostics, and for cancer prevention, especially in developing countries.

  16. In-Vivo Assessment of Femoral Bone Strength Using Finite Element Analysis (FEA) Based on Routine MDCT Imaging: A Preliminary Study on Patients with Vertebral Fractures

    PubMed Central

    Liebl, Hans; Garcia, Eduardo Grande; Holzner, Fabian; Noel, Peter B.; Burgkart, Rainer; Rummeny, Ernst J.; Baum, Thomas; Bauer, Jan S.

    2015-01-01

    Purpose To experimentally validate a non-linear finite element analysis (FEA) modeling approach assessing in-vitro fracture risk at the proximal femur and to transfer the method to standard in-vivo multi-detector computed tomography (MDCT) data of the hip, aiming to predict additional hip fracture risk in subjects with and without osteoporosis-associated vertebral fractures, using bone mineral density (BMD) measurements as the gold standard. Methods One fresh-frozen human femur specimen was mechanically tested and fractured, simulating stance and clinically relevant fall loading configurations to the hip. After experimental in-vitro validation, the FEA simulation protocol was transferred to standard contrast-enhanced in-vivo MDCT images to calculate individual hip fracture risk for each of 4 subjects with and without a history of osteoporotic vertebral fractures, matched by age and gender. In addition, FEA-based risk factor calculations were compared to manual femoral BMD measurements of all subjects. Results In-vitro simulations showed good correlation with the experimentally measured strains both in stance (R2 = 0.963) and fall configuration (R2 = 0.976). The simulated maximum stress overestimated the experimental failure load (4743 N) by 14.7% (5440 N), while the simulated maximum strain overestimated it by 4.7% (4968 N). The simulated failed elements coincided precisely with the experimentally determined fracture locations. BMD measurements in subjects with a history of osteoporotic vertebral fractures did not differ significantly from subjects without fragility fractures (femoral head: p = 0.989; femoral neck: p = 0.366), but showed higher FEA-based risk factors for additional incident hip fractures (p = 0.028). Conclusion FEA simulations were successfully validated by elastic and destructive in-vitro experiments. In the subsequent in-vivo analyses, MDCT-based FEA risk factor differences for additional hip fractures were not mirrored by corresponding BMD measurements. Our data suggest that MDCT-derived FEA models may assess bone strength more accurately than BMD measurements alone, providing a valuable in-vivo fracture risk assessment tool. PMID:25723187

  17. Landslide risk assessment

    USGS Publications Warehouse

    Lessing, P.; Messina, C.P.; Fonner, R.F.

    1983-01-01

    Landslide risk can be assessed by evaluating geological conditions associated with past events. A sample of 2,416 slides from urban areas in West Virginia, each with 12 associated geological factors, has been analyzed using SAS computer methods. In addition, selected data have been normalized to account for the areal distribution of rock formations, soil series, and slope percentages. Final calculations yield landslide risk assessments, with values of 1.50 or more indicating high risk. The simplicity of the method provides for a rapid initial assessment prior to financial investment. However, it does not replace on-site investigations, nor excuse poor construction. © 1983 Springer-Verlag New York Inc.

  18. Progressive damage, fracture predictions and post mortem correlations for fiber composites

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Lewis Research Center is involved in the development of computational mechanics methods for predicting the structural behavior and response of composite structures. In conjunction with the analytical methods development, experimental programs including post failure examination are conducted to study various factors affecting composite fracture such as laminate thickness effects, ply configuration, and notch sensitivity. Results indicate that the analytical capabilities incorporated in the CODSTRAN computer code are effective in predicting the progressive damage and fracture of composite structures. In addition, the results being generated are establishing a data base which will aid in the characterization of composite fracture.

  19. Fully decoupled monolithic projection method for natural convection problems

    NASA Astrophysics Data System (ADS)

    Pan, Xiaomin; Kim, Kyoungyoun; Lee, Changhoon; Choi, Jung-Il

    2017-04-01

    To solve time-dependent natural convection problems, we propose a fully decoupled monolithic projection method. The proposed method applies the Crank-Nicolson scheme in time and the second-order central finite difference in space. To obtain a non-iterative monolithic method from the fully discretized nonlinear system, we first adopt linearizations of the nonlinear convection terms and the general buoyancy term, incurring only second-order errors in time. Approximate block lower-upper decompositions, along with an approximate factorization technique, are additionally applied to the global linearly coupled system, which leads to several decoupled subsystems, i.e., a fully decoupled monolithic procedure. We establish global error estimates to verify the second-order temporal accuracy of the proposed method for velocity, pressure, and temperature in terms of a discrete l2-norm. Moreover, based on the energy evolution, the proposed method is proved to be stable if the time step is less than or equal to a constant. In addition, we provide numerical simulations of two-dimensional Rayleigh-Bénard convection and periodically forced flow. The results demonstrate that the proposed method significantly mitigates the time-step limitation, reduces the computational cost because only one Poisson equation needs to be solved, and preserves second-order temporal accuracy for velocity, pressure, and temperature. Finally, the proposed method reasonably predicts three-dimensional Rayleigh-Bénard convection for different Rayleigh numbers.
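
    Two of the paper's building blocks, Crank-Nicolson time stepping with second-order central differences and a direct tridiagonal solve, can be illustrated on the much simpler 1D heat equation. The sketch below is not the authors' fully decoupled monolithic scheme, only the time-stepping idea, with illustrative grid parameters:

```python
def crank_nicolson_heat(u, alpha, dx, dt, steps):
    """Crank-Nicolson for u_t = alpha * u_xx on a 1D grid with fixed
    (Dirichlet) boundary values; second order in both time and space.
    Each step solves a tridiagonal system via the Thomas algorithm."""
    r = alpha * dt / (2 * dx * dx)
    n = len(u)
    for _ in range(steps):
        # explicit half of the scheme on the right-hand side
        rhs = u[:]
        for i in range(1, n - 1):
            rhs[i] = u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
        # implicit half: -r u_{i-1} + (1 + 2r) u_i - r u_{i+1} = rhs_i
        a = [-r] * n; b = [1 + 2 * r] * n; c = [-r] * n
        b[0] = b[-1] = 1.0; c[0] = a[-1] = 0.0   # boundary rows: u unchanged
        # Thomas algorithm: forward sweep, then back substitution
        cp = [0.0] * n; dp = [0.0] * n
        cp[0] = c[0] / b[0]; dp[0] = rhs[0] / b[0]
        for i in range(1, n):
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (rhs[i] - a[i] * dp[i - 1]) / m
        u[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            u[i] = dp[i] - cp[i] * u[i + 1]
    return u

# hot spike in the middle of a cold rod: heat spreads and the peak decays
u = [0.0] * 21; u[10] = 1.0
u = crank_nicolson_heat(u, alpha=1.0, dx=0.05, dt=0.001, steps=50)
print(max(u), u[0])
```

    The monolithic method of the paper couples velocity, pressure and temperature in one such linear solve per step; the decompositions it introduces split that solve into cheap banded systems like the one above.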

  20. Spatial modelling of disease using data- and knowledge-driven approaches.

    PubMed

    Stevens, Kim B; Pfeiffer, Dirk U

    2011-09-01

    The purpose of spatial modelling in animal and public health is three-fold: describing existing spatial patterns of risk, attempting to understand the biological mechanisms that lead to disease occurrence and predicting what will happen in the medium to long-term future (temporal prediction) or in different geographical areas (spatial prediction). Traditional methods for temporal and spatial predictions include general and generalized linear models (GLM), generalized additive models (GAM) and Bayesian estimation methods. However, such models require both disease presence and absence data which are not always easy to obtain. Novel spatial modelling methods such as maximum entropy (MAXENT) and the genetic algorithm for rule set production (GARP) require only disease presence data and have been used extensively in the fields of ecology and conservation, to model species distribution and habitat suitability. Other methods, such as multicriteria decision analysis (MCDA), use knowledge of the causal factors of disease occurrence to identify areas potentially suitable for disease. In addition to their less restrictive data requirements, some of these novel methods have been shown to outperform traditional statistical methods in predictive ability (Elith et al., 2006). This review paper provides details of some of these novel methods for mapping disease distribution, highlights their advantages and limitations, and identifies studies which have used the methods to model various aspects of disease distribution. Copyright © 2011. Published by Elsevier Ltd.
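
    Of the traditional methods listed, a GLM with a logit link fitted to presence/absence data is the simplest. A toy pure-Python fit by gradient descent (hypothetical covariate and presence/absence values, not a real spatial model) shows the shape of such a model:

```python
import math

def fit_logistic(X, y, lr=0.1, iters=3000):
    """Tiny logistic-regression GLM fitted by batch gradient descent:
    models disease presence/absence (1/0) from covariates. Illustrative
    only; real spatial models add structure (GAM smooths, priors, etc.)."""
    w = [0.0] * (len(X[0]) + 1)              # intercept + one weight per covariate
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted presence probability
            err = p - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

# toy data: one covariate (e.g. a scaled risk factor); presence rises with it
X = [[-2], [-1], [-0.5], [0.5], [1], [2]]
y = [0, 0, 0, 1, 1, 1]
w = fit_logistic(X, y)
print(round(predict(w, [2]), 3), round(predict(w, [-2]), 3))
```

    Presence-only methods such as MAXENT and GARP dispense with the absence labels required here, which is exactly the data limitation the review highlights.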

  1. Overview Article: Identifying transcriptional cis-regulatory modules in animal genomes

    PubMed Central

    Suryamohan, Kushal; Halfon, Marc S.

    2014-01-01

    Gene expression is regulated through the activity of transcription factors and chromatin modifying proteins acting on specific DNA sequences, referred to as cis-regulatory elements. These include promoters, located at the transcription initiation sites of genes, and a variety of distal cis-regulatory modules (CRMs), the most common of which are transcriptional enhancers. Because regulated gene expression is fundamental to cell differentiation and acquisition of new cell fates, identifying, characterizing, and understanding the mechanisms of action of CRMs is critical for understanding development. CRM discovery has historically been challenging, as CRMs can be located far from the genes they regulate, have few readily-identifiable sequence characteristics, and for many years were not amenable to high-throughput discovery methods. However, the recent availability of complete genome sequences and the development of next-generation sequencing methods has led to an explosion of both computational and empirical methods for CRM discovery in model and non-model organisms alike. Experimentally, CRMs can be identified through chromatin immunoprecipitation directed against transcription factors or histone post-translational modifications, identification of nucleosome-depleted “open” chromatin regions, or sequencing-based high-throughput functional screening. Computational methods include comparative genomics, clustering of known or predicted transcription factor binding sites, and supervised machine-learning approaches trained on known CRMs. All of these methods have proven effective for CRM discovery, but each has its own considerations and limitations, and each is subject to a greater or lesser number of false-positive identifications. Experimental confirmation of predictions is essential, although shortcomings in current methods suggest that additional means of validation need to be developed. PMID:25704908

  2. Developmental Testing of Habitability and Human Factors Tools and Methods During Neemo 15

    NASA Technical Reports Server (NTRS)

    Thaxton, S. S.; Litaker, H. L., Jr.; Holden, K. L.; Adolf, J. A.; Pace, J.; Morency, R. M.

    2011-01-01

    Currently, no established methods exist to collect real-time human factors and habitability data while crewmembers are living aboard the International Space Station (ISS), traveling aboard other space vehicles, or living in remote habitats. Instead, human factors and habitability data regarding space vehicles and habitats are acquired at the end of missions during postflight crew debriefs. These debriefs occur weeks or often longer after events have occurred, forcing significant reliance on imperfect human memory. Without a means to collect real-time data, small issues may have a cumulative effect and continue to cause crew frustration and inefficiencies; without timely and appropriate reporting methodologies, issues may be repeated or lost. TOOL DEVELOPMENT AND EVALUATION: As part of a directed research project (DRP) aiming to develop and validate tools and methods for collecting near real-time human factors and habitability data, a preliminary set of tools and methods was developed and then evaluated during the NASA Extreme Environments Mission Operations (NEEMO) 15 mission in October 2011. Two versions of a software tool were used to collect observational data from NEEMO crewmembers, along with targeted strategies for using video cameras to collect observations. The space habitability observation reporting tool (SHORT) was created based on a tool previously developed by NASA to capture human factors and habitability issues during spaceflight. SHORT uses a web-based interface that allows users to enter a text description of any observations they wish to report and to assign a priority level if changes are needed. In addition to the web-based format, a mobile Apple (iOS) version, referred to as iSHORT, was implemented. iSHORT allows users to provide text, audio, photograph, and video data to report observations, and can be deployed on an iPod Touch, iPhone, or iPad; for NEEMO 15, the app was provided on an iPad 2.

  3. Application of Various Types of Liposomes in Drug Delivery Systems

    PubMed Central

    Alavi, Mehran; Karimi, Naser; Safaei, Mohsen

    2017-01-01

    Liposomes, due to their various forms, require further exploration. These structures can deliver both hydrophilic and hydrophobic drugs for cancer therapy, antibacterial and antifungal treatment, immunomodulation, diagnostics, ophthalmic applications, vaccines, enzymes and genetic elements. The preparation method determines many of the properties of these systems; depending on the method, liposome types can be unilamellar, multilamellar or giant unilamellar. However, many factors and difficulties affect the development of liposome drug delivery structures. In the present review, we discuss some problems that impact drug delivery by liposomes. In addition, we discuss a new generation of liposomes, which is utilized to overcome the limitations of conventional liposomes. PMID:28507932

  4. Chronic pain and fatigue: Associations with religion and spirituality

    PubMed Central

    Baetz, Marilyn; Bowen, Rudy

    2008-01-01

    BACKGROUND: Conditions with chronic, non-life-threatening pain and fatigue remain a challenge to treat, and are associated with high health care use. Understanding psychological and psychosocial contributing and coping factors, and working with patients to modify them, is one goal of management. An individual’s spirituality and/or religion may be one such factor that can influence the experience of chronic pain or fatigue. METHODS: The Canadian Community Health Survey (2002) obtained data from 37,000 individuals 15 years of age or older. From these data, four conditions with chronic pain and fatigue were analyzed together – fibromyalgia, back pain, migraine headaches and chronic fatigue syndrome. Additional data from the survey were used to determine how religion and spirituality affect psychological well-being, as well as the use of various coping methods. RESULTS: Religious persons were less likely to have chronic pain and fatigue, while those who were spiritual but not affiliated with regular worship attendance were more likely to have those conditions. Individuals with chronic pain and fatigue were more likely to use prayer and seek spiritual support as a coping method than the general population. Furthermore, chronic pain and fatigue sufferers who were both religious and spiritual were more likely to have better psychological well-being and use positive coping strategies. INTERPRETATION: Consideration of an individual’s spirituality and/or religion, and how it may be used in coping may be an additional component to the overall management of chronic pain and fatigue. PMID:18958309

  5. Development of a simple and valid method for the trace determination of phthalate esters in human plasma using dispersive liquid-liquid microextraction coupled with gas chromatography-mass spectrometry.

    PubMed

    Ebrahim, Karim; Poursafa, Parinaz; Amin, Mohammad Mehdi

    2017-11-01

    A new method was developed for the trace determination of phthalic acid esters in plasma using dispersive liquid-liquid microextraction and gas chromatography with mass spectrometry analysis. Plasma proteins were efficiently precipitated by trichloroacetic acid, and then a mixture of chlorobenzene (as extraction solvent) and acetonitrile (as dispersive solvent) was rapidly injected into the clear supernatant using a syringe. After centrifuging, the chlorobenzene sedimented at the bottom of the test tube, and 1 μL of this sedimented phase was injected into the gas chromatograph for phthalic acid ester analysis. Different factors affecting the extraction performance, such as the type and volume of the extraction and dispersive solvents, the extraction time, and the effect of salt addition, were investigated and optimized. Under the optimum conditions, the enrichment factors and extraction recoveries were satisfactory, ranging between 820-1020 and 91-97%, respectively. The linear range was wide (50-1000 ng/mL) and the limits of detection were very low (1.5-2.5 ng/mL for all analytes). The relative standard deviations for analysis of 1 μg/mL of the analytes were between 3.2-6.1%. Salt addition showed no significant effect on extraction recovery. Finally, the proposed method was successfully utilized for the extraction and determination of phthalic acid esters in human plasma samples, and satisfactory results were obtained. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
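
    The two figures of merit quoted above, enrichment factor and extraction recovery, are related through the phase-volume ratio. A small sketch with hypothetical concentrations and volumes (chosen to fall in the reported ranges; not the paper's actual measurements):

```python
def enrichment_factor(c_sed, c_0):
    """EF: analyte concentration in the sedimented (extraction) phase
    divided by its initial concentration in the aqueous sample."""
    return c_sed / c_0

def extraction_recovery(c_sed, c_0, v_sed, v_aq):
    """ER%: fraction of the total analyte transferred into the sedimented
    phase, i.e. EF scaled by the phase-volume ratio, times 100."""
    return enrichment_factor(c_sed, c_0) * (v_sed / v_aq) * 100

# hypothetical numbers: 10 µL sedimented phase from a 10 mL sample,
# with roughly 950-fold concentration of the analyte
ef = enrichment_factor(c_sed=95.0, c_0=0.1)
er = extraction_recovery(95.0, 0.1, v_sed=0.010, v_aq=10.0)
print(ef, er)
```

    The tiny sedimented-phase volume is what makes large enrichment factors compatible with near-quantitative recovery.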

  6. Estimation of sample size and testing power (Part 4).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research, and an appropriate estimation based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests with a one-factor, two-level design, including the estimation formulas and their realization both directly and through the POWER procedure of SAS software, for quantitative and qualitative data. In addition, this article presents worked examples, which will help researchers implement the repetition principle during the research design phase.
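
    For the one-factor, two-level design discussed, the standard two-sample formula for comparing means can be computed directly. The article works through SAS's POWER procedure; as a hedged stand-in, the same calculation using only the Python standard library:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Sample size per group for a two-sided, two-sample comparison of
    means (one factor with two levels):
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2, rounded up."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # critical value for the two-sided test
    z_b = z.inv_cdf(power)           # quantile corresponding to the target power
    return ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

# detect a half-standard-deviation difference at alpha = 0.05 with 80% power
print(n_per_group(delta=0.5, sigma=1.0))  # → 63 per group
```

    Halving the detectable difference quadruples the required sample size, which is the practical force of the formula's delta² denominator.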

  7. High quality factor whispering gallery modes from self-assembled hexagonal GaN rods grown by metal-organic vapor phase epitaxy.

    PubMed

    Tessarek, C; Sarau, G; Kiometzis, M; Christiansen, S

    2013-02-11

    Self-assembled GaN rods were grown on sapphire by metal-organic vapor phase epitaxy using a simple two-step method that relies first on a nitridation step followed by GaN epitaxy. The mask-free rods formed without any additional catalyst. Most of the vertically aligned rods exhibit a regular hexagonal shape with sharp edges and smooth sidewall facets. Cathodo- and microphotoluminescence investigations were carried out on single GaN rods. Whispering gallery modes with quality factors greater than 4000 were measured demonstrating the high morphological and optical quality of the self-assembled GaN rods.
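
    The quality factor reported above is the ratio of a resonance's peak wavelength to its spectral linewidth. A one-line calculation with hypothetical values in the paper's regime (the actual peak positions and linewidths are not given in the abstract):

```python
def quality_factor(peak_wavelength_nm, fwhm_nm):
    """Optical quality factor of a resonance: Q = lambda / delta-lambda,
    the peak wavelength over the full width at half maximum."""
    return peak_wavelength_nm / fwhm_nm

# hypothetical whispering-gallery peak: 450 nm mode with a 0.1 nm linewidth
print(quality_factor(450.0, 0.1))
```

    A Q above 4000 thus corresponds to resolving sub-0.1 nm linewidths in the visible, which is why smooth hexagonal facets matter: sidewall roughness broadens the modes and lowers Q.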

  8. Highly efficient removal of ammonia nitrogen from wastewater by dielectrophoresis-enhanced adsorption.

    PubMed

    Liu, Dongyang; Cui, Chenyang; Wu, Yanhong; Chen, Huiying; Geng, Junfeng; Xia, Jianxin

    2018-01-01

    A new approach, based on dielectrophoresis (DEP), was developed in this work to enhance traditional adsorption for the removal of ammonia nitrogen (NH3-N) from wastewater. The factors that affected the removal efficiency were systematically investigated, which allowed us to determine the optimal operation parameters. With this new method, the removal efficiency was significantly improved from 66.7% by adsorption alone to 95% by adsorption-DEP, using titanium metal mesh as the DEP electrodes and zeolite as the absorbent material. In addition, the dosage of the absorbent (zeolite) and the processing time needed for the removal were greatly reduced after the introduction of DEP into the process, and a very low discharge concentration of NH3-N (C = 1.5 mg/L) was achieved, which well met the discharge criterion of C < 8 mg/L (the emission standard of pollutants for the rare earth industry in China).

  9. Preparation and rheological behavior of polymer-modified asphalts

    NASA Astrophysics Data System (ADS)

    Yousefi, Ali Akbar

    1999-09-01

    Different materials and methods were used to prepare and stabilize polymer-modified asphalts. The addition of thermoplastic elastomers improved some technically important properties of asphalt. Due to inherent factors such as the large density difference between asphalt and polyethylene, many physical methods in which the structure of the asphalt is left unchanged failed to stabilize this system. The effects of adding copolymers and a pyrolytic oil residue derived from used tire rubber were also studied; these were found to be ineffective in improving the storage stability of the polymer-asphalt emulsions, although the high- and moderate-temperature properties of the asphalt were improved. Finally, the technique of catalytic grafting of polymer onto the surface of high-density particles (e.g. carbon black) was used to balance the large density difference between asphalt and polymer. The resulting polymer-asphalts were stable at high temperatures and showed enhanced properties at both low and high temperatures.

  10. Preferred Primary Healthcare Provider Choice Among Insured Persons in Ashanti Region, Ghana

    PubMed Central

    Boachie, Micheal Kofi

    2016-01-01

    Background: In early 2012, National Health Insurance Scheme (NHIS) members in Ashanti Region were allowed to choose their own primary healthcare providers. This paper investigates the factors that enrolees in the Ashanti Region considered in choosing preferred primary healthcare providers (PPPs) and the direction of association of such factors with the choice of PPP. Methods: Using a cross-sectional study design, the study sampled 600 NHIS enrolees in the Kumasi Metro area and Kwabre East district. The sampling methods were a combination of simple random and systematic sampling techniques at different stages. Descriptive statistics were used to analyse demographic information and the criteria for selecting PPPs. Multinomial logistic regression was used to ascertain the direction of association of the factors with the choice of PPP, using mission PPPs as the base outcome. Results: Out of the 600 questionnaires administered, 496 were retained for further analysis. The results show that availability of essential drugs (53.63%) and doctors (39.92%), distance or proximity (49.60%), provider reputation (39.52%), waiting time (39.92%), additional charges (37.10%), and recommendations (48.79%) were the main criteria adopted by enrolees in selecting PPPs. In the regression, income (-0.0027), availability of doctors (-1.82), additional charges (-2.14) and reputation (-2.09) were statistically significant at the 1% level in influencing the choice of government PPPs. For private PPPs, availability of drugs (2.59), waiting time (1.45), residence (-2.62), gender (-2.89), and reputation (-2.69) were statistically significant at the 1% level, and the presence of additional charges (-1.29) was statistically significant at the 5% level. Conclusion: Enrolees select their PPPs based on such factors as availability of doctors and essential drugs, reputation, waiting time, income, and their residence. 
Based on these findings, healthcare providers need to improve their quality levels by ensuring constant availability of essential drugs and doctors, and shorter waiting times. However, individual enrolees may value each criterion differently, so not all enrolees may be motivated by the same concerns. This requires providers to be circumspect regarding the factors that may attract enrolees. The National Health Insurance Authority (NHIA) should also ensure timely release of funds to help providers procure the necessary medical supplies to ensure quality service. PMID:26927586
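
    A multinomial logit with a designated base outcome, as used in this study, fixes the base category's coefficients at zero and gives each other category its own coefficient vector. The sketch below shows that parameterization with a stable softmax; the feature set and coefficient values are invented for illustration and are not the study's estimates.

```python
import numpy as np

# Multinomial logit with "mission PPP" as the base outcome: the base category
# gets a score of zero, each other category a dot product of features and its
# own coefficient vector. Coefficients below are illustrative only.

def mnl_probs(x, betas):
    """x: feature vector; betas: dict category -> coefficient vector.
    The base outcome implicitly has a zero coefficient vector."""
    scores = {"mission": 0.0}                       # base outcome
    scores.update({k: float(np.dot(x, b)) for k, b in betas.items()})
    z = np.array(list(scores.values()))
    p = np.exp(z - z.max())                         # numerically stable softmax
    p /= p.sum()
    return dict(zip(scores.keys(), p))

# hypothetical features: [income (scaled), doctors available, extra charges]
x = np.array([1.2, 1.0, 0.0])
betas = {
    "government": np.array([-0.27, -1.82, -2.14]),  # negative signs echo the text
    "private":    np.array([0.10, 0.80, -1.29]),
}
probs = mnl_probs(x, betas)
print(probs)  # probabilities over {mission, government, private}, summing to 1
```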

  11. Using the Time-Correlated Induced Fission Method to Simultaneously Measure the 235U Content and the Burnable Poison Content in LWR Fuel Assemblies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Root, M. A.; Menlove, H. O.; Lanza, R. C.

    The uranium neutron coincidence collar uses thermal neutron interrogation to verify the 235U mass in low-enriched uranium (LEU) fuel assemblies in fuel fabrication facilities. Burnable poisons are commonly added to nuclear fuel to increase the lifetime of the fuel. The high thermal neutron absorption by these poisons reduces the active neutron signal produced by the fuel. Burnable poison correction factors or fast-mode runs with Cd liners can help compensate for this effect, but the correction factors rely on operator declarations of burnable poison content, and fast-mode runs are time-consuming. Finally, this paper describes a new analysis method to measure the 235U mass and burnable poison content in LEU nuclear fuel simultaneously in a timely manner, without requiring additional hardware.

  12. Using the Time-Correlated Induced Fission Method to Simultaneously Measure the 235U Content and the Burnable Poison Content in LWR Fuel Assemblies

    DOE PAGES

    Root, M. A.; Menlove, H. O.; Lanza, R. C.; ...

    2018-03-21

    The uranium neutron coincidence collar uses thermal neutron interrogation to verify the 235U mass in low-enriched uranium (LEU) fuel assemblies in fuel fabrication facilities. Burnable poisons are commonly added to nuclear fuel to increase the lifetime of the fuel. The high thermal neutron absorption by these poisons reduces the active neutron signal produced by the fuel. Burnable poison correction factors or fast-mode runs with Cd liners can help compensate for this effect, but the correction factors rely on operator declarations of burnable poison content, and fast-mode runs are time-consuming. Finally, this paper describes a new analysis method to measure the 235U mass and burnable poison content in LEU nuclear fuel simultaneously in a timely manner, without requiring additional hardware.

  13. Optimized distortion correction technique for echo planar imaging.

    PubMed

    Chen, N K; Wyrwicz, A M

    2001-03-01

    A new phase-shifted EPI pulse sequence is described that encodes EPI phase errors due to all off-resonance factors, including B0 field inhomogeneity, eddy current effects, and gradient waveform imperfections. Combined with the previously proposed multichannel modulation postprocessing algorithm (Chen and Wyrwicz, MRM 1999;41:1206-1213), the encoded phase error information can be used to effectively remove geometric distortions in subsequent EPI scans. The proposed EPI distortion correction technique has been shown to be effective in removing distortions due to gradient waveform imperfections and phase gradient-induced eddy current effects. In addition, this new method retains advantages of the earlier method, such as simultaneous correction of different off-resonance factors without use of a complicated phase unwrapping procedure. The effectiveness of this technique is illustrated with EPI studies on phantoms and animal subjects. Implementation to different versions of EPI sequences is also described. Magn Reson Med 45:525-528, 2001. Copyright 2001 Wiley-Liss, Inc.

  14. Mathematical model for the Bridgman-Stockbarger crystal growing system

    NASA Technical Reports Server (NTRS)

    Roberts, G. O.

    1986-01-01

    In a major technical breakthrough, a computer model for Bridgman-Stockbarger crystal growth was developed. The model includes melt convection, solute effects, thermal conduction in the ampule, melt, and crystal, and the determination of the curved moving crystal-melt interface. The key to the numerical method is the use of a nonuniform computational mesh which moves with the interface, so that the interface is a mesh surface. In addition, implicit methods are used for advection and diffusion of heat, concentration, and vorticity, for interface movement, and for internal gravity waves. This allows large time-steps without loss of stability or accuracy. Numerical results are presented for the interface shape, temperature distribution, and concentration distribution, in steady-state crystal growth. Solutions are presented for two test cases using water, with two different salts in solution. The two diffusivities differ by a factor of ten, and the concentrations differ by a factor of twenty.
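
    The stability advantage of implicit time-stepping on a nonuniform mesh can be illustrated with a minimal 1D sketch: backward-Euler heat diffusion solved with the Thomas tridiagonal algorithm. This is a toy analogue of the scheme described above, not the crystal-growth model's actual discretization; mesh, diffusivity and time step are invented values.

```python
import numpy as np

# Backward-Euler diffusion u_t = D u_xx on a nonuniform 1D mesh with fixed
# endpoint values. Implicit stepping stays stable for time steps far beyond
# the explicit limit dt < dx_min^2 / (2D).

def be_diffusion_step(u, x, D, dt):
    n = len(x)
    a = np.zeros(n); b = np.ones(n); c = np.zeros(n); d = u.copy()
    for i in range(1, n - 1):
        hl, hr = x[i] - x[i-1], x[i+1] - x[i]
        # second-derivative weights on a nonuniform mesh
        wl = 2.0 / (hl * (hl + hr))
        wr = 2.0 / (hr * (hl + hr))
        a[i] = -D * dt * wl
        c[i] = -D * dt * wr
        b[i] = 1.0 + D * dt * (wl + wr)
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, n):
        m = a[i] / b[i-1]
        b[i] -= m * c[i-1]
        d[i] -= m * d[i-1]
    out = np.empty(n)
    out[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        out[i] = (d[i] - c[i] * out[i+1]) / b[i]
    return out

# stretched mesh, clustered near x = 0 (a crude stand-in for interface refinement)
x = np.linspace(0.0, 1.0, 41) ** 2
u = np.where(x < 0.25, 1.0, 0.0)          # hot side / cold side step profile
for _ in range(50):
    u = be_diffusion_step(u, x, D=1e-2, dt=0.05)   # dt far above explicit limit
print(round(float(u[len(u) // 2]), 4))    # smoothed profile, values stay in [0, 1]
```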

  15. Large deflection angle, high-power adaptive fiber optics collimator with preserved near-diffraction-limited beam quality.

    PubMed

    Zhi, Dong; Ma, Yanxing; Chen, Zilun; Wang, Xiaolin; Zhou, Pu; Si, Lei

    2016-05-15

    We report on the development of a monolithic adaptive fiber optics collimator, with a large deflection angle and preserved near-diffraction-limited beam quality, that has been tested at a maximal output power at the 300 W level. Additionally, a new measurement method of beam quality (M2 factor) is developed. Experimental results show that the deflection angle of the collimated beam is in the range of 0-0.27 mrad in the X direction and 0-0.19 mrad in the Y direction. The effective working frequency of the device is about 710 Hz. By employing the new measurement method of the M2 factor, we calculate that the beam quality is Mx2=1.35 and My2=1.24, which is in agreement with the result from the beam propagation analyzer and is preserved well with the increasing output power.
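
    A common way to estimate an M2 factor (the approach standardized in ISO 11146, not the new measurement method of this paper) is to fit the squared beam radius along the caustic, which is quadratic in propagation distance. The sketch below recovers M2 from noiseless synthetic data; wavelength and beam parameters are assumed values.

```python
import numpy as np

# Caustic fit: w(z)^2 = w0^2 + (M2 * lam / (pi * w0))^2 * (z - z0)^2 is a
# parabola in z, so a quadratic least-squares fit yields waist, waist position
# and far-field divergence. Synthetic data with known ground truth below.

lam = 1.07e-6                                      # wavelength, m (assumed)
w0_true, z0_true, M2_true = 1.0e-3, 0.5, 1.30
theta = M2_true * lam / (np.pi * w0_true)          # far-field half-angle, rad

z = np.linspace(0.0, 1.0, 21)                      # measurement planes, m
w2 = w0_true**2 + theta**2 * (z - z0_true)**2      # noiseless squared radii

A, B, C = np.polyfit(z, w2, 2)                     # w2 = A z^2 + B z + C
z0 = -B / (2 * A)                                  # waist position
w0 = np.sqrt(C - B**2 / (4 * A))                   # waist radius
M2 = np.pi * w0 * np.sqrt(A) / lam                 # sqrt(A) = divergence
print(round(float(M2), 3), round(float(z0), 3))
```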

  16. Applications of integrated human error identification techniques on the chemical cylinder change task.

    PubMed

    Cheng, Ching-Min; Hwang, Sheue-Ling

    2015-03-01

    This paper outlines the human error identification (HEI) techniques that currently exist to assess latent human errors. Many formal error identification techniques have existed for years, but few have been validated to cover latent human error analysis in different domains. This study considers many possible error modes and influential factors, including external error modes, internal error modes, psychological error mechanisms, and performance shaping factors, and integrates several execution procedures and frameworks of HEI techniques. The case study in this research was the operational process of changing chemical cylinders in a factory. In addition, the integrated HEI method was used to assess the operational processes and the system's reliability. It was concluded that the integrated method is a valuable aid to develop much safer operational processes and can be used to predict human error rates on critical tasks in the plant. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  17. Estimating sub-surface dispersed oil concentration using acoustic backscatter response.

    PubMed

    Fuller, Christopher B; Bonner, James S; Islam, Mohammad S; Page, Cheryl; Ojo, Temitope; Kirkey, William

    2013-05-15

    The recent Deepwater Horizon disaster resulted in a dispersed oil plume at an approximate depth of 1000 m. Several methods were used to characterize this plume with respect to concentration and spatial extent including surface supported sampling and autonomous underwater vehicles with in situ instrument payloads. Additionally, echo sounders were used to track the plume location, demonstrating the potential for remote detection using acoustic backscatter (ABS). This study evaluated use of an Acoustic Doppler Current Profiler (ADCP) to quantitatively detect oil-droplet suspensions from the ABS response in a controlled laboratory setting. Results from this study showed log-linear ABS responses to oil-droplet volume concentration. However, the inability to reproduce ABS response factors suggests the difficultly in developing meaningful calibration factors for quantitative field analysis. Evaluation of theoretical ABS intensity derived from the particle size distribution provided insight regarding method sensitivity in the presence of interfering ambient particles. Copyright © 2013 Elsevier Ltd. All rights reserved.
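
    The log-linear ABS response reported here amounts to a calibration of the form ABS = slope * log10(C) + intercept. The sketch below fits and inverts such a curve on fabricated values; as the abstract notes, real calibration coefficients did not reproduce between runs, so this illustrates the computation only.

```python
import numpy as np

# Log-linear calibration of acoustic backscatter (ABS) against oil-droplet
# volume concentration, fitted by least squares and inverted for prediction.
# Concentrations and the response curve are fabricated for illustration.

conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0])     # ppm (illustrative)
abs_db = 20.0 * np.log10(conc) + 5.0               # synthetic instrument response

slope, intercept = np.polyfit(np.log10(conc), abs_db, 1)

def concentration_from_abs(a_db):
    """Invert the calibration to estimate concentration from ABS (dB)."""
    return 10.0 ** ((a_db - intercept) / slope)

print(round(float(slope), 2), round(float(concentration_from_abs(25.0)), 2))
```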

  18. Brachytherapy devices and methods employing americium-241

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, L. A.

    1985-04-16

    Sources and methods for radiation therapy, particularly brachytherapy, employing americium-241 (60 keV gamma emission and 433 year half-life) provide major advantages for radiotherapy, including simplified radiation protection, dose reduction to healthy tissue, increased dose to tumor, and improved dose distributions. A number of apparent drawbacks and unfavorable considerations, including low gamma factor, high self-absorption, increased activity required, and alpha-particle generation leading to helium gas pressure buildup and potential neutron contamination in the generated radiation, are all effectively dealt with and overcome through recognition of subtle favorable factors unique to americium-241 among brachytherapy sources and through suitable constructional techniques. Due to an additional amount of radiation, on the order of 50%, provided primarily to nearby regions as a result of Compton scatter in tissue and water, higher dose rates occur than would be predicted by conventional calculations.

  19. Bioreactor concepts for cell culture-based viral vaccine production.

    PubMed

    Gallo-Ramírez, Lilí Esmeralda; Nikolay, Alexander; Genzel, Yvonne; Reichl, Udo

    2015-01-01

    Vaccine manufacturing processes are designed to meet present and upcoming challenges associated with a growing vaccine market and to include multi-use facilities offering a broad portfolio and faster reaction times in case of pandemics and emerging diseases. The final products, from whole viruses to recombinant viral proteins, are very diverse, making standard process strategies hardly universally applicable. Numerous factors such as cell substrate, virus strain or expression system, medium, cultivation system, cultivation method, and scale need consideration. Reviewing options for efficient and economical production of human vaccines, this paper discusses basic factors relevant for viral antigen production in mammalian cells, avian cells and insect cells. In addition, bioreactor concepts, including static systems, single-use systems, stirred tanks and packed-beds are addressed. On this basis, methods towards process intensification, in particular operational strategies, the use of perfusion systems for high product yields, and steps to establish continuous processes are introduced.

  20. Measurement of lung volumes from supine portable chest radiographs.

    PubMed

    Ries, A L; Clausen, J L; Friedman, P J

    1979-12-01

    Lung volumes in supine nonambulatory patients are physiological parameters that are often difficult to measure with current techniques (plethysmograph, gas dilution), and existing radiographic methods for measuring lung volumes require standard upright chest radiographs. Accordingly, in 31 normal supine adults, we determined helium-dilution functional residual and total lung capacities and measured planimetric lung field areas (LFA) from corresponding portable anteroposterior and lateral radiographs. Low radiation dose methods, which delivered less than 10% of the dose from standard portable X-ray technique, were utilized. Correlation between lung volume and radiographic LFA was highly significant (r = 0.96, SEE = 10.6%). Multiple-step regressions using height and chest diameter correction factors reduced variance, but weight and radiographic magnification factors did not. In 17 additional subjects studied for validation, the regression equations accurately predicted radiographic lung volume. Thus, this technique can provide accurate and rapid measurement of lung volume in studies involving supine patients.
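
    The study's stepwise regressions can be sketched as an ordinary least-squares fit of lung volume on LFA plus a height term. The data below are fabricated for illustration; the published regression equations and coefficients are not reproduced here.

```python
import numpy as np

# Multiple linear regression of lung volume on planimetric lung field area
# (LFA) and height, in the spirit of the multiple-step regressions above.
# All numbers are synthetic: a known linear model plus noise.

rng = np.random.default_rng(0)
n = 31
lfa = rng.uniform(400.0, 700.0, n)            # cm^2 (illustrative range)
height = rng.uniform(150.0, 190.0, n)         # cm
volume = 8.0 * lfa + 15.0 * height - 1000.0 + rng.normal(0.0, 50.0, n)  # mL

X = np.column_stack([lfa, height, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, volume, rcond=None)
pred = X @ coef
r = np.corrcoef(pred, volume)[0, 1]
print(np.round(coef, 2), round(float(r), 3))   # fitted slopes and correlation
```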

  1. A Cognitive Approach to Teaching a Graduate-Level GEOBIA Course

    NASA Astrophysics Data System (ADS)

    Bianchetti, Raechel A.

    2016-06-01

    Remote sensing image analysis training occurs both in the classroom and the research lab. Education in the classroom for traditional pixel-based image analysis has been standardized across college curriculums. However, with the increasing interest in Geographic Object-Based Image Analysis (GEOBIA), there is a need to develop classroom instruction for this method of image analysis. While traditional remote sensing courses emphasize the expansion of skills and knowledge related to the use of computer-based analysis, GEOBIA courses should examine the cognitive factors underlying visual interpretation. This paper provides an initial analysis of the development, implementation, and outcomes of a GEOBIA course that considers not only the computational methods of GEOBIA, but also the cognitive factors of expertise that such software attempts to replicate. Finally, a reflection on the first instantiation of this course is presented, in addition to plans for the development of an open-source repository for course materials.

  2. Preliminary Assessment of Various Additives on the Specific Reactivity of Anti-rHBsAg Monoclonal Antibodies

    PubMed Central

    Yazdani, Yaghoub; Mohammadi, Saeed; Yousefi, Mehdi; Shokri, Fazel

    2015-01-01

    Background: Antibodies have a wide application in diagnosis and treatment. In order to maintain optimal stability of various functional parts of antibodies, such as antigen binding sites, several approaches have been suggested. Using additives such as polysaccharides and polyols is one of the main methods for protecting antibodies against aggregation or degradation in the formulation. The aim of this study was to evaluate the protective effect of various additives on the specific reactivity of monoclonal antibodies (mAbs) against recombinant HBsAg (rHBsAg) epitopes. Methods: To estimate the protective effect of different additives on the stability of the antibody against conformational epitopes (S3 antibody) and linear epitopes (S7 and S11 antibodies) of rHBsAg, heat shock at 37°C was performed in liquid and solid phases. Environmental factors were held constant. The specific reactivity of the antibodies was evaluated using the ELISA method. The data were analyzed in SPSS software with the Mann-Whitney nonparametric test at a confidence interval of 95%. Results: Our results showed that 0.25 M sucrose, 0.04 M trehalose and 0.5% BSA had the greatest protective effect on maintaining the reactivity of mAbs (S3) against conformational epitopes of rHBsAg. Results obtained from S7 and S11 mAbs against linear epitopes showed minor differences; the most efficient protective additives were 0.04 M trehalose and 1 M sucrose. Conclusion: Nowadays, application of appropriate additives is important for increasing the stability of antibodies. It was concluded that sucrose, trehalose and BSA have considerable effects on the specific reactivity of anti-rHBsAg mAbs during long storage. PMID:26605008

  3. Chitosan conduits combined with nerve growth factor microspheres repair facial nerve defects

    PubMed Central

    Liu, Huawei; Wen, Weisheng; Hu, Min; Bi, Wenting; Chen, Lijie; Liu, Sanxia; Chen, Peng; Tan, Xinying

    2013-01-01

    Microspheres containing nerve growth factor for sustained release were prepared by a compound method and implanted into chitosan conduits to repair 10-mm defects on the right buccal branches of the facial nerve in rabbits. In addition, chitosan conduits combined with nerve growth factor or normal saline, as well as autologous nerve, were used as controls. At 90 days post-surgery, the muscular atrophy on the right upper lip was more evident in the nerve growth factor and normal saline groups than in the nerve growth factor-microspheres and autologous nerve groups. Electrophysiological analysis revealed that the nerve conduction velocity and amplitude were significantly higher in the nerve growth factor-microspheres and autologous nerve groups than in the nerve growth factor and normal saline groups. Moreover, histological observation illustrated that the diameter, number, alignment and myelin sheath thickness of myelinated nerves were higher in the nerve growth factor-microspheres and autologous nerve groups than in the nerve growth factor and normal saline groups. These findings indicate that chitosan nerve conduits combined with microspheres for sustained release of nerve growth factor can significantly improve facial nerve defect repair in rabbits. PMID:25206635

  4. An analysis of the ArcCHECK-MR diode array's performance for ViewRay quality assurance.

    PubMed

    Ellefson, Steven T; Culberson, Wesley S; Bednarz, Bryan P; DeWerd, Larry A; Bayouth, John E

    2017-07-01

    The ArcCHECK-MR diode array utilizes a correction system with a virtual inclinometer to correct the angular response dependencies of the diodes. However, this correction system cannot be applied to measurements on the ViewRay MR-IGRT system due to the virtual inclinometer's incompatibility with the ViewRay's multiple simultaneous beams. Additionally, the ArcCHECK's current correction factors were determined without magnetic field effects taken into account. In the course of performing ViewRay IMRT quality assurance with the ArcCHECK, measurements were observed to be consistently higher than the ViewRay TPS predictions. The goals of this study were to quantify the observed discrepancies and test whether applying the current factors improves the ArcCHECK's accuracy for measurements on the ViewRay. Gamma and frequency analysis were performed on 19 ViewRay patient plans. Ion chamber measurements were performed at a subset of diode locations using a PMMA phantom with the same dimensions as the ArcCHECK. A new method for applying directionally dependent factors utilizing beam information from the ViewRay TPS was developed in order to analyze the current ArcCHECK correction factors. To test the current factors, nine ViewRay plans were altered to be delivered with only a single simultaneous beam and were measured with the ArcCHECK. The current correction factors were applied using both the new and current methods. The new method was also used to apply corrections to the original 19 ViewRay plans. It was found the ArcCHECK systematically reports doses higher than those actually delivered by the ViewRay. Application of the current correction factors by either method did not consistently improve measurement accuracy. As dose deposition and diode response have both been shown to change under the influence of a magnetic field, it can be concluded the current ArcCHECK correction factors are invalid and/or inadequate to correct measurements on the ViewRay system. 
© 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  5. Pretreatment of paper tube residuals for improved biogas production.

    PubMed

    Teghammar, Anna; Yngvesson, Johan; Lundin, Magnus; Taherzadeh, Mohammad J; Horváth, Ilona Sárvári

    2010-02-01

    Paper tube residuals, which are lignocellulosic wastes, have been studied as a substrate for biogas (methane) production. Steam explosion and nonexplosive hydrothermal pretreatment, in combination with sodium hydroxide and/or hydrogen peroxide, were used to improve the biogas production. The treatment conditions of temperature, time and addition of NaOH and H2O2 were statistically evaluated for methane production. Explosive pretreatment was more successful than the nonexplosive method and gave the best results at 220°C for 10 min with addition of both 2% NaOH and 2% H2O2. Digestion of the material pretreated under these conditions yielded 493 Nml methane/g VS, which was 107% more than for the untreated material. In addition, the initial digestion rate was improved by 132% compared to the untreated samples. The addition of NaOH was, besides the explosion effect, the most important factor for improving the biogas production.

  6. Egg and Egg-Derived Foods: Effects on Human Health and Use as Functional Foods

    PubMed Central

    Miranda, Jose M.; Anton, Xaquin; Redondo-Valbuena, Celia; Roca-Saavedra, Paula; Rodriguez, Jose A.; Lamas, Alexandre; Franco, Carlos M.; Cepeda, Alberto

    2015-01-01

    Eggs are sources of protein, fats and micronutrients that play an important role in basic nutrition. However, eggs are traditionally associated with adverse factors in human health, mainly due to their cholesterol content. Nowadays, however, it is known that the response of cholesterol in human serum levels to dietary cholesterol consumption depends on several factors, such as ethnicity, genetic makeup, hormonal factors and the nutritional status of the consumer. Additionally, in recent decades, there has been an increasing demand for functional foods, which is expected to continue to increase in the future, owing to their capacity to decrease the risks of some diseases and socio-demographic factors such as the increase in life expectancy. This work offers a brief overview of the advantages and disadvantages of egg consumption and the potential market of functional eggs, and it explores the possibilities of the development of functional eggs by technological methods. PMID:25608941

  7. When Does Speech Sound Disorder Matter for Literacy? The Role of Disordered Speech Errors, Co-Occurring Language Impairment and Family Risk of Dyslexia

    ERIC Educational Resources Information Center

    Hayiou-Thomas, Marianna E.; Carroll, Julia M.; Leavett, Ruth; Hulme, Charles; Snowling, Margaret J.

    2017-01-01

    Background: This study considers the role of early speech difficulties in literacy development, in the context of additional risk factors. Method: Children were identified with speech sound disorder (SSD) at the age of 3½ years, on the basis of performance on the Diagnostic Evaluation of Articulation and Phonology. Their literacy skills were…

  8. Laser Scanning Systems and Techniques in Rockfall Source Identification and Risk Assessment: A Critical Review

    NASA Astrophysics Data System (ADS)

    Fanos, Ali Mutar; Pradhan, Biswajeet

    2018-04-01

    Rockfall poses a risk to people, their property and transportation routes in mountainous and hilly regions. The phenomenon shows various characteristics such as wide distribution, sudden occurrence, variable magnitude, high lethality and randomness, which makes predicting rockfall both spatially and temporally a challenging task. A Digital Terrain Model (DTM) is one of the most significant elements in rockfall source identification and risk assessment, and light detection and ranging (LiDAR) is the most effective advanced technique for deriving a high-resolution, accurate DTM. This paper presents a critical overview of the rockfall phenomenon (definition, triggering factors, motion modes and modeling) and of the LiDAR technique in terms of data pre-processing, DTM generation and the factors that can be obtained from this technique for rockfall source identification and risk assessment. It also reviews the existing methods used to evaluate rockfall trajectories and their characteristics (frequency, velocity, bouncing height and kinetic energy), probability, susceptibility, hazard and risk. Detailed consideration is given to quantitative methodologies in addition to qualitative ones, and the various methods are compared with respect to their application scales (local and regional). Additionally, attention is given to the latest improvements, particularly the consideration of the intensity of the phenomena and the magnitude of events at chosen sites.
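
    The trajectory characteristics the review discusses (velocity, bouncing height, kinetic energy) are often estimated with lumped-mass models. The sketch below computes impact velocity, kinetic energy and rebound height for a free-falling block under a normal coefficient of restitution; block mass, drop height and restitution are made-up parameter values, and this is only a toy fragment of the models surveyed.

```python
# Lumped-mass rockfall bookkeeping: impact velocity and kinetic energy of a
# free-falling block, and its rebound height after a bounce with normal
# restitution coefficient rn. Parameter values are illustrative.

G = 9.81          # gravitational acceleration, m/s^2

def impact(mass_kg, drop_height_m):
    """Velocity (m/s) and kinetic energy (J) of a free-falling block at impact."""
    v = (2 * G * drop_height_m) ** 0.5
    return v, 0.5 * mass_kg * v**2

def rebound_height(drop_height_m, rn):
    """Rebound height after a bounce with normal restitution rn in (0, 1)."""
    return rn**2 * drop_height_m

v, e = impact(mass_kg=250.0, drop_height_m=12.0)
print(f"impact velocity {v:.1f} m/s, kinetic energy {e/1000:.1f} kJ")
print(f"rebound height  {rebound_height(12.0, rn=0.35):.2f} m")
```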

  9. A retrospective investigation into risk factors of sarcoptic mange in dogs.

    PubMed

    Feather, Lucy; Gough, Kevin; Flynn, Robin J; Elsheikha, Hany M

    2010-07-01

    This retrospective study of sarcoptic mange in dogs aimed to identify risk factors for this disease and determine their influence on treatment outcome. Data regarding dog demographics, clinical presentation, diagnostic method, treatment, and outcome were analyzed. No statistical association was found between sex and incidence of sarcoptic mange. However, age of dogs was found to be a risk factor which could increase the chances of dogs contracting sarcoptic mange. The results indicate that the disease predominantly affects young dogs, of all breeds and both sexes, implicating age-related immunity. The most common clinical feature reported was pruritus, with the ear margins preferentially affected. Additionally, contact with other animals played an important role in occurrence of the disease indicating the highly transmissible nature of the disease.

  10. Adiponectin Provides Additional Information to Conventional Cardiovascular Risk Factors for Assessing the Risk of Atherosclerosis in Both Genders

    PubMed Central

    Yoon, Jin-Ha; Kim, Sung-Kyung; Choi, Ho-June; Choi, Soo-In; Cha, So-Youn; Koh, Sang-Baek

    2013-01-01

    Background This study evaluated the relation between adiponectin and atherosclerosis in both genders, and investigated whether adiponectin provides useful additional information for assessing the risk of atherosclerosis. Methods We measured serum adiponectin levels and other cardiovascular risk factors in 1033 subjects (454 men, 579 women) from the Korean Genomic Rural Cohort study. Carotid intima–media-thickness (CIMT) was used as measure of atherosclerosis. Odds ratios (ORs) with 95% confidence intervals (95% CI) were calculated using multiple logistic regression, and receiver operating characteristic curves (ROC), the category-free net reclassification improvement (NRI) and integrated discrimination improvement (IDI) were calculated. Results After adjustment for conventional cardiovascular risk factors, such as age, waist circumference, smoking history, low-density and high-density lipoprotein cholesterol, triglycerides, systolic blood pressure and insulin resistance, the ORs (95%CI) of the third tertile adiponectin group were 0.42 (0.25–0.72) in men and 0.47 (0.29–0.75) in women. The area under the curve (AUC) on the ROC analysis increased significantly by 0.025 in men and 0.022 in women when adiponectin was added to the logistic model of conventional cardiovascular risk factors (AUC in men: 0.655 to 0.680, p = 0.038; AUC in women: 0.654 to 0.676, p = 0.041). The NRI was 0.32 (95%CI: 0.13–0.50, p<0.001), and the IDI was 0.03 (95%CI: 0.01–0.04, p<0.001) for men. For women, the category-free NRI was 0.18 (95%CI: 0.02–0.34, p = 0.031) and the IDI was 0.003 (95%CI: −0.002–0.008, p = 0.189). Conclusion Adiponectin and atherosclerosis were significantly related in both genders, and these relationships were independent of conventional cardiovascular risk factors. Furthermore, adiponectin provided additional information to conventional cardiovascular risk factors regarding the risk of atherosclerosis. PMID:24116054
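
    The AUC comparisons above rest on the Mann-Whitney identity: AUC equals the probability that a randomly chosen case scores higher than a randomly chosen control (plus half the tie probability). The sketch below computes AUC that way; the tiny score arrays are invented solely to demonstrate the computation.

```python
import numpy as np

# AUC via the Mann-Whitney identity: count case/control score pairs where the
# case scores higher, giving ties half credit. Scores below are fabricated.

def auc(cases, controls):
    cases, controls = np.asarray(cases), np.asarray(controls)
    gt = (cases[:, None] > controls[None, :]).sum()
    eq = (cases[:, None] == controls[None, :]).sum()
    return (gt + 0.5 * eq) / (len(cases) * len(controls))

# hypothetical risk scores without, then with, the extra marker added
base_cases, base_controls = [0.70, 0.55, 0.62], [0.60, 0.50, 0.58]
full_cases, full_controls = [0.78, 0.61, 0.70], [0.57, 0.48, 0.55]

print(round(float(auc(base_cases, base_controls)), 3))
print(round(float(auc(full_cases, full_controls)), 3))
```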

  11. Factors affecting survival outcomes of patients with non-metastatic Ewing's sarcoma family tumors in the spine: a retrospective analysis of 63 patients in a single center.

    PubMed

    Wan, Wei; Lou, Yan; Hu, Zhiqi; Wang, Ting; Li, Jinsong; Tang, Yu; Wu, Zhipeng; Xu, Leqin; Yang, Xinghai; Song, Dianwen; Xiao, Jianru

    2017-01-01

    Little information has been published in the literature regarding survival outcomes of patients with Ewing's sarcoma family tumors (ESFTs) of the spine. The purpose of this study is to explore factors that may affect the prognosis of patients with non-metastatic spinal ESFTs. A retrospective analysis of survival outcomes was performed in patients with non-metastatic spinal ESFTs. Univariate and multivariate analyses were employed to identify prognostic factors for recurrence and survival. Recurrence-free survival (RFS) and overall survival (OS) were defined as the time from the date of surgery to the date of local relapse and the date of death, respectively. Kaplan-Meier methods were applied to estimate RFS and OS. The log-rank test was used to analyze single factors for RFS and OS, and factors with p values ≤0.1 were subjected to multivariate analysis. A total of 63 patients with non-metastatic spinal ESFTs were included in this study. The mean follow-up period was 35.1 months (range 1-155). Postoperative recurrence was detected in 25 patients, and distant metastasis and death occurred in 22 and 36 patients respectively. The result of multivariate analysis suggested that age older than 25 years and neoadjuvant chemotherapy were favorable independent prognostic factors for RFS and OS. In addition, total en-bloc resection, postoperative chemotherapy, radiotherapy and non-distant metastasis were favorable independent prognostic factors for OS. Age older than 25 years and neoadjuvant chemotherapy are favorable prognostic factors for both RFS and OS. In addition, total en-bloc resection, postoperative chemotherapy, radiotherapy and non-distant metastasis are closely associated with favorable survival.
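
    The Kaplan-Meier product-limit estimator used for RFS and OS above can be sketched in a few lines: at each event time, survival is multiplied by (1 - deaths / number at risk), with censored subjects leaving the risk set without triggering a factor. The times and event flags below are invented for illustration.

```python
import numpy as np

# Kaplan-Meier product-limit estimator. events: 1 = event occurred at that
# time, 0 = censored. Returns survival probability after each event time.

def kaplan_meier(times, events):
    times, events = np.asarray(times), np.asarray(events)
    surv, s = {}, 1.0
    for t in np.unique(times[events == 1]):      # distinct event times, sorted
        at_risk = (times >= t).sum()
        deaths = ((times == t) & (events == 1)).sum()
        s *= 1.0 - deaths / at_risk
        surv[float(t)] = s
    return surv

times  = [5, 8, 12, 12, 20, 24, 30]              # follow-up, months (invented)
events = [1, 0,  1,  1,  0,  1,  0]
print(kaplan_meier(times, events))
```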

  12. Analysis of risk factors causing short-term cement leakages and long-term complications after percutaneous kyphoplasty for osteoporotic vertebral compression fractures.

    PubMed

    Gao, Chang; Zong, Min; Wang, Wen-Tao; Xu, Lei; Cao, Da; Zou, Yue-Fen

    2018-05-01

    Background Percutaneous kyphoplasty (PKP) is a common treatment modality for painful osteoporotic vertebral compression fractures (OVCFs). Pre- and postoperative identification of risk factors for cement leakage and follow-up complications would therefore be helpful but has not been systematically investigated. Purpose To evaluate pre- and postoperative risk factors for the occurrence of short-term cement leakage and long-term complications after PKP for OVCFs. Material and Methods A total of 283 vertebrae with PKP in 239 patients were investigated. Possible risk factors causing cement leakage and complications during the follow-up period were retrospectively assessed using multivariate analysis. Cement leakage in general, three fundamental leakage types, and complications during the follow-up period were directly identified through postoperative computed tomography (CT). Results Generally, the presence of cortical disruption (P = 0.001), large volume of cement (P = 0.012), and low bone mineral density (BMD) (P = 0.002) were three strong predictors for cement leakage. The presence of intravertebral cleft and Schmorl nodes (P = 0.045 and 0.025) were identified as additional risk factors for the paravertebral and intradiscal subtypes of cortical (C-type) leakage, respectively. In terms of follow-up complications, occurrence of cortical leakage was a strong risk factor both for new VCFs (P = 0.043) and for recompression (P = 0.004). Conclusion The presence of cortical disruption, large volume of cement, and low BMD of the treated level are general but strong predictors for cement leakage. The presence of intravertebral cleft and Schmorl nodes are additional risk factors for cortical leakage. During follow-up, the occurrence of C-type leakage is a strong risk factor for both new VCFs and recompression.

  13. Sweeping as a multistep enrichment process in micellar electrokinetic chromatography: the retention factor gradient effect.

    PubMed

    El-Awady, Mohamed; Pyell, Ute

    2013-07-05

    The application of a new method developed for the assessment of sweeping efficiency in MEKC under homogeneous and inhomogeneous electric field conditions is extended to the general case, in which the distribution coefficient and the electric conductivity of the analyte in the sample zone and in the separation compartment are varied. As test analytes p-hydroxybenzoates (parabens), benzamide and some aromatic amines are studied under MEKC conditions with SDS as anionic surfactant. We show that in the general case - in contrast to the classical description - the obtainable enrichment factor is not only dependent on the retention factor of the analyte in the sample zone but also dependent on the retention factor in the background electrolyte (BGE). It is shown that in the general case sweeping is inherently a multistep focusing process. We describe an additional focusing/defocusing step (the retention factor gradient effect, RFGE) quantitatively by extending the classical equation employed for the description of the sweeping process with an additional focusing/defocusing factor. The validity of this equation is demonstrated experimentally (and theoretically) under variation of the organic solvent content (in the sample and/or the BGE), the type of organic solvent (in the sample and/or the BGE), the electric conductivity (in the sample), the pH (in the sample), and the concentration of surfactant (in the BGE). It is shown that very high enrichment factors can be obtained, if the pH in the sample zone makes it possible to convert the analyte into a charged species that has a high distribution coefficient with respect to an oppositely charged micellar phase, while the pH in the BGE enables separation of the neutral species under moderate retention factor conditions. Copyright © 2013 Elsevier B.V. All rights reserved.
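
    The classical sweeping relationship narrows the injected plug by a factor of 1/(1 + k). A minimal numeric sketch of that classical factor, with the paper's additional RFGE focusing/defocusing term approximated (an assumption for illustration, not the authors' exact equation) as a ratio of retention-factor terms:

```python
def swept_zone_length(l_inj, k_sample, k_bge=None):
    """Classical sweeping: an injected plug of length l_inj is narrowed to
    l_inj / (1 + k_sample). The extra RFGE focusing/defocusing factor is
    approximated here (assumption) as (1 + k_bge) / (1 + k_sample)."""
    length = l_inj / (1.0 + k_sample)
    if k_bge is not None:
        length *= (1.0 + k_bge) / (1.0 + k_sample)
    return length

# High retention in the sample zone, moderate retention in the BGE
# (the favorable situation described in the abstract):
classical = swept_zone_length(10.0, 99.0)         # classical factor only
with_rfge = swept_zone_length(10.0, 99.0, 9.0)    # additional focusing
```

    When the retention factor in the sample zone exceeds that in the BGE, the extra term is below one, i.e. the zone focuses further, matching the qualitative behaviour described above.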

  14. Enhanced Growth and Hepatic Differentiation of Fetal Liver Epithelial Cells through Combinational and Temporal Adjustment of Soluble Factors

    PubMed Central

    Qian, Lichuan; Krause, Diane S.; Saltzman, W. Mark

    2012-01-01

    Fetal liver epithelial cells (FLEC) are valuable for liver cell therapy and tissue engineering, but methods for culture and characterization of these cells are not well developed. This work explores the influence of multiple soluble factors on FLEC, with the long-term goal of developing an optimal culture system to generate functional liver tissue. Our comparative analysis suggests hepatocyte growth factor (HGF) is required throughout the culture period. In the presence of HGF, addition of oncostatin M (OSM) at culture initiation results in concurrent growth and maturation, while constant presence of protective agents like ascorbic acid enhances cell survival. Study observations led to the development of a culture medium that provided optimal growth and hepatic differentiation conditions. FLEC expansion was observed to be ~2 fold of that under standard conditions, albumin secretion rate was 2 – 3 times greater than maximal values obtained with other media, and the highest level of glycogen accumulation among all conditions was observed with the developed medium. Our findings serve to advance culture methods for liver progenitors in cell therapy and tissue engineering applications. PMID:21922669

  15. Impacts of natural history and exhibit factors on carnivore welfare.

    PubMed

    Miller, Lance J; Ivy, Jamie A; Vicino, Greg A; Schork, Ivana G

    2018-04-06

    To improve the welfare of nonhuman animals under professional care, zoological institutions are continuously utilizing new methods to identify factors that lead to optimal welfare. Comparative methods have historically been used in the field of evolutionary biology but are increasingly being applied in the field of animal welfare. In the current study, data were obtained from direct behavioral observation and institutional records representing 80 individual animals from 34 different species of the order Carnivora. Data were examined to determine if a variety of natural history and animal management factors impacted the welfare of animals in zoological institutions. Output variables indicating welfare status included behavioral diversity, pacing, offspring production, and infant mortality. Results suggested that generalist species have higher behavioral diversity and offspring production in zoos compared with their specialist counterparts. In addition, increased minimum distance from the public decreased pacing and increased offspring production, while increased maximum distance from the public and large enclosure size decreased infant mortality. These results have implications for future exhibit design or renovation, as well as management practices and priorities for future research.

  16. Practical sliced configuration spaces for curved planar pairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sacks, E.

    1999-01-01

    In this article, the author presents a practical configuration-space computation algorithm for pairs of curved planar parts, based on the general algorithm developed by Bajaj and the author. The general algorithm advances the theoretical understanding of configuration-space computation, but is too slow and fragile for some applications. The new algorithm solves these problems by restricting the analysis to parts bounded by line segments and circular arcs, whereas the general algorithm handles rational parametric curves. The trade-off is worthwhile, because the restricted class handles most robotics and mechanical engineering applications. The algorithm reduces run time by a factor of 60 on nine representative engineering pairs, and by a factor of 9 on two human-knee pairs. It also handles common special pairs by specialized methods. A survey of 2,500 mechanisms shows that these methods cover 90% of pairs and yield an additional factor of 10 reduction in average run time. The theme of this article is that application requirements, as well as intrinsic theoretical interest, should drive configuration-space research.

  17. Quantification method analysis of the relationship between occupant injury and environmental factors in traffic accidents.

    PubMed

    Ju, Yong Han; Sohn, So Young

    2011-01-01

    Injury analysis following a vehicle crash is one of the most important research areas. However, most injury analyses have focused on one injury variable at a time, such as the AIS (Abbreviated Injury Scale) or the IIS (Injury Impairment Scale), in relation to various traffic accident factors. Such studies cannot reflect the various injury phenomena that appear simultaneously. In this paper, we apply quantification method II to the NASS (National Automotive Sampling System) CDS (Crashworthiness Data System) to find the relationship between the categorical injury phenomena, such as the injury scale, injury position, and injury type, and the various traffic accident condition factors, such as speed, collision direction, vehicle type, and seat position. Our empirical analysis indicated the importance of safety devices, such as restraint equipment and airbags. In addition, we found that narrow impact, ejection, air bag deployment, and higher speed are associated with more severe than minor injury to the thigh, ankle, and leg in terms of dislocation, abrasion, or laceration. Copyright © 2010 Elsevier Ltd. All rights reserved.

  18. Improvement of Quench Factor Analysis in Phase and Hardness Prediction of a Quenched Steel

    NASA Astrophysics Data System (ADS)

    Kianezhad, M.; Sajjadi, S. A.

    2013-05-01

    The accurate prediction of alloys' properties introduced by heat treatment has been considered by many researchers. The advantages of such predictions are reduction of test trials and material consumption as well as savings in time and energy. One of the most important methods to predict hardness in quenched steel parts is Quench Factor Analysis (QFA). Classical QFA is based on the Johnson-Mehl-Avrami-Kolmogorov (JMAK) equation. In this study, a modified form of the QFA based on the work by Rometsch et al. is compared with the classical QFA, and both are applied to the prediction of hardness of steels. For this purpose, samples of CK60 steel were utilized as raw material. They were austenitized at 1103 K (830 °C). After quenching in different environments, they were cut and their hardness was determined. In addition, the hardness values of the samples were fitted using the classical and modified equations for the quench factor analysis and the results were compared. Results showed a significant improvement in fitted values of the hardness and proved the higher efficiency of the new method.
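
    Classical QFA accumulates a quench factor Q = Σ Δt/C_T along the measured cooling curve and feeds it into a JMAK-type hardness equation. A sketch under stated assumptions: the C-curve and hardness limits below are hypothetical illustration values, not the CK60 parameters fitted in the paper, and only the classical (not the modified Rometsch-style) form is shown.

```python
import math

def c_t(temp_c):
    # Hypothetical C-curve: critical transformation time (s) versus
    # temperature, fastest (the "nose") near 600 degrees C.
    return 1.0 + 0.01 * (temp_c - 600.0) ** 2

def quench_factor(cooling_curve, dt):
    """Classical QFA: Q = sum(dt / C_T) over the sampled cooling path."""
    return sum(dt / c_t(t) for t in cooling_curve)

def predicted_hardness(q, h_min=20.0, h_max=65.0, k1=math.log(0.995)):
    """JMAK-based prediction: H = Hmin + (Hmax - Hmin) * exp(k1 * Q)."""
    return h_min + (h_max - h_min) * math.exp(k1 * q)

# A rapid quench spends little time near the C-curve nose (small Q),
# a slow quench spends much more (large Q, hence lower hardness).
fast = quench_factor(range(800, 200, -100), dt=0.5)
slow = quench_factor(range(800, 200, -10), dt=0.5)
```

    Fitting, as in the study, would adjust the kinetic parameters until predicted hardness matches the measured values for each quenching environment.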

  19. Comparison of particle swarm optimization and simulated annealing for locating additional boreholes considering combined variance minimization

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi

    2016-10-01

    One of the most important stages in complementary exploration is optimally designing the additional drilling pattern, i.e. defining the optimum number and location of additional boreholes. A great deal of research has been carried out in this regard; in most of the proposed algorithms, kriging variance minimization, as a criterion for uncertainty assessment, is defined as the objective function and the problem is solved through optimization methods. Although kriging variance implementation is known to have many advantages in objective function definition, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of considering the local variability in boundaries uncertainty assessment, the application of combined variance is investigated to define the objective function. Thus, in order to verify the applicability of the proposed objective function, it is used to locate the additional boreholes in the Esfordi phosphate mine through the implementation of metaheuristic optimization methods such as simulated annealing and particle swarm optimization. Comparison of results from the proposed objective function and conventional methods indicates that the changes imposed on the objective function have made the algorithm output sensitive to variations in grade, the domain's boundaries, and the thickness of the mineralization domain. The comparison between the results of different optimization algorithms proved that for the presented case the application of particle swarm optimization is more appropriate than simulated annealing.
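
    Simulated annealing, one of the two metaheuristics compared above, can be sketched for a borehole-placement problem. The objective below is a deliberately crude stand-in (distance to existing holes) for the paper's combined-variance criterion; no kriging is computed:

```python
import math
import random

random.seed(0)

# Existing borehole locations on a 10 x 10 field (hypothetical).
existing = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]

def objective(p):
    # Proxy for uncertainty: prefer candidate locations far from current
    # coverage, so minimize the negative distance to the nearest hole.
    return -min(math.dist(p, q) for q in existing)

def simulated_annealing(start, steps=2000, t0=5.0):
    current, best = start, start
    for i in range(steps):
        temp = t0 * (1.0 - i / steps) + 1e-9   # linear cooling schedule
        cand = (min(10.0, max(0.0, current[0] + random.uniform(-1, 1))),
                min(10.0, max(0.0, current[1] + random.uniform(-1, 1))))
        d = objective(cand) - objective(current)
        # Accept improvements always, worse moves with Boltzmann probability
        if d < 0 or random.random() < math.exp(-d / temp):
            current = cand
        if objective(current) < objective(best):
            best = current
    return best

best = simulated_annealing((5.0, 5.0))
```

    Swapping the proxy for a combined-variance evaluation (and the move for a particle-swarm velocity update) recovers the two algorithm variants compared in the paper.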

  20. A Systematic Review of Methodology: Time Series Regression Analysis for Environmental Factors and Infectious Diseases

    PubMed Central

    Imai, Chisato; Hashizume, Masahiro

    2015-01-01

    Background: Time series analysis is suitable for investigations of relatively direct and short-term effects of exposures on outcomes. In environmental epidemiology studies, this method has been one of the standard approaches to assess impacts of environmental factors on acute non-infectious diseases (e.g. cardiovascular deaths), conventionally with generalized linear or additive models (GLM and GAM). However, the same analysis practices are often observed with infectious diseases despite substantial differences from non-infectious diseases that may result in analytical challenges. Methods: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, a systematic review was conducted to elucidate important issues in assessing the associations between environmental factors and infectious diseases using time series analysis with GLM and GAM. Published studies on the associations between weather factors and malaria, cholera, dengue, and influenza were targeted. Findings: Our review raised issues regarding the estimation of susceptible population and exposure lag times, the adequacy of seasonal adjustments, the presence of strong autocorrelations, and the lack of a smaller observation time unit of outcomes (i.e. daily data). These concerns may be attributable to features specific to infectious diseases, such as transmission among individuals and complicated causal mechanisms. Conclusion: The consequence of not taking adequate measures to address these issues is distortion of the appropriate risk quantification of exposure factors. Future studies should pay careful attention to details and examine alternative models or methods that improve studies using time series regression analysis for environmental determinants of infectious diseases. PMID:25859149
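
    The GLM-based time series practice reviewed above can be illustrated with a lagged Poisson regression fitted by iteratively reweighted least squares. This is a generic sketch on simulated exposure and count data, not any of the reviewed malaria/cholera/dengue/influenza models:

```python
import numpy as np

rng = np.random.default_rng(1)

def lagged_design(exposure, max_lag):
    """Design matrix with an intercept plus the exposure at lags 0..max_lag."""
    n = len(exposure)
    cols = [np.ones(n - max_lag)]
    for lag in range(max_lag + 1):
        cols.append(exposure[max_lag - lag : n - lag])
    return np.column_stack(cols)

def poisson_glm(X, y, iters=50):
    """Poisson GLM with log link, fitted by IRLS."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu          # working response
        W = mu                                 # IRLS weights for the log link
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Simulate daily counts driven by yesterday's exposure (true lag-1 effect 0.4)
x = rng.normal(size=1000)
y = rng.poisson(np.exp(0.2 + 0.4 * x[:-1]))
beta = poisson_glm(lagged_design(x, 1), y)     # [intercept, lag 0, lag 1]
```

    The review's concerns apply exactly here: for infectious outcomes, autocorrelation, seasonality, and the susceptible pool would all need explicit terms before the lag coefficients could be trusted.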

  1. Review of cardiometabolic risk factors in a cohort of paediatric type 1 diabetes mellitus patients.

    PubMed

    Donovan, A; Finner, N; O'Connor, C; Quinn, A; O'Gorman, C S

    2017-05-01

    Type 1 diabetes mellitus (T1DM) is a recognised risk factor for cardiometabolic disease. Other risk factors include age, gender, family history, glycaemic control, dyslipidaemia, weight, and activity levels. To estimate the point prevalence of cardiometabolic risk factors in a paediatric population with T1DM. Eighty-one patients with T1DM aged between 10 and 16 years attended during the study and 56 (69.1 %) patients agreed to participate. Mixed-methods data collection included a questionnaire developed for this study, supplemented by retrospective and prospective data collected from the patient records. Of 56 subjects with T1DM, aged 12.7 ± 1.7 years (10-16 years), 26 were male and 30 were female. Mean HbA1c was 72 ± 14 mmol/mol. 53 subjects (94.6 %) had at least one additional cardiometabolic risk factor. Cardiometabolic risk factors are present in this population with T1DM. Identifying cardiometabolic risk factors in adolescent T1DM patients is the first step in prevention of future morbidity and mortality.

  2. The Role of Environment and Lifestyle in Determining the Risk of Multiple Sclerosis.

    PubMed

    Hedström, Anna Karin; Olsson, Tomas; Alfredsson, Lars

    2015-01-01

    MS is a complex disease where both genetic and environmental factors contribute to disease susceptibility. The substantially increased risk of developing MS in relatives of affected individuals gives solid evidence for a genetic base for susceptibility, whereas the modest familial risk, most strikingly demonstrated in the twin studies, is a very strong argument for an important role of lifestyle/environmental factors in determining the risk of MS, sometimes interacting with MS risk genes. Lifestyle factors and environmental exposures are harder to accurately study and quantify than genetic factors. However, it is important to identify these factors since they, as opposed to risk genes, are potentially preventable. We have reviewed the evidence for environmental factors that have been repeatedly shown to influence the risk of MS: Epstein-Barr virus (EBV) infection, ultraviolet radiation (UVR) exposure habits/vitamin D status, and smoking. We have also reviewed a number of additional environmental factors, published in the past 5 years, that have been described to influence MS risk. Independent replication, preferably by a variety of methods, may give still more firm evidence for their involvement.

  3. Design of experiments as a tool for LC-MS/MS method development for the trace analysis of the potentially genotoxic 4-dimethylaminopyridine impurity in glucocorticoids.

    PubMed

    Székely, Gy; Henriques, B; Gil, M; Ramos, A; Alvarez, C

    2012-11-01

    The present study reports on a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method development strategy supported by design of experiments (DoE) for the trace analysis of 4-dimethylaminopyridine (DMAP). The conventional approaches for development of LC-MS/MS methods usually proceed via trial and error, intentionally varying the experimental factors, which is time consuming, and interactions between experimental factors are not considered. The LC factors chosen for the DoE study include flow (F), gradient (G) and injection volume (V(inj)), while cone voltage (E(con)) and collision energy (E(col)) were chosen as MS parameters. All five factors were studied simultaneously. The method was optimized with respect to four responses: separation of peaks (Sep), peak area (A(peak)), length of the analysis (T) and the signal to noise ratio (S/N). A quadratic model, namely central composite face (CCF) featuring 29 runs, was used instead of a less powerful linear model since the increase in the number of injections was insignificant. In order to determine the robustness of the method, a new set of DoE experiments was carried out: robustness around the optimal conditions was evaluated applying a fractional factorial design of resolution III with 11 runs, wherein additional factors - such as column temperature and quadrupole resolution - were considered. The method utilizes a Phenomenex Gemini NX C-18 HPLC analytical column with electrospray ionization and a triple quadrupole mass detector in multiple reaction monitoring (MRM) mode, resulting in short analyses with a 10 min runtime. Drawbacks of derivatization, namely incomplete reaction and time-consuming sample preparation, have been avoided, and the change from SIM to MRM mode resulted in increased sensitivity and a lower LOQ.
The DoE method development strategy led to a method allowing the trace analysis of DMAP at 0.5 ng/ml absolute concentration, which corresponds to a 0.1 ppm limit of quantification in 5 mg/ml mometasone furoate glucocorticoid. The obtained method was validated in a linear range of 0.1-10 ppm and presented a %RSD of 0.02% for system precision. Regarding DMAP recovery in mometasone furoate, spiked samples produced recoveries between 83 and 113% in the range of 0.1-2 ppm. Copyright © 2012 Elsevier B.V. All rights reserved.
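
    A face-centred central composite (CCF) design like the one used above places factorial corners at coded levels ±1, axial points on the cube faces, and centre runs. A minimal generator in coded units follows; as a side note, the 29-run count for five factors is consistent with a half-fraction factorial core (16 + 10 axial + 3 centre runs), though the abstract does not spell this out.

```python
from itertools import product

def central_composite_face(n_factors, n_center=1):
    """CCF design in coded units: 2^k factorial corners, 2k axial points
    on the cube faces (alpha = 1), plus centre replicates."""
    corners = [list(p) for p in product([-1, 1], repeat=n_factors)]
    axial = []
    for i in range(n_factors):
        for level in (-1, 1):
            point = [0] * n_factors
            point[i] = level
            axial.append(point)
    centers = [[0] * n_factors for _ in range(n_center)]
    return corners + axial + centers

design = central_composite_face(3)   # 8 corners + 6 axial + 1 centre = 15 runs
```

    Each coded row is then mapped to real factor ranges (flow, gradient, injection volume, cone voltage, collision energy) before running the injections.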

  4. Comparison of methods for acid quantification: impact of resist components on acid-generating efficiency

    NASA Astrophysics Data System (ADS)

    Cameron, James F.; Fradkin, Leslie; Moore, Kathryn; Pohlers, Gerd

    2000-06-01

    Chemically amplified deep UV (CA-DUV) positive resists are the enabling materials for manufacture of devices at and below 0.18 micrometer design rules in the semiconductor industry. CA-DUV resists are typically based on a combination of an acid labile polymer and a photoacid generator (PAG). Upon UV exposure, a catalytic amount of a strong Bronsted acid is released and is subsequently used in a post-exposure bake step to deprotect the acid labile polymer. Deprotection transforms the acid labile polymer into a base soluble polymer and ultimately enables positive tone image development in dilute aqueous base. As CA-DUV resist systems continue to mature and are used in increasingly demanding situations, it is critical to develop a fundamental understanding of how robust these materials are. One of the most important factors to quantify is how much acid is photogenerated in these systems at key exposure doses. For the purpose of quantifying photoacid generation several methods have been devised. These include spectrophotometric methods, ion conductivity methods and most recently an acid-base type titration similar to the standard addition method. This paper compares many of these techniques. First, comparisons between the most commonly used acid sensitive dye, tetrabromophenol blue sodium salt (TBPB) and a less common acid sensitive dye, Rhodamine B base (RB) are made in several resist systems. Second, the novel acid-base type titration based on the standard addition method is compared to the spectrophotometric titration method. During these studies, the make up of the resist system is probed as follows: the photoacid generator and resist additives are varied to understand the impact of each of these resist components on the acid generation process.

  5. Surface texture measurement for additive manufacturing

    NASA Astrophysics Data System (ADS)

    Triantaphyllou, Andrew; Giusca, Claudiu L.; Macaulay, Gavin D.; Roerig, Felix; Hoebel, Matthias; Leach, Richard K.; Tomita, Ben; Milne, Katherine A.

    2015-06-01

    The surface texture of additively manufactured metallic surfaces made by powder bed methods is affected by a number of factors, including the powder’s particle size distribution, the effect of the heat source, the thickness of the printed layers, the angle of the surface relative to the horizontal build bed and the effect of any post processing/finishing. The aim of the research reported here is to understand the way these surfaces should be measured in order to characterise them. In published research to date, the surface texture is generally reported as an Ra value, measured across the lay. The appropriateness of this method for such surfaces is investigated here. A preliminary investigation was carried out on two additive manufacturing processes—selective laser melting (SLM) and electron beam melting (EBM)—focusing on the effect of build angle and post processing. The surfaces were measured using both tactile and optical methods and a range of profile and areal parameters were reported. Test coupons were manufactured at four angles relative to the horizontal plane of the powder bed using both SLM and EBM. The effect of lay—caused by the layered nature of the manufacturing process—was investigated, as was the required sample area for optical measurements. The surfaces were also measured before and after grit blasting.

  6. LC-method development for the quantification of neuromedin-like peptides. Emphasis on column choice and mobile phase composition.

    PubMed

    Van Wanseele, Yannick; Viaene, Johan; Van den Borre, Leslie; Dewachter, Kathleen; Vander Heyden, Yvan; Smolders, Ilse; Van Eeckhaut, Ann

    2017-04-15

    In this study, the separation of four neuromedin-like peptides is investigated on four different core-shell stationary phases. Moreover, the effect of the mobile phase composition, i.e. organic modifier (acetonitrile and methanol) and additive (trifluoroacetic acid, formic acid, acetic acid, ammonium formate and ammonium acetate) on the chromatographic performance is studied. An improvement in chromatographic performance is observed when using the ammonium salt instead of its corresponding acid as additive, except for the column containing a positively charged surface (C18+). In general, the RP-Amide column provided the highest separation power with different mobile phases. However, for the neuromedin-like peptides of interest, the C18+ column in combination with a mobile phase containing methanol as organic modifier and acetic acid as additive provided narrower and higher peaks. A three-factor, three-level design is applied to further optimize the method in terms of increased peak height and reduced solvent consumption, without loss in resolution. The optimized method was subsequently used to assess the in vitro microdialysis recovery of the peptides of interest. Recovery values between 4 and 8% were obtained using a perfusion flow rate of 2μL/min. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Quantitative fluorescence angiography for neurosurgical interventions.

    PubMed

    Weichelt, Claudia; Duscha, Philipp; Steinmeier, Ralf; Meyer, Tobias; Kuß, Julia; Cimalla, Peter; Kirsch, Matthias; Sobottka, Stephan B; Koch, Edmund; Schackert, Gabriele; Morgenstern, Ute

    2013-06-01

    Present methods for quantitative measurement of cerebral perfusion during neurosurgical operations require additional technology for measurement, data acquisition, and processing. This study used conventional fluorescence video angiography--as an established method to visualize blood flow in brain vessels--enhanced by a quantifying perfusion software tool. For these purposes, the fluorescence dye indocyanine green is given intravenously, and after activation by a near-infrared light source the fluorescence signal is recorded. Video data are analyzed by software algorithms to allow quantification of the blood flow. Additionally, perfusion is measured intraoperatively by a reference system. Furthermore, comparative reference measurements using a flow phantom were performed to verify the quantitative blood flow results of the software and to validate the software algorithm. Analysis of intraoperative video data provides characteristic biological parameters. These parameters were implemented in the special flow phantom for experimental validation of the developed software algorithms. Furthermore, various factors that influence the determination of perfusion parameters were analyzed by means of mathematical simulation. Comparing patient measurement, phantom experiment, and computer simulation under certain conditions (variable frame rate, vessel diameter, etc.), the results of the software algorithms are within the range of parameter accuracy of the reference methods. Therefore, the software algorithm for calculating cortical perfusion parameters from video data presents a helpful intraoperative tool without complex additional measurement technology.

  8. Combining heuristic and statistical techniques in landslide hazard assessments

    NASA Astrophysics Data System (ADS)

    Cepeda, Jose; Schwendtner, Barbara; Quan, Byron; Nadim, Farrokh; Diaz, Manuel; Molina, Giovanni

    2014-05-01

    As a contribution to the Global Assessment Report 2013 - GAR2013, coordinated by the United Nations International Strategy for Disaster Reduction - UNISDR, a drill-down exercise for landslide hazard assessment was carried out by entering the results of both heuristic and statistical techniques into a new but simple combination rule. The data available for this evaluation included landslide inventories, both historical and event-based. In addition to the application of a heuristic method used in the previous editions of GAR, the availability of inventories motivated the use of statistical methods. The heuristic technique is largely based on the Mora & Vahrson method, which estimates hazard as the product of susceptibility and triggering factors, where classes are weighted based on expert judgment and experience. Two statistical methods were also applied: the landslide index method, which estimates weights of the classes for the susceptibility and triggering factors based on the evidence provided by the density of landslides in each class of the factors; and the weights of evidence method, which extends the previous technique to include both positive and negative evidence of landslide occurrence in the estimation of weights for the classes. One key aspect during the hazard evaluation was the decision on the methodology to be chosen for the final assessment. Instead of opting for a single methodology, it was decided to combine the results of the three implemented techniques using a combination rule based on a normalization of the results of each method. The hazard evaluation was performed for both earthquake- and rainfall-induced landslides. The country chosen for the drill-down exercise was El Salvador. The results indicate that highest hazard levels are concentrated along the central volcanic chain and at the centre of the northern mountains.
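
    The weights-of-evidence step described above reduces, per factor class, to two log-ratios of conditional probabilities estimated from cell counts. A minimal sketch with hypothetical counts (not El Salvador data):

```python
import math

def weights_of_evidence(n_class_slide, n_class, n_slide, n_total):
    """Positive and negative weights for one factor class.
    n_class_slide: landslide cells inside the class; n_class: cells in the
    class; n_slide: landslide cells overall; n_total: cells overall."""
    p_b_d = n_class_slide / n_slide                            # P(class | slide)
    p_b_nd = (n_class - n_class_slide) / (n_total - n_slide)   # P(class | no slide)
    w_plus = math.log(p_b_d / p_b_nd)
    w_minus = math.log((1.0 - p_b_d) / (1.0 - p_b_nd))
    return w_plus, w_minus

# Hypothetical counts: a class covering 10% of cells holds 30% of landslides
w_plus, w_minus = weights_of_evidence(30, 100, 100, 1000)
```

    A positive W+ (and negative W-) flags the class as landslide-prone; the landslide index method uses only the positive-evidence side of this calculation.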

  9. Probe-Specific Procedure to Estimate Sensitivity and Detection Limits for 19F Magnetic Resonance Imaging.

    PubMed

    Taylor, Alexander J; Granwehr, Josef; Lesbats, Clémentine; Krupa, James L; Six, Joseph S; Pavlovskaya, Galina E; Thomas, Neil R; Auer, Dorothee P; Meersmann, Thomas; Faas, Henryk M

    2016-01-01

    Due to low fluorine background signal in vivo, 19F is a good marker to study the fate of exogenous molecules by magnetic resonance imaging (MRI) using equilibrium nuclear spin polarization schemes. Since 19F MRI applications require high sensitivity, it can be important to assess experimental feasibility already at the design stage by estimating the minimum detectable fluorine concentration. Here we propose a simple method for the calibration of MRI hardware, providing sensitivity estimates for a given scanner and coil configuration. An experimental "calibration factor" to account for variations in coil configuration and hardware set-up is specified. Once it has been determined in a calibration experiment, the sensitivity of an experiment or, alternatively, the minimum number of required spins or the minimum marker concentration can be estimated without the need for a pilot experiment. The definition of this calibration factor is derived based on standard equations for the sensitivity in magnetic resonance, yet the method is not restricted by the limited validity of these equations, since additional instrument-dependent factors are implicitly included during calibration. The method is demonstrated using MR spectroscopy and imaging experiments with different 19F samples, both paramagnetically and susceptibility broadened, to approximate a range of realistic environments.
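
    Under the standard MR scaling assumptions (SNR linear in concentration and proportional to the square root of the number of averages), a single calibration measurement yields a minimum detectable concentration directly. This sketch uses those generic scalings only; it is not the paper's exact calibration-factor definition:

```python
import math

def min_detectable_concentration(c_cal, snr_cal, snr_min=3.5,
                                 n_avg_cal=1, n_avg=1):
    """Scale one calibration point (c_cal, snr_cal) to the lowest
    concentration reaching snr_min, assuming SNR ~ c * sqrt(n_avg)."""
    snr_per_unit = snr_cal / (c_cal * math.sqrt(n_avg_cal))
    return snr_min / (snr_per_unit * math.sqrt(n_avg))

# Hypothetical calibration phantom: 10 mM gives SNR 70 in a single average
c_min = min_detectable_concentration(10.0, 70.0)               # 0.5 mM
c_min_avg = min_detectable_concentration(10.0, 70.0, n_avg=4)  # 0.25 mM
```

    The paper's calibration factor folds the remaining instrument-dependent terms (coil, hardware set-up) into the measured reference point, which is why no pilot experiment is needed afterwards.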

  10. Dimensionality and noise in energy selective x-ray imaging

    PubMed Central

    Alvarez, Robert E.

    2013-01-01

    Purpose: To develop and test a method to quantify the effect of dimensionality on the noise in energy selective x-ray imaging. Methods: The Cramér-Rao lower bound (CRLB), a universal lower limit of the covariance of any unbiased estimator, is used to quantify the noise. It is shown that increasing dimensionality always increases the variance, or at best leaves it unchanged. An analytic formula for the increase in variance in an energy selective x-ray system is derived. The formula is used to gain insight into the dependence of the increase in variance on the properties of the additional basis functions, the measurement noise covariance, and the source spectrum. The formula is also used with computer simulations to quantify the dependence of the additional variance on these factors. Simulated images of an object with three materials are used to demonstrate the trade-off of increased information with dimensionality and noise. The images are computed from energy selective data with a maximum likelihood estimator. Results: The increase in variance depends most importantly on the dimension and on the properties of the additional basis functions. With the attenuation coefficients of cortical bone, soft tissue, and adipose tissue as the basis functions, the increase in variance of the bone component from two to three dimensions is 1.4 × 10³. With the soft tissue component, it is 2.7 × 10⁴. If the attenuation coefficient of a high atomic number contrast agent is used as the third basis function, there is only a slight increase in the variance from two to three basis functions, 1.03 and 7.4 for the bone and soft tissue components, respectively. The changes in spectrum shape with beam hardening also have a substantial effect. They increase the variance by a factor of approximately 200 for the bone component and 220 for the soft tissue component as the soft tissue object thickness increases from 1 to 30 cm.
Decreasing the energy resolution of the detectors increases the variance of the bone component markedly with three-dimensional processing, approximately a factor of 25 as the resolution decreases from 100 to 3 bins. The increase with two-dimensional processing for adipose tissue is a factor of two, and with the contrast agent as the third material it is also a factor of two for both components in two or three dimensions. The simulated images show that a maximum likelihood estimator can be used to process energy selective x-ray data to produce images with noise close to the CRLB. Conclusions: The method presented can be used to compute the effects of the object attenuation coefficients and the x-ray system properties on the relationship of dimensionality and noise in energy selective x-ray imaging systems. PMID:24320442
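    The record's central claim, that adding a basis function can only raise the variance bound or leave it unchanged, can be illustrated numerically via the inverse Fisher information. The sketch below is illustrative only: the sensitivity matrix and noise covariance are random stand-ins, not the spectra or detector model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-bin measurement sensitivity matrix (d measurement /
# d basis coefficient) and measurement noise covariance -- illustrative only.
M2 = rng.normal(size=(5, 2))           # two basis functions (e.g. bone, soft tissue)
extra = rng.normal(size=(5, 1))        # a third basis-function column
M3 = np.hstack([M2, extra])
C = np.diag(rng.uniform(0.5, 2.0, 5))  # independent per-bin noise

def crlb(M, C):
    """Cramer-Rao lower bound for a linearized Gaussian model:
    the inverse of the Fisher information F = M^T C^-1 M."""
    F = M.T @ np.linalg.inv(C) @ M
    return np.linalg.inv(F)

v2 = np.diag(crlb(M2, C))  # minimum variances with 2 basis functions
v3 = np.diag(crlb(M3, C))  # minimum variances with 3 basis functions

# Adding a dimension never decreases the bound on the shared components
# (a Schur-complement consequence of nesting the Fisher information).
assert np.all(v3[:2] >= v2 - 1e-12)
print(v3[:2] / v2)  # variance inflation factors for the two shared components
```

    The ratio printed in the last line is the analogue of the "increase in variance" figures quoted in the abstract, here for random stand-in matrices.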

  11. Preparation of Nonionic Vesicles Using the Supercritical Carbon Dioxide Reverse Phase Evaporation Method and Analysis of Their Solution Properties.

    PubMed

    Yamaguchi, Shunsuke; Tsuchiya, Koji; Sakai, Kenichi; Abe, Masahiko; Sakai, Hideki

    2016-01-01

    We have previously reported a new preparation method for liposomes using supercritical carbon dioxide (scCO2) as a solvent, referred to as the supercritical carbon dioxide reverse phase evaporation (scRPE) method. In our previous work, addition of ethanol to scCO2 as a co-solvent was needed, because lipid molecules had to be dissolved in scCO2 to form liposomes. In this new study, niosomes (nonionic surfactant vesicles) were prepared from various nonionic surfactants using the scRPE method. Among the nonionic surfactants tested were polyoxyethylene (6) stearylether (C18EO6), polyoxyethylene (5) phytosterolether (BPS-5), polyoxyethylene (6) sorbitan stearylester (TS-106V), and polyoxyethylene (4) sorbitan stearylester (Tween 61). All these surfactants have hydrophilic-lipophilic balance values (HLBs) around 9.5 to 9.9, and they can all form niosomes using the scRPE method even in the absence of ethanol. The high solubility of these surfactants in scCO2 was shown to be an important factor in yielding niosomes without ethanol addition. The niosomes prepared with the scRPE method had higher trapping efficiencies than those prepared using the conventional Bangham method, since the scRPE method gives a large number of unilamellar vesicles while the Bangham method gives multilamellar vesicles. Polyoxyethylene-type nonionic surfactants with HLB values from 9.5 to 9.9 were shown to be optimal for the preparation of niosomes with the scRPE method.

  12. Bayes factors based on robust TDT-type tests for family trio design.

    PubMed

    Yuan, Min; Pan, Xiaoqing; Yang, Yaning

    2015-06-01

    Adaptive transmission disequilibrium test (aTDT) and MAX3 test are two robust-efficient association tests for case-parent family trio data. Both tests incorporate information of common genetic models including recessive, additive and dominant models and are efficient in power and robust to genetic model specifications. The aTDT uses information of departure from Hardy-Weinberg disequilibrium to identify the potential genetic model underlying the data and then applies the corresponding TDT-type test, and the MAX3 test is defined as the maximum of the absolute value of three TDT-type tests under the three common genetic models. In this article, we propose three robust Bayes procedures, the aTDT based Bayes factor, MAX3 based Bayes factor and Bayes model averaging (BMA), for association analysis with case-parent trio design. The asymptotic distributions of aTDT under the null and alternative hypothesis are derived in order to calculate its Bayes factor. Extensive simulations show that the Bayes factors and the p-values of the corresponding tests are generally consistent and these Bayes factors are robust to genetic model specifications, especially so when the priors on the genetic models are equal. When equal priors are used for the underlying genetic models, the Bayes factor method based on aTDT is more powerful than those based on MAX3 and Bayes model averaging. When the prior places a small (large) probability on the true model, the Bayes factor based on aTDT (BMA) is more powerful. Analysis of simulated RA data from GAW15 is presented to illustrate applications of the proposed methods.
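    For readers unfamiliar with the TDT-type statistics the record refers to, a minimal sketch of the classic allelic TDT and a MAX3-style maximum follows. The transmission counts and the stubbed model-specific z values are hypothetical, and the aTDT's Hardy-Weinberg-based model-selection step is not reproduced.

```python
import math

def tdt(b, c):
    """Classic allelic TDT for case-parent trios.
    b: transmissions of the risk allele from heterozygous parents to the
    affected child, c: non-transmissions.
    Returns (chi-square statistic, signed z)."""
    chi2 = (b - c) ** 2 / (b + c)
    z = (b - c) / math.sqrt(b + c)
    return chi2, z

def max3(z_rec, z_add, z_dom):
    """MAX3-style robust statistic: the maximum absolute value over
    TDT-type statistics computed under recessive, additive, and
    dominant models (here passed in as precomputed z values)."""
    return max(abs(z_rec), abs(z_add), abs(z_dom))

chi2, z = tdt(b=60, c=40)
print(round(chi2, 2), round(z, 2))  # prints: 4.0 2.0
```

    With 60 transmissions against 40 non-transmissions, the TDT chi-square is (60-40)²/100 = 4.0, just over the 3.84 threshold for p < 0.05 on one degree of freedom.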

  13. Prenatal and Perinatal Risk Factors in a Twin Study of Autism Spectrum Disorders

    PubMed Central

    Froehlich-Santino, Wendy; Tobon, Amalia Londono; Cleveland, Sue; Torres, Andrea; Phillips, Jennifer; Cohen, Brianne; Torigoe, Tiffany; Miller, Janet; Fedele, Angie; Collins, Jack; Smith, Karen; Lotspeich, Linda; Croen, Lisa A.; Ozonoff, Sally; Lajonchere, Clara; Grether, Judith K.; O’Hara, Ruth; Hallmayer, Joachim

    2014-01-01

    Introduction Multiple studies associate prenatal and perinatal complications with increased risks for autism spectrum disorders (ASDs). The objectives of this study were to utilize a twin study design to 1) Investigate whether shared gestational and perinatal factors increase concordance for ASDs in twins, 2) Determine whether individual neonatal factors are associated with the presence of ASDs in twins, and 3) Explore whether associated factors may influence males and females differently. Methods Data from medical records and parent response questionnaires from 194 twin pairs, in which at least one twin had an ASD, were analyzed. Results Shared factors including parental age, prenatal use of medications, uterine bleeding, and prematurity did not increase concordance risks for ASDs in twins. Among the individual factors, respiratory distress demonstrated the strongest association with increased risk for ASDs in the group as a whole (OR 2.11, 95% CI 1.27–3.51). Furthermore, respiratory distress (OR 2.29, 95% CI 1.12–4.67) and other markers of hypoxia (OR 1.99, 95% CI 1.04–3.80) were associated with increased risks for ASDs in males, while jaundice was associated with an increased risk for ASDs in females (OR 2.94, 95% CI 1.28–6.74). Conclusions Perinatal factors associated with respiratory distress and other markers of hypoxia appear to increase risk for autism in a subgroup of twins. Future studies examining potential gender differences and additional prenatal, perinatal and postnatal environmental factors are required for elucidating the etiology of ASDs and suggesting new methods for treatment and prevention. PMID:24726638
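    The odds ratios with 95% confidence intervals quoted above follow the standard 2x2-table calculation. A small sketch, with hypothetical counts, since the study's raw tables are not given in the abstract:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts, e.g. respiratory distress vs. ASD status.
or_, (lo, hi) = odds_ratio_ci(30, 70, 15, 79)
print(round(or_, 2))  # prints: 2.26
```

    An interval whose lower bound exceeds 1.0, as for respiratory distress above (95% CI 1.27-3.51), indicates a statistically significant positive association at the 5% level.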

  14. Taking the Error Term of the Factor Model into Account: The Factor Score Predictor Interval

    ERIC Educational Resources Information Center

    Beauducel, Andre

    2013-01-01

    The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…

  15. Evaluation of selected methods for determining streamflow during periods of ice effect

    USGS Publications Warehouse

    Melcher, N.B.; Walker, J.F.

    1990-01-01

    The methods are classified into two general categories, subjective and analytical, depending on whether individual judgement is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods, and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used for streamflow-gaging stations where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice adjustment factor) may be appropriate for use for stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge ratio and multiple regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.

  16. Radiographic methods of wear analysis in total hip arthroplasty.

    PubMed

    Rahman, Luthfur; Cobb, Justin; Muirhead-Allwood, Sarah

    2012-12-01

    Polyethylene wear is an important factor in failure of total hip arthroplasty (THA). With increasing numbers of THAs being performed worldwide, particularly in younger patients, the burden of failure and revision arthroplasty is increasing, as well, along with associated costs and workload. Various radiographic methods of measuring polyethylene wear have been developed to assist in deciding when to monitor patients more closely and when to consider revision surgery. Radiographic methods that have been developed to measure polyethylene wear include manual and computer-assisted plain radiography, two- and three-dimensional techniques, and radiostereometric analysis. Some of these methods are important in both clinical and research settings. CT has the potential to provide additional information on component orientation and enables assessment of periprosthetic osteolysis, which is an important consequence of polyethylene wear.

  17. Quantitative phase imaging method based on an analytical nonparaxial partially coherent phase optical transfer function.

    PubMed

    Bao, Yijun; Gaylord, Thomas K

    2016-11-01

    Multifilter phase imaging with partially coherent light (MFPI-PC) is a promising new quantitative phase imaging method. However, the existing MFPI-PC method is based on the paraxial approximation. In the present work, an analytical nonparaxial partially coherent phase optical transfer function is derived. This enables the MFPI-PC to be extended to the realistic nonparaxial case. Simulations over a wide range of test phase objects as well as experimental measurements on a microlens array verify higher levels of imaging accuracy compared to the paraxial method. Unlike the paraxial version, the nonparaxial MFPI-PC with obliquity factor correction exhibits no systematic error. In addition, due to its analytical expression, the increase in computation time compared to the paraxial version is negligible.

  18. Development, optimization, validation and application of faster gas chromatography - flame ionization detector method for the analysis of total petroleum hydrocarbons in contaminated soils.

    PubMed

    Zubair, Abdulrazaq; Pappoe, Michael; James, Lesley A; Hawboldt, Kelly

    2015-12-18

    This paper presents an important new approach to improving the timeliness of Total Petroleum Hydrocarbon (TPH) analysis in the soil by Gas Chromatography - Flame Ionization Detector (GC-FID) using the CCME Canada-Wide Standard reference method. The Canada-Wide Standard (CWS) method is used for the analysis of petroleum hydrocarbon compounds across Canada. However, inter-laboratory application of this method for the analysis of TPH in the soil has often shown considerable variability in the results. This could be due, in part, to the different gas chromatography (GC) conditions, other steps involved in the method, as well as the soil properties. In addition, there are differences in the interpretation of the GC results, which impacts the determination of the effectiveness of remediation at hydrocarbon-contaminated sites. In this work, a multivariate experimental design approach was used to develop and validate the analytical method for a faster quantitative analysis of TPH in (contaminated) soil. A fractional factorial design (fFD) was used to screen six factors to identify the most significant factors impacting the analysis. These factors included: injection volume (μL), injection temperature (°C), oven program (°C/min), detector temperature (°C), carrier gas flow rate (mL/min) and solvent ratio (v/v hexane/dichloromethane). The most important factors (carrier gas flow rate and oven program) were then optimized using a central composite response surface design. Robustness testing and validation of the model compare favourably with the experimental results, with a percentage difference of 2.78% for the analysis time. This research successfully reduced the method's standard analytical time from 20 to 8 min with all the carbon fractions eluting. The method was successfully applied for fast TPH analysis of Bunker C oil contaminated soil. A reduced analytical time would offer many benefits, including improved laboratory reporting times and overall improved clean-up efficiency. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
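    The six-factor screening step described above can be sketched as a two-level fractional factorial design. The generators below (E = ABC, F = BCD) are a common 2^(6-2) choice, not necessarily the design used in the paper:

```python
from itertools import product

# Six GC-FID factors to screen, each coded at low (-1) and high (+1) levels.
factors = ["inj_volume", "inj_temp", "oven_program",
           "det_temp", "carrier_flow", "solvent_ratio"]

# 2^(6-2) fractional factorial: full factorial in the first four factors,
# with the last two defined by generators E = ABC and F = BCD.
runs = []
for a, b, c, d in product((-1, 1), repeat=4):
    e = a * b * c  # generator E = ABC
    f = b * c * d  # generator F = BCD
    runs.append(dict(zip(factors, (a, b, c, d, e, f))))

print(len(runs))  # prints: 16  (instead of 2**6 = 64 full-factorial runs)
```

    Each factor column is balanced (equal numbers of low and high runs), which is what lets main effects be estimated from only a quarter of the full factorial.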

  19. Temperament and the structure of personality disorder symptoms.

    PubMed

    Mulder, R T; Joyce, P R

    1997-01-01

    This paper attempts to construct a simplified system for the classification of personality disorders, and relates this system to normally distributed human personality characteristics. One hundred and forty-eight subjects with a variety of psychiatric diagnoses were evaluated using the SCID-II structured clinical interview for personality disorders. A four-factor solution of personality disorder symptoms was obtained and we labelled these factors 'the four As': antisocial, asocial, asthenic and anankastic. The factors related to the four temperament dimensions of the Tridimensional Personality Questionnaire (TPQ), but less closely to Eysenck Personality Questionnaire (EPQ) dimensions. The four factors were similar to those identified in a number of studies using a variety of assessment methods and this lends some credibility to our findings. It suggests that a more parsimonious set of trait descriptors could be used to provide simpler, less overlapping categories that retain links with current clinical practice. In addition, these factors can be seen as extremes of normally distributed behaviours obtained using the TPQ questionnaire.

  20. Time-fixed rendezvous by impulse factoring with an intermediate timing constraint. [for transfer orbits

    NASA Technical Reports Server (NTRS)

    Green, R. N.; Kibler, J. F.; Young, G. R.

    1974-01-01

    A method is presented for factoring a two-impulse orbital transfer into a three- or four-impulse transfer which solves the rendezvous problem and satisfies an intermediate timing constraint. Both the time of rendezvous and the intermediate time of alinement are formulated as any element of a finite sequence of times. These times are integer multiples of a constant plus an additive constant. The rendezvous condition is an equality constraint, whereas the intermediate alinement is an inequality constraint. The two timing constraints are satisfied by factoring the impulses into collinear parts that vectorially sum to the original impulse and by varying the resultant period differences and the number of revolutions in each orbit. Five different types of solutions arise by considering factoring either or both of the two impulses into two or three parts with a limit of four total impulses. The impulse-factoring technique may be applied to any two-impulse transfer which has distinct orbital periods.

  1. [Effect of home-processing in the preparation of pinto beans (Phaseolus vulgaris L.) on the tannin content and nutritive value of proteins].

    PubMed

    Goycoolea, F; González de Mejía, E; Barrón, J M; Valencia, M E

    1990-06-01

    A 3² factorial design was carried out in order to investigate the effects of the different home-cooking treatments applied in the preparation of pinto beans (Phaseolus vulgaris L.) on the nutritive value of their protein. The factors studied were previous soaking, type of cooking and addition of cooking broth. Biological evaluation of the protein was performed, and the protein efficiency ratio (PER) and apparent digestibility of the protein (DAP) values were obtained. The tannin content was measured in hulls, cotyledons and in the cooking broths of each experimental treatment. The most significant effect on the PER value was the type of cooking (P less than 0.0001), followed by the addition of cooking broth (P less than 0.05), as well as a significant interaction between cooking method and addition of broth (P less than 0.025). Soaking did not have significant effects per se or through its interactions in relation to PER. The highest values for PER and DAP were obtained with the boiling treatment without broth. The detrimental effect of the cooking broth can be explained by its tannin content (108.5-272.25 mg Eq. catechin/100 g).

  2. A Heat Vulnerability Index and Adaptation Solutions for Pittsburgh, Pennsylvania.

    PubMed

    Bradford, Kathryn; Abrahams, Leslie; Hegglin, Miriam; Klima, Kelly

    2015-10-06

    With increasing evidence of global warming, many cities have focused attention on response plans to address their populations' vulnerabilities. Despite expected increased frequency and intensity of heat waves, the health impacts of such events in urban areas can be minimized with careful policy and economic investments. We focus on Pittsburgh, Pennsylvania and ask two questions. First, what are the top factors contributing to heat vulnerability and how do these characteristics manifest geospatially throughout Pittsburgh? Second, assuming the City wishes to deploy additional cooling centers, what placement will optimally address the vulnerability of the at-risk populations? We use national census data, ArcGIS geospatial modeling, and statistical analysis to determine a range of heat vulnerability indices and optimal cooling center placement. We find that while different studies use different data and statistical calculations, all methods tested locate additional cooling centers at the confluence of the three rivers (Downtown), the northeast side of Pittsburgh (Shadyside/Highland Park), and the southeast side of Pittsburgh (Squirrel Hill). This suggests that for Pittsburgh, a researcher could apply the same factor analysis procedure to compare data sets for different locations and times; factor analyses for heat vulnerability are more robust than previously thought.
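    A common way to build a heat vulnerability index of the kind described is to z-score each census indicator and sum across indicators. A minimal sketch with hypothetical tract-level data; the study's actual variables, weights, and factor-analysis step are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical census-tract indicators (rows = tracts); the column meanings
# are illustrative stand-ins for typical heat-vulnerability inputs.
n = 8
data = np.column_stack([
    rng.uniform(0, 0.4, n),  # share of population over 65
    rng.uniform(0, 0.3, n),  # share below the poverty line
    rng.uniform(0, 0.5, n),  # share living alone
    rng.uniform(0, 1.0, n),  # impervious surface fraction
])

# Simple additive index: z-score each indicator, then sum across columns.
z = (data - data.mean(axis=0)) / data.std(axis=0)
hvi = z.sum(axis=1)

# Highest-index tract: a candidate location for an additional cooling center.
most_vulnerable = int(np.argmax(hvi))
```

    Factor-analysis-based indices replace the equal-weight sum with loadings on the retained factors; the z-score sum shown here is the simplest member of that family.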

  3. The role of taste in alcohol preference, consumption and risk behavior.

    PubMed

    Thibodeau, Margaret; Pickering, Gary J

    2017-10-05

    Alcohol consumption is widespread, and high levels of use are associated with increased risk of developing an alcohol use disorder. Thus, understanding the factors that influence alcohol intake is important for disease prevention and management. Additionally, elucidating the factors that associate with alcohol preference and intake in non-clinical populations allows for product development and optimisation opportunities for the alcoholic beverage industry. The literature on how taste (orosensation) influences alcohol behavior is critically appraised in this review. Ethanol, the compound common to all alcoholic beverages, is generally aversive as it primarily elicits bitterness and irritation when ingested. Individuals who experience orosensations (both taste and chemesthetic) more intensely tend to report lower liking and consumption of alcoholic beverages. Additionally, a preference for sweetness is likely associated with a paternal history of alcohol use disorders. However, conflicting findings in the literature are common and may be partially attributable to differences in the methods used to access orosensory responsiveness and taste phenotypes. We conclude that while taste is a key driver in alcohol preference, intake and use disorder, no single taste-related factor can adequately predict alcohol behaviour. Areas for further research and suggestions for improved methodological and analytical approaches are highlighted.

  4. Preparing Platelet-Rich Plasma with Whole Blood Harvested Intraoperatively During Spinal Fusion.

    PubMed

    Shen, Bin; Zhang, Zheng; Zhou, Ning-Feng; Huang, Yu-Feng; Bao, Yu-Jie; Wu, De-Sheng; Zhang, Ya-Dong

    2017-07-22

    BACKGROUND Platelet-rich plasma (PRP) has gained growing popularity in use in spinal fusion procedures in the last decade. Substantial intraoperative blood loss frequently accompanies spinal fusion, and it is unknown whether blood harvested intraoperatively qualifies for PRP preparation. MATERIAL AND METHODS Whole blood was harvested intraoperatively and venous blood was collected by venipuncture. Then, we investigated the platelet concentrations in whole blood and PRP, the concentration of growth factors in PRP, and the effects of PRP on the proliferation and viability of human bone marrow-derived mesenchymal stem cells (HBMSCs). RESULTS Our results revealed that intraoperatively harvested whole blood and whole blood collected by venipuncture were similar in platelet concentration. In addition, PRP formulations prepared from both kinds of whole blood were similar in concentration of platelet and growth factors. Additional analysis showed that the similar concentrations of growth factors resulted from the similar platelet concentrations of whole blood and PRP between the two groups. Moreover, these two kinds of PRP formulations had similar effects on promoting cell proliferation and enhancing cell viability. CONCLUSIONS Therefore, intraoperatively harvested whole blood may be a potential option for preparing PRP for spinal fusion.

  5. A Heat Vulnerability Index and Adaptation Solutions for Pittsburgh, Pennsylvania

    NASA Astrophysics Data System (ADS)

    Klima, K.; Abrahams, L.; Bradford, K.; Hegglin, M.

    2015-12-01

    With increasing evidence of global warming, many cities have focused attention on response plans to address their populations' vulnerabilities. Despite expected increased frequency and intensity of heat waves, the health impacts of such events in urban areas can be minimized with careful policy and economic investments. We focus on Pittsburgh, Pennsylvania and ask two questions. First, what are the top factors contributing to heat vulnerability and how do these characteristics manifest geospatially throughout Pittsburgh? Second, assuming the City wishes to deploy additional cooling centers, what placement will optimally address the vulnerability of the at-risk populations? We use national census data, ArcGIS geospatial modeling, and statistical analysis to determine a range of heat vulnerability indices and optimal cooling center placement. We find that while different studies use different data and statistical calculations, all methods tested locate additional cooling centers at the confluence of the three rivers (Downtown), the northeast side of Pittsburgh (Shadyside/Highland Park), and the southeast side of Pittsburgh (Squirrel Hill). This suggests that for Pittsburgh, a researcher could apply the same factor analysis procedure to compare datasets for different locations and times; factor analyses for heat vulnerability are more robust than previously thought.

  6. Health-related quality of life and related factors of military police officers.

    PubMed

    da Silva, Franciele Cascaes; Hernandez, Salma Stéphany Soleman; Arancibia, Beatriz Angélica Valdivia; Castro, Thiago Luis da Silva; Filho, Paulo José Barbosa Gutierres; da Silva, Rudney

    2014-04-27

    The present study aimed to determine the effect of demographic characteristics, occupation, anthropometric indices, and leisure-time physical activity levels on coronary risk and health-related quality of life among military police officers from the State of Santa Catarina, Brazil. The sample included 165 military police officers who fulfilled the study’s inclusion criteria. The International Physical Activity Questionnaire and the Short Form Health Survey were used, in addition to a spreadsheet of socio-demographic, occupational and anthropometric data. Statistical analyses were performed using descriptive analysis followed by Spearman Correlation and multiple linear regression analysis using the backward method. The waist-to-height ratio was identified as a risk factor for low health-related quality of life. In addition, the conicity index, fat percentage, years of service in the military police, minutes of work per day and leisure-time physical activity levels were identified as risk factors for coronary disease among police officers. These findings suggest that the Military Police Department should adopt an institutional policy that allows police officers to practice regular physical activity in order to maintain and improve their physical fitness, health, job performance, and quality of life.
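    The backward method mentioned above repeatedly refits the regression and drops the weakest predictor until all remaining predictors are significant. A rough sketch, using a fixed |t| threshold in place of exact p-values; the data and variable names are illustrative, not the study's:

```python
import numpy as np

def backward_eliminate(X, y, names, t_drop=2.0):
    """Backward-method linear regression sketch: refit OLS, drop the
    predictor with the smallest |t|, repeat until every |t| >= t_drop.
    t_drop ~ 2.0 roughly corresponds to p < 0.05 for moderate n."""
    keep = list(range(X.shape[1]))
    while keep:
        Xk = np.column_stack([np.ones(len(y)), X[:, keep]])
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        resid = y - Xk @ beta
        dof = len(y) - Xk.shape[1]
        sigma2 = resid @ resid / dof
        cov = sigma2 * np.linalg.inv(Xk.T @ Xk)
        t = beta[1:] / np.sqrt(np.diag(cov)[1:])  # skip the intercept
        worst = int(np.argmin(np.abs(t)))
        if abs(t[worst]) >= t_drop:
            break  # all remaining predictors are significant
        keep.pop(worst)
    return [names[i] for i in keep]

# Synthetic example: y depends strongly on x0 only.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=100)
selected = backward_eliminate(X, y, ["x0", "x1", "x2"])
```

    With this synthetic data the true predictor x0 survives elimination, while the pure-noise predictors are (with high probability) dropped.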

  7. Objective and Subjective Factors as Predictors of Post-Traumatic Stress Symptoms in Parents of Children with Cancer – A Longitudinal Study

    PubMed Central

    Lindahl Norberg, Annika; Pöder, Ulrika; Ljungman, Gustaf; von Essen, Louise

    2012-01-01

    Background Parents of children with cancer report post-traumatic stress symptoms (PTSS) years after the child's successful treatment is completed. The aim of the present study was to analyze a number of objective and subjective childhood cancer-related factors as predictors of parental PTSS. Methods Data were collected from 224 parents during and after their child's cancer treatment. Data sources include self-report questionnaires and medical records. Results In a multivariate hierarchical model death of the child, parent's perception of child psychological distress and total symptom burden predicted higher levels of PTSS. In addition, immigrants and unemployed parents reported higher levels of PTSS. The following factors did not predict PTSS: parent gender, family income, previous trauma, child's prognosis, treatment intensity, non-fatal relapse, and parent's satisfaction with the child's care. Conclusions Although medical complications can be temporarily stressful, a parent's perception of the child's distress is a more powerful predictor of parental PTSS. The vulnerability of unemployed parents and immigrants should be acknowledged. In addition, findings highlight that the death of a child is as traumatic as could be expected. PMID:22567141

  8. Accurate determination of sulfur in gasoline and related fuel samples using isotope dilution ICP-MS with direct sample injection and microwave-assisted digestion.

    PubMed

    Heilmann, Jens; Boulyga, Sergei F; Heumann, Klaus G

    2004-09-01

    Inductively coupled plasma isotope-dilution mass spectrometry (ICP-IDMS) with direct injection of isotope-diluted samples into the plasma, using a direct injection high-efficiency nebulizer (DIHEN), was applied for accurate sulfur determinations in sulfur-free premium gasoline, gas oil, diesel fuel, and heating oil. For direct injection a micro-emulsion consisting of the corresponding organic sample and an aqueous 34S-enriched spike solution, with additions of tetrahydronaphthalene and Triton X-100, was prepared. The ICP-MS parameters were optimized with respect to high sulfur ion intensities, low mass-bias values, and high precision of 32S/34S ratio measurements. For validation of the DIHEN-ICP-IDMS method two certified gas oil reference materials (BCR 107 and BCR 672) were analyzed. For comparison a wet-chemical ICP-IDMS method was applied with microwave-assisted digestion using decomposition of samples in a closed quartz vessel inserted into a normal microwave system. The results from both ICP-IDMS methods agree well with the certified values of the reference materials and also with each other for analyses of other samples. However, the standard deviation of DIHEN-ICP-IDMS was about a factor of two higher (5-6% RSD at concentration levels above 100 microg g(-1)) compared with those of wet-chemical ICP-IDMS, mainly due to inhomogeneities of the micro-emulsion, which cause additional plasma instabilities. Detection limits of 4 and 18 microg g(-1) were obtained for ICP-IDMS in connection with microwave-assisted digestion and DIHEN-ICP-IDMS, respectively, with a sulfur background of the used Milli-Q water as the main limiting factor for both methods.
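    The isotope-dilution calculation underlying ICP-IDMS solves a two-isotope mixing equation for the sample amount. A sketch for the 32S/34S pair; the spike abundances below are hypothetical and the natural abundances are rounded, so the numbers are illustrative only:

```python
def idms_moles(n_spike, a32_x, a34_x, a32_s, a34_s, r_measured):
    """Two-isotope isotope-dilution sketch (32S/34S).
    Mixing a sample (isotope abundances a32_x, a34_x) with an enriched
    34S spike (a32_s, a34_s) gives a measured ratio
        R_m = (n_x*a32_x + n_s*a32_s) / (n_x*a34_x + n_s*a34_s);
    solving that equation for the sample amount n_x yields:"""
    return n_spike * (r_measured * a34_s - a32_s) / (a32_x - r_measured * a34_x)

# Forward consistency check: compute R_m from a known blend, then recover n_x.
a32_x, a34_x = 0.950, 0.042  # natural sulfur, rounded (32S, 34S)
a32_s, a34_s = 0.05, 0.95    # hypothetical enriched-34S spike
n_x_true, n_s = 2.0, 1.0     # moles of analyte and spike in the blend
r_m = (n_x_true * a32_x + n_s * a32_s) / (n_x_true * a34_x + n_s * a34_s)
n_x_est = idms_moles(n_s, a32_x, a34_x, a32_s, a34_s, r_m)  # recovers 2.0
```

    Because only the ratio R_m is measured, signal drift cancels out, which is what gives IDMS its accuracy advantage over external calibration.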

  9. Characterizing mammographic images by using generic texture features

    PubMed Central

    2012-01-01

    Introduction Although mammographic density is an established risk factor for breast cancer, its use is limited in clinical practice because of a lack of automated and standardized measurement methods. The aims of this study were to evaluate a variety of automated texture features in mammograms as risk factors for breast cancer and to compare them with the percentage mammographic density (PMD) by using a case-control study design. Methods A case-control study including 864 cases and 418 controls was analyzed automatically. Four hundred seventy features were explored as possible risk factors for breast cancer. These included statistical features, moment-based features, spectral-energy features, and form-based features. An elaborate variable selection process using logistic regression analyses was performed to identify those features that were associated with case-control status. In addition, PMD was assessed and included in the regression model. Results Of the 470 image-analysis features explored, 46 remained in the final logistic regression model. An area under the curve of 0.79, with an odds ratio per standard deviation change of 2.88 (95% CI, 2.28 to 3.65), was obtained with validation data. Adding the PMD did not improve the final model. Conclusions Using texture features to predict the risk of breast cancer appears feasible. PMD did not show any additional value in this study. With regard to the features assessed, most of the analysis tools appeared to reflect mammographic density, although some features did not correlate with PMD. It remains to be investigated in larger case-control studies whether these features can contribute to increased prediction accuracy. PMID:22490545

  10. An application of a hybrid MCDM method for the evaluation of entrepreneurial intensity among the SMEs: a case study.

    PubMed

    Rostamzadeh, Reza; Ismail, Kamariah; Bodaghi Khajeh Noubar, Hossein

    2014-01-01

    This study presents one of the first attempts to focus on critical success factors influencing the entrepreneurial intensity of Malaysian small and medium sized enterprises (SMEs) as they attempt to expand internationally. The aim of this paper is to evaluate and prioritize the entrepreneurial intensity among the SMEs using multicriteria decision-making (MCDM) techniques. In this research, FAHP is used for finding the weights of criteria and subcriteria. Then for the final ranking of the companies, the VIKOR (in Serbian: VlseKriterijumska Optimizacija I Kompromisno Resenje) method was used. Also, as an additional tool, the TOPSIS technique is used to see the differences between the two methods applied over the same data. Five main criteria and 14 subcriteria were developed and implemented in the real-world cases. As the results showed, the two ranking methods produced different rankings. Furthermore, the final findings of the research based on VIKOR and TOPSIS indicated that the firms A3 and A4 received the first rank, respectively. In addition, the firm A4 was known as the most entrepreneurial company. This research has been done in the manufacturing sector, but it could be also extended to the service sector for measurement.

  11. An Application of a Hybrid MCDM Method for the Evaluation of Entrepreneurial Intensity among the SMEs: A Case Study

    PubMed Central

    Ismail, Kamariah; Bodaghi Khajeh Noubar, Hossein

    2014-01-01

    This study presents one of the first attempts to focus on critical success factors influencing the entrepreneurial intensity of Malaysian small and medium sized enterprises (SMEs) as they attempt to expand internationally. The aim of this paper is to evaluate and prioritize the entrepreneurial intensity among the SMEs using multicriteria decision-making (MCDM) techniques. In this research, FAHP is used for finding the weights of criteria and subcriteria. Then for the final ranking of the companies, the VIKOR (in Serbian: VlseKriterijumska Optimizacija I Kompromisno Resenje) method was used. Also, as an additional tool, the TOPSIS technique is used to see the differences between the two methods applied over the same data. Five main criteria and 14 subcriteria were developed and implemented in the real-world cases. As the results showed, the two ranking methods produced different rankings. Furthermore, the final findings of the research based on VIKOR and TOPSIS indicated that the firms A3 and A4 received the first rank, respectively. In addition, the firm A4 was known as the most entrepreneurial company. This research has been done in the manufacturing sector, but it could be also extended to the service sector for measurement. PMID:25197707
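    The TOPSIS ranking step used in the records above can be sketched in a few lines. The firm scores, weights, and criteria below are hypothetical stand-ins for the papers' FAHP-weighted 5 criteria and 14 subcriteria:

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Minimal TOPSIS sketch: rows = alternatives (firms), cols = criteria.
    benefit[j] is True when larger is better for criterion j."""
    norm = scores / np.linalg.norm(scores, axis=0)  # vector-normalize columns
    v = norm * weights                              # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))  # best per criterion
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))   # worst per criterion
    d_pos = np.linalg.norm(v - ideal, axis=1)  # distance to ideal solution
    d_neg = np.linalg.norm(v - anti, axis=1)   # distance to anti-ideal solution
    return d_neg / (d_pos + d_neg)             # closeness: higher = better rank

# Hypothetical 4 firms x 3 criteria; weights as if produced by FAHP.
scores = np.array([[7., 9., 9.], [8., 7., 8.], [9., 6., 8.], [6., 7., 8.]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, True, True])
closeness = topsis(scores, weights, benefit)
ranking = np.argsort(-closeness)  # best alternative first
```

    VIKOR differs in aggregating with a compromise between group utility and maximum individual regret rather than distances to both reference points, which is why the two methods can rank the same data differently, as the records report.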

  12. Urea, the most abundant component in urine, cross-reacts with a commercial 8-OH-dG ELISA kit and contributes to overestimation of urinary 8-OH-dG.

    PubMed

    Song, Ming-Fen; Li, Yun-Shan; Ootsuyama, Yuko; Kasai, Hiroshi; Kawai, Kazuaki; Ohta, Masanori; Eguchi, Yasumasa; Yamato, Hiroshi; Matsumoto, Yuki; Yoshida, Rie; Ogawa, Yasutaka

    2009-07-01

    Urinary 8-OH-dG is commonly analyzed as a marker of oxidative stress. ELISA and HPLC methods are generally used for its analysis, although discrepancies between the data obtained by these two methods have often been discussed. To clarify this problem, we fractionated human urine by reverse-phase HPLC and assayed each fraction by ELISA. In addition to the 8-OH-dG fraction, a positive reaction was observed in the first eluted fraction. The components of this fraction were examined by ELISA, and urea was found to be the responsible component. Urea is present at high concentrations in the urine of mice, rats, and humans, and its level is influenced by many factors. Therefore, certain improvements, such as a correction based on urea content or urease treatment, are required for accurate analysis of urinary 8-OH-dG by ELISA. In addition, performing the ELISA at 4 °C considerably reduced the recognition of urea and improved the 8-OH-dG analysis.
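    A correction based on urea content, as the authors suggest, could take the form of subtracting the apparent 8-OH-dG signal attributable to urea cross-reactivity. The linear form and the coefficient below are made-up illustrations, not a published calibration:

```python
# Hypothetical linear cross-reactivity correction: apparent signal from urea
# is subtracted from the measured ELISA value. k is an assumed coefficient,
# not a value from the paper.
k = 0.005  # apparent ng/mL 8-OH-dG per mg/mL urea (assumed)

def corrected_8ohdg(apparent_ng_ml, urea_mg_ml, k=k):
    # Clamp at zero: a concentration cannot be negative.
    return max(apparent_ng_ml - k * urea_mg_ml, 0.0)

# Urea-rich sample: 12.0 apparent minus 0.005 * 400 = 2.0 of urea signal.
print(round(corrected_8ohdg(12.0, 400.0), 2))
```

    In practice the coefficient would have to be calibrated per kit, or the urea removed enzymatically (urease treatment) before the assay, as the abstract notes.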

  13. Reviewing the anaerobic digestion and co-digestion process of food waste from the perspectives on biogas production performance and environmental impacts.

    PubMed

    Chiu, Sam L H; Lo, Irene M C

    2016-12-01

    In this paper, factors that affect biogas production in the anaerobic digestion (AD) and anaerobic co-digestion (coAD) of food waste are reviewed with the aim of improving biogas production performance. These factors include the composition of substrates in food waste coAD, as well as pre-treatment methods and anaerobic reactor system designs in both food waste AD and coAD. Depending on the characteristics of the substrates used, biogas production performance varies through different effects on nutrient balance, dilution of inhibitory substances, and supplementation of trace metal elements. Various types of pre-treatment (mechanical, chemical, thermal, and biological) are discussed as ways to improve the rate-limiting hydrolytic step of the digestion processes. The operating parameters of a reactor system are also reviewed in light of substrate characteristics. Since environmental awareness and concern about waste management systems have been increasing, this paper also addresses the possible environmental impacts of AD and coAD in food waste treatment and recommends feasible methods to reduce them. Uncertainties in life cycle assessment (LCA) studies are also discussed.

  14. A novel method for structure-based prediction of ion channel conductance properties.

    PubMed Central

    Smart, O S; Breed, J; Smith, G R; Sansom, M S

    1997-01-01

    A rapid and easy-to-use method of predicting the conductance of an ion channel from its three-dimensional structure is presented. The method combines the pore dimensions of the channel, as measured by the HOLE program, with an Ohmic model of conductance; an empirically based correction factor is then applied. The method yielded good results for six experimental channel structures (none of which were included in the training set), with predictions accurate to within an average factor of 1.62 of the true values. The predictive r2 was 0.90, indicative of good predictive ability. The procedure is used to validate model structures of alamethicin and phospholamban. Two genuine predictions are given for the conductance of channels with known structures but no reported conductances. A modification of the procedure that calculates the expected effect of the addition of nonelectrolyte polymers on conductance is set out. Results for a cholera toxin B-subunit crystal structure agree well with the measured values. The difficulty in interpreting such studies is discussed, with the conclusion that measurements on channels of known structure are required. PMID:9138559
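    The Ohmic part of the approach treats the pore as a stack of thin conducting slabs whose resistances add in series. A minimal numeric sketch, with an illustrative pore-radius profile and an assumed placeholder correction factor (not the paper's published calibration):

```python
import numpy as np

# Minimal Ohmic estimate of channel conductance from a pore radius profile,
# in the spirit of the HOLE-based approach; all numbers are illustrative.
kappa = 1.5  # bulk conductivity of the electrolyte, S/m (assumed)
z = np.linspace(0.0, 4e-9, 200)                        # pore axis, 4 nm long
r = (0.3 + 0.2 * np.abs(z / 4e-9 - 0.5) * 2) * 1e-9    # radius, 0.3-0.5 nm

# Series resistance of thin slabs: R = sum(dz / (kappa * pi * r^2))
dz = np.diff(z)
r_mid = 0.5 * (r[:-1] + r[1:])
R_ohm = np.sum(dz / (kappa * np.pi * r_mid**2))
G_ohm = 1.0 / R_ohm  # Siemens

# The paper applies an empirically calibrated correction to the raw Ohmic
# value; the factor used here is a placeholder, not the published one.
correction = 1.0 / 5.0
G_pred = G_ohm * correction
print(f"G_ohm = {G_ohm * 1e12:.1f} pS, corrected = {G_pred * 1e12:.1f} pS")
```

    The raw Ohmic model typically overestimates conductance because it ignores access resistance and reduced ion mobility in narrow pores, which is what motivates the empirical correction step.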

  15. Adolescent Dispositions for Antisocial Behavior in Context: The Roles of Neighborhood Dangerousness and Parental Knowledge

    PubMed Central

    Trentacosta, Christopher J.; Hyde, Luke W.; Shaw, Daniel S.; Cheong, JeeWon

    2010-01-01

    This study examined an ecological perspective on the development of antisocial behavior during adolescence, examining the direct, additive, and interactive effects of child dispositions, parenting, and community factors in relation to youth problem behavior. To address this goal, early adolescent dispositional qualities were examined as predictors of boys' antisocial behavior within the context of parents' knowledge of adolescent activities and neighborhood dangerousness. Antisocial behavior was examined using a multi-method latent construct that included self-reported delinquency, symptoms of conduct disorder, and court petitions in a sample of 289 boys from lower socioeconomic status backgrounds who were followed longitudinally from early childhood through adolescence. Results demonstrated direct and additive effects of child prosociality, daring, and negative emotionality that were qualified by interactions between daring and neighborhood dangerousness, and between prosociality and parental knowledge. The findings have implications for preventive intervention approaches that address the interplay of dispositional and contextual factors to prevent delinquent behavior in adolescence. PMID:19685953
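    Direct, additive, and interactive effects of the kind reported here are commonly tested with moderated regression, where a disposition-by-context product term is added to the main effects. A sketch on synthetic data (the variable names echo the study, but the data and coefficients are simulated, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 289  # sample size matching the study; the data here are synthetic

daring = rng.normal(size=n)
neighborhood_danger = rng.normal(size=n)
# Simulated outcome with a main effect of daring that is amplified in more
# dangerous neighborhoods (the kind of interaction the study reports).
antisocial = (0.4 * daring + 0.2 * neighborhood_danger
              + 0.3 * daring * neighborhood_danger
              + rng.normal(scale=0.5, size=n))

# Moderated regression: y ~ b0 + b1*x + b2*m + b3*(x*m)
X = np.column_stack([np.ones(n), daring, neighborhood_danger,
                     daring * neighborhood_danger])
beta, *_ = np.linalg.lstsq(X, antisocial, rcond=None)
print("interaction coefficient ~", round(beta[3], 2))
```

    A significant b3 means the effect of the disposition depends on the context, which is the statistical form of the person-environment interplay the abstract describes.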

  16. Empirical Calibration of the P-Factor for Cepheid Radii Determined Using the IR Baade-Wesselink Method

    NASA Astrophysics Data System (ADS)

    Joner, Michael D.; Laney, C. D.

    2012-05-01

    We have used 41 galactic Cepheids for which parallax or cluster/association distances are available, and for which pulsation parallaxes can be calculated, to calibrate the p-factor to be used in K-band Baade-Wesselink radius calculations. Our sample includes the 10 Cepheids from Benedict et al. (2007), and three additional Cepheids with Hipparcos parallaxes derived from van Leeuwen et al. (2007). Turner and Burke (2002) list cluster distances for 33 Cepheids for which radii have been or (in a few cases) can be calculated. Revised cluster distances from Turner (2010), Turner and Majaess (2008, 2012), and Majaess and Turner (2011, 2012a, 2012b) have been used where possible. Radii have been calculated using the methods described in Laney and Stobie (1995) and converted to K-band absolute magnitudes using the methods described in van Leeuwen et al. (2007), Feast et al. (2008), and Laney and Joner (2009). The resulting pulsation parallaxes have been used to estimate the p-factor for each Cepheid. These new results stand in contradiction to those derived by Storm et al. (2011), but are in good agreement with theoretical predictions by Nardetto et al. (2009) and with interferometric estimates of the p-factor, as summarized in Groenewegen (2007). We acknowledge the Brigham Young University College of Physical and Mathematical Sciences for continued support of research done using the facilities and personnel at the West Mountain Observatory. This support is connected with NSF/AST grant #0618209.

  17. [Attitudes of freshman medical students towards education in communication skills].

    PubMed

    Tóth, Ildikó; Bán, Ildikó; Füzesi, Zsuzsanna; Kesztyüs, Márk; Nagy, Lajos

    2011-09-18

    In their institute, the authors teach medical communication skills in three languages (Hungarian, English and German) to medical students in the first year of their studies. In order to improve their teaching methods, the authors wanted to explore students' attitudes towards learning communication skills. For this purpose they applied the Communication Skills Attitudes Scale created by Rees et al., an internationally accepted and readily adaptable instrument. In this survey the authors also aimed to validate the Hungarian and German versions of the Communication Skills Attitudes Scale and to analyze possible differences in attitudes between the three medical teaching programs. Questionnaires were filled in anonymously at the beginning of the practical sessions. Principal component analysis with varimax rotation was performed to evaluate the attitudes, using SPSS version 10.5. The authors created a model consisting of 7 factors: 1: respect and interpersonal skills; 2: learning; 3: importance of communication within the medical profession; 4: excuse; 5: counter; 6: exam; 7: overconfidence. It was found that students had mainly positive attitudes. Except for the learning factor, all factors showed significant differences between the three medical teaching programs. Although students had mainly positive attitudes toward learning communication skills, there were negative attitudes that can be partly modified by improving the teaching methods. The results may also create a proper base for further research to help improve the authors' methods of teaching communication skills.

  18. Suicidal behaviour across the African continent: a review of the literature

    PubMed Central

    2014-01-01

    Background Suicide is a major cause of premature mortality worldwide, but data on its epidemiology in Africa, the world’s second most populous continent, are limited. Methods We systematically reviewed published literature on suicidal behaviour in African countries. We searched PubMed, Web of Knowledge, PsycINFO, African Index Medicus, Eastern Mediterranean Index Medicus and African Journals OnLine and carried out citation searches of key articles. We crudely estimated the incidence of suicide and suicide attempts in Africa based on country-specific data and compared these with published estimates. We also describe common features of suicide and suicide attempts across the studies, including information related to age, sex, methods used and risk factors. Results Regional or national suicide incidence data were available for less than one third (16/53) of African countries containing approximately 60% of Africa’s population; suicide attempt data were available for <20% of countries (7/53). Crude estimates suggest there are over 34,000 (inter-quartile range 13,141 to 63,757) suicides per year in Africa, with an overall incidence rate of 3.2 per 100,000 population. The recent Global Burden of Disease (GBD) estimate of 49,558 deaths is somewhat higher, but falls within the inter-quartile range of our estimate. Suicide rates in men are typically at least three times higher than in women. The most frequently used methods of suicide are hanging and pesticide poisoning. Reported risk factors are similar for suicide and suicide attempts and include interpersonal difficulties, mental and physical health problems, socioeconomic problems and drug and alcohol use/abuse. Qualitative studies are needed to identify additional culturally relevant risk factors and to understand how risk factors may be connected to suicidal behaviour in different socio-cultural contexts. 
Conclusions Our estimate is somewhat lower than GBD, but still clearly indicates suicidal behaviour is an important public health problem in Africa. More regional studies, in both urban and rural areas, are needed to more accurately estimate the burden of suicidal behaviour across the continent. Qualitative studies are required in addition to quantitative studies. PMID:24927746
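    The crude incidence estimate above is a simple rate calculation. A sketch, with the reference population back-calculated from the review's published figures for illustration (the population value is an assumption, not a number reported in the abstract):

```python
# Crude incidence rate = deaths / population * 100,000.
# The review's ~34,000 suicides/year and 3.2 per 100,000 imply a reference
# population of roughly 1.06 billion, consistent with Africa's population
# at the time; that value is back-calculated here, not quoted.
deaths_per_year = 34_000
population = 1_062_500_000  # assumed

rate_per_100k = deaths_per_year / population * 100_000
print(round(rate_per_100k, 1))
```

    The wide inter-quartile range (13,141 to 63,757 deaths) reflects how sensitive such crude estimates are to which country-level rates are extrapolated continent-wide.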

  19. Optimizing some 3-stage W-methods for the time integration of PDEs

    NASA Astrophysics Data System (ADS)

    Gonzalez-Pinto, S.; Hernandez-Abreu, D.; Perez-Rodriguez, S.

    2017-07-01

    The optimization of some W-methods for the time integration of time-dependent PDEs in several spatial variables is considered. In [2, Theorem 1], several three-parametric families of three-stage W-methods for the integration of IVPs in ODEs were studied. In addition, several specific methods were optimized for PDEs when the Approximate Matrix Factorization Splitting (AMF) is used to define the approximate Jacobian matrix (W ≈ fy(yn)), and some convergence and stability properties were presented [2]. The derived methods were optimized on the basis that the underlying explicit Runge-Kutta method has the largest monotonicity interval among three-stage, order-three Runge-Kutta methods [1]. Here, we propose an optimization of the methods by imposing an additional order condition [7] to keep order three for parabolic PDE problems [6], but at the price of substantially reducing the length of the nonlinear monotonicity interval of the underlying explicit Runge-Kutta method.

  20. Analyzing Recent Coronary Heart Disease Mortality Trends in Tunisia between 1997 and 2009

    PubMed Central

    Saidi, Olfa; Ben Mansour, Nadia; O’Flaherty, Martin; Capewell, Simon; Critchley, Julia A.; Romdhane, Habiba Ben

    2013-01-01

    Background In Tunisia, cardiovascular diseases are the leading cause of death (30%); 70% of these are coronary heart disease (CHD) deaths, and population studies have demonstrated that major risk factor levels are increasing. Objective To explain recent CHD trends in Tunisia between 1997 and 2009. Methods Data Sources: Published and unpublished data were identified by extensive searches, complemented with specifically designed surveys. Analysis Data were integrated and analyzed using the previously validated IMPACT CHD policy model. Data items included: (i) number of CHD patients in specific groups (including acute coronary syndromes, congestive heart failure and chronic angina); (ii) uptake of specific medical and surgical treatments; and (iii) population trends in major cardiovascular risk factors (smoking, total cholesterol, systolic blood pressure (SBP), body mass index (BMI), diabetes and physical inactivity). Results CHD mortality rates increased by 11.8% for men and 23.8% for women, resulting in 680 additional CHD deaths in 2009 compared with the 1997 baseline, after adjusting for population change. Almost all (98%) of this rise was explained by risk factor increases, though men and women differed. A large rise in total cholesterol in men (0.73 mmol/L) generated 440 additional deaths, while a fall in women (−0.43 mmol/L) apparently avoided about 95 deaths. A 4 mmHg rise in SBP in men generated 270 additional deaths; in women, a 2 mmHg fall avoided 65 deaths. BMI and diabetes increased substantially, resulting in 105 and 75 additional deaths, respectively. Increased treatment uptake prevented about 450 deaths in 2009. The most important contributions came from secondary prevention following acute myocardial infarction (AMI) (95 fewer deaths), initial AMI treatments (90), antihypertensive medications (80) and unstable angina (75).
Conclusions Recent trends in CHD mortality mainly reflected increases in major modifiable risk factors, notably SBP and cholesterol, BMI and diabetes. Current prevention strategies are mainly focused on treatments but should become more comprehensive. PMID:23658808

  1. Comparison of different methods to include recycling in LCAs of aluminium cans and disposable polystyrene cups.

    PubMed

    van der Harst, Eugenie; Potting, José; Kroeze, Carolien

    2016-02-01

    Many methods have been reported and used to include recycling in life cycle assessments (LCAs). This paper evaluates six widely used methods: three substitution methods (i.e. substitution based on equal quality, a correction factor, and alternative material), allocation based on the number of recycling loops, the recycled-content method, and the equal-share method. These six methods were first compared, with an assumed hypothetical 100% recycling rate, for an aluminium can and a disposable polystyrene (PS) cup. The substitution and recycled-content method were next applied with actual rates for recycling, incineration and landfilling for both product systems in selected countries. The six methods differ in their approaches to credit recycling. The three substitution methods stimulate the recyclability of the product and assign credits for the obtained recycled material. The choice to either apply a correction factor, or to account for alternative substituted material has a considerable influence on the LCA results, and is debatable. Nevertheless, we prefer incorporating quality reduction of the recycled material by either a correction factor or an alternative substituted material over simply ignoring quality loss. The allocation-on-number-of-recycling-loops method focusses on the life expectancy of material itself, rather than on a specific separate product. The recycled-content method stimulates the use of recycled material, i.e. credits the use of recycled material in products and ignores the recyclability of the products. The equal-share method is a compromise between the substitution methods and the recycled-content method. The results for the aluminium can follow the underlying philosophies of the methods. 
The results for the PS cup are additionally influenced by the correction factor or the credits for the alternative material accounting for the drop in PS quality, by the waste treatment mix (recycling, incineration and landfilling rates), and by the source of the avoided electricity in the case of waste incineration. The results for the PS cup, which are less dominated by virgin material production than those for the aluminium can, furthermore depend on the environmental impact categories considered. This stresses the importance of considering impact categories beyond the most commonly used global warming impact. The multitude of available methods complicates the choice of an appropriate method for the LCA practitioner. New guidelines keep appearing, and industries also suggest their own preferred methods. Unambiguous ISO guidelines, particularly related to sensitivity analysis, would be a great step forward in making LCAs more robust. Copyright © 2015 Elsevier Ltd. All rights reserved.
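    The contrast between the substitution-with-correction-factor approach and the recycled-content approach can be shown numerically. The impact values, rates, and correction factor below are hypothetical illustrations, not figures from the study:

```python
# Hedged sketch of two ways to credit recycling in an LCA, with hypothetical
# impact values (kg CO2-eq per functional unit) and rates.
virgin_production = 10.0   # impact of producing virgin material
recycling_process = 2.0    # impact of the recycling operation itself
recycling_rate = 0.7       # share of end-of-life material actually recycled
quality_correction = 0.8   # correction factor for quality loss on recycling
recycled_content = 0.5     # share of recycled material in the product

# Substitution (end-of-life) method with a quality correction factor:
# recycled output is credited as avoided virgin production, downgraded by
# the correction factor to reflect the drop in material quality.
substitution = (virgin_production + recycling_rate *
                (recycling_process - quality_correction * virgin_production))

# Recycled-content method: the credit is given for recycled input instead,
# and the product's recyclability is ignored.
recycled_content_method = ((1 - recycled_content) * virgin_production
                           + recycled_content * recycling_process)

print(round(substitution, 2), round(recycled_content_method, 2))
```

    Setting the correction factor to 1 recovers substitution on equal quality; setting it from the price or impact of an alternative substituted material gives the third substitution variant the paper discusses.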

  2. Estimation of Spatiotemporal Sensitivity Using Band-limited Signals with No Additional Acquisitions for k-t Parallel Imaging.

    PubMed

    Takeshima, Hidenori; Saitoh, Kanako; Nitta, Shuhei; Shiodera, Taichiro; Takeguchi, Tomoyuki; Bannae, Shuhei; Kuhara, Shigehide

    2018-03-13

    Dynamic MR techniques, such as cardiac cine imaging, benefit from shorter acquisition times. The goal of the present study was to develop a method for dynamic MRI that achieves short acquisition times while maintaining a cost-effective reconstruction. k-t sensitivity encoding (SENSE) was identified as the base method to be enhanced to meet these two requirements. The proposed method reduces acquisition time by estimating the spatiotemporal (x-f) sensitivity without requiring the acquisition of the alias-free signals typical of the k-t SENSE technique. The cost-effective reconstruction, in turn, is achieved by a computationally efficient estimation of the x-f sensitivity from the band-limited signals of the aliased inputs. Such band-limited signals are suitable for sensitivity estimation because the strongly aliased signals have been removed. For the same reduction factor of 4, the net reduction factor of 4 for the proposed method was significantly higher than the factor of 2.29 achieved by k-t SENSE. The processing time was reduced from 4.1 s for k-t SENSE to 1.7 s for the proposed method. The image quality obtained using the proposed method proved to be superior (mean squared error [MSE] ± standard deviation [SD] = 6.85 ± 2.73) to that of k-t SENSE (MSE ± SD = 12.73 ± 3.60) for the vertical long-axis (VLA) view, as well as for other views. In summary, k-t SENSE was identified as a suitable base method to be improved toward both short acquisition times and cost-effective reconstruction. To enhance these characteristics of the base method, a novel implementation is proposed that estimates the x-f sensitivity without the need for an explicit scan of the reference signals. Experimental results showed that the acquisition time, computational time and image quality of the proposed method were improved compared to the standard k-t SENSE method.

  3. Evaluation of the CPU time for solving the radiative transfer equation with high-order resolution schemes applying the normalized weighting-factor method

    NASA Astrophysics Data System (ADS)

    Xamán, J.; Zavala-Guillén, I.; Hernández-López, I.; Uriarte-Flores, J.; Hernández-Pérez, I.; Macías-Melo, E. V.; Aguilar-Castro, K. M.

    2018-03-01

    In this paper, we evaluated the convergence rate (CPU time) of a new mathematical formulation for the numerical solution of the radiative transfer equation (RTE) with several high-order (HO) and high-resolution (HR) schemes. In computational fluid dynamics, this procedure is known as the Normalized Weighting-Factor (NWF) method, and it is adopted here to incorporate high-order resolution schemes into the discretized RTE. The NWF method is compared, in terms of the computer time needed to obtain a converged solution, with the widely used deferred-correction (DC) technique for calculations of a two-dimensional cavity with emitting-absorbing-scattering gray media using the discrete ordinates method. Six parameters, viz. the grid size, the order of quadrature, the absorption coefficient, the emissivity of the boundary surface, the under-relaxation factor, and the scattering albedo, are considered in evaluating ten schemes. The results showed that with the DC method the scheme with the lowest CPU time is, in general, SOU. In contrast to the DC procedure, the CPU times for the DIAMOND and QUICK schemes using the NWF method are between 3.8 and 23.1% and between 12.6 and 56.1% faster, respectively. However, the other schemes are more time-consuming when the NWF method is used instead of the DC method. Additionally, a second test case was presented, and the results showed that, depending on the problem under consideration, the NWF procedure may be computationally faster or slower than the DC method; for example, the CPU times for the QUICK and SMART schemes are 61.8 and 203.7% slower, respectively, when the NWF formulation is used for the second test case. Finally, future research is required to explore the computational cost of the NWF method in more complex problems.

  4. Modeling the lake eutrophication stochastic ecosystem and the research of its stability.

    PubMed

    Wang, Bo; Qi, Qianqian

    2018-06-01

    In reality, a lake system is disturbed by stochastic factors, both external and internal. By adding additive noise and multiplicative noise to the right-hand side of the model equation, an additive stochastic model and a multiplicative stochastic model are established, respectively, in order to reduce model errors induced by the absence of some physical processes. For both kinds of stochastic ecosystem, the authors studied the bifurcation characteristics with the FPK equation and the Lyapunov exponent method, based on the Stratonovich-Khasminiskii stochastic averaging principle. Results show that, for the additive stochastic model, when the control parameter (i.e., the nutrient loading rate) falls into the interval [0.388644, 0.66003825], the ecosystem is bistable, and the additive noise intensity cannot make the bifurcation points drift. In the region of bistability, external stochastic disturbance, one of the main triggers of lake eutrophication, may make the ecosystem unstable and induce a transition. When the control parameter falls into the intervals (0, 0.388644) and (0.66003825, 1.0), there is only one stable equilibrium state, and the additive noise intensity cannot change it. The multiplicative stochastic model exhibits more complex bifurcation behavior, which is altered by the multiplicative noise: the multiplicative noise reduces the extent of the bistable region and, for sufficiently large noise, the bistable region vanishes altogether. Moreover, both the nutrient loading rate and the multiplicative noise can cause a regime shift in the ecosystem. For both kinds of stochastic ecosystem, the authors also discussed the evolution of the ecological variable in detail using a four-stage Runge-Kutta method of strong order γ = 1.5.
The numerical method was found to be capable of effectively illustrating regime-shift theory and agreed with realistic analysis. These conclusions also confirm the two paths by which the system can move from one stable state to another, as proposed by Beisner et al. [3], and may help explain the occurrence mechanism of lake eutrophication from the viewpoint of stochastic modeling and mathematical analysis. Copyright © 2018 Elsevier Inc. All rights reserved.
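    The bistable dynamics and the additive/multiplicative noise distinction can be sketched with a simple Euler-Maruyama integration of a Carpenter-type lake-eutrophication model. The paper uses a strong order-1.5 Runge-Kutta scheme; this simpler scheme, the model form, and all parameter values are assumptions for illustration:

```python
import numpy as np

# Euler-Maruyama simulation of a bistable lake model
#   dx = (a - b*x + x^2/(1 + x^2)) dt + noise dW,
# where a is the nutrient loading rate and the sigmoid term models
# phosphorus recycling from sediments. Parameters are illustrative.
def simulate(a, sigma, multiplicative=False, b=0.5, x0=0.5,
             dt=1e-3, steps=50_000, seed=1):
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(steps):
        drift = a - b * x + x**2 / (1.0 + x**2)
        # Additive noise: constant intensity. Multiplicative: scales with x.
        noise = sigma * (x if multiplicative else 1.0)
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        x = max(x, 0.0)  # phosphorus concentration stays non-negative
    return x

# Deterministic runs: low loading settles in the clear (low) state,
# high loading in the eutrophic (high) state.
low = simulate(a=0.02, sigma=0.0)
print(round(low, 3))
```

    With sigma > 0, trajectories started in the low basin occasionally jump to the high state, which is the noise-induced transition mechanism the abstract describes.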

  5. Making sense of the "clean label" trends: A review of consumer food choice behavior and discussion of industry implications.

    PubMed

    Asioli, Daniele; Aschemann-Witzel, Jessica; Caputo, Vincenzina; Vecchio, Riccardo; Annunziata, Azzurra; Næs, Tormod; Varela, Paula

    2017-09-01

    Consumers in industrialized countries are nowadays much more interested in information about the production methods and components of the food products that they eat than they were 50 years ago. Some production methods are perceived as less "natural" (i.e. conventional agriculture), while some food components are seen as "unhealthy" and "unfamiliar" (i.e. artificial additives). This phenomenon, often referred to as the "clean label" trend, has driven the food industry to communicate when a certain ingredient or additive is not present, or when the food has been produced using a more "natural" production method (i.e. organic agriculture). However, so far there is no common and objective definition of clean label. This review paper aims to fill the gap via three main objectives: a) develop and suggest a definition that integrates various understandings of clean label into one single definition; b) identify the factors that drive consumers' choices through a review of recent studies on consumer perception of various food categories understood as clean label, with a focus on organic, natural and 'free from' artificial additives/ingredients food products; and c) discuss the implications of consumer demand for clean label food products for food manufacturers as well as policy makers. We suggest defining clean label both in a broad sense, where consumers evaluate the cleanliness of a product by assumption and through inference from the front-of-pack label, and in a strict sense, where consumers evaluate the cleanliness of a product by inspection and through inference from the back-of-pack label. Results show that while 'health' is a major consumer motive, a broad diversity of drivers influence the clean label trend, with particular relevance of intrinsic or extrinsic product characteristics and socio-cultural factors. However, 'free from' artificial additives/ingredients food products tend to differ from organic and natural products.
Food manufacturers should take the diversity of these drivers into account when developing and communicating about new products. For policy makers, it is important to work towards a more homogeneous understanding and application of the term clean label, to identify a uniform definition or regulation for 'free from' artificial additives/ingredients food products, and to work towards decreasing consumer misconceptions. Finally, multiple future research avenues are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Understanding the Relation between Anorexia Nervosa and Bulimia Nervosa in a Swedish National Twin Sample

    PubMed Central

    Bulik, Cynthia M; Thornton, Laura; Root, Tammy L.; Pisetsky, Emily M.; Lichtenstein, Paul; Pedersen, Nancy L.

    2010-01-01

    Background We present a bivariate twin analysis of anorexia nervosa (AN) and bulimia nervosa (BN) to determine the extent to which shared genetic and environmental factors contribute to liability to these disorders. Method Focusing on females from the Swedish Twin study of Adults: Genes and Environment (STAGE) (N=7000), we calculated heritability estimates for narrow and broad AN and BN and estimated their genetic correlation. Results In the full model, the heritability estimate for narrow AN was a2 = .57 (95% CI: .00, .81) and for narrow BN a2 = .62 (95% CI: .08, .70), with the remaining variance accounted for by unique environmental factors. Shared environmental estimates were c2 = .00 (95% CI: .00, .67) for AN and c2 = .00 (95% CI: .00, .40) for BN. Moderate additive genetic (.46) and unique environmental (.42) correlations between AN and BN were observed. Heritability estimates for broad AN were lower (a2 = .29; 95% CI: .04, .43) than for narrow AN, but estimates for broad BN were similar to those for narrow BN. The genetic correlation for broad AN and BN was .79 and the unique environmental correlation was .44. Conclusions We highlight the contribution of additive genetic factors to both narrow and broad AN and BN and demonstrate a moderate overlap of both the genetic and the unique environmental factors that influence the two conditions. The common concurrent and sequential comorbidity of AN and BN can in part be accounted for by shared genetic and environmental influences on liability, although independent factors are also operative. PMID:19828139
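    The ACE variance decomposition underlying such estimates (additive genetic a2, shared environment c2, unique environment e2) can be illustrated with Falconer's back-of-envelope formulas from twin correlations. The study fits a full bivariate structural model; this simpler estimator and the correlation values below are illustrative assumptions, chosen only to echo the narrow-AN figures:

```python
# Falconer's ACE decomposition from MZ and DZ twin correlations.
# MZ twins share ~100% of additive genetic variance, DZ twins ~50%,
# so the excess MZ similarity identifies the genetic component.
def falconer(r_mz, r_dz):
    a2 = 2 * (r_mz - r_dz)   # additive genetic variance (heritability)
    c2 = 2 * r_dz - r_mz     # shared environment
    e2 = 1 - r_mz            # unique environment (plus measurement error)
    return a2, c2, e2

# Assumed correlations, not data from the paper.
a2, c2, e2 = falconer(r_mz=0.57, r_dz=0.285)
print(a2, c2, e2)
```

    With r_dz exactly half of r_mz, the shared-environment estimate is zero and all twin similarity is attributed to additive genetics, mirroring the c2 = .00 point estimates the abstract reports.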

  7. Outcome predictors of intra-articular glucocorticoid treatment for knee synovitis in patients with rheumatoid arthritis – a prospective cohort study

    PubMed Central

    2014-01-01

    Introduction Intra-articular glucocorticoid treatment (IAGC) is widely used for symptom relief in arthritis. However, knowledge of factors predicting treatment outcome is limited. The aim of the present study was to identify response predictors of IAGC for knee synovitis in patients with rheumatoid arthritis (RA). Methods In this study 121 RA patients with synovitis of the knee were treated with intra-articular injections of 20 mg triamcinolone hexacetonide. They were followed for six months and the rate of clinical relapse was studied. Non-responders (relapse within 6 months) and responders were compared regarding patient characteristics and knee joint damage as determined by the Larsen-Dale index. In addition, matched samples of serum and synovial fluid were analysed for factors reflecting the inflammatory process (C-reactive protein, interleukin 6, tumour necrosis factor alpha, vascular endothelial growth factor), joint tissue turnover (cartilage oligomeric matrix protein, metalloproteinase 3), and autoimmunity (antinuclear antibodies, antibodies against citrullinated peptides, rheumatoid factor). Results During the observation period, 48 knees relapsed (40%). Non-responders had more radiographic joint damage than responders (P = 0.002) and the pre-treatment vascular endothelial growth factor (VEGF) level in synovial fluid was significantly higher in non-responders (P = 0.002). Conclusions Joint destruction is associated with poor outcome of IAGC for knee synovitis in RA. In addition, higher levels of VEGF in synovial fluid are found in non-responders, suggesting that locally produced VEGF is a biomarker for recurrence of synovial hyperplasia and the risk for arthritis relapse. PMID:24950951

  8. Influencing factors of the 6-min walk distance in adult Arab populations: a literature review.

    PubMed

    Joobeur, Samah; Rouatbi, Sonia; Latiri, Imed; Sfaxi, Raoudha; Ben Saad, Helmi

    2016-05-01

    Background Walk tests, especially the 6-min walk test (6MWT), are commonly used to evaluate submaximal exercise capacity. The primary outcome of the 6MWT is the 6-min walk distance (6MWD). Numerous demographic, physiological and anthropometric factors can influence the 6MWD in healthy adults. Objective The purpose of the present review is to highlight and discuss the factors influencing the 6MWD in healthy adult Arab populations. Methods This review includes a literature search covering 1970 to September 2015, using the PubMed and Science Direct databases and the World Wide Web via the Google search engine. Reference lists of retrieved English/French articles were searched for any additional references. Results Six studies, conducted in Tunisia (n=2), Saudi Arabia (n=3) and Algeria (n=1), were included. All studies were conducted according to the 2002 American Thoracic Society guidelines for the 6MWT. In addition to anthropometric data (sex, age, height, weight, body mass index, lean mass), the following data were recognized as 6MWD influencing factors: schooling and socioeconomic levels, urban origin, parity, physical activity score or status, metabolic equivalent task for moderate activity, spirometric data, end-walk heart rate, resting diastolic blood pressure, dyspnoea Borg value and niqab-wearing. Conclusion The 6MWD influencing factors in adult Arab populations are numerous and include some specific predictors such as parity, physical activity level and niqab-wearing.

  9. Multiclass pesticide determination in olives and their processing factors in olive oil: comparison of different olive oil extraction systems.

    PubMed

    Amvrazi, Elpiniki G; Albanis, Triantafyllos A

    2008-07-23

    The processing factors (pesticide concentration found in olive oil / pesticide concentration found in olives) of azinphos methyl, chlorpyrifos, lambda-cyhalothrin, deltamethrin, diazinon, dimethoate, endosulfan, and fenthion were determined for the olive oil production process in various laboratory-scale olive oil extractions based on three- or two-phase centrifugation systems, in comparison with samples collected during olive oil extraction in conventional olive mills located in different olive oil production areas of Greece. Pesticide analyses were performed using a multiresidue method developed in our laboratory for the determination of different insecticides and herbicides in olive oil by solid-phase extraction coupled to gas chromatography (electron capture and nitrogen-phosphorus detection), optimized and validated for olive fruit sample preparation. Processing factors were found to vary among the different pesticides studied. Water addition in the oil extraction procedure (as in a three-phase centrifugation system) was found to decrease the processing factors of dimethoate, alpha-endosulfan, diazinon, and chlorpyrifos, whereas those of fenthion, azinphos methyl, beta-endosulfan, lambda-cyhalothrin, and deltamethrin residues were not affected. The water content of the olives processed was found to proportionally affect pesticide processing factors. Fenthion sulfoxide and endosulfan sulfate were the major metabolites of fenthion and endosulfan, respectively, detected in laboratory-produced olive oils, but only the concentration of fenthion sulfoxide was found to increase with increasing water addition in the olive oil extraction process.
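    The processing factor defined above is a simple concentration ratio. The sketch below shows how it is computed and read; the residue values are purely illustrative assumptions, not data from the study:

    ```python
    # Hypothetical residue concentrations in mg/kg; the numbers are
    # illustrative only and are not taken from the study.
    residues = {
        # pesticide: (concentration in olives, concentration in olive oil)
        "dimethoate":   (0.80, 0.20),
        "chlorpyrifos": (0.50, 1.10),
        "fenthion":     (0.30, 0.90),
    }

    def processing_factor(c_olives, c_oil):
        """Processing factor = concentration in oil / concentration in olives.
        PF < 1: processing reduces the residue level in the oil;
        PF > 1: the (typically fat-soluble) pesticide concentrates in the oil."""
        return c_oil / c_olives

    for name, (c_olives, c_oil) in residues.items():
        print(f"{name}: PF = {processing_factor(c_olives, c_oil):.2f}")
    ```

    A PF below 1, as the study reports for dimethoate under water addition, indicates the extraction step removes residue from the oil phase.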

  10. The Aftercare and School Observation System (ASOS): Reliability and Component Structure.

    PubMed

    Ingoldsby, Erin M; Shelleby, Elizabeth C; Lane, Tonya; Shaw, Daniel S; Dishion, Thomas J; Wilson, Melvin N

    2013-10-01

    This study examines the psychometric properties and component structure of a newly developed observational system, the Aftercare and School Observation System (ASOS). Participants included 468 children drawn from a larger longitudinal intervention study. The system was used to assess participant children in school lunchrooms and recess and in various afterschool environments. Exploratory factor analyses examined whether a core set of component constructs would emerge, assessing qualities of children's relationships, caregiver involvement and monitoring, and experiences in school and aftercare contexts that have been linked to children's behavior problems. Construct validity was assessed by examining associations between ASOS constructs and questionnaire measures of children's behavior problems and relationship qualities in school and aftercare settings. Across both settings, two factors showed very similar empirical structures and item loadings, reflecting the constructs of a negative/aggressive context and caregiver positive involvement. One additional unique factor from the school setting reflected the extent to which the caregiver methods used resulted in less negative behavior, and two additional unique factors from the aftercare setting reflected positivity in the child's interactions and general environment, and negativity in the child's interactions and setting. Modest correlations between ASOS factors and aftercare provider and teacher ratings of behavior problems, adult-child relationships, and a rating of school climate support our interpretation that the ASOS scores capture meaningful features of children's experiences in these settings. This study represents a first step in establishing that the ASOS reliably and validly captures risk and protective relationships and experiences in extra-familial settings.

  11. Relationship between pre-natal factors, the perinatal environment, motor development in the first year of life and the timing of first deciduous tooth emergence.

    PubMed

    Żądzińska, Elżbieta; Sitek, Aneta; Rosset, Iwona

    2016-01-01

    The emergence of deciduous teeth, despite being genetically determined, shows significant correlation with the pre-natal environment, maternal factors, method of infant feeding and also family socioeconomic status. However, reported results are often contradictory and rarely concern healthy, full-term children. The objective of this study was to evaluate the influence of pre-natal and maternal factors as well as the method of infant feeding on the timing of first deciduous tooth emergence in healthy, full-term infants and to examine the relationship between the psychomotor development rate and the age at first tooth. The database contained 480 records for healthy, term-born children (272 boys and 208 girls born at 37-42 weeks of gestation) aged 9-54 months. Multiple regression analysis and multi-factor analysis of variance were used to identify significant explanatory variables for the age at first tooth. The onset of deciduous tooth emergence is negatively correlated with birth weight and maternal smoking during pregnancy and positively correlated with breastfeeding and the age at which the child begins to sit up unaided. These factors have an additive effect on the age at first tooth. An earlier onset of tooth emergence in children exposed to maternal smoking during pregnancy seems to provide further evidence for disturbed foetal development in a smoke-induced hypoxic environment.

  12. Advanced glycation end products, physico-chemical and sensory characteristics of cooked lamb loins affected by cooking method and addition of flavour precursors.

    PubMed

    Roldan, Mar; Loebner, Jürgen; Degen, Julia; Henle, Thomas; Antequera, Teresa; Ruiz-Carrascal, Jorge

    2015-02-01

    The influence of the addition of a flavour enhancer solution (FES) (d-glucose, d-ribose, l-cysteine and thiamin) and of sous-vide cooking or roasting on moisture, cooking loss, instrumental colour, sensory characteristics and the formation of Maillard reaction (MR) compounds in lamb loins was studied. FES reduced cooking loss and increased water content in sous-vide samples. FES and cooking method showed a marked effect on browning development, both on the meat surface and within. FES led to a tougher and chewier texture in sous-vide cooked lamb, and enhanced flavour scores of sous-vide samples more markedly than of roasted ones. FES-added meat showed higher contents of furosine; 1,2-dicarbonyl compounds and 5-hydroxymethylfurfural did not reach detectable levels. N-ε-carboxymethyllysine amounts were rather low and not influenced by the studied factors. Cooked meat seems to be a minor dietary source of MR products, regardless of the presence of reducing sugars and the cooking method. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Separation of attogram terpenes by the capillary zone electrophoresis with fluorometric detection.

    PubMed

    Kubesová, Anna; Horká, Marie; Růžička, Filip; Slais, Karel; Glatz, Zdeněk

    2010-11-12

    An original method based on capillary zone electrophoresis with fluorometric detection has been developed for the determination of terpenic compounds. The method is based on the separation of terpenes dynamically labeled with the non-ionogenic tenside poly(ethylene glycol) pyrenebutanoate, which was used previously for the labeling of biopolymers. The background electrolyte was composed of taurine-Tris buffer (pH 8.4). In addition to the non-ionogenic tenside, acetone and poly(ethylene glycol) were used as additives. Capillary zone electrophoresis with fluorometric detection at an excitation wavelength of 335 nm and an emission wavelength of 463 nm was successfully applied to the analysis of tonalid, cholesterol, vitamin A, ergosterol, estrone and farnesol at the level of 10(-17) mol L(-1). Farnesol is produced by Candida albicans as an extracellular quorum-sensing molecule that influences the expression of a number of virulence factors, especially morphogenesis and biofilm formation, and enables this yeast to cause serious nosocomial infections. The sensitivity of this method was demonstrated by the separation of farnesol directly from the cultivation medium. Copyright © 2010 Elsevier B.V. All rights reserved.

  14. Item Difficulty in the Evaluation of Computer-Based Instruction: An Example from Neuroanatomy

    PubMed Central

    Chariker, Julia H.; Naaz, Farah; Pani, John R.

    2012-01-01

    This article reports large item effects in a study of computer-based learning of neuroanatomy. Outcome measures of the efficiency of learning, transfer of learning, and generalization of knowledge diverged by a wide margin across test items, with certain sets of items emerging as particularly difficult to master. In addition, the outcomes of comparisons between instructional methods changed with the difficulty of the items to be learned. More challenging items better differentiated between instructional methods. This set of results is important for two reasons. First, it suggests that instruction may be more efficient if sets of consistently difficult items are the targets of instructional methods particularly suited to them. Second, there is wide variation in the published literature regarding the outcomes of empirical evaluations of computer-based instruction. As a consequence, many questions arise as to the factors that may affect such evaluations. The present paper demonstrates that the level of challenge in the material that is presented to learners is an important factor to consider in the evaluation of a computer-based instructional system. PMID:22231801

  15. Non-contact method for directing electrotaxis

    NASA Astrophysics Data System (ADS)

    Ahirwar, Dinesh K.; Nasser, Mohd W.; Jones, Travis H.; Sequin, Emily K.; West, Joseph D.; Henthorne, Timothy L.; Javor, Joshua; Kaushik, Aniruddha M.; Ganju, Ramesh K.; Subramaniam, Vish V.

    2015-06-01

    We present a method to induce electric fields and drive electrotaxis (galvanotaxis) without the need for electrodes to be in contact with the media containing the cell cultures. We report experimental results using a modification of the transmembrane assay, demonstrating the hindrance of migration of breast cancer cells (SCP2) when an induced a.c. electric field is present in the appropriate direction (i.e. in the direction of migration). Of significance is that migration of these cells is hindered at electric field strengths many orders of magnitude (5 to 6) below those previously reported for d.c. electrotaxis, and even in the presence of a chemokine (SDF-1α) or a growth factor (EGF). Induced a.c. electric fields applied in the direction of migration are also shown to hinder motility of non-transformed human mammary epithelial cells (MCF10A) in the presence of the growth factor EGF. In addition, we also show how our method can be applied to other cell migration assays (scratch assay), and by changing the coil design and holder, that it is also compatible with commercially available multi-well culture plates.

  16. Item difficulty in the evaluation of computer-based instruction: an example from neuroanatomy.

    PubMed

    Chariker, Julia H; Naaz, Farah; Pani, John R

    2012-01-01

    This article reports large item effects in a study of computer-based learning of neuroanatomy. Outcome measures of the efficiency of learning, transfer of learning, and generalization of knowledge diverged by a wide margin across test items, with certain sets of items emerging as particularly difficult to master. In addition, the outcomes of comparisons between instructional methods changed with the difficulty of the items to be learned. More challenging items better differentiated between instructional methods. This set of results is important for two reasons. First, it suggests that instruction may be more efficient if sets of consistently difficult items are the targets of instructional methods particularly suited to them. Second, there is wide variation in the published literature regarding the outcomes of empirical evaluations of computer-based instruction. As a consequence, many questions arise as to the factors that may affect such evaluations. The present article demonstrates that the level of challenge in the material that is presented to learners is an important factor to consider in the evaluation of a computer-based instructional system. Copyright © 2011 American Association of Anatomists.

  17. Multiple and exact soliton solutions of the perturbed Korteweg-de Vries equation of long surface waves in a convective fluid via Painlevé analysis, factorization, and simplest equation methods.

    PubMed

    Selima, Ehab S; Yao, Xiaohua; Wazwaz, Abdul-Majid

    2017-06-01

    In this research, the surface waves of a horizontal fluid layer open to air under gravity field and vertical temperature gradient effects are studied. The governing equations of this model are reformulated and converted to a nonlinear evolution equation, the perturbed Korteweg-de Vries (pKdV) equation. We investigate the latter equation, which includes dispersion, diffusion, and instability effects, in order to examine the evolution of long surface waves in a convective fluid. Dispersion relation of the pKdV equation and its properties are discussed. The Painlevé analysis is applied not only to check the integrability of the pKdV equation but also to establish the Bäcklund transformation form. In addition, traveling wave solutions and a general form of the multiple-soliton solutions of the pKdV equation are obtained via Bäcklund transformation, the simplest equation method using Bernoulli, Riccati, and Burgers' equations as simplest equations, and the factorization method.
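    For orientation, the standard (unperturbed) KdV equation and its well-known one-soliton solution, which the multiple-soliton solutions of the pKdV generalize, can be written as follows; the dispersion, diffusion and instability terms of the study's pKdV are not reproduced here:

    ```latex
    u_t + 6\,u\,u_x + u_{xxx} = 0,
    \qquad
    u(x,t) = \frac{c}{2}\,\operatorname{sech}^2\!\left[\frac{\sqrt{c}}{2}\,(x - ct)\right],
    ```

    where $c > 0$ is the soliton speed; taller solitons travel faster, which is what makes multi-soliton interactions nontrivial.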

  18. Neural network approach to proximity effect corrections in electron-beam lithography

    NASA Astrophysics Data System (ADS)

    Frye, Robert C.; Cummings, Kevin D.; Rietman, Edward A.

    1990-05-01

    The proximity effect, caused by electron beam backscattering during resist exposure, is an important concern in writing submicron features. It can be compensated by appropriate local changes in the incident beam dose, but computation of the optimal correction usually requires a prohibitively long time. We present an example of such a computation on a small test pattern, which we performed by an iterative method. We then used this solution as a training set for an adaptive neural network. After training, the network computed the same correction as the iterative method, but in a much shorter time. Correcting the image with a software-based neural network decreased the computation time by a factor of 30, and a hardware-based network enhanced the computation speed by more than a factor of 1000. Both methods had an acceptably small error of 0.5% compared to the results of the iterative computation. Additionally, we verified that the neural network correctly generalized the solution of the problem to patterns not contained in its training set.
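    The iterative baseline that such a network is trained against can be sketched as a toy 1-D multiplicative dose update. Everything below — the pattern, the Gaussian point-spread function standing in for backscatter blur, and the update rule — is an illustrative assumption, not the authors' actual algorithm or data:

    ```python
    import math

    # Toy 1-D model of iterative proximity-effect dose correction.
    N = 32
    target = [1.0 if 10 <= i < 22 else 0.0 for i in range(N)]  # desired exposure

    # Normalized Gaussian point-spread function (PSF) modeling beam blur.
    sigma, half = 2.0, 6
    psf = [math.exp(-(k * k) / (2 * sigma * sigma)) for k in range(-half, half + 1)]
    total = sum(psf)
    psf = [w / total for w in psf]

    def expose(dose):
        """Exposure at each site: the dose pattern convolved with the PSF."""
        out = [0.0] * N
        for i in range(N):
            for k, w in zip(range(-half, half + 1), psf):
                if 0 <= i + k < N:
                    out[i] += w * dose[i + k]
        return out

    def max_err(dose):
        """Worst exposure error inside the written feature."""
        e = expose(dose)
        return max(abs(e[i] - 1.0) for i in range(10, 22))

    # Multiplicative update: raise the dose wherever exposure falls short.
    dose = target[:]
    err_before = max_err(dose)
    for _ in range(50):
        e = expose(dose)
        dose = [d * (t / ei) if t > 0 else d for d, t, ei in zip(dose, target, e)]
    err_after = max_err(dose)

    print(f"max exposure error in feature: {err_before:.3f} -> {err_after:.3f}")
    ```

    A network trained on many (pattern, corrected-dose) pairs produced this way can then approximate the correction in a single forward pass, which is the speedup the abstract reports.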

  19. Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor

    PubMed Central

    Hoang, Toan Minh; Baek, Na Rae; Cho, Se Woon; Kim, Ki Wan; Park, Kang Ryoung

    2017-01-01

    Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition processing is a difficult challenge. Therefore, we propose a method to overcome various illumination problems, particularly severe shadows, by using fuzzy system and line segment detector algorithms to obtain better results for detecting road lanes by a visible light camera sensor. Experimental results from three open databases, Caltech dataset, Santiago Lanes dataset (SLD), and Road Marking dataset, showed that our method outperformed conventional lane detection methods. PMID:29143764

  20. Region-specific S-wave attenuation for earthquakes in northwestern Iran

    NASA Astrophysics Data System (ADS)

    Heidari, Reza; Mirzaei, Noorbakhsh

    2017-11-01

    In this study, continuous wavelet transform (CWT) is applied to estimate the frequency-dependent quality factor of shear waves, Q_S, in northwestern Iran. The dataset used in this study includes velocigrams of more than 50 events with magnitudes between 4.0 and 6.5 that occurred in the study area. The CWT-based method provides a high-resolution technique for estimating frequency-dependent S-wave attenuation. The quality factor values are determined in the form of a power law as Q_S(f) = (147 ± 16)f^(0.71 ± 0.02) and Q_S(f) = (126 ± 12)f^(0.73 ± 0.02) for the vertical and horizontal components, respectively, where f is between 0.9 and 12 Hz. Furthermore, in order to verify the reliability of the suggested Q_S estimation method, an additional test is performed using accelerograms of the Ahar-Varzaghan dual earthquakes of August 11, 2012 (moment magnitudes 6.4 and 6.3) and their aftershocks. Results indicate that the estimated Q_S values from the CWT-based method are not very sensitive to the numbers and types of waveforms used (velocity or acceleration).
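    Fitting the reported power-law form Q_S(f) = Q_0 f^n reduces to a linear least-squares fit in log-log space. The sketch below recovers the vertical-component coefficients from synthetic values generated with them; the frequency grid and Q values are illustrative, not the study's measurements:

    ```python
    import math

    # Synthetic per-frequency Q estimates following the reported
    # vertical-component law Q_S(f) = 147 f^0.71 (generated, not measured).
    freqs = [0.9, 1.5, 2.5, 4.0, 6.0, 9.0, 12.0]
    q_obs = [147.0 * f ** 0.71 for f in freqs]

    def fit_power_law(f, q):
        """Least-squares fit of log Q = log Q0 + n log f; returns (Q0, n)."""
        x = [math.log(v) for v in f]
        y = [math.log(v) for v in q]
        mx = sum(x) / len(x)
        my = sum(y) / len(y)
        slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
                sum((xi - mx) ** 2 for xi in x)
        intercept = my - slope * mx
        return math.exp(intercept), slope

    q0, n = fit_power_law(freqs, q_obs)
    print(f"Q_S(f) = {q0:.0f} f^{n:.2f}")
    ```

    With real per-frequency Q estimates the scatter about the line yields the quoted uncertainties on Q_0 and n.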
