Sample records for comparative method findings

  1. COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R826238)

    EPA Science Inventory

    This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard methods that we ...

  2. COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R825173)

    EPA Science Inventory

    Abstract

    This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard...

  3. FINDING A METHOD FOR THE MADNESS: A COMPARATIVE ANALYSIS OF STRATEGIC DESIGN METHODOLOGIES

    DTIC Science & Technology

    2017-06-01

FINDING A METHOD FOR THE MADNESS: A COMPARATIVE ANALYSIS OF STRATEGIC DESIGN METHODOLOGIES BY AMANDA DONNELLY A THESIS...work develops a comparative model for strategic design methodologies, focusing on the primary elements of vision, time, process, communication and...collaboration, and risk assessment. My analysis dissects and compares three potential design methodologies including net assessment, scenarios and

  4. Evaluating the Usability of Authoring Environments for Serious Games.

    PubMed

    Slootmaker, Aad; Hummel, Hans; Koper, Rob

    2017-08-01

Background. The EMERGO method and online platform enable the development and delivery of scenario-based serious games that foster students to acquire professional competence. One of the main goals of the platform is to provide a user-friendly authoring environment for creating virtual environments where students can perform authentic tasks. Aim. We present the findings of an in-depth qualitative case study of the platform's authoring environment and compare our findings on usability with those found for comparable environments in literature. Method. We carried out semi-structured interviews, with two experienced game developers who have authored a game for higher education, and a literature review of comparable environments. Findings. The analysis shows that the usability of the authoring environment is problematic, especially regarding understandability and learnability, which is in line with findings of comparable environments. Other findings are that authoring is well integrated with the EMERGO method and that functionality and reliability of the authoring environment are valued. Practical implications. The lessons learned are presented in the form of general guidelines to improve the understandability and learnability of authoring environments for serious games.

  5. Evaluating the Usability of Authoring Environments for Serious Games

    PubMed Central

    Slootmaker, Aad; Hummel, Hans; Koper, Rob

    2017-01-01

    Background. The EMERGO method and online platform enable the development and delivery of scenario-based serious games that foster students to acquire professional competence. One of the main goals of the platform is to provide a user-friendly authoring environment for creating virtual environments where students can perform authentic tasks. Aim. We present the findings of an in-depth qualitative case study of the platform’s authoring environment and compare our findings on usability with those found for comparable environments in literature. Method. We carried out semi-structured interviews, with two experienced game developers who have authored a game for higher education, and a literature review of comparable environments. Findings. The analysis shows that the usability of the authoring environment is problematic, especially regarding understandability and learnability, which is in line with findings of comparable environments. Other findings are that authoring is well integrated with the EMERGO method and that functionality and reliability of the authoring environment are valued. Practical implications. The lessons learned are presented in the form of general guidelines to improve the understandability and learnability of authoring environments for serious games. PMID:29081638

  6. Local-global alignment for finding 3D similarities in protein structures

    DOEpatents

Zemla, Adam T [Brentwood, CA]

    2011-09-20

    A method of finding 3D similarities in protein structures of a first molecule and a second molecule. The method comprises providing preselected information regarding the first molecule and the second molecule. Comparing the first molecule and the second molecule using Longest Continuous Segments (LCS) analysis. Comparing the first molecule and the second molecule using Global Distance Test (GDT) analysis. Comparing the first molecule and the second molecule using Local Global Alignment Scoring function (LGA_S) analysis. Verifying constructed alignment and repeating the steps to find the regions of 3D similarities in protein structures.
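
    As an illustration of the Global Distance Test (GDT) step named in the record above, the following is a minimal sketch (not the patented LGA code) that scores two pre-superposed C-alpha traces with the commonly used GDT_TS cutoffs of 1, 2, 4 and 8 Angstroms; the function name, the toy data, and the assumption that the coordinates are already aligned are illustrative only.

      # Illustrative GDT_TS-style score for two pre-superposed C-alpha traces.
      # This is a sketch, not the patented LGA implementation, which performs
      # the LCS/GDT alignment search itself before scoring.
      import numpy as np

      def gdt_ts(coords_a: np.ndarray, coords_b: np.ndarray) -> float:
          """Average fraction of residues within 1, 2, 4 and 8 Angstroms."""
          dists = np.linalg.norm(coords_a - coords_b, axis=1)
          fractions = [(dists <= cutoff).mean() for cutoff in (1.0, 2.0, 4.0, 8.0)]
          return float(np.mean(fractions))

      # toy example: two nearly identical 5-residue traces
      a = np.random.rand(5, 3) * 10
      b = a + np.random.normal(scale=0.5, size=a.shape)
      print(f"GDT_TS-like score: {gdt_ts(a, b):.3f}")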

  7. Localization of tumors in various organs, using edge detection algorithms

    NASA Astrophysics Data System (ADS)

    López Vélez, Felipe

    2015-09-01

The edge of an image is a set of points organized in a curved line at which the brightness of the image changes abruptly or has discontinuities. To find these edges, five different mathematical methods are used and then compared with one another, with the aim of determining which method can best locate the edges of any given image. In this paper the five methods are applied to medical images in order to find which one detects the edges of a scanned image more accurately than the others. The problem consists in analyzing two biomedical images, one representing a brain tumor and the other a liver tumor. These images are analyzed with the five methods described, and the results are compared in order to determine the best method. Five edge-detection algorithms were chosen: the Bessel, Morse, Hermite, Weibull and Sobel algorithms. After applying each of the methods to both images, it is impossible to single out one most accurate method for tumor detection, because the best method changed from case to case: for the brain tumor image the Morse method was best at finding the edges, whereas for the liver tumor image it was the Hermite method. Further observation shows that, for these two cases, Hermite and Morse have the lowest standard deviations, leading to the conclusion that these two are the most accurate methods for finding edges in the analysis of biomedical images.
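
    Of the five operators named in the record above, only the Sobel operator is a widely standardized filter; the sketch below shows a minimal Sobel gradient-magnitude edge map using scipy.ndimage, purely as an illustration. The threshold value, the helper name and the toy image are assumptions, and the paper-specific Bessel, Morse, Hermite and Weibull operators are not reproduced.

      # Minimal Sobel edge-detection sketch (illustrative only); the other
      # operators compared in the paper are not reproduced here.
      import numpy as np
      from scipy import ndimage

      def sobel_edges(image: np.ndarray, threshold: float = 0.2) -> np.ndarray:
          """Return a binary edge map from the Sobel gradient magnitude."""
          gx = ndimage.sobel(image.astype(float), axis=0)
          gy = ndimage.sobel(image.astype(float), axis=1)
          magnitude = np.hypot(gx, gy)
          magnitude /= magnitude.max() or 1.0   # normalise to [0, 1]
          return magnitude > threshold

      # toy image: a bright square on a dark background
      img = np.zeros((64, 64))
      img[20:44, 20:44] = 1.0
      print("edge pixels found:", int(sobel_edges(img).sum()))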

  8. Minimum Description Length Block Finder, a Method to Identify Haplotype Blocks and to Compare the Strength of Block Boundaries

    PubMed Central

    Mannila, H.; Koivisto, M.; Perola, M.; Varilo, T.; Hennah, W.; Ekelund, J.; Lukk, M.; Peltonen, L.; Ukkonen, E.

    2003-01-01

    We describe a new probabilistic method for finding haplotype blocks that is based on the use of the minimum description length (MDL) principle. We give a rigorous definition of the quality of a segmentation of a genomic region into blocks and describe a dynamic programming algorithm for finding the optimal segmentation with respect to this measure. We also describe a method for finding the probability of a block boundary for each pair of adjacent markers: this gives a tool for evaluating the significance of each block boundary. We have applied the method to the published data of Daly and colleagues. The results expose some problems that exist in the current methods for the evaluation of the significance of predicted block boundaries. Our method, MDL block finder, can be used to compare block borders in different sample sets, and we demonstrate this by applying the MDL-based method to define the block structure in chromosomes from population isolates. PMID:12761696
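
    The record above describes finding an optimal segmentation by dynamic programming under an MDL score. The sketch below is a generic optimal-segmentation dynamic program with a caller-supplied per-block cost; the placeholder cost function is hypothetical, and the actual MDL cost for haplotype blocks used in the paper is not reproduced.

      # Generic optimal-segmentation dynamic program (sketch).  `block_cost`
      # is a placeholder; the paper's MDL cost for haplotype blocks differs.
      from typing import Callable, List, Tuple

      def optimal_segmentation(n_markers: int,
                               block_cost: Callable[[int, int], float]
                               ) -> Tuple[float, List[Tuple[int, int]]]:
          """Minimise the total cost of cutting markers 0..n_markers-1 into
          contiguous blocks, where block_cost(i, j) scores the block [i, j)."""
          INF = float("inf")
          best = [0.0] + [INF] * n_markers      # best[j]: cost of first j markers
          back = [0] * (n_markers + 1)          # back-pointer for reconstruction
          for j in range(1, n_markers + 1):
              for i in range(j):
                  cost = best[i] + block_cost(i, j)
                  if cost < best[j]:
                      best[j], back[j] = cost, i
          blocks, j = [], n_markers             # walk the back-pointers
          while j > 0:
              blocks.append((back[j], j))
              j = back[j]
          return best[n_markers], blocks[::-1]

      # toy cost: fixed penalty per block plus half the block length
      score, blocks = optimal_segmentation(10, lambda i, j: 2.0 + 0.5 * (j - i))
      print(score, blocks)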

  9. Minimum description length block finder, a method to identify haplotype blocks and to compare the strength of block boundaries.

    PubMed

    Mannila, H; Koivisto, M; Perola, M; Varilo, T; Hennah, W; Ekelund, J; Lukk, M; Peltonen, L; Ukkonen, E

    2003-07-01

    We describe a new probabilistic method for finding haplotype blocks that is based on the use of the minimum description length (MDL) principle. We give a rigorous definition of the quality of a segmentation of a genomic region into blocks and describe a dynamic programming algorithm for finding the optimal segmentation with respect to this measure. We also describe a method for finding the probability of a block boundary for each pair of adjacent markers: this gives a tool for evaluating the significance of each block boundary. We have applied the method to the published data of Daly and colleagues. The results expose some problems that exist in the current methods for the evaluation of the significance of predicted block boundaries. Our method, MDL block finder, can be used to compare block borders in different sample sets, and we demonstrate this by applying the MDL-based method to define the block structure in chromosomes from population isolates.

  10. Robotic path-finding in inverse treatment planning for stereotactic radiosurgery with continuous dose delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vandewouw, Marlee M., E-mail: marleev@mie.utoronto

Purpose: Continuous dose delivery in radiation therapy treatments has been shown to decrease total treatment time while improving the dose conformity and distribution homogeneity over the conventional step-and-shoot approach. The authors develop an inverse treatment planning method for Gamma Knife® Perfexion™ that continuously delivers dose along a path in the target. Methods: The authors’ method comprises two steps: find a path within the target, then solve a mixed integer optimization model to find the optimal collimator configurations and durations along the selected path. Robotic path-finding techniques, specifically, simultaneous localization and mapping (SLAM) using an extended Kalman filter, are used to obtain a path that travels sufficiently close to selected isocentre locations. SLAM is novelly extended to explore a 3D, discrete environment, which is the target discretized into voxels. Further novel extensions are incorporated into the steering mechanism to account for target geometry. Results: The SLAM method was tested on seven clinical cases and compared to clinical, Hamiltonian path continuous delivery, and inverse step-and-shoot treatment plans. The SLAM approach improved dose metrics compared to the clinical plans and Hamiltonian path continuous delivery plans. Beam-on times improved over clinical plans, and had mixed performance compared to Hamiltonian path continuous plans. The SLAM method is also shown to be robust to path selection inaccuracies, isocentre selection, and dose distribution. Conclusions: The SLAM method for continuous delivery provides decreased total treatment time and increased treatment quality compared to both clinical and inverse step-and-shoot plans, and outperforms existing path methods in treatment quality. It also accounts for uncertainty in treatment planning by accommodating inaccuracies.

  11. Comparative Analysis of Registered Nurses' and Nursing Students' Attitudes and Use of Nonpharmacologic Methods of Pain Management.

    PubMed

    Stewart, Malcolm; Cox-Davenport, Rebecca A

    2015-08-01

Despite the benefits that nonpharmacologic methods of pain management have to offer, nurses cite barriers that inhibit their use in practice. The purpose of this research study was to compare the perceptions of prelicensed student nurses (SNs) and registered nurses (RNs) toward nonpharmacologic methods of pain management. A sample of 64 students and 49 RNs was recruited. Each participant completed a questionnaire about their use and perceptions of nonpharmacologic pain control methods. Sixty-nine percent of RNs reported a stronger belief that nonpharmacologic methods gave relief to their patients compared with 59% of SNs (p = .028). Seventy-five percent of student nurses felt they had adequate education about nonpharmacologic pain modalities, compared with 51% of RNs who felt less than adequately educated (p = .016). These findings highlight the need for education about nonpharmacologic approaches to pain management. Application of these findings may decrease barriers to the use of nonpharmacologic methods of pain management. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.

  12. A simple method for determining stress intensity factors for a crack in bi-material interface

    NASA Astrophysics Data System (ADS)

    Morioka, Yuta

Because of the violently oscillating nature of the stress and displacement fields near the crack tip, it is difficult to obtain stress intensity factors for a crack between two dissimilar media. For a crack in a homogeneous medium, it is common practice to find stress intensity factors through strain energy release rates. However, individual strain energy release rates do not exist for a bi-material interface crack, so it is necessary to find alternative methods to evaluate stress intensity factors. Several methods have been proposed in the past; however, they involve mathematical complexity and sometimes require additional finite element analysis. The purpose of this research is to develop a simple method to find stress intensity factors for bi-material interface cracks. A finite element based projection method is proposed in this research. It is shown that the projection method yields very accurate stress intensity factors for a crack in isotropic and anisotropic bi-material interfaces. The projection method is also compared with the displacement ratio method and the energy method proposed by other authors. Through this comparison it is found that the projection method is much simpler to apply, with accuracy comparable to that of the displacement ratio method.

  13. Fused methods for visual saliency estimation

    NASA Astrophysics Data System (ADS)

    Danko, Amanda S.; Lyu, Siwei

    2015-02-01

In this work, we present a new model of visual saliency that combines results from existing methods, improving upon their performance and accuracy. By fusing pre-attentive and context-aware methods, we highlight the abilities of state-of-the-art models while compensating for their deficiencies. We put this theory to the test in a series of experiments, comparatively evaluating the visual saliency maps and employing them for content-based image retrieval and thumbnail generation. We find that, on average, our model yields definitive improvements in recall and f-measure with comparable precision. In addition, we find that all image searches using our fused method return more correct images and rank them higher than searches using the original methods alone.

  14. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

Developing a computationally efficient automated method for the optimal choice of regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same is effectively deployed via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the performance of the proposed LSQR-type and MRM-based methods, in terms of reconstructed image quality, is similar and superior to that of the L-curve and GCV-based methods. The proposed method's computational complexity is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method was able to overcome the inherent limitation of the computationally expensive MRM-based automated approach to finding the optimal regularization parameter in diffuse optical tomographic imaging, making this method more suitable for real-time deployment.
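
    The record above couples an LSQR-type solver with a simplex search for the regularization parameter. The sketch below pairs scipy's LSQR (its damp argument acts as a regularization parameter) with a Nelder-Mead simplex search on a synthetic linear inverse problem; the hold-out residual criterion, the variable names and the toy data are assumptions and do not reproduce the paper's objective or its Lanczos-based implementation.

      # Sketch: choose the LSQR damping (regularization) parameter with a
      # Nelder-Mead simplex search.  The hold-out residual is a stand-in for
      # the selection criterion used in the paper.
      import numpy as np
      from scipy.sparse.linalg import lsqr
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      A = rng.normal(size=(120, 60))
      x_true = rng.normal(size=60)
      b = A @ x_true + rng.normal(scale=0.5, size=120)

      A_fit, b_fit = A[:90], b[:90]          # rows used for fitting
      A_val, b_val = A[90:], b[90:]          # rows held out for selection

      def validation_residual(log_damp: np.ndarray) -> float:
          damp = float(np.exp(log_damp[0]))
          x = lsqr(A_fit, b_fit, damp=damp)[0]
          return float(np.linalg.norm(A_val @ x - b_val))

      res = minimize(validation_residual, x0=[0.0], method="Nelder-Mead")
      print(f"selected damping parameter: {np.exp(res.x[0]):.4f}")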

  15. Finding Information in (Very Large) Digital Libraries: A Deep Log Approach to Determining Differences in Use According to Method of Access

    ERIC Educational Resources Information Center

    Nicholas, David; Huntington, Paul; Jamali, Hamid R.; Tenopir, Carol

    2006-01-01

    This report presents the early findings of an exploratory deep log analysis (DLA) of journal usage on OhioLINK, reporting on the information seeking methods of users. Compared to the users who sought information by browsing journals, those who conducted searches to find information and articles tended to record longer session times and viewed more…

  16. The Comparability of Focus Group and Survey Results: Three Case Studies.

    ERIC Educational Resources Information Center

    Ward, Victoria M.; And Others

    1991-01-01

    Focus group findings were compared with survey findings for three studies in which both methods were used. Studies conducted on voluntary sterilization in Guatemala, Honduras, and Zaire with over 2,000 subjects confirm that focus groups yield information similar to that obtained from surveys and are useful in program planning. (SLD)

  17. Comparative study on the measurement of learning outcomes after powerpoint presentation and problem based learning with discussion in family medicine amongst fifth year medical students.

    PubMed

    Khobragade, Sujata; Abas, Adinegara Lutfi; Khobragade, Yadneshwar Sudam

    2016-01-01

Learning outcomes after traditional teaching methods were compared with problem-based learning (PBL) among fifth-year medical students. Six students each participated in the traditional teaching and PBL methods, respectively. The traditional teaching method involved a PowerPoint (PPT) presentation, and PBL included study of a case scenario and discussion. Both methods were effective in improving students' performance. Post-teaching, we did not find significant differences in learning outcomes between the two teaching methods. (1) The study was conducted to find out which method of learning is more effective, traditional or PBL. (2) To assess the level of knowledge and understanding of anemia/zoonotic diseases as against diabetes/hypertension. All students posted from February 3, 2014, to March 14, 2014, participated in this study. Six students were asked to prepare and present a lecture (PPT), and the following week another six students were asked to present PBL. The two groups presented different topics. Since it was a pre- and post-test design, the same students served as their own controls. To maintain uniformity and to avoid bias due to cultural diversity, language, etc., the same questions were administered. After verbal consent was obtained, all 34 students were given a pretest on anemia and zoonotic diseases. A lecture (PPT) on the same topics was then given by six students, followed by a posttest questionnaire. The following week, a pretest was conducted on hypertension and diabetes. A case scenario presentation and discussion (PBL) was then given by a different six students, followed by a posttest. The two methods were compared. Analysis was done manually, and the standard error of the means and Student's t-test were used to determine statistical significance. We found statistically significant improvement in the performance of students after the PPT presentation as well as after PBL; both methods are equally effective. However, pretest results of students on anemia and zoonotic diseases (Group A) were poor compared with pretest results on hypertension and diabetes (Group B). Participating in a presentation did not influence a student's performance, as each presenter covered only a small part of the topic, and their marks did not differ from those of the other students. We did not find significant differences in outcomes between PBL and traditional teaching. Students' performance was poor on anemia and zoonotic diseases, which need remedial teaching. Assessment may influence retention ability and performance.

  18. Programmed Instruction in Secondary Education: A Meta-Analysis of the Impact of Class Size on Its Effectiveness.

    ERIC Educational Resources Information Center

    Boden, Andrea; Archwamety, Teara; McFarland, Max

    This review used meta-analytic techniques to integrate findings from 30 independent studies that compared programmed instruction to conventional methods of instruction at the secondary level. The meta-analysis demonstrated that programmed instruction resulted in higher achievement when compared to conventional methods of instruction (average…

  19. Finding consistent patterns: A nonparametric approach for identifying differential expression in RNA-Seq data

    PubMed Central

    Li, Jun; Tibshirani, Robert

    2015-01-01

    We discuss the identification of features that are associated with an outcome in RNA-Sequencing (RNA-Seq) and other sequencing-based comparative genomic experiments. RNA-Seq data takes the form of counts, so models based on the normal distribution are generally unsuitable. The problem is especially challenging because different sequencing experiments may generate quite different total numbers of reads, or ‘sequencing depths’. Existing methods for this problem are based on Poisson or negative binomial models: they are useful but can be heavily influenced by ‘outliers’ in the data. We introduce a simple, nonparametric method with resampling to account for the different sequencing depths. The new method is more robust than parametric methods. It can be applied to data with quantitative, survival, two-class or multiple-class outcomes. We compare our proposed method to Poisson and negative binomial-based methods in simulated and real data sets, and find that our method discovers more consistent patterns than competing methods. PMID:22127579
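
    To make the idea in the record above concrete, the sketch below resamples each sample's counts down to a common sequencing depth and then applies a rank-based two-class test per gene. This is a simplified illustration on assumed toy data; the published procedure differs in detail, and the helper names and parameters are hypothetical.

      # Simplified illustration: depth-matching by multinomial downsampling,
      # then a rank-based (Mann-Whitney) two-class test per gene.  This is not
      # the authors' exact procedure.
      import numpy as np
      from scipy.stats import mannwhitneyu

      rng = np.random.default_rng(1)

      def downsample(counts: np.ndarray, depth: int) -> np.ndarray:
          """Resample one sample's gene counts down to a common total depth."""
          return rng.multinomial(depth, counts / counts.sum())

      # toy data: 200 genes x 8 samples (4 per class), unequal depths
      genes, n = 200, 4
      counts = rng.poisson(lam=20, size=(genes, 2 * n))
      counts[:10, n:] *= 3                          # 10 genes truly up in class 2
      common = counts.sum(axis=0).min()
      matched = np.column_stack([downsample(counts[:, j], common)
                                 for j in range(2 * n)])

      pvals = np.array([mannwhitneyu(matched[g, :n], matched[g, n:]).pvalue
                        for g in range(genes)])
      print("genes with p < 0.05:", int((pvals < 0.05).sum()))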

  20. Consistent characterization of semiconductor saturable absorber mirrors with single-pulse and pump-probe spectroscopy.

    PubMed

    Fleischhaker, R; Krauss, N; Schättiger, F; Dekorsy, T

    2013-03-25

    We study the comparability of the two most important measurement methods used for the characterization of semiconductor saturable absorber mirrors (SESAMs). For both methods, single-pulse spectroscopy (SPS) and pump-probe spectroscopy (PPS), we analyze in detail the time-dependent saturation dynamics inside a SESAM. Based on this analysis, we find that fluence-dependent PPS at complete spatial overlap and zero time delay is equivalent to SPS. We confirm our findings experimentally by comparing data from SPS and PPS of two samples. We show how to interpret this data consistently and we give explanations for possible deviations.

  1. Detecting the borders between coding and non-coding DNA regions in prokaryotes based on recursive segmentation and nucleotide doublets statistics

    PubMed Central

    2012-01-01

Background Detecting the borders between coding and non-coding regions is an essential step in genome annotation, and information entropy measures are useful for describing the signals in genome sequences. However, the accuracy of previous border-finding methods based on entropy segmentation still needs to be improved. Methods In this study, we first applied a new recursive entropic segmentation method to DNA sequences to obtain preliminary significant cuts. A 22-symbol alphabet is used to capture the differential composition of nucleotide doublets and stop codon patterns along three phases in both DNA strands. This process requires no prior training datasets. Results Compared with previous segmentation methods, the experimental results on three bacterial genomes, Rickettsia prowazekii, Borrelia burgdorferi and E. coli, show that our approach improves the accuracy of finding the borders between coding and non-coding regions in DNA sequences. Conclusions This paper presents a new segmentation method for prokaryotes based on Jensen-Rényi divergence with a 22-symbol alphabet. For three bacterial genomes, compared to the A12_JR method, our method raised the accuracy of finding the borders between protein-coding and non-coding regions in DNA sequences. PMID:23282225

  2. Compare Complication of Classic versus Patent Hemostasis in Transradial Coronary Angiography

    PubMed Central

    Roghani, Farshad; Tajik, Mohammad Nasim; Khosravi, Alireza

    2017-01-01

Background: Coronary artery disease (CAD) is a multifactorial disease in which thrombotic occlusion and calcification commonly occur. New strategies have been developed for the diagnosis and treatment of CAD, such as transradial catheterization. Hemostasis can be achieved with two approaches: traditional and patent. Our aim is to find the approach with the lowest complication rate. Materials and Methods: In a comparative study, 120 patients were recruited and divided randomly into two subgroups: a traditional group (60 patients; 24 females, 36 males; mean age: 64.35 ± 10.56 years) and a patent group (60 patients; 28 females, 32 males; mean age: 60.15 ± 8.92 years). All demographic data including age, gender, body mass index, and CAD-related risk factors (smoking, diabetes, hypertension), technical data including the number of catheters, procedure duration, and hemostatic compression time, and clinical outcomes (radial artery occlusion [RAO], hematoma, bleeding) were collected. Data were analyzed using SPSS version 16. Results: Our findings revealed that the incidence of RAO was significantly lower in the patent group compared with the traditional group (P = 0.041). Furthermore, the incidence of RAO was higher for early occlusion compared with late occlusion (P = 0.041). Moreover, there were significant relationships between occlusion and several factors in the traditional group (gender [P = 0.038], age [P = 0.031], diabetes mellitus [P = 0.043], hemostatic compression time [P = 0.036]) as well as in the patent group (age [P = 0.009], hypertension [P = 0.035]). Conclusion: Our findings showed that RAO, especially early occlusion, is significantly less frequent with the patent method compared with the classic method; patent hemostasis is the safest method and a good alternative to the classical method. PMID:29387670

  3. Perceptions and Effects of Classroom Capture Software on Course Performance among Selected Online Community College Mathematics Students

    ERIC Educational Resources Information Center

    Smith, Rachel Naomi

    2017-01-01

The purpose of this mixed methods research study was two-fold. First, I compared the findings of the success rates of online mathematics students with the perceived effects of classroom capture software in hopes of finding convergence. Second, I used multiple methods in different phases of the study to expand the breadth and range of the effects of…

  4. Compare Complication of Classic versus Patent Hemostasis in Transradial Coronary Angiography.

    PubMed

    Roghani, Farshad; Tajik, Mohammad Nasim; Khosravi, Alireza

    2017-01-01

Coronary artery disease (CAD) is a multifactorial disease in which thrombotic occlusion and calcification commonly occur. New strategies have been developed for the diagnosis and treatment of CAD, such as transradial catheterization. Hemostasis can be achieved with two approaches: traditional and patent. Our aim is to find the approach with the lowest complication rate. In a comparative study, 120 patients were recruited and divided randomly into two subgroups: a traditional group (60 patients; 24 females, 36 males; mean age: 64.35 ± 10.56 years) and a patent group (60 patients; 28 females, 32 males; mean age: 60.15 ± 8.92 years). All demographic data including age, gender, body mass index, and CAD-related risk factors (smoking, diabetes, hypertension), technical data including the number of catheters, procedure duration, and hemostatic compression time, and clinical outcomes (radial artery occlusion [RAO], hematoma, bleeding) were collected. Data were analyzed using SPSS version 16. Our findings revealed that the incidence of RAO was significantly lower in the patent group compared with the traditional group (P = 0.041). Furthermore, the incidence of RAO was higher for early occlusion compared with late occlusion (P = 0.041). Moreover, there were significant relationships between occlusion and several factors in the traditional group (gender [P = 0.038], age [P = 0.031], diabetes mellitus [P = 0.043], hemostatic compression time [P = 0.036]) as well as in the patent group (age [P = 0.009], hypertension [P = 0.035]). Our findings showed that RAO, especially early occlusion, is significantly less frequent with the patent method compared with the classic method; patent hemostasis is the safest method and a good alternative to the classical method.

  5. Comparison of νμ->νe Oscillation calculations with matter effects

    NASA Astrophysics Data System (ADS)

    Gordon, Michael; Toki, Walter

    2013-04-01

An introduction to neutrino oscillations in vacuum is presented, followed by a survey of various techniques for obtaining either exact or approximate expressions for νμ->νe oscillations in matter. The method devised by Mann, Kafka, Schneps, and Altinok produces an exact expression for the oscillation by explicitly determining the evolution operator. The method used by Freund yields an approximate oscillation probability by diagonalizing the Hamiltonian, finding the eigenvalues and eigenvectors, and then using those to find modified mixing angles with the matter effect taken into account. The method developed by Arafune, Koike, and Sato uses an alternative approach to find an approximation of the evolution operator. These methods are compared to each other using parameters from both the T2K and LBNE experiments.
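
    For reference alongside the record above, the standard two-flavour result for oscillations in constant-density matter is often used as a cross-check of such calculations; the three-flavour expressions treated by the cited methods are more involved and are not reproduced here.

      % Two-flavour oscillation in constant-density matter (textbook result,
      % shown only as a cross-check for the methods surveyed above).
      \begin{align}
        P_{\nu_\mu \to \nu_e} &= \sin^2 2\theta_m \,
            \sin^2\!\left(\frac{\Delta m_m^2 \, L}{4E}\right), \\
        \sin^2 2\theta_m &= \frac{\sin^2 2\theta}
            {\left(\cos 2\theta - A/\Delta m^2\right)^2 + \sin^2 2\theta},
        \qquad A = 2\sqrt{2}\, G_F N_e E, \\
        \Delta m_m^2 &= \Delta m^2
            \sqrt{\left(\cos 2\theta - A/\Delta m^2\right)^2 + \sin^2 2\theta}.
      \end{align}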

  6. Applications of the JARS method to study levee sites in southern Texas and southern New Mexico

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Xia, J.; Dunbar, J.B.

    2007-01-01

We apply the joint analysis of refractions with surface waves (JARS) method to several sites and compare its results to traditional refraction-tomography methods in an effort to find a more realistic solution to the inverse refraction-traveltime problem. The JARS method uses a reference model, derived from surface-wave shear-wave velocity estimates, as a constraint. In all of the cases the JARS estimates appear more realistic than those from the conventional refraction-tomography methods. As a result, we consider the JARS algorithm the preferred method for finding solutions to inverse refraction-tomography problems. © 2007 Society of Exploration Geophysicists.

  7. Interference correction by extracting the information of interference dominant regions: Application to near-infrared spectra

    NASA Astrophysics Data System (ADS)

    Bi, Yiming; Tang, Liang; Shan, Peng; Xie, Qiong; Hu, Yong; Peng, Silong; Tan, Jie; Li, Changwen

    2014-08-01

Interference such as baseline drift and light scattering can degrade model predictability in multivariate analysis of near-infrared (NIR) spectra. Usually interference can be represented by an additive and a multiplicative factor. In order to eliminate these interferences, correction parameters need to be estimated from the spectra. However, the spectra are often a mixture of physical light scattering effects and chemical light absorbance effects, making parameter estimation difficult. Herein, a novel algorithm is proposed to automatically find a spectral region in which the chemical absorbance of interest and the noise are low, that is, an interference dominant region (IDR). Based on the definition of the IDR, a two-step method is proposed to find the optimal IDR and the corresponding correction parameters estimated from it. Finally, the correction is applied to the full spectral range using the previously obtained parameters for the calibration set and the test set, respectively. The method can be applied to multi-target systems, with one IDR suitable for all targeted analytes. Tested on two benchmark near-infrared data sets, the proposed method provided considerable improvement compared with full-spectrum estimation methods and was comparable with other state-of-the-art methods.
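
    The record above models interference as an additive plus a multiplicative factor estimated from a selected spectral region. The sketch below applies a classical multiplicative-scatter-correction-style fit restricted to a fixed window; the hard-coded window is a stand-in for the automatically selected IDR, and the toy spectra and function name are assumptions.

      # Additive + multiplicative correction estimated on a chosen window
      # (a stand-in for the automatically found IDR described above).
      import numpy as np

      def correct_spectra(spectra: np.ndarray, reference: np.ndarray,
                          window: slice) -> np.ndarray:
          """Fit x[window] ~ a + b * reference[window] per spectrum by least
          squares and return the corrected spectra (x - a) / b."""
          design = np.column_stack([np.ones(reference[window].size),
                                    reference[window]])
          corrected = np.empty_like(spectra, dtype=float)
          for i, x in enumerate(spectra):
              (a, b), *_ = np.linalg.lstsq(design, x[window], rcond=None)
              corrected[i] = (x - a) / b
          return corrected

      # toy spectra: one common band distorted by random offsets and gains
      rng = np.random.default_rng(2)
      w = np.linspace(0, 1, 200)
      band = np.exp(-((w - 0.5) ** 2) / 0.02)
      spectra = np.array([0.1 * rng.random() + (0.8 + 0.4 * rng.random()) * band
                          for _ in range(5)])
      out = correct_spectra(spectra, reference=spectra.mean(axis=0),
                            window=slice(0, 50))   # low-absorbance region
      print(out.shape)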

  8. Optimal projection method determination by Logdet Divergence and perturbed von-Neumann Divergence.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Qiu, Yushan; Cheng, Xiao-Qing

    2017-12-14

Positive semi-definiteness is a critical property of kernel methods for the Support Vector Machine (SVM), by which efficient solutions can be guaranteed through convex quadratic programming. However, many similarity functions in applications do not produce positive semi-definite kernels. We propose a projection method that constructs a projection matrix on indefinite kernels. As a generalization of the spectrum methods (the denoising method and the flipping method), the projection method shows better or comparable performance compared to the corresponding indefinite kernel methods on a number of real-world data sets. Under Bregman matrix divergence theory, we can find a suggested optimal λ for the projection method using unconstrained optimization in kernel learning. In this paper we focus on optimal λ determination, in pursuit of a precise method for determining the optimal λ within an unconstrained optimization framework. We developed a perturbed von-Neumann divergence to measure kernel relationships. We compared optimal λ determination with the Logdet divergence and the perturbed von-Neumann divergence, aiming to find a better λ for the projection method. Results on a number of real-world data sets show that the projection method with the optimal λ from the Logdet divergence demonstrates near-optimal performance, and the perturbed von-Neumann divergence can help determine a relatively better optimal projection method. The projection method is easy to use for dealing with indefinite kernels, and the parameter embedded in the method can be determined through unconstrained optimization under Bregman matrix divergence theory. This may provide a new way forward in kernel SVMs for varied objectives.
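
    The record above generalizes the spectrum methods (denoising/clipping and flipping) to a projection construction. The sketch below shows only the two spectrum transformations it builds on, making an indefinite similarity matrix positive semi-definite via its eigendecomposition; the projection method itself and the λ selection by Bregman divergences are not reproduced.

      # The "spectrum" family mentioned above: clip or flip the negative
      # eigenvalues of an indefinite similarity matrix to obtain a PSD kernel.
      import numpy as np

      def fix_spectrum(K: np.ndarray, mode: str = "clip") -> np.ndarray:
          """Return a positive semi-definite version of the symmetric matrix K."""
          eigvals, eigvecs = np.linalg.eigh(K)
          if mode == "clip":            # denoising: drop negative eigenvalues
              eigvals = np.maximum(eigvals, 0.0)
          elif mode == "flip":          # flipping: take absolute values
              eigvals = np.abs(eigvals)
          else:
              raise ValueError(mode)
          return (eigvecs * eigvals) @ eigvecs.T

      # toy indefinite similarity matrix
      rng = np.random.default_rng(3)
      S = rng.normal(size=(6, 6))
      K = (S + S.T) / 2                 # symmetric but generally indefinite
      for mode in ("clip", "flip"):
          print(mode, "min eigenvalue:",
                round(float(np.linalg.eigvalsh(fix_spectrum(K, mode)).min()), 6))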

  9. Finding gene clusters for a replicated time course study

    PubMed Central

    2014-01-01

    Background Finding genes that share similar expression patterns across samples is an important question that is frequently asked in high-throughput microarray studies. Traditional clustering algorithms such as K-means clustering and hierarchical clustering base gene clustering directly on the observed measurements and do not take into account the specific experimental design under which the microarray data were collected. A new model-based clustering method, the clustering of regression models method, takes into account the specific design of the microarray study and bases the clustering on how genes are related to sample covariates. It can find useful gene clusters for studies from complicated study designs such as replicated time course studies. Findings In this paper, we applied the clustering of regression models method to data from a time course study of yeast on two genotypes, wild type and YOX1 mutant, each with two technical replicates, and compared the clustering results with K-means clustering. We identified gene clusters that have similar expression patterns in wild type yeast, two of which were missed by K-means clustering. We further identified gene clusters whose expression patterns were changed in YOX1 mutant yeast compared to wild type yeast. Conclusions The clustering of regression models method can be a valuable tool for identifying genes that are coordinately transcribed by a common mechanism. PMID:24460656

  10. Musical Practices and Methods in Music Lessons: A Comparative Study of Estonian and Finnish General Music Education

    ERIC Educational Resources Information Center

    Sepp, Anu; Ruokonen, Inkeri; Ruismäki, Heikki

    2015-01-01

    This article reveals the results of a comparative study of Estonian and Finnish general music education. The aim was to find out what music teaching practices and approaches/methods were mostly used, what music education perspectives supported those practices. The data were collected using questionnaires and the results of 107 Estonian and 50…

  11. Morbidity and chronic pain following different techniques of caesarean section: A comparative study.

    PubMed

    Belci, D; Di Renzo, G C; Stark, M; Đurić, J; Zoričić, D; Belci, M; Peteh, L L

    2015-01-01

Research examining long-term outcomes after childbirth performed with different techniques of caesarean section has been limited and does not provide information on morbidity and neuropathic pain. The study compares two groups of patients, those operated on with the 'Traditional' method using a Pfannenstiel incision and those operated on with the 'Misgav Ladach' method, ≥ 5 years after the operation. We find better long-term postoperative results in the patients who were treated with the Misgav Ladach method compared with the Traditional method. The results were statistically better regarding the intensity of pain, the presence of neuropathic and chronic pain, and the level of satisfaction with the cosmetic appearance of the scar.

  12. Text-in-context: a method for extracting findings in mixed-methods mixed research synthesis studies.

    PubMed

    Sandelowski, Margarete; Leeman, Jennifer; Knafl, Kathleen; Crandell, Jamie L

    2013-06-01

    Our purpose in this paper is to propose a new method for extracting findings from research reports included in mixed-methods mixed research synthesis studies. International initiatives in the domains of systematic review and evidence synthesis have been focused on broadening the conceptualization of evidence, increased methodological inclusiveness and the production of evidence syntheses that will be accessible to and usable by a wider range of consumers. Initiatives in the general mixed-methods research field have been focused on developing truly integrative approaches to data analysis and interpretation. The data extraction challenges described here were encountered, and the method proposed for addressing these challenges was developed, in the first year of the ongoing (2011-2016) study: Mixed-Methods Synthesis of Research on Childhood Chronic Conditions and Family. To preserve the text-in-context of findings in research reports, we describe a method whereby findings are transformed into portable statements that anchor results to relevant information about sample, source of information, time, comparative reference point, magnitude and significance and study-specific conceptions of phenomena. The data extraction method featured here was developed specifically to accommodate mixed-methods mixed research synthesis studies conducted in nursing and other health sciences, but reviewers might find it useful in other kinds of research synthesis studies. This data extraction method itself constitutes a type of integration to preserve the methodological context of findings when statements are read individually and in comparison to each other. © 2012 Blackwell Publishing Ltd.

  13. Accurate electronic and chemical properties of 3d transition metal oxides using a calculated linear response U and a DFT + U(V) method.

    PubMed

    Xu, Zhongnan; Joshi, Yogesh V; Raman, Sumathy; Kitchin, John R

    2015-04-14

We validate the usage of the calculated, linear response Hubbard U for evaluating accurate electronic and chemical properties of bulk 3d transition metal oxides. We find that calculated values of U lead to improved band gaps. For the evaluation of accurate reaction energies, we first identify and eliminate contributions to the reaction energies of bulk systems due only to changes in U and construct a thermodynamic cycle that references the total energies of unique U systems to a common point using a DFT + U(V) method, which we recast from a recently introduced DFT + U(R) method for molecular systems. We then introduce a semi-empirical method based on weighted DFT/DFT + U cohesive energies to calculate bulk oxidation energies of transition metal oxides using density functional theory and linear response calculated U values. We validate this method by calculating 14 reaction energies involving V, Cr, Mn, Fe, and Co oxides. We find up to an 85% reduction of the mean average error (MAE) compared to energies calculated with the Perdew-Burke-Ernzerhof functional. When our method is compared with DFT + U with empirically derived U values and the HSE06 hybrid functional, we find up to 65% and 39% reductions in the MAE, respectively.

  14. Excited-State Effective Masses in Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George Fleming, Saul Cohen, Huey-Wen Lin

    2009-10-01

We apply black-box methods, i.e., methods whose performance does not depend upon initial guesses, to extract excited-state energies from Euclidean-time hadron correlation functions. In particular, we extend the widely used effective-mass method to incorporate multiple correlation functions and produce effective mass estimates for multiple excited states. In general, these excited-state effective masses are determined by finding the roots of some polynomial. We demonstrate the method using sample lattice data to determine excited-state energies of the nucleon and compare the results to other energy-level finding techniques.
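
    As background to the record above, the ordinary single-correlator effective mass is the quantity being generalized; a minimal sketch on a synthetic two-state correlator follows, with m_eff(t) = ln[C(t)/C(t+1)] in lattice units. The toy correlator parameters are assumptions, and the multi-correlator, polynomial-root extension described in the record is not shown.

      # Ordinary effective mass m_eff(t) = ln[C(t)/C(t+1)] on a synthetic
      # two-state correlator (lattice units); the multi-correlator extension
      # described above is not shown.
      import numpy as np

      def effective_mass(corr: np.ndarray) -> np.ndarray:
          return np.log(corr[:-1] / corr[1:])

      t = np.arange(32)
      corr = 1.0 * np.exp(-0.5 * t) + 0.4 * np.exp(-0.9 * t)  # ground + excited
      m_eff = effective_mass(corr)
      print("effective mass at large t:", round(float(m_eff[-1]), 4))  # ~0.5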

  15. Spotting the difference in molecular dynamics simulations of biomolecules

    NASA Astrophysics Data System (ADS)

    Sakuraba, Shun; Kono, Hidetoshi

    2016-08-01

    Comparing two trajectories from molecular simulations conducted under different conditions is not a trivial task. In this study, we apply a method called Linear Discriminant Analysis with ITERative procedure (LDA-ITER) to compare two molecular simulation results by finding the appropriate projection vectors. Because LDA-ITER attempts to determine a projection such that the projections of the two trajectories do not overlap, the comparison does not suffer from a strong anisotropy, which is an issue in protein dynamics. LDA-ITER is applied to two test cases: the T4 lysozyme protein simulation with or without a point mutation and the allosteric protein PDZ2 domain of hPTP1E with or without a ligand. The projection determined by the method agrees with the experimental data and previous simulations. The proposed procedure, which complements existing methods, is a versatile analytical method that is specialized to find the "difference" between two trajectories.
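
    The record above finds discriminating projection vectors between two trajectories by an iterative LDA procedure. The sketch below computes only a single, non-iterative Fisher LDA direction between two labelled sets of frame features; the toy "trajectories" and feature dimensions are assumptions, and LDA-ITER's iterative procedure is not reproduced.

      # Single Fisher LDA direction separating frames of two toy "trajectories";
      # the iterative LDA-ITER procedure itself is not reproduced here.
      import numpy as np

      rng = np.random.default_rng(4)
      n_frames, n_features = 500, 30        # e.g. frames x internal coordinates

      traj_a = rng.normal(size=(n_frames, n_features))
      traj_b = rng.normal(size=(n_frames, n_features))
      traj_b[:, :3] += 1.5                  # condition B shifted along 3 coords

      # Fisher LDA direction: w = Sw^{-1} (mean_b - mean_a)
      mean_a, mean_b = traj_a.mean(axis=0), traj_b.mean(axis=0)
      Sw = np.cov(traj_a, rowvar=False) + np.cov(traj_b, rowvar=False)
      w = np.linalg.solve(Sw, mean_b - mean_a)
      w /= np.linalg.norm(w)
      print("coordinates contributing most:", np.argsort(-np.abs(w))[:3])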

  16. A Gradient Taguchi Method for Engineering Optimization

    NASA Astrophysics Data System (ADS)

    Hwang, Shun-Fa; Wu, Jen-Chih; He, Rong-Song

    2017-10-01

To balance the robustness and the convergence speed of optimization, a novel hybrid algorithm consisting of the Taguchi method and the steepest descent method is proposed in this work. The Taguchi method, using orthogonal arrays, can quickly find the optimum combination of factor levels, even when the number of levels and/or factors is quite large. The algorithm is applied to the inverse determination of the elastic constants of three composite plates by combining numerical methods and vibration testing. For these problems, the proposed algorithm finds better elastic constants at lower computational cost. Therefore, the proposed algorithm has good robustness and fast convergence compared to some hybrid genetic algorithms.

  17. A Detailed Comparison of Multidimensional Boltzmann Neutrino Transport Methods in Core-collapse Supernovae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.

The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. We carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.

  18. A Detailed Comparison of Multidimensional Boltzmann Neutrino Transport Methods in Core-collapse Supernovae

    DOE PAGES

    Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.; ...

    2017-10-03

The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. In this paper, we carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Finally, included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.

  19. A Detailed Comparison of Multidimensional Boltzmann Neutrino Transport Methods in Core-collapse Supernovae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.

The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. In this paper, we carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Finally, included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.

  20. A Detailed Comparison of Multidimensional Boltzmann Neutrino Transport Methods in Core-collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.; Dolence, Joshua; Sumiyoshi, Kohsuke; Yamada, Shoichi

    2017-10-01

    The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. We carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.

  1. Satellite-derived methane hotspot emission estimates using a fast data-driven method

    NASA Astrophysics Data System (ADS)

    Buchwitz, Michael; Schneising, Oliver; Reuter, Maximilian; Heymann, Jens; Krautwurst, Sven; Bovensmann, Heinrich; Burrows, John P.; Boesch, Hartmut; Parker, Robert J.; Somkuti, Peter; Detmers, Rob G.; Hasekamp, Otto P.; Aben, Ilse; Butz, André; Frankenberg, Christian; Turner, Alexander J.

    2017-05-01

    Methane is an important atmospheric greenhouse gas and an adequate understanding of its emission sources is needed for climate change assessments, predictions, and the development and verification of emission mitigation strategies. Satellite retrievals of near-surface-sensitive column-averaged dry-air mole fractions of atmospheric methane, i.e. XCH4, can be used to quantify methane emissions. Maps of time-averaged satellite-derived XCH4 show regionally elevated methane over several methane source regions. In order to obtain methane emissions of these source regions we use a simple and fast data-driven method to estimate annual methane emissions and corresponding 1σ uncertainties directly from maps of annually averaged satellite XCH4. From theoretical considerations we expect that our method tends to underestimate emissions. When applying our method to high-resolution atmospheric methane simulations, we typically find agreement within the uncertainty range of our method (often 100 %) but also find that our method tends to underestimate emissions by typically about 40 %. To what extent these findings are model dependent needs to be assessed. We apply our method to an ensemble of satellite XCH4 data products consisting of two products from SCIAMACHY/ENVISAT and two products from TANSO-FTS/GOSAT covering the time period 2003-2014. We obtain annual emissions of four source areas: Four Corners in the south-western USA, the southern part of Central Valley, California, Azerbaijan, and Turkmenistan. We find that our estimated emissions are in good agreement with independently derived estimates for Four Corners and Azerbaijan. For the Central Valley and Turkmenistan our estimated annual emissions are higher compared to the EDGAR v4.2 anthropogenic emission inventory. For Turkmenistan we find on average about 50 % higher emissions with our annual emission uncertainty estimates overlapping with the EDGAR emissions. For the region around Bakersfield in the Central Valley we find a factor of 5-8 higher emissions compared to EDGAR, albeit with large uncertainty. Major methane emission sources in this region are oil/gas and livestock. Our findings corroborate recently published studies based on aircraft and satellite measurements and new bottom-up estimates reporting significantly underestimated methane emissions of oil/gas and/or livestock in this area in EDGAR.

  2. Identifying a maximum tolerated contour in two-dimensional dose-finding

    PubMed Central

    Wages, Nolan A.

    2016-01-01

    The majority of Phase I methods for multi-agent trials have focused on identifying a single maximum tolerated dose combination (MTDC) among those being investigated. Some published methods in the area have been based on the notion that there is no unique MTDC, and that the set of dose combinations with acceptable toxicity forms an equivalence contour in two dimensions. Therefore, it may be of interest to find multiple MTDC's for further testing for efficacy in a Phase II setting. In this paper, we present a new dose-finding method that extends the continual reassessment method to account for the location of multiple MTDC's. Operating characteristics are demonstrated through simulation studies, and are compared to existing methodology. Some brief discussion of implementation and available software is also provided. PMID:26910586

  3. The comparative method of language acquisition research: a Mayan case study.

    PubMed

    Pye, Clifton; Pfeiler, Barbara

    2014-03-01

    This article demonstrates how the Comparative Method can be applied to cross-linguistic research on language acquisition. The Comparative Method provides a systematic procedure for organizing and interpreting acquisition data from different languages. The Comparative Method controls for cross-linguistic differences at all levels of the grammar and is especially useful in drawing attention to variation in contexts of use across languages. This article uses the Comparative Method to analyze the acquisition of verb suffixes in two Mayan languages: K'iche' and Yucatec. Mayan status suffixes simultaneously mark distinctions in verb transitivity, verb class, mood, and clause position. Two-year-old children acquiring K'iche' and Yucatec Maya accurately produce the status suffixes on verbs, in marked distinction to the verbal prefixes for aspect and agreement. We find evidence that the contexts of use for the suffixes differentially promote the children's production of cognate status suffixes in K'iche' and Yucatec.

  4. Utility of Postmortem Autopsy via Whole-Body Imaging: Initial Observations Comparing MDCT and 3.0T MRI Findings with Autopsy Findings

    PubMed Central

    Cha, Jang Gyu; Kim, Dong Hun; Kim, Dae Ho; Paik, Sang Hyun; Park, Jai Soung; Park, Seong Jin; Lee, Hae Kyung; Hong, Hyun Sook; Choi, Duek Lin; Chung, Nak Eun; Lee, Bong Woo; Seo, Joong Seok

    2010-01-01

    Objective We prospectively compared whole-body multidetector computed tomography (MDCT) and 3.0T magnetic resonance (MR) images with autopsy findings. Materials and Methods Five cadavers were subjected to whole-body, 16-channel MDCT and 3.0T MR imaging within two hours before an autopsy. A radiologist classified the MDCT and 3.0T MRI findings into major and minor findings, which were compared with autopsy findings. Results Most of the imaging findings, pertaining to head and neck, heart and vascular, chest, abdomen, spine, and musculoskeletal lesions, corresponded to autopsy findings. The causes of death that were determined on the bases of MDCT and 3.0T MRI findings were consistent with the autopsy findings in four of five cases. CT was useful in diagnosing fatal hemorrhage and pneumothorax, as well as determining the shapes and characteristics of the fractures and the direction of external force. MRI was effective in evaluating and tracing the route of a metallic object, soft tissue lesions, chronicity of hemorrhage, and bone bruises. Conclusion A postmortem MDCT combined with MRI is a potentially powerful tool, providing noninvasive and objective measurements for forensic investigations. PMID:20592923

  5. Tracing the cosmic web

    NASA Astrophysics Data System (ADS)

    Libeskind, Noam I.; van de Weygaert, Rien; Cautun, Marius; Falck, Bridget; Tempel, Elmo; Abel, Tom; Alpaslan, Mehmet; Aragón-Calvo, Miguel A.; Forero-Romero, Jaime E.; Gonzalez, Roberto; Gottlöber, Stefan; Hahn, Oliver; Hellwing, Wojciech A.; Hoffman, Yehuda; Jones, Bernard J. T.; Kitaura, Francisco; Knebe, Alexander; Manti, Serena; Neyrinck, Mark; Nuza, Sebastián E.; Padilla, Nelson; Platen, Erwin; Ramachandra, Nesar; Robotham, Aaron; Saar, Enn; Shandarin, Sergei; Steinmetz, Matthias; Stoica, Radu S.; Sousbie, Thierry; Yepes, Gustavo

    2018-01-01

    The cosmic web is one of the most striking features of the distribution of galaxies and dark matter on the largest scales in the Universe. It is composed of dense regions packed full of galaxies, long filamentary bridges, flattened sheets and vast low-density voids. The study of the cosmic web has focused primarily on the identification of such features, and on understanding the environmental effects on galaxy formation and halo assembly. As such, a variety of different methods have been devised to classify the cosmic web - depending on the data at hand, be it numerical simulations, large sky surveys or other. In this paper, we bring 12 of these methods together and apply them to the same data set in order to understand how they compare. In general, these cosmic-web classifiers have been designed with different cosmological goals in mind, and to study different questions. Therefore, one would not a priori expect agreement between different techniques; however, many of these methods do converge on the identification of specific features. In this paper, we study the agreements and disparities of the different methods. For example, each method finds that knots inhabit higher density regions than filaments, etc. and that voids have the lowest densities. For a given web environment, we find a substantial overlap in the density range assigned by each web classification scheme. We also compare classifications on a halo-by-halo basis; for example, we find that 9 of 12 methods classify around a third of group-mass haloes (i.e. M_halo ∼ 10^13.5 h^-1 M⊙) as being in filaments. Lastly, so that any future cosmic-web classification scheme can be compared to the 12 methods used here, we have made all the data used in this paper public.

  6. Family adjustment across cultural groups in autistic spectrum disorders.

    PubMed

    Lobar, Sandra L

    2014-01-01

    This pilot ethnomethodological study examined perceptions of parents/caregivers of children diagnosed with autistic spectrum disorders concerning actions, norms, understandings, and assumptions related to adjustment to this chronic illness. The sample included 14 caregivers (75% Hispanic of various ethnic groups). Maximum variation sampling was used to compare participants on variables that were inductively derived via constant comparative methods of analysis. The following action categories emerged: "Seeking Diagnosis," "Engaging in Routines to Control behavior," "Finding Therapies (Types of Therapies)," "Finding School Accommodations," "Educating Others," "Rising to Challenges," and "Finding the Role of Spiritual and Religious Belief."

  7. Text-in-Context: A Method for Extracting Findings in Mixed-Methods Mixed Research Synthesis Studies

    PubMed Central

    Leeman, Jennifer; Knafl, Kathleen; Crandell, Jamie L.

    2012-01-01

    Aim Our purpose in this paper is to propose a new method for extracting findings from research reports included in mixed-methods mixed research synthesis studies. Background International initiatives in the domains of systematic review and evidence synthesis have been focused on broadening the conceptualization of evidence, increased methodological inclusiveness and the production of evidence syntheses that will be accessible to and usable by a wider range of consumers. Initiatives in the general mixed-methods research field have been focused on developing truly integrative approaches to data analysis and interpretation. Data source The data extraction challenges described here were encountered and the method proposed for addressing these challenges was developed, in the first year of the ongoing (2011–2016) study: Mixed-Methods Synthesis of Research on Childhood Chronic Conditions and Family. Discussion To preserve the text-in-context of findings in research reports, we describe a method whereby findings are transformed into portable statements that anchor results to relevant information about sample, source of information, time, comparative reference point, magnitude and significance and study-specific conceptions of phenomena. Implications for nursing The data extraction method featured here was developed specifically to accommodate mixed-methods mixed research synthesis studies conducted in nursing and other health sciences, but reviewers might find it useful in other kinds of research synthesis studies. Conclusion This data extraction method itself constitutes a type of integration to preserve the methodological context of findings when statements are read individually and in comparison to each other. PMID:22924808

  8. Horizontal decomposition of data table for finding one reduct

    NASA Astrophysics Data System (ADS)

    Hońko, Piotr

    2018-04-01

    Attribute reduction, one of the most essential tasks in rough set theory, is a challenge for data that does not fit in the available memory. This paper proposes new definitions of attribute reduction using horizontal data decomposition. Algorithms for computing a superreduct and subsequently exact reducts of a data table are developed and experimentally verified. In the proposed approach, the size of the subtables obtained during the decomposition can be arbitrarily small. Reducts of the subtables are computed independently of one another using any heuristic method for finding one reduct (a generic example of such a heuristic is sketched below). Compared with standard attribute reduction methods, the proposed approach can produce superreducts that usually differ only slightly from an exact reduct, while needing comparable time and much less memory to reduce the attribute set. The method proposed for removing unnecessary attributes from superreducts executes relatively fast even for larger databases.
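
    As a concrete illustration of what "any heuristic method for finding one reduct" can look like, the sketch below applies greedy backward elimination to a small decision table: an attribute is dropped whenever the remaining attributes still determine the decision uniquely. It assumes a consistent table, and the column names and data are invented; this is a generic rough-set-style heuristic, not the decomposition algorithm of the paper.

      import pandas as pd

      def is_consistent(df, attrs, decision):
          """True if objects equal on `attrs` never disagree on the decision (consistent table assumed)."""
          if not attrs:
              return df[decision].nunique() == 1
          return (df.groupby(list(attrs))[decision].nunique() <= 1).all()

      def one_reduct(df, decision):
          """Greedy backward elimination: a simple heuristic for one (super)reduct."""
          attrs = [c for c in df.columns if c != decision]
          for a in list(attrs):
              trial = [c for c in attrs if c != a]
              if is_consistent(df, trial, decision):
                  attrs = trial
          return attrs

      df = pd.DataFrame({
          "a": [0, 0, 1, 1], "b": [0, 1, 0, 1], "c": [1, 1, 0, 0], "d": [0, 1, 0, 1],
      })
      print(one_reduct(df, "d"))   # ['b'] -- attribute b alone determines d in this toy table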

  9. An efficient graph theory based method to identify every minimal reaction set in a metabolic network

    PubMed Central

    2014-01-01

    Background Development of cells with minimal metabolic functionality is gaining importance due to their efficiency in producing chemicals and fuels. Existing computational methods to identify minimal reaction sets in metabolic networks are computationally expensive. Further, they identify only one of the several possible minimal reaction sets. Results In this paper, we propose an efficient graph theory based recursive optimization approach to identify all minimal reaction sets. Graph theoretical insights offer systematic methods to not only reduce the number of variables in math programming and increase its computational efficiency, but also provide efficient ways to find multiple optimal solutions. The efficacy of the proposed approach is demonstrated using case studies from Escherichia coli and Saccharomyces cerevisiae. In case study 1, the proposed method identified three minimal reaction sets, each containing 38 reactions, in the Escherichia coli central metabolic network with 77 reactions. Analysis of these three minimal reaction sets revealed that one of them is more suitable for developing a minimal-metabolism cell than the other two due to a practically achievable internal flux distribution. In case study 2, the proposed method identified 256 minimal reaction sets from the Saccharomyces cerevisiae genome-scale metabolic network with 620 reactions. The proposed method required only 4.5 hours to identify all 256 minimal reaction sets and showed a significant reduction (approximately 80%) in solution time compared to existing methods for finding minimal reaction sets. Conclusions Identification of all minimal reaction sets in metabolic networks is essential since different minimal reaction sets have different properties that affect bioprocess development. The proposed method correctly identified all minimal reaction sets in both case studies. It is computationally efficient compared to other methods for finding minimal reaction sets and is useful to employ with genome-scale metabolic networks. PMID:24594118

  10. Phase Asymmetries in Normophonic Speakers: Visual Judgments and Objective Findings

    ERIC Educational Resources Information Center

    Bonilha, Heather Shaw; Deliyski, Dimitar D.; Gerlach, Terri Treman

    2008-01-01

    Purpose: To ascertain the amount of phase asymmetry of the vocal fold vibration in normophonic speakers via visualization techniques and compare findings for habitual and pressed phonations. Method: Fifty-two normophonic speakers underwent stroboscopy and high-speed videoendoscopy (HSV). The HSV images were further processed into 4 visual…

  11. Finding the Optimal Guidance for Enhancing Anchored Instruction

    ERIC Educational Resources Information Center

    Zydney, Janet Mannheimer; Bathke, Arne; Hasselbring, Ted S.

    2014-01-01

    This study investigated the effect of different methods of guidance with anchored instruction on students' mathematical problem-solving performance. The purpose of this research was to iteratively design a learning environment to find the optimal level of guidance. Two iterations of the software were compared. The first iteration used explicit…

  12. An Examination of Parametric and Nonparametric Dimensionality Assessment Methods with Exploratory and Confirmatory Mode

    ERIC Educational Resources Information Center

    Kogar, Hakan

    2018-01-01

    The aim of the present research study was to compare the findings from the nonparametric MSA, DIMTEST and DETECT and the parametric dimensionality determining methods in various simulation conditions by utilizing exploratory and confirmatory methods. For this purpose, various simulation conditions were established based on number of dimensions,…

  13. Leaching of indium from obsolete liquid crystal displays: Comparing grinding with electrical disintegration in context of LCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dodbiba, Gjergj, E-mail: dodbiba@sys.t.u-tokyo.ac.jp; Nagai, Hiroki; Wang Lipang

    2012-10-15

    Highlights: • Two pre-treatment methods, prior to leaching of indium from obsolete LCD modules, were described. • Conventional grinding and electrical disintegration have been evaluated and compared in the context of LCA. • Experimental data on the leaching capacity for indium and the electricity consumption of equipment were inputted into the LCA model in order to compare the environmental performance of each method. • An estimate for the environmental performance was calculated as the sum of six impact categories. • Electrical disintegration method outperforms conventional grinding in all impact categories. - Abstract: In order to develop an effective recycling system for obsolete Liquid Crystal Displays (LCDs), which would enable both the leaching of indium (In) and the recovery of a pure glass fraction for recycling, an effective liberation or size-reduction method would be an important pre-treatment step. Therefore, in this study, two different types of liberation methods: (1) conventional grinding, and (2) electrical disintegration have been tested and evaluated in the context of Life Cycle Assessment (LCA). In other words, the above-mentioned methods were compared in order to find out the one that ensures the highest leaching capacity for indium, as well as the lowest environmental burden. One of the main findings of this study was that the electrical disintegration was the most effective liberation method, since it fully liberated the indium containing-layer, ensuring a leaching capacity of 968.5 mg-In/kg-LCD. In turn, the estimate for the environmental burden was approximately five times smaller when compared with the conventional grinding.

  14. The magnetic particle in a box: Analytic and micromagnetic analysis of probe-localized spin wave modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adur, Rohan, E-mail: adur@physics.osu.edu; Du, Chunhui; Manuilov, Sergei A.

    2015-05-07

    The dipole field from a probe magnet can be used to localize a discrete spectrum of standing spin wave modes in a continuous ferromagnetic thin film without lithographic modification to the film. Obtaining the resonance field for a localized mode is not trivial due to the effect of the confined and inhomogeneous magnetization precession. We compare the results of micromagnetic and analytic methods to find the resonance field of localized modes in a ferromagnetic thin film, and investigate the accuracy of these methods by comparing with a numerical minimization technique that assumes Bessel function modes with pinned boundary conditions. We find that the micromagnetic technique, while computationally more intensive, reveals that the true magnetization profiles of localized modes are similar to Bessel functions with gradually decaying dynamic magnetization at the mode edges. We also find that an analytic solution, which is simple to implement and computationally much faster than other methods, accurately describes the resonance field of localized modes when exchange fields are negligible, demonstrating the accessibility of localized mode analysis.

  15. Quantizing the Toda lattice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siddharthan, R.; Shastry, B.S.

    In this work we study the quantum Toda lattice, developing the asymptotic Bethe ansatz method first used by Sutherland. Despite its known limitations we find, on comparing with Gutzwiller's exact method, that it works well in this particular problem and in fact becomes exact as ħ grows large. We calculate ground state and excitation energies for finite-sized lattices, identify excitations as phonons and solitons on the basis of their quantum numbers, and find their dispersions. These are similar to the classical dispersions for small ħ, and remain similar all the way up to ħ=1, but then deviate substantially as we go farther into the quantum regime. On comparing the sound velocities for various ħ obtained thus with that predicted by conformal theory we conclude that the Bethe ansatz gives the energies per particle accurate to O(1/N²). On that assumption we can find correlation functions. Thus the Bethe ansatz method can be used to yield much more than the thermodynamic properties which previous authors have calculated. © 1997 The American Physical Society

  16. Comparison of methods for the detection of gravitational waves from unknown neutron stars

    NASA Astrophysics Data System (ADS)

    Walsh, S.; Pitkin, M.; Oliver, M.; D'Antonio, S.; Dergachev, V.; Królak, A.; Astone, P.; Bejger, M.; Di Giovanni, M.; Dorosh, O.; Frasca, S.; Leaci, P.; Mastrogiovanni, S.; Miller, A.; Palomba, C.; Papa, M. A.; Piccinni, O. J.; Riles, K.; Sauter, O.; Sintes, A. M.

    2016-12-01

    Rapidly rotating neutron stars are promising sources of continuous gravitational wave radiation for the LIGO and Virgo interferometers. The majority of neutron stars in our galaxy have not been identified with electromagnetic observations. All-sky searches for isolated neutron stars offer the potential to detect gravitational waves from these unidentified sources. The parameter space of these blind all-sky searches, which also cover a large range of frequencies and frequency derivatives, presents a significant computational challenge. Different methods have been designed to perform these searches within acceptable computational limits. Here we describe the first benchmark in a project to compare the search methods currently available for the detection of unknown isolated neutron stars. The five methods compared here are individually referred to as the PowerFlux, sky Hough, frequency Hough, Einstein@Home, and time domain F-statistic methods. We employ a mock data challenge to compare the ability of each search method to recover signals simulated assuming a standard signal model. We find similar performance among the four quick-look search methods, while the more computationally intensive search method, Einstein@Home, achieves up to a factor of two higher sensitivity. We find that the absence of a second derivative frequency in the search parameter space does not degrade search sensitivity for signals with physically plausible second derivative frequencies. We also report on the parameter estimation accuracy of each search method, and the stability of the sensitivity in frequency and frequency derivative and in the presence of detector noise.

  17. How Effective Are Incident-Reporting Systems for Improving Patient Safety? A Systematic Literature Review

    PubMed Central

    Stavropoulou, Charitini; Doherty, Carole; Tosey, Paul

    2015-01-01

    Context Incident-reporting systems (IRSs) are used to gather information about patient safety incidents. Despite the financial burden they imply, however, little is known about their effectiveness. This article systematically reviews the effectiveness of IRSs as a method of improving patient safety through organizational learning. Methods Our systematic literature review identified 2 groups of studies: (1) those comparing the effectiveness of IRSs with other methods of error reporting and (2) those examining the effectiveness of IRSs on settings, structures, and outcomes in regard to improving patient safety. We used thematic analysis to compare the effectiveness of IRSs with other methods and to synthesize what was effective, where, and why. Then, to assess the evidence concerning the ability of IRSs to facilitate organizational learning, we analyzed studies using the concepts of single-loop and double-loop learning. Findings In total, we identified 43 studies, 8 that compared IRSs with other methods and 35 that explored the effectiveness of IRSs on settings, structures, and outcomes. We did not find strong evidence that IRSs performed better than other methods. We did find some evidence of single-loop learning, that is, changes to clinical settings or processes as a consequence of learning from IRSs, but little evidence of either improvements in outcomes or changes in the latent managerial factors involved in error production. In addition, there was insubstantial evidence of IRSs enabling double-loop learning, that is, a cultural change or a change in mind-set. Conclusions The results indicate that IRSs could be more effective if the criteria for what counts as an incident were explicit, they were owned and led by clinical teams rather than centralized hospital departments, and they were embedded within organizations as part of wider safety programs. PMID:26626987

  18. Using mixed methods to identify and answer clinically relevant research questions.

    PubMed

    Shneerson, Catherine L; Gale, Nicola K

    2015-06-01

    The need for mixed methods research in answering health care questions is becoming increasingly recognized because of the complexity of factors that affect health outcomes. In this article, we argue for the value of using a qualitatively driven mixed method approach for identifying and answering clinically relevant research questions. This argument is illustrated by findings from a study on the self-management practices of cancer survivors and the exploration of one particular clinically relevant finding about higher uptake of self-management in cancer survivors who had received chemotherapy treatment compared with those who have not. A cross-sectional study generated findings that formed the basis for the qualitative study, by informing the purposive sampling strategy and generating new qualitative research questions. Using a quantitative research component to supplement a qualitative study can enhance the generalizability and clinical relevance of the findings and produce detailed, contextualized, and rich answers to research questions that would be unachievable through quantitative or qualitative methods alone. © The Author(s) 2015.

  19. Natural Language-based Machine Learning Models for the Annotation of Clinical Radiology Reports.

    PubMed

    Zech, John; Pain, Margaret; Titano, Joseph; Badgeley, Marcus; Schefflein, Javin; Su, Andres; Costa, Anthony; Bederson, Joshua; Lehar, Joseph; Oermann, Eric Karl

    2018-05-01

    Purpose To compare different methods for generating features from radiology reports and to develop a method to automatically identify findings in these reports. Materials and Methods In this study, 96 303 head computed tomography (CT) reports were obtained. The linguistic complexity of these reports was compared with that of alternative corpora. Head CT reports were preprocessed, and machine-analyzable features were constructed by using bag-of-words (BOW), word embedding, and Latent Dirichlet allocation-based approaches. Ultimately, 1004 head CT reports were manually labeled for findings of interest by physicians, and a subset of these were deemed critical findings. Lasso logistic regression was used to train models for physician-assigned labels on 602 of 1004 head CT reports (60%) using the constructed features, and the performance of these models was validated on a held-out 402 of 1004 reports (40%). Models were scored by area under the receiver operating characteristic curve (AUC), and aggregate AUC statistics were reported for (a) all labels, (b) critical labels, and (c) the presence of any critical finding in a report. Sensitivity, specificity, accuracy, and F1 score were reported for the best performing model's (a) predictions of all labels and (b) identification of reports containing critical findings. Results The best-performing model (BOW with unigrams, bigrams, and trigrams plus average word embeddings vector) had a held-out AUC of 0.966 for identifying the presence of any critical head CT finding and an average 0.957 AUC across all head CT findings. Sensitivity and specificity for identifying the presence of any critical finding were 92.59% (175 of 189) and 89.67% (191 of 213), respectively. Average sensitivity and specificity across all findings were 90.25% (1898 of 2103) and 91.72% (18 351 of 20 007), respectively. Simpler BOW methods achieved results competitive with those of more sophisticated approaches, with an average AUC for presence of any critical finding of 0.951 for unigram BOW versus 0.966 for the best-performing model. The Yule I of the head CT corpus was 34, markedly lower than that of the Reuters corpus (at 103) or I2B2 discharge summaries (at 271), indicating lower linguistic complexity. Conclusion Automated methods can be used to identify findings in radiology reports. The success of this approach benefits from the standardized language of these reports. With this method, a large labeled corpus can be generated for applications such as deep learning. © RSNA, 2018 Online supplemental material is available for this article.
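
    A minimal sketch of the bag-of-words plus lasso-penalized logistic regression pipeline described above, using scikit-learn on a toy set of report-like sentences. The reports, labels, and the regularization strength C are invented for illustration; the study's preprocessing and word-embedding features are not reproduced.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      reports = [
          "no acute intracranial hemorrhage or mass effect",
          "acute subdural hematoma with midline shift",
          "unremarkable noncontrast head ct",
          "large right mca territory infarct with edema",
          "no acute findings, stable chronic microvascular change",
          "intraparenchymal hemorrhage in the left basal ganglia",
          "no evidence of acute infarct or hemorrhage",
          "depressed skull fracture with underlying contusion",
      ]
      critical = [0, 1, 0, 1, 0, 1, 0, 1]   # invented labels for illustration

      X_tr, X_te, y_tr, y_te = train_test_split(
          reports, critical, test_size=0.5, stratify=critical, random_state=0)
      vec = CountVectorizer(ngram_range=(1, 3))                            # unigrams, bigrams, trigrams
      clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)    # lasso-penalized
      clf.fit(vec.fit_transform(X_tr), y_tr)
      probs = clf.predict_proba(vec.transform(X_te))[:, 1]
      print(roc_auc_score(y_te, probs))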

  20. Extending the Fellegi-Sunter probabilistic record linkage method for approximate field comparators.

    PubMed

    DuVall, Scott L; Kerber, Richard A; Thomas, Alun

    2010-02-01

    Probabilistic record linkage is a method commonly used to determine whether demographic records refer to the same person. The Fellegi-Sunter method is a probabilistic approach that uses field weights based on log likelihood ratios to determine record similarity. This paper introduces an extension of the Fellegi-Sunter method that incorporates approximate field comparators in the calculation of field weights. The data warehouse of a large academic medical center was used as a case study. The approximate comparator extension was compared with the Fellegi-Sunter method in its ability to find duplicate records previously identified in the data warehouse using different demographic fields and matching cutoffs. The approximate comparator extension misclassified 25% fewer pairs and had a larger Welch's T statistic than the Fellegi-Sunter method for all field sets and matching cutoffs. The accuracy gain provided by the approximate comparator extension grew as less information was provided and as the matching cutoff increased. Given the ubiquity of linkage in both clinical and research settings, the incremental improvement of the extension has the potential to make a considerable impact.
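
    The core of the Fellegi-Sunter weighting, and one common way of folding an approximate string comparator into it, can be sketched as follows. The m/u probabilities, field values, and the linear interpolation by similarity below are illustrative; the paper's exact comparator and weighting may differ.

      import math
      from difflib import SequenceMatcher   # stand-in for Jaro-Winkler or another comparator

      def fs_weights(m, u):
          """Agreement/disagreement weights from match (m) and non-match (u) probabilities."""
          return math.log2(m / u), math.log2((1 - m) / (1 - u))

      def approx_field_weight(a, b, m, u):
          """Interpolate between full agreement and full disagreement by string similarity."""
          w_agree, w_disagree = fs_weights(m, u)
          sim = SequenceMatcher(None, a, b).ratio()
          return w_disagree + sim * (w_agree - w_disagree)

      # hypothetical fields: (value in record 1, value in record 2, m, u)
      fields = [
          ("JOHN", "JON", 0.95, 0.02),
          ("SMITH", "SMITH", 0.90, 0.05),
          ("1972-03-05", "1972-03-15", 0.98, 0.01),
      ]
      total = sum(approx_field_weight(a, b, m, u) for a, b, m, u in fields)
      print(total)   # compare against a matching cutoff to classify the record pair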

  1. Impact of Cooperative Learning on Naval Air Traffic Controller Training.

    ERIC Educational Resources Information Center

    Holubec, Edythe; And Others

    1993-01-01

    Reports on a study of the impact of cooperative learning techniques, compared with traditional Navy instructional methods, on Navy air traffic controller trainees. Finds that cooperative learning methods improved higher level reasoning skills and resulted in no failures among the trainees. (CFR)

  2. Elongation measurement using 1-dimensional image correlation method

    NASA Astrophysics Data System (ADS)

    Phongwisit, Phachara; Kamoldilok, Surachart; Buranasiri, Prathan

    2016-11-01

    The aim of this paper was to study, set up, and calibrate an elongation measurement using a one-dimensional digital image correlation (1-DIC) method. To verify the correctness of our method and setup, we calibrated it against another method. We used a small spring as a sample and expressed the result in terms of the spring constant. Following the fundamentals of the image correlation method, images of the undeformed and deformed sample were compared to characterize the deformation. By comparing the pixel locations of reference points in both images, the spring's elongation was calculated. The results were then compared with the spring constant obtained from Hooke's law, yielding an error of about 5 percent. This DIC method could then be applied to measure the elongation of various kinds of small fiber samples.
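
    A minimal sketch of the underlying idea: estimate the pixel shift of a marker between a reference and a deformed 1-D intensity profile by cross-correlation, then convert the elongation to a spring constant via Hooke's law. The profiles, pixel size, and load below are synthetic, assumed values, not the paper's data.

      import numpy as np

      def shift_1d(ref, deformed):
          """Estimate the integer-pixel shift between two 1-D profiles by cross-correlation."""
          corr = np.correlate(deformed - deformed.mean(), ref - ref.mean(), mode="full")
          return np.argmax(corr) - (len(ref) - 1)

      x = np.arange(200)
      ref = np.exp(-((x - 80) ** 2) / 50.0)        # intensity profile of a marker before loading
      deformed = np.exp(-((x - 95) ** 2) / 50.0)   # same marker after loading (shifted by 15 px)
      pixels = shift_1d(ref, deformed)

      pixel_size_mm = 0.05      # assumed calibration (mm per pixel)
      force_N = 0.30            # assumed applied load
      elongation_m = pixels * pixel_size_mm / 1000.0
      k = force_N / elongation_m                    # Hooke's law: F = k * x
      print(pixels, k)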

  3. An MLE method for finding LKB NTCP model parameters using Monte Carlo uncertainty estimates

    NASA Astrophysics Data System (ADS)

    Carolan, Martin; Oborn, Brad; Foo, Kerwyn; Haworth, Annette; Gulliford, Sarah; Ebert, Martin

    2014-03-01

    The aims of this work were to establish a program to fit NTCP models to clinical data with multiple toxicity endpoints, to test the method using a realistic test dataset, to compare three methods for estimating confidence intervals for the fitted parameters and to characterise the speed and performance of the program.
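
    The LKB NTCP model referred to above maps a (generalized) equivalent uniform dose to a complication probability through a probit curve, NTCP = Φ((gEUD − TD50)/(m·TD50)), and its parameters can be fitted by maximum likelihood against binary toxicity outcomes. The sketch below fits TD50 and m to synthetic data with SciPy; the study's handling of multiple toxicity endpoints, the volume parameter n, confidence intervals, and Monte Carlo dose uncertainties is not reproduced.

      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import minimize

      def lkb_ntcp(eud, td50, m):
          """Lyman-Kutcher-Burman NTCP for a given generalized EUD."""
          return norm.cdf((eud - td50) / (m * td50))

      # Synthetic cohort: per-patient gEUD (Gy) and observed binary toxicity (assumed data)
      rng = np.random.default_rng(1)
      eud = rng.uniform(20, 80, 200)
      tox = rng.random(200) < lkb_ntcp(eud, td50=55.0, m=0.20)

      def neg_log_lik(params):
          td50, m = params
          p = np.clip(lkb_ntcp(eud, td50, m), 1e-9, 1 - 1e-9)
          return -np.sum(tox * np.log(p) + (~tox) * np.log(1 - p))

      fit = minimize(neg_log_lik, x0=[50.0, 0.3], method="Nelder-Mead")
      print(fit.x)   # MLE estimates of (TD50, m), close to the generating values here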

  4. Comparison of SVM RBF-NN and DT for crop and weed identification based on spectral measurement over corn fields

    USDA-ARS?s Scientific Manuscript database

    It is important to find an appropriate pattern-recognition method for in-field plant identification based on spectral measurement in order to classify the crop and weeds accurately. In this study, the method of Support Vector Machine (SVM) was evaluated and compared with two other methods, Decision ...

  5. Monte Carlo Evaluation of a New Track-Finding Method for the VENUS Muon Detector

    NASA Astrophysics Data System (ADS)

    Asano, Yuzo; Hatanaka, Makoto; Koseki, Tadashi; Mori, Shigeki; Shirakata, Masashi; Yamamoto, Kazumichi

    1989-10-01

    A new method of finding a track is devised for the VENUS muon detector composed of eight-cell drift-tube modules, each cell having a rectangular cross section of 5 × 7 cm². The new method, in which fourth-order equations are solved by the Ferrari-Cardano method, is especially powerful for a track having a large incident angle with respect to the line normal to the anode-wire plane of a drift tube, compared to the presently used method in which a track is determined by the intersecting points of an equi-drift-distance circle and the anode-wire plane. Cosmic-ray test data for the forward-backward part of the muon detector support these simulation results.
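
    The geometry is detector specific, but the computational core, solving a fourth-order equation per hit, is easy to illustrate. The paper uses the closed-form Ferrari-Cardano solution; the sketch below simply finds the roots of an example quartic with NumPy's companion-matrix solver and keeps the real ones (the coefficients are arbitrary illustrative values).

      import numpy as np

      # a0*x^4 + a1*x^3 + a2*x^2 + a3*x + a4 = 0, illustrative coefficients
      coeffs = [1.0, -2.0, -7.0, 8.0, 12.0]
      roots = np.roots(coeffs)
      real_roots = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
      print(real_roots)   # [-2. -1.  2.  3.] for this example quartic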

  6. The Role of Involvement and Emotional Well-Being for Preschool Children's Scientific Observation Competency in Biology

    ERIC Educational Resources Information Center

    Klemm, Janina; Neuhaus, Birgit J.

    2017-01-01

    Observation is one of the basic methods in science. It is not only an epistemological method itself, but also an important competence for other methods like experimenting or comparing. However, there is little knowledge about the relation with affective factors of this inquiry method. In our study, we would like to find out about the relations of…

  7. Comparative analysis of whole mount processing and systematic sampling of radical prostatectomy specimens: pathological outcomes and risk of biochemical recurrence.

    PubMed

    Salem, Shady; Chang, Sam S; Clark, Peter E; Davis, Rodney; Herrell, S Duke; Kordan, Yakup; Wills, Marcia L; Shappell, Scott B; Baumgartner, Roxelyn; Phillips, Sharon; Smith, Joseph A; Cookson, Michael S; Barocas, Daniel A

    2010-10-01

    Whole mount processing is more resource intensive than routine systematic sampling of radical retropubic prostatectomy specimens. We compared whole mount and systematic sampling for detecting pathological outcomes, and compared the prognostic value of pathological findings across pathological methods. We included men (608 whole mount and 525 systematic sampling samples) with no prior treatment who underwent radical retropubic prostatectomy at Vanderbilt University Medical Center between January 2000 and June 2008. We used univariate and multivariate analysis to compare the pathological outcome detection rate between pathological methods. Kaplan-Meier curves and the log rank test were used to compare the prognostic value of pathological findings across pathological methods. There were no significant differences between the whole mount and the systematic sampling groups in detecting extraprostatic extension (25% vs 30%), positive surgical margins (31% vs 31%), pathological Gleason score less than 7 (49% vs 43%), 7 (39% vs 43%) or greater than 7 (12% vs 13%), seminal vesicle invasion (8% vs 10%) or lymph node involvement (3% vs 5%). Tumor volume was higher in the systematic sampling group and whole mount detected more multiple surgical margins (each p <0.01). There were no significant differences in the likelihood of biochemical recurrence between the pathological methods when patients were stratified by pathological outcome. Except for estimated tumor volume and multiple margins whole mount and systematic sampling yield similar pathological information. Each method stratifies patients into comparable risk groups for biochemical recurrence. Thus, while whole mount is more resource intensive, it does not appear to result in improved detection of clinically important pathological outcomes or prognostication. Copyright © 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  8. Plant species classification using flower images—A comparative study of local feature representations

    PubMed Central

    Seeland, Marco; Rzanny, Michael; Alaqraa, Nedal; Wäldchen, Jana; Mäder, Patrick

    2017-01-01

    Steady improvements of image description methods induced a growing interest in image-based plant species classification, a task vital to the study of biodiversity and ecological sensitivity. Various techniques have been proposed for general object classification over the past years and several of them have already been studied for plant species classification. However, results of these studies are selective in the evaluated steps of a classification pipeline, in the utilized datasets for evaluation, and in the compared baseline methods. No study is available that evaluates the main competing methods for building an image representation on the same datasets allowing for generalized findings regarding flower-based plant species classification. The aim of this paper is to comparatively evaluate methods, method combinations, and their parameters towards classification accuracy. The investigated methods span from detection, extraction, fusion, pooling, to encoding of local features for quantifying shape and color information of flower images. We selected the flower image datasets Oxford Flower 17 and Oxford Flower 102 as well as our own Jena Flower 30 dataset for our experiments. Findings show large differences among the various studied techniques and that their wisely chosen orchestration allows for high accuracies in species classification. We further found that true local feature detectors in combination with advanced encoding methods yield higher classification results at lower computational costs compared to commonly used dense sampling and spatial pooling methods. Color was found to be an indispensable feature for high classification results, especially while preserving spatial correspondence to gray-level features. In result, our study provides a comprehensive overview of competing techniques and the implications of their main parameters for flower-based plant species classification. PMID:28234999

  9. Using Virtual Social Networks for Case Finding in Clinical Studies: An Experiment from Adolescence, Brain, Cognition, and Diabetes Study.

    PubMed

    Pourabbasi, Ata; Farzami, Jalal; Shirvani, Mahbubeh-Sadat Ebrahimnegad; Shams, Amir Hossein; Larijani, Bagher

    2017-01-01

    One of the main uses of social networks in clinical studies is facilitating sampling and case finding for scientists. The main focus of this study is to compare two different sampling methods, phone calls and a virtual social network. One of the researchers called 214 families of children with diabetes over 90 days. After this period, phone calls stopped, and for 30 days the team communicated with families through Telegram, a virtual social network. The number of children who participated in the study was evaluated. Although the Telegram period was 60 days shorter than the phone-call period, the researchers found that the participation rate from Telegram (17.6%) did not differ significantly from that obtained by phone calls (12.9%). Using social networks can be suggested as a beneficial method for local researchers looking for easier sampling, for winning their samples' trust, for following up on procedures, and for maintaining an easily accessible database.

  10. A Proposal of Operational Risk Management Method Using FMEA for Drug Manufacturing Computerized System

    NASA Astrophysics Data System (ADS)

    Takahashi, Masakazu; Nanba, Reiji; Fukue, Yoshinori

    This paper proposes an operational Risk Management (RM) method using Failure Mode and Effects Analysis (FMEA) for drug manufacturing computerized systems (DMCS). Drug quality must not be compromised by failures and operational mistakes of the DMCS. To avoid such situations, the DMCS must undergo sufficient risk assessment and appropriate precautions must be taken. We propose an operational RM method using FMEA for the DMCS. To develop the method, we gathered and compared FMEA results for DMCS and built a list of failure modes, failures, and countermeasures. By applying this list, RM can be conducted in the design phase, failures can be found, and countermeasures can be carried out efficiently. Additionally, some failures that have not previously been identified can be found.
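
    A common way to operationalize such a failure-mode list is to score each entry and rank by Risk Priority Number (severity × occurrence × detection). The sketch below uses invented DMCS failure modes and ratings purely for illustration; it is not the paper's list.

      # invented failure modes and 1-10 ratings, for illustration only
      failure_modes = [
          ("Recipe parameter entered incorrectly", 8, 4, 3),
          ("Batch record not archived",            6, 2, 5),
          ("Sensor calibration drift undetected",  9, 3, 7),
      ]
      for name, sev, occ, det in sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True):
          print(f"RPN={sev * occ * det:3d}  {name}")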

  11. Optimal knockout strategies in genome-scale metabolic networks using particle swarm optimization.

    PubMed

    Nair, Govind; Jungreuthmayer, Christian; Zanghellini, Jürgen

    2017-02-01

    Knockout strategies, particularly the concept of constrained minimal cut sets (cMCSs), are an important part of the arsenal of tools used in manipulating metabolic networks. Given a specific design, cMCSs can be calculated even in genome-scale networks. We would however like to find not only the optimal intervention strategy for a given design but the best possible design too. Our solution (PSOMCS) is to use particle swarm optimization (PSO) along with the direct calculation of cMCSs from the stoichiometric matrix to obtain optimal designs satisfying multiple objectives. To illustrate the working of PSOMCS, we apply it to a toy network. Next we show its superiority by comparing its performance against other comparable methods on a medium sized E. coli core metabolic network. PSOMCS not only finds solutions comparable to previously published results but also it is orders of magnitude faster. Finally, we use PSOMCS to predict knockouts satisfying multiple objectives in a genome-scale metabolic model of E. coli and compare it with OptKnock and RobustKnock. PSOMCS finds competitive knockout strategies and designs compared to other current methods and is in some cases significantly faster. It can be used in identifying knockouts which will force optimal desired behaviors in large and genome scale metabolic networks. It will be even more useful as larger metabolic models of industrially relevant organisms become available.
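
    PSOMCS couples the swarm with the direct cMCS calculation, which is not reproduced here; the sketch below only shows the generic particle swarm optimization machinery (inertia, cognitive, and social terms) on a simple continuous test function, with illustrative hyperparameters.

      import numpy as np

      def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
          """Minimize f over R^dim with a basic particle swarm."""
          rng = np.random.default_rng(seed)
          x = rng.uniform(-5, 5, (n_particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
          gbest = pbest[np.argmin(pbest_f)].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
              x = x + v
              fx = np.apply_along_axis(f, 1, x)
              improved = fx < pbest_f
              pbest[improved], pbest_f[improved] = x[improved], fx[improved]
              gbest = pbest[np.argmin(pbest_f)].copy()
          return gbest, pbest_f.min()

      sphere = lambda z: float(np.sum(z ** 2))
      print(pso(sphere, dim=5))   # best position and objective, near the origin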

  12. Characterizing bars in low surface brightness disc galaxies

    NASA Astrophysics Data System (ADS)

    Peters, Wesley; Kuzio de Naray, Rachel

    2018-05-01

    In this paper, we use B-band, I-band, and 3.6 μm azimuthal light profiles of four low surface brightness galaxies (LSBs; UGC 628, F568-1, F568-3, F563-V2) to characterize three bar parameters: length, strength, and corotation radius. We employ three techniques to measure the radius of the bars, including a new method using the azimuthal light profiles. We find comparable bar radii between the I-band and 3.6 μm for all four galaxies when using our azimuthal light profile method, and that our bar lengths are comparable to those in high surface brightness galaxies (HSBs). In addition, we find the bar strengths for our galaxies to be smaller than those for HSBs. Finally, we use Fourier transforms of the B-band, I-band, and 3.6 μm images to characterize the bars as either `fast' or `slow' by measuring the corotation radius via phase profiles. When using the B- and I-band phase crossings, we find three of our galaxies have faster than expected relative bar pattern speeds for galaxies expected to be embedded in centrally dense cold dark matter haloes. When using the B-band and 3.6 μm phase crossings, we find more ambiguous results, although the relative bar pattern speeds are still faster than expected. Since we find a very slow bar in F563-V2, we are confident that we are able to differentiate between fast and slow bars. Finally, we find no relation between bar strength and relative bar pattern speed when comparing our LSBs to HSBs.
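
    The phase-profile step described above amounts to measuring, at each radius, the phase of the m = 2 Fourier mode of the azimuthal light profile in two bands and looking for the radius where the two phase profiles cross. A minimal sketch with synthetic azimuthal profiles follows; the position-angle twists assumed below are invented for illustration and do not correspond to the galaxies in the paper.

      import numpy as np

      def m2_phase(theta, intensity):
          """Phase of the m = 2 Fourier mode of an azimuthal light profile I(theta)."""
          return 0.5 * np.angle(np.sum(intensity * np.exp(2j * theta)))

      theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
      radii = np.linspace(1.0, 10.0, 40)
      pa_B = np.deg2rad(30 + 2.0 * radii)   # assumed B-band m=2 position angles vs radius
      pa_I = np.deg2rad(36 + 1.0 * radii)   # assumed I-band m=2 position angles vs radius

      phase_B = np.array([m2_phase(theta, 1 + 0.5 * np.cos(2 * (theta - pa))) for pa in pa_B])
      phase_I = np.array([m2_phase(theta, 1 + 0.5 * np.cos(2 * (theta - pa))) for pa in pa_I])

      # A crossing of the two phase profiles marks a corotation-radius candidate
      sign_change = np.diff(np.sign(phase_B - phase_I)) != 0
      print(radii[:-1][sign_change])   # ~6 for these synthetic profiles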

  13. Optical Coherence Tomography of Retinal Degeneration in Royal College of Surgeons Rats and Its Correlation with Morphology and Electroretinography

    PubMed Central

    Yamauchi, Kodai; Mounai, Natsuki; Tanabu, Reiko; Nakazawa, Mitsuru

    2016-01-01

    Purpose To evaluate the correlation between optical coherence tomography (OCT) and the histological, ultrastructural and electroretinography (ERG) findings of retinal degeneration in Royal College of Surgeons (RCS-/-) rats. Materials and Methods Using OCT, we qualitatively and quantitatively observed the continual retinal degeneration in RCS-/- rats, from postnatal (PN) day 17 until PN day 111. These findings were compared with the corresponding histological, electron microscopic, and ERG findings. We also compared them to OCT findings in wild type RCS+/+ rats, which were used as controls. Results After PN day 17, the hyperreflective band at the apical side of the photoreceptor layer became blurred. The inner segment (IS) ellipsoid zone then became obscured, and the photoreceptor IS and outer segment (OS) layers became diffusely hyperreflective after PN day 21. These changes correlated with histological and electron microscopic findings showing extracellular lamellar material that accumulated in the photoreceptor OS layer. After PN day 26, the outer nuclear layer became significantly thinner (P < 0.01) and hyperreflective compared with that in the controls; conversely, the photoreceptor IS and OS layers, as well as the inner retinal layers, became significantly thicker (P < 0.001 and P = 0.05, respectively). The apical hyperreflective band, as well as the IS ellipsoid zone, gradually disappeared between PN day 20 and PN day 30; concurrently, the ERG a- and b-wave amplitudes deteriorated. In contrast, the thicknesses of the combined retinal pigment epithelium and choroid did not differ significantly between RCS-/- and RCS+/+ rats. Conclusion Our results suggest that OCT demonstrates histologically validated photoreceptor degeneration in RCS rats, and that OCT findings partly correlate with ERG findings. We propose that OCT is a less invasive and useful method for evaluating photoreceptor degeneration in animal models of retinitis pigmentosa. PMID:27644042

  14. Student Evaluation of Instruction: Comparison between In-Class and Online Methods

    ERIC Educational Resources Information Center

    Capa-Aydin, Yesim

    2016-01-01

    This study compares student evaluations of instruction that were collected in-class with those gathered through an online survey. The two modes of administration were compared with respect to response rate, psychometric characteristics and mean ratings through different statistical analyses. Findings indicated that in-class evaluations produced a…

  15. Comparing Modes of Delivery: Classroom and On-Line (and Other) Learning.

    ERIC Educational Resources Information Center

    deLeon, Linda; Killian, Jerri

    2000-01-01

    Moving beyond question of whether on-line education is beneficial or harmful, explores conditions under which one or another of six instructional methods lecture, collaborative learning, experiential learning, learning contracts, televised courses, and Web-based learning work best. Finds specific methods more appropriate for some subject matters,…

  16. Characteristics of Completed Suicides: Implications of Differences among Methods.

    ERIC Educational Resources Information Center

    Fischer, Ellen P.; And Others

    1993-01-01

    Compared characteristics of suicides by jumping to those of suicides by hanging, ingestion, or shooting. Method used was significantly associated with sociodemographics, occupation, and mental health status, even after adjustment for individual access to suicide means. Findings provide evidence for hypothesis that controlling access to agent of…

  17. A study on Marangoni convection by the variational iteration method

    NASA Astrophysics Data System (ADS)

    Karaoǧlu, Onur; Oturanç, Galip

    2012-09-01

    In this paper, we will consider the use of the variational iteration method and Padé approximant for finding approximate solutions for a Marangoni convection induced flow over a free surface due to an imposed temperature gradient. The solutions are compared with the numerical (fourth-order Runge Kutta) solutions.

  18. A Comparison of Two Path Planners for Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Tarokh, M.; Shiller, Z.; Hayati, S.

    1999-01-01

    The paper presents two path planners suitable for planetary rovers. The first is based on a fuzzy description of the terrain and a genetic algorithm to find a traversable path in rugged terrain. The second planner uses a global optimization method with a cost function that is the path distance divided by the velocity limit obtained from consideration of the rover's static and dynamic stability. A description of both methods is provided, and the resulting paths are given, which show the effectiveness of the path planners in finding near-optimal paths. The features of the methods and their suitability and application for rover path planning are compared.

  19. Emergent Literacy: A Comparison of Formal and Informal Assessment Methods.

    ERIC Educational Resources Information Center

    Harlin, Rebecca; Lipa, Sally

    1990-01-01

    Compares the effectiveness of the Concepts about Print (CAP) Test and the Metropolitan Readiness Test (MRT) in assessing the literacy development of both normal and at-risk primary students. Finds the CAP to be an effective predictor for at-risk children. Finds the MRT not worth the time, effort, and cost of administration. (RS)

  20. Should I Go Or Should I Stay? A Study of Factors Influencing Students' Decisions on Early Leaving

    ERIC Educational Resources Information Center

    Glogowska, Margaret; Young, Pat; Lockyer, Lesley

    2007-01-01

    The article reports on selected findings from a multi-method research project on student retention on a nursing programme. Although the research identified some factors specific to the experiences of students on the particular programme, this article focuses on findings and recommendations of generic interest. The article compares data from…

  1. Comparison of Online and Traditional Basic Life Support Renewal Training Methods for Registered Professional Nurses.

    PubMed

    Serwetnyk, Tara M; Filmore, Kristi; VonBacho, Stephanie; Cole, Robert; Miterko, Cindy; Smith, Caitlin; Smith, Charlene M

    2015-01-01

    Basic Life Support certification for nursing staff is achieved through various training methods. This study compared three American Heart Association training methods for nurses seeking Basic Life Support renewal: a traditional classroom approach and two online options. Findings indicate that online methods for Basic Life Support renewal deliver cost and time savings, while maintaining positive learning outcomes, satisfaction, and confidence level of participants.

  2. The Effect of Laboratory Training Model of Teaching and Traditional Method on Knowledge, Comprehension, Application, Skills-Components of Achievement, Total Achievement and Retention Level in Chemistry

    ERIC Educational Resources Information Center

    Badeleh, Alireza

    2011-01-01

    The present study aimed at finding the effectiveness of the Laboratory Training Model of Teaching (LTM) and comparing it with the traditional methods of teaching chemistry to seventh standard students. It strived to determine whether the (LTM) method in chemistry would be significantly more effective than the Traditional method in respect to the…

  3. Comparative methods for the analysis of gene-expression evolution: an example using yeast functional genomic data.

    PubMed

    Oakley, Todd H; Gu, Zhenglong; Abouheif, Ehab; Patel, Nipam H; Li, Wen-Hsiung

    2005-01-01

    Understanding the evolution of gene function is a primary challenge of modern evolutionary biology. Despite an expanding database from genomic and developmental studies, we are lacking quantitative methods for analyzing the evolution of some important measures of gene function, such as gene-expression patterns. Here, we introduce phylogenetic comparative methods to compare different models of gene-expression evolution in a maximum-likelihood framework. We find that expression of duplicated genes has evolved according to a nonphylogenetic model, where closely related genes are no more likely than more distantly related genes to share common expression patterns. These results are consistent with previous studies that found rapid evolution of gene expression during the history of yeast. The comparative methods presented here are general enough to test a wide range of evolutionary hypotheses using genomic-scale data from any organism.

  4. Improved particle position accuracy from off-axis holograms using a Chebyshev model.

    PubMed

    Öhman, Johan; Sjödahl, Mikael

    2018-01-01

    Side scattered light from micrometer-sized particles is recorded using an off-axis digital holographic setup. From holograms, a volume is reconstructed with information about both intensity and phase. Finding particle positions is non-trivial, since poor axial resolution elongates particles in the reconstruction. To overcome this problem, the reconstructed wavefront around a particle is used to find the axial position. The method is based on the change in the sign of the curvature around the true particle position plane. The wavefront curvature is directly linked to the phase response in the reconstruction. In this paper we propose a new method of estimating the curvature based on a parametric model. The model is based on Chebyshev polynomials and is fit to the phase anomaly and compared to a plane wave in the reconstructed volume. From the model coefficients, it is possible to find particle locations. Simulated results show increased performance in the presence of noise, compared to the use of finite difference methods. The standard deviation is decreased from 3-39 μm to 6-10 μm for varying noise levels. Experimental results show a corresponding improvement where the standard deviation is decreased from 18 μm to 13 μm.
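
    A minimal sketch of the curvature-sign idea on a synthetic axial phase profile: fit a Chebyshev polynomial to the phase anomaly along the optical axis, take its second derivative, and locate the zero crossing. The synthetic profile here is an arctangent with an inflection at an assumed particle depth; the polynomial degree and noise level are illustrative, not the paper's settings.

      import numpy as np
      from numpy.polynomial.chebyshev import Chebyshev

      rng = np.random.default_rng(0)
      z = np.linspace(-200e-6, 200e-6, 401)        # reconstruction depths (m)
      z0, zr = 35e-6, 60e-6                        # assumed particle depth and axial scale
      phase = np.arctan((z - z0) / zr) + 0.02 * rng.standard_normal(z.size)

      fit = Chebyshev.fit(z, phase, deg=7)         # parametric Chebyshev model of the phase anomaly
      curvature = fit.deriv(2)
      roots = curvature.roots()
      roots = roots[np.isreal(roots)].real
      roots = roots[(roots > z.min()) & (roots < z.max())]
      # at the particle plane the phase slope is steepest while the curvature changes sign
      z_est = roots[np.argmax(np.abs(fit.deriv(1)(roots)))]
      print(z_est)   # close to z0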

  5. Magnetic resonance voiding cystography in the diagnosis of vesicoureteral reflux: comparative study with voiding cystourethrography.

    PubMed

    Lee, Sang Kwon; Chang, Yongmin; Park, Noh Hyuck; Kim, Young Hwan; Woo, Seongku

    2005-04-01

    To evaluate the feasibility of magnetic resonance voiding cystography (MRVC) compared with voiding cystourethrography (VCUG) for detecting and grading vesicoureteral reflux (VUR). MRVC was performed upon 20 children referred for investigation of reflux. Either coronal T1-weighted spin-echo (SE) or gradient-echo (GE) (fast multiplanar spoiled gradient-echo (FMPSPGR) or turbo fast low-angle-shot (FLASH)) images were obtained before and after transurethral administration of gadolinium solution, and immediately after voiding. The findings of MRVC were compared with those of VCUG and technetium-99m ((99m)Tc) dimercaptosuccinic acid (DMSA) single-photon emission computed tomography (SPECT) performed within 6 months of MRVC. VUR was detected in 23 ureterorenal units (16 VURs by both methods, 5 VURs by VCUG, and 2 VURs by MRVC). With VCUG as the standard of reference, the sensitivity of MRVC was 76.2%; the specificity, 90.0%; the positive predictive value, 88.9%; and the negative predictive value, 78.3%. There was concordance between two methods regarding the grade of reflux in all 16 ureterorenal units with VUR detected by both methods. Of 40 kidneys, MRVC detected findings of renal damage or reflux nephropathy in 13 kidneys, and (99m)Tc DMSA renal SPECT detected findings of reflux nephropathy in 17 kidneys. Although MRVC is shown to have less sensitivity for VUR than VCUG, MRVC may represent a method of choice offering a safer nonradiation test that can additionally evaluate the kidneys for changes related to reflux nephropathy. Copyright 2005 Wiley-Liss, Inc.
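
    The reported accuracy figures follow directly from the 2 × 2 agreement counts: with VCUG as reference, 16 refluxing units were detected by both methods, 5 were missed by MRVC, 2 were called only by MRVC, and a true-negative count of 18 is inferred here from the stated specificity.

      # counts with VCUG as the reference standard; TN inferred from the reported specificity
      TP, FN, FP, TN = 16, 5, 2, 18
      print(f"sensitivity = {TP / (TP + FN):.1%}")   # 76.2%
      print(f"specificity = {TN / (TN + FP):.1%}")   # 90.0%
      print(f"PPV         = {TP / (TP + FP):.1%}")   # 88.9%
      print(f"NPV         = {TN / (TN + FN):.1%}")   # 78.3%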

  6. A Subspace Semi-Definite programming-based Underestimation (SSDU) method for stochastic global optimization in protein docking*

    PubMed Central

    Nan, Feng; Moghadasi, Mohammad; Vakili, Pirooz; Vajda, Sandor; Kozakov, Dima; Ch. Paschalidis, Ioannis

    2015-01-01

    We propose a new stochastic global optimization method targeting protein docking problems. The method is based on finding a general convex polynomial underestimator to the binding energy function in a permissive subspace that possesses a funnel-like structure. We use Principal Component Analysis (PCA) to determine such permissive subspaces. The problem of finding the general convex polynomial underestimator is reduced into the problem of ensuring that a certain polynomial is a Sum-of-Squares (SOS), which can be done via semi-definite programming. The underestimator is then used to bias sampling of the energy function in order to recover a deep minimum. We show that the proposed method significantly improves the quality of docked conformations compared to existing methods. PMID:25914440

  7. Comparing generalized ensemble methods for sampling of systems with many degrees of freedom

    DOE PAGES

    Lincoff, James; Sasmal, Sukanya; Head-Gordon, Teresa

    2016-11-03

    Here, we compare two standard replica exchange methods using temperature and dielectric constant as the scaling variables for independent replicas against two new corresponding enhanced sampling methods based on non-equilibrium statistical cooling (temperature) or descreening (dielectric). We test the four methods on a rough 1D potential as well as for alanine dipeptide in water, for which their relatively small phase space allows for the ability to define quantitative convergence metrics. We show that both dielectric methods are inferior to the temperature enhanced sampling methods, and in turn show that temperature cool walking (TCW) systematically outperforms the standard temperature replica exchange (TREx) method. We extend our comparisons of the TCW and TREx methods to the 5 residue met-enkephalin peptide, in which we evaluate the Kullback-Leibler divergence metric to show that the rate of convergence between two independent trajectories is faster for TCW compared to TREx. Finally we apply the temperature methods to the 42 residue amyloid-β peptide in which we find non-negligible differences in the disordered ensemble using TCW compared to the standard TREx. All four methods have been made available as software through the OpenMM Omnia software consortium.

  8. Comparing generalized ensemble methods for sampling of systems with many degrees of freedom.

    PubMed

    Lincoff, James; Sasmal, Sukanya; Head-Gordon, Teresa

    2016-11-07

    We compare two standard replica exchange methods using temperature and dielectric constant as the scaling variables for independent replicas against two new corresponding enhanced sampling methods based on non-equilibrium statistical cooling (temperature) or descreening (dielectric). We test the four methods on a rough 1D potential as well as for alanine dipeptide in water, for which their relatively small phase space allows for the ability to define quantitative convergence metrics. We show that both dielectric methods are inferior to the temperature enhanced sampling methods, and in turn show that temperature cool walking (TCW) systematically outperforms the standard temperature replica exchange (TREx) method. We extend our comparisons of the TCW and TREx methods to the 5 residue met-enkephalin peptide, in which we evaluate the Kullback-Leibler divergence metric to show that the rate of convergence between two independent trajectories is faster for TCW compared to TREx. Finally we apply the temperature methods to the 42 residue amyloid-β peptide in which we find non-negligible differences in the disordered ensemble using TCW compared to the standard TREx. All four methods have been made available as software through the OpenMM Omnia software consortium (http://www.omnia.md/).
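
    For reference, the exchange move that standard temperature replica exchange relies on is the Metropolis criterion for swapping configurations between two temperature replicas; a minimal sketch is given below (the cool-walking variant's non-equilibrium cooling step and the dielectric variants are not shown, and the energies and temperatures are illustrative).

      import numpy as np

      def attempt_swap(E_i, E_j, T_i, T_j, rng, kB=1.0):
          """Metropolis acceptance for exchanging configurations between replicas i and j."""
          delta = (1.0 / (kB * T_i) - 1.0 / (kB * T_j)) * (E_i - E_j)
          return rng.random() < min(1.0, np.exp(delta))

      rng = np.random.default_rng(0)
      # illustrative energies (arbitrary units) and temperatures for two neighboring replicas
      print(attempt_swap(E_i=-95.0, E_j=-100.0, T_i=300.0, T_j=350.0, rng=rng))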

  9. Recruitment bias in chronic pain research: whiplash as a model.

    PubMed

    Nijs, Jo; Inghelbrecht, Els; Daenen, Liesbeth; Hachimi-Idrissi, Said; Hens, Luc; Willems, Bert; Roussel, Nathalie; Cras, Patrick; Wouters, Kristien; Bernheim, Jan

    2011-11-01

    In science, findings that cannot be extrapolated to other settings are of little value. Recruitment methods vary widely across chronic whiplash studies, but it remains unclear whether this generates recruitment bias. The present study aimed to examine whether the recruitment method accounts for differences in health status, social support, and personality traits in patients with chronic whiplash-associated disorders (WAD). Two different recruitment methods were compared: recruiting patients through a local whiplash patient support group (group 1) and through a local hospital emergency department (group 2). The participants (n=118) filled in a set of questionnaires: the Neck Disability Index, Medical Outcome Study Short-Form General Health Survey, Anamnestic Comparative Self-Assessment measure of overall well-being, Symptom Checklist-90, Dutch Personality Questionnaire, and the Social Support List. The recruitment method (either through the local emergency department or patient support group) accounted for the differences in insufficiency, somatization, disability, quality of life, self-satisfaction, and dominance (all p values <.01). The recruitment methods generated chronic WAD patients comparable for psychoneurotism, social support, self-sufficiency, (social) inadequacy, rigidity, and resentment (p>.01). The recruitment of chronic WAD patients solely through patient support groups generates bias with respect to the various aspects of health status and personality, but not social support. In order to enhance the external validity of study findings, chronic WAD studies should combine a variety of recruitment procedures.

  10. Augmenting Qualitative Text Analysis with Natural Language Processing: Methodological Study.

    PubMed

    Guetterman, Timothy C; Chang, Tammy; DeJonckheere, Melissa; Basu, Tanmay; Scruggs, Elizabeth; Vydiswaran, V G Vinod

    2018-06-29

    Qualitative research methods are increasingly being used across disciplines because of their ability to help investigators understand the perspectives of participants in their own words. However, qualitative analysis is a laborious and resource-intensive process. To achieve depth, researchers are limited to smaller sample sizes when analyzing text data. One potential method to address this concern is natural language processing (NLP). Qualitative text analysis involves researchers reading data, assigning code labels, and iteratively developing findings; NLP has the potential to automate part of this process. Unfortunately, little methodological research has been done to compare automatic coding using NLP techniques and qualitative coding, which is critical to establish the viability of NLP as a useful, rigorous analysis procedure. The purpose of this study was to compare the utility of a traditional qualitative text analysis, an NLP analysis, and an augmented approach that combines qualitative and NLP methods. We conducted a 2-arm cross-over experiment to compare qualitative and NLP approaches to analyze data generated through 2 text (short message service) message survey questions, one about prescription drugs and the other about police interactions, sent to youth aged 14-24 years. We randomly assigned a question to each of the 2 experienced qualitative analysis teams for independent coding and analysis before receiving NLP results. A third team separately conducted NLP analysis of the same 2 questions. We examined the results of our analyses to compare (1) the similarity of findings derived, (2) the quality of inferences generated, and (3) the time spent in analysis. The qualitative-only analysis for the drug question (n=58) yielded 4 major findings, whereas the NLP analysis yielded 3 findings that missed contextual elements. The qualitative and NLP-augmented analysis was the most comprehensive. For the police question (n=68), the qualitative-only analysis yielded 4 primary findings and the NLP-only analysis yielded 4 slightly different findings. Again, the augmented qualitative and NLP analysis was the most comprehensive and produced the highest quality inferences, increasing our depth of understanding (ie, details and frequencies). In terms of time, the NLP-only approach was quicker than the qualitative-only approach for the drug (120 vs 270 minutes) and police (40 vs 270 minutes) questions. An approach beginning with qualitative analysis followed by qualitative- or NLP-augmented analysis took longer than an approach beginning with NLP for both the drug (450 vs 240 minutes) and police (390 vs 220 minutes) questions. NLP provides both a foundation to code qualitatively more quickly and a method to validate qualitative findings. NLP methods were able to identify major themes found with traditional qualitative analysis but were not useful in identifying nuances. Traditional qualitative text analysis added important details and context. ©Timothy C Guetterman, Tammy Chang, Melissa DeJonckheere, Tanmay Basu, Elizabeth Scruggs, VG Vinod Vydiswaran. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 29.06.2018.
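
    A minimal sketch of the kind of NLP pass described above, not the authors' pipeline: short free-text responses are vectorised and grouped into candidate "codes" that analysts can then review and contextualise. The example responses and the choice of TF-IDF plus NMF are assumptions made for illustration.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import NMF

    responses = [                                    # hypothetical short text-message survey answers
        "I worry about the side effects of my medication",
        "Prescription drugs cost too much for people my age",
        "The police officer was respectful and explained everything",
        "I was stopped by police for no reason at all",
    ]

    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(responses)

    nmf = NMF(n_components=2, random_state=0).fit(X)   # two candidate themes
    terms = np.array(vec.get_feature_names_out())
    for k, comp in enumerate(nmf.components_):
        top = terms[np.argsort(comp)[::-1][:3]]
        print(f"candidate code {k}: " + ", ".join(top))
    ```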

  11. The Shortlist Method for fast computation of the Earth Mover's Distance and finding optimal solutions to transportation problems.

    PubMed

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method.
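
    As a point of reference for what the Shortlist Method accelerates, here is a minimal sketch, on a tiny invented instance, of the underlying transportation linear program solved directly with SciPy; the supplies, demands and ground distances are made up.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    supply = np.array([0.4, 0.3, 0.3])    # source masses (e.g. histogram bins)
    demand = np.array([0.5, 0.25, 0.25])  # sink masses
    cost = np.abs(np.subtract.outer(np.arange(3.0), np.arange(3.0)))  # ground distances

    m, n = cost.shape
    A_eq = []
    for i in range(m):                    # each source ships exactly its supply
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1.0; A_eq.append(row)
    for j in range(n):                    # each sink receives exactly its demand
        col = np.zeros(m * n); col[j::n] = 1.0; A_eq.append(col)

    res = linprog(cost.ravel(), A_eq=np.array(A_eq),
                  b_eq=np.concatenate([supply, demand]), bounds=(0, None))
    print("Earth Mover's Distance:", res.fun)
    ```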

  12. The Shortlist Method for Fast Computation of the Earth Mover's Distance and Finding Optimal Solutions to Transportation Problems

    PubMed Central

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method. PMID:25310106

  13. Individual versus superensemble forecasts of seasonal influenza outbreaks in the United States.

    PubMed

    Yamana, Teresa K; Kandula, Sasikiran; Shaman, Jeffrey

    2017-11-01

    Recent research has produced a number of methods for forecasting seasonal influenza outbreaks. However, differences among the predicted outcomes of competing forecast methods can limit their use in decision-making. Here, we present a method for reconciling these differences using Bayesian model averaging. We generated retrospective forecasts of peak timing, peak incidence, and total incidence for seasonal influenza outbreaks in 48 states and 95 cities using 21 distinct forecast methods, and combined these individual forecasts to create weighted-average superensemble forecasts. We compared the relative performance of these individual and superensemble forecast methods by geographic location, timing of forecast, and influenza season. We find that, overall, the superensemble forecasts are more accurate than any individual forecast method and less prone to producing a poor forecast. Furthermore, we find that these advantages increase when the superensemble weights are stratified according to the characteristics of the forecast or geographic location. These findings indicate that different competing influenza prediction systems can be combined into a single more accurate forecast product for operational delivery in real time.
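
    A minimal sketch of the superensemble idea, with inverse-error weights standing in for the Bayesian model averaging weights the paper actually derives; the individual forecasts and past errors are invented.

    ```python
    import numpy as np

    # hypothetical peak-incidence forecasts (cases per 100,000) from five methods
    forecasts = np.array([42.0, 55.0, 48.0, 60.0, 51.0])

    # hypothetical mean squared errors of each method on past seasons
    past_mse = np.array([9.0, 25.0, 12.0, 40.0, 16.0])

    weights = 1.0 / past_mse
    weights /= weights.sum()                 # weights sum to one

    superensemble = float(weights @ forecasts)
    print("weights:", np.round(weights, 3), "superensemble forecast:", round(superensemble, 1))
    ```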

  14. Individual versus superensemble forecasts of seasonal influenza outbreaks in the United States

    PubMed Central

    Kandula, Sasikiran; Shaman, Jeffrey

    2017-01-01

    Recent research has produced a number of methods for forecasting seasonal influenza outbreaks. However, differences among the predicted outcomes of competing forecast methods can limit their use in decision-making. Here, we present a method for reconciling these differences using Bayesian model averaging. We generated retrospective forecasts of peak timing, peak incidence, and total incidence for seasonal influenza outbreaks in 48 states and 95 cities using 21 distinct forecast methods, and combined these individual forecasts to create weighted-average superensemble forecasts. We compared the relative performance of these individual and superensemble forecast methods by geographic location, timing of forecast, and influenza season. We find that, overall, the superensemble forecasts are more accurate than any individual forecast method and less prone to producing a poor forecast. Furthermore, we find that these advantages increase when the superensemble weights are stratified according to the characteristics of the forecast or geographic location. These findings indicate that different competing influenza prediction systems can be combined into a single more accurate forecast product for operational delivery in real time. PMID:29107987

  15. Deliberate Self-Harm within an International Community Sample of Young People: Comparative Findings from the Child & Adolescent Self-Harm in Europe (CASE) Study

    ERIC Educational Resources Information Center

    Madge, Nicola; Hewitt, Anthea; Hawton, Keith; de Wilde, Erik Jan; Corcoran, Paul; Fekete, Sandor; van Heeringen, Kees; De Leo, Diego; Ystgaard, Mette

    2008-01-01

    Background: Deliberate self-harm among young people is an important focus of policy and practice internationally. Nonetheless, there is little reliable comparative international information on its extent or characteristics. We have conducted a seven-country comparative community study of deliberate self-harm among young people. Method: Over 30,000…

  16. Comparing Cognitive Interviewing and Online Probing: Do They Find Similar Results?

    ERIC Educational Resources Information Center

    Meitinger, Katharina; Behr, Dorothée

    2016-01-01

    This study compares the application of probing techniques in cognitive interviewing (CI) and online probing (OP). Even though the probing is similar, the methods differ regarding typical mode setting, sample size, level of interactivity, and goals. We analyzed probing answers to the International Social Survey Programme item battery on specific…

  17. A study on the application of topic models to motif finding algorithms.

    PubMed

    Basha Gutierrez, Josep; Nakai, Kenta

    2016-12-22

    Topic models are statistical algorithms that try to discover the structure of a set of documents according to the abstract topics contained in them. We apply this approach to discovering the structure of the transcription factor binding sites (TFBS) contained in a set of biological sequences, which is a fundamental problem in molecular biology research for the understanding of transcriptional regulation. Here we present two methods that make use of topic models for motif finding. First, we developed an algorithm in which a set of biological sequences is treated as text documents and the k-mers contained in them as words, and a correlated topic model (CTM) is then built and its perplexity iteratively reduced. We also used the perplexity measurement of CTMs to improve our previous algorithm based on a genetic algorithm and several statistical coefficients. The algorithms were tested with 56 data sets from four different species and compared to 14 other methods by the use of several coefficients both at nucleotide and site level. The results of our first approach showed a performance comparable to the other methods studied, especially at site level and in sensitivity scores, in which it scored better than any of the 14 existing tools. In the case of our previous algorithm, the new approach with the addition of the perplexity measurement clearly outperformed all of the other methods in sensitivity, both at nucleotide and site level, and in overall performance at site level. The statistics obtained show that the performance of a motif finding method based on the use of a CTM is satisfying enough to conclude that the application of topic models is a valid method for developing motif finding algorithms. Moreover, the addition of topic models to a previously developed method dramatically increased its performance, suggesting that this combined algorithm can be a useful tool to successfully predict motifs in different kinds of sets of DNA sequences.
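
    A minimal sketch of the document/word analogy described above, using plain LDA as a stand-in for the correlated topic model and perplexity loop in the paper; the sequences, the planted motif and the k-mer length are invented.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.default_rng(0)
    motif = "TGACGTCA"                                        # hypothetical planted binding site
    sequences = []
    for _ in range(30):
        s = "".join(rng.choice(list("ACGT"), size=80))
        pos = rng.integers(0, 72)
        sequences.append(s[:pos] + motif + s[pos + len(motif):])

    k = 6  # treat each sequence as a document made of overlapping k-mer "words"
    docs = [" ".join(s[i:i + k] for i in range(len(s) - k + 1)) for s in sequences]

    counts = CountVectorizer(lowercase=False).fit(docs)
    X = counts.transform(docs)
    lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

    terms = np.array(counts.get_feature_names_out())
    for topic_id, comp in enumerate(lda.components_):
        # topics whose top k-mers overlap the planted motif are candidate binding sites
        print(f"topic {topic_id}:", ", ".join(terms[np.argsort(comp)[::-1][:5]]))
    ```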

  18. Promotion Factors For Enlisted Infantry Marines

    DTIC Science & Technology

    2017-06-01

    description, billet accomplishments, mission accomplishment, individual character, leadership, intellect and wisdom, fulfillment of evaluation, RS...staff sergeant. To assess which ranks proportionally promote more high-quality Marines, we compare two performance evaluation methods: proficiency and...adverse fitness reports. From the two performance evaluation methods we find that the Marine Corps promotes proportionally more high-quality Marines

  19. Convenience Samples and Caregiving Research: How Generalizable Are the Findings?

    ERIC Educational Resources Information Center

    Pruchno, Rachel A.; Brill, Jonathan E.; Shands, Yvonne; Gordon, Judith R.; Genderson, Maureen Wilson; Rose, Miriam; Cartwright, Francine

    2008-01-01

    Purpose: We contrast characteristics of respondents recruited using convenience strategies with those of respondents recruited by random digit dial (RDD) methods. We compare sample variances, means, and interrelationships among variables generated from the convenience and RDD samples. Design and Methods: Women aged 50 to 64 who work full time and…

  20. An Empirical Review of Research Methodologies and Methods in Creativity Studies (2003-2012)

    ERIC Educational Resources Information Center

    Long, Haiying

    2014-01-01

    Based on the data collected from 5 prestigious creativity journals, research methodologies and methods of 612 empirical studies on creativity, published between 2003 and 2012, were reviewed and compared to those in gifted education. Major findings included: (a) Creativity research was predominantly quantitative and psychometrics and experiment…

  1. A Comparison of Isotonic, Isokinetic, and Plyometric Training Methods for Vertical Jump Improvement.

    ERIC Educational Resources Information Center

    Miller, Christine D.

    This annotated bibliography documents three training methods used to develop vertical jumping ability and power: isotonic, isokinetics, and plyometric training. Research findings on all three forms of training are summarized and compared. A synthesis of conclusions drawn from the annotated writings is presented. The report includes a glossary of…

  2. Maladjustment of Bully-Victims: Validation with Three Identification Methods

    ERIC Educational Resources Information Center

    Yang, An; Li, Xiang; Salmivalli, Christina

    2016-01-01

    Although knowledge on the psychosocial (mal)adjustment of bully-victims, children who bully others and are victimised by others, has been increasing, the findings have been principally gained utilising a single method to identify bully-victims. The present study examined the psychosocial adjustment of bully-victims (as compared with pure bullies…

  3. An Experimental Comparison of Two Methods Of Teaching Numerical Control Manual Programming Concepts; Visual Media Versus Hands-On Equipment.

    ERIC Educational Resources Information Center

    Biekert, Russell

    Accompanying the rapid changes in technology has been a greater dependence on automation and numerical control, which has resulted in the need to find ways of preparing programmers for industrial machines using numerical control. To compare the hands-on equipment method and a visual media method of teaching numerical control, an experimental and a…

  4. Modeling adverse event counts in phase I clinical trials of a cytotoxic agent.

    PubMed

    Muenz, Daniel G; Braun, Thomas M; Taylor, Jeremy Mg

    2018-05-01

    Background/Aims The goal of phase I clinical trials for cytotoxic agents is to find the maximum dose with an acceptable risk of severe toxicity. The most common designs for these dose-finding trials use a binary outcome indicating whether a patient had a dose-limiting toxicity. However, a patient may experience multiple toxicities, with each toxicity assigned an ordinal severity score. The binary response is then obtained by dichotomizing a patient's richer set of data. We contribute to the growing literature on new models to exploit this richer toxicity data, with the goal of improving the efficiency in estimating the maximum tolerated dose. Methods We develop three new, related models that make use of the total number of dose-limiting and low-level toxicities a patient experiences. We use these models to estimate the probability of having at least one dose-limiting toxicity as a function of dose. In a simulation study, we evaluate how often our models select the true maximum tolerated dose, and we compare our models with the continual reassessment method, which uses binary data. Results Across a variety of simulation settings, we find that our models compare well against the continual reassessment method in terms of selecting the true optimal dose. In particular, one of our models which uses dose-limiting and low-level toxicity counts beats or ties the other models, including the continual reassessment method, in all scenarios except the one in which the true optimal dose is the highest dose available. We also find that our models, when not selecting the true optimal dose, tend to err by picking lower, safer doses, while the continual reassessment method errs more toward toxic doses. Conclusion Using dose-limiting and low-level toxicity counts, which are easily obtained from data already routinely collected, is a promising way to improve the efficiency in finding the true maximum tolerated dose in phase I trials.
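
    For context on the binary-outcome comparator, here is a minimal sketch, under invented data and a common one-parameter power model, of a continual reassessment method update; the skeleton, prior, outcomes and target are all hypothetical and this is not the authors' count-based model.

    ```python
    import numpy as np

    skeleton = np.array([0.05, 0.10, 0.20, 0.30, 0.45])  # prior guess of DLT probability per dose
    target = 0.25                                         # acceptable DLT risk
    doses_tried = np.array([0, 0, 1, 1, 2])               # dose index given to each patient so far
    dlt = np.array([0, 0, 0, 1, 1])                       # 1 = dose-limiting toxicity observed

    # one-parameter power model: p(dose) = skeleton[dose] ** exp(a), with a normal prior on a
    a = np.linspace(-3, 3, 601)
    prior = np.exp(-0.5 * a**2 / 1.34)                    # a commonly used prior variance

    p = skeleton[doses_tried][None, :] ** np.exp(a)[:, None]
    likelihood = np.prod(np.where(dlt[None, :] == 1, p, 1.0 - p), axis=1)
    posterior = prior * likelihood
    posterior /= posterior.sum()

    p_tox = (skeleton[None, :] ** np.exp(a)[:, None] * posterior[:, None]).sum(axis=0)
    next_dose = int(np.argmin(np.abs(p_tox - target)))
    print("posterior DLT estimates:", p_tox.round(3), "-> next dose index:", next_dose)
    ```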

  5. Introducing conjoint analysis method into delayed lotteries studies: its validity and time stability are higher than in adjusting.

    PubMed

    Białek, Michał; Markiewicz, Łukasz; Sawicki, Przemysław

    2015-01-01

    Delayed lotteries are much more common in everyday life than pure lotteries. Usually, we need to wait to find out the outcome of a risky decision (e.g., investing in a stock market, engaging in a relationship). However, most research has studied time discounting and probability discounting in isolation, using methodologies designed specifically to track changes in one parameter. The most commonly used method is adjusting, but its reported validity and time stability in research on discounting are suboptimal. The goal of this study was to introduce a novel method for analyzing delayed lotteries, conjoint analysis, which is hypothetically more suitable for analyzing individual preferences in this area. A set of two studies compared conjoint analysis with adjusting. The results suggest that individual parameters of discounting strength estimated with conjoint analysis have higher predictive value (Studies 1 and 2) and are more stable over time (Study 2) compared to adjusting. Despite the exploratory character of the reported studies, we discuss these findings and suggest that future research on delayed lotteries should be cross-validated using both methods.

  6. Finding Dantzig Selectors with a Proximity Operator based Fixed-point Algorithm

    DTIC Science & Technology

    2014-11-01

    experiments showed that this method usually outperforms the method in [2] in terms of CPU time while producing solutions of comparable quality. The... method proposed in [19]. To alleviate the difficulty caused by the subproblem without a closed form solution, a linearized ADM was proposed for the...a closed form solution, but the β-related subproblem does not and is solved approximately by using the nonmonotone gradient method in [18]. The

  7. Do Disadvantaged Students Get Less Effective Teaching? Key Findings from Recent Institute of Education Sciences Studies. NCEE Evaluation Brief. Technical Appendix. NCEE 2014-4010

    ERIC Educational Resources Information Center

    Max, Jeffrey; Glazerman, Steven

    2014-01-01

    This document represents the technical appendix intended to accompany "Do Disadvantaged Students Get Less Effective Teaching? Key Findings from Recent Institute of Education Sciences Studies. NCEE Evaluation Brief. NCEE 2014-4010." Contents include: (1) Summary of Related, Non-Peer-Reviewed Studies; (2) Methods for Comparing Findings…

  8. A Model and Method of Evaluative Accounts: Development Impact of the National Literacy Mission (NLM of India).

    ERIC Educational Resources Information Center

    Bhola, H. S.

    2002-01-01

    Studied the developmental impact of the National Literacy Mission of India, providing an evaluative account based on 97 evaluation studies. Compared findings with those from a 27-study synthesis of studies of effects of adult literacy efforts in Africa. Findings show the impact of literacy on the development of nations. (SLD)

  9. Machining and characterization of self-reinforced polymers

    NASA Astrophysics Data System (ADS)

    Deepa, A.; Padmanabhan, K.; Kuppan, P.

    2017-11-01

    This paper focuses on obtaining the mechanical properties of self-reinforced composite samples, assessing the effect of different machining techniques on them, and deriving the best machining method. Each sample was fabricated by hot compaction, tested in tensile and flexural tests, and the corresponding loads were calculated. These composites are usually machined with conventional methods because most industries lack advanced machinery. Here, advanced non-conventional methods such as abrasive water jet machining were also used. These techniques give better output for the composite materials, with good mechanical properties compared to conventional methods, although non-conventional machining changes the workpiece and tool properties; it is also more economical than conventional machining. The process concludes by finding the machining method best suited for designing these self-reinforced composites, with and without defects, and by using Scanning Electron Microscope (SEM) analysis to compare the microstructure of the PP and PE samples.

  10. VizieR Online Data Catalog: Bayesian method for detecting stellar flares (Pitkin+, 2014)

    NASA Astrophysics Data System (ADS)

    Pitkin, M.; Williams, D.; Fletcher, L.; Grant, S. D. T.

    2015-05-01

    We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of 'quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N. (1 data file).

  11. A Bayesian method for detecting stellar flares

    NASA Astrophysics Data System (ADS)

    Pitkin, M.; Williams, D.; Fletcher, L.; Grant, S. D. T.

    2014-12-01

    We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of `quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N.
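
    A minimal sketch of the flare signal model described above (a half-Gaussian rise followed by an exponential decay); the parameter values are arbitrary and this is not the authors' odds-ratio code.

    ```python
    import numpy as np

    def flare_model(t, t0, amplitude, sigma_rise, tau_decay):
        """Flare flux: half-Gaussian rise before the peak time t0, exponential decay after it."""
        rise = amplitude * np.exp(-0.5 * ((t - t0) / sigma_rise) ** 2)
        decay = amplitude * np.exp(-(t - t0) / tau_decay)
        return np.where(t < t0, rise, decay)

    t = np.linspace(0.0, 10.0, 500)                       # time in arbitrary units
    flux = flare_model(t, t0=3.0, amplitude=1.0, sigma_rise=0.15, tau_decay=1.2)
    # a polynomial background of the kind described above could be added to this
    # template before computing the Bayesian odds ratio against a no-flare model
    ```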

  12. Solving large sparse eigenvalue problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef

    1988-01-01

    An important problem in scientific computing consists in finding a few eigenvalues and corresponding eigenvectors of a very large and sparse matrix. The most popular methods to solve these problems are based on projection techniques on appropriate subspaces. The main attraction of these methods is that they only require the use of the matrix in the form of matrix by vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely Lanczos' method and Davidson's method are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, methods of performing this operation efficiently are discussed. The advantages and the disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one processor CRAY 2 and CRAY X-MP are reported. Possible parallel implementations are also discussed.
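
    A minimal sketch of the projection idea, assuming SciPy's ARPACK-based Lanczos routine rather than the paper's CRAY implementations: only sparse matrix-vector products are needed to extract a few extreme eigenpairs. The matrix is a made-up sparse symmetric example.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    n = 2000
    main = np.linspace(1.0, 100.0, n)                    # hypothetical large sparse symmetric matrix
    off = np.full(n - 1, 0.5)
    A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")

    vals, vecs = eigsh(A, k=5, which="SA")               # five smallest eigenvalues via Lanczos iterations
    print("smallest eigenvalues:", vals)
    ```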

  13. Comparative study of Gram stain, potassium hydroxide smear, culture and nested PCR in the diagnosis of fungal keratitis.

    PubMed

    Badiee, Parisa; Nejabat, Mahmood; Alborzi, Abdolvahab; Keshavarz, Fatemeh; Shakiba, Elaheh

    2010-01-01

    This study seeks to evaluate the efficacy and practicality of the molecular method, compared to the standard microbiological techniques for diagnosing fungal keratitis (FK). Patients with eye findings suspected of FK were enrolled for cornea sampling. Scrapings from the affected areas of the infected corneas were obtained and were divided into two parts: one for smears and cultures, and the other for nested PCR analysis. Of the 38 eyes, 28 were judged to have fungal infections based on clinical and positive findings in the culture, smear and responses to antifungal treatment. Potassium hydroxide, Gram staining, culture and nested PCR results (either positive or negative) matched in 76.3, 42.1, 68.4 and 81.6%, respectively. PCR is a sensitive method but due to the lack of sophisticated facilities in routine laboratory procedures, it can serve only complementarily and cannot replace conventional methods. Copyright © 2010 S. Karger AG, Basel.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azadi, Sam, E-mail: s.azadi@ucl.ac.uk; Cohen, R. E.

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of −2.3(4) and −2.7(3) kcal/mol, respectively. The best estimate of the coupled-cluster theory through perturbative triplets/complete basis set limit is −2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.

  15. Alignment methods: strategies, challenges, benchmarking, and comparative overview.

    PubMed

    Löytynoja, Ari

    2012-01-01

    Comparative evolutionary analyses of molecular sequences are solely based on the identities and differences detected between homologous characters. Errors in this homology statement, that is errors in the alignment of the sequences, are likely to lead to errors in the downstream analyses. Sequence alignment and phylogenetic inference are tightly connected and many popular alignment programs use the phylogeny to divide the alignment problem into smaller tasks. They then neglect the phylogenetic tree, however, and produce alignments that are not evolutionarily meaningful. The use of phylogeny-aware methods reduces the error but the resulting alignments, with evolutionarily correct representation of homology, can challenge the existing practices and methods for viewing and visualising the sequences. The inter-dependency of alignment and phylogeny can be resolved by joint estimation of the two; methods based on statistical models allow for inferring the alignment parameters from the data and correctly take into account the uncertainty of the solution but remain computationally challenging. Widely used alignment methods are based on heuristic algorithms and unlikely to find globally optimal solutions. The whole concept of one correct alignment for the sequences is questionable, however, as there typically exist vast numbers of alternative, roughly equally good alignments that should also be considered. This uncertainty is hidden by many popular alignment programs and is rarely correctly taken into account in the downstream analyses. The quest for finding and improving the alignment solution is complicated by the lack of suitable measures of alignment goodness. The difficulty of comparing alternative solutions also affects benchmarks of alignment methods and the results strongly depend on the measure used. As the effects of alignment error cannot be predicted, comparing the alignments' performance in downstream analyses is recommended.

  16. Seismic waveform inversion best practices: regional, global and exploration test cases

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan; Tromp, Jeroen

    2016-09-01

    Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence associated with strong nonlinearity, one or two test cases are not enough to reliably inform such decisions. We identify best practices, instead, using four seismic near-surface problems, one regional problem and two global problems. To make meaningful quantitative comparisons between methods, we carry out hundreds of inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that limited-memory BFGS provides computational savings over nonlinear conjugate gradient methods in a wide range of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization and total variation regularization are effective in different contexts. Besides questions of one strategy or another, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details involving the line search and restart conditions have a strong effect on computational cost, regardless of the chosen nonlinear optimization algorithm.
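
    A minimal sketch of the optimizer comparison on a toy nonlinear misfit, not a seismic solver: SciPy's limited-memory BFGS against nonlinear conjugate gradients, with the forward model and data invented.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    m_true = rng.normal(size=20)                     # "true" model parameters
    G = rng.normal(size=(50, 20))                    # toy operator inside a mildly nonlinear forward model
    d_obs = np.tanh(G @ m_true)                      # observed data

    def misfit(m):
        residual = np.tanh(G @ m) - d_obs
        return 0.5 * residual @ residual

    m0 = np.zeros(20)
    for method in ("L-BFGS-B", "CG"):                # limited-memory BFGS vs nonlinear conjugate gradients
        result = minimize(misfit, m0, method=method)
        print(f"{method}: {result.nit} iterations, final misfit {result.fun:.2e}")
    ```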

  17. Electron scattering intensities and Patterson functions of Skyrmions

    NASA Astrophysics Data System (ADS)

    Karliner, M.; King, C.; Manton, N. S.

    2016-06-01

    The scattering of electrons off nuclei is one of the best methods of probing nuclear structure. In this paper we focus on electron scattering off nuclei with spin and isospin zero within the Skyrme model. We consider two distinct methods and simplify our calculations by use of the Born approximation. The first method is to calculate the form factor of the spherically averaged Skyrmion charge density; the second uses the Patterson function to calculate the scattering intensity off randomly oriented Skyrmions, and spherically averages at the end. We compare our findings with experimental scattering data. We also find approximate analytical formulae for the first zero and first stationary point of a form factor.
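
    A minimal sketch of the first method described above, assuming a made-up spherically averaged charge density: in the Born approximation the form factor is the radial spherical Bessel transform F(q) = 4π ∫ ρ(r) j₀(qr) r² dr, whose first zero and first stationary point can then be located numerically.

    ```python
    import numpy as np

    r = np.linspace(1e-4, 10.0, 4000)                       # radial grid (arbitrary length units)
    dr = r[1] - r[0]
    R = 2.0
    rho = np.where(r <= R, 1.0, 0.0)                        # hypothetical uniform spherical charge density
    rho = rho / np.sum(4.0 * np.pi * r**2 * rho * dr)       # normalise the total charge to 1

    q = np.linspace(0.05, 6.0, 400)
    j0 = np.sin(np.outer(q, r)) / np.outer(q, r)            # spherical Bessel function j0(qr)
    F = np.sum(4.0 * np.pi * r**2 * rho * j0 * dr, axis=1)  # form factor F(q) in the Born approximation

    first_zero = q[np.argmax(F < 0.0)]                      # first sign change, a feature compared with data
    print("first zero of the form factor near q =", round(float(first_zero), 2))
    ```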

  18. Comparative proteomic assessment of matrisome enrichment methodologies

    PubMed Central

    Krasny, Lukas; Paul, Angela; Wai, Patty; Howard, Beatrice A.; Natrajan, Rachael C.; Huang, Paul H.

    2016-01-01

    The matrisome is a complex and heterogeneous collection of extracellular matrix (ECM) and ECM-associated proteins that play important roles in tissue development and homeostasis. While several strategies for matrisome enrichment have been developed, it is currently unknown how the performance of these different methodologies compares in the proteomic identification of matrisome components across multiple tissue types. In the present study, we perform a comparative proteomic assessment of two widely used decellularisation protocols and two extraction methods to characterise the matrisome in four murine organs (heart, mammary gland, lung and liver). We undertook a systematic evaluation of the performance of the individual methods on protein yield, matrisome enrichment capability and the ability to isolate core matrisome and matrisome-associated components. Our data find that sodium dodecyl sulphate (SDS) decellularisation leads to the highest matrisome enrichment efficiency, while the extraction protocol that comprises chemical and trypsin digestion of the ECM fraction consistently identifies the highest number of matrisomal proteins across all types of tissue examined. Matrisome enrichment had a clear benefit over non-enriched tissue for the comprehensive identification of matrisomal components in murine liver and heart. Strikingly, we find that all four matrisome enrichment methods led to significant losses in the soluble matrisome-associated proteins across all organs. Our findings highlight the multiple factors (including tissue type, matrisome class of interest and desired enrichment purity) that influence the choice of enrichment methodology, and we anticipate that these data will serve as a useful guide for the design of future proteomic studies of the matrisome. PMID:27589945

  19. Application of the enhanced homotopy perturbation method to solve the fractional-order Bagley-Torvik differential equation

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M.; Ghaderi, R.; Sheikhol Eslami, A.; Ranjbar, A.; Hosseinnia, S. H.; Momani, S.; Sadati, J.

    2009-10-01

    The enhanced homotopy perturbation method (EHPM) is applied for finding improved approximate solutions of the well-known Bagley-Torvik equation for three different cases. The main characteristic of the EHPM is using a stabilized linear part, which guarantees the stability and convergence of the overall solution. The results are finally compared with the Adams-Bashforth-Moulton numerical method, the Adomian decomposition method (ADM) and the fractional differential transform method (FDTM) to verify the performance of the EHPM.

  20. Homotopy decomposition method for solving one-dimensional time-fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Abuasad, Salah; Hashim, Ishak

    2018-04-01

    In this paper, we present the homotopy decomposition method with a modified definition of the beta fractional derivative for the first time to find the exact solution of the one-dimensional time-fractional diffusion equation. In this method, the solution takes the form of a convergent series with easily computable terms. The exact solution obtained by the proposed method is compared with the exact solution obtained using the fractional variational homotopy perturbation iteration method via a modified Riemann-Liouville derivative.

  1. Dose‐finding methods for Phase I clinical trials using pharmacokinetics in small populations

    PubMed Central

    Zohar, Sarah; Lentz, Frederike; Alberti, Corinne; Friede, Tim; Stallard, Nigel; Comets, Emmanuelle

    2017-01-01

    The aim of phase I clinical trials is to obtain reliable information on safety, tolerability, pharmacokinetics (PK), and mechanism of action of drugs with the objective of determining the maximum tolerated dose (MTD). In most phase I studies, dose‐finding and PK analysis are done separately and no attempt is made to combine them during dose allocation. In cases such as rare diseases, paediatrics, and studies in a biomarker‐defined subgroup of a defined population, the available population size will limit the number of possible clinical trials that can be conducted. Combining dose‐finding and PK analyses to allow better estimation of the dose‐toxicity curve should then be considered. In this work, we propose, study, and compare methods to incorporate PK measures in the dose allocation process during a phase I clinical trial. These methods do this in different ways, including using PK observations as a covariate, as the dependent variable or in a hierarchical model. We conducted a large simulation study that showed that adding PK measurements as a covariate only does not improve the efficiency of dose‐finding trials either in terms of the number of observed dose limiting toxicities or the probability of correct dose selection. However, incorporating PK measures does allow better estimation of the dose‐toxicity curve while maintaining the performance in terms of MTD selection compared to dose‐finding designs that do not incorporate PK information. In conclusion, using PK information in the dose allocation process enriches the knowledge of the dose‐toxicity relationship, facilitating better dose recommendation for subsequent trials. PMID:28321893

  2. Use of activity theory-based need finding for biomedical device development.

    PubMed

    Rismani, Shalaleh; Ratto, Matt; Machiel Van der Loos, H F

    2016-08-01

    Identifying the appropriate needs for biomedical device design is challenging, especially for less structured environments. The paper proposes an alternate need-finding method based on Cultural Historical Activity Theory and expanded to explicitly examine the role of devices within a socioeconomic system. This is compared to a conventional need-finding technique in a preliminary study with engineering student teams. The initial results show that the Activity Theory-based technique allows teams to gain deeper insights into their needs space.

  3. A novel method for overlapping community detection using Multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Morteza; Shahmoradi, Mohammad Reza; Heshmati, Zainabolhoda; Salehi, Mostafa

    2018-09-01

    The problem of community detection, one of the most important applications of network science, can be addressed effectively by multi-objective optimization. In this paper, we present a novel, efficient method based on this approach, and we introduce the idea of using all Pareto fronts to detect overlapping communities. The proposed method has two main advantages compared with other multi-objective optimization based approaches: scalability and the ability to find overlapping communities. Unlike most existing works, the proposed method is able to find overlapping communities effectively. The new algorithm works by extracting appropriate communities from all the Pareto optimal solutions, instead of choosing a single optimal solution. Empirical experiments on different features of separated and overlapping communities, on both synthetic and real networks, show that the proposed method performs better than other methods.

  4. Testing and Validation of Computational Methods for Mass Spectrometry.

    PubMed

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets ( http://compms.org/RefData ) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  5. Differences between Presentation Methods in Working Memory Procedures: A Matter of Working Memory Consolidation

    PubMed Central

    Ricker, Timothy J.; Cowan, Nelson

    2014-01-01

    Understanding forgetting from working memory, the memory used in ongoing cognitive processing, is critical to understanding human cognition. In the last decade a number of conflicting findings have been reported regarding the role of time in forgetting from working memory. This has led to a debate concerning whether longer retention intervals necessarily result in more forgetting. An obstacle to directly comparing conflicting reports is a divergence in methodology across studies. Studies which find no forgetting as a function of retention-interval duration tend to use sequential presentation of memory items, while studies which find forgetting as a function of retention-interval duration tend to use simultaneous presentation of memory items. Here, we manipulate the duration of retention and the presentation method of memory items, presenting items either sequentially or simultaneously. We find that these differing presentation methods can lead to different rates of forgetting because they tend to differ in the time available for consolidation into working memory. The experiments detailed here show that equating the time available for working memory consolidation equates the rates of forgetting across presentation methods. We discuss the meaning of this finding in the interpretation of previous forgetting studies and in the construction of working memory models. PMID:24059859

  6. Timing performance comparison of digital methods in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Aykac, Mehmet; Hong, Inki; Cho, Sanghee

    2010-11-01

    Accurate timing information is essential in positron emission tomography (PET). Recent improvements in high-speed electronics have made digital methods more attractive as alternative ways to create a time mark for an event. Two new digital methods (mean PMT pulse model, MPPM, and median filtered zero crossing method, MFZCM) were introduced in this work and compared to traditional methods such as digital leading edge (LE) and digital constant fraction discrimination (CFD). In addition, the performances of all four digital methods were compared to analog-based LE and CFD. The time resolution values for MPPM and MFZCM were measured below 300 ps at sampling rates of 1.6 GS/s and above, which was similar to the analog-based coincidence timing results. In addition, the two digital methods were insensitive to changes in the threshold setting, which might give some improvement in system dead time.
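
    A minimal sketch, on an invented sampled pulse, of the two traditional digital time-pickoff methods the new MPPM and MFZCM approaches were compared against (digital leading edge and digital constant fraction discrimination); the pulse shape, threshold, delay and fraction are assumptions.

    ```python
    import numpy as np

    fs = 1.6e9                                             # 1.6 GS/s sampling, as in the study
    t = np.arange(0.0, 100e-9, 1.0 / fs)
    rise = np.clip(t - 20e-9, 0.0, None)
    pulse = rise * np.exp(-rise / 5e-9)                    # toy PMT-like pulse starting at 20 ns
    pulse /= pulse.max()

    # digital leading edge: time of the first sample above a fixed threshold
    threshold = 0.2
    t_le = t[np.argmax(pulse > threshold)]

    # digital constant fraction discrimination: attenuated pulse minus a delayed copy,
    # time mark at the zero crossing after the pulse start
    delay, fraction = 4, 0.4
    cfd = fraction * pulse - np.roll(pulse, delay)
    start = np.argmax(pulse > threshold)
    t_cfd = t[start + np.argmax(cfd[start:] < 0.0)]

    print(f"leading edge: {t_le * 1e9:.2f} ns, CFD zero crossing: {t_cfd * 1e9:.2f} ns")
    ```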

  7. Effects of the closing speed of stapler jaws on bovine pancreases.

    PubMed

    Chikamoto, Akira; Hashimoto, Daisuke; Ikuta, Yoshiaki; Tsuji, Akira; Abe, Shinya; Hayashi, Hiromitsu; Imai, Katsunori; Nitta, Hidetoshi; Ishiko, Takatoshi; Watanabe, Masayuki; Beppu, Toru; Baba, Hideo

    2014-01-01

    The division of the pancreatic parenchyma using a stapler is important in pancreatic surgery, especially for laparoscopic surgery. However, this procedure has not yet been standardized. We analyzed the effects of the closing speed of the stapler jaws on bovine pancreases, assigning 10 min to the slow compression method, 5 min to the medium-fast compression method, and 30 s to the rapid compression (RC) method. The time allotted to holding (3 min) and dividing (30 s) was equal under each testing situation. We found that the RC method showed a higher pressure tolerance than the other two groups (rapid, 126 ± 49.0 mmHg; medium-fast, 55.5 ± 25.8 mmHg; slow, 45.0 ± 15.7 mmHg; p < 0.01), although the histological findings of the cut end were similar. The histological findings of the pancreatic capsule and parenchyma after compression by the stapler jaws without firing were also similar. RC may provide an advantage as measured by pressure tolerance. A small series of distal pancreatectomies with a stapler comparing different jaw-closing speeds is required to prove the feasibility of these results, after the advantages of the RC method have been confirmed under various settings.

  8. A software tool for determination of breast cancer treatment methods using data mining approach.

    PubMed

    Cakır, Abdülkadir; Demirel, Burçin

    2011-12-01

    In this work, breast cancer treatment methods are determined using data mining. For this purpose, software was developed to help oncology doctors suggest treatment methods for breast cancer patients. Data from 462 breast cancer patients, obtained from Ankara Oncology Hospital, are used to determine treatment methods for new patients. This dataset is processed with the Weka data mining tool. Classification algorithms are applied one by one to this dataset and their results are compared to find the proper treatment method. The developed software program, called "Treatment Assistant", is built on a Java NetBeans interface and uses different algorithms (IB1, Multilayer Perceptron and Decision Table) to find out which one gives the better result for each attribute to be predicted. Treatment methods are determined for breast cancer patients after surgical operation using this software tool. At the modeling step of the data mining process, different Weka algorithms are used for the output attributes: IB1 shows the best accuracy for the hormonotherapy output, Multilayer Perceptron for the tamoxifen and radiotherapy outputs, and the Decision Table algorithm for the chemotherapy output. In conclusion, this work shows that a data mining approach can be a useful tool for medical applications, particularly at the treatment decision step, and it helps the doctor decide in a short time.
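
    A minimal sketch of the model-per-output comparison described above, using scikit-learn stand-ins for Weka's IB1, Multilayer Perceptron and Decision Table learners; the patient features and labels are simulated, not the Ankara Oncology Hospital data.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(462, 10))                                            # hypothetical features for 462 patients
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 462) > 0).astype(int)   # hypothetical treatment label

    models = {
        "IB1-like (1-nearest neighbour)": KNeighborsClassifier(n_neighbors=1),
        "Multilayer Perceptron": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
        "Decision-Table-like (shallow tree)": DecisionTreeClassifier(max_depth=3, random_state=0),
    }
    for name, model in models.items():
        accuracy = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name}: cross-validated accuracy {accuracy:.3f}")
    ```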

  9. Stepping Back to Move Forward? Exploring Outdoor Education Students' Fresher and Graduate Identities and Their Impact on Employment Destinations

    ERIC Educational Resources Information Center

    Stott, Tim; Zaitseva, Elena; Cui, Vanessa

    2014-01-01

    This four-year mixed method longitudinal study utilises data collected from four cohorts of Outdoor Education (OE) students to compare "fresher" and "graduate" identities and to explore the impact of identity on graduate employment. Findings demonstrate that compared to other programmes, and the university as a whole, OE…

  10. Findings from the 2012 West Virginia Online Writing Scoring Comparability Study

    ERIC Educational Resources Information Center

    Hixson, Nate; Rhudy, Vaughn

    2013-01-01

    Student responses to the West Virginia Educational Standards Test (WESTEST) 2 Online Writing Assessment are scored by a computer-scoring engine. The scoring method is not widely understood among educators, and there exists a misperception that it is not comparable to hand scoring. To address these issues, the West Virginia Department of Education…

  11. The Emergence of a Regional Hub: Comparing International Student Choices and Experiences in South Korea

    ERIC Educational Resources Information Center

    Jon, Jae-Eun; Lee, Jenny J.; Byun, Kiyong

    2014-01-01

    As the demand for international education increases, middle-income non-English speaking countries, such as South Korea, play an increasing role in hosting the world's students. This mixed-methods study compares the different motivations and experiences of international students within and outside the East Asian region. Based on findings, this…

  12. Access to Vocational Training in Three Sectors of the European Economy. Comparative Analysis. 2nd Edition. CEDEFOP Panorama.

    ERIC Educational Resources Information Center

    Lassibille, Gerard; Paul, Jean-Jacques

    This report presents findings of a study of the theoretical and practical methods of access to continuing vocational training. It summarizes six reports that compare the following: the construction sector in Spain, France, Italy, and Luxembourg; the banking, insurance, commerce, and administration sectors in Germany, Ireland, the Netherlands, and…

  13. Measuring What People Value: A Comparison of “Attitude” and “Preference” Surveys

    PubMed Central

    Phillips, Kathryn A; Johnson, F Reed; Maddala, Tara

    2002-01-01

    Objective To compare and contrast methods and findings from two approaches to valuation used in the same survey: measurement of “attitudes” using simple rankings and ratings versus measurement of “preferences” using conjoint analysis. Conjoint analysis, a stated preference method, involves comparing scenarios composed of attribute descriptions by ranking, rating, or choosing scenarios. We explore possible explanations for our findings using focus groups conducted after the quantitative survey. Methods A self-administered survey, measuring attitudes and preferences for HIV tests, was conducted at HIV testing sites in San Francisco in 1999–2000 (n = 365, response rate=96 percent). Attitudes were measured and analyzed using standard approaches. Conjoint analysis scenarios were developed using a fractional factorial design and results analyzed using random effects probit models. We examined how the results using the two approaches were both similar and different. Results We found that “attitudes” and “preferences” were generally consistent, but there were some important differences. Although rankings based on the attitude and conjoint analysis surveys were similar, closer examination revealed important differences in how respondents valued price and attributes with “halo” effects, variation in how attribute levels were valued, and apparent differences in decision-making processes. Conclusions To our knowledge, this is the first study to compare attitude surveys and conjoint analysis surveys and to explore the meaning of the results using post-hoc focus groups. Although the overall findings for attitudes and preferences were similar, the two approaches resulted in some different conclusions. Health researchers should consider the advantages and limitations of both methods when determining how to measure what people value. PMID:12546291

  14. Investigation of IRT-Based Equating Methods in the Presence of Outlier Common Items

    ERIC Educational Resources Information Center

    Hu, Huiqin; Rogers, W. Todd; Vukmirovic, Zarko

    2008-01-01

    Common items with inconsistent b-parameter estimates may have a serious impact on item response theory (IRT)--based equating results. To find a better way to deal with the outlier common items with inconsistent b-parameters, the current study investigated the comparability of 10 variations of four IRT-based equating methods (i.e., concurrent…

  15. Compared to What? The Effectiveness of Synthetic Control Methods for Causal Inference in Educational Assessment

    ERIC Educational Resources Information Center

    Johnson, Clay Stephen

    2013-01-01

    Synthetic control methods are an innovative matching technique first introduced within the economics and political science literature that have begun to find application in educational research as well. Synthetic controls create an aggregate-level, time-series comparison for a single treated unit of interest for causal inference with observational…

  16. 3D documentation and visualization of external injury findings by integration of simple photography in CT/MRI data sets (IprojeCT).

    PubMed

    Campana, Lorenzo; Breitbeck, Robert; Bauer-Kreuz, Regula; Buck, Ursula

    2016-05-01

    This study evaluated the feasibility of documenting patterned injury using three dimensions and true colour photography without complex 3D surface documentation methods. This method is based on a generated 3D surface model using radiologic slice images (CT) while the colour information is derived from photographs taken with commercially available cameras. The external patterned injuries were documented in 16 cases using digital photography as well as highly precise photogrammetry-supported 3D structured light scanning. The internal findings of these deceased were recorded using CT and MRI. For registration of the internal with the external data, two different types of radiographic markers were used and compared. The 3D surface model generated from CT slice images was linked with the photographs, and thereby digital true-colour 3D models of the patterned injuries could be created (Image projection onto CT/IprojeCT). In addition, these external models were merged with the models of the somatic interior. We demonstrated that 3D documentation and visualization of external injury findings by integration of digital photography in CT/MRI data sets is suitable for the 3D documentation of individual patterned injuries to a body. Nevertheless, this documentation method is not a substitution for photogrammetry and surface scanning, especially when the entire bodily surface is to be recorded in three dimensions including all external findings, and when precise data is required for comparing highly detailed injury features with the injury-inflicting tool.

  17. Comparative analysis of cryopreservation methods in Chlamydomonas reinhardtii.

    PubMed

    Scarbrough, Chasity; Wirschell, Maureen

    2016-10-01

    Chlamydomonas is a model organism used for studies of many important biological processes. Traditionally, strains have been propagated on solid agar, which requires routine passaging for long-term maintenance. Cryopreservation of Chlamydomonas is possible, yet long-term viability is highly variable. Thus, improved cryopreservation methods for Chlamydomonas are an important requirement for sustained study of genetically defined strains. Here, we tested a commercial cryopreservation kit and directly compared its effectiveness to a methanol-based method. We also tested thaw-back procedures comparing the growth of cells in liquid culture or on solid agar media. We demonstrated that methanol was the superior cryopreservation method for Chlamydomonas compared to the commercial kit and that post-thaw culture conditions dramatically affect viability. We also demonstrated that cryopreserved cells could be successfully thawed and plated directly onto solid agar plates. Our findings have important implications for the long-term storage of Chlamydomonas that can likely be extended to other algal species. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Comparison of Standardized Clinical Classification with Fundus Photograph Grading for the assessment of Diabetic Retinopathy and Diabetic Macular Edema Severity

    PubMed Central

    Gangaputra, Sapna; Lovato, James F.; Hubbard, Larry; Davis, Matthew D; Esser, Barbara A; Ambrosius, Walter T.; Chew, Emily Y.; Greven, Craig; Perdue, Letitia H; Wong, Wai T.; Condren, Audree; Wilkinson, Charles P.; Agrón, Elvira; Adler, Sharon; Danis, Ronald P

    2013-01-01

    Purpose To compare evaluation by clinical examination with image grading at a reading center (RC) for the classification of diabetic retinopathy (DR) and diabetic macular edema (DME). Methods ACCORD and FIND had similar methods of clinical and fundus photograph evaluation. For analysis purposes the photographic grading scales were condensed to correspond to the clinical scales, and agreement between clinician and reading center classifications was compared. Results 6902 eyes of ACCORD participants and 3638 eyes of FIND participants were analyzed for agreement (percent, kappa) on DR on a 5-level scale. Exact agreement between clinicians and RC on DR severity category was 69% in ACCORD and 74% in FIND (Kappa 0.42 and 0.65). Sensitivity of the clinical grading to identify presence of mild nonproliferative retinopathy or worse was 0.53 in ACCORD and 0.84 in FIND. Specificities were 0.97 and 0.96, respectively. DME agreement in 6649 eyes of ACCORD participants and 3366 eyes of FIND participants was similar in both studies (Kappa 0.35 and 0.41). Sensitivities of the clinical grading to identify DME were 0.44 and 0.53 and specificities were 0.99 and 0.94, respectively. Conclusion Our results support the use of clinical information for defining broad severity categories, but not for documenting small to moderate changes in DR over time. PMID:23615341
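    The two agreement statistics quoted here (exact percent agreement and kappa) can be illustrated with a short sketch on invented paired gradings; the 5-level scale and the data below are placeholders, not the ACCORD or FIND data.

      # Hedged sketch: exact agreement and Cohen's kappa between two raters.
      import numpy as np
      from sklearn.metrics import cohen_kappa_score

      rng = np.random.default_rng(1)
      reading_center = rng.integers(0, 5, size=1000)      # invented 5-level severity grades
      clinician = np.clip(reading_center + rng.integers(-1, 2, size=1000), 0, 4)

      exact_agreement = np.mean(clinician == reading_center)
      kappa = cohen_kappa_score(clinician, reading_center)
      print(f"exact agreement = {exact_agreement:.2f}, kappa = {kappa:.2f}")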

  19. A Synthetic Comparator Approach to Local Evaluation of School-Based Substance Use Prevention Programming.

    PubMed

    Hansen, William B; Derzon, James H; Reese, Eric L

    2014-06-01

    We propose a method for creating groups against which outcomes of local pretest-posttest evaluations of evidence-based programs can be judged. This involves assessing pretest markers for new and previously conducted evaluations to identify groups that have high pretest similarity. A database of 802 prior local evaluations provided six summary measures for analysis. The proximity of all groups using these variables is calculated as standardized proximities having values between 0 and 1. Five methods for creating standardized proximities are demonstrated. The approach allows proximity limits to be adjusted to find sufficient numbers of synthetic comparators. Several index cases are examined to assess the numbers of groups available to serve as comparators. Results show that most local evaluations would have sufficient numbers of comparators available for estimating program effects. This method holds promise as a tool for local evaluations to estimate relative effectiveness. © The Author(s) 2012.

  20. Comparison of lipid and calorie loss from donor human milk among 3 methods of simulated gavage feeding: one-hour, 2-hour, and intermittent gravity feedings.

    PubMed

    Brooks, Christine; Vickers, Amy Manning; Aryal, Subhash

    2013-04-01

    The objective of this study was to compare the differences in lipid loss from 24 samples of banked donor human milk (DHM) among 3 feeding methods: DHM given by syringe pump over 1 hour, over 2 hours, and by bolus/gravity gavage. The design was comparative and descriptive; there were no human subjects. Twenty-four samples of 8 oz of DHM were divided into four 60-mL aliquots. Timed feedings were given by Medfusion 2001 syringe pumps with syringes connected to narrow-lumened extension sets designed for enteral feedings and connected to standard silastic enteral feeding tubes. Gravity feedings were given using the identical syringes connected to the same silastic feeding tubes. All aliquots were analyzed with the York Dairy Analyzer. Univariate repeated-measures analyses of variance were used for the omnibus testing for overall differences between the feeding methods. Lipid content, expressed as grams per deciliter at the end of each feeding method, was compared with the prefed control samples using Dunnett's test. The Tukey correction was used for other pairwise multiple comparisons. The univariate repeated-measures analysis of variance conducted to test for overall differences between feeding methods showed a significant difference between the methods (F = 58.57, df = 3, 69, P < .0001). Post hoc analysis using Dunnett's approach revealed that there was a significant difference in fat content between the control sample and the 1-hour and 2-hour feeding methods (P < .0001), but we did not find any significant difference in fat content between the control and the gravity feeding methods (P = .3296). Pairwise comparison using the Tukey correction revealed a significant difference between the gravity and 1-hour feeding methods (P < .0001), and between the gravity and 2-hour feeding methods (P < .0001). There was no significant difference in lipid content between the 1-hour and 2-hour feeding methods (P = .2729). Unlike gravity feedings, the timed feedings resulted in a statistically significant loss of fat as compared with their controls. These findings should raise questions about how infants in the neonatal intensive care unit are routinely gavage fed.

  1. Compass: a hybrid method for clinical and biobank data mining.

    PubMed

    Krysiak-Baltyn, K; Nordahl Petersen, T; Audouze, K; Jørgensen, Niels; Angquist, L; Brunak, S

    2014-02-01

    We describe a new method for identification of confident associations within large clinical data sets. The method is a hybrid of two existing methods: Self-Organizing Maps and Association Mining. We utilize Self-Organizing Maps as the initial step to reduce the search space, and then apply Association Mining in order to find association rules. We demonstrate that this procedure has a number of advantages compared to traditional Association Mining; it allows for handling numerical variables without a priori binning and is able to generate variable groups which act as "hotspots" for statistically significant associations. We showcase the method on infertility-related data from Danish military conscripts. The clinical data we analyzed contained both categorical questionnaire data and continuous variables generated from biological measurements, including missing values. From this data set, we successfully generated a number of interesting association rules, each relating an observation to a specific consequence together with the p-value for that finding. Additionally, we demonstrate that the method can be used on non-clinical data containing chemical-disease associations in order to find associations between different phenotypes, such as prostate cancer and breast cancer. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Logistic model analysis of neurological findings in Minamata disease and the predicting index.

    PubMed

    Nakagawa, Masanori; Kodama, Tomoko; Akiba, Suminori; Arimura, Kimiyoshi; Wakamiya, Junji; Futatsuka, Makoto; Kitano, Takao; Osame, Mitsuhiro

    2002-01-01

    To establish a statistical diagnostic method to identify patients with Minamata disease (MD) considering factors of aging and sex, we analyzed the neurological findings in MD patients, inhabitants in a methylmercury polluted (MP) area, and inhabitants in a non-MP area. We compared the neurological findings in MD patients and inhabitants aged more than 40 years in the non-MP area. Based on the different frequencies of the neurological signs in the two groups, we devised the following formula to calculate the predicting index for MD: predicting index = 1 / (1 + e^(-x)) × 100 (The value of x was calculated using the regression coefficients of each neurological finding obtained from logistic analysis. The index 100 indicated MD, and 0, non-MD). Using this method, we found that 100% of male and 98% of female patients with MD (95 cases) gave predicting indices higher than 95. Five percent of the aged inhabitants in the MP area (598 inhabitants) and 0.2% of those in the non-MP area (558 inhabitants) gave predicting indices of 50 or higher. Our statistical diagnostic method for MD was useful in distinguishing MD patients from healthy elders based on their neurological findings.
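    The predicting index above is a logistic transform of a weighted sum of neurological findings. The sketch below shows the calculation with made-up coefficients; the published regression coefficients are not reproduced here.

      # Hedged sketch of the predicting index: index = 100 / (1 + exp(-x)), where x is a
      # linear combination of neurological signs. Coefficients are illustrative placeholders.
      import math

      def predicting_index(findings, coefficients, intercept):
          """findings: dict of 0/1 neurological signs; coefficients: per-sign weights."""
          x = intercept + sum(coefficients[name] * value for name, value in findings.items())
          return 100.0 / (1.0 + math.exp(-x))

      coeffs = {"sensory_disturbance": 2.1, "ataxia": 1.4, "visual_field_constriction": 1.8}
      patient = {"sensory_disturbance": 1, "ataxia": 1, "visual_field_constriction": 0}
      print(round(predicting_index(patient, coeffs, intercept=-3.0), 1))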

  3. A comparison of methods to estimate seismic phase delays--Numerical examples for coda wave interferometry

    USGS Publications Warehouse

    Mikesell, T. Dylan; Malcolm, Alison E.; Yang, Di; Haney, Matthew M.

    2015-01-01

    Time-shift estimation between arrivals in two seismic traces before and after a velocity perturbation is a crucial step in many seismic methods. The accuracy of the estimated velocity perturbation location and amplitude depend on this time shift. Windowed cross correlation and trace stretching are two techniques commonly used to estimate local time shifts in seismic signals. In the work presented here, we implement Dynamic Time Warping (DTW) to estimate the warping function – a vector of local time shifts that globally minimizes the misfit between two seismic traces. We illustrate the differences of all three methods compared to one another using acoustic numerical experiments. We show that DTW is comparable to or better than the other two methods when the velocity perturbation is homogeneous and the signal-to-noise ratio is high. When the signal-to-noise ratio is low, we find that DTW and windowed cross correlation are more accurate than the stretching method. Finally, we show that the DTW algorithm has better time resolution when identifying small differences in the seismic traces for a model with an isolated velocity perturbation. These results impact current methods that utilize not only time shifts between (multiply) scattered waves, but also amplitude and decoherence measurements. DTW is a new tool that may find new applications in seismology and other geophysical methods (e.g., as a waveform inversion misfit function).
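    A compact sketch of the dynamic time warping idea referred to above, written in plain NumPy for two short synthetic traces: it returns the accumulated cost and the optimal alignment path, from which local time shifts can be read off. This is a generic illustration, not the authors' implementation.

      # Hedged sketch: basic dynamic time warping between two 1-D signals.
      import numpy as np

      def dtw(a, b):
          """Return the accumulated-cost matrix and the optimal warping path."""
          n, m = len(a), len(b)
          cost = np.full((n + 1, m + 1), np.inf)
          cost[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  d = abs(a[i - 1] - b[j - 1])
                  cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
          path, i, j = [], n, m                      # backtrack to recover the path
          while (i, j) != (0, 0):
              path.append((i - 1, j - 1))
              steps = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
              i, j = min(steps, key=lambda s: cost[s])
          return cost[1:, 1:], path[::-1]

      t = np.linspace(0, 1, 200)
      trace1 = np.sin(2 * np.pi * 5 * t)
      trace2 = np.sin(2 * np.pi * 5 * (t - 0.02))    # slightly delayed copy
      _, path = dtw(trace1, trace2)
      local_shifts = [j - i for i, j in path]        # per-sample time shifts (in samples)
      print(np.median(local_shifts))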

  4. A reconsideration of negative ratings for network-based recommendation

    NASA Astrophysics Data System (ADS)

    Hu, Liang; Ren, Liang; Lin, Wenbin

    2018-01-01

    Recommendation algorithms based on bipartite networks have become increasingly popular, thanks to their accuracy and flexibility. Currently, many of these methods ignore users' negative ratings. In this work, we propose a method to exploit negative ratings for the network-based inference algorithm. We find that negative ratings play a positive role regardless of sparsity of data sets. Furthermore, we improve the efficiency of our method and compare it with the state-of-the-art algorithms. Experimental results show that the present method outperforms the existing algorithms.

  5. Cultural influence on crowding norms in outdoor recreation: a comparative analysis of visitors to national parks in Turkey and the United States.

    PubMed

    Sayan, Selcuk; Krymkowski, Daniel H; Manning, Robert E; Valliere, William A; Rovelstad, Ellen L

    2013-08-01

    Formulation of standards of quality in parks and outdoor recreation can be guided by normative theory and related empirical methods. We apply this approach to measure the acceptability of a range of use levels in national parks in Turkey and the United States. Using statistical methods for comparing norm curves across contexts, we find significant differences among Americans, British, and Turkish respondents. In particular, American and British respondents were substantially less tolerant of seeing other visitors and demonstrated higher norm intensity than Turkish respondents. We discuss the role of culture in explaining these findings, paying particular attention to Turkey as a traditional "contact culture" and the conventional emphasis on solitude and escape in American environmental history and policy. We conclude with a number of recommendations to stimulate more research on the relationship between culture and outdoor recreation.

  6. Comparison of array comparative genomic hybridization and quantitative real-time PCR-based aneuploidy screening of blastocyst biopsies.

    PubMed

    Capalbo, Antonio; Treff, Nathan R; Cimadomo, Danilo; Tao, Xin; Upham, Kathleen; Ubaldi, Filippo Maria; Rienzi, Laura; Scott, Richard T

    2015-07-01

    Comprehensive chromosome screening (CCS) methods are being extensively used to select chromosomally normal embryos in human assisted reproduction. Some concerns related to the stage of analysis and which aneuploidy screening method to use still remain. In this study, the reliability of blastocyst-stage aneuploidy screening and the diagnostic performance of the two most widely used CCS methods (quantitative real-time PCR (qPCR) and array comparative genome hybridization (aCGH)) have been assessed. aCGH aneuploid blastocysts were rebiopsied, blinded, and evaluated by qPCR. Discordant cases were subsequently rebiopsied, blinded, and evaluated by single-nucleotide polymorphism (SNP) array-based CCS. Although 81.7% of embryos showed the same diagnosis when comparing aCGH and qPCR-based CCS, 18.3% (22/120) of embryos gave a discordant result for at least one chromosome. SNP array reanalysis showed that a discordance was reported in ten blastocysts for aCGH, mostly due to false positives, and in four cases for qPCR. The discordant aneuploidy call rate per chromosome was significantly higher for aCGH (5.7%) compared with qPCR (0.6%; P<0.01). To corroborate these findings, 39 embryos were simultaneously biopsied for aCGH and qPCR during blastocyst-stage aneuploidy screening cycles. Of these, 35 matched, including all 21 euploid embryos. Blinded SNP analysis on rebiopsies of the four discordant embryos matched qPCR. These findings demonstrate the high reliability of diagnosis performed at the blastocyst stage with the use of different CCS methods. However, the application of aCGH can be expected to result in a higher aneuploidy rate than other contemporary methods of CCS.

  7. A parametric method for determining the number of signals in narrow-band direction finding

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Fuhrmann, Daniel R.

    1991-08-01

    A novel and more accurate method to determine the number of signals in the multisource direction finding problem is developed. The information-theoretic criteria of Yin and Krishnaiah (1988) are applied to a set of quantities which are evaluated from the log-likelihood function. Based on proven asymptotic properties of the maximum likelihood estimation, these quantities have the properties required by the criteria. Since the information-theoretic criteria use these quantities instead of the eigenvalues of the estimated correlation matrix, this approach possesses the advantage of not requiring a subjective threshold, and also provides higher performance than when eigenvalues are used. Simulation results are presented and compared to those obtained from the nonparametric method given by Wax and Kailath (1985).

  8. Clinical versus actuarial judgment.

    PubMed

    Dawes, R M; Faust, D; Meehl, P E

    1989-03-31

    Professionals are frequently consulted to diagnose and predict human behavior; optimal treatment and planning often hinge on the consultant's judgmental accuracy. The consultant may rely on one of two contrasting approaches to decision-making--the clinical and actuarial methods. Research comparing these two approaches shows the actuarial method to be superior. Factors underlying the greater accuracy of actuarial methods, sources of resistance to the scientific findings, and the benefits of increased reliance on actuarial approaches are discussed.

  9. Finding Direction in the Search for Selection.

    PubMed

    Thiltgen, Grant; Dos Reis, Mario; Goldstein, Richard A

    2017-01-01

    Tests for positive selection have mostly been developed to look for diversifying selection where change away from the current amino acid is often favorable. However, in many cases we are interested in directional selection where there is a shift toward specific amino acids, resulting in increased fitness in the species. Recently, a few methods have been developed to detect and characterize directional selection on a molecular level. Using the results of evolutionary simulations as well as HIV drug resistance data as models of directional selection, we compare two such methods with each other, as well as against a standard method for detecting diversifying selection. We find that the method to detect diversifying selection also detects directional selection under certain conditions. One method developed for detecting directional selection is powerful and accurate for a wide range of conditions, while the other can generate an excessive number of false positives.

  10. Tracking rural-to-urban migration in China: Lessons from the 2005 inter-census population survey.

    PubMed

    Ebenstein, Avraham; Zhao, Yaohui

    2015-01-01

    We examined migration in China using the 2005 inter-census population survey, in which migrants were registered at both their place of original (hukou) residence and at their destination. We find evidence that the estimated number of internal migrants in China is extremely sensitive to the enumeration method. We estimate that the traditional destination-based survey method fails to account for more than a third of migrants found using comparable origin-based methods. The 'missing' migrants are disproportionately young, male, and holders of rural hukou. We find that origin-based methods are more effective at capturing migrants who travel short distances for short periods, whereas destination-based methods are more effective when entire households have migrated and no remaining family members are located at the hukou location. We conclude with a set of policy recommendations for the design of population surveys in countries with large migrant populations.

  11. Teaching Cardiac Examination Skills

    PubMed Central

    Smith, Christopher A; Hart, Avery S; Sadowski, Laura S; Riddle, Janet; Evans, Arthur T; Clarke, Peter M; Ganschow, Pamela S; Mason, Ellen; Sequeira, Winston; Wang, Yue

    2006-01-01

    OBJECTIVE To determine if structured teaching of bedside cardiac examination skills improves medical residents' examination technique and their identification of key clinical findings. DESIGN Firm-based single-blinded controlled trial. SETTING Inpatient service at a university-affiliated public teaching hospital. PARTICIPANTS Eighty Internal Medicine residents. METHODS The study assessed 2 intervention groups that received 3-hour bedside teaching sessions during their 4-week rotation using either: (1) a traditional teaching method, “demonstration and practice” (DP) (n=26) or (2) an innovative method, “collaborative discovery” (CD) (n=24). The control group received their usual ward teaching sessions (n=25). The main outcome measures were scores on examination technique and correct identification of key clinical findings on an objective structured clinical examination (OSCE). RESULTS All 3 groups had similar scores for both their examination technique and identification of key findings in the preintervention OSCE. After teaching, both intervention groups significantly improved their technical examination skills compared with the control group. The increase was 10% (95% confidence interval [CI] 4% to 17%) for CD versus control and 12% (95% CI 6% to 19%) for DP versus control (both P<.005) equivalent to an additional 3 to 4 examination skills being correctly performed. Improvement in key findings was limited to a 5% (95% CI 2% to 9%) increase for the CD teaching method, CD versus control P=.046, equivalent to the identification of an additional 2 key clinical findings. CONCLUSIONS Both programs of bedside teaching increase the technical examination skills of residents but improvements in the identification of key clinical findings were modest and only demonstrated with a new method of teaching. PMID:16423116

  12. In-Service Teacher Training in Japan and Turkey: A Comparative Analysis of Institutions and Practices

    ERIC Educational Resources Information Center

    Bayrakci, Mustafa

    2009-01-01

    The purpose of this study is to compare policies and practices relating to teacher in-service training in Japan and Turkey. On the basis of the findings of the study, suggestions are made about in-service training activities in Turkey. The research was carried using qualitative research methods. In-service training activities in the two education…

  13. Motivation to Study Core French: Comparing Recent Immigrants and Canadian-Born Secondary School Students

    ERIC Educational Resources Information Center

    Mady, Callie J.

    2010-01-01

    As the number of Allophone students attending public schools in Canada continues to increase (Statistics Canada, 2008), it is clear that a need exists in English-dominant areas to purposefully address the integration of these students into core French. I report the findings of a mixed-method study that was conducted to assess and compare the…

  14. A Simulation of Alternatives for Wholesale Inventory Replenishment

    DTIC Science & Technology

    2016-03-01

    …algorithmic details. The last method is a mixed-integer, linear optimization model. Comparative Inventory Simulation, a discrete event simulation model, is designed to find fill rates achieved for each National Item… Keywords: simulation; event graphs; reorder point; fill-rate; backorder; discrete event simulation; wholesale inventory optimization model.

  15. Comparing Task-Based Instruction and Traditional Instruction on Task Engagement and Vocabulary Development in Secondary Language Education

    ERIC Educational Resources Information Center

    Halici Page, Merve; Mede, Enisa

    2018-01-01

    The purpose of this study was to investigate and compare the impact of task-based instruction (TBI) and traditional instruction (TI) on the motivation and vocabulary development in secondary language education. The focus of the study was to also find out the perceptions of teachers about implementing these two instructional methods in their…

  16. Comparative effectiveness of next generation genomic sequencing for disease diagnosis: design of a randomized controlled trial in patients with colorectal cancer/polyposis syndromes.

    PubMed

    Gallego, Carlos J; Bennette, Caroline S; Heagerty, Patrick; Comstock, Bryan; Horike-Pyne, Martha; Hisama, Fuki; Amendola, Laura M; Bennett, Robin L; Dorschner, Michael O; Tarczy-Hornoch, Peter; Grady, William M; Fullerton, S Malia; Trinidad, Susan B; Regier, Dean A; Nickerson, Deborah A; Burke, Wylie; Patrick, Donald L; Jarvik, Gail P; Veenstra, David L

    2014-09-01

    Whole exome and whole genome sequencing are applications of next generation sequencing transforming clinical care, but there is little evidence whether these tests improve patient outcomes or if they are cost effective compared to current standard of care. These gaps in knowledge can be addressed by comparative effectiveness and patient-centered outcomes research. We designed a randomized controlled trial that incorporates these research methods to evaluate whole exome sequencing compared to usual care in patients being evaluated for hereditary colorectal cancer and polyposis syndromes. Approximately 220 patients will be randomized and followed for 12 months after return of genomic findings. Patients will receive findings associated with colorectal cancer in a first return of results visit, and findings not associated with colorectal cancer (incidental findings) during a second return of results visit. The primary outcome is efficacy to detect mutations associated with these syndromes; secondary outcomes include psychosocial impact, cost-effectiveness and comparative costs. The secondary outcomes will be obtained via surveys before and after each return visit. The expected challenges in conducting this randomized controlled trial include the relatively low prevalence of genetic disease, difficult interpretation of some genetic variants, and uncertainty about which incidental findings should be returned to patients. The approaches utilized in this study may help guide other investigators in clinical genomics to identify useful outcome measures and strategies to address comparative effectiveness questions about the clinical implementation of genomic sequencing in clinical care. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Comparing Impact Findings from Design-Based and Model-Based Methods: An Empirical Investigation. NCEE 2017-4026

    ERIC Educational Resources Information Center

    Kautz, Tim; Schochet, Peter Z.; Tilley, Charles

    2017-01-01

    A new design-based theory has recently been developed to estimate impacts for randomized controlled trials (RCTs) and basic quasi-experimental designs (QEDs) for a wide range of designs used in social policy research (Imbens & Rubin, 2015; Schochet, 2016). These methods use the potential outcomes framework and known features of study designs…

  18. Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm

    PubMed Central

    Veladi, H.

    2014-01-01

    A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm. PMID:25202717

  19. Performance-based seismic design of steel frames utilizing colliding bodies algorithm.

    PubMed

    Veladi, H

    2014-01-01

    A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm.

  20. Dose-finding designs for trials of molecularly targeted agents and immunotherapies

    PubMed Central

    Chiuzan, Cody; Shtaynberger, Jonathan; Manji, Gulam A.; Duong, Jimmy K.; Schwartz, Gary K.; Ivanova, Anastasia; Lee, Shing M.

    2017-01-01

    Recently, there has been a surge of early phase trials of molecularly targeted agents (MTAs) and immunotherapies. These new therapies have different toxicity profiles compared to cytotoxic therapies. MTAs can benefit from new trial designs that allow inclusion of low-grade toxicities, late-onset toxicities, addition of an efficacy endpoint, and flexibility in the specification of a target toxicity probability. To study the degree of adoption of these methods, we conducted a Web of Science search of articles published between 2008 and 2014 that describe phase 1 oncology trials. Trials were categorized based on the dose-finding design used and the type of drug studied. Out of 1,712 dose-finding trials that met our criteria, 1,591 (92.9%) utilized a rule-based design, and 92 (5.4%; range 2.3% in 2009 to 9.7% in 2014) utilized a model-based or novel design. Over half of the trials tested an MTA or immunotherapy. Among the MTA and immunotherapy trials, 5.8% used model-based methods, compared to 3.9% and 8.3% of the chemotherapy or radiotherapy trials, respectively. While the percentage of trials using novel dose-finding designs has tripled since 2007, only 7.1% of trials use novel designs. PMID:28166468

  1. Introducing conjoint analysis method into delayed lotteries studies: its validity and time stability are higher than in adjusting

    PubMed Central

    Białek, Michał; Markiewicz, Łukasz; Sawicki, Przemysław

    2015-01-01

    Delayed lotteries are much more common in everyday life than pure lotteries. Usually, we need to wait to find out the outcome of a risky decision (e.g., investing in a stock market, engaging in a relationship). However, most research has studied time discounting and probability discounting in isolation, using methodologies designed specifically to track changes in one parameter. The most commonly used method is adjusting, but its reported validity and time stability in research on discounting are suboptimal. The goal of this study was to introduce a novel method for analyzing delayed lotteries—conjoint analysis—which hypothetically is more suitable for analyzing individual preferences in this area. A set of two studies compared conjoint analysis with adjusting. The results suggest that individual parameters of discounting strength estimated with conjoint analysis have higher predictive value (Studies 1 and 2) and are more stable over time (Study 2) compared to adjusting. We discuss these findings, despite the exploratory character of the reported studies, by suggesting that future research on delayed lotteries should be cross-validated using both methods. PMID:25674069

  2. Implementation of an effective hybrid GA for large-scale traveling salesman problems.

    PubMed

    Nguyen, Hung Dinh; Yoshihara, Ikuo; Yamamori, Kunihito; Yasunaga, Moritoshi

    2007-02-01

    This correspondence describes a hybrid genetic algorithm (GA) to find high-quality solutions for the traveling salesman problem (TSP). The proposed method is based on a parallel implementation of a multipopulation steady-state GA involving local search heuristics. It uses a variant of the maximal preservative crossover and the double-bridge move mutation. An effective implementation of the Lin-Kernighan heuristic (LK) is incorporated into the method to compensate for the GA's lack of local search ability. The method is validated by comparing it with the LK-Helsgaun method (LKH), which is one of the most effective methods for the TSP. Experimental results with benchmarks having up to 316228 cities show that the proposed method works more effectively and efficiently than LKH when solving large-scale problems. Finally, the method is used together with the implementation of the iterated LK to find a new best tour (as of June 2, 2003) for a 1904711-city TSP challenge.
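    The double-bridge move mentioned above is a standard 4-opt perturbation for TSP tours; a common generic formulation (not the paper's code) cuts the tour at three random points and reconnects the segments in a different order:

      # Hedged sketch: a double-bridge (4-opt) perturbation of a TSP tour.
      import random

      def double_bridge(tour):
          """Cut the tour into four segments A, B, C, D and reconnect as A + C + B + D."""
          n = len(tour)
          i, j, k = sorted(random.sample(range(1, n), 3))
          a, b, c, d = tour[:i], tour[i:j], tour[j:k], tour[k:]
          return a + c + b + d

      random.seed(0)
      print(double_bridge(list(range(10))))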

  3. Coupling HYDRUS-1D Code with PA-DDS Algorithms for Inverse Calibration

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Asadzadeh, Masoud; Holländer, Hartmut

    2017-04-01

    Numerical modelling requires calibration to predict future stages. A standard approach is inverse calibration, in which multi-objective optimization algorithms are generally used to find a solution, e.g. an optimal set of van Genuchten-Mualem (VGM) parameters for predicting water fluxes in the vadose zone. We coupled HYDRUS-1D with PA-DDS to add a new, robust function for inverse calibration to the model. The PA-DDS method is a recently developed multi-objective optimization algorithm, which combines Dynamically Dimensioned Search (DDS) and the Pareto Archived Evolution Strategy (PAES). The results were compared to a standard method (the Marquardt-Levenberg method) implemented in HYDRUS-1D. Calibration performance is evaluated using observed and simulated soil moisture at two soil layers in Southern Abbotsford, British Columbia, Canada, in terms of the root mean squared error (RMSE) and the Nash-Sutcliffe Efficiency (NSE). Results showed low RMSE values of 0.014 and 0.017 and strong NSE values of 0.961 and 0.939. Compared to the results obtained with the Marquardt-Levenberg method, we obtained better calibration results for the deeper soil sensors. However, the VGM parameters were similar to those in previous studies. Both methods are equally computationally efficient. We claim that a direct implementation of PA-DDS into HYDRUS-1D should reduce the computational effort further. Thus, the PA-DDS method is efficient for calibrating recharge in complex vadose zone modelling with multiple soil layers and is a potential tool for calibrating heat and solute transport. Future work should focus on the effectiveness of PA-DDS for calibrating more complex versions of the model with complex vadose zone settings, with more soil layers, and against measured heat and solute transport. Keywords: Recharge, Calibration, HYDRUS-1D, Multi-objective Optimization
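    For reference, the two goodness-of-fit measures quoted above can be computed as follows; this is a generic sketch on invented observed and simulated soil-moisture values, not output from the coupled HYDRUS-1D/PA-DDS run.

      # Hedged sketch: RMSE and Nash-Sutcliffe Efficiency for calibration evaluation.
      import numpy as np

      def rmse(obs, sim):
          obs, sim = np.asarray(obs), np.asarray(sim)
          return np.sqrt(np.mean((obs - sim) ** 2))

      def nse(obs, sim):
          obs, sim = np.asarray(obs), np.asarray(sim)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      observed = [0.21, 0.24, 0.28, 0.26, 0.22]    # invented soil-moisture values
      simulated = [0.22, 0.23, 0.27, 0.27, 0.21]
      print(rmse(observed, simulated), nse(observed, simulated))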

  4. Comparison of Control Group Generating Methods.

    PubMed

    Szekér, Szabolcs; Fogarassy, György; Vathy-Fogarassy, Ágnes

    2017-01-01

    Retrospective studies suffer from drawbacks such as selection bias. As the selection of the control group has a significant impact on the evaluation of the results, it is very important to find the proper method to generate the most appropriate control group. In this paper we suggest two nearest neighbors based control group selection methods that aim to achieve good matching between the individuals of case and control groups. The effectiveness of the proposed methods is evaluated by runtime and accuracy tests and the results are compared to the classical stratified sampling method.
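    A minimal sketch of nearest-neighbours control-group selection in the spirit described above: standardise the covariates and, for each case, pick the closest candidates from the unexposed pool. The data and the 1:2 matching ratio are invented; this is not the authors' algorithm.

      # Hedged sketch: matched-control selection by nearest-neighbour search on
      # standardised covariates.
      import numpy as np
      from sklearn.neighbors import NearestNeighbors
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(2)
      cases = rng.normal(loc=1.0, size=(50, 3))    # invented case covariates
      pool = rng.normal(loc=0.0, size=(500, 3))    # invented candidate controls

      scaler = StandardScaler().fit(np.vstack([cases, pool]))
      nn = NearestNeighbors(n_neighbors=2).fit(scaler.transform(pool))
      _, idx = nn.kneighbors(scaler.transform(cases))
      controls = pool[np.unique(idx.ravel())]      # selected control group
      print(controls.shape)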

  5. Explicit methods in extended phase space for inseparable Hamiltonian problems

    NASA Astrophysics Data System (ADS)

    Pihajoki, Pauli

    2015-03-01

    We present a method for explicit leapfrog integration of inseparable Hamiltonian systems by means of an extended phase space. A suitably defined new Hamiltonian on the extended phase space leads to equations of motion that can be numerically integrated by standard symplectic leapfrog (splitting) methods. When the leapfrog is combined with coordinate mixing transformations, the resulting algorithm shows good long term stability and error behaviour. We extend the method to non-Hamiltonian problems as well, and investigate optimal methods of projecting the extended phase space back to original dimension. Finally, we apply the methods to a Hamiltonian problem of geodesics in a curved space, and a non-Hamiltonian problem of a forced non-linear oscillator. We compare the performance of the methods to a general purpose differential equation solver LSODE, and the implicit midpoint method, a symplectic one-step method. We find the extended phase space methods to compare favorably to both for the Hamiltonian problem, and to the implicit midpoint method in the case of the non-linear oscillator.

  6. Magnetic resonance image segmentation using multifractal techniques

    NASA Astrophysics Data System (ADS)

    Yu, Yue-e.; Wang, Fang; Liu, Li-lin

    2015-11-01

    In order to delineate target regions in magnetic resonance images (MRI) with disease, the classical multifractal spectrum (MFS) segmentation method and the more recent multifractal detrended fluctuation spectrum (MF-DFS)-based segmentation method are employed in our study. One of our main conclusions from the experiments is that both multifractal-based methods are workable for handling MRIs. The best result is obtained by the MF-DFS-based method using Lh10 as the local characteristic. The anti-noise experiments also support this conclusion. This interesting finding shows that, for MRIs, the features are better represented by the strong fluctuations than by the weak fluctuations. Comparing the multifractal nature of the lesion and non-lesion areas on the basis of the segmentation results, another interesting finding is that the gray-value fluctuation in the lesion area is much more severe than in the non-lesion area.

  7. Comparative proteomic assessment of matrisome enrichment methodologies.

    PubMed

    Krasny, Lukas; Paul, Angela; Wai, Patty; Howard, Beatrice A; Natrajan, Rachael C; Huang, Paul H

    2016-11-01

    The matrisome is a complex and heterogeneous collection of extracellular matrix (ECM) and ECM-associated proteins that play important roles in tissue development and homeostasis. While several strategies for matrisome enrichment have been developed, it is currently unknown how the performance of these different methodologies compares in the proteomic identification of matrisome components across multiple tissue types. In the present study, we perform a comparative proteomic assessment of two widely used decellularisation protocols and two extraction methods to characterise the matrisome in four murine organs (heart, mammary gland, lung and liver). We undertook a systematic evaluation of the performance of the individual methods on protein yield, matrisome enrichment capability and the ability to isolate core matrisome and matrisome-associated components. Our data find that sodium dodecyl sulphate (SDS) decellularisation leads to the highest matrisome enrichment efficiency, while the extraction protocol that comprises chemical and trypsin digestion of the ECM fraction consistently identifies the highest number of matrisomal proteins across all types of tissue examined. Matrisome enrichment had a clear benefit over non-enriched tissue for the comprehensive identification of matrisomal components in murine liver and heart. Strikingly, we find that all four matrisome enrichment methods led to significant losses in the soluble matrisome-associated proteins across all organs. Our findings highlight the multiple factors (including tissue type, matrisome class of interest and desired enrichment purity) that influence the choice of enrichment methodology, and we anticipate that these data will serve as a useful guide for the design of future proteomic studies of the matrisome. © 2016 The Author(s).

  8. Familiarity Vs Trust: A Comparative Study of Domain Scientists' Trust in Visual Analytics and Conventional Analysis Methods.

    PubMed

    Dasgupta, Aritra; Lee, Joon-Yong; Wilson, Ryan; Lafrance, Robert A; Cramer, Nick; Cook, Kristin; Payne, Samuel

    2017-01-01

    Combining interactive visualization with automated analytical methods like statistics and data mining facilitates data-driven discovery. These visual analytic methods are beginning to be instantiated within mixed-initiative systems, where humans and machines collaboratively influence evidence-gathering and decision-making. But an open research question is that, when domain experts analyze their data, can they completely trust the outputs and operations on the machine-side? Visualization potentially leads to a transparent analysis process, but do domain experts always trust what they see? To address these questions, we present results from the design and evaluation of a mixed-initiative, visual analytics system for biologists, focusing on analyzing the relationships between familiarity of an analysis medium and domain experts' trust. We propose a trust-augmented design of the visual analytics system, that explicitly takes into account domain-specific tasks, conventions, and preferences. For evaluating the system, we present the results of a controlled user study with 34 biologists where we compare the variation of the level of trust across conventional and visual analytic mediums and explore the influence of familiarity and task complexity on trust. We find that despite being unfamiliar with a visual analytic medium, scientists seem to have an average level of trust that is comparable with the same in conventional analysis medium. In fact, for complex sense-making tasks, we find that the visual analytic system is able to inspire greater trust than other mediums. We summarize the implications of our findings with directions for future research on trustworthiness of visual analytic systems.

  9. Efficient Computing Budget Allocation for Finding Simplest Good Designs

    PubMed Central

    Jia, Qing-Shan; Zhou, Enlu; Chen, Chun-Hung

    2012-01-01

    In many applications some designs are easier to implement, require less training data and shorter training time, and consume less storage than the others. Such designs are called simple designs, and are usually preferred over complex ones when they all have good performance. Despite the abundant existing studies on how to find good designs in simulation-based optimization (SBO), there exist few studies on finding simplest good designs. We consider this important problem in this paper, and make the following contributions. First, we provide lower bounds for the probabilities of correctly selecting the m simplest designs with top performance, and selecting the best m such simplest good designs, respectively. Second, we develop two efficient computing budget allocation methods to find m simplest good designs and to find the best m such designs, respectively; and show their asymptotic optimalities. Third, we compare the performance of the two methods with equal allocations over 6 academic examples and a smoke detection problem in wireless sensor networks. We hope that this work brings insight to finding the simplest good designs in general. PMID:23687404

  10. How Asian Teachers Polish Each Lesson to Perfection.

    ERIC Educational Resources Information Center

    Stigler, James W.; Stevenson, Harold W.

    1991-01-01

    Compares elementary mathematics instruction in Taiwan, Japan, Chicago, and Minneapolis. Finds that American teachers are overworked and devote less time to conducting lessons than Asian teachers, who employ proven inductive methods within the framework of standardized curricula. (DM)

  11. Evaluation of nutrient agar for the culture of Mycobacterium tuberculosis using the microcolony detection method.

    PubMed

    Satti, L; Abbasi, S; Faiz, U

    2012-07-01

    We evaluated nutrient agar using the microcolony detection method for the recovery of Mycobacterium tuberculosis on 37 acid-fast bacilli (AFB) positive sputum specimens, and compared it with conventional Löwenstein-Jensen (LJ) medium. Nutrient agar detected 35 isolates compared to 34 on LJ medium. The mean time to detection of mycobacteria on nutrient agar and LJ medium was respectively 9.6 and 21.4 days. The contamination rate on nutrient agar and LJ medium was respectively 5.4% and 2.7%. Nutrient agar detects M. tuberculosis more rapidly than LJ medium, and could be an economical, rapid culture method in resource-poor settings, provided our findings are confirmed by further studies.

  12. Combined node and link partitions method for finding overlapping communities in complex networks

    PubMed Central

    Jin, Di; Gabrys, Bogdan; Dang, Jianwu

    2015-01-01

    Community detection in complex networks is a fundamental data analysis task in various domains, and how to effectively find overlapping communities in real applications is still a challenge. In this work, we propose a new unified model and method for finding the best overlapping communities on the basis of the associated node and link partitions derived from the same framework. Specifically, we first describe a unified model that accommodates node and link communities (partitions) together, and then present a nonnegative matrix factorization method to learn the parameters of the model. Thereafter, we infer the overlapping communities based on the derived node and link communities, i.e., determine each overlapped community between the corresponding node and link community with a greedy optimization of a local community function conductance. Finally, we introduce a model selection method based on consensus clustering to determine the number of communities. We have evaluated our method on both synthetic and real-world networks with ground-truths, and compared it with seven state-of-the-art methods. The experimental results demonstrate the superior performance of our method over the competing ones in detecting overlapping communities for all analysed data sets. Improved performance is particularly pronounced in cases of more complicated networked community structures. PMID:25715829
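    As a rough illustration of the nonnegative matrix factorization ingredient mentioned above (not the authors' specific unified model), the sketch below factorizes a small adjacency matrix and reads overlapping memberships off the factor matrix with a threshold.

      # Hedged sketch: NMF on an adjacency matrix; rows of W give soft community
      # memberships, and thresholding yields possibly overlapping assignments.
      import numpy as np
      from sklearn.decomposition import NMF

      # Tiny synthetic graph: two 5-node cliques sharing node 4.
      A = np.zeros((9, 9))
      for group in ([0, 1, 2, 3, 4], [4, 5, 6, 7, 8]):
          for i in group:
              for j in group:
                  if i != j:
                      A[i, j] = 1.0

      W = NMF(n_components=2, init="nndsvda", random_state=0).fit_transform(A)
      membership = W / W.sum(axis=1, keepdims=True)
      overlapping = [np.where(row > 0.3)[0].tolist() for row in membership]
      print(overlapping)   # node 4 is expected to belong to both communities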

  13. Realistic inversion of diffraction data for an amorphous solid: The case of amorphous silicon

    NASA Astrophysics Data System (ADS)

    Pandey, Anup; Biswas, Parthapratim; Bhattarai, Bishal; Drabold, D. A.

    2016-12-01

    We apply a method called "force-enhanced atomic refinement" (FEAR) to create a computer model of amorphous silicon (a -Si) based upon the highly precise x-ray diffraction experiments of Laaziri et al. [Phys. Rev. Lett. 82, 3460 (1999), 10.1103/PhysRevLett.82.3460]. The logic underlying our calculation is to estimate the structure of a real sample a -Si using experimental data and chemical information included in a nonbiased way, starting from random coordinates. The model is in close agreement with experiment and also sits at a suitable energy minimum according to density-functional calculations. In agreement with experiments, we find a small concentration of coordination defects that we discuss, including their electronic consequences. The gap states in the FEAR model are delocalized compared to a continuous random network model. The method is more efficient and accurate, in the sense of fitting the diffraction data, than conventional melt-quench methods. We compute the vibrational density of states and the specific heat, and we find that both compare favorably to experiments.

  14. Recent developments in the Dorfman-Berbaum-Metz procedure for multireader ROC study analysis.

    PubMed

    Hillis, Stephen L; Berbaum, Kevin S; Metz, Charles E

    2008-05-01

    The Dorfman-Berbaum-Metz (DBM) method has been one of the most popular methods for analyzing multireader receiver-operating characteristic (ROC) studies since it was proposed in 1992. Despite its popularity, the original procedure has several drawbacks: it is limited to jackknife accuracy estimates, it is substantially conservative, and it is not based on a satisfactory conceptual or theoretical model. Recently, solutions to these problems have been presented in three papers. Our purpose is to summarize and provide an overview of these recent developments. We present and discuss the recently proposed solutions for the various drawbacks of the original DBM method. We compare the solutions in a simulation study and find that they result in improved performance for the DBM procedure. We also compare the solutions using two real data studies and find that the modified DBM procedure that incorporates these solutions yields more significant results and clearer interpretations of the variance component parameters than the original DBM procedure. We recommend using the modified DBM procedure that incorporates the recent developments.

  15. Chemical accuracy from quantum Monte Carlo for the benzene dimer.

    PubMed

    Azadi, Sam; Cohen, R E

    2015-09-14

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of -2.3(4) and -2.7(3) kcal/mol, respectively. The best estimate of the coupled-cluster theory through perturbative triplets/complete basis set limit is -2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.

  16. Numerical Polynomial Homotopy Continuation Method and String Vacua

    DOE PAGES

    Mehta, Dhagash

    2011-01-01

    Finding vacua for the four-dimensional effective theories for supergravity which descend from flux compactifications and analyzing them according to their stability is one of the central problems in string phenomenology. Except for some simple toy models, it is, however, difficult to find all the vacua analytically. Recently developed algorithmic methods based on symbolic computer algebra can be of great help in the more realistic models. However, they suffer from serious algorithmic complexities and are limited to small system sizes. In this paper, we review a numerical method called the numerical polynomial homotopy continuation (NPHC) method, first used in the areas of lattice field theories, which by construction finds all of the vacua of a given potential that is known to have only isolated solutions. The NPHC method is known to suffer from no major algorithmic complexities and is embarrassingly parallelizable, and hence its applicability goes way beyond the existing symbolic methods. We first solve a simple toy model as a warm-up example to demonstrate the NPHC method at work. We then show that all the vacua of a more complicated model of a compactified M theory, which has an SU(3) structure, can be obtained by using a desktop machine in just about an hour, a feat which was reported to be prohibitively difficult by the existing symbolic methods. Finally, we compare the various technicalities between the two methods.

  17. Comparison of genetic algorithm methods for fuel management optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1995-12-31

    The CIGARO system was developed for genetic algorithm fuel management optimization. Tests were performed to find the best fuel-location swap mutation operator probability and to compare the genetic algorithm to a truly random search method. Tests showed the fuel swap probability should be between 0% and 10%, and a 50% probability definitely hampered the optimization. The genetic algorithm performed significantly better than the random search method, which did not even satisfy the peak normalized power constraint.
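    The swap mutation studied above can be illustrated generically: with a given probability, exchange the fuel assemblies at two randomly chosen core positions. The sketch below is a generic operator on invented labels, not the CIGARO implementation.

      # Hedged sketch: fuel-location swap mutation with a tunable probability
      # (the study found probabilities between 0% and 10% worked best).
      import random

      def swap_mutation(pattern, probability):
          pattern = list(pattern)
          if random.random() < probability:
              i, j = random.sample(range(len(pattern)), 2)
              pattern[i], pattern[j] = pattern[j], pattern[i]
          return pattern

      random.seed(3)
      loading = ["A1", "A2", "B1", "B2", "C1", "C2"]   # invented assembly labels
      print(swap_mutation(loading, probability=0.10))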

  18. Multispectral image fusion for target detection

    NASA Astrophysics Data System (ADS)

    Leviner, Marom; Maltz, Masha

    2009-09-01

    Various different methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and Principal Components Analysis (PCA), and against its two source bands, visible and infrared. The task that we studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general and specific fusion methods in particular would be superior to using the original image sources can be further addressed.

  19. Normal uniform mixture differential gene expression detection for cDNA microarrays

    PubMed Central

    Dean, Nema; Raftery, Adrian E

    2005-01-01

    Background One of the primary tasks in analysing gene expression data is finding genes that are differentially expressed in different samples. Multiple testing issues due to the thousands of tests run make some of the more popular methods for doing this problematic. Results We propose a simple method, Normal Uniform Differential Gene Expression (NUDGE) detection for finding differentially expressed genes in cDNA microarrays. The method uses a simple univariate normal-uniform mixture model, in combination with new normalization methods for spread as well as mean that extend the lowess normalization of Dudoit, Yang, Callow and Speed (2002) [1]. It takes account of multiple testing, and gives probabilities of differential expression as part of its output. It can be applied to either single-slide or replicated experiments, and it is very fast. Three datasets are analyzed using NUDGE, and the results are compared to those given by other popular methods: unadjusted and Bonferroni-adjusted t tests, Significance Analysis of Microarrays (SAM), and Empirical Bayes for microarrays (EBarrays) with both Gamma-Gamma and Lognormal-Normal models. Conclusion The method gives a high probability of differential expression to genes known/suspected a priori to be differentially expressed and a low probability to the others. In terms of known false positives and false negatives, the method outperforms all multiple-replicate methods except for the Gamma-Gamma EBarrays method to which it offers comparable results with the added advantages of greater simplicity, speed, fewer assumptions and applicability to the single replicate case. An R package called nudge to implement the methods in this paper will be made available soon at . PMID:16011807
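    The core idea of a normal-uniform mixture for log-ratios can be sketched briefly: non-differential genes follow a normal component, differential genes a uniform over the observed range, and an EM loop estimates the mixing proportion and per-gene posterior probabilities. This is a generic illustration in Python, not the nudge R package.

      # Hedged sketch: EM for a two-component normal-uniform mixture of log-ratios.
      import numpy as np
      from scipy.stats import norm

      def normal_uniform_em(m, n_iter=100):
          lo, hi = m.min(), m.max()
          unif_density = 1.0 / (hi - lo)
          pi_u, mu, sigma = 0.1, m.mean(), m.std()
          for _ in range(n_iter):
              # E-step: posterior probability that each gene is differentially expressed
              num = pi_u * unif_density
              den = num + (1 - pi_u) * norm.pdf(m, mu, sigma)
              p = num / den
              # M-step: update the mixing proportion and the normal parameters
              pi_u = p.mean()
              w = 1 - p
              mu = np.sum(w * m) / np.sum(w)
              sigma = np.sqrt(np.sum(w * (m - mu) ** 2) / np.sum(w))
          return p, pi_u

      rng = np.random.default_rng(4)
      m = np.concatenate([rng.normal(0, 0.3, 950), rng.uniform(-3, 3, 50)])  # invented log-ratios
      posterior, pi_u = normal_uniform_em(m)
      print(round(pi_u, 3))   # estimated fraction of differentially expressed genes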

  20. Evaluation of Accuracy of DIAGNOdent in Diagnosis of Primary and Secondary Caries in Comparison to Conventional Methods

    PubMed Central

    Nokhbatolfoghahaie, Hanieh; Alikhasi, Marzieh; Chiniforush, Nasim; Khoei, Farzaneh; Safavi, Nassimeh; Yaghoub Zadeh, Behnoush

    2013-01-01

    Introduction: Today the prevalence of tooth decay has considerably decreased. Related organizations and institutions cite several reasons for this, such as improvements in caries diagnostic equipment and tools, which are now capable of detecting caries in their initial stages. This has resulted in a reduction of costs for patients and a remarkable increase in tooth life span. There are many methods for caries diagnosis, such as visual and radiographic methods, fluorescence-based devices such as quantitative light-induced fluorescence (QLF), VistaProof, laser fluorescence (LF, or DIAGNOdent) and the fluorescence camera (FC), and digital radiography. Although DIAGNOdent is considered a valuable device for caries diagnosis, there are concerns regarding its efficacy and accuracy. Considering the sensitivity of caries diagnosis and the exorbitant annual expenses borne by governments and patients for caries treatment, finding the best method for early caries detection is of the utmost importance. Numerous studies comparing different diagnostic methods have been performed, with conflicting results. The objective of this study is a comparative review of the efficiency of DIAGNOdent, in comparison to visual and radiographic methods, in the diagnosis of caries on occlusal tooth surfaces. Methods: A search of the PubMed and Google Scholar electronic resources was performed to find clinical trials in English published between 1998 and 2013. Full texts of only 35 articles were available. Conclusion: Considering the sensitivity and specificity reported in the different studies, DIAGNOdent appears to be an appropriate modality for caries detection as a complement to other methods; its use alone is not sufficient to establish a treatment plan. PMID:25606325

  1. Image Quality Ranking Method for Microscopy

    PubMed Central

    Koho, Sami; Fazeli, Elnaz; Eriksson, John E.; Hänninen, Pekka E.

    2016-01-01

    Automated analysis of microscope images is necessitated by the increased need for high-resolution follow-up of events in time. Manually finding the right images to analyze, or to eliminate from data analysis, is a common day-to-day problem in microscopy research today, and the constantly growing size of image datasets does not help matters. We propose a simple method and a software tool for sorting images within a dataset according to their relative quality. We demonstrate the applicability of our method in finding good-quality images in a STED microscope sample-preparation optimization image dataset. The results are validated by comparisons to subjective opinion scores, as well as five state-of-the-art blind image quality assessment methods. We also show how our method can be applied to eliminate useless out-of-focus images in a High-Content-Screening experiment. We further evaluate the ability of our image quality ranking method to detect out-of-focus images, by extensive simulations, and by comparing its performance against previously published, well-established microscopy autofocus metrics. PMID:27364703

  2. A Clinical Evaluation of Cone Beam Computed Tomography

    DTIC Science & Technology

    2013-07-31

    body of literature lacks in vivo studies comparing CBCT images with clinical findings. The purpose of this descriptive case series was to... All (18/18) bone measurements were underrepresented on CBCT images in this study. This case series also identified limitations in accuracy when... study was to compare pre-surgical CBCT images against the actual clinical presentation of the hard tissues. METHOD: Eleven patients requiring

  3. The Vermont Model for Rural HIV Care Delivery: Eleven Years of Outcome Data Comparing Urban and Rural Clinics

    ERIC Educational Resources Information Center

    Grace, Christopher; Kutzko, Deborah; Alston, W. Kemper; Ramundo, Mary; Polish, Louis; Osler, Turner

    2010-01-01

    Context: Provision of human immunodeficiency virus (HIV) care in rural areas has encountered unique barriers. Purpose: To compare medical outcomes of care provided at 3 HIV specialty clinics in rural Vermont with that provided at an urban HIV specialty clinic. Methods: This was a retrospective cohort study. Findings: Over an 11-year period 363 new…

  4. Origins of spatial, temporal and numerical cognition: Insights from comparative psychology.

    PubMed

    Haun, Daniel B M; Jordan, Fiona M; Vallortigara, Giorgio; Clayton, Nicky S

    2010-12-01

    Contemporary comparative cognition has a large repertoire of animal models and methods, with concurrent theoretical advances that are providing initial answers to crucial questions about human cognition. What cognitive traits are uniquely human? What are the species-typical inherited predispositions of the human mind? What is the human mind capable of without certain types of specific experiences with the surrounding environment? Here, we review recent findings from the domains of space, time and number cognition. These findings are produced using different comparative methodologies relying on different animal species, namely birds and non-human great apes. The study of these species not only reveals the range of cognitive abilities across vertebrates, but also increases our understanding of human cognition in crucial ways. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. Identifying Stakeholders and Their Preferences about NFR by Comparing Use Case Diagrams of Several Existing Systems

    NASA Astrophysics Data System (ADS)

    Kaiya, Haruhiko; Osada, Akira; Kaijiri, Kenji

    We present a method to identify stakeholders and their preferences about non-functional requirements (NFR) by using use case diagrams of existing systems. We focus on changes in NFR because such changes help stakeholders to identify their preferences. Comparing different use case diagrams of the same domain helps us to find changes that are likely to occur. We utilize the Goal-Question-Metric (GQM) method for identifying variables that characterize NFR, and we can systematically represent changes in NFR using these variables. Use cases that represent system interactions help us to bridge the gap between goals and metrics (variables), so we can easily construct measurable NFR. To validate and evaluate our method, we applied it to the application domain of Mail User Agent (MUA) systems.

  6. The Role of Active Listening in Teacher-Parent Relations and the Moderating Role of Attachment Style

    ERIC Educational Resources Information Center

    Castro, Dotan R.; Alex, Cohen; Tohar, Gilad; Kluger, Avraham N.

    2013-01-01

    This study tested the perceived effectiveness of "Listening-Ask questions-Focus on the issue-Find a first step" method (McNaughton et al., 2008) in a parent-teacher conversation using a scenario study (N = 208). As expected, a scenario based on this method compared with a scenario of a conversation omitting the four steps of the method…

  7. Value Tendency Differences between Pre-Service Social Studies Teachers within the Scope of the East and the West

    ERIC Educational Resources Information Center

    Osmanoglu, Ahmed Emin

    2017-01-01

    This study aims to comparatively examine the values that the students of the Department of Social Studies in Education Faculty at two universities located in the Eastern and Western parts of Turkey desire to find in people they interact with. Multiple methods, including quantitative and qualitative methods, were used in this study. The research…

  8. Comparing the structure of an emerging market with a mature one under global perturbation

    NASA Astrophysics Data System (ADS)

    Namaki, A.; Jafari, G. R.; Raei, R.

    2011-09-01

    In this paper we investigate the Tehran stock exchange (TSE) and Dow Jones Industrial Average (DJIA) in terms of perturbed correlation matrices. To perturb a stock market, there are two methods, namely local and global perturbation. In the local method, we replace a correlation coefficient of the cross-correlation matrix with one calculated from two Gaussian-distributed time series, whereas in the global method, we reconstruct the correlation matrix after replacing the original return series with Gaussian-distributed time series. The local perturbation serves mainly as a technical check. We analyze these markets through two statistical approaches, random matrix theory (RMT) and the correlation coefficient distribution. By using RMT, we find that the largest eigenvalue reflects an influence common to all stocks, and this eigenvalue peaks during financial shocks. We find that a few highly correlated stocks account for the essential robustness of the stock market, but by replacing these return time series with Gaussian-distributed time series, the mean values of the correlation coefficients, the largest eigenvalues of the stock markets, and the fraction of eigenvalues that deviate from the RMT prediction fall sharply in both markets. By comparing these two markets, we can see that the DJIA is more sensitive to global perturbations. These findings are crucial for risk management and portfolio selection.
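    As a rough illustration of the global-perturbation comparison described above, the hedged sketch below builds a correlated return matrix with one market-wide factor, computes the correlation-matrix spectrum, compares it with the Marchenko-Pastur upper edge from random matrix theory, and repeats the calculation after replacing all returns with independent Gaussian series. This is a toy construction with invented data, not the TSE/DJIA analysis or the authors' code.

```python
import numpy as np

def rmt_summary(returns):
    """Largest eigenvalue, fraction of eigenvalues above the RMT bound, mean correlation."""
    n_obs, n_stocks = returns.shape
    corr = np.corrcoef(returns, rowvar=False)
    eig = np.linalg.eigvalsh(corr)
    q = n_obs / n_stocks
    lam_max = (1 + 1 / np.sqrt(q)) ** 2            # Marchenko-Pastur upper edge
    frac_deviating = float(np.mean(eig > lam_max))
    mean_corr = float(corr[np.triu_indices(n_stocks, 1)].mean())
    return round(eig.max(), 3), round(frac_deviating, 3), round(mean_corr, 3)

rng = np.random.default_rng(1)
common = rng.normal(size=(1000, 1))                     # a market-wide factor
returns = 0.4 * common + rng.normal(size=(1000, 50))    # 50 correlated synthetic "stocks"
print("original :", rmt_summary(returns))
# global perturbation: replace every return series with an independent Gaussian one
print("perturbed:", rmt_summary(rng.normal(size=returns.shape)))
```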

  9. Case Study on Optimal Routing in Logistics Network by Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoguang; Lin, Lin; Gen, Mitsuo; Shiota, Mitsushige

    Recently, research on logistics has attracted more and more attention. One important issue in logistics systems is finding optimal delivery routes with the least cost for product delivery. Numerous models have been developed for this purpose. However, due to the diversity and complexity of practical problems, the existing models are often unable to find solutions efficiently and conveniently. In this paper, we treat a real-world logistics case from a company, ABC Co., Ltd., in Kitakyushu, Japan. First, based on the nature of this conveyance routing problem, we formulate it as a minimum cost flow (MCF) model, as an extension of the transportation problem (TP) and the fixed charge transportation problem (fcTP). Due to the complexity of the fcTP, we propose a priority-based genetic algorithm (pGA) approach to find the most acceptable solution to this problem. In this pGA approach, a two-stage path decoding method is adopted to develop delivery paths from a chromosome. We apply the pGA approach to this problem, compare our results with the current logistics network situation, and calculate the improvement in logistics cost to help management make decisions. Finally, in order to check the effectiveness of the proposed method, the results acquired are compared with those obtained from two optimization software packages, LINDO and CPLEX.
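    To make the priority-based encoding idea concrete, the sketch below shows a simplified single-stage priority-based decoding (in the style of Gen & Cheng) for a small balanced transportation instance, evolved with a tiny elitist GA. It is not the paper's two-stage path decoding or its MCF formulation, and the supply, demand and cost data are invented.

```python
import numpy as np

def decode_priority(priority, supply, demand, cost):
    """Decode a node-priority vector into a feasible plan for a balanced transportation problem."""
    a, b = supply.astype(float).copy(), demand.astype(float).copy()
    m, n = len(a), len(b)
    pr = priority.astype(float).copy()
    flow = np.zeros((m, n))
    while b.sum() > 1e-9:
        k = int(np.argmax(pr))
        if k < m:                                    # highest-priority node is a source
            i = k
            j = int(np.argmin(np.where(b > 1e-9, cost[i], np.inf)))
        else:                                        # highest-priority node is a destination
            j = k - m
            i = int(np.argmin(np.where(a > 1e-9, cost[:, j], np.inf)))
        q = min(a[i], b[j])
        flow[i, j] += q
        a[i] -= q
        b[j] -= q
        if a[i] <= 1e-9:
            pr[i] = -np.inf                          # exhausted source drops out
        if b[j] <= 1e-9:
            pr[m + j] = -np.inf                      # satisfied destination drops out
    return flow

def plan_cost(priority, supply, demand, cost):
    return float((decode_priority(priority, supply, demand, cost) * cost).sum())

def priority_ga(supply, demand, cost, pop_size=40, generations=100, seed=0):
    """Tiny GA over priority vectors: elitist survival plus swap mutation."""
    rng = np.random.default_rng(seed)
    n_nodes = len(supply) + len(demand)
    population = [rng.permutation(n_nodes).astype(float) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda v: plan_cost(v, supply, demand, cost))
        parents = population[:pop_size // 2]         # keep the better half
        children = []
        for p in parents:
            child = p.copy()
            i, j = rng.choice(n_nodes, size=2, replace=False)
            child[i], child[j] = child[j], child[i]  # swap two priorities
            children.append(child)
        population = parents + children
    best = min(population, key=lambda v: plan_cost(v, supply, demand, cost))
    return decode_priority(best, supply, demand, cost), plan_cost(best, supply, demand, cost)

supply = np.array([30.0, 40.0, 30.0])                # balanced toy instance (totals match)
demand = np.array([20.0, 30.0, 50.0])
cost = np.array([[4.0, 6.0, 8.0],
                 [5.0, 3.0, 7.0],
                 [6.0, 4.0, 2.0]])
flow, best_cost = priority_ga(supply, demand, cost)
print("best cost found:", best_cost)
print(flow)
```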

  10. Actigraphic Assessment of Motor Activity in Acutely Admitted Inpatients with Bipolar Disorder

    PubMed Central

    Krane-Gartiser, Karoline; Henriksen, Tone Elise Gjotterud; Morken, Gunnar; Vaaler, Arne; Fasmer, Ole Bernt

    2014-01-01

    Introduction Mania is associated with increased activity, whereas psychomotor retardation is often found in bipolar depression. Actigraphy is a promising tool for monitoring phase shifts and changes following treatment in bipolar disorder. The aim of this study was to compare recordings of motor activity in mania, bipolar depression and healthy controls, using linear and nonlinear analytical methods. Materials and Methods Recordings from 18 acutely hospitalized inpatients with mania were compared to 12 recordings from bipolar depression inpatients and 28 healthy controls. 24-hour actigraphy recordings and 64-minute periods of continuous motor activity in the morning and evening were analyzed. Mean activity and several measures of variability and complexity were calculated. Results Patients with depression had a lower mean activity level compared to controls, but higher variability shown by increased standard deviation (SD) and root mean square successive difference (RMSSD) over 24 hours and in the active morning period. The patients with mania had lower first lag autocorrelation compared to controls, and Fourier analysis showed higher variance in the high frequency part of the spectrum corresponding to the period from 2–8 minutes. Both patient groups had a higher RMSSD/SD ratio compared to controls. In patients with mania we found an increased complexity of time series in the active morning period, compared to patients with depression. The findings in the patients with mania are similar to previous findings in patients with schizophrenia and healthy individuals treated with a glutamatergic antagonist. Conclusion We have found distinctly different activity patterns in hospitalized patients with bipolar disorder in episodes of mania and depression, assessed by actigraphy and analyzed with linear and nonlinear mathematical methods, as well as clear differences between the patients and healthy comparison subjects. PMID:24586883
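    For readers unfamiliar with the variability measures cited above (SD, RMSSD and their ratio, lag-1 autocorrelation), a minimal sketch of how such statistics can be computed from a minute-by-minute activity-count series is shown below. The data are synthetic and the code is illustrative, not the study's analysis pipeline.

```python
import numpy as np

def activity_stats(counts):
    """Basic linear variability measures for an activity-count time series."""
    counts = np.asarray(counts, dtype=float)
    diffs = np.diff(counts)
    sd = counts.std(ddof=1)
    rmssd = np.sqrt(np.mean(diffs ** 2))              # root mean square successive difference
    lag1 = np.corrcoef(counts[:-1], counts[1:])[0, 1]  # first-lag autocorrelation
    return {"mean": counts.mean(), "SD": sd, "RMSSD": rmssd,
            "RMSSD/SD": rmssd / sd, "lag-1 autocorr": lag1}

rng = np.random.default_rng(42)
# slowly varying series vs. a series dominated by minute-to-minute variability
smooth = np.convolve(rng.poisson(20, 1440), np.ones(15) / 15, mode="same")
noisy = rng.poisson(20, 1440).astype(float)
print("smooth:", activity_stats(smooth))
print("noisy :", activity_stats(noisy))
```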

  11. An optimal generic model for multi-parameters and big data optimizing: a laboratory experimental study

    NASA Astrophysics Data System (ADS)

    Utama, D. N.; Ani, N.; Iqbal, M. M.

    2018-03-01

    Optimization is the process of finding the parameter or parameters that deliver an optimal value of an objective function. Seeking an optimal generic model for optimization is a computer science problem that has been pursued by numerous researchers. A generic model is one that can be operated to solve a variety of optimization problems. The generic model for optimization was constructed using an object-oriented method. Moreover, two optimization methods, simulated annealing and hill climbing, were used in constructing the model and compared to determine which is more effective. The results show that both methods gave the same objective function value, with the hill-climbing based model requiring the shorter running time.
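    The comparison between the two search strategies can be made concrete with a small, hypothetical sketch: both optimizers minimize the same toy objective, with hill climbing accepting only improvements and simulated annealing occasionally accepting worse moves according to a cooling temperature. This is generic illustration code under invented settings, not the paper's object-oriented generic model.

```python
import numpy as np

def objective(x):
    """Toy multimodal objective to be minimized."""
    return np.sum(x ** 2) + 3 * np.sum(np.sin(3 * x) ** 2)

def hill_climb(x, steps=5000, step=0.2, seed=0):
    rng = np.random.default_rng(seed)
    fx = objective(x)
    for _ in range(steps):
        cand = x + rng.normal(0, step, x.size)
        fc = objective(cand)
        if fc < fx:                                   # accept only improvements
            x, fx = cand, fc
    return fx

def simulated_annealing(x, steps=5000, step=0.2, t0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    fx = objective(x)
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9               # linear cooling schedule
        cand = x + rng.normal(0, step, x.size)
        fc = objective(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / t):  # sometimes accept worse moves
            x, fx = cand, fc
    return fx

x0 = np.full(5, 3.0)
print("hill climbing      :", hill_climb(x0.copy()))
print("simulated annealing:", simulated_annealing(x0.copy()))
```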

  12. Treatment of Alzheimer’s Disease in Iranian Traditional Medicine

    PubMed Central

    Ahmadian-Attari, Mohammad Mahdi; Ahmadiani, Abolhassan; Kamalinejad, Mohammad; Dargahi, Leila; Shirzad, Meysam; Mosaddegh, Mahmoud

    2014-01-01

    Background: Alzheimer’s disease (AD) is a progressive neurodegenerative disease with a high prevalence in recent years. Dramatic growth in AD prevalence has increased the importance of further research on AD treatment. History has shown that traditional medicine can be a source of inspiration to find new therapies. Objectives: This study tried to codify the recommendations of Iranian traditional medicine (ITM) by studying the main medical manuscripts. The second purpose was to compare these findings with new medical information. Materials and Methods: Cardinal traditional medical and pharmacological texts from the 10th to the 18th century were searched for traditional terms of dementia (Nesyan, Fisad-uz-Zekr, Faramooshkari) focused on treatment methods. The findings were classified into three groups: lifestyle recommendations, dietary approaches, and drug therapies. These findings were compared with new medical findings. Results: ITM has dietary recommendations for dementia such as increasing consumption of nuts, poultry and eggs, milk, and grape products (such as raisins and currants). These foods are rich in unsaturated fatty acids, cholesterol, and polyphenolic compounds. New findings suggest that these substances can help in prevention and treatment of AD. ITM has some lifestyle considerations such as increasing physical and mental activity, listening to music, attending musical feasts, and smelling specific perfumes. New medical findings confirm nearly all of these recommendations. Along with the aforementioned items, treatment with natural medicines is in the first line of traditional treatment of dementia. New investigations show that many of these herbs have antioxidant and anti-inflammatory properties and acetylcholinesterase inhibitory effects. A few of them also have N-methyl-D-aspartate (NMDA) blocking activity. When these herbs are put together in traditional formulations, they can comprehensively fight against the disease. Conclusions: More ethnopharmacological and ethnomedical studies on ITM antidementia therapy are likely to yield fruitful results. PMID:25763264

  13. The Facial Appearance of CEOs: Faces Signal Selection but Not Performance.

    PubMed

    Stoker, Janka I; Garretsen, Harry; Spreeuwers, Luuk J

    2016-01-01

    Research overwhelmingly shows that facial appearance predicts leader selection. However, the evidence on the relevance of faces for actual leader ability and consequently performance is inconclusive. By using a state-of-the-art, objective measure for face recognition, we test the predictive value of CEOs' faces for firm performance in a large sample of faces. We first compare the faces of Fortune500 CEOs with those of US citizens and professors. We find clear confirmation that CEOs do look different when compared to citizens or professors, replicating the finding that faces matter for selection. More importantly, we also find that faces of CEOs of top performing firms do not differ from other CEOs. Based on our advanced face recognition method, our results suggest that facial appearance matters for leader selection but that it does not do so for leader performance.

  14. Clothing Protection from Ultraviolet Radiation: A New Method for Assessment.

    PubMed

    Gage, Ryan; Leung, William; Stanley, James; Reeder, Anthony; Barr, Michelle; Chambers, Tim; Smith, Moira; Signal, Louise

    2017-11-01

    Clothing modifies ultraviolet radiation (UVR) exposure from the sun and has an impact on skin cancer risk and the endogenous synthesis of vitamin D. There is no standardized method available for assessing body surface area (BSA) covered by clothing, which limits generalizability between study findings. We calculated the body cover provided by 38 clothing items using diagrams of BSA, adjusting the values to account for differences in BSA by age. Diagrams displaying each clothing item were developed and incorporated into a coverage assessment procedure (CAP). Five assessors used the CAP and Lund & Browder chart, an existing method for estimating BSA, to calculate the clothing coverage of an image sample of 100 schoolchildren. Values of clothing coverage, inter-rater reliability and assessment time were compared between CAP and Lund & Browder methods. Both methods had excellent inter-rater reliability (>0.90) and returned comparable results, although the CAP method was significantly faster in determining a person's clothing coverage. On balance, the CAP method appears to be a feasible method for calculating clothing coverage. Its use could improve comparability between sun-safety studies and aid in quantifying the health effects of UVR exposure. © 2017 The American Society of Photobiology.

  15. Chronic Fatigue Syndrome versus Systemic Exertion Intolerance Disease

    PubMed Central

    Jason, Leonard A.; Sunnquist, Madison; Brown, Abigail; Newton, Julia L.; Strand, Elin Bolle; Vernon, Suzanne D.

    2015-01-01

    Background The Institute of Medicine has recommended a change in the name and criteria for Chronic Fatigue Syndrome (CFS), renaming the illness Systemic Exertion Intolerance Disease (SEID). The new SEID case definition requires substantial reductions or impairments in the ability to engage in pre-illness activities, unrefreshing sleep, post-exertional malaise, and either cognitive impairment or orthostatic intolerance. Purpose In the current study, samples were generated through several different methods and were used to compare this new case definition to previous case definitions for CFS, Myalgic Encephalomyelitis (ME-ICC), Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS), as well as a case definition developed through empirical methods. Methods We used a cross-sectional design with samples from tertiary care settings, a biobank sample, and other forums. In total, 796 patients from the US, Great Britain, and Norway completed the DePaul Symptom Questionnaire. Results Findings indicated that the SEID criteria identified 88% of participants in the samples analyzed, which is comparable to the 92% that met the Fukuda criteria. The SEID case definition was compared to a four-item empiric criteria, and findings indicated that the four-item empiric criteria identified a smaller, more functionally limited and symptomatic group of patients. Conclusion The recently developed SEID criteria appear to identify a group comparable in size to the Fukuda et al. criteria, but a larger group of patients than the Canadian ME/CFS and ME criteria, and select more patients who have less impairment and fewer symptoms than a four-item empiric criteria. PMID:26345409

  16. Comparative molecular species delimitation in the charismatic Nawab butterflies (Nymphalidae, Charaxinae, Polyura).

    PubMed

    Toussaint, Emmanuel F A; Morinière, Jérôme; Müller, Chris J; Kunte, Krushnamegh; Turlin, Bernard; Hausmann, Axel; Balke, Michael

    2015-10-01

    The charismatic tropical Polyura Nawab butterflies are distributed across twelve biodiversity hotspots in the Indomalayan/Australasian archipelago. In this study, we tested an array of species delimitation methods and compared the results to existing morphology-based taxonomy. We sequenced two mitochondrial and two nuclear gene fragments to reconstruct phylogenetic relationships within Polyura using both Bayesian inference and maximum likelihood. Based on this phylogenetic framework, we used the recently introduced bGMYC, BPP and PTP methods to investigate species boundaries. Based on our results, we describe two new species, Polyura paulettae Toussaint sp. n. and Polyura smilesi Toussaint sp. n., propose one synonym, and raise five populations to species status. Most of the newly recognized species are single-island endemics likely resulting from the recent highly complex geological history of the Indomalayan-Australasian archipelago. Surprisingly, we also find two newly recognized species in the Indomalayan region, where additional biotic or abiotic factors have fostered speciation. Species delimitation methods were largely congruent and succeeded in cross-validating most extant morphological species. PTP and BPP seem to yield more consistent and robust estimations of species boundaries with respect to morphological characters, while bGMYC delivered contrasting results depending on the different gene trees considered. Our findings demonstrate the efficiency of comparative approaches using molecular species delimitation methods on empirical data. They also pave the way for the investigation of less well-known groups to unveil patterns of species richness and catalogue Earth's concealed, and therefore unappreciated, diversity. Published by Elsevier Inc.

  17. A new model-independent approach for finding the arrival direction of an extensive air shower

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hedayati, H. Kh., E-mail: hedayati@kntu.ac.ir

    2016-11-01

    A new, accurate method for reconstructing the arrival direction of an extensive air shower (EAS) is described. Compared to existing methods, it does not rely on minimizing a function and is therefore fast and stable. The method also does not require knowledge of the detailed curvature or thickness structure of an EAS. It can achieve an angular resolution of about 1 degree for a typical surface array in central regions. It also has better angular resolution than other methods in the marginal areas of arrays.

  18. Applications of the Lattice Boltzmann Method to Complex and Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Luo, Li-Shi; Qi, Dewei; Wang, Lian-Ping; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    We briefly review the method of the lattice Boltzmann equation (LBE). We show three-dimensional LBE simulation results for a non-spherical particle in Couette flow and for 16 particles sedimenting in fluid. We compare the LBE simulation of three-dimensional homogeneous isotropic turbulence in a periodic cubic box of size 128³ with a pseudo-spectral simulation, and find that the two results agree well with each other, but the LBE method is more dissipative than the pseudo-spectral method at small scales, as expected.

  19. Comparative study of original recover and recover KL in separable non-negative matrix factorization for topic detection in Twitter

    NASA Astrophysics Data System (ADS)

    Prabandari, R. D.; Murfi, H.

    2017-07-01

    An increasing amount of information on social media such as Twitter requires an efficient way to find topics so that the information can be well managed. One automated method for topic detection is separable non-negative matrix factorization (SNMF). SNMF assumes that each topic has at least one word that does not appear in other topics. This method uses a direct approach with polynomial-time complexity, while previous methods are iterative approaches with NP-hard complexity. The SNMF algorithm has three steps, i.e. constructing word co-occurrences, finding anchor words, and recovering topics. In this paper, we examine two topic recovery methods, namely the original recover method, which uses algebraic manipulation, and recover KL, which uses a probabilistic approach with Kullback-Leibler divergence. Our simulations show that recover KL provides better accuracy in terms of topic recall than the original recover method.

  20. Self-guided method to search maximal Bell violations for unknown quantum states

    NASA Astrophysics Data System (ADS)

    Yang, Li-Kai; Chen, Geng; Zhang, Wen-Hao; Peng, Xing-Xiang; Yu, Shang; Ye, Xiang-Jun; Li, Chuan-Feng; Guo, Guang-Can

    2017-11-01

    In recent decades, a great variety of research and applications concerning Bell nonlocality have been developed with the advent of quantum information science. Given that Bell nonlocality can be revealed by the violation of a family of Bell inequalities, finding the maximal Bell violation (MBV) for unknown quantum states becomes an important and inevitable task during Bell experiments. In this paper we introduce a self-guided method to find MBVs for unknown states using a stochastic gradient ascent algorithm (SGA), by parametrizing the corresponding Bell operators. For the three investigated systems (two-qubit, three-qubit, and two-qutrit), this method can ascertain the MBV of general two-setting inequalities within 100 iterations. Furthermore, we prove that SGA is also feasible when facing more complex Bell scenarios, e.g., d-setting d-outcome Bell inequalities. Moreover, compared to other possible methods, SGA exhibits significant superiority in efficiency, robustness, and versatility.
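    The stochastic-gradient idea can be illustrated with a minimal numpy sketch that searches for the maximal CHSH violation of a known two-qubit state, ascending an SPSA-style gradient estimate over four measurement angles restricted to the x-z plane. This is not the authors' implementation or parametrization; the state, step sizes, and iteration count are assumptions chosen for illustration.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def observable(theta):
    """Dichotomic measurement along an axis in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def chsh(angles, rho):
    """Expectation value of the CHSH operator for measurement angles (a1, a2, b1, b2)."""
    a1, a2, b1, b2 = (observable(t) for t in angles)
    S = np.kron(a1, b1 + b2) + np.kron(a2, b1 - b2)
    return float(np.real(np.trace(rho @ S)))

def self_guided_mbv(rho, iters=400, a=0.3, c=0.1, seed=0):
    """SPSA-style stochastic gradient ascent over the four measurement angles."""
    rng = np.random.default_rng(seed)
    angles = rng.uniform(0, 2 * np.pi, 4)
    for k in range(1, iters + 1):
        delta = rng.choice([-1.0, 1.0], size=4)
        ck, ak = c / k ** 0.101, a / k ** 0.602
        grad = (chsh(angles + ck * delta, rho) - chsh(angles - ck * delta, rho)) / (2 * ck) * delta
        angles = angles + ak * grad                 # ascend towards a larger violation
    return chsh(angles, rho)

# maximally entangled |Phi+>: the estimate should approach the Tsirelson bound 2*sqrt(2)
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(phi, phi.conj())
print("estimated maximal CHSH value: %.3f (2*sqrt(2) = %.3f)" % (self_guided_mbv(rho), 2 * np.sqrt(2)))
```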

  1. MSClique: Multiple Structure Discovery through the Maximum Weighted Clique Problem.

    PubMed

    Sanroma, Gerard; Penate-Sanchez, Adrian; Alquézar, René; Serratosa, Francesc; Moreno-Noguer, Francesc; Andrade-Cetto, Juan; González Ballester, Miguel Ángel

    2016-01-01

    We present a novel approach for feature correspondence and multiple structure discovery in computer vision. In contrast to existing methods, we exploit the fact that point-sets on the same structure usually lie close to each other, thus forming clusters in the image. Given a pair of input images, we first extract points of interest and build hierarchical representations by agglomerative clustering. We use the maximum weighted clique problem to find the set of corresponding clusters with the maximum number of inliers representing the multiple structures at the correct scales. Our method is parameter-free and only needs two sets of points along with their tentative correspondences, thus being extremely easy to use. We demonstrate the effectiveness of our method in multiple-structure fitting experiments in both publicly available and in-house datasets. As shown in the experiments, our approach finds a higher number of structures containing fewer outliers compared to state-of-the-art methods.

  2. A Color-locus Method for Mapping R_V Using Ensembles of Stars

    NASA Astrophysics Data System (ADS)

    Lee, Albert; Green, Gregory M.; Schlafly, Edward F.; Finkbeiner, Douglas P.; Burgett, William; Chambers, Ken; Flewelling, Heather; Hodapp, Klaus; Kaiser, Nick; Kudritzki, Rolf-Peter; Magnier, Eugene; Metcalfe, Nigel; Wainscoat, Richard; Waters, Christopher

    2018-02-01

    We present a simple but effective technique for measuring angular variation in R_V across the sky. We divide stars from the Pan-STARRS1 catalog into Healpix pixels and determine the posterior distribution of reddening and R_V for each pixel using two independent Monte Carlo methods. We find the two methods to be self-consistent in the limits where they are expected to perform similarly. We also find some agreement with high-precision photometric studies of R_V in Perseus and Ophiuchus, as well as with a map of reddening near the Galactic plane based on stellar spectra from APOGEE. While current studies of R_V are mostly limited to isolated clouds, we have developed a systematic method for comparing R_V values for the majority of observable dust. This is a proof of concept for a more rigorous Galactic reddening map.

  3. Binding ligand prediction for proteins using partial matching of local surface patches.

    PubMed

    Sael, Lee; Kihara, Daisuke

    2010-01-01

    Functional elucidation of uncharacterized protein structures is an important task in bioinformatics. We report our new approach for structure-based function prediction which captures local surface features of ligand binding pockets. Function of proteins, specifically, binding ligands of proteins, can be predicted by finding similar local surface regions of known proteins. To enable partial comparison of binding sites in proteins, a weighted bipartite matching algorithm is used to match pairs of surface patches. The surface patches are encoded with the 3D Zernike descriptors. Unlike the existing methods which compare global characteristics of the protein fold or the global pocket shape, the local surface patch method can find functional similarity between non-homologous proteins and binding pockets for flexible ligand molecules. The proposed method improves prediction results over global pocket shape-based method which was previously developed by our group.

  4. Binding Ligand Prediction for Proteins Using Partial Matching of Local Surface Patches

    PubMed Central

    Sael, Lee; Kihara, Daisuke

    2010-01-01

    Functional elucidation of uncharacterized protein structures is an important task in bioinformatics. We report our new approach for structure-based function prediction which captures local surface features of ligand binding pockets. Function of proteins, specifically, binding ligands of proteins, can be predicted by finding similar local surface regions of known proteins. To enable partial comparison of binding sites in proteins, a weighted bipartite matching algorithm is used to match pairs of surface patches. The surface patches are encoded with the 3D Zernike descriptors. Unlike the existing methods which compare global characteristics of the protein fold or the global pocket shape, the local surface patch method can find functional similarity between non-homologous proteins and binding pockets for flexible ligand molecules. The proposed method improves prediction results over global pocket shape-based method which was previously developed by our group. PMID:21614188
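    The partial matching step in the two records above can be illustrated with a small hedged sketch: descriptor vectors (stand-ins for the 3D Zernike descriptors) from two pockets are matched with a weighted bipartite assignment, and the summed distance of the matched pairs serves as a pocket-to-pocket score. The code and data are illustrative only and do not reproduce the authors' implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def pocket_similarity(desc_a, desc_b):
    """Match surface patches of two pockets and return matched pairs plus a distance score.

    desc_a, desc_b: arrays of shape (n_patches, n_features); rows are per-patch
    descriptors (stand-ins for 3D Zernike descriptors).
    """
    # pairwise Euclidean distances between all patch descriptors
    cost = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)        # minimum-cost bipartite matching
    return list(zip(rows.tolist(), cols.tolist())), float(cost[rows, cols].sum())

rng = np.random.default_rng(3)
pocket_a = rng.normal(size=(6, 20))
pocket_b = pocket_a[::-1] + rng.normal(scale=0.05, size=(6, 20))   # same patches, shuffled and perturbed
pairs, score = pocket_similarity(pocket_a, pocket_b)
print("matched patch pairs:", pairs)
print("total matching cost:", round(score, 3))
```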

  5. Performance and Scalability of Discriminative Metrics for Comparative Gene Identification in 12 Drosophila Genomes

    PubMed Central

    Lin, Michael F.; Deoras, Ameya N.; Rasmussen, Matthew D.; Kellis, Manolis

    2008-01-01

    Comparative genomics of multiple related species is a powerful methodology for the discovery of functional genomic elements, and its power should increase with the number of species compared. Here, we use 12 Drosophila genomes to study the power of comparative genomics metrics to distinguish between protein-coding and non-coding regions. First, we study the relative power of different comparative metrics and their relationship to single-species metrics. We find that even relatively simple multi-species metrics robustly outperform advanced single-species metrics, especially for shorter exons (≤240 nt), which are common in animal genomes. Moreover, the two capture largely independent features of protein-coding genes, with different sensitivity/specificity trade-offs, such that their combinations lead to even greater discriminatory power. In addition, we study how discovery power scales with the number and phylogenetic distance of the genomes compared. We find that species at a broad range of distances are comparably effective informants for pairwise comparative gene identification, but that these are surpassed by multi-species comparisons at similar evolutionary divergence. In particular, while pairwise discovery power plateaued at larger distances and never outperformed the most advanced single-species metrics, multi-species comparisons continued to benefit even from the most distant species with no apparent saturation. Last, we find that genes in functional categories typically considered fast-evolving can nonetheless be recovered at very high rates using comparative methods. Our results have implications for comparative genomics analyses in any species, including the human. PMID:18421375

  6. Anonymity communication VPN and Tor: a comparative study

    NASA Astrophysics Data System (ADS)

    Ramadhani, E.

    2018-03-01

    VPN and Tor are technologies based on anonymous communication. Each of these technologies has its own advantages and disadvantages. The objective of this paper is to find the differences between the VPN and Tor technologies by comparing the security of their communication over the public network, based on the CIA triad concept. The comparative study in this paper is based on the survey method. Finally, the result of this paper is a recommendation on when to use a VPN or Tor to secure communication.

  7. A new method for detecting velocity shifts and distortions between optical spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Tyler M.; Murphy, Michael T., E-mail: tevans@astro.swin.edu.au

    2013-12-01

    Recent quasar spectroscopy from the Very Large Telescope (VLT) and Keck suggests that fundamental constants may not actually be constant. To better confirm or refute this result, systematic errors between telescopes must be minimized. We present a new method to directly compare spectra of the same object and measure any velocity shifts between them. This method allows for the discovery of wavelength-dependent velocity shifts between spectra, i.e., velocity distortions, that could produce spurious detections of cosmological variations in fundamental constants. This 'direct comparison' method has several advantages over alternative techniques: it is model-independent (cf. line-fitting approaches), blind, in that spectral features do not need to be identified beforehand, and it produces meaningful uncertainty estimates for the velocity shift measurements. In particular, we demonstrate that, when comparing echelle-resolution spectra with unresolved absorption features, the uncertainty estimates are reliable for signal-to-noise ratios ≳7 per pixel. We apply this method to spectra of quasar J2123–0050 observed with Keck and the VLT and find no significant distortions over long wavelength ranges (∼1050 Å) greater than ≈180 m s⁻¹. We also find no evidence for systematic velocity distortions within echelle orders greater than 500 m s⁻¹. Moreover, previous constraints on cosmological variations in the proton-electron mass ratio should not have been affected by velocity distortions in these spectra by more than 4.0 ± 4.2 parts per million. This technique may also find application in measuring stellar radial velocities in search of extra-solar planets and attempts to directly observe the expansion history of the universe using quasar absorption spectra.
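    A toy version of the chunk-wise direct-comparison idea is sketched below: one spectrum is interpolated onto velocity-shifted wavelengths and the shift minimizing the squared flux difference is chosen. This simplified sketch ignores error arrays and the paper's uncertainty estimation, and the line profile and grids are invented.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def chunk_shift(wave, flux_a, flux_b, v_grid):
    """Velocity shift (km/s) to apply to flux_b so that it best matches flux_a over one chunk."""
    chi2 = [np.sum((flux_a - np.interp(wave, wave * (1 + v / C_KMS), flux_b)) ** 2)
            for v in v_grid]
    return v_grid[int(np.argmin(chi2))]

# toy chunk: one absorption line on a flat continuum; spectrum B is redshifted by +0.5 km/s,
# so aligning it back onto spectrum A should require a shift of about -0.5 km/s
wave = np.linspace(5000.0, 5005.0, 2000)
profile = lambda center: 1.0 - 0.5 * np.exp(-0.5 * ((wave - center) / 0.05) ** 2)
flux_a = profile(5002.5)
flux_b = profile(5002.5 * (1 + 0.5 / C_KMS))
print("best-fit shift:", chunk_shift(wave, flux_a, flux_b, np.linspace(-2, 2, 401)), "km/s")
```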

  8. Detection of Alzheimer's disease using group lasso SVM-based region selection

    NASA Astrophysics Data System (ADS)

    Sun, Zhuo; Fan, Yong; Lelieveldt, Boudewijn P. F.; van de Giessen, Martijn

    2015-03-01

    Alzheimer's disease (AD) is one of the most frequent forms of dementia and an increasingly challenging public health problem. In the last two decades, structural magnetic resonance imaging (MRI) has shown potential in distinguishing patients with Alzheimer's disease from elderly controls (CN). To obtain AD-specific biomarkers, previous research used either statistical testing to find statistically significantly different regions between the two clinical groups, or l1 sparse learning to select isolated features in the image domain. In this paper, we propose a new framework that uses structural MRI to simultaneously distinguish the two clinical groups and find biomarkers of AD, using a group lasso support vector machine (SVM). The group lasso term (mixed l1-l2 norm) introduces anatomical information from the image domain into the feature domain, such that the resulting set of selected voxels is more meaningful than that of the l1 sparse SVM. Because of large inter-structure size variation, we introduce a group-specific normalization factor to deal with the structure size bias. Experiments have been performed on a well-designed AD vs. CN dataset to validate our method. Compared to the l1 sparse SVM approach, our method achieved better classification performance and a more meaningful biomarker selection. When we varied the training set, the regions selected by our method were more stable than those of the l1 sparse SVM. Classification experiments showed that our group normalization led to higher classification accuracy with fewer selected regions than the non-normalized method. Compared to state-of-the-art AD vs. CN classification methods, our approach not only obtains high accuracy on the same dataset, but, more importantly, simultaneously finds the brain anatomies that are closely related to the disease.

  9. Semiautomated tremor detection using a combined cross-correlation and neural network approach

    USGS Publications Warehouse

    Horstmann, Tobias; Harrington, Rebecca M.; Cochran, Elizabeth S.

    2013-01-01

    Despite observations of tectonic tremor in many locations around the globe, the emergent phase arrivals, low‒amplitude waveforms, and variable event durations make automatic detection a nontrivial task. In this study, we employ a new method to identify tremor in large data sets using a semiautomated technique. The method first reduces the data volume with an envelope cross‒correlation technique, followed by a Self‒Organizing Map (SOM) algorithm to identify and classify event types. The method detects tremor in an automated fashion after calibrating for a specific data set, hence we refer to it as being “semiautomated”. We apply the semiautomated detection algorithm to a newly acquired data set of waveforms from a temporary deployment of 13 seismometers near Cholame, California, from May 2010 to July 2011. We manually identify tremor events in a 3 week long test data set and compare to the SOM output and find a detection accuracy of 79.5%. Detection accuracy improves with increasing signal‒to‒noise ratios and number of available stations. We find detection completeness of 96% for tremor events with signal‒to‒noise ratios above 3 and optimal results when data from at least 10 stations are available. We compare the SOM algorithm to the envelope correlation method of Wech and Creager and find the SOM performs significantly better, at least for the data set examined here. Using the SOM algorithm, we detect 2606 tremor events with a cumulative signal duration of nearly 55 h during the 13 month deployment. Overall, the SOM algorithm is shown to be a flexible new method that utilizes characteristics of the waveforms to identify tremor from noise or other seismic signals.

  10. Semiautomated tremor detection using a combined cross-correlation and neural network approach

    NASA Astrophysics Data System (ADS)

    Horstmann, T.; Harrington, R. M.; Cochran, E. S.

    2013-09-01

    Despite observations of tectonic tremor in many locations around the globe, the emergent phase arrivals, low-amplitude waveforms, and variable event durations make automatic detection a nontrivial task. In this study, we employ a new method to identify tremor in large data sets using a semiautomated technique. The method first reduces the data volume with an envelope cross-correlation technique, followed by a Self-Organizing Map (SOM) algorithm to identify and classify event types. The method detects tremor in an automated fashion after calibrating for a specific data set, hence we refer to it as being "semiautomated". We apply the semiautomated detection algorithm to a newly acquired data set of waveforms from a temporary deployment of 13 seismometers near Cholame, California, from May 2010 to July 2011. We manually identify tremor events in a 3 week long test data set and compare to the SOM output and find a detection accuracy of 79.5%. Detection accuracy improves with increasing signal-to-noise ratios and number of available stations. We find detection completeness of 96% for tremor events with signal-to-noise ratios above 3 and optimal results when data from at least 10 stations are available. We compare the SOM algorithm to the envelope correlation method of Wech and Creager and find the SOM performs significantly better, at least for the data set examined here. Using the SOM algorithm, we detect 2606 tremor events with a cumulative signal duration of nearly 55 h during the 13 month deployment. Overall, the SOM algorithm is shown to be a flexible new method that utilizes characteristics of the waveforms to identify tremor from noise or other seismic signals.
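    The first stage of the workflow described in the two records above, envelope cross-correlation between stations, can be sketched under simple assumptions: Hilbert-transform envelopes are smoothed with a moving average and the maximum normalized cross-correlation within a lag window is used as a detection statistic. The waveforms below are synthetic and the code does not reproduce the authors' processing or the SOM stage.

```python
import numpy as np
from scipy.signal import hilbert

def smooth_envelope(trace, win=200):
    """Amplitude envelope via the Hilbert transform, smoothed with a moving average."""
    env = np.abs(hilbert(trace))
    return np.convolve(env, np.ones(win) / win, mode="same")

def max_envelope_correlation(tr1, tr2, max_lag=300):
    """Maximum normalized cross-correlation between two envelopes within +/- max_lag samples."""
    e1 = smooth_envelope(tr1); e1 = e1 - e1.mean()
    e2 = smooth_envelope(tr2); e2 = e2 - e2.mean()
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = e1[lag:], e2[:len(e2) - lag]
        else:
            a, b = e1[:lag], e2[-lag:]
        best = max(best, np.corrcoef(a, b)[0, 1])
    return best

# synthetic "tremor": the same emergent low-amplitude burst arrives at two stations with a small delay
rng = np.random.default_rng(7)
n = 20000
burst = np.exp(-(((np.arange(n) - 10000) / 1500.0) ** 2)) * np.sin(0.3 * np.arange(n))
sta1 = 0.3 * burst + rng.normal(0, 0.1, n)
sta2 = 0.3 * np.roll(burst, 120) + rng.normal(0, 0.1, n)
print("max envelope correlation:", round(max_envelope_correlation(sta1, sta2), 3))
```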

  11. Computerized test versus personal interview as admission methods for graduate nursing studies: A retrospective cohort study.

    PubMed

    Hazut, Koren; Romem, Pnina; Malkin, Smadar; Livshiz-Riven, Ilana

    2016-12-01

    The purpose of this study was to compare the predictive validity, economic efficiency, and faculty staff satisfaction of a computerized test versus a personal interview as admission methods for graduate nursing studies. A mixed method study was designed, including cross-sectional and retrospective cohorts, interviews, and cost analysis. One hundred and thirty-four students in the Master of Nursing program participated. The success of students in required core courses was similar in both admission method groups. The personal interview method was found to be a significant predictor of success, with cognitive variables the only significant contributors to the model. Higher satisfaction levels were reported with the computerized test compared with the personal interview method. The cost of the personal interview method, in annual hourly work, was 2.28 times higher than the computerized test. These findings may promote discussion regarding the cost benefit of the personal interview as an admission method for advanced academic studies in healthcare professions. © 2016 John Wiley & Sons Australia, Ltd.

  12. Diabetes as Experienced by Adolescents.

    ERIC Educational Resources Information Center

    Meldman, Linda S.

    1987-01-01

    Explored adolescents' perspective of their diabetic management by interviewing 12 adolescent counselors-in-training at a diabetic youth camp. Interviews were analyzed using the constant comparative method; themes were further grouped into three categories: psychosocial, developmental, and clinical. A striking finding throughout the data was the…

  13. How does tele-learning compare with other forms of education delivery? A systematic review of tele-learning educational outcomes for health professionals.

    PubMed

    Tomlinson, Jo; Shaw, Tim; Munro, Ana; Johnson, Ros; Madden, D Lynne; Phillips, Rosemary; McGregor, Deborah

    2013-11-01

    Telecommunication technologies, including audio and videoconferencing facilities, afford geographically dispersed health professionals the opportunity to connect and collaborate with others. Recognised for enabling tele-consultations and tele-collaborations between teams of health care professionals and their patients, these technologies are also well suited to the delivery of distance learning programs, known as tele-learning. To determine whether tele-learning delivery methods achieve equivalent learning outcomes when compared with traditional face-to-face education delivery methods. A systematic literature review was commissioned by the NSW Ministry of Health to identify results relevant to programs applying tele-learning delivery methods in the provision of education to health professionals. The review found few studies that rigorously compared tele-learning with traditional formats. There was some evidence, however, to support the premise that tele-learning models achieve comparable learning outcomes and that participants are generally satisfied with and accepting of this delivery method. The review illustrated that tele-learning technologies not only enable distance learning opportunities, but achieve comparable learning outcomes to traditional face-to-face models. More rigorous evidence is required to strengthen these findings and should be the focus of future tele-learning research.

  14. Handwritten digits recognition using HMM and PSO based on strokes

    NASA Astrophysics Data System (ADS)

    Yan, Liao; Jia, Zhenhong; Yang, Jie; Pang, Shaoning

    2010-07-01

    A new method for handwritten digit recognition based on the hidden Markov model (HMM) and particle swarm optimization (PSO) is proposed. This method defines 24 directional strokes, which compensates for traditional methods' sensitivity to the choice of starting point and also reduces the ambiguity caused by hand shake. Making use of the excellent global convergence of PSO improves the probability of finding the optimum and avoids getting trapped in local optima. Experimental results demonstrate that, compared with traditional methods, the proposed method improves the recognition rate of handwritten digits.
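    For background on the PSO component, a minimal particle swarm optimizer is sketched below on a toy objective. The parameter values and objective are illustrative assumptions and are unrelated to the stroke-based HMM training described in the paper.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a continuous objective f."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()             # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())

rastrigin = lambda z: 10 * z.size + np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z))
best_x, best_f = pso_minimize(rastrigin, dim=4)
print("best value found:", round(best_f, 4))
```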

  15. Finding minimum gene subsets with heuristic breadth-first search algorithm for robust tumor classification

    PubMed Central

    2012-01-01

    Background Previous studies on tumor classification based on gene expression profiles suggest that gene selection plays a key role in improving the classification performance. Moreover, finding important tumor-related genes with the highest accuracy is a very important task because these genes might serve as tumor biomarkers, which is of great benefit to not only tumor molecular diagnosis but also drug development. Results This paper proposes a novel gene selection method with rich biomedical meaning based on Heuristic Breadth-first Search Algorithm (HBSA) to find as many optimal gene subsets as possible. Due to the curse of dimensionality, this type of method could suffer from over-fitting and selection bias problems. To address these potential problems, a HBSA-based ensemble classifier is constructed using majority voting strategy from individual classifiers constructed by the selected gene subsets, and a novel HBSA-based gene ranking method is designed to find important tumor-related genes by measuring the significance of genes using their occurrence frequencies in the selected gene subsets. The experimental results on nine tumor datasets including three pairs of cross-platform datasets indicate that the proposed method can not only obtain better generalization performance but also find many important tumor-related genes. Conclusions It is found that the frequencies of the selected genes follow a power-law distribution, indicating that only a few top-ranked genes can be used as potential diagnosis biomarkers. Moreover, the top-ranked genes leading to very high prediction accuracy are closely related to specific tumor subtype and even hub genes. Compared with other related methods, the proposed method can achieve higher prediction accuracy with fewer genes. Moreover, they are further justified by analyzing the top-ranked genes in the context of individual gene function, biological pathway, and protein-protein interaction network. PMID:22830977

  16. Peri-Implant Tissue Findings in Bone Grafted Oral Cancer Patients Compared to non Bone Grafted Patients without Oral Cancer

    PubMed Central

    Agata, Hideki; Sándor, George K.; Haimi, Suvi

    2011-01-01

    ABSTRACT Objectives The aim of this study was to compare microbiological, histological, and mechanical findings from tissues around osseointegrated dental implants in patients who had undergone tumour resection and subsequent bone grafting with non-bone-grafted patients without a history of oral cancer and to develop an effective tool for the monitoring of the peri-implant tissues. A third aim was to assess and compare the masticatory function of the two patient groups after reconstruction with dental implants. Material and Methods A total of 20 patients were divided into 2 groups. The first group was edentulous and treated with dental implants without the need for bone grafting. The second edentulous group, with a history of oral cancer involving the mandible, received onlay bone grafts with concurrent placement of dental implants. Microbiological, histological, mechanical and biochemical assessment methods, crevicular fluid flow rate, hygiene index, implant mobility, and the masticatory function were analysed and compared in both patient groups. Results The microbiological examinations showed no evidence of the three most common pathogenic bacteria: Porphyromonas gingivalis, Prevotella intermedia, Actinobacillus actinomycetemcomitans. A causal relationship between specific microbes and peri-implant inflammation could not be found. All biopsies in both patient groups revealed early signs of soft tissue peri-implant inflammation. Conclusions The crevicular fluid volume and grade of gingival inflammation around the dental implants were related. Peri-implant tissue findings were similar in the two patient groups despite the history of oral cancer and the need for bone grafting at the time of dental implant placement. PMID:24421999

  17. The implementation of hybrid clustering using fuzzy c-means and divisive algorithm for analyzing DNA human Papillomavirus cause of cervical cancer

    NASA Astrophysics Data System (ADS)

    Andryani, Diyah Septi; Bustamam, Alhadi; Lestari, Dian

    2017-03-01

    Clustering aims to classify different patterns into groups called clusters. In this clustering method, we use n-mer frequencies to calculate the distance matrix, which is considered more accurate than using DNA alignment. The clustering results can be used to discover biologically important sub-sections and groups of genes. Many clustering methods have been developed, and hard clustering methods are considered less accurate than fuzzy clustering methods, especially when used on outlier data. Among fuzzy clustering methods, fuzzy c-means is one of the best known for its accuracy and simplicity. Fuzzy c-means clustering uses a membership variable, which describes how likely a data point is to belong to a cluster. Fuzzy c-means clustering works on the principle of minimizing an objective function. The exponent on the membership values is used as a weighting factor, also called the fuzzifier. In this study we implement hybrid clustering using fuzzy c-means and a divisive algorithm, which can improve the accuracy of cluster membership compared to the traditional partitional approach alone. In this study fuzzy c-means is used in the first step to find the partition results. Then the divisive algorithm runs in the second step to find sub-clusters and the dendrogram of the phylogenetic tree. The best number of clusters is determined using the minimum value of the Davies-Bouldin Index (DBI) of the cluster results. In this research, the results show that the method introduced in this paper is better than other partitioning methods. Finally, we found 3 clusters with a DBI value of 1.126628 in the first step of clustering. Moreover, the DBI values after implementing the second step of clustering are always smaller than those obtained using the first step only. This indicates that the hybrid approach in this study produces better clustering performance in terms of DBI values.
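    A minimal, hypothetical version of the first clustering step is sketched below: a plain fuzzy c-means on feature vectors (standing in for n-mer frequency profiles), with the Davies-Bouldin Index used to compare candidate numbers of clusters. The divisive second stage and the DNA-specific preprocessing are not shown, and the data are synthetic.

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means; returns cluster centers and the membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m                                          # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
        U = 1.0 / (dist ** (2 / (m - 1)))                     # standard membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.3, (60, 8)) for loc in (0.0, 2.0, 4.0)])  # 3 synthetic groups

for c in (2, 3, 4, 5):
    _, U = fuzzy_cmeans(X, c)
    labels = U.argmax(axis=1)                                 # harden memberships for the DBI
    print(c, "clusters -> DBI =", round(davies_bouldin_score(X, labels), 3))
```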

  18. Drug screening in medical examiner casework by high-resolution mass spectrometry (UPLC-MSE-TOF).

    PubMed

    Rosano, Thomas G; Wood, Michelle; Ihenetu, Kenneth; Swift, Thomas A

    2013-10-01

    Postmortem drug findings yield important analytical evidence in medical examiner casework, and chromatography coupled with nominal mass spectrometry (MS) serves as the predominant general unknown screening approach. We report screening by ultra performance liquid chromatography (UPLC) coupled with hybrid quadrupole time-of-flight mass spectrometer (MS(E)-TOF), with comparison to previously validated nominal mass UPLC-MS and UPLC-MS-MS methods. UPLC-MS(E)-TOF screening for over 950 toxicologically relevant drugs and metabolites was performed in a full-spectrum (m/z 50-1,000) mode using an MS(E) acquisition of both molecular and fragment ion data at low (6 eV) and ramped (10-40 eV) collision energies. Mass error averaged 1.27 ppm for a large panel of reference drugs and metabolites. The limit of detection by UPLC-MS(E)-TOF ranges from 0.5 to 100 ng/mL and compares closely with UPLC-MS-MS. The influence of column recovery and matrix effect on the limit of detection was demonstrated with ion suppression by matrix components correlating closely with early and late eluting reference analytes. Drug and metabolite findings by UPLC-MS(E)-TOF were compared with UPLC-MS and UPLC-MS-MS analyses of postmortem blood in 300 medical examiner cases. Positive findings by all methods totaled 1,528, with a detection rate of 57% by UPLC-MS, 72% by UPLC-MS-MS and 80% by combined UPLC-MS and UPLC-MS-MS screening. Compared with nominal mass screening methods, UPLC-MS(E)-TOF screening resulted in a 99% detection rate and, in addition, offered the potential for the detection of nontargeted analytes via high-resolution acquisition of molecular and fragment ion data.

  19. Docking and multivariate methods to explore HIV-1 drug-resistance: a comparative analysis

    NASA Astrophysics Data System (ADS)

    Almerico, Anna Maria; Tutone, Marco; Lauria, Antonino

    2008-05-01

    In this paper we describe a comparative analysis between multivariate and docking methods in the study of drug resistance to reverse transcriptase and protease inhibitors. In our earlier papers we developed a simple but efficient method to evaluate the features of compounds that are less likely to trigger resistance or are effective against mutant HIV strains, using the multivariate statistical procedures PCA and DA. In an attempt to create a more solid background for the prediction of susceptibility or resistance, we carried out a comparative analysis between our previous multivariate approach and a molecular docking study. The intent of this paper is not only to find further support for the results obtained by the combined use of PCA and DA, but also to highlight the structural features, in terms of molecular descriptors, similarity, and energetic contributions derived from docking, which can account for the emergence of drug resistance against mutant strains.
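    As a hedged illustration of the multivariate side (PCA followed by discriminant analysis on molecular descriptors), the sketch below uses scikit-learn on synthetic descriptor data; the descriptor values, class labels, and dimensions are invented, and the docking comparison is not reproduced.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
# synthetic molecular descriptors for 120 inhibitors; class 1 = effective against mutant strains
X = rng.normal(size=(120, 30))
y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=120) > 0).astype(int)

# dimensionality reduction by PCA followed by linear discriminant analysis
model = make_pipeline(StandardScaler(), PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```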

  20. An effective framework for finding similar cases of dengue from audio and text data using domain thesaurus and case base reasoning

    NASA Astrophysics Data System (ADS)

    Sandhu, Rajinder; Kaur, Jaspreet; Thapar, Vivek

    2018-02-01

    Dengue, also known as break-bone fever, is a tropical disease transmitted by mosquitoes. If the similarity between dengue-infected patients can be identified, it can help government health agencies manage an outbreak more effectively. To find the similarity between dengue cases, patients' personal and health information are the two fundamental requirements. Identification of similar symptoms, causes, effects, predictions and treatment procedures is important. In this paper, an effective framework is proposed which finds similar patients suffering from dengue using a keyword-aware domain thesaurus and the case-based reasoning method. This paper focuses on the use of an ontology-dependent domain thesaurus technique to extract relevant keywords and then build cases with the help of the case-based reasoning method. Similar cases can be shared with users, nearby hospitals and health organizations to manage the problem more adequately. Two million case bases were generated to test the proposed similarity method. Experimental evaluations of the proposed framework showed high accuracy and a low error rate in finding similar cases of dengue compared to the UPCC and IPCC algorithms. The framework developed in this paper is for dengue but can easily be extended to other domains.

  1. A novel spinal kinematic analysis using X-ray imaging and vicon motion analysis: a case study.

    PubMed

    Noh, Dong K; Lee, Nam G; You, Joshua H

    2014-01-01

    This study highlights a novel spinal kinematic analysis method and the feasibility of X-ray imaging measurements to accurately assess thoracic spine motion. The advanced X-ray Nash-Moe method and analysis were used to compute the segmental range of motion in thoracic vertebra pedicles in vivo. This Nash-Moe X-ray imaging method was compared with a standardized method using the Vicon 3-dimensional motion capture system. Linear regression analysis showed an excellent and significant correlation between the two methods (R2 = 0.99, p < 0.05), suggesting that the analysis of spinal segmental range of motion using X-ray imaging measurements was accurate and comparable to the conventional 3-dimensional motion analysis system. Clinically, this novel finding is compelling evidence demonstrating that measurements with X-ray imaging are useful to accurately decipher pathological spinal alignment and movement impairments in idiopathic scoliosis (IS).

  2. A Review of Depth and Normal Fusion Algorithms

    PubMed Central

    Štolc, Svorad; Pock, Thomas

    2018-01-01

    Geometric surface information such as depth maps and surface normals can be acquired by various methods such as stereo light fields, shape from shading and photometric stereo techniques. We compare several algorithms which deal with the combination of depth with surface normal information in order to reconstruct a refined depth map. The reasons for performance differences are examined from the perspective of alternative formulations of surface normals for depth reconstruction. We review and analyze methods in a systematic way. Based on our findings, we introduce a new generalized fusion method, which is formulated as a least squares problem and outperforms previous methods in the depth error domain by introducing a novel normal weighting that performs closer to the geodesic distance measure. Furthermore, a novel method is introduced based on Total Generalized Variation (TGV) which further outperforms previous approaches in terms of the geodesic normal distance error and maintains comparable quality in the depth error domain. PMID:29389903

  3. The model of flood control using servqual method and importance performance analysis in Surakarta City – Indonesia

    NASA Astrophysics Data System (ADS)

    Titi Purwantini, V.; Sutanto, Yusuf

    2018-05-01

    This research develops a model of flood control in the city of Surakarta using the Servqual method and Importance Performance Analysis. Service quality is generally defined as the overall assessment of a service by the customers, or the extent to which a service meets customers' needs or expectations. The first purpose of this study is to find a model of flood control appropriate to the condition of the community of Surakarta, that is, a model that can provide satisfactory service for the people of Surakarta who live in flood-affected locations. The second is to find the right model to improve the service performance of the Surakarta City Government in serving people in flood locations. The method used to determine public satisfaction with service quality is to measure the difference between the service quality expected by the community and the service actually experienced; this is the Servqual method. The performance of city government officials is assessed by comparing actual performance with the quality of services provided; this is Importance Performance Analysis. Samples were people living in flooded areas in the city of Surakarta. The results of this research are: Satisfaction = Responsiveness + Reliability + Assurance + Empathy + Tangibles (Servqual model), and from the Cartesian diagram of the Importance Performance Analysis a flood control formula can be derived: Flood Control = High performance

  4. A new method of derived equatorial plasma bubbles motion by tracing OI 630 nm emission all-sky images

    NASA Astrophysics Data System (ADS)

    Li, M.; Yu, T.; Chunliang, X.; Zuo, X.; Liu, Z.

    2017-12-01

    A new method for estimating equatorial plasma bubble (EPB) motions from airglow emission all-sky images is presented in this paper. The method, called 'cloud-derived wind technology' and widely used in satellite observations of wind, derives the zonal and meridional velocity vectors of EPB drifts by tracking a series of successive 630.0 nm airglow emission images. The airglow image data come from an all-sky airglow camera at Fuke, Hainan (19.5°N, 109.2°E), supported by the Chinese Meridian Project, which records the 630.0 nm emission from the low-latitude ionospheric F region to observe plasma bubbles. A series of pretreatment steps, e.g. image enhancement, orientation correction and image projection, is applied to the raw observations. The plasma bubble regions extracted from the images are then divided into several small tracing windows, and for each tracing window a target window is sought in the search area of the following image; the target window is taken as the position to which the tracing window has moved. From this, the velocity in each window is calculated using the cloud-derived wind technique. For finding the target window, the maximum correlation coefficient (MCC) method and the histogram of gradients (HOG) method, which seek, respectively, the maximum correlation and the minimum Euclidean distance between two gradient histograms, are investigated and compared in detail. The maximum correlation method is finally adopted in this study to analyze the velocity of the plasma bubbles because it performs better than HOG. All-sky images from Fuke, Hainan, taken between August 2014 and October 2014, are analyzed to investigate plasma bubble drift velocities using the MCC method. Data from nine nights at different local times show that the zonal drift velocity at different latitudes and local times ranges from 50 m/s to 180 m/s, with a peak value at about 20°N. For comparison and validation, EPB motions obtained from three traditional methods are also investigated and compared with the MCC method. The advantages and disadvantages of using cloud-derived wind technology to calculate EPB drift velocities are discussed.
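
    The window-tracking step behind the MCC method amounts to sliding a template over a search area and keeping the shift with the highest correlation coefficient. A hedged Python sketch of that step (window and search sizes are arbitrary assumptions, not the values used in the study) follows:

```python
# Illustrative sketch of maximum-correlation window tracking between two images.
import numpy as np

def normalized_correlation(a, b):
    """Normalized cross-correlation coefficient between two equal-size windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_window(img1, img2, top_left, win=16, search=8):
    """Find where the (win x win) window at `top_left` in img1 moved to in img2.

    Returns the displacement (dy, dx) with the maximum correlation coefficient.
    """
    y0, x0 = top_left
    template = img1[y0:y0 + win, x0:x0 + win]
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + win > img2.shape[0] or x + win > img2.shape[1]:
                continue
            c = normalized_correlation(template, img2[y:y + win, x:x + win])
            if c > best:
                best, best_shift = c, (dy, dx)
    return best_shift, best

# Given the pixel scale and the time between images, the shift converts to a drift velocity.
```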

  5. Ground State Structure Search of Fluoroperovskites through Lattice Instability

    NASA Astrophysics Data System (ADS)

    Mei, W. N.; Hatch, D. M.; Stokes, H. T.; Boyer, L. L.

    2002-03-01

    Many fluoroperovskites are capable of a ferroelectric transition from a cubic to a tetragonal or even lower-symmetry structure. In this work, we systematically studied the structural phase transitions of several fluoroperovskites ABF3, where A = Na, K and B = Ca, Sr. Combining the Self-Consistent Atom Deformation (SCAD) method -- a density-functional method using localized densities -- with the frozen-phonon method, which utilizes the isotropy subgroup operations, we calculate the phonon energies and find instabilities that lower the symmetry of the crystal. Following this scheme, we work down to lower-symmetry structures until we no longer find instabilities. The final results are compared with those obtained from molecular dynamics based on Gordon-Kim potentials.

  6. Solar arc method for the analysis of burial places in Eneolithic

    NASA Astrophysics Data System (ADS)

    Szucs-Csillik, Iharka; Comsa, Alexandra

    2017-12-01

    This study presents the solar arc method, applied to burial places of the Neolithic-Eneolithic of central Europe, to analyse and compare three sites of the Gumelniţa culture: Vărăşti-Grădiştea Ulmilor, Dridu and Durankulak. The scope of this paper is to find possible explanations for alignment differences within the same culture, in order to better understand the Eneolithic period.

  7. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region, however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
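
    For readers unfamiliar with PSO, a minimal Python sketch of the algorithm is given below; the inertia and acceleration coefficients are generic textbook choices, and a toy objective stands in for the SAM-versus-observations likelihood used in the paper:

```python
# Minimal particle swarm optimization (PSO) sketch for parameter calibration.
import numpy as np

def pso(objective, bounds, n_particles=20, n_iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, size=(n_particles, dim))        # particle positions
    v = np.zeros_like(x)                                    # particle velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                      # global best position
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy objective standing in for a negative log-likelihood of model vs. observed constraints
best, fbest = pso(lambda p: np.sum((p - 0.3) ** 2), bounds=[(-1, 1)] * 3)
print(best, fbest)
```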

  8. An in vitro study to find the incidence of mesiobuccal 2 canal in permanent maxillary first molars using three different methods.

    PubMed

    Vasundhara, V; Lashkari, Krishna Prasada

    2017-01-01

    An in vitro study was done to evaluate the incidence of MB2 canals using three different methods (CBCT, clinical analysis with the naked eye, and dental loupes) and to compare the efficacy of the three methods in identifying MB2 canals in permanent maxillary first molars. The study sample consisted of 120 extracted intact permanent maxillary molars. These extracted teeth were subjected to CBCT. Later the teeth were access-opened and examined with the naked eye to find the incidence of the MB2 canal, and then visualised under a dental loupe to locate MB2 canals missed by the naked eye. Results were statistically analysed by McNemar's test with Bonferroni correction, the Chi-square test and Cochran's Q test. CBCT showed a high incidence (68.3%) of MB2 canals in maxillary first molars and proved to be a reliable method for detecting the MB2 canal. Compared with the naked eye (25%), the dental loupe (52.5%) improved the detection of the MB2 canal. Within the parameters of this study, detection of the MB2 canal was significantly higher with CBCT, followed by the dental loupe, and lowest with the naked eye.

  9. Empirical Bayes method for reducing false discovery rates of correlation matrices with block diagonal structure.

    PubMed

    Pacini, Clare; Ajioka, James W; Micklem, Gos

    2017-04-12

    Correlation matrices are important in inferring relationships and networks between regulatory or signalling elements in biological systems. With currently available technology, sample sizes for experiments are typically small, meaning that these correlations can be difficult to estimate. At a genome-wide scale, estimation of correlation matrices can also be computationally demanding. We develop an empirical Bayes approach to improve covariance estimates for gene expression, where we assume the covariance matrix takes a block diagonal form. Our method shows lower false discovery rates than existing methods on simulated data. Applied to a real data set from Bacillus subtilis, we demonstrate its ability to detect known regulatory units and interactions between them. We demonstrate that, compared to existing methods, our method is able to find significant covariances and also to control false discovery rates, even when the sample size is small (n=10). The method can be used to find potential regulatory networks, and it may also be used as a pre-processing step for methods that calculate, for example, partial correlations, so enabling the inference of the causal and hierarchical structure of the networks.
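
    As a loose illustration of shrinking a small-sample correlation estimate toward a block-diagonal target (not the authors' empirical Bayes estimator; the shrinkage weight is fixed by hand here rather than estimated from the data), one might write:

```python
# Hedged sketch: shrink a sample correlation matrix toward a block-diagonal target.
import numpy as np

def block_diagonal_shrinkage(X, blocks, alpha=0.5):
    """Shrink the sample correlation of X (n samples x p genes) toward a
    block-diagonal target defined by `blocks` (list of index arrays).

    alpha is the weight given to the target; an empirical Bayes scheme would
    estimate this from the data instead of fixing it.
    """
    R = np.corrcoef(X, rowvar=False)
    target = np.zeros_like(R)
    for idx in blocks:
        target[np.ix_(idx, idx)] = R[np.ix_(idx, idx)]  # keep within-block structure
    np.fill_diagonal(target, 1.0)
    return alpha * target + (1 - alpha) * R

# Toy usage: 10 samples, 6 genes in two blocks of 3
rng = np.random.default_rng(1)
X = rng.normal(size=(10, 6))
R_shrunk = block_diagonal_shrinkage(X, blocks=[np.arange(3), np.arange(3, 6)], alpha=0.7)
print(np.round(R_shrunk, 2))
```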

  10. Validation sampling can reduce bias in health care database studies: an illustration using influenza vaccination effectiveness.

    PubMed

    Nelson, Jennifer Clark; Marsh, Tracey; Lumley, Thomas; Larson, Eric B; Jackson, Lisa A; Jackson, Michael L

    2013-08-01

    Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased owing to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. We applied two such methods, namely imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method's ability to reduce bias using the control time period before influenza circulation. Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not use the validation sample confounders. Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from health care database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which the data can be imputed or reweighted using the additional validation sample information. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. A novel video recommendation system based on efficient retrieval of human actions

    NASA Astrophysics Data System (ADS)

    Ramezani, Mohsen; Yaghmaee, Farzin

    2016-09-01

    In recent years, the fast growth of online video sharing has raised new issues, such as helping users find what they are looking for in an efficient way. Hence, Recommender Systems (RSs) are used to find the users' most favoured items. Finding these items relies on item or user similarities. However, many factors, such as sparsity and cold-start users, affect recommendation quality. In some systems, attached tags are used for searching items (e.g. videos) for personalized recommendation. Differing viewpoints and incomplete or inaccurate tags can weaken the performance of these systems. Advances in computer vision techniques can help improve RSs. To this end, content-based search can be used for finding items (here, videos are considered). In such systems, a video is taken from the user in order to find and recommend a list of the videos most similar to the query. Because most videos relate to humans, we present a novel, low-complexity, scalable method to recommend videos based on a model of the action they contain. The method draws on human action retrieval approaches. For modeling human actions, interest points are extracted from each action and their motion information is used to compute the action representation. Moreover, a fuzzy dissimilarity measure is presented to compare videos for ranking. Experimental results on the HMDB, UCFYT, UCF Sports and KTH datasets illustrate that, in most cases, the proposed method achieves better results than commonly used methods.

  12. A comparison of cosegregation analysis methods for the clinical setting.

    PubMed

    Rañola, John Michael O; Liu, Quanhui; Rosenthal, Elisabeth A; Shirts, Brian H

    2018-04-01

    Quantitative cosegregation analysis can help evaluate the pathogenicity of genetic variants. However, genetics professionals without statistical training often use simple methods, reporting only qualitative findings. We evaluate the potential utility of quantitative cosegregation in the clinical setting by comparing three methods. One thousand pedigrees each were simulated for benign and pathogenic variants in BRCA1 and MLH1 using United States historical demographic data to produce pedigrees similar to those seen in the clinic. These pedigrees were analyzed using two robust methods, full likelihood Bayes factors (FLB) and cosegregation likelihood ratios (CSLR), and a simpler method, counting meioses. Both FLB and CSLR outperform counting meioses when dealing with pathogenic variants, though counting meioses is not far behind. For benign variants, FLB and CSLR greatly outperform counting meioses, which is unable to generate evidence for benign variants. Comparing FLB and CSLR, we find that the two methods perform similarly, indicating that quantitative results from either of these methods could be combined in multifactorial calculations. Combining quantitative information will be important as isolated use of cosegregation in single families will yield classification for less than 1% of variants. To encourage wider use of robust cosegregation analysis, we present a website ( http://www.analyze.myvariant.org ) which implements the CSLR, FLB, and counting meioses methods for ATM, BRCA1, BRCA2, CHEK2, MEN1, MLH1, MSH2, MSH6, and PMS2. We also present an R package, CoSeg, which performs the CSLR analysis on any gene with user-supplied parameters. Future variant classification guidelines should allow nuanced inclusion of cosegregation evidence against pathogenicity.

  13. Towards Extending Forward Kinematic Models on Hyper-Redundant Manipulator to Cooperative Bionic Arms

    NASA Astrophysics Data System (ADS)

    Singh, Inderjeet; Lakhal, Othman; Merzouki, Rochdi

    2017-01-01

    Forward kinematics is a stepping stone towards finding an inverse solution and subsequently a dynamic model of a robot. Hence, a study and comparison of various forward kinematic models (FKMs) is necessary for robot design. This paper compares three FKMs on the same hyper-redundant Compact Bionic Handling Assistant (CBHA) manipulator under the same conditions. The aim of this study is to inform the modeling of cooperative bionic manipulators. Two of these methods are quantitative, the Arc Geometry HTM (Homogeneous Transformation Matrix) method and the Dual Quaternion method, while the third is a hybrid method which uses both quantitative and qualitative approaches. The methods are compared theoretically, and experimental results are discussed to add further insight to the comparison. HTM, the most widely used and accepted technique, is taken as the reference, and the trajectory deviations of the other techniques are compared with respect to it in order to determine which method yields an accurate kinematic behaviour of the CBHA when controlled in real time.
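
    The HTM approach chains 4x4 homogeneous transformation matrices link by link. The short Python sketch below shows the general pattern with toy planar links; it is not the CBHA arc-geometry model from the paper:

```python
# Illustrative forward-kinematics sketch using homogeneous transformation matrices (HTM).
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis as a 4x4 homogeneous transform."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def trans(x, y, z):
    """Pure translation as a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def forward_kinematics(joint_angles, link_lengths):
    """Chain HTMs: rotate about z by each joint angle, then translate along x."""
    T = np.eye(4)
    for theta, L in zip(joint_angles, link_lengths):
        T = T @ rot_z(theta) @ trans(L, 0, 0)
    return T  # pose of the end effector in the base frame

# Toy planar 3-link example
T_end = forward_kinematics([0.1, 0.2, -0.3], [0.2, 0.2, 0.1])
print(T_end[:3, 3])  # end-effector position
```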

  14. The impact of loss to follow-up on hypothesis tests of the treatment effect for several statistical methods in substance abuse clinical trials.

    PubMed

    Hedden, Sarra L; Woolson, Robert F; Carter, Rickey E; Palesch, Yuko; Upadhyaya, Himanshu P; Malcolm, Robert J

    2009-07-01

    "Loss to follow-up" can be substantial in substance abuse clinical trials. When extensive losses to follow-up occur, one must cautiously analyze and interpret the findings of a research study. Aims of this project were to introduce the types of missing data mechanisms and describe several methods for analyzing data with loss to follow-up. Furthermore, a simulation study compared Type I error and power of several methods when missing data amount and mechanism varies. Methods compared were the following: Last observation carried forward (LOCF), multiple imputation (MI), modified stratified summary statistics (SSS), and mixed effects models. Results demonstrated nominal Type I error for all methods; power was high for all methods except LOCF. Mixed effect model, modified SSS, and MI are generally recommended for use; however, many methods require that the data are missing at random or missing completely at random (i.e., "ignorable"). If the missing data are presumed to be nonignorable, a sensitivity analysis is recommended.

  15. Squeal Those Tires! Automobile-Accident Reconstruction.

    ERIC Educational Resources Information Center

    Caples, Linda Griffin

    1992-01-01

    Methods used to reconstruct traffic accidents provide settings for real-life applications for students in precalculus, mathematical analysis, or trigonometry. Described is the investigation of an accident, in conjunction with the local Highway Patrol Academy, integrating physics, vectors, and trigonometry. Class findings were compared with those of…

  16. Perioperative mortality after hemiarthroplasty related to fixation method.

    PubMed

    Costain, Darren J; Whitehouse, Sarah L; Pratt, Nicole L; Graves, Stephen E; Ryan, Philip; Crawford, Ross W

    2011-06-01

    The appropriate fixation method for hemiarthroplasty of the hip as it relates to implant survivorship and patient mortality is a matter of ongoing debate. We examined the influence of fixation method on revision rate and mortality. We analyzed approximately 25,000 hemiarthroplasty cases from the AOA National Joint Replacement Registry. Deaths at 1 day, 1 week, 1 month, and 1 year were compared for all patients and among subgroups based on implant type. Patients treated with cemented monoblock hemiarthroplasty had a 1.7-times higher day-1 mortality compared to uncemented monoblock components (p < 0.001). This finding was reversed by 1 week, 1 month, and 1 year after surgery (p < 0.001). Modular hemiarthroplasties did not reveal a difference in mortality between fixation methods at any time point. This study shows lower (or similar) overall mortality with cemented hemiarthroplasty of the hip.

  17. Preferences for partner notification method: variation in responses between respondents as index patients and contacts.

    PubMed

    Apoola, A; Radcliffe, K W; Das, S; Robshaw, V; Gilleran, G; Kumari, B S; Boothby, M; Rajakumar, R

    2007-07-01

    There have been very few studies focusing on what form of communication patients would find acceptable from a clinic. This study looks at the differences in preferences for various partner notification methods when the respondents were index patients compared with when they had to be contacted because a partner had a sexually transmitted infection (STI). There were 2544 respondents. When the clinic had to notify partners, respondents were more likely to report the method as good when a partner had an STI and they were being contacted compared with when the respondents had an infection and the partner was being contacted. The opposite was true for patient referral partner notification. Therefore, there are variations in the preferences of respondents for partner notification method, which depend on whether they see themselves as index patients or contacts.

  18. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review.

    PubMed

    Mathes, Tim; Klaßen, Pauline; Pieper, Dawid

    2017-11-28

    Our objective was to assess the frequency of data extraction errors and its potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted in standardized tables by one reviewer and verified by a second. The analysis included six studies; four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to get deeper insights into the influence of different extraction methods.

  19. A Comparative Analysis of Taguchi Methodology and Shainin System DoE in the Optimization of Injection Molding Process Parameters

    NASA Astrophysics Data System (ADS)

    Khavekar, Rajendra; Vasudevan, Hari, Dr.; Modi, Bhavik

    2017-08-01

    Two well-known Design of Experiments (DoE) methodologies, Taguchi Methods (TM) and the Shainin System (SS), are compared and analyzed in this study through their implementation in a plastic injection molding unit. Experiments were performed at a company manufacturing perfume bottle caps (made of acrylic material) using TM and SS to find the root cause of defects and to optimize the process parameters for minimum rejection. The experiments reduced the rejection rate from approximately 40% during trial runs to 8.57%, which is quite low, representing a successful implementation of these DoE methods. The comparison showed that both methodologies identified the same set of variables as critical for defect reduction, but with a change in their order of significance. Also, Taguchi Methods require a larger number of experiments and consume more time than the Shainin System. The Shainin System is less complicated and easy to implement, whereas Taguchi Methods are statistically more reliable for the optimization of process parameters. Finally, the experimentation showed that DoE methods are robust and reliable in implementation as organizations attempt to improve quality through optimization.

  20. Application of the dual-kinetic-balance sets in the relativistic many-body problem of atomic structure

    NASA Astrophysics Data System (ADS)

    Beloy, Kyle; Derevianko, Andrei

    2008-05-01

    The dual-kinetic-balance (DKB) finite basis set method for solving the Dirac equation for hydrogen-like ions [V. M. Shabaev et al., Phys. Rev. Lett. 93, 130405 (2004)] is extended to problems with a non-local spherically-symmetric Dirac-Hartree-Fock potential. We implement the DKB method using B-spline basis sets and compare its performance with the widely-employed approach of the Notre Dame (ND) group [W.R. Johnson, S.A. Blundell, J. Sapirstein, Phys. Rev. A 37, 307-15 (1988)]. We compare the performance of the ND and DKB methods by computing various properties of the Cs atom: energies, hyperfine integrals, the parity-non-conserving amplitude of the 6s1/2-7s1/2 transition, and the second-order many-body correction to the removal energy of the valence electrons. We find that for a comparable size of the basis set the accuracy of both methods is similar for matrix elements accumulated far from the nuclear region. However, for atomic properties determined by small distances, the DKB method outperforms the ND approach.

  1. Improving Nursing Students' Learning Outcomes in Fundamentals of Nursing Course through Combination of Traditional and e-Learning Methods

    PubMed Central

    Sheikhaboumasoudi, Rouhollah; Bagheri, Maryam; Hosseini, Sayed Abbas; Ashouri, Elaheh; Elahi, Nasrin

    2018-01-01

    Background: The fundamentals of nursing course is a prerequisite to providing comprehensive nursing care. Despite the development of technology in nursing education, the effectiveness of e-learning methods in the fundamentals of nursing course is unclear for nursing students in the clinical skills laboratory. The aim of this study was to compare the effect of blended learning (combining e-learning with traditional learning methods) with traditional learning alone on nursing students' scores. Materials and Methods: A two-group post-test experimental study was administered from February 2014 to February 2015. Two groups of nursing students who were taking the fundamentals of nursing course in Iran were compared. Sixty nursing students were assigned to a control group (traditional learning methods only) and an experimental group (combining e-learning with traditional learning methods) over two consecutive semesters. Both groups participated in an Objective Structured Clinical Examination (OSCE) and were evaluated in the same way using a prepared checklist and a satisfaction questionnaire. Statistical analysis was conducted with SPSS software version 16. Results: The findings of this study showed that the mean midterm (t = 2.00, p = 0.04) and final scores (t = 2.50, p = 0.01) of the intervention group (combining e-learning with traditional learning methods) were significantly higher than those of the control group (traditional learning methods). The satisfaction of male students in the intervention group was higher than that of females (t = 2.60, p = 0.01). Conclusions: Based on the findings, this study suggests that combining traditional learning methods with e-learning methods, such as an educational website and interactive online resources, for fundamentals of nursing course instruction can be an effective supplement for improving nursing students' clinical skills. PMID:29861761

  2. Comparing different methods for assessing contaminant bioavailability during sediment remediation.

    PubMed

    Jia, Fang; Liao, Chunyang; Xue, Jiaying; Taylor, Allison; Gan, Jay

    2016-12-15

    Sediment contamination by persistent organic pollutants from historical episodes is widespread, and remediation is often needed to clean up severely contaminated sites. Measuring contaminant bioavailability in a before-and-after manner lends itself to improved assessment of remediation effectiveness. However, a number of bioavailability measurement methods have been developed, posing a challenge in method selection for practitioners. In this study, three different bioavailability measurement methods, i.e., solid phase microextraction (SPME), Tenax desorption, and the isotope dilution method (IDM), were compared in evaluating changes in bioavailability of DDT and its degradates in sediment following simulated remediation treatments. When compared to the unamended sediments, all three methods predicted essentially the same degrees of changes in bioavailability after amendment with activated carbon, charcoal or sand. After normalizing over the unamended control, measurements by different methods were linearly correlated with each other, with slopes close to 1. The same observation was further made with a Superfund site marine sediment. This finding suggests that different methods may be used in evaluating remediation efficiency. However, Tenax desorption or IDM consistently offered better sensitivity than SPME in detecting bioavailability changes. Results from this study highlight the value of considering bioavailability when evaluating remediation effectiveness and provide guidance on the selection of bioavailability measurement methods in such assessments. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Comparison of manual & automated analysis methods for corneal endothelial cell density measurements by specular microscopy.

    PubMed

    Huang, Jianyan; Maram, Jyotsna; Tepelus, Tudor C; Modak, Cristina; Marion, Ken; Sadda, SriniVas R; Chopra, Vikas; Lee, Olivia L

    2017-08-07

    To determine the reliability of corneal endothelial cell density (ECD) obtained by automated specular microscopy versus that of validated manual methods, and the factors that predict such reliability. Sharp central images from 94 control and 106 glaucomatous eyes were captured with a Konan NSP-9900 specular microscope. All images were analyzed by trained graders using Konan CellChek software, employing the fully- and semi-automated methods as well as the Center Method. Images with low cell counts (fewer than 100 input cells) and/or guttata were compared using the Center and Flex-Center Methods. ECDs were compared and absolute error was used to assess variation. The effect on ECD of age, cell count, cell size, and cell size variation was evaluated. No significant difference was observed between the Center and Flex-Center Methods in corneas with guttata (p=0.48) or low ECD (p=0.11). No difference (p=0.32) was observed in the ECD of normal controls <40 yrs old between the fully-automated method and the manual Center Method. However, in older controls and glaucomatous eyes, ECD was overestimated by the fully-automated method (p=0.034) and the semi-automated method (p=0.025) compared to the manual method. Our findings show that automated analysis significantly overestimates ECD in eyes with high polymegathism and/or large cell size, compared to the manual method. Therefore, we discourage reliance upon the fully-automated method alone to perform specular microscopy analysis, particularly if an accurate ECD value is imperative. Copyright © 2017. Published by Elsevier España, S.L.U.

  4. A Novel Method to Determine the Hydrodynamic Coefficients of an Eyeball ROV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yh, Eng; Ws, Lau; Low, E.

    2009-01-12

    A good dynamic model is essential and critical for the successful design of the navigation and control system of an underwater vehicle. However, it is difficult to determine the hydrodynamic forces, namely the inertial added-mass terms and the drag coefficients. In this paper, a new experimental method has been used to find the hydrodynamic forces for the ROV II, a remotely operated underwater vehicle. The proposed method is based on the classical free decay test, but with the spring oscillation replaced by a pendulum motion. The experimental results determined from the free decay test of a scaled model compared well with the simulation results obtained from a well-established computational fluid dynamics (CFD) program. Thus, the proposed approach can be used to find the added mass and drag coefficients for other underwater vehicles.

  5. Improving computer-aided detection assistance in breast cancer screening by removal of obviously false-positive findings.

    PubMed

    Mordang, Jan-Jurre; Gubern-Mérida, Albert; Bria, Alessandro; Tortorella, Francesco; den Heeten, Gerard; Karssemeijer, Nico

    2017-04-01

    Computer-aided detection (CADe) systems for mammography screening still mark many false positives. This can cause radiologists to lose confidence in CADe, especially when many false positives are obviously not suspicious to them. In this study, we focus on obvious false positives generated by microcalcification detection algorithms. We aim at reducing the number of obvious false-positive findings by adding an additional step in the detection method. In this step, a multiclass machine learning method is implemented in which dedicated classifiers learn to recognize the patterns of obvious false-positive subtypes that occur most frequently. The method is compared to a conventional two-class approach, where all false-positive subtypes are grouped together in one class, and to the baseline CADe system without the new false-positive removal step. The methods are evaluated on an independent dataset containing 1,542 screening examinations of which 80 examinations contain malignant microcalcifications. Analysis showed that the multiclass approach yielded a significantly higher sensitivity compared to the other two methods (P < 0.0002). At one obvious false positive per 100 images, the baseline CADe system detected 61% of the malignant examinations, while the systems with the two-class and multiclass false-positive reduction step detected 73% and 83%, respectively. Our study showed that by adding the proposed method to a CADe system, the number of obvious false positives can decrease significantly (P < 0.0002). © 2017 American Association of Physicists in Medicine.

  6. Engineering Ethics Education: A Comparative Study of Japan and Malaysia.

    PubMed

    Balakrishnan, Balamuralithara; Tochinai, Fumihiko; Kanemitsu, Hidekazu

    2018-03-22

    This paper reports the findings of a comparative study in which students' perceived attainment of the objectives of an engineering ethics education and their attitude towards engineering ethics were investigated and compared. The investigation was carried out in Japan and Malaysia, involving 163 and 108 engineering undergraduates respectively. The research method used was based on a survey in which respondents were sent a questionnaire to elicit relevant data. Both descriptive and inferential statistical analyses were performed on the data. The results of the analyses showed that the attainment of the objectives of engineering ethics education and students' attitude towards socio-ethical issues in engineering were significantly higher and positive among Japanese engineering students compared to Malaysian engineering students. Such findings suggest that a well-structured, integrated, and innovative pedagogy for teaching ethics will have an impact on the students' attainment of ethics education objectives and their attitude towards engineering ethics. As such, the research findings serve as a cornerstone to which the current practice of teaching and learning of engineering ethics education can be examined more critically, such that further improvements can be made to the existing curriculum that can help produce engineers that have strong moral and ethical characters.

  7. The Facial Appearance of CEOs: Faces Signal Selection but Not Performance

    PubMed Central

    Garretsen, Harry; Spreeuwers, Luuk J.

    2016-01-01

    Research overwhelmingly shows that facial appearance predicts leader selection. However, the evidence on the relevance of faces for actual leader ability and consequently performance is inconclusive. By using a state-of-the-art, objective measure for face recognition, we test the predictive value of CEOs’ faces for firm performance in a large sample of faces. We first compare the faces of Fortune500 CEOs with those of US citizens and professors. We find clear confirmation that CEOs do look different when compared to citizens or professors, replicating the finding that faces matter for selection. More importantly, we also find that faces of CEOs of top performing firms do not differ from other CEOs. Based on our advanced face recognition method, our results suggest that facial appearance matters for leader selection but that it does not do so for leader performance. PMID:27462986

  8. Using transfer learning to detect galaxy mergers

    NASA Astrophysics Data System (ADS)

    Ackermann, Sandro; Schawinksi, Kevin; Zhang, Ce; Weigel, Anna K.; Turp, M. Dennis

    2018-05-01

    We investigate the use of deep convolutional neural networks (deep CNNs) for automatic visual detection of galaxy mergers. Moreover, we investigate the use of transfer learning in conjunction with CNNs, by retraining networks first trained on pictures of everyday objects. We test the hypothesis that transfer learning is useful for improving classification performance for small training sets. This would make transfer learning useful for finding rare objects in astronomical imaging datasets. We find that these deep learning methods perform significantly better than current state-of-the-art merger detection methods based on nonparametric systems like CAS and GM20. Our method is end-to-end and robust to image noise and distortions; it can be applied directly without image preprocessing. We also find that transfer learning can act as a regulariser in some cases, leading to better overall classification accuracy (p = 0.02). Transfer learning on our full training set leads to a lowered error rate from 0.0381 down to 0.0321, a relative improvement of 15%. Finally, we perform a basic sanity-check by creating a merger sample with our method, and comparing with an already existing, manually created merger catalogue in terms of colour-mass distribution and stellar mass function.
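
    A generic transfer-learning setup of the kind described, where a network pretrained on everyday images is given a new classification head, can be sketched as follows; this uses a standard torchvision ResNet as an illustrative stand-in and is not the authors' architecture or training procedure:

```python
# Hedged transfer-learning sketch: fine-tune an ImageNet-pretrained CNN for a
# two-class (merger / non-merger) problem using toy data.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained on everyday images
for p in model.parameters():
    p.requires_grad = False                      # freeze the pretrained feature extractor
model.fc = nn.Linear(model.fc.in_features, 2)    # new head: merger vs. non-merger

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimisation step on a batch of (N, 3, H, W) images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch to show the call signature
loss = train_step(torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 0, 1]))
print(loss)
```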

  9. Novel method of measuring the mental workload of anaesthetists during clinical practice.

    PubMed

    Byrne, A J; Oliver, M; Bodger, O; Barnett, W A; Williams, D; Jones, H; Murphy, A

    2010-12-01

    Cognitive overload has been recognized as a significant cause of error in industries such as aviation and measuring mental workload has become a key method of improving safety. The aim of this study was to pilot the use of a new method of measuring mental workload in the operating theatre using a previously published methodology. The mental workload of the anaesthetists was assessed by measuring their response times to a wireless vibrotactile device and the NASA TLX subjective workload score during routine surgical procedures. Primary task workload was inferred from the phase of anaesthesia. Significantly increased response time was associated with the induction phase of anaesthesia compared with maintenance/emergence, non-consultant grade, and during more complex cases. Increased response was also associated with self-reported mental load, physical load, and frustration. These findings are consistent with periods of increased mental workload and with the findings of other studies using similar techniques. These findings confirm the importance of mental workload to the performance of anaesthetists and suggest that increased mental workload is likely to be a common problem in clinical practice. Although further studies are required, the method described may be useful for the measurement of the mental workload of anaesthetists.

  10. Edge grouping combining boundary and region information.

    PubMed

    Stahl, Joachim S; Wang, Song

    2007-10-01

    This paper introduces a new edge-grouping method to detect perceptually salient structures in noisy images. Specifically, we define a new grouping cost function in a ratio form, where the numerator measures the boundary proximity of the resulting structure and the denominator measures the area of the resulting structure. This area term introduces a preference towards detecting larger-size structures and, therefore, makes the resulting edge grouping more robust to image noise. To find the optimal edge grouping with the minimum grouping cost, we develop a special graph model with two different kinds of edges and then reduce the grouping problem to finding a special kind of cycle in this graph with a minimum cost in ratio form. This optimal cycle-finding problem can be solved in polynomial time by a previously developed graph algorithm. We implement this edge-grouping method, test it on both synthetic data and real images, and compare its performance against several available edge-grouping and edge-linking methods. Furthermore, we discuss several extensions of the proposed method, including the incorporation of the well-known grouping cues of continuity and intensity homogeneity, introducing a factor to balance the contributions from the boundary and region information, and the prevention of detecting self-intersecting boundaries.

  11. A New Multiconstraint Method for Determining the Optimal Cable Stresses in Cable-Stayed Bridges

    PubMed Central

    Asgari, B.; Osman, S. A.; Adnan, A.

    2014-01-01

    Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find optimum stresses of cables. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces less bending moments and stresses in the bridge members and requires shorter simulation time than other proposed methods. The results of comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing uniform deck moment distribution than unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through proposed multiconstraint optimisation method. PMID:25050400

  12. A new multiconstraint method for determining the optimal cable stresses in cable-stayed bridges.

    PubMed

    Asgari, B; Osman, S A; Adnan, A

    2014-01-01

    Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find optimum stresses of cables. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces less bending moments and stresses in the bridge members and requires shorter simulation time than other proposed methods. The results of comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing uniform deck moment distribution than unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through proposed multiconstraint optimisation method.

  13. The Social and Political Structuring of Faculty Ethicality in Education

    ERIC Educational Resources Information Center

    Reybold, L. Earle

    2008-01-01

    This study examined the experience of faculty ethicality in education. Research questions focused on faculty characterizations of professional ethics, related socialization experiences, and responses to dilemmas. Interviews were conducted with 32 faculty members and analyzed using the constant comparative method. Findings describe the experiential…

  14. Library Design Analysis Using Post-Occupancy Evaluation Methods.

    ERIC Educational Resources Information Center

    James, Dennis C.; Stewart, Sharon L.

    1995-01-01

    Presents findings of a user-based study of the interior of Rodger's Science and Engineering Library at the University of Alabama. Compared facility evaluations from faculty, library staff, and graduate and undergraduate students. Features evaluated include: acoustics, aesthetics, book stacks, design, finishes/materials, furniture, lighting,…

  15. Discovering novel subsystems using comparative genomics

    PubMed Central

    Ferrer, Luciana; Shearer, Alexander G.; Karp, Peter D.

    2011-01-01

    Motivation: Key problems for computational genomics include discovering novel pathways in genome data, and discovering functional interaction partners for genes to define new members of partially elucidated pathways. Results: We propose a novel method for the discovery of subsystems from annotated genomes. For each gene pair, a score measuring the likelihood that the two genes belong to a same subsystem is computed using genome context methods. Genes are then grouped based on these scores, and the resulting groups are filtered to keep only high-confidence groups. Since the method is based on genome context analysis, it relies solely on structural annotation of the genomes. The method can be used to discover new pathways, find missing genes from a known pathway, find new protein complexes or other kinds of functional groups and assign function to genes. We tested the accuracy of our method in Escherichia coli K-12. In one configuration of the system, we find that 31.6% of the candidate groups generated by our method match a known pathway or protein complex closely, and that we rediscover 31.2% of all known pathways and protein complexes of at least 4 genes. We believe that a significant proportion of the candidates that do not match any known group in E.coli K-12 corresponds to novel subsystems that may represent promising leads for future laboratory research. We discuss in-depth examples of these findings. Availability: Predicted subsystems are available at http://brg.ai.sri.com/pwy-discovery/journal.html. Contact: lferrer@ai.sri.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21775308

  16. Variance fluctuations in nonstationary time series: a comparative study of music genres

    NASA Astrophysics Data System (ADS)

    Jennings, Heather D.; Ivanov, Plamen Ch.; De Martins, Allan M.; da Silva, P. C.; Viswanathan, G. M.

    2004-05-01

    An important problem in physics concerns the analysis of audio time series generated by transduced acoustic phenomena. Here, we develop a new method to quantify the scaling properties of the local variance of nonstationary time series. We apply this technique to analyze audio signals obtained from selected genres of music. We find quantitative differences in the correlation properties of high art music, popular music, and dance music. We discuss the relevance of these objective findings in relation to the subjective experience of music.
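
    A simplified stand-in for this kind of analysis is to measure how the variance inside windows of increasing size scales with the window length and to fit a power law, as in the hedged Python sketch below (the signal and window sizes are toy choices, not the paper's procedure):

```python
# Illustrative sketch: scaling of local variance with window size.
import numpy as np

def local_variance_scaling(signal, window_sizes):
    """For each window size, average the variance over non-overlapping windows,
    then fit a power law var(w) ~ w**beta in log-log space."""
    mean_vars = []
    for w in window_sizes:
        n = len(signal) // w
        windows = signal[:n * w].reshape(n, w)
        mean_vars.append(windows.var(axis=1).mean())
    beta = np.polyfit(np.log(window_sizes), np.log(mean_vars), 1)[0]
    return beta, mean_vars

# Toy usage with a random-walk stand-in for an audio signal
rng = np.random.default_rng(0)
signal = np.cumsum(rng.normal(size=2**14))
beta, _ = local_variance_scaling(signal, window_sizes=[16, 32, 64, 128, 256])
print("scaling exponent:", beta)
```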

  17. An Efficient and Reliable Statistical Method for Estimating Functional Connectivity in Large Scale Brain Networks Using Partial Correlation

    PubMed Central

    Wang, Yikai; Kang, Jian; Kemmer, Phebe B.; Guo, Ying

    2016-01-01

    Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promise in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. Based on partial correlation, we find that the most significant direct connections are between homologous brain locations in the left and right hemisphere. When comparing partial correlation derived under different sparse tuning parameters, an important finding is that the sparse regularization has more shrinkage effects on negative functional connections than on positive connections, which supports previous findings that many of the negative brain connections are due to non-neurophysiological effects. An R package “DensParcorr” can be downloaded from CRAN for implementing the proposed statistical methods. PMID:27242395

  18. An Efficient and Reliable Statistical Method for Estimating Functional Connectivity in Large Scale Brain Networks Using Partial Correlation.

    PubMed

    Wang, Yikai; Kang, Jian; Kemmer, Phebe B; Guo, Ying

    2016-01-01

    Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promise in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. Based on partial correlation, we find that the most significant direct connections are between homologous brain locations in the left and right hemisphere. When comparing partial correlation derived under different sparse tuning parameters, an important finding is that the sparse regularization has more shrinkage effects on negative functional connections than on positive connections, which supports previous findings that many of the negative brain connections are due to non-neurophysiological effects. An R package "DensParcorr" can be downloaded from CRAN for implementing the proposed statistical methods.
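
    The conversion from a precision matrix to partial correlations is standard and can be sketched in a few lines of Python; note that a plain matrix inverse of the sample covariance stands in here for the CLIME estimator and Dens-based tuning used in the papers above:

```python
# Sketch: partial correlations from an estimated precision (inverse covariance) matrix.
import numpy as np

def partial_correlation(precision):
    """Convert a precision matrix P to partial correlations:
    rho_ij = -P_ij / sqrt(P_ii * P_jj)."""
    d = np.sqrt(np.diag(precision))
    pcorr = -precision / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Toy usage: n time points x p nodes (n must exceed p for a plain inverse)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
precision = np.linalg.inv(np.cov(X, rowvar=False))
print(np.round(partial_correlation(precision), 2))
```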

  19. Comparison of accuracy of physical examination findings in initial progress notes between paper charts and a newly implemented electronic health record.

    PubMed

    Yadav, Siddhartha; Kazanji, Noora; K C, Narayan; Paudel, Sudarshan; Falatko, John; Shoichet, Sandor; Maddens, Michael; Barnes, Michael A

    2017-01-01

    There have been several concerns about the quality of documentation in electronic health records (EHRs) when compared to paper charts. This study compares the accuracy of physical examination findings documentation between the two in initial progress notes. Initial progress notes from patients with 5 specific diagnoses with invariable physical findings admitted to Beaumont Hospital, Royal Oak, between August 2011 and July 2013 were randomly selected for this study. A total of 500 progress notes were retrospectively reviewed. The paper chart arm consisted of progress notes completed prior to the transition to an EHR on July 1, 2012. The remaining charts were placed in the EHR arm. The primary endpoints were accuracy, inaccuracy, and omission of information. Secondary endpoints were time of initiation of progress note, word count, number of systems documented, and accuracy based on level of training. The rate of inaccurate documentation was significantly higher in the EHRs compared to the paper charts (24.4% vs 4.4%). However, expected physical examination findings were more likely to be omitted in the paper notes compared to EHRs (41.2% vs 17.6%). Resident physicians had a smaller number of inaccuracies (5.3% vs 17.3%) and omissions (16.8% vs 33.9%) compared to attending physicians. During the initial phase of implementation of an EHR, inaccuracies were more common in progress notes in the EHR compared to the paper charts. Residents had a lower rate of inaccuracies and omissions compared to attending physicians. Further research is needed to identify training methods and incentives that can reduce inaccuracies in EHRs during initial implementation. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. Novel scheme to compute chemical potentials of chain molecules on a lattice

    NASA Astrophysics Data System (ADS)

    Mooij, G. C. A. M.; Frenkel, D.

    We present a novel method that allows efficient computation of the total number of allowed conformations of a chain molecule in a dense phase. Using this method, it is possible to estimate the chemical potential of such a chain molecule. We have tested the present method in simulations of a two-dimensional monolayer of chain molecules on a lattice (Whittington-Chapman model) and compared it with existing schemes to compute the chemical potential. We find that the present approach is two to three orders of magnitude faster than the most efficient of the existing methods.

  1. The Effect of Instructional Method on Cardiopulmonary Resuscitation Skill Performance: A Comparison Between Instructor-Led Basic Life Support and Computer-Based Basic Life Support With Voice-Activated Manikin.

    PubMed

    Wilson-Sands, Cathy; Brahn, Pamela; Graves, Kristal

    2015-01-01

    Validating participants' ability to correctly perform cardiopulmonary resuscitation (CPR) skills during basic life support courses can be a challenge for nursing professional development specialists. This study compares two methods of basic life support training, instructor-led and computer-based learning with voice-activated manikins, to identify if one method is more effective for performance of CPR skills. The findings suggest that a computer-based learning course with voice-activated manikins is a more effective method of training for improved CPR performance.

  2. Spatial cluster detection using dynamic programming

    PubMed Central

    2012-01-01

    Background The task of spatial cluster detection involves finding spatial regions where some property deviates from the norm or the expected value. In a probabilistic setting this task can be expressed as finding a region where some event is significantly more likely than usual. Spatial cluster detection is of interest in fields such as biosurveillance, mining of astronomical data, military surveillance, and analysis of fMRI images. In almost all such applications we are interested both in the question of whether a cluster exists in the data, and if it exists, we are interested in finding the most accurate characterization of the cluster. Methods We present a general dynamic programming algorithm for grid-based spatial cluster detection. The algorithm can be used for both Bayesian maximum a-posteriori (MAP) estimation of the most likely spatial distribution of clusters and Bayesian model averaging over a large space of spatial cluster distributions to compute the posterior probability of an unusual spatial clustering. The algorithm is explained and evaluated in the context of a biosurveillance application, specifically the detection and identification of Influenza outbreaks based on emergency department visits. A relatively simple underlying model is constructed for the purpose of evaluating the algorithm, and the algorithm is evaluated using the model and semi-synthetic test data. Results When compared to baseline methods, tests indicate that the new algorithm can improve MAP estimates under certain conditions: the greedy algorithm we compared our method to was found to be more sensitive to smaller outbreaks, while as the size of the outbreaks increases, in terms of area affected and proportion of individuals affected, our method overtakes the greedy algorithm in spatial precision and recall. The new algorithm performs on-par with baseline methods in the task of Bayesian model averaging. Conclusions We conclude that the dynamic programming algorithm performs on-par with other available methods for spatial cluster detection and point to its low computational cost and extendability as advantages in favor of further research and use of the algorithm. PMID:22443103
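
    As a hedged, much-simplified illustration of grid-based cluster detection (not the paper's Bayesian MAP/model-averaging algorithm), the sketch below uses 2D prefix sums, themselves a small dynamic program, to score every axis-aligned rectangle of a count grid with a Poisson log-likelihood ratio against a baseline.

```python
# Simplified grid scan: prefix sums give O(1) evaluation of every rectangle,
# scored with a Poisson log-likelihood ratio. Illustrative only.
import numpy as np

def best_rectangle(counts, baseline):
    C = np.pad(counts, ((1, 0), (1, 0))).cumsum(0).cumsum(1)     # prefix sums
    B = np.pad(baseline, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    n_rows, n_cols = counts.shape
    best, best_rect = -np.inf, None
    for r1 in range(n_rows):
        for r2 in range(r1 + 1, n_rows + 1):
            for c1 in range(n_cols):
                for c2 in range(c1 + 1, n_cols + 1):
                    c = C[r2, c2] - C[r1, c2] - C[r2, c1] + C[r1, c1]
                    b = B[r2, c2] - B[r1, c2] - B[r2, c1] + B[r1, c1]
                    if c > b > 0:                       # only elevated regions
                        score = c * np.log(c / b) - (c - b)       # Poisson LLR
                        if score > best:
                            best, best_rect = score, (r1, r2, c1, c2)
    return best, best_rect

rng = np.random.default_rng(0)
baseline = np.full((12, 12), 5.0)
counts = rng.poisson(baseline).astype(float)
counts[3:6, 7:10] += rng.poisson(6.0, (3, 3))           # injected outbreak
print(best_rectangle(counts, baseline))
```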

  3. An Efficient Method to Detect Mutual Overlap of a Large Set of Unordered Images for Structure-From-Motion

    NASA Astrophysics Data System (ADS)

    Wang, X.; Zhan, Z. Q.; Heipke, C.

    2017-05-01

    Recently, low-cost 3D reconstruction based on images has become a popular focus of photogrammetry and computer vision research. Methods which can handle an arbitrary geometric setup of a large number of unordered and convergent images are of particular interest. However, determining the mutual overlap poses a considerable challenge. We propose a new method which was inspired by and improves upon methods employing random k-d forests for this task. Specifically, we first derive features from the images and then a random k-d forest is used to find the nearest neighbours in feature space. Subsequently, the degree of similarity between individual images, the image overlaps and thus images belonging to a common block are calculated as input to a structure-from-motion (sfm) pipeline. In our experiments we show the general applicability of the new method and compare it with other methods by analyzing the time efficiency. Orientations and 3D reconstructions were successfully conducted with our overlap graphs by sfm. The results show a speed-up of a factor of 80 compared to conventional pairwise matching, and of 8 and 2 compared to the VocMatch approach using 1 and 4 CPU, respectively.
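
    A hedged sketch of the nearest-neighbour step follows: it assumes one global descriptor per image and uses a single k-d tree (the paper uses a random k-d forest) to retrieve each image's nearest neighbours in feature space, from which candidate overlapping pairs for an sfm pipeline can be collected.

```python
# Nearest-neighbour search over per-image feature vectors with a k-d tree.
# The descriptor choice and neighbour count are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
features = rng.standard_normal((1000, 128))   # one 128-d descriptor per image

tree = cKDTree(features)
dists, idx = tree.query(features, k=6)        # self + 5 nearest neighbours

overlap_edges = {(i, j) for i, row in enumerate(idx) for j in row[1:]}
print(len(overlap_edges), "candidate overlapping image pairs")
```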

  4. Impacts of future radiation management scenarios on terrestrial carbon dynamics simulated with fully coupled NorESM

    NASA Astrophysics Data System (ADS)

    Ekici, Altug; Tjiputra, Jerry; Grini, Alf; Muri, Helene

    2017-04-01

    We have simulated three different radiation management geoengineering methods (CCT - cirrus cloud thinning; SAI - stratospheric aerosol injection; MSB - marine sky brightening) on top of the future RCP8.5 scenario with the fully coupled Norwegian Earth System Model (NorESM). A globally consistent cooling in both atmosphere and soil is observed with all methods. However, the precipitation patterns depend on the method used. Globally, the CCT and MSB methods do not affect the vegetation carbon budget, while SAI leads to a loss compared to the RCP8.5 simulations. Spatially, the most sensitive region is the tropics, where the changes in vegetation carbon content are related to the precipitation changes. An increase in soil carbon is projected for all three methods, with the biggest change seen in the SAI method. Simulations with the CCT method lead to twice as much soil carbon retention in the tropics as the MSB method. Our findings show that such geoengineering methods have unforeseen regional consequences for biogeochemical cycles and should be considered with care in future climate policies.

  5. Improving Nursing Students' Learning Outcomes in Fundamentals of Nursing Course through Combination of Traditional and e-Learning Methods.

    PubMed

    Sheikhaboumasoudi, Rouhollah; Bagheri, Maryam; Hosseini, Sayed Abbas; Ashouri, Elaheh; Elahi, Nasrin

    2018-01-01

    The fundamentals of nursing course is a prerequisite to providing comprehensive nursing care. Despite the development of technology in nursing education, the effectiveness of e-learning methods for the fundamentals of nursing course in the clinical skills laboratory is unclear. The aim of this study was to compare the effect of blended learning (combining e-learning with traditional learning methods) with traditional learning alone on nursing students' scores. A two-group post-test experimental study was conducted from February 2014 to February 2015. Two groups of nursing students taking the fundamentals of nursing course in Iran were compared. Sixty nursing students were assigned to a control group (traditional learning methods only) and an experimental group (e-learning combined with traditional learning methods) over two consecutive semesters. Both groups participated in an Objective Structured Clinical Examination (OSCE) and were evaluated in the same way using a prepared checklist and a satisfaction questionnaire. Statistical analysis was conducted with SPSS software version 16. The findings of this study showed that the mean midterm (t = 2.00, p = 0.04) and final scores (t = 2.50, p = 0.01) of the intervention group (e-learning combined with traditional learning methods) were significantly higher than those of the control group (traditional learning methods). Satisfaction among male students in the intervention group was higher than among female students (t = 2.60, p = 0.01). Based on these findings, this study suggests that combining traditional learning methods with e-learning methods, such as an educational website and interactive online resources, for fundamentals of nursing course instruction can be an effective supplement for improving nursing students' clinical skills.

  6. The Quality of Methods Reporting in Parasitology Experiments

    PubMed Central

    Flórez-Vargas, Oscar; Bramhall, Michael; Noyes, Harry; Cruickshank, Sheena; Stevens, Robert; Brass, Andy

    2014-01-01

    There is a growing concern both inside and outside the scientific community over the lack of reproducibility of experiments. The depth and detail of reported methods are critical to the reproducibility of findings, but also for making it possible to compare and integrate data from different studies. In this study, we evaluated in detail the methods reporting in a comprehensive set of trypanosomiasis experiments that should enable valid reproduction, integration and comparison of research findings. We evaluated a subset of other parasitic (Leishmania, Toxoplasma, Plasmodium, Trichuris and Schistosoma) and non-parasitic (Mycobacterium) experimental infections in order to compare the quality of method reporting more generally. A systematic review using PubMed (2000–2012) of all publications describing gene expression in cells and animals infected with Trypanosoma spp was undertaken based on PRISMA guidelines; 23 papers were identified and included. We defined a checklist of essential parameters that should be reported and have scored the number of those parameters that are reported for each publication. Bibliometric parameters (impact factor, citations and h-index) were used to look for association between Journal and Author status and the quality of method reporting. Trichuriasis experiments achieved the highest scores and included the only paper to score 100% in all criteria. The mean of scores achieved by Trypanosoma articles through the checklist was 65.5% (range 32–90%). Bibliometric parameters were not correlated with the quality of method reporting (Spearman's rank correlation coefficient <−0.5; p>0.05). Our results indicate that the quality of methods reporting in experimental parasitology is a cause for concern and it has not improved over time, despite there being evidence that most of the assessed parameters do influence the results. We propose that our set of parameters be used as guidelines to improve the quality of the reporting of experimental infection models as a pre-requisite for integrating and comparing sets of data. PMID:25076044

  7. The quality of methods reporting in parasitology experiments.

    PubMed

    Flórez-Vargas, Oscar; Bramhall, Michael; Noyes, Harry; Cruickshank, Sheena; Stevens, Robert; Brass, Andy

    2014-01-01

    There is a growing concern both inside and outside the scientific community over the lack of reproducibility of experiments. The depth and detail of reported methods are critical to the reproducibility of findings, but also for making it possible to compare and integrate data from different studies. In this study, we evaluated in detail the methods reporting in a comprehensive set of trypanosomiasis experiments that should enable valid reproduction, integration and comparison of research findings. We evaluated a subset of other parasitic (Leishmania, Toxoplasma, Plasmodium, Trichuris and Schistosoma) and non-parasitic (Mycobacterium) experimental infections in order to compare the quality of method reporting more generally. A systematic review using PubMed (2000-2012) of all publications describing gene expression in cells and animals infected with Trypanosoma spp was undertaken based on PRISMA guidelines; 23 papers were identified and included. We defined a checklist of essential parameters that should be reported and have scored the number of those parameters that are reported for each publication. Bibliometric parameters (impact factor, citations and h-index) were used to look for association between Journal and Author status and the quality of method reporting. Trichuriasis experiments achieved the highest scores and included the only paper to score 100% in all criteria. The mean of scores achieved by Trypanosoma articles through the checklist was 65.5% (range 32-90%). Bibliometric parameters were not correlated with the quality of method reporting (Spearman's rank correlation coefficient <-0.5; p>0.05). Our results indicate that the quality of methods reporting in experimental parasitology is a cause for concern and it has not improved over time, despite there being evidence that most of the assessed parameters do influence the results. We propose that our set of parameters be used as guidelines to improve the quality of the reporting of experimental infection models as a pre-requisite for integrating and comparing sets of data.

  8. Health-related quality-of-life assessments in diverse population groups in the United States.

    PubMed

    Stewart, A L; Nápoles-Springer, A

    2000-09-01

    Effectiveness research needs to represent the increasing diversity of the United States. Health-related quality-of-life (HRQOL) measures are often included as secondary treatment outcomes. Because most HRQOL measures were developed in nonminority, well-educated samples, we must determine whether such measures are conceptually and psychometrically equivalent in diverse subgroups. Without equivalence, overall findings and observed group differences may contain measurement bias. The objectives of this work were to discuss the nature of diversity, importance of ensuring the adequacy of HRQOL measures in diverse groups, methods for assessing comparability of HRQOL measures across groups, and methodological and analytical challenges. Integration of qualitative and quantitative methods is needed to achieve measurement adequacy in diverse groups. Little research explores conceptual equivalence across US subgroups; of the few studies of psychometric comparability, findings are inconsistent. Evidence is needed regarding whether current measures are comparable or need modifications to meet universality assumptions, and we need to determine the best methods for evaluating this. We recommend coordinated efforts to develop guidelines for assessing measurement adequacy across diverse subgroups, allocate resources for measurement studies in diverse populations, improve reporting of and access to measurement results by subgroups, and develop strategies for optimizing the universality of HRQOL measures and resolving inadequacies. We advocate culturally sensitive research that involves cultural subgroups throughout the research process. Because examining the cultural equivalence of HRQOL measures within the United States is somewhat new, we have a unique opportunity to shape the direction of this work through development and dissemination of appropriate methods.

  9. Validation of a One-Step Method for Extracting Fatty Acids from Salmon, Chicken and Beef Samples.

    PubMed

    Zhang, Zhichao; Richardson, Christine E; Hennebelle, Marie; Taha, Ameer Y

    2017-10-01

    Fatty acid extraction methods are time-consuming and expensive because they involve multiple steps and copious amounts of extraction solvents. In an effort to streamline the fatty acid extraction process, this study compared the standard Folch lipid extraction method to a one-step method involving a column that selectively elutes the lipid phase. The methods were tested on raw beef, salmon, and chicken. Compared to the standard Folch method, the one-step extraction process generally yielded statistically insignificant differences in chicken and salmon fatty acid concentrations, percent composition and weight percent. Initial testing showed that beef stearic, oleic and total fatty acid concentrations were significantly lower by 9-11% with the one-step method as compared to the Folch method, but retesting on a different batch of samples showed a significant 4-8% increase in several omega-3 and omega-6 fatty acid concentrations with the one-step method relative to the Folch. Overall, the findings reflect the utility of a one-step extraction method for routine and rapid monitoring of fatty acids in chicken and salmon. Inconsistencies in beef concentrations, although minor (within 11%), may be due to matrix effects. A one-step fatty acid extraction method has broad applications for rapidly and routinely monitoring fatty acids in the food supply and formulating controlled dietary interventions. © 2017 Institute of Food Technologists®.

  10. An evaluation of computer-aided disproportionality analysis for post-marketing signal detection.

    PubMed

    Lehman, H P; Chen, J; Gould, A L; Kassekert, R; Beninger, P R; Carney, R; Goldberg, M; Goss, M A; Kidos, K; Sharrar, R G; Shields, K; Sweet, A; Wiholm, B E; Honig, P K

    2007-08-01

    To understand the value of computer-aided disproportionality analysis (DA) in relation to current pharmacovigilance signal detection methods, four products were retrospectively evaluated by applying an empirical Bayes method to Merck's post-marketing safety database. Findings were compared with the prior detection of labeled post-marketing adverse events. Disproportionality ratios (empirical Bayes geometric mean lower 95% bounds for the posterior distribution (EBGM05)) were generated for product-event pairs. Overall (1993-2004 data, EBGM05 ≥ 2, individual terms) results of signal detection using DA compared to standard methods were sensitivity, 31.1%; specificity, 95.3%; and positive predictive value, 19.9%. Using groupings of synonymous labeled terms, sensitivity improved (40.9%). More of the adverse events detected by both methods were detected earlier using DA and grouped (versus individual) terms. With 1939-2004 data, diagnostic properties were similar to those from 1993 to 2004. DA methods using Merck's safety database demonstrate sufficient sensitivity and specificity to be considered for use as an adjunct to conventional signal detection methods.
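
    The study used an empirical Bayes (EBGM-type) statistic, which is not reproduced here; as a simpler, commonly used disproportionality measure, the sketch below computes a proportional reporting ratio from a 2x2 table of hypothetical report counts.

```python
# Proportional reporting ratio (PRR) from a 2x2 report-count table.
# A simpler stand-in for the empirical Bayes EBGM statistic used in the study.
def proportional_reporting_ratio(a, b, c, d):
    """a: reports of the event for the drug,        b: other events for the drug,
       c: reports of the event for all other drugs, d: other events, other drugs."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts, purely for illustration.
print(round(proportional_reporting_ratio(20, 480, 100, 9400), 2))
```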

  11. Comparing and Contrasting Consensus versus Empirical Domains

    PubMed Central

    Jason, Leonard A.; Kot, Bobby; Sunnquist, Madison; Brown, Abigail; Reed, Jordan; Furst, Jacob; Newton, Julia L.; Strand, Elin Bolle; Vernon, Suzanne D.

    2015-01-01

    Background Since the publication of the CFS case definition [1], a number of other criteria have been proposed, including the Canadian Consensus Criteria [2] and the Myalgic Encephalomyelitis: International Consensus Criteria [3]. Purpose The current study compared domains developed through consensus methods with a domain structure obtained through a more empirical approach using factor analysis. Methods Using data mining, we compared and contrasted fundamental features of consensus-based criteria versus empirical latent factors. In general, these approaches identified the Fatigue/Post-exertional malaise domain as the one that best differentiates patients from controls. Results Findings indicated that the Fukuda et al. criteria had the worst sensitivity and specificity. Conclusions These outcomes may help both theorists and researchers better determine which fundamental domains should be used in the case definition. PMID:26977374

  12. Does money matter in inflation forecasting?

    NASA Astrophysics Data System (ADS)

    Binner, J. M.; Tino, P.; Tepper, J.; Anderson, R.; Jones, B.; Kendall, G.

    2010-11-01

    This paper provides the most comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two nonlinear techniques, namely recurrent neural networks and kernel recursive least squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite memory predictor. The two methodologies compete to find the best fitting US inflation forecasting models and are then compared to forecasts from a naïve random walk model. The best models were nonlinear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. Beyond its economic findings, our study is in the tradition of physicists' long-standing interest in the interconnections among statistical mechanics, neural networks, and related nonparametric statistical methods, and suggests potential avenues of extension for such studies.
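
    As a hedged sketch in the spirit of the paper's kernel approach (not its actual models or data), the code below fits a kernel ridge regression, a batch relative of kernel recursive least squares, to forecast next-period inflation from lagged inflation and lagged money growth on synthetic series.

```python
# Kernel ridge regression one-step-ahead inflation forecast on synthetic data.
# Illustrative assumptions throughout; not the paper's models or dataset.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
T = 300
money = rng.standard_normal(T).cumsum() * 0.01                      # toy money growth
infl = np.zeros(T)
for t in range(1, T):
    infl[t] = 0.7 * infl[t - 1] + 0.2 * money[t - 1] + 0.05 * rng.standard_normal()

X = np.column_stack([infl[:-1], money[:-1]])   # lagged predictors
y = infl[1:]                                   # next-period inflation

model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=1.0).fit(X[:250], y[:250])
pred = model.predict(X[250:])
print("out-of-sample RMSE:", np.sqrt(np.mean((pred - y[250:]) ** 2)).round(4))
```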

  13. Using Classification and Regression Trees (CART) and random forests to analyze attrition: Results from two simulations.

    PubMed

    Hayes, Timothy; Usami, Satoshi; Jacobucci, Ross; McArdle, John J

    2015-12-01

    In this article, we describe a recent development in the analysis of attrition: using classification and regression trees (CART) and random forest methods to generate inverse sampling weights. These flexible machine learning techniques have the potential to capture complex nonlinear, interactive selection models, yet to our knowledge, their performance in the missing data analysis context has never been evaluated. To assess the potential benefits of these methods, we compare their performance with commonly employed multiple imputation and complete case techniques in 2 simulations. These initial results suggest that weights computed from pruned CART analyses performed well in terms of both bias and efficiency when compared with other methods. We discuss the implications of these findings for applied researchers. (c) 2015 APA, all rights reserved.
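
    A hedged sketch of the weighting idea follows: a pruned classification tree predicts the probability of remaining in the study, and completers are weighted by the inverse of that probability; the covariates, tree settings and data-generating process are illustrative assumptions, not the article's simulation design.

```python
# Inverse sampling weights from a classification tree predicting retention.
# Illustrative data and settings only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
X = rng.standard_normal((n, 3))                       # baseline covariates
p_stay = 1 / (1 + np.exp(-(0.5 + X[:, 0] - 0.8 * X[:, 1] * X[:, 2])))
stayed = rng.random(n) < p_stay                       # attrition indicator

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, stayed)
p_hat = tree.predict_proba(X)[:, 1]                   # estimated retention probability

weights = 1.0 / p_hat[stayed]                         # inverse sampling weights for completers
print("weight range for completers:", weights.min().round(2), weights.max().round(2))
```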

  14. Using Classification and Regression Trees (CART) and Random Forests to Analyze Attrition: Results From Two Simulations

    PubMed Central

    Hayes, Timothy; Usami, Satoshi; Jacobucci, Ross; McArdle, John J.

    2016-01-01

    In this article, we describe a recent development in the analysis of attrition: using classification and regression trees (CART) and random forest methods to generate inverse sampling weights. These flexible machine learning techniques have the potential to capture complex nonlinear, interactive selection models, yet to our knowledge, their performance in the missing data analysis context has never been evaluated. To assess the potential benefits of these methods, we compare their performance with commonly employed multiple imputation and complete case techniques in 2 simulations. These initial results suggest that weights computed from pruned CART analyses performed well in terms of both bias and efficiency when compared with other methods. We discuss the implications of these findings for applied researchers. PMID:26389526

  15. Automatically Finding the Control Variables for Complex System Behavior

    NASA Technical Reports Server (NTRS)

    Gay, Gregory; Menzies, Tim; Davies, Misty; Gundy-Burlet, Karen

    2010-01-01

    Testing large-scale systems is expensive in terms of both time and money. Running simulations early in the process is a proven method of finding the design faults likely to lead to critical system failures, but determining the exact cause of those errors is still time-consuming and requires access to a limited number of domain experts. It is desirable to find an automated method that explores the large number of combinations and is able to isolate likely fault points. Treatment learning is a subset of minimal contrast-set learning that, rather than classifying data into distinct categories, focuses on finding the unique factors that lead to a particular classification. That is, treatment learners find the smallest change to the data that causes the largest change in the class distribution. These treatments, when imposed, are able to identify the factors most likely to cause a mission-critical failure. The goal of this research is to comparatively assess treatment learning against state-of-the-art numerical optimization techniques. To achieve this, this paper benchmarks the TAR3 and TAR4.1 treatment learners against optimization techniques across three complex systems, including two projects from the Robust Software Engineering (RSE) group within the National Aeronautics and Space Administration (NASA) Ames Research Center. The results clearly show that treatment learning is both faster and more accurate than traditional optimization methods.
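
    TAR3 and TAR4.1 are only described at a high level here; as a hedged, brute-force illustration of the contrast-set idea, the sketch below scores every single attribute=value constraint by how strongly it shifts a toy class distribution toward a target class.

```python
# Brute-force treatment-style search: score each attribute=value constraint by
# the shift it induces toward the target class. Conveys the flavour only;
# TAR3/TAR4.1 use smarter search and scoring.
from collections import Counter

rows = [  # (attributes, class) -- toy data, purely illustrative
    ({"valve": "a", "mode": "auto"}, "fail"),
    ({"valve": "a", "mode": "manual"}, "fail"),
    ({"valve": "b", "mode": "auto"}, "pass"),
    ({"valve": "b", "mode": "manual"}, "pass"),
    ({"valve": "a", "mode": "auto"}, "fail"),
    ({"valve": "b", "mode": "auto"}, "pass"),
]

base = Counter(cls for _, cls in rows)["fail"] / len(rows)
best = None
for attr in rows[0][0]:
    for value in {r[attr] for r, _ in rows}:
        subset = [cls for r, cls in rows if r[attr] == value]
        lift = Counter(subset)["fail"] / len(subset) - base
        if best is None or lift > best[0]:
            best = (lift, attr, value)

print("treatment with largest shift toward 'fail':", best)
```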

  16. Clinical performance of the near-infrared imaging system VistaCam iX Proxi for detection of approximal enamel lesions

    PubMed Central

    Jablonski-Momeni, Anahita; Jablonski, Boris; Lippe, Nikola

    2017-01-01

    Objectives/Aims: Apart from the visual detection of caries, X-rays can be taken for detection of approximal lesions. The Proxi head of VistaCam iX intraoral camera system uses near-infrared light (NIR) to enable caries detection in approximal surfaces. The aim of this study was to evaluate the performance of the NIR for the detection of approximal enamel lesions by comparison with radiographic findings. Materials and methods: One hundred ninety-three approximal surfaces from 18 patients were examined visually and using digital radiographs for presence or absence of enamel lesions. Then digital images of each surface were produced using the near-infrared light. Correlation between methods was assessed using Spearman’s rank correlation coefficient (rs). Agreement between radiographic and NIR findings was calculated using the kappa coefficient. McNemar’s test was used to analyse differences between the radiographic and NIR findings (α=0.05). Results: Moderate correlation was found between all detection methods (rs=0.33–0.50, P<0.0001). Agreement between the radiographic and NIR findings was moderate (κ=0.50, 95% CI=0.37–0.62) for the distinction between sound surfaces and enamel caries. No significant differences were found between the findings (P=0.07). Conclusion: Radiographs and NIR were found to be comparable for the detection of enamel lesions in permanent teeth. PMID:29607082

  17. [Confidence interval or p-value--similarities and differences between two important methods of statistical inference of quantitative studies].

    PubMed

    Harari, Gil

    2014-01-01

    Statistical significance, usually expressed as a p-value, and the confidence interval (CI) are common statistical measures and are essential for the statistical analysis of studies in medicine and the life sciences. These measures provide complementary information about the statistical probability and conclusions regarding the clinical significance of study findings. This article describes the two methodologies, compares them, assesses their suitability for the different needs of study results analysis, and explains the situations in which each method should be used.

  18. An evaluation of dynamic mutuality measurements and methods in cyclic time series

    NASA Astrophysics Data System (ADS)

    Xia, Xiaohua; Huang, Guitian; Duan, Na

    2010-12-01

    Several measurements and techniques have been developed to detect dynamic mutuality and synchronicity of time series in econometrics. This study compares the performance of five methods, i.e., linear regression, dynamic correlation, Markov switching models, the concordance index and recurrence quantification analysis, through numerical simulations. We evaluate the ability of these methods to capture structural change and cyclicity in time series; the findings offer guidance to both academic and empirical researchers. Illustrative examples are also provided to demonstrate the subtle differences between these techniques.
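
    As a hedged sketch of one of the five measures, the concordance index, the code below computes the fraction of time two series are in the same phase; defining the phase as being above or below each series' own mean is an illustrative assumption.

```python
# Concordance index: fraction of time two series are in the same (binary) phase.
# Phase definition (above/below the series mean) is an illustrative choice.
import numpy as np

def concordance_index(x, y):
    sx = (x > x.mean()).astype(int)        # binary phase of series x
    sy = (y > y.mean()).astype(int)        # binary phase of series y
    return np.mean(sx * sy + (1 - sx) * (1 - sy))

t = np.linspace(0, 20, 500)
x = np.sin(t)
y = np.sin(t + 0.4) + 0.2 * np.random.default_rng(0).standard_normal(500)
print("concordance:", round(concordance_index(x, y), 3))
```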

  19. Nurses' adherence to the Kangaroo Care Method: support for nursing care management

    PubMed Central

    da Silva, Laura Johanson; Leite, Josete Luzia; Scochi, Carmen Gracinda Silvan; da Silva, Leila Rangel; da Silva, Thiago Privado

    2015-01-01

    OBJECTIVE: construct an explanatory theoretical model about nurses' adherence to the Kangaroo Care Method at the Neonatal Intensive Care Unit, based on the meanings and interactions for care management. METHOD: qualitative research, based on the reference framework of the Grounded Theory. Eight nurses were interviewed at a Neonatal Intensive Care Unit in the city of Rio de Janeiro. The comparative analysis of the data comprised the phases of open, axial and selective coding. A theoretical conditional-causal model was constructed. RESULTS: four main categories emerged that composed the analytic paradigm: Giving one's best to the Kangaroo Method; Working with the complexity of the Kangaroo Method; Finding (de)motivation to apply the Kangaroo Method; and Facing the challenges for the adherence to and application of the Kangaroo Method. CONCLUSIONS: the central phenomenon revealed that each nurse and team professional has a role of multiplying values and practices that may or may not be constructive, potentially influencing the (dis)continuity of the Kangaroo Method at the Neonatal Intensive Care Unit. The findings can be used to outline management strategies that go beyond the courses and training and guarantee the strengthening of the care model. PMID:26155013

  20. Pathways for diffusion in the potential energy landscape of the network glass former SiO2

    NASA Astrophysics Data System (ADS)

    Niblett, S. P.; Biedermann, M.; Wales, D. J.; de Souza, V. K.

    2017-10-01

    We study the dynamical behaviour of a computer model for viscous silica, the archetypal strong glass former, and compare its diffusion mechanism with earlier studies of a fragile binary Lennard-Jones liquid. Three different methods of analysis are employed. First, the temperature and time scale dependence of the diffusion constant is analysed. Negative correlation of particle displacements influences transport properties in silica as well as in fragile liquids. We suggest that the difference between Arrhenius and super-Arrhenius diffusive behaviour results from competition between the correlation time scale and the caging time scale. Second, we analyse the dynamics using a geometrical definition of cage-breaking transitions that was proposed previously for fragile glass formers. We find that this definition accurately captures the bond rearrangement mechanisms that control transport in open network liquids, and reproduces the diffusion constants accurately at low temperatures. As the same method is applicable to both strong and fragile glass formers, we can compare correlation time scales in these two types of systems. We compare the time spent in chains of correlated cage breaks with the characteristic caging time and find that correlations in the fragile binary Lennard-Jones system persist for an order of magnitude longer than those in the strong silica system. We investigate the origin of the correlation behaviour by sampling the potential energy landscape for silica and comparing it with the binary Lennard-Jones model. We find no qualitative difference between the landscapes, but several metrics suggest that the landscape of the fragile liquid is rougher and more frustrated. Metabasins in silica are smaller than those in binary Lennard-Jones and contain fewer high-barrier processes. This difference probably leads to the observed separation of correlation and caging time scales.

  1. Music and the heart.

    PubMed

    Koelsch, Stefan; Jäncke, Lutz

    2015-11-21

    Music can powerfully evoke and modulate emotions and moods, along with changes in heart activity, blood pressure (BP), and breathing. Although there is great heterogeneity in methods and quality among previous studies on effects of music on the heart, the following findings emerge from the literature: Heart rate (HR) and respiratory rate (RR) are higher in response to exciting music compared with tranquilizing music. During musical frissons (involving shivers and piloerection), both HR and RR increase. Moreover, HR and RR tend to increase in response to music compared with silence, and HR appears to decrease in response to unpleasant music compared with pleasant music. We found no studies that would provide evidence for entrainment of HR to musical beats. Corresponding to the increase in HR, listening to exciting music (compared with tranquilizing music) is associated with a reduction of heart rate variability (HRV), including reductions of both low-frequency and high-frequency power of the HRV. Recent findings also suggest effects of music-evoked emotions on regional activity of the heart, as reflected in electrocardiogram amplitude patterns. In patients with heart disease (similar to other patient groups), music can reduce pain and anxiety, associated with lower HR and lower BP. In general, effects of music on the heart are small, and there is great inhomogeneity among studies with regard to methods, findings, and quality. Therefore, there is urgent need for systematic high-quality research on the effects of music on the heart, and on the beneficial effects of music in clinical settings. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.

  2. Mental Health Aspects of Autistic Spectrum Disorders in Children

    ERIC Educational Resources Information Center

    Skokauskas, N.; Gallagher, L.

    2012-01-01

    Background: Previous studies have reported variable and at times opposite findings on comorbid psychiatric problems in children with autistic spectrum disorders (ASD). Aims: This study aimed to examine patterns of comorbid psychiatric problems in children with ASD and their parents compared with IQ matched controls and their parents. Methods:…

  3. Learning Strategies for Police Organization--Modeling Organizational Learning Perquisites.

    ERIC Educational Resources Information Center

    Luoma, Markku; Nokelainen, Petri; Ruohotie, Pekka

    The factors contributing to organizational learning in police units in Finland and elsewhere were examined to find strategies to improve the prerequisites of learning and compare linear and nonlinear methods of modeling organizational learning prerequisites. A questionnaire was used to collect data from the 281 staff members of five police…

  4. Academic Attainment Findings in Children with Sickle Cell Disease

    ERIC Educational Resources Information Center

    Epping, Amanda S.; Myrvik, Matthew P.; Newby, Robert F.; Panepinto, Julie A.; Brandow, Amanda M.; Scott, J. Paul

    2013-01-01

    Background: Children with sickle cell disease (SCD) demonstrate deficits in cognitive and academic functioning. This study compared the academic attainment of children with SCD relative to national, state, and local school district rates for African American students. Methods: A retrospective chart review of children with SCD was completed and…

  5. Korean Student's Online Learning Preferences and Issues: Cultural Sensitivity for Western Course Designers

    ERIC Educational Resources Information Center

    Washburn, Earlene

    2012-01-01

    Scope and Method of Study: While online courses offer educational solutions, they are not academically suited for everyone. International students find distractions in online courses constructed with American philosophy, epistemology, values, and cultures as compared to experiences in their home country. Learner's culture, value system, learning…

  6. Variation in Children's Understanding of Fractions: Preliminary Findings

    ERIC Educational Resources Information Center

    Fonger, Nicole L.; Tran, Dung; Elliott, Natasha

    2015-01-01

    This research targets children's informal strategies and knowledge of fractions by examining their ability to create, interpret, and connect representations in doing and communicating mathematics when solving fractions tasks. Our research group followed a constant comparative method to analyze clinical interviews of children in grades 2-6 solving…

  7. Downsizings, Mergers, and Acquisitions: Perspectives of Human Resource Development Practitioners

    ERIC Educational Resources Information Center

    Shook, LaVerne; Roth, Gene

    2011-01-01

    Purpose: This paper seeks to provide perspectives of HR practitioners based on their experiences with mergers, acquisitions, and/or downsizings. Design/methodology/approach: This qualitative study utilized interviews with 13 HR practitioners. Data were analyzed using a constant comparative method. Findings: HR practitioners were not involved in…

  8. Leaving College Prematurely: The Experiences of Nontraditional-Age College Students With Depression

    ERIC Educational Resources Information Center

    Thompson-Ebanks, Valerie

    2017-01-01

    This qualitative study examines the experiences of former nontraditional-age students with depression and reasons that led them to leave college prematurely. Constant comparative methods were used to illuminate themes within and across participants' stories. The findings showcase eight complex interlocking factors that these former students…

  9. Interdisciplines and Interdisciplinarity: Political Psychology and Psychohistory Compared

    ERIC Educational Resources Information Center

    Fuchsman, Ken

    2012-01-01

    Interdisciplines are specialties that connect ideas, methods, and findings from existing disciplines. Political psychology and psychohistory are interdisciplines which should have much in common, but even where they clearly intersect, their approaches usually diverge. Part of the reason for their dissimilarity lies in what each takes and rejects…

  10. A Comparative Study of Microscopic Images Captured by a Box Type Digital Camera Versus a Standard Microscopic Photography Camera Unit

    PubMed Central

    Desai, Nandini J.; Gupta, B. D.; Patel, Pratik Narendrabhai

    2014-01-01

    Introduction: Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching. They can be transferred among technologically advanced hospitals for further consultation and evaluation. However, a standard microscopic photography camera unit (MPCU) (MIPS - Microscopic Image Projection System) is costly and not available in resource-poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. Materials and Methods: We used a NIKON Coolpix S6150 camera (box type digital camera) with an Olympus CH20i microscope and a fluorescent microscope for the purpose of this study. Results: We got comparable results for capturing images of light microscopy, but the results were not as satisfactory for fluorescent microscopy. Conclusion: A box type digital camera is a comparable, less expensive and convenient alternative to a microscopic photography camera unit. PMID:25478350

  11. Validation sampling can reduce bias in healthcare database studies: an illustration using influenza vaccination effectiveness

    PubMed Central

    Nelson, Jennifer C.; Marsh, Tracey; Lumley, Thomas; Larson, Eric B.; Jackson, Lisa A.; Jackson, Michael

    2014-01-01

    Objective Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased due to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. Study Design and Setting We applied two such methods, imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method’s ability to reduce bias using the control time period prior to influenza circulation. Results Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not utilize the validation sample confounders. Conclusion Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from healthcare database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which data can be imputed or reweighted using the additional validation sample information. PMID:23849144
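
    A hedged sketch of the imputation approach follows: a detailed confounder observed only in the validation sample is modeled from variables available in both samples and then imputed for the full sample; the variable names, linear model and simulated data are assumptions, not the Group Health / Adult Changes in Thought analysis.

```python
# Impute a confounder observed only in a validation subsample, then use the
# imputed values in the full-sample analysis. Illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_full, n_valid = 5000, 800
age = rng.normal(75, 6, n_full)
comorbidity = rng.poisson(2.0, n_full).astype(float)
frailty = 0.05 * age + 0.4 * comorbidity + rng.normal(0, 1, n_full)  # unobserved in full sample

valid_idx = rng.choice(n_full, n_valid, replace=False)   # validation subsample
X_valid = np.column_stack([age[valid_idx], comorbidity[valid_idx]])
model = LinearRegression().fit(X_valid, frailty[valid_idx])

X_full = np.column_stack([age, comorbidity])
frailty_imputed = model.predict(X_full)    # used as a confounder in the full-sample analysis
print("imputation R^2 on validation sample:", round(model.score(X_valid, frailty[valid_idx]), 3))
```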

  12. Spline-based procedures for dose-finding studies with active control

    PubMed Central

    Helms, Hans-Joachim; Benda, Norbert; Zinserling, Jörg; Kneib, Thomas; Friede, Tim

    2015-01-01

    In a dose-finding study with an active control, several doses of a new drug are compared with an established drug (the so-called active control). One goal of such studies is to characterize the dose–response relationship and to find the smallest target dose concentration d*, which leads to the same efficacy as the active control. For this purpose, the intersection point of the mean dose–response function with the expected efficacy of the active control has to be estimated. The focus of this paper is a cubic spline-based method for deriving an estimator of the target dose without assuming a specific dose–response function. Furthermore, the construction of a spline-based bootstrap CI is described. Estimator and CI are compared with other flexible and parametric methods such as linear spline interpolation as well as maximum likelihood regression in simulation studies motivated by a real clinical trial. Also, design considerations for the cubic spline approach with focus on bias minimization are presented. Although the spline-based point estimator can be biased, designs can be chosen to minimize and reasonably limit the maximum absolute bias. Furthermore, the coverage probability of the cubic spline approach is satisfactory, especially for bias minimal designs. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:25319931
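
    A hedged sketch of the core estimation step follows: fit a cubic spline through the mean response at each dose and solve for the dose at which the fitted curve crosses the active control's mean efficacy; the doses, means and monotone shape are illustrative, not data from the paper, and the bootstrap CI construction is omitted.

```python
# Spline-based target dose estimation: intersect a cubic spline through the
# dose-response means with the active control's mean efficacy. Toy numbers only.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

doses = np.array([0.0, 10.0, 25.0, 50.0, 100.0])
mean_response = np.array([0.05, 0.18, 0.29, 0.38, 0.44])   # toy dose-response means
control_mean = 0.33                                        # active-control efficacy

spline = CubicSpline(doses, mean_response)
target_dose = brentq(lambda d: spline(d) - control_mean, doses[0], doses[-1])
print("estimated target dose d*:", round(target_dose, 1))
```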

  13. STEME: A Robust, Accurate Motif Finder for Large Data Sets

    PubMed Central

    Reid, John E.; Wernisch, Lorenz

    2014-01-01

    Motif finding is a difficult problem that has been studied for over 20 years. Some older popular motif finders are not suitable for analysis of the large data sets generated by next-generation sequencing. We recently published an efficient approximation (STEME) to the EM algorithm that is at the core of many motif finders such as MEME. This approximation allows the EM algorithm to be applied to large data sets. In this work we describe several efficient extensions to STEME that are based on the MEME algorithm. Together with the original STEME EM approximation, these extensions make STEME a fully-fledged motif finder with similar properties to MEME. We discuss the difficulty of objectively comparing motif finders. We show that STEME performs comparably to existing prominent discriminative motif finders, DREME and Trawler, on 13 sets of transcription factor binding data in mouse ES cells. We demonstrate the ability of STEME to find long degenerate motifs which these discriminative motif finders do not find. As part of our method, we extend an earlier method due to Nagarajan et al. for the efficient calculation of motif E-values. STEME's source code is available under an open source license and STEME is available via a web interface. PMID:24625410

  14. Simultaneous gene finding in multiple genomes.

    PubMed

    König, Stefanie; Romoth, Lars W; Gerischer, Lizzy; Stanke, Mario

    2016-11-15

    As the tree of life is populated with sequenced genomes ever more densely, the new challenge is the accurate and consistent annotation of entire clades of genomes. We address this problem with a new approach to comparative gene finding that takes a multiple genome alignment of closely related species and simultaneously predicts the location and structure of protein-coding genes in all input genomes, thereby exploiting negative selection and sequence conservation. The model prefers potential gene structures in the different genomes that are in agreement with each other, or, if not, where the exon gains and losses are plausible given the species tree. We formulate the multi-species gene finding problem as a binary labeling problem on a graph. The resulting optimization problem is NP hard, but can be efficiently approximated using a subgradient-based dual decomposition approach. The proposed method was tested on whole-genome alignments of 12 vertebrate and 12 Drosophila species. The accuracy was evaluated for human, mouse and Drosophila melanogaster and compared to competing methods. Results suggest that our method is well-suited for annotation of (a large number of) genomes of closely related species within a clade, in particular, when RNA-Seq data are available for many of the genomes. The transfer of existing annotations from one genome to another via the genome alignment is more accurate than previous approaches that are based on protein-spliced alignments, when the genomes are at close to medium distances. The method is implemented in C++ as part of Augustus and available open source at http://bioinf.uni-greifswald.de/augustus/. Contact: stefaniekoenig@ymail.com or mario.stanke@uni-greifswald.de. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  15. IMPROVED PERFORMANCES IN SUBSONIC FLOWS OF AN SPH SCHEME WITH GRADIENTS ESTIMATED USING AN INTEGRAL APPROACH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valdarnini, R., E-mail: valda@sissa.it

    In this paper, we present results from a series of hydrodynamical tests aimed at validating the performance of a smoothed particle hydrodynamics (SPH) formulation in which gradients are derived from an integral approach. We specifically investigate the code behavior with subsonic flows, where it is well known that zeroth-order inconsistencies present in standard SPH make it particularly problematic to correctly model the fluid dynamics. In particular, we consider the Gresho–Chan vortex problem, the growth of Kelvin–Helmholtz instabilities, the statistics of driven subsonic turbulence and the cold Keplerian disk problem. We compare simulation results for the different tests with those obtained, for the same initial conditions, using standard SPH. We also compare the results with the corresponding ones obtained previously with other numerical methods, such as codes based on a moving-mesh scheme or Godunov-type Lagrangian meshless methods. We quantify code performances by introducing error norms and spectral properties of the particle distribution, in a way similar to what was done in other works. We find that the new SPH formulation exhibits strongly reduced gradient errors and outperforms standard SPH in all of the tests considered. In fact, in terms of accuracy, we find good agreement between the simulation results of the new scheme and those produced using other recently proposed numerical schemes. These findings suggest that the proposed method can be successfully applied for many astrophysical problems in which the presence of subsonic flows previously limited the use of SPH, with the new scheme now being competitive in these regimes with other numerical methods.

  16. Current results with slow freezing and vitrification of the human oocyte.

    PubMed

    Boldt, Jeffrey

    2011-09-01

    The past decade has witnessed renewed interest in human oocyte cryopreservation (OCP). This article reviews the two general methods used for OCP, slow freezing and vitrification, compares the outcomes associated with each technique and discusses the factors that might influence success with OCP (such as oocyte selection or day of transfer). Based on available data, OCP offers a reliable, reproducible method for preservation of the female gamete and will find increasing application in assisted reproductive technology. Oocyte cryopreservation can provide a number of advantages to couples undergoing assisted reproduction or to women interested in fertility preservation. Two methods, slow freezing and vitrification, have been used successfully for oocyte cryopreservation. This article reviews and compares these methods, and discusses various factors that can impact upon success of oocyte cryopreservation. Copyright © 2011 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  17. A comparison of heuristic and model-based clustering methods for dietary pattern analysis.

    PubMed

    Greve, Benjamin; Pigeot, Iris; Huybrechts, Inge; Pala, Valeria; Börnhorst, Claudia

    2016-02-01

    Cluster analysis is widely applied to identify dietary patterns. A new method based on Gaussian mixture models (GMM) seems to be more flexible compared with the commonly applied k-means and Ward's method. In the present paper, these clustering approaches are compared to find the most appropriate one for clustering dietary data. The clustering methods were applied to simulated data sets with different cluster structures to compare their performance knowing the true cluster membership of observations. Furthermore, the three methods were applied to FFQ data assessed in 1791 children participating in the IDEFICS (Identification and Prevention of Dietary- and Lifestyle-Induced Health Effects in Children and Infants) Study to explore their performance in practice. The GMM outperformed the other methods in the simulation study in 72 % up to 100 % of cases, depending on the simulated cluster structure. Comparing the computationally less complex k-means and Ward's methods, the performance of k-means was better in 64-100 % of cases. Applied to real data, all methods identified three similar dietary patterns which may be roughly characterized as a 'non-processed' cluster with a high consumption of fruits, vegetables and wholemeal bread, a 'balanced' cluster with only slight preferences of single foods and a 'junk food' cluster. The simulation study suggests that clustering via GMM should be preferred due to its higher flexibility regarding cluster volume, shape and orientation. The k-means seems to be a good alternative, being easier to use while giving similar results when applied to real data.
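
    A hedged sketch of the kind of simulation comparison described is shown below: known labels from elongated (non-spherical) Gaussian clusters are recovered with a Gaussian mixture model and with k-means, and both are scored by the adjusted Rand index; the cluster geometry is an assumption, not the paper's simulation design.

```python
# Compare GMM and k-means on elongated simulated clusters via the adjusted
# Rand index. Illustrative setup, not the paper's simulation design.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
cov = np.array([[4.0, 3.5], [3.5, 4.0]])                 # elongated clusters
X = np.vstack([rng.multivariate_normal(m, cov, 300) for m in [(0, 0), (6, 0), (3, 6)]])
truth = np.repeat([0, 1, 2], 300)

labels_gmm = GaussianMixture(n_components=3, random_state=0).fit_predict(X)
labels_km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

print("GMM ARI:    ", round(adjusted_rand_score(truth, labels_gmm), 3))
print("k-means ARI:", round(adjusted_rand_score(truth, labels_km), 3))
```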

  18. Comparison of haemoglobin estimates using direct & indirect cyanmethaemoglobin methods.

    PubMed

    Bansal, Priyanka Gupta; Toteja, Gurudayal Singh; Bhatia, Neena; Gupta, Sanjeev; Kaur, Manpreet; Adhikari, Tulsi; Garg, Ashok Kumar

    2016-10-01

    Estimation of haemoglobin is the most widely used method to assess anaemia. Although the direct cyanmethaemoglobin method is the recommended method for estimation of haemoglobin, it may not be feasible under field conditions. Hence, the present study was undertaken to compare the indirect cyanmethaemoglobin method against the conventional direct method for haemoglobin estimation. Haemoglobin levels were estimated for 888 adolescent girls aged 11-18 yr residing in an urban slum in Delhi by both the direct and indirect cyanmethaemoglobin methods, and the results were compared. The mean haemoglobin levels for 888 whole blood samples estimated by the direct and indirect cyanmethaemoglobin methods were 116.1 ± 12.7 and 110.5 ± 12.5 g/l, respectively, with a mean difference of 5.67 g/l (95% confidence interval: 5.45 to 5.90, P<0.001), which is equivalent to 0.567 g%. The prevalence of anaemia was 59.6 and 78.2 per cent by the direct and indirect methods, respectively. The sensitivity and specificity of the indirect cyanmethaemoglobin method were 99.2 and 56.4 per cent, respectively. Using regression analysis, a prediction equation was developed for indirect haemoglobin values. The present findings revealed that the indirect cyanmethaemoglobin method overestimated the prevalence of anaemia as compared with the direct method. However, if a correction factor is applied, the indirect method could be successfully used for estimating the true haemoglobin level. More studies should be undertaken to establish the agreement and correction factor between the direct and indirect cyanmethaemoglobin methods.
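
    A hedged sketch of deriving such a prediction (correction) equation follows: direct-method haemoglobin is regressed on indirect-method readings, and the fitted line is used to correct new indirect values; the data are synthetic and the study's actual equation is not reproduced.

```python
# Linear regression "correction equation" mapping indirect readings to
# direct-method haemoglobin values. Synthetic data only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
direct = rng.normal(116, 12, 500)                       # g/l, synthetic
indirect = direct - 5.7 + rng.normal(0, 3, 500)         # systematically lower readings

model = LinearRegression().fit(indirect.reshape(-1, 1), direct)
print(f"corrected Hb = {model.intercept_:.1f} + {model.coef_[0]:.3f} * indirect")
print("corrected value for an indirect reading of 110 g/l:",
      round(model.predict([[110.0]])[0], 1))
```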

  19. Search methods that people use to find owners of lost pets.

    PubMed

    Lord, Linda K; Wittum, Thomas E; Ferketich, Amy K; Funk, Julie A; Rajala-Schultz, Päivi J

    2007-06-15

    To characterize the process by which people who find lost pets search for the owners. Cross-sectional study. Sample Population: 188 individuals who found a lost pet in Dayton, Ohio, between March 1 and June 30, 2006. Procedures: Potential participants were identified as a result of contact with a local animal agency or placement of an advertisement in the local newspaper. A telephone survey was conducted to identify methods participants used to find the pets' owners. 156 of 188 (83%) individuals completed the survey. Fifty-nine of the 156 (38%) pets were reunited with their owners; median time to reunification was 2 days (range, 0.5 to 45 days). Only 1 (3%) cat owner was found, compared with 58 (46%) dog owners. Pet owners were found as a result of information provided by an animal agency (25%), placement of a newspaper advertisement (24%), walking the neighborhood (19%), signs in the neighborhood (15%), information on a pet tag (10%), and other methods (7%). Most finders (87%) considered it extremely important to find the owner, yet only 13 (8%) initially surrendered the found pet to an animal agency. The primary reason people did not surrender found pets was fear of euthanasia (57%). Only 97 (62%) individuals were aware they could run a found-pet advertisement in the newspaper at no charge, and only 1 person who was unaware of the no-charge policy placed an advertisement. Veterinarians and shelters can help educate people who find lost pets about methods to search for the pets' owners.

  20. Information filtering via a scaling-based function.

    PubMed

    Qiu, Tian; Zhang, Zi-Ke; Chen, Guang

    2013-01-01

    Finding a universal description of algorithm optimization is one of the key challenges in personalized recommendation. In this article, we introduce, for the first time, a scaling-based algorithm (SCL) that is independent of the recommendation list length and is built on a hybrid algorithm of heat conduction and mass diffusion, by finding the scaling function relating the tunable parameter to the average object degree. The optimal value of the tunable parameter can be read off from the scaling function and is heterogeneous across individual objects. Experimental results obtained from three real datasets, Netflix, MovieLens and RYM, show that SCL is highly accurate in recommendation. More importantly, compared with a number of excellent algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm markedly improves personalized recommendation in three other respects: it mitigates the accuracy-diversity dilemma, achieves high novelty, and addresses the key challenge of the cold start problem.
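
    A hedged sketch of the underlying hybrid of heat conduction and mass diffusion on a user-object bipartite network is given below, with a single tunable parameter lam; the SCL scaling function that sets the parameter per object is not reproduced.

```python
# Hybrid heat-conduction / mass-diffusion recommendation scores on a toy
# user-object bipartite network. The SCL scaling function is not reproduced.
import numpy as np

def hybrid_scores(A, user, lam=0.5):
    """A: users x objects 0/1 rating matrix; returns scores for one user."""
    k_obj = A.sum(axis=0).astype(float)        # object degrees
    k_usr = A.sum(axis=1).astype(float)        # user degrees
    # W[a, b]: resource flowing from object b to object a, sum_i a_ia * a_ib / k_i
    W = (A / k_usr[:, None]).T @ A
    W = W / (k_obj[:, None] ** (1 - lam) * k_obj[None, :] ** lam)
    scores = W @ A[user]
    scores[A[user] > 0] = -np.inf              # do not re-recommend owned objects
    return scores

A = np.array([[1, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [1, 1, 0, 1, 0]], dtype=float)
print(hybrid_scores(A, user=0, lam=0.5).round(3))
```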

  1. An Analysis of Polynomial Chaos Approximations for Modeling Single-Fluid-Phase Flow in Porous Medium Systems

    PubMed Central

    Rupert, C.P.; Miller, C.T.

    2008-01-01

    We examine a variety of polynomial-chaos-motivated approximations to a stochastic form of a steady state groundwater flow model. We consider approaches for truncating the infinite dimensional problem and producing decoupled systems. We discuss conditions under which such decoupling is possible and show that to generalize the known decoupling by numerical cubature, it would be necessary to find new multivariate cubature rules. Finally, we use the acceleration of Monte Carlo to compare the quality of polynomial models obtained for all approaches and find that in general the methods considered are more efficient than Monte Carlo for the relatively small domains considered in this work. A curse of dimensionality in the series expansion of the log-normal stochastic random field used to represent hydraulic conductivity provides a significant impediment to efficient approximations for large domains for all methods considered in this work, other than the Monte Carlo method. PMID:18836519

  2. Identifying multiple influential spreaders based on generalized closeness centrality

    NASA Astrophysics Data System (ADS)

    Liu, Huan-Li; Ma, Chuang; Xiang, Bing-Bing; Tang, Ming; Zhang, Hai-Feng

    2018-02-01

    To maximize the spreading influence of multiple spreaders in complex networks, one important fact cannot be ignored: the multiple spreaders should be dispersed throughout the network, which effectively reduces the redundancy of information spreading. For this purpose, we define a generalized closeness centrality (GCC) index by generalizing the closeness centrality index to a set of nodes. The problem then becomes one of identifying multiple spreaders such that an objective function attains its minimal value. By comparing with the K-means clustering algorithm, we find that this optimization problem is very similar to minimizing the objective function in the K-means method. Therefore, finding the multiple nodes with the highest GCC value can be approximately solved by the K-means method. Two typical transmission dynamics, an epidemic spreading process and a rumor spreading process, are implemented in real networks to verify the good performance of our proposed method.
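
    A hedged sketch in the spirit of the paper follows: nodes are clustered by K-means on their shortest-path distance vectors and, within each cluster, the node with the smallest average distance to its own cluster is chosen as a spreader; this approximates rather than reproduces the GCC optimization.

```python
# Approximate selection of k dispersed spreaders: K-means on shortest-path
# distance vectors, then the most central node of each cluster. Illustrative only.
import networkx as nx
import numpy as np
from sklearn.cluster import KMeans

G = nx.karate_club_graph()
nodes = list(G.nodes())
D = np.array([[nx.shortest_path_length(G, u, v) for v in nodes] for u in nodes])

k = 3
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(D)

spreaders = []
for c in range(k):
    members = np.where(labels == c)[0]
    sub = D[np.ix_(members, members)]
    spreaders.append(nodes[members[sub.mean(axis=1).argmin()]])
print("selected spreaders:", spreaders)
```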

  3. Does Lymphocytic Colitis Always Present with Normal Endoscopic Findings?

    PubMed Central

    Park, Hye Sun; Han, Dong Soo; Ro, Youngouk; Eun, Chang Soo; Yoo, Kyo-Sang

    2015-01-01

    Background/Aims Although normal endoscopic findings are, as a rule, part of the diagnosis of microscopic colitis, several cases of macroscopic lesions (MLs) have been reported in collagenous colitis, but hardly in lymphocytic colitis (LC). The aim of this study was to investigate the endoscopic, clinical, and histopathologic features of LC with MLs. Methods A total of 14 patients with LC who were diagnosed between 2005 and 2010 were enrolled in the study. Endoscopic, clinical, and histopathologic findings were compared retrospectively according to the presence or absence of MLs. Results MLs were observed in seven of the 14 LC cases. Six of the MLs exhibited hypervascularity, three exhibited exudative bleeding and one exhibited edema. The patients with MLs had more severe diarrhea and were taking aspirin or proton pump inhibitors. More intraepithelial lymphocytes were observed during histologic examination in the patients with MLs compared to the patients without MLs, although this difference was not significant. The numbers of mononuclear cells and neutrophils in the lamina propria were independent of the presence or absence of MLs. Conclusions LC does not always present with normal endoscopic findings. Hypervascularity and exudative bleeding are frequent endoscopic findings in patients with MLs. PMID:25167800

  4. An improved set of standards for finding cost for cost-effectiveness analysis.

    PubMed

    Barnett, Paul G

    2009-07-01

    Guidelines have helped standardize methods of cost-effectiveness analysis, allowing different interventions to be compared and enhancing the generalizability of study findings. There is agreement that all relevant services be valued from the societal perspective using a long-term time horizon and that more exact methods be used to cost services most affected by the study intervention. Guidelines are not specific enough with respect to costing methods, however. The literature was reviewed to identify the problems associated with the 4 principal methods of cost determination. Microcosting requires direct measurement and is ordinarily reserved to cost novel interventions. Analysts should include nonwage labor cost, person-level and institutional overhead, and the cost of development, set-up activities, supplies, space, and screening. Activity-based cost systems have promise of finding accurate costs of all services provided, but are not widely adopted. Quality must be evaluated and the generalizability of cost estimates to other settings must be considered. Administrative cost estimates, chiefly cost-adjusted charges, are widely used, but the analyst must consider items excluded from the available system. Gross costing methods determine quantity of services used and employ a unit cost. If the intervention will affect the characteristics of a service, the method should not assume that the service is homogeneous. Questions are posed for future reviews of the quality of costing methods. The analyst must avoid inappropriate assumptions, especially those that bias the analysis by exclusion of costs that are affected by the intervention under study.

  5. Method and apparatus for biological sequence comparison

    DOEpatents

    Marr, T.G.; Chang, W.I.

    1997-12-23

    A method and apparatus are disclosed for comparing biological sequences from a known source of sequences, with a subject (query) sequence. The apparatus takes as input a set of target similarity levels (such as evolutionary distances in units of PAM), and finds all fragments of known sequences that are similar to the subject sequence at each target similarity level, and are long enough to be statistically significant. The invention device filters out fragments from the known sequences that are too short, or have a lower average similarity to the subject sequence than is required by each target similarity level. The subject sequence is then compared only to the remaining known sequences to find the best matches. The filtering member divides the subject sequence into overlapping blocks, each block being sufficiently large to contain a minimum-length alignment from a known sequence. For each block, the filter member compares the block with every possible short fragment in the known sequences and determines a best match for each comparison. The determined set of short fragment best matches for the block provide an upper threshold on alignment values. Regions of a certain length from the known sequences that have a mean alignment value upper threshold greater than a target unit score are concatenated to form a union. The current block is compared to the union and provides an indication of best local alignment with the subject sequence. 5 figs.

  6. Method and apparatus for biological sequence comparison

    DOEpatents

    Marr, Thomas G.; Chang, William I-Wei

    1997-01-01

    A method and apparatus for comparing biological sequences from a known source of sequences, with a subject (query) sequence. The apparatus takes as input a set of target similarity levels (such as evolutionary distances in units of PAM), and finds all fragments of known sequences that are similar to the subject sequence at each target similarity level, and are long enough to be statistically significant. The invention device filters out fragments from the known sequences that are too short, or have a lower average similarity to the subject sequence than is required by each target similarity level. The subject sequence is then compared only to the remaining known sequences to find the best matches. The filtering member divides the subject sequence into overlapping blocks, each block being sufficiently large to contain a minimum-length alignment from a known sequence. For each block, the filter member compares the block with every possible short fragment in the known sequences and determines a best match for each comparison. The determined set of short fragment best matches for the block provide an upper threshold on alignment values. Regions of a certain length from the known sequences that have a mean alignment value upper threshold greater than a target unit score are concatenated to form a union. The current block is compared to the union and provides an indication of best local alignment with the subject sequence.

  7. Hand hygiene among healthcare workers: A qualitative meta summary using the GRADE-CERQual process

    PubMed Central

    Chatfield, Sheryl L.; DeBois, Kristen; Nolan, Rachael; Crawford, Hannah; Hallam, Jeffrey S.

    2016-01-01

    Background: Hand hygiene is considered an effective and potentially modifiable infection control behaviour among healthcare workers (HCW). Several meta-studies have been published that compare quantitatively expressed findings, but limited efforts have been made to synthesise qualitative research. Objectives: This paper provides the first report of integrated findings from qualitative research reports on hand hygiene compliance among HCW worldwide that employs the GRADE-CERQual process of quality assessment. Methods: We conducted database searches and identified 36 reports in which authors conducted qualitative or mixed methods research on hand hygiene compliance among HCW. We used Dedoose analysis software to facilitate extraction of relevant excerpts. We applied the GRADE-CERQual process to describe relative confidence as high, moderate or low for nine aggregate findings. Findings: Highest confidence findings included that HCW believe they have access to adequate training, and that management and resource support are sometimes lacking. Individual, subjective criteria also influence hand hygiene. Discussion: These results suggest the need for further investigation into healthcare cultures that are perceived as supportive for infection control. Surveillance processes have potential, especially if information is perceived by HCW as timely and relevant. PMID:28989515

  8. Incorporating Functional Annotations for Fine-Mapping Causal Variants in a Bayesian Framework Using Summary Statistics.

    PubMed

    Chen, Wenan; McDonnell, Shannon K; Thibodeau, Stephen N; Tillmans, Lori S; Schaid, Daniel J

    2016-11-01

    Functional annotations have been shown to improve both the discovery power and fine-mapping accuracy in genome-wide association studies. However, the optimal strategy to incorporate the large number of existing annotations is still not clear. In this study, we propose a Bayesian framework to incorporate functional annotations in a systematic manner. We compute the maximum a posteriori solution and use cross validation to find the optimal penalty parameters. By extending our previous fine-mapping method CAVIARBF into this framework, we require only summary statistics as input. We also derived an exact calculation of Bayes factors using summary statistics for quantitative traits, which is necessary when a large proportion of trait variance is explained by the variants of interest, such as in fine mapping expression quantitative trait loci (eQTL). We compared the proposed method with PAINTOR using different strategies to combine annotations. Simulation results show that the proposed method achieves the best accuracy in identifying causal variants among the different strategies and methods compared. We also find that for annotations with moderate effects from a large annotation pool, screening annotations individually and then combining the top annotations can produce overly optimistic results. We applied these methods on two real data sets: a meta-analysis result of lipid traits and a cis-eQTL study of normal prostate tissues. For the eQTL data, incorporating annotations significantly increased the number of potential causal variants with high probabilities. Copyright © 2016 by the Genetics Society of America.
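
    Not the CAVIARBF model itself, but a hedged sketch of the basic ingredient such fine-mapping methods build on: an approximate Bayes factor per variant computed from summary statistics (effect estimate and standard error) with a normal prior on the effect, converted to posterior probabilities under a simplifying single-causal-variant assumption. The prior variance and the toy summary statistics are assumptions; functional annotations would enter by making each variant's prior probability annotation-dependent rather than uniform.

    ```python
    import numpy as np

    def approx_log_bf(beta_hat, se, prior_var=0.2 ** 2):
        """Wakefield-style log Bayes factor (alternative vs. null) from summary statistics."""
        V = se ** 2
        z2 = (beta_hat / se) ** 2
        return 0.5 * np.log(V / (V + prior_var)) + 0.5 * z2 * prior_var / (V + prior_var)

    # Toy summary statistics for 5 variants in a region (assumed values).
    beta_hat = np.array([0.02, 0.15, 0.01, 0.12, -0.03])
    se = np.array([0.03, 0.03, 0.03, 0.03, 0.03])

    log_bf = approx_log_bf(beta_hat, se)

    # Single-causal-variant posterior: the prior could be annotation-informed; here it is uniform.
    prior = np.full(len(beta_hat), 1.0 / len(beta_hat))
    log_post = log_bf + np.log(prior)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    print(np.round(post, 3))   # posterior probability that each variant is the causal one
    ```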

  9. Flow and Turbulence Modeling and Computation of Shock Buffet Onset for Conventional and Supercritical Airfoils

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    1998-01-01

    Flow and turbulence models applied to the problem of shock buffet onset are studied. The accuracy of the interactive boundary layer and the thin-layer Navier-Stokes equations solved with recent upwind techniques using similar transport field equation turbulence models is assessed for standard steady test cases, including conditions having significant shock separation. The two methods are found to compare well in the shock buffet onset region of a supercritical airfoil that involves strong trailing-edge separation. A computational analysis using the interactive-boundary layer has revealed a Reynolds scaling effect in the shock buffet onset of the supercritical airfoil, which compares well with experiment. The methods are next applied to a conventional airfoil. Steady shock-separated computations of the conventional airfoil with the two methods compare well with experiment. Although the interactive boundary layer computations in the shock buffet region compare well with experiment for the conventional airfoil, the thin-layer Navier-Stokes computations do not. These findings are discussed in connection with possible mechanisms important in the onset of shock buffet and the constraints imposed by current numerical modeling techniques.

  10. Mindfulness and headache: A "new" old treatment, with new findings.

    PubMed

    Andrasik, Frank; Grazzi, Licia; D'Amico, Domenico; Sansone, Emanuela; Leonardi, Matilde; Raggi, Alberto; Salgado-García, Francisco

    2016-10-01

    Background Mindfulness refers to a host of procedures that have been practiced for centuries, but only recently have begun to be applied to varied pain conditions, with the most recent being headache. Methods We reviewed research that incorporated components of mindfulness for treating pain, with a more in depth focus on headache disorders. We also examined literature that has closely studied potential physiological processes in the brain that might mediate the effects of mindfulness. We report as well preliminary findings of our ongoing trial comparing mindfulness alone to pharmacological treatment alone for treating chronic migraine accompanied by medication overuse. Results Although research remains in its infancy, the initial findings support the utility of varied mindfulness approaches for enhancing usual care for headache management. Our preliminary findings suggest mindfulness by itself may produce effects comparable to that of medication alone for patients with chronic migraine and medication overuse. Conclusions Much work remains to more fully document the role and long term value of mindfulness for specific headache types. Areas in need of further investigation are discussed.

  11. Teledermatology in the United States: An Update in a Dynamic Era.

    PubMed

    Yim, Kaitlyn M; Florek, Aleksandra G; Oh, Dennis H; McKoy, Karen; Armstrong, April W

    2018-01-22

    Teledermatology is rapidly advancing in the United States. The last comprehensive survey of U.S. teledermatology programs was conducted in 2011. This article provides an update regarding the state of teledermatology programs in the United States. Active programs were identified and surveyed from November 2014 to January 2017. Findings regarding practice settings, consult volumes, payment methods, and delivery modalities were compared to those from the 2011 survey. Findings from the Veterans Affairs (VA) were reported as an aggregate. There were 40 active nongovernmental programs, amounting to a 48% increase and 30% discontinuation rate over five years. Academia remained the most common practice setting (50%). Median annual consultation volume was comparable with 263 consultations, but maximum annual consultation volume increased (range: 20-20,000). The most frequent payment method was self-pay (53%). Store-and-forward continued to be the most common delivery modality. In Fiscal Year 2016, the VA System consisted of 62 consultation sites and performed a total of 101,507 consultations. The limitations of this study were that consult volume and payment methods were not available from all programs. U.S. teledermatology programs have increased in number and annual consultation volume. Academia is the most prevalent practice setting, and self-pay is the dominant accepted payment method. Innovative platforms and the provision of direct-to-patient care are changing the practice of teledermatology.

  12. Generalizing Observational Study Results: Applying Propensity Score Methods to Complex Surveys

    PubMed Central

    DuGoff, Eva H; Schuler, Megan; Stuart, Elizabeth A

    2014-01-01

    Objective: To provide a tutorial for using propensity score methods with complex survey data. Data Sources: Simulated data and the 2008 Medical Expenditure Panel Survey. Study Design: Using simulation, we compared the following methods for estimating the treatment effect: a naïve estimate (ignoring both survey weights and propensity scores), survey weighting, propensity score methods (nearest neighbor matching, weighting, and subclassification), and propensity score methods in combination with survey weighting. Methods are compared in terms of bias and 95 percent confidence interval coverage. In Example 2, we used these methods to estimate the effect on health care spending of having a generalist versus a specialist as a usual source of care. Principal Findings: In general, combining a propensity score method and survey weighting is necessary to achieve unbiased treatment effect estimates that are generalizable to the original survey target population. Conclusions: Propensity score methods are an essential tool for addressing confounding in observational studies. Ignoring survey weights may lead to results that are not generalizable to the survey target population. This paper clarifies the appropriate inferences for different propensity score methods and suggests guidelines for selecting an appropriate propensity score method based on a researcher's goal. PMID:23855598
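
    A minimal sketch (simulated data and assumed variable names) of the combination the tutorial recommends: estimate propensity scores, form inverse-probability-of-treatment weights, and multiply them by the survey weights before computing the weighted outcome contrast, so that the estimate generalizes to the survey target population.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 5_000
    x = rng.normal(size=(n, 2))                       # confounders
    survey_w = rng.uniform(0.5, 3.0, size=n)          # survey design weights
    p_treat = 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
    treat = rng.binomial(1, p_treat)
    y = 2.0 * treat + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)   # true effect = 2

    # Propensity scores from a logistic regression fitted with the survey weights.
    ps = LogisticRegression().fit(x, treat, sample_weight=survey_w).predict_proba(x)[:, 1]

    # Inverse-probability-of-treatment weights (ATE form), combined with the survey weights.
    iptw = np.where(treat == 1, 1 / ps, 1 / (1 - ps))
    w = iptw * survey_w

    def weighted_mean(v, wts):
        return np.sum(v * wts) / np.sum(wts)

    ate = weighted_mean(y[treat == 1], w[treat == 1]) - weighted_mean(y[treat == 0], w[treat == 0])
    print(round(ate, 3))   # should be close to the true effect of 2
    ```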

  13. Test versus analysis: A discussion of methods

    NASA Technical Reports Server (NTRS)

    Butler, T. G.

    1986-01-01

    Some techniques for comparing structural vibration data determined from test and analysis are discussed. Orthogonality is a general category of one group, correlation is a second, synthesis is a third and matrix improvement is a fourth. Advantages and short-comings of the methods are explored with suggestions as to how they can complement one another. The purpose for comparing vibration data from test and analysis for a given structure is to find out whether each is representing the dynamic properties of the structure in the same way. Specifically, whether: mode shapes are alike; the frequencies of the modes are alike; modes appear in the same frequency sequence; and if they are not alike, how to judge which to believe.

  14. Multiple network alignment via multiMAGNA+.

    PubMed

    Vijayan, Vipin; Milenkovic, Tijana

    2017-08-21

    Network alignment (NA) aims to find a node mapping that identifies topologically or functionally similar network regions between molecular networks of different species. Analogous to genomic sequence alignment, NA can be used to transfer biological knowledge from well- to poorly-studied species between aligned network regions. Pairwise NA (PNA) finds similar regions between two networks while multiple NA (MNA) can align more than two networks. We focus on MNA. Existing MNA methods aim to maximize total similarity over all aligned nodes (node conservation). Then, they evaluate alignment quality by measuring the amount of conserved edges, but only after the alignment is constructed. Directly optimizing edge conservation during alignment construction in addition to node conservation may result in superior alignments. Thus, we present a novel MNA method called multiMAGNA++ that can achieve this. Indeed, multiMAGNA++ outperforms or is on par with existing MNA methods, while often completing faster than existing methods. That is, multiMAGNA++ scales well to larger network data and can be parallelized effectively. During method evaluation, we also introduce new MNA quality measures to allow for more fair MNA method comparison compared to the existing alignment quality measures. MultiMAGNA++ code is available on the method's web page at http://nd.edu/~cone/multiMAGNA++/.

  15. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more.

    PubMed

    Rivas, Elena; Lang, Raymond; Eddy, Sean R

    2012-02-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases.

  16. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more

    PubMed Central

    Rivas, Elena; Lang, Raymond; Eddy, Sean R.

    2012-01-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases. PMID:22194308
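
    Hedged illustration only: TORNADO parses full SCFG architectures, which is far beyond a short example, but the simplest dynamic-programming relative of such grammar-based folding algorithms is the classic Nussinov base-pair maximization sketched below on a toy sequence (the minimum hairpin loop length of 3 is an assumption).

    ```python
    # Nussinov dynamic programming: maximize the number of complementary base pairs.
    PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

    def nussinov(seq, min_loop=3):
        n = len(seq)
        dp = [[0] * n for _ in range(n)]
        for span in range(min_loop + 1, n):
            for i in range(n - span):
                j = i + span
                best = dp[i + 1][j]                          # i left unpaired
                if (seq[i], seq[j]) in PAIRS:
                    best = max(best, dp[i + 1][j - 1] + 1)   # i pairs with j
                for k in range(i + 1, j):                    # bifurcation
                    best = max(best, dp[i][k] + dp[k + 1][j])
                dp[i][j] = best
        return dp[0][n - 1]

    print(nussinov("GGGAAAUCC"))   # expected: 3 base pairs
    ```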

  17. A pilot exploratory investigation on pregnant women's views regarding STan fetal monitoring technology.

    PubMed

    Bryson, Kate; Wilkinson, Chris; Kuah, Sabrina; Matthews, Geoff; Turnbull, Deborah

    2017-12-29

    Women's views are critical for informing the planning and delivery of maternity care services. ST segment analysis (STan), which is being trialled for the first time in Australia, is a promising method for detecting more accurately when unborn babies are at risk of brain damage or death during labour. This is the first study to examine women's views about STan monitoring in this context. Semi-structured interviews were conducted with pregnant women recruited across a range of clinical locations at the study hospital. The interviews included hypothetical scenarios to assess women's prospective views about STan monitoring (as an adjunct to cardiotocography (CTG)) compared to the existing fetal monitoring method of CTG alone. This article describes findings from an inductive and descriptive thematic analysis. Most women preferred the existing fetal monitoring method over STan monitoring; women's decision-making was multifaceted. Analysis yielded four themes relating to women's views towards fetal monitoring in labour: a) risk and labour, b) mobility in labour, c) autonomy and choice in labour, and d) trust in maternity care providers. Findings suggest that women's views towards CTG and STan monitoring are multifaceted, and appear to be influenced by individual labour preferences and the information being received and understood. This underlines the importance of clear communication between maternity care providers and women about technology use in intrapartum care. This research is now being used to inform the implementation of the first properly powered Australian randomised trial comparing STan and CTG monitoring.

  18. Comparing Value of Urban Green Space Using Contingent Valuation and Travel Cost Methods

    NASA Astrophysics Data System (ADS)

    Chintantya, Dea; Maryono

    2018-02-01

    Green urban open spaces are an important element of the city. They give multiple benefits for social life, human health, biodiversity, air quality, carbon sequestration, and water management. The Travel Cost Method (TCM) and the Contingent Valuation Method (CVM) are the methods most frequently used in studies that assess environmental goods and services in monetary terms for valuing urban green space. Both methods determine the value of urban green space through willingness to pay (WTP) for ecosystem benefits, with data collected through direct interviews and questionnaires. The findings of this study show the strengths and weaknesses of both methods for valuing urban green space and identify factors influencing the probability of users' willingness to pay under each method.

  19. Comparing 3D foot scanning with conventional measurement methods.

    PubMed

    Lee, Yu-Chi; Lin, Gloria; Wang, Mao-Jiun J

    2014-01-01

    Foot dimension information on different user groups is important for footwear design and clinical applications. Foot dimension data collected using different measurement methods presents accuracy problems. This study compared the precision and accuracy of the 3D foot scanning method with conventional foot dimension measurement methods including the digital caliper, ink footprint and digital footprint. Six commonly used foot dimensions, i.e. foot length, ball of foot length, outside ball of foot length, foot breadth diagonal, foot breadth horizontal and heel breadth were measured from 130 males and females using four foot measurement methods. Two-way ANOVA was performed to evaluate the sex and method effect on the measured foot dimensions. In addition, the mean absolute difference values and intra-class correlation coefficients (ICCs) were used for precision and accuracy evaluation. The results were also compared with the ISO 20685 criteria. The participant's sex and the measurement method were found (p < 0.05) to exert significant effects on the measured six foot dimensions. The precision of the 3D scanning measurement method with mean absolute difference values between 0.73 to 1.50 mm showed the best performance among the four measurement methods. The 3D scanning measurements showed better measurement accuracy performance than the other methods (mean absolute difference was 0.6 to 4.3 mm), except for measuring outside ball of foot length and foot breadth horizontal. The ICCs for all six foot dimension measurements among the four measurement methods were within the 0.61 to 0.98 range. Overall, the 3D foot scanner is recommended for collecting foot anthropometric data because it has relatively higher precision, accuracy and robustness. This finding suggests that when comparing foot anthropometric data among different references, it is important to consider the differences caused by the different measurement methods.

  20. Comparative Study of SVM Methods Combined with Voxel Selection for Object Category Classification on fMRI Data

    PubMed Central

    Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li

    2011-01-01

    Background: Support vector machines (SVM) have been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM versus the linear one. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Unlike traditional studies, which focused either merely on the evaluation of different types of SVM or on voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification, together with voxel selection schemes, in terms of classification accuracy and computation time. Methodology/Principal Findings: Six different voxel selection methods were employed to decide which voxels of fMRI data would be included in SVM classifiers with linear and RBF kernels for classifying 4-category objects. Then the overall performances of the voxel selection and classification methods were compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low dimensional feature space, RBF SVM outperformed linear SVM significantly; in a relatively high dimensional space, linear SVM performed better than its counterpart; (2) considering classification accuracy and computation time holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy in less time. Conclusions/Significance: The present work provides the first empirical result on linear and RBF SVM classification of fMRI data combined with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are the two suggested solutions; if users are more concerned about computational time, RBF SVM with a relatively small set of voxels, keeping part of the principal components as features, is the better choice. PMID:21359184
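
    A hedged sketch of the experimental pattern described, with synthetic data standing in for fMRI voxels: univariate voxel selection followed by linear and RBF SVMs, compared by cross-validation in a lower- and a higher-dimensional feature space. The feature counts, data and parameters are assumptions for illustration.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in for fMRI data: 200 trials x 2000 voxels, 4 object categories.
    X, y = make_classification(n_samples=200, n_features=2000, n_informative=50,
                               n_classes=4, n_clusters_per_class=1, random_state=0)

    for n_voxels in (50, 500):                      # low- vs. high-dimensional feature space
        for kernel in ("linear", "rbf"):
            clf = make_pipeline(SelectKBest(f_classif, k=n_voxels),
                                StandardScaler(),
                                SVC(kernel=kernel, C=1.0))
            acc = cross_val_score(clf, X, y, cv=5).mean()
            print(f"{n_voxels:4d} voxels, {kernel:6s} SVM: {acc:.3f}")
    ```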

  1. Multimethod Investigation of Interpersonal Functioning in Borderline Personality Disorder

    PubMed Central

    Stepp, Stephanie D.; Hallquist, Michael N.; Morse, Jennifer Q.; Pilkonis, Paul A.

    2011-01-01

    Even though interpersonal functioning is of great clinical importance for patients with borderline personality disorder (BPD), the comparative validity of different assessment methods for interpersonal dysfunction has not yet been tested. This study examined multiple methods of assessing interpersonal functioning, including self- and other-reports, clinical ratings, electronic diaries, and social cognitions in three groups of psychiatric patients (N=138): patients with (1) BPD, (2) another personality disorder, and (3) Axis I psychopathology only. Using dominance analysis, we examined the predictive validity of each method in detecting changes in symptom distress and social functioning six months later. Across multiple methods, the BPD group often reported higher interpersonal dysfunction scores compared to other groups. Predictive validity results demonstrated that self-report and electronic diary ratings were the most important predictors of distress and social functioning. Our findings suggest that self-report scores and electronic diary ratings have high clinical utility, as these methods appear most sensitive to change. PMID:21808661

  2. A comparative epidemiologic study of specific antibodies (IgM and IgA) and parasitological findings in an endemic area of low transmission of schistosoma mansoni.

    PubMed

    Kanamura, H Y; Dias, L C; da Silva, R M; Glasser, C M; Patucci, R M; Vellosa, S A; Antunes, J L

    1998-01-01

    The diagnostic potential of circulating IgM and IgA antibodies against Schistosoma mansoni gut-associated antigens detected by the immunofluorescence test (IFT) on adult worm paraffin sections was evaluated comparatively to the fecal parasitological method, for epidemiological purposes in low endemic areas for schistosomiasis. Blood samples were collected on filter paper from two groups of schoolchildren living in two different localities of the municipality of Itariri (São Paulo, Brazil) with different histories and prevalences of schistosomiasis. The parasitological and serological data were compared to those obtained for another group of schoolchildren from a non-endemic area for schistosomiasis. The results showed poor sensitivity of the parasitological method in detecting individuals with low worm burden and indicate the potential of the serological method as an important tool to be incorporated into schistosomiasis control and vigilance programs for determining the real situation of schistosomiasis in low endemic areas.

  3. Comparative effectiveness of instructional methods: oral and pharyngeal cancer examination.

    PubMed

    Clark, Nereyda P; Marks, John G; Sandow, Pamela R; Seleski, Christine E; Logan, Henrietta L

    2014-04-01

    This study compared the effectiveness of different methods of instruction for the oral and pharyngeal cancer examination. A group of thirty sophomore students at the University of Florida College of Dentistry were randomly assigned to three training groups: video instruction, a faculty-led hands-on instruction, or both video and hands-on instruction. The training intervention involved attending two sessions spaced two weeks apart. The first session used a pretest to assess students' baseline didactic knowledge and clinical examination technique. The second session utilized two posttests to assess the comparative effectiveness of the training methods on didactic knowledge and clinical technique. The key findings were that students performed the clinical examination significantly better with the combination of video and faculty-led hands-on instruction (p<0.01). All students improved their clinical exam skills, knowledge, and confidence in performing the oral and pharyngeal cancer examination independent of which training group they were assigned. Utilizing both video and interactive practice promoted greater performance of the clinical technique on the oral and pharyngeal cancer examination.

  4. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Guthier, C.; Aschenbrenner, K. P.; Buergy, D.; Ehmann, M.; Wenz, F.; Hesser, J. W.

    2015-03-01

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.

  5. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning.

    PubMed

    Guthier, C; Aschenbrenner, K P; Buergy, D; Ehmann, M; Wenz, F; Hesser, J W

    2015-03-21

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.
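
    The published solver is a custom matching-pursuit variant coupled to TG-43 dose calculation; as a loose, hedged analogue, the sketch below uses generic orthogonal matching pursuit to pick a sparse set of candidate seed positions whose combined dose-influence columns reproduce a prescribed dose. A random matrix stands in for the real dose-influence data, and non-negativity and clinical constraints are ignored.

    ```python
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    n_voxels, n_candidates = 400, 120            # dose points vs. candidate seed positions
    A = rng.random((n_voxels, n_candidates))     # stand-in dose-influence matrix

    # "True" plan: 10 active seeds generate the prescribed dose.
    true_x = np.zeros(n_candidates)
    true_x[rng.choice(n_candidates, size=10, replace=False)] = 1.0
    d_prescribed = A @ true_x

    # Sparse recovery: find a small set of seeds reproducing the prescription.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10).fit(A, d_prescribed)
    selected = np.flatnonzero(omp.coef_)
    print("selected seed positions:", selected)
    print("dose reproduction error:", np.linalg.norm(A @ omp.coef_ - d_prescribed))
    ```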

  6. Quantifying the biases in metagenome mining for realistic assessment of microbial ecology of naturally fermented foods.

    PubMed

    Keisam, Santosh; Romi, Wahengbam; Ahmed, Giasuddin; Jeyaram, Kumaraswamy

    2016-09-27

    Cultivation-independent investigation of microbial ecology is biased by the DNA extraction methods used. We aimed to quantify those biases by comparative analysis of the metagenome mined from four diverse naturally fermented foods (bamboo shoot, milk, fish, soybean) using eight different DNA extraction methods with different cell lysis principles. Our findings revealed that the enzymatic lysis yielded higher eubacterial and yeast metagenomic DNA from the food matrices compared to the widely used chemical and mechanical lysis principles. Further analysis of the bacterial community structure by Illumina MiSeq amplicon sequencing revealed a high recovery of lactic acid bacteria by the enzymatic lysis in all food types. However, Bacillaceae, Acetobacteraceae, Clostridiaceae and Proteobacteria were more abundantly recovered when mechanical and chemical lysis principles were applied. The biases generated due to the differential recovery of operational taxonomic units (OTUs) by different DNA extraction methods including DNA and PCR amplicons mix from different methods have been quantitatively demonstrated here. The different methods shared only 29.9-52.0% of the total OTUs recovered. Although similar comparative research has been performed on other ecological niches, this is the first in-depth investigation of quantifying the biases in metagenome mining from naturally fermented foods.

  7. A new multi-spectral feature level image fusion method for human interpretation

    NASA Astrophysics Data System (ADS)

    Leviner, Marom; Maltz, Masha

    2009-03-01

    Various methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods: averaging and principal components analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general and specific fusion methods in particular would be superior to using the original image sources can be further addressed.
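
    MSSF itself operates on segmented features and is not reproduced here; as a hedged illustration of the two pixel-level baselines it was compared against, the sketch below fuses a visible and an infrared band by simple averaging and by PCA weighting, with random arrays standing in for the registered source images.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    visible = rng.random((128, 128))      # stand-ins for co-registered source bands
    infrared = rng.random((128, 128))

    # Pixel-level average fusion.
    fused_avg = 0.5 * (visible + infrared)

    # PCA fusion: weight each band by the leading eigenvector of the two-band covariance.
    stack = np.vstack([visible.ravel(), infrared.ravel()])      # 2 x n_pixels
    cov = np.cov(stack)
    eigvals, eigvecs = np.linalg.eigh(cov)
    w = np.abs(eigvecs[:, np.argmax(eigvals)])
    w = w / w.sum()
    fused_pca = w[0] * visible + w[1] * infrared

    print("PCA band weights:", np.round(w, 3))
    print("fused shapes:", fused_avg.shape, fused_pca.shape)
    ```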

  8. Efficient and Robust Optimization for Building Energy Simulation

    PubMed Central

    Pourarian, Shokouh; Kearsley, Anthony; Wen, Jin; Pertzborn, Amanda

    2016-01-01

    Efficiently, robustly and accurately solving large sets of structured, non-linear algebraic and differential equations is one of the most computationally expensive steps in the dynamic simulation of building energy systems. Here, the efficiency, robustness and accuracy of two commonly employed solution methods are compared. The comparison is conducted using the HVACSIM+ software package, a component based building system simulation tool. The HVACSIM+ software presently employs Powell’s Hybrid method to solve systems of nonlinear algebraic equations that model the dynamics of energy states and interactions within buildings. It is shown here that the Powell’s method does not always converge to a solution. Since a myriad of other numerical methods are available, the question arises as to which method is most appropriate for building energy simulation. This paper finds considerable computational benefits result from replacing the Powell’s Hybrid method solver in HVACSIM+ with a solver more appropriate for the challenges particular to numerical simulations of buildings. Evidence is provided that a variant of the Levenberg-Marquardt solver has superior accuracy and robustness compared to the Powell’s Hybrid method presently used in HVACSIM+. PMID:27325907

  9. Efficient and Robust Optimization for Building Energy Simulation.

    PubMed

    Pourarian, Shokouh; Kearsley, Anthony; Wen, Jin; Pertzborn, Amanda

    2016-06-15

    Efficiently, robustly and accurately solving large sets of structured, non-linear algebraic and differential equations is one of the most computationally expensive steps in the dynamic simulation of building energy systems. Here, the efficiency, robustness and accuracy of two commonly employed solution methods are compared. The comparison is conducted using the HVACSIM+ software package, a component based building system simulation tool. The HVACSIM+ software presently employs Powell's Hybrid method to solve systems of nonlinear algebraic equations that model the dynamics of energy states and interactions within buildings. It is shown here that the Powell's method does not always converge to a solution. Since a myriad of other numerical methods are available, the question arises as to which method is most appropriate for building energy simulation. This paper finds considerable computational benefits result from replacing the Powell's Hybrid method solver in HVACSIM+ with a solver more appropriate for the challenges particular to numerical simulations of buildings. Evidence is provided that a variant of the Levenberg-Marquardt solver has superior accuracy and robustness compared to the Powell's Hybrid method presently used in HVACSIM+.
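
    As a hedged illustration of the comparison (not HVACSIM+ itself), SciPy exposes both solver families discussed: Powell's hybrid method via scipy.optimize.root(method='hybr') and a Levenberg-Marquardt least-squares solver via scipy.optimize.least_squares(method='lm'). The toy two-equation system below stands in for a coupled building-energy component model.

    ```python
    import numpy as np
    from scipy.optimize import least_squares, root

    def residuals(u):
        """Toy nonlinear system standing in for coupled component equations."""
        x, y = u
        return np.array([x ** 2 + y ** 2 - 4.0,
                         np.exp(x) + y - 1.0])

    x0 = np.array([1.0, 1.0])

    sol_hybr = root(residuals, x0, method="hybr")          # Powell's hybrid method
    sol_lm = least_squares(residuals, x0, method="lm")     # Levenberg-Marquardt

    print("hybr:", sol_hybr.success, sol_hybr.x, np.linalg.norm(residuals(sol_hybr.x)))
    print("lm  :", sol_lm.x, np.linalg.norm(residuals(sol_lm.x)))
    ```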

  10. Excavatability Assessment of Weathered Sedimentary Rock Mass Using Seismic Velocity Method

    NASA Astrophysics Data System (ADS)

    Bin Mohamad, Edy Tonnizam; Saad, Rosli; Noor, Muhazian Md; Isa, Mohamed Fauzi Bin Md.; Mazlan, Ain Naadia

    2010-12-01

    The seismic refraction method is one of the most popular methods for assessing surface excavation. The main objective of the seismic data acquisition is to delineate the subsurface into velocity profiles, as different velocities can be correlated to identify different materials. The physical principle used for the determination of excavatability is that seismic waves travel faster through denser material than through less consolidated material. In general, a lower velocity indicates soft material and a higher velocity indicates material that is more difficult to excavate. However, a few researchers have noted that the seismic velocity method alone does not correlate well with the excavatability of the material. In this study, the seismic velocity method was used in Nusajaya, Johor to assess how accurately it predicts the excavatability of the weathered sedimentary rock mass. A direct ripping run, monitoring the actual ripping production, was carried out at a later stage and compared with the ripper manufacturer's recommendation. This paper presents the findings of the seismic velocity tests in a weathered sedimentary area. The reliability of this method relative to the actual rippability trials is also discussed.

  11. High-order interactions observed in multi-task intrinsic networks are dominant indicators of aberrant brain function in schizophrenia

    PubMed Central

    Plis, Sergey M; Sui, Jing; Lane, Terran; Roy, Sushmita; Clark, Vincent P; Potluru, Vamsi K; Huster, Rene J; Michael, Andrew; Sponheim, Scott R; Weisend, Michael P; Calhoun, Vince D

    2013-01-01

    Identifying the complex activity relationships present in rich, modern neuroimaging data sets remains a key challenge for neuroscience. The problem is hard because (a) the underlying spatial and temporal networks may be nonlinear and multivariate and (b) the observed data may be driven by numerous latent factors. Further, modern experiments often produce data sets containing multiple stimulus contexts or tasks processed by the same subjects. Fusing such multi-session data sets may reveal additional structure, but raises further statistical challenges. We present a novel analysis method for extracting complex activity networks from such multifaceted imaging data sets. Compared to previous methods, we choose a new point in the trade-off space, sacrificing detailed generative probability models and explicit latent variable inference in order to achieve robust estimation of multivariate, nonlinear group factors (“network clusters”). We apply our method to identify relationships of task-specific intrinsic networks in schizophrenia patients and control subjects from a large fMRI study. After identifying network-clusters characterized by within- and between-task interactions, we find significant differences between patient and control groups in interaction strength among networks. Our results are consistent with known findings of brain regions exhibiting deviations in schizophrenic patients. However, we also find high-order, nonlinear interactions that discriminate groups but that are not detected by linear, pair-wise methods. We additionally identify high-order relationships that provide new insights into schizophrenia but that have not been found by traditional univariate or second-order methods. Overall, our approach can identify key relationships that are missed by existing analysis methods, without losing the ability to find relationships that are known to be important. PMID:23876245

  12. Intubation methods by novice intubators in a manikin model.

    PubMed

    O'Carroll, Darragh C; Barnes, Robert L; Aratani, Ashley K; Lee, Dane C; Lau, Christopher A; Morton, Paul N; Yamamoto, Loren G; Berg, Benjamin W

    2013-10-01

    Tracheal Intubation is an important yet difficult skill to learn with many possible methods and techniques. Direct laryngoscopy is the standard method of tracheal intubation, but several instruments have been shown to be less difficult and have better performance characteristics than the traditional direct method. We compared 4 different intubation methods performed by novice intubators on manikins: conventional direct laryngoscopy, video laryngoscopy, Airtraq® laryngoscopy, and fiberoptic laryngoscopy. In addition, we attempted to find a correlation between playing videogames and intubation times in novice intubators. Video laryngoscopy had the best results for both our normal and difficult airway (cervical spine immobilization) manikin scenarios. When video was compared to direct in the normal airway scenario, it had a significantly higher success rate (100% vs 83% P=.02) and shorter intubation times (29.1 ± 27.4 sec vs 45.9 ± 39.5 sec, P=.03). In the difficult airway scenario video laryngoscopy maintained a significantly higher success rate (91% vs 71% P=0.04) and likelihood of success (3.2 ± 1.0 95%CI [2.9-3.5] vs 2.4 ± 0.9 95%CI [2.1-2.7]) when compared to direct laryngoscopy. Participants also reported significantly higher rates of self-confidence (3.5 ± 0.6 95%CI [3.3-3.7]) and ease of use (1.5 ± 0.7 95%CI [1.3-1.8]) with video laryngoscopy compared to all other methods. We found no correlation between videogame playing and intubation methods.

  13. Worldwide F(ST) estimates relative to five continental-scale populations.

    PubMed

    Steele, Christopher D; Court, Denise Syndercombe; Balding, David J

    2014-11-01

    We estimate the population genetics parameter FST (also referred to as the fixation index) from short tandem repeat (STR) allele frequencies, comparing many worldwide human subpopulations at approximately the national level with continental-scale populations. FST is commonly used to measure population differentiation, and is important in forensic DNA analysis to account for remote shared ancestry between a suspect and an alternative source of the DNA. We estimate FST comparing subpopulations with a hypothetical ancestral population, which is the approach most widely used in population genetics, and also compare a subpopulation with a sampled reference population, which is more appropriate for forensic applications. Both estimation methods are likelihood-based, in which FST is related to the variance of the multinomial-Dirichlet distribution for allele counts. Overall, we find low FST values, with posterior 97.5 percentiles < 3% when comparing a subpopulation with the most appropriate population, and even for inter-population comparisons we find FST < 5%. These are much smaller than single nucleotide polymorphism-based inter-continental FST estimates, and are also about half the magnitude of STR-based estimates from population genetics surveys that focus on distinct ethnic groups rather than a general population. Our findings support the use of FST up to 3% in forensic calculations, which corresponds to some current practice.
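
    The study's estimates come from a likelihood model based on the multinomial-Dirichlet distribution; as a much simpler, hedged stand-in, the sketch below computes a Nei GST-style moment estimate of FST from subpopulation allele frequencies at a single STR locus (the frequencies are assumed values), which conveys the quantity being estimated without the likelihood machinery.

    ```python
    import numpy as np

    # Allele frequencies at one STR locus for three subpopulations (rows sum to 1; assumed values).
    p = np.array([
        [0.30, 0.25, 0.25, 0.20],
        [0.35, 0.20, 0.25, 0.20],
        [0.28, 0.27, 0.22, 0.23],
    ])

    # Within-subpopulation and total expected heterozygosity (Nei's GST formulation).
    h_s = np.mean(1.0 - np.sum(p ** 2, axis=1))     # mean within-subpopulation heterozygosity
    p_bar = p.mean(axis=0)                          # pooled allele frequencies (equal weights)
    h_t = 1.0 - np.sum(p_bar ** 2)                  # total heterozygosity

    fst = (h_t - h_s) / h_t
    print(f"FST (GST) estimate: {fst:.4f}")
    ```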

  14. The Ca II infrared triplet's performance as an activity indicator compared to Ca II H and K. Empirical relations to convert Ca II infrared triplet measurements to common activity indices

    NASA Astrophysics Data System (ADS)

    Martin, J.; Fuhrmeister, B.; Mittag, M.; Schmidt, T. O. B.; Hempelmann, A.; González-Pérez, J. N.; Schmitt, J. H. M. M.

    2017-09-01

    Aims: A large number of Calcium infrared triplet (IRT) spectra are expected from the Gaia and CARMENES missions. Conversion of these spectra into known activity indicators will allow analysis of their temporal evolution to a better degree. We set out to find such a conversion formula and to determine its robustness. Methods: We have compared 2274 Ca II IRT spectra of active main-sequence F to K stars taken by the TIGRE telescope with those of inactive stars of the same spectral type. After normalizing and applying rotational broadening, we subtracted the comparison spectra to find the chromospheric excess flux caused by activity. We obtained the total excess flux, and compared it to established activity indices derived from the Ca II H and K lines, the spectra of which were obtained simultaneously to the infrared spectra. Results: The excess flux in the Ca II IRT is found to correlate well with R'_HK and R+_HK, as well as S_MWO, if the B - V dependency is taken into account. We find an empirical conversion formula to calculate the corresponding value of one activity indicator from the measurement of another, by comparing groups of datapoints of stars with similar B - V.

  15. Inventory control of raw material using silver meal heuristic method in PR. Trubus Alami Malang

    NASA Astrophysics Data System (ADS)

    Ikasari, D. M.; Lestari, E. R.; Prastya, E.

    2018-03-01

    The purpose of this study was to compare the total inventory cost calculated using the method applied by PR. Trubus Alami with that obtained using the Silver Meal Heuristic (SMH) method. The study started by forecasting cigarette demand from July 2016 to June 2017 (48 weeks) using the additive decomposition forecasting method. Additive decomposition was chosen because it had the lowest Mean Absolute Deviation (MAD) and Mean Squared Deviation (MSD) compared with other methods such as multiplicative decomposition, moving average, single exponential smoothing, and double exponential smoothing. The forecasting results were then converted into raw material requirements and used in the SMH method to obtain the inventory cost. As expected, the results show that the order frequency under the SMH method was smaller than under the method applied by Trubus Alami, which affected the total inventory cost. Using the SMH method gave a 29.41% lower inventory cost, a difference of IDR 21,290,622. The findings therefore indicate that PR. Trubus Alami should apply the SMH method if the company wants to reduce its total inventory cost.
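
    A hedged sketch of the Silver-Meal heuristic itself (the demands and cost figures are assumptions, not the company's data): each replenishment keeps extending the lot to cover further periods as long as the average cost per period covered, setup plus accumulated holding cost, keeps decreasing.

    ```python
    def silver_meal(demand, setup_cost, holding_cost):
        """Return order quantities per period using the Silver-Meal heuristic."""
        n = len(demand)
        orders = [0.0] * n
        t = 0
        while t < n:
            best_avg = None
            lot = 0.0
            periods = 0
            holding = 0.0
            for k in range(t, n):
                holding += (k - t) * holding_cost * demand[k]   # carry demand[k] for (k - t) periods
                avg = (setup_cost + holding) / (k - t + 1)      # average cost per period covered
                if best_avg is not None and avg > best_avg:
                    break                                       # stop extending when the average rises
                best_avg = avg
                lot += demand[k]
                periods = k - t + 1
            orders[t] = lot
            t += periods
        return orders

    # Example: weekly raw-material demand (units) with assumed setup and holding costs.
    demand = [120, 80, 150, 60, 200, 90, 40, 130]
    print(silver_meal(demand, setup_cost=500.0, holding_cost=1.0))
    ```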

  16. Recognition of a person named entity from the text written in a natural language

    NASA Astrophysics Data System (ADS)

    Dolbin, A. V.; Rozaliev, V. L.; Orlova, Y. A.

    2017-01-01

    This work is devoted to the semantic analysis of texts written in a natural language. The main goal of the research was to compare latent Dirichlet allocation and latent semantic analysis for identifying elements of human appearance in text. The completeness of information retrieval was chosen as the efficiency criterion for the comparison of methods. However, choosing only one method was insufficient for achieving high recognition rates, so additional methods were used to find references to the person in the text. All these methods are based on the information model created, which represents a person's appearance.
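
    A hedged sketch of the kind of comparison described, with a toy corpus and parameters as assumptions: latent Dirichlet allocation fitted on term counts and latent semantic analysis (truncated SVD on TF-IDF) fitted on the same documents, after which the document-topic and document-concept representations can be inspected.

    ```python
    from sklearn.decomposition import LatentDirichletAllocation, TruncatedSVD
    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

    docs = [
        "the suspect was a tall man with short dark hair and a scar on his cheek",
        "witnesses described a woman with long blond hair wearing a red coat",
        "the report mentions green eyes a thin face and a small tattoo on the arm",
        "he wore glasses had a grey beard and walked with a slight limp",
    ]

    # Latent Dirichlet allocation on raw term counts.
    counts = CountVectorizer(stop_words="english").fit(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    doc_topics = lda.fit_transform(counts.transform(docs))

    # Latent semantic analysis: truncated SVD on a TF-IDF matrix.
    tfidf = TfidfVectorizer(stop_words="english").fit(docs)
    lsa = TruncatedSVD(n_components=2, random_state=0)
    doc_concepts = lsa.fit_transform(tfidf.transform(docs))

    print("LDA document-topic mixtures:\n", doc_topics.round(2))
    print("LSA document-concept coordinates:\n", doc_concepts.round(2))
    ```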

  17. Nasal symptoms and clinical findings in adult patients treated for unilateral cleft lip and palate.

    PubMed

    Morén, Staffan; Mani, Maria; Lundberg, Kristina; Holmström, Mats

    2013-10-01

    The aim of the study was to investigate self-experienced nasal symptoms among adults treated for UCLP and the association to clinical findings, and to evaluate whether palate closure in one-stage or two-stages affected the symptoms or clinical findings. All people with UCLP born between 1960-1987, treated at Uppsala University Hospital, were considered for participation in this cross-sectional population study with long-term follow-up. Eighty-three patients (76% participation rate) participated, a mean of 37 years after the first operation. Fifty-two patients were treated with one-stage palate closure and 31 with two-stage palate closure. An age-matched group of 67 non-cleft controls completed the same study protocol, which included a questionnaire regarding nasal symptoms, nasal inspection, anterior rhinoscopy, and nasal endoscopy. Patients reported a higher frequency of nasal symptoms compared with the control group, e.g., nasal obstruction (81% compared with 60%) and mouth breathing (20% compared with 5%). Patients also rated their nasal symptoms as having a more negative impact on their daily life and physical activities than controls. Nasal examination revealed higher frequencies of nasal deformities among patients. No positive correlation was found between nasal symptoms and severity of findings at nasal examination. No differences were identified between patients treated with one-stage and two-stage palate closure regarding symptoms or nasal findings. Adult patients treated for UCLP suffer from more nasal symptoms than controls. However, symptoms are not associated with findings at clinical nasal examination or method of palate closure.

  18. Search Methods Used to Locate Missing Cats and Locations Where Missing Cats Are Found

    PubMed Central

    Huang, Liyan; Coradini, Marcia; Rand, Jacquie; Morton, John; Albrecht, Kat; Wasson, Brigid; Robertson, Danielle

    2018-01-01

    Simple Summary: At least 15% of cat owners lose their pet in a five-year period and some are never found. This paper reports on data gathered from an online questionnaire that asked questions regarding search methods used to locate missing cats and locations where missing cats were found. The most important finding from this retrospective case series was that approximately one third of cats were recovered within 7 days. Secondly, a physical search increased the chances of finding cats alive and 75% of cats were found within a 500 m radius of their point of escape. Thirdly, those cats that were indoor-outdoor and allowed outside unsupervised traveled longer distances compared with indoor cats that were never allowed outside. Lastly, cats considered to be highly curious in nature were more likely to be found inside someone else’s house compared to other personality types. These findings suggest that a physical search within the first week of a cat going missing could be a useful strategy. In light of these findings, further research into this field may show whether programs such as shelter, neuter and return would improve the chances of owners searching and finding their missing cats as well as decreasing euthanasia rates in shelters. Abstract: Missing pet cats are often not found by their owners, with many being euthanized at shelters. This study aimed to describe times that lost cats were missing for, search methods associated with their recovery, locations where found and distances travelled. A retrospective case series was conducted where self-selected participants whose cat had gone missing provided data in an online questionnaire. Of the 1210 study cats, only 61% were found within one year, with 34% recovered alive by the owner within 7 days. Few cats were found alive after 90 days. There was evidence that physical searching increased the chance of finding the cat alive (p = 0.073), and 75% of cats were found within 500 m of the point of escape. Up to 75% of cats with outdoor access traveled 1609 m, further than the distance traveled by indoor-only cats (137 m; p ≤ 0.001). Cats considered to be highly curious were more likely to be found inside someone else’s house compared to other personality types. These findings suggest that thorough physical searching is a useful strategy, and should be conducted within the first week after cats go missing. They also support further investigation into whether shelter, neuter and return programs improve the chance of owners recovering missing cats and decrease numbers of cats euthanized in shelters. PMID:29301322

  19. Spatial cluster detection using dynamic programming.

    PubMed

    Sverchkov, Yuriy; Jiang, Xia; Cooper, Gregory F

    2012-03-25

    The task of spatial cluster detection involves finding spatial regions where some property deviates from the norm or the expected value. In a probabilistic setting this task can be expressed as finding a region where some event is significantly more likely than usual. Spatial cluster detection is of interest in fields such as biosurveillance, mining of astronomical data, military surveillance, and analysis of fMRI images. In almost all such applications we are interested both in the question of whether a cluster exists in the data, and if it exists, we are interested in finding the most accurate characterization of the cluster. We present a general dynamic programming algorithm for grid-based spatial cluster detection. The algorithm can be used both for Bayesian maximum a posteriori (MAP) estimation of the most likely spatial distribution of clusters and for Bayesian model averaging over a large space of spatial cluster distributions to compute the posterior probability of an unusual spatial clustering. The algorithm is explained and evaluated in the context of a biosurveillance application, specifically the detection and identification of influenza outbreaks based on emergency department visits. A relatively simple underlying model is constructed for the purpose of evaluating the algorithm, and the algorithm is evaluated using the model and semi-synthetic test data. When compared to baseline methods, tests indicate that the new algorithm can improve MAP estimates under certain conditions: the greedy algorithm to which we compared our method was more sensitive to smaller outbreaks, whereas, as outbreak size increases in terms of area affected and proportion of individuals affected, our method overtakes the greedy algorithm in spatial precision and recall. The new algorithm performs on par with baseline methods in the task of Bayesian model averaging. We conclude that the dynamic programming algorithm performs on par with other available methods for spatial cluster detection and point to its low computational cost and extendability as advantages in favor of further research and use of the algorithm.
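    As a simplified illustration of grid-based cluster scoring (a sketch only, not the authors' dynamic programming algorithm; the grid size, baseline rates, and injected outbreak below are assumptions), one can exhaustively scan axis-aligned rectangles of a count grid and score each region against a Poisson baseline:

        import numpy as np
        from scipy.stats import poisson

        def region_score(counts, baseline, region):
            """Log-likelihood ratio of 'elevated rate inside region' vs 'baseline everywhere'."""
            r0, r1, c0, c1 = region
            observed = counts[r0:r1, c0:c1].sum()
            expected = baseline[r0:r1, c0:c1].sum()
            rate = max(observed / expected, 1.0)   # only elevated regions are of interest
            return poisson.logpmf(observed, rate * expected) - poisson.logpmf(observed, expected)

        def best_rectangle(counts, baseline):
            """Exhaustive scan over rectangles; the paper's dynamic programming avoids this cost."""
            n_rows, n_cols = counts.shape
            best_region, best_score = None, -np.inf
            for r0 in range(n_rows):
                for r1 in range(r0 + 1, n_rows + 1):
                    for c0 in range(n_cols):
                        for c1 in range(c0 + 1, n_cols + 1):
                            s = region_score(counts, baseline, (r0, r1, c0, c1))
                            if s > best_score:
                                best_region, best_score = (r0, r1, c0, c1), s
            return best_region, best_score

        rng = np.random.default_rng(0)
        baseline = np.full((8, 8), 5.0)                               # expected counts per cell
        counts = rng.poisson(baseline)
        counts[2:4, 3:6] = rng.poisson(2.0 * baseline[2:4, 3:6])      # injected outbreak
        print(best_rectangle(counts, baseline))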

  20. Mean-field approximation for spacing distribution functions in classical systems

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2012-01-01

    We propose a mean-field method to calculate approximately the spacing distribution functions p(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed.
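    For reference (standard random-matrix background rather than a result of this paper), the classic Wigner surmise that the extended surmise generalizes reads, in LaTeX notation,

        p(s) = \frac{\pi}{2}\, s \, \exp\!\left(-\frac{\pi s^{2}}{4}\right), \qquad
        \int_{0}^{\infty} p(s)\, ds = \int_{0}^{\infty} s\, p(s)\, ds = 1,

    so that the spacing s is measured in units of the mean spacing.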

  1. Solution of second order quasi-linear boundary value problems by a wavelet method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lei; Zhou, Youhe; Wang, Jizeng, E-mail: jzwang@lzu.edu.cn

    2015-03-10

    A wavelet Galerkin method based on expansions in Coiflet-like scaling function bases is applied to solve second order quasi-linear boundary value problems, which represent a class of typical nonlinear differential equations. Two types of typical engineering problems are selected as test examples: one on nonlinear heat conduction and the other on bending of elastic beams. Numerical results are obtained by the proposed wavelet method. By comparing with relevant analytical solutions as well as solutions obtained by other methods, we find that the method shows better efficiency and accuracy than several others, and the rate of convergence can even reach orders of 5.8.

  2. Pedometer Readings and Self-Reported Walking Distances in a Rural Hutterite Population

    ERIC Educational Resources Information Center

    Samra, Haifa Abou; Beare, Tianna; Specker, Bonny

    2008-01-01

    Purpose: This study assessed the accuracy with which a rural population reported daily walking distances using a 7-day activity recall questionnaire obtained quarterly compared to pedometer readings. Methods: Study participants were 48 Hutterite men and women aged 11-66 years. Findings: Pedometer-miles quartiles were associated with self-reported…

  3. Methods and Mechanisms in the Efficacy of Psychodynamic Psychotherapy

    ERIC Educational Resources Information Center

    McKay, Dean

    2011-01-01

    Comments on the original article, "The efficacy of psychodynamic psychotherapy," by J. Shedler. Shedler summarized a large body of research that shows psychodynamic therapy to have a substantial effect size, comparable to that for many empirically supported treatments. This is an important finding, in part refuting the concerns raised by Bornstein…

  4. Computer-Based Instruction and Health Professions Education: A Meta-Analysis of Outcomes.

    ERIC Educational Resources Information Center

    Cohen, Peter A.; Dacanay, Lakshmi S.

    1992-01-01

    The meta-analytic techniques of G. V. Glass were used to statistically integrate findings from 47 comparative studies on computer-based instruction (CBI) in health professions education. A clear majority of the studies favored CBI over conventional methods of instruction. Results show higher-order applications of computers to be especially…

  5. Online or Face to Face? A Comparison of Two Methods of Training Professionals

    ERIC Educational Resources Information Center

    Dillon, Kristin; Dworkin, Jodi; Gengler, Colleen; Olson, Kathleen

    2008-01-01

    Online courses offer benefits over face-to-face courses such as accessibility, affordability, and flexibility. Literature assessing the effectiveness of face-to-face and online courses is growing, but findings remain inconclusive. This study compared evaluations completed by professionals who had taken a research update short course either face to…

  6. Comparing Societies from the 1500s in the Sixth Grade

    ERIC Educational Resources Information Center

    Matson, Trista; Henning, Mary Beth

    2008-01-01

    Inquiry is the process by which teachers give students an open-ended question, and then students investigate the evidence and draw conclusions based upon their findings. This method promotes critical thinking, as students cite evidence to support their opinions. Inquiry is most effective when it builds upon students' prior knowledge. To promote…

  7. Engaging Pre-Service Teachers in Multinational, Multi-Campus Scientific and Mathematical Inquiry

    ERIC Educational Resources Information Center

    Wilhelm, Jennifer Anne; Smith, Walter S.; Walters, Kendra L.; Sherrod, Sonya E.; Mulholland, Judith

    2008-01-01

    Pre-service teachers from Texas and Indiana in the United States and from Queensland, Australia, observed the Moon for a semester and compared and contrasted their findings in asynchronous Internet discussion groups. The 188 pre-service teachers were required to conduct inquiry investigations for their methods coursework which included an initial…

  8. Flipped @ SBU: Student Satisfaction and the College Classroom

    ERIC Educational Resources Information Center

    Gross, Benjamin; Marinari, Maddalena; Hoffman, Mike; DeSimone, Kimberly; Burke, Peggy

    2015-01-01

    In this paper, the authors find empirical support for the effectiveness of the flipped classroom model. Using a quasi-experimental method, the authors compared students enrolled in flipped courses to their counterparts in more traditional lecture-based ones. A survey instrument was constructed to study how these two different groups of students…

  9. The Luria-Nebraska Neuropsychological Battery and the WAIS-R in Assessment of Adults with Specific Learning Disabilities.

    ERIC Educational Resources Information Center

    Katz, Lynda; Goldstein, Gerald

    1993-01-01

    Compared intellectual (Wechsler Adult Intelligence Scale for Adults-Revised) and neuropsychological (Luria-Nebraska Neuropsychological Battery) assessment as valid methods of identifying learning disabilities in adults. Findings from 155 subjects revealed that both instruments were able to distinguish adults with and without learning disabilities.…

  10. The Socratic Method in the Introductory PR Course: An Alternative Pedagogy.

    ERIC Educational Resources Information Center

    Parkinson, Michael G.; Ekachai, Daradirek

    2002-01-01

    Presents the results of a study comparing student reactions to and perceptions of learning in introductory public relations courses using a traditional lecture format and a Socratic approach. Finds significant differences in the two groups showing that students who received the Socratic instruction reported more opportunities in practicing their…

  11. The "Primitive Mode of Representation" and the Evolution of Interactive Multimedia.

    ERIC Educational Resources Information Center

    Plowman, Lydia

    1994-01-01

    Findings from fieldwork analyzing children's use of four interactive multimedia programs are compared with a description of early film features and used as the basis to consider problems faced by an audience encountering a nascent medium. Methods adopted to facilitate understanding of films and their suitability for adaptation to multimedia…

  12. On mixed derivatives type high dimensional multi-term fractional partial differential equations approximate solutions

    NASA Astrophysics Data System (ADS)

    Talib, Imran; Belgacem, Fethi Bin Muhammad; Asif, Naseer Ahmad; Khalil, Hammad

    2017-01-01

    In this research article, we derive and analyze an efficient spectral method based on the operational matrices of three-dimensional orthogonal Jacobi polynomials to numerically solve a generalized class of high-dimensional, multi-term fractional-order partial differential equations with mixed partial derivatives. With the aid of the operational matrices, we transform the considered fractional-order problem into an easily solvable system of algebraic equations, whose solution yields the solution of the original problem. Some test problems are considered to confirm the accuracy and validity of the proposed numerical method. The convergence of the method is demonstrated by comparing our Matlab simulation results with the exact solutions in the literature, yielding negligible errors. Moreover, comparative results discussed in the literature are extended and improved in this study.

  13. An experiment with content distribution methods in touchscreen mobile devices.

    PubMed

    Garcia-Lopez, Eva; Garcia-Cabot, Antonio; de-Marcos, Luis

    2015-09-01

    This paper compares the usability of three different content distribution methods (scrolling, paging and internal links) in touchscreen mobile devices as means to display web documents. Usability is operationalized in terms of effectiveness, efficiency and user satisfaction. These dimensions are then measured in an experiment (N = 23) in which users are required to find words in regular-length web documents. Results suggest that scrolling is statistically better in terms of efficiency and user satisfaction. It is also found to be more effective but results were not significant. Our findings are also compared with existing literature to propose the following guideline: "try to use vertical scrolling in web pages for mobile devices instead of paging or internal links, except when the content is too large, then paging is recommended". With an ever increasing number of touchscreen web-enabled mobile devices, this new guideline can be relevant for content developers targeting the mobile web as well as institutions trying to improve the usability of their content for mobile platforms. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  14. Experimental aspect of solid-state nuclear magnetic resonance studies of biomaterials such as bones.

    PubMed

    Singh, Chandan; Rai, Ratan Kumar; Sinha, Neeraj

    2013-01-01

    Solid-state nuclear magnetic resonance (SSNMR) spectroscopy is becoming an increasingly popular technique to probe micro-structural details of biomaterials such as bone with picometer resolution. Because SSNMR methods probe such high-resolution structural details, the handling of bone samples and the experimental protocol are crucial aspects of any study. We present here the first report of the effect of various experimental protocols and bone-sample handling methods on measured SSNMR parameters. Various popular SSNMR experiments were performed on intact cortical bone collected from a fresh animal immediately after removal, and the results were compared with bone samples preserved under different conditions. We find that the best experimental conditions for SSNMR parameters of bone correspond to preservation at -20 °C and in 70% ethanol solution. Various other SSNMR parameters were compared across the different experimental conditions. Our study has helped identify the best experimental protocol for SSNMR studies of bone. This study will further help in applying SSNMR to large animal model systems of bone disease to obtain statistically significant results. © 2013 Elsevier Inc. All rights reserved.

  15. Comparison of Time-to-First Event and Recurrent Event Methods in Randomized Clinical Trials.

    PubMed

    Claggett, Brian; Pocock, Stuart; Wei, L J; Pfeffer, Marc A; McMurray, John J V; Solomon, Scott D

    2018-03-27

    Background: Most Phase-3 trials feature time-to-first event endpoints for their primary and/or secondary analyses. In chronic diseases where a clinical event can occur more than once, recurrent-event methods have been proposed to more fully capture disease burden and have been assumed to improve statistical precision and power compared to conventional "time-to-first" methods. Methods: To better characterize factors that influence statistical properties of recurrent-events and time-to-first methods in the evaluation of randomized therapy, we repeatedly simulated trials with 1:1 randomization of 4000 patients to active vs control therapy, with true patient-level risk reduction of 20% (i.e. RR=0.80). For patients who discontinued active therapy after a first event, we assumed their risk reverted subsequently to their original placebo-level risk. Through simulation, we varied a) the degree of between-patient heterogeneity of risk and b) the extent of treatment discontinuation. Findings were compared with those from actual randomized clinical trials. Results: As the degree of between-patient heterogeneity of risk was increased, both time-to-first and recurrent-events methods lost statistical power to detect a true risk reduction and confidence intervals widened. The recurrent-events analyses continued to estimate the true RR=0.80 as heterogeneity increased, while the Cox model produced estimates that were attenuated. The power of recurrent-events methods declined as the rate of study drug discontinuation post-event increased. Recurrent-events methods provided greater power than time-to-first methods in scenarios where drug discontinuation was ≤30% following a first event, lesser power with drug discontinuation rates of ≥60%, and comparable power otherwise. We confirmed in several actual trials in chronic heart failure that treatment effect estimates were attenuated when estimated via the Cox model and that increased statistical power from recurrent-events methods was most pronounced in trials with lower treatment discontinuation rates. Conclusions: We find that the statistical power of both recurrent-events and time-to-first methods is reduced by increasing heterogeneity of patient risk, a parameter not included in conventional power and sample size formulas. Data from real clinical trials are consistent with simulation studies, confirming that the greatest statistical gains from use of recurrent-events methods occur in the presence of high patient heterogeneity and low rates of study drug discontinuation.
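    A toy version of the simulation set-up described above (an illustrative sketch, not the authors' code; the frailty variance and the simple tests standing in for Cox and recurrent-event models are assumptions) can be written as:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def simulate_trial(n=4000, rr=0.8, base_rate=0.5, frailty_var=0.5):
            """Event counts per patient under a gamma-frailty Poisson model, 1:1 randomization."""
            arm = rng.integers(0, 2, n)                                # 0 = control, 1 = active
            frailty = rng.gamma(1.0 / frailty_var, frailty_var, n)    # mean 1, variance frailty_var
            rate = base_rate * frailty * np.where(arm == 1, rr, 1.0)
            return arm, rng.poisson(rate)

        def p_time_to_first(arm, counts):
            """Two-proportion z-test on 'any event', a crude stand-in for a time-to-first analysis."""
            any_event = counts > 0
            p1, p0 = any_event[arm == 1].mean(), any_event[arm == 0].mean()
            n1, n0 = (arm == 1).sum(), (arm == 0).sum()
            p_pool = any_event.mean()
            se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n0))
            return 2 * stats.norm.sf(abs(p1 - p0) / se)

        def p_recurrent(arm, counts):
            """Rank test on total event counts, a crude stand-in for a recurrent-events analysis."""
            return stats.mannwhitneyu(counts[arm == 1], counts[arm == 0],
                                      alternative='two-sided').pvalue

        n_sims = 200
        power_first = np.mean([p_time_to_first(*simulate_trial()) < 0.05 for _ in range(n_sims)])
        power_recur = np.mean([p_recurrent(*simulate_trial()) < 0.05 for _ in range(n_sims)])
        print(f"time-to-first power ~ {power_first:.2f}, recurrent-events power ~ {power_recur:.2f}")

    Increasing frailty_var in this sketch mimics greater between-patient heterogeneity and should lower both power estimates, in line with the qualitative conclusion above.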

  16. Satisfying positivity requirement in the Beyond Complex Langevin approach

    NASA Astrophysics Data System (ADS)

    Wyrzykowski, Adam; Ruba, Błażej

    2018-03-01

    The problem of finding a positive distribution which corresponds to a given complex density is studied. By requiring that the moments of the positive distribution and of the complex density be equal, one can reduce the problem to solving the matching conditions. These conditions form a set of quadratic equations, so the Groebner basis method was used to find their solutions when the problem is restricted to a few lowest-order moments. For a Gaussian complex density, these approximate solutions are compared with the exact solution, which is known in this special case.
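    A toy version of such moment-matching conditions (purely illustrative: the two-point ansatz and the target moments below are assumptions, not the densities studied in the paper) can be handed to a Groebner basis routine as follows:

        from sympy import symbols, groebner, solve

        # Ansatz: a positive two-point distribution w1*delta(x - x1) + w2*delta(x - x2)
        w1, w2, x1, x2 = symbols('w1 w2 x1 x2', real=True)

        # Hypothetical target moments <1>, <x>, <x^2>, <x^3> of the complex density
        m0, m1, m2, m3 = 1, 0, 2, 0

        matching = [
            w1 + w2 - m0,
            w1*x1 + w2*x2 - m1,
            w1*x1**2 + w2*x2**2 - m2,
            w1*x1**3 + w2*x2**3 - m3,
        ]

        # The Groebner basis puts the polynomial matching conditions into triangular form
        gb = groebner(matching, w1, w2, x1, x2, order='lex')
        print(gb)

        # ... which can then be solved explicitly
        print(solve(matching, [w1, w2, x1, x2], dict=True))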

  17. Sampling and pyrosequencing methods for characterizing bacterial communities in the human gut using 16S sequence tags.

    PubMed

    Wu, Gary D; Lewis, James D; Hoffmann, Christian; Chen, Ying-Yu; Knight, Rob; Bittinger, Kyle; Hwang, Jennifer; Chen, Jun; Berkowsky, Ronald; Nessel, Lisa; Li, Hongzhe; Bushman, Frederic D

    2010-07-30

    Intense interest centers on the role of the human gut microbiome in health and disease, but optimal methods for analysis are still under development. Here we present a study of methods for surveying bacterial communities in human feces using 454/Roche pyrosequencing of 16S rRNA gene tags. We analyzed fecal samples from 10 individuals and compared methods for storage, DNA purification and sequence acquisition. To assess reproducibility, we compared samples one cm apart on a single stool specimen for each individual. To analyze storage methods, we compared 1) immediate freezing at -80 degrees C, 2) storage on ice for 24 hours, or 3) storage on ice for 48 hours. For DNA purification methods, we tested three commercial kits and bead beating in hot phenol. Variation due to the different methodologies was compared to variation among individuals using two approaches: one based on presence-absence information for bacterial taxa (unweighted UniFrac) and the other taking into account relative abundance (weighted UniFrac). In the unweighted analysis, relatively little variation was associated with the different analytical procedures, and variation between individuals predominated. In the weighted analysis, considerable variation was associated with the purification methods. Particularly notable was improved recovery of Firmicutes sequences using the hot phenol method. We also carried out surveys of the effects of different 454 sequencing methods (FLX versus Titanium) and amplification of different 16S rRNA variable gene segments. Based on our findings, we present recommendations for protocols to collect, process and sequence bacterial 16S rDNA from fecal samples. Some major points are: 1) if feasible, bead-beating in hot phenol or use of the PSP kit improves recovery; 2) storage methods can be adjusted based on experimental convenience; and 3) unweighted (presence-absence) comparisons are less affected by lysis method.

  18. What is the impact of shift work on the psychological functioning and resilience of nurses? An integrative review.

    PubMed

    Tahghighi, Mozhdeh; Rees, Clare S; Brown, Janie A; Breen, Lauren J; Hegney, Desley

    2017-09-01

    To synthesize existing research to determine if nurses who work shifts have poorer psychological functioning and resilience than nurses who do not work shifts. Research exploring the impact of shift work on the psychological functioning and resilience of nurses is limited compared with research investigating the impact of shifts on physical outcomes. Integrative literature review. Relevant databases were searched from January 1995 to August 2016 using combinations of the keywords: nurse, shift work; rotating roster; night shift; resilient; hardiness; coping; well-being; burnout; mental health; occupational stress; compassion fatigue; compassion satisfaction; stress; anxiety; depression. Two authors independently performed the integrative review processes proposed by Whittemore and Knafl and a quality assessment using the mixed-methods appraisal tool by Pluye et al. A total of 37 articles were included in the review (32 quantitative, 4 qualitative and 1 mixed-methods). Approximately half of the studies directly compared nurse shift workers with non-shift workers. Findings were grouped according to the following main outcomes: (1) general psychological well-being/quality of life; (2) job satisfaction/burnout; (3) depression, anxiety and stress; and (4) resilience/coping. We did not find definitive evidence that shift work is associated with poorer psychological functioning in nurses. Overall, the findings suggest that the impact of shift work on nurse psychological functioning is dependent on several contextual and individual factors. More studies are required which directly compare the psychological outcomes and resilience of nurse shift workers with non-shift workers. © 2017 John Wiley & Sons Ltd.

  19. Estimating genome-wide regulatory activity from multi-omics data sets using mathematical optimization.

    PubMed

    Trescher, Saskia; Münchmeyer, Jannes; Leser, Ulf

    2017-03-27

    Gene regulation is one of the most important cellular processes, indispensable for the adaptability of organisms and closely interlinked with several classes of pathogenesis and their progression. Elucidation of regulatory mechanisms can be approached by a multitude of experimental methods, yet integration of the resulting heterogeneous, large, and noisy data sets into comprehensive and tissue- or disease-specific cellular models requires rigorous computational methods. Recently, several algorithms have been proposed which model genome-wide gene regulation as sets of (linear) equations over the activity and relationships of transcription factors, genes and other factors. Subsequent optimization finds those parameters that minimize the divergence of predicted and measured expression intensities. In various settings, these methods produced promising results in terms of estimating transcription factor activity and identifying key biomarkers for specific phenotypes. However, despite their common root in mathematical optimization, they vastly differ in the types of experimental data being integrated, the background knowledge necessary for their application, the granularity of their regulatory model, the concrete paradigm used for solving the optimization problem and the data sets used for evaluation. Here, we review five recent methods of this class in detail and compare them with respect to several key properties. Furthermore, we quantitatively compare the results of four of the presented methods based on publicly available data sets. The results show that all methods seem to find biologically relevant information. However, we also observe that the mutual result overlaps are very low, which contradicts biological intuition. Our aim is to raise further awareness of the power of these methods, yet also to identify common shortcomings and necessary extensions enabling focused research on the critical points.
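    The shared idea behind these methods can be sketched in a few lines (a deliberately simplified sketch, not any of the five reviewed algorithms; the network and expression data below are synthetic): model expression as a linear mix of transcription factor activities over a known connectivity matrix and fit the activities by least squares.

        import numpy as np

        rng = np.random.default_rng(0)
        n_genes, n_tfs, n_samples = 200, 10, 30

        # Assumed prior-knowledge network: genes x TFs connectivity weights (mostly zeros)
        connectivity = rng.binomial(1, 0.1, (n_genes, n_tfs)) * rng.normal(1.0, 0.3, (n_genes, n_tfs))

        # Synthetic "true" activities and noisy expression data: E ~ C @ A
        true_activity = rng.normal(0, 1, (n_tfs, n_samples))
        expression = connectivity @ true_activity + 0.1 * rng.normal(size=(n_genes, n_samples))

        # Estimate per-sample TF activities by minimizing ||E - C @ A||^2
        estimated_activity, *_ = np.linalg.lstsq(connectivity, expression, rcond=None)

        corr = np.corrcoef(true_activity.ravel(), estimated_activity.ravel())[0, 1]
        print(f"correlation between true and estimated TF activities: {corr:.2f}")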

  20. Comparisons between tokamak fueling of gas puffing and supersonic molecular beam injection in 2D simulations

    DOE PAGES

    Zhou, Y. L.; Wang, Z. H.; Xu, X. Q.; ...

    2015-01-09

    Plasma fueling with high efficiency and deep injection is very important for meeting fusion power performance requirements. Large scale simulations are a powerful and efficient way to study neutral transport dynamics and to find methods of improving the fueling performance. Two basic fueling methods, gas puffing (GP) and supersonic molecular beam injection (SMBI), are simulated and compared in the realistic divertor geometry of the HL-2A tokamak with a newly developed module, named trans-neut, within the framework of the BOUT++ boundary plasma turbulence code [Z. H. Wang et al., Nucl. Fusion 54, 043019 (2014)]. The physical model includes plasma density, heat, and momentum transport equations, along with neutral density and momentum transport equations. Transport dynamics and profile evolutions of both plasma and neutrals are simulated and compared between GP and SMBI in both the poloidal and radial directions, and they differ markedly between the two methods. We find that neutrals can penetrate about four centimeters inside the last closed (magnetic) flux surface (LCFS) during SMBI, while they are all deposited outside the LCFS during GP. Moreover, it is the radial convection and the larger inflowing flux which lead to the deeper penetration depth of SMBI and its higher fueling efficiency compared with GP.

  1. Comparisons between tokamak fueling of gas puffing and supersonic molecular beam injection in 2D simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Y. L.; Wang, Z. H.; Xu, X. Q.

    Plasma fueling with high efficiency and deep injection is very important for meeting fusion power performance requirements. Large scale simulations are a powerful and efficient way to study neutral transport dynamics and to find methods of improving the fueling performance. Two basic fueling methods, gas puffing (GP) and supersonic molecular beam injection (SMBI), are simulated and compared in the realistic divertor geometry of the HL-2A tokamak with a newly developed module, named trans-neut, within the framework of the BOUT++ boundary plasma turbulence code [Z. H. Wang et al., Nucl. Fusion 54, 043019 (2014)]. The physical model includes plasma density, heat, and momentum transport equations, along with neutral density and momentum transport equations. Transport dynamics and profile evolutions of both plasma and neutrals are simulated and compared between GP and SMBI in both the poloidal and radial directions, and they differ markedly between the two methods. We find that neutrals can penetrate about four centimeters inside the last closed (magnetic) flux surface (LCFS) during SMBI, while they are all deposited outside the LCFS during GP. Moreover, it is the radial convection and the larger inflowing flux which lead to the deeper penetration depth of SMBI and its higher fueling efficiency compared with GP.

  2. Comparisons between tokamak fueling of gas puffing and supersonic molecular beam injection in 2D simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Y. L.; Southwestern Institute of Physics, Chengdu 610041; Wang, Z. H., E-mail: zhwang@swip.ac.cn

    Plasma fueling with high efficiency and deep injection is very important for meeting fusion power performance requirements. Large scale simulations are a powerful and efficient way to study neutral transport dynamics and to find methods of improving the fueling performance. Two basic fueling methods, gas puffing (GP) and supersonic molecular beam injection (SMBI), are simulated and compared in the realistic divertor geometry of the HL-2A tokamak with a newly developed module, named trans-neut, within the framework of the BOUT++ boundary plasma turbulence code [Z. H. Wang et al., Nucl. Fusion 54, 043019 (2014)]. The physical model includes plasma density, heat, and momentum transport equations, along with neutral density and momentum transport equations. Transport dynamics and profile evolutions of both plasma and neutrals are simulated and compared between GP and SMBI in both the poloidal and radial directions, and they differ markedly between the two methods. We find that neutrals can penetrate about four centimeters inside the last closed (magnetic) flux surface (LCFS) during SMBI, while they are all deposited outside the LCFS during GP. It is the radial convection and the larger inflowing flux which lead to the deeper penetration depth of SMBI and its higher fueling efficiency compared with GP.

  3. Dynamic profiling of different ready-to-drink fermented dairy products: A comparative study using Temporal Check-All-That-Apply (TCATA), Temporal Dominance of Sensations (TDS) and Progressive Profile (PP).

    PubMed

    Esmerino, Erick A; Castura, John C; Ferraz, Juliana P; Tavares Filho, Elson R; Silva, Ramon; Cruz, Adriano G; Freitas, Mônica Q; Bolini, Helena M A

    2017-11-01

    Despite several differences in ingredients, processes and nutritional values, dairy foods such as yogurts, fermented milks and milk beverages are widely accepted worldwide, and although their sensory profiles are normally characterized by descriptive analyses, the temporal perception involved during consumption is rarely considered. In this sense, the present work aimed to assess the dynamic sensory profile of three categories of fermented dairy products using different temporal methodologies: Temporal Dominance of Sensations (TDS), Progressive Profiling (PP) and Temporal CATA (TCATA), and to compare the results obtained. The findings showed that the sensory characteristics that differ among the products are basically related to their commercial identity. Regarding the methods, all of them captured the variations between samples, with strong correlation between the data. In addition, in detecting differences in intensity, TCATA proved to be the most sensitive method for capturing textural changes. When using PP, a balanced experimental design considering the number of attributes, time intervals, and food matrix must be weighed. The findings are of interest to guide sensory and consumer practitioners in the dairy industry in formulating/reformulating their products and to help them choose the most suitable dynamic method for temporal evaluation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. A diffusion tensor imaging study of suicide attempters

    PubMed Central

    Thapa-Chhetry, Binod; Sublette, M. Elizabeth; Sullivan, Gregory M.; Oquendo, Maria A.; Mann, J. John; Parsey, Ramin V.

    2014-01-01

    Background Few studies have examined white matter abnormalities in suicide attempters using diffusion tensor imaging (DTI). This study sought to identify white matter regions altered in individuals with a prior suicide attempt. Methods DTI scans were acquired in 13 suicide attempters with major depressive disorder (MDD), 39 non-attempters with MDD, and 46 healthy participants (HP). Fractional anisotropy (FA) and apparent diffusion coefficient (ADC) were determined in the brain using two methods: region of interest (ROI) and tract-based spatial statistics (TBSS). ROIs were limited a priori to white matter adjacent to the caudal anterior cingulate cortex, rostral anterior cingulate cortex, dorsomedial prefrontal cortex, and medial orbitofrontal cortex. Results Using the ROI approach, suicide attempters had lower FA than MDD non-attempters and HP in the dorsomedial prefrontal cortex. Uncorrected TBSS results confirmed a significant cluster within the right dorsomedial prefrontal cortex indicating lower FA in suicide attempters compared to non-attempters. There were no differences in ADC when comparing suicide attempters, non-attempters and HP groups using ROI or TBSS methods. Conclusions Low FA in the dorsomedial prefrontal cortex was associated with a suicide attempt history. Converging findings from other imaging modalities support this finding, making this region of potential interest in determining the diathesis for suicidal behavior. PMID:24462041

  5. Forensic age estimation based on magnetic resonance imaging of third molars: converting 2D staging into 3D staging.

    PubMed

    De Tobel, Jannick; Hillewig, Elke; Verstraete, Koenraad

    2017-03-01

    Established methods to stage development of third molars for forensic age estimation are based on the evaluation of radiographs, which show a 2D projection. It has not been investigated whether these methods require any adjustments in order to apply them to stage third molars on magnetic resonance imaging (MRI), which shows 3D information. To prospectively study root stage assessment of third molars in age estimation using 3 Tesla MRI and to compare this with panoramic radiographs, in order to provide considerations for converting 2D staging into 3D staging and to determine the decisive root. All third molars were evaluated in 52 healthy participants aged 14-26 years using MRI in three planes. Three staging methods were investigated by two observers. In sixteen of the participants, MRI findings were compared with findings on panoramic radiographs. Decisive roots were palatal in upper third molars and distal in lower third molars. Fifty-seven per cent of upper third molars were not assessable on the radiograph, while 96.9% were on MRI. Upper third molars were more difficult to evaluate on radiographs than on MRI (p < .001). Lower third molars were equally assessable on both imaging techniques (93.8% MRI, 98.4% radiograph), with no difference in level of difficulty (p = .375). Inter- and intra-observer agreement for evaluation was higher in MRI than in radiographs. In both imaging techniques lower third molars showed greater inter- and intra-observer agreement compared to upper third molars. MR images in the sagittal plane proved to be essential for staging. In age estimation, 3T MRI of third molars could be valuable. Some considerations are, however, necessary to transfer known staging methods to this 3D technique.

  6. Functional magnetic resonance imaging activation detection: fuzzy cluster analysis in wavelet and multiwavelet domains.

    PubMed

    Jahanian, Hesamoddin; Soltanian-Zadeh, Hamid; Hossein-Zadeh, Gholam-Ali

    2005-09-01

    To present novel feature spaces, based on multiscale decompositions obtained by scalar wavelet and multiwavelet transforms, to remedy problems associated with high dimension of functional magnetic resonance imaging (fMRI) time series (when they are used directly in clustering algorithms) and their poor signal-to-noise ratio (SNR) that limits accurate classification of fMRI time series according to their activation contents. Using randomization, the proposed method finds wavelet/multiwavelet coefficients that represent the activation content of fMRI time series and combines them to define new feature spaces. Using simulated and experimental fMRI data sets, the proposed feature spaces are compared to the cross-correlation (CC) feature space and their performances are evaluated. In these studies, the false positive detection rate is controlled using randomization. To compare different methods, several points of the receiver operating characteristics (ROC) curves, using simulated data, are estimated and compared. The proposed features suppress the effects of confounding signals and improve activation detection sensitivity. Experimental results show improved sensitivity and robustness of the proposed method compared to the conventional CC analysis. More accurate and sensitive activation detection can be achieved using the proposed feature spaces compared to CC feature space. Multiwavelet features show superior detection sensitivity compared to the scalar wavelet features. (c) 2005 Wiley-Liss, Inc.

  7. Contrasting RCC, RVU, and ABC for managed care decisions. A case study compares three widely used costing methods and finds one superior.

    PubMed

    West, T D; Balas, E A; West, D A

    1996-08-01

    To obtain cost data needed to improve managed care decisions and negotiate profitable capitation contracts, most healthcare provider organizations use one of three costing methods: the ratio-of-costs-to-charges method, the relative value unit method, or the activity-based costing method. Although the ratio-of-costs-to-charges method is used by a majority of provider organizations, a case study that applied these three methods in a renal dialysis clinic found that the activity-based costing method provided the most accurate cost data. By using this costing method, healthcare financial managers can obtain the data needed to make optimal decisions regarding resource allocation and cost containment, thus assuring the long-term financial viability of their organizations.

  8. Perception of fore-and-aft whole-body vibration intensity measured by two methods.

    PubMed

    Forta, Nazım Gizem; Schust, Marianne

    2015-01-01

    This experimental study investigated the perception of fore-and-aft whole-body vibration intensity using cross-modality matching (CM) and magnitude estimation (ME) methods. Thirteen subjects were seated on a rigid seat without a backrest and exposed to sinusoidal stimuli from 0.8 to 12.5 Hz and 0.4 to 1.6 m·s⁻² r.m.s. The Stevens exponents did not significantly depend on vibration frequency or the measurement method. The ME frequency weightings depended significantly on vibration frequency, but the CM weightings did not. Using the CM and ME weightings would result in higher weighted exposures than those calculated using the ISO 2631-1 (1997) Wd weighting. Compared with the ISO Wk weighting, the CM- and ME-weighted exposures would be greater at 1.6 Hz and smaller above that frequency. The CM and ME frequency weightings based on the median ratings for the reference vibration condition did not differ significantly. The lack of a method effect for weightings and for Stevens exponents suggests that the findings from the two methods are comparable. Frequency weighting curves for seated subjects for x-axis whole-body vibration were derived from an experiment using two different measurement methods and were compared with the Wd and Wk weighting curves in ISO 2631-1 (1997).

  9. Applications of a General Finite-Difference Method for Calculating Bending Deformations of Solid Plates

    NASA Technical Reports Server (NTRS)

    Walton, William C., Jr.

    1960-01-01

    This paper reports the findings of an investigation of a finite-difference method directly applicable to calculating static or simple harmonic flexures of solid plates and potentially useful in other problems of structural analysis. The method, which was proposed in a doctoral thesis by John C. Houbolt, is based on linear theory and incorporates the principle of minimum potential energy. Full realization of its advantages requires use of high-speed computing equipment. After a review of Houbolt's method, results of some applications are presented and discussed. The applications consisted of calculations of the natural modes and frequencies of several uniform-thickness cantilever plates and, as a special case of interest, calculations of the modes and frequencies of the uniform free-free beam. Computed frequencies and nodal patterns for the first five or six modes of each plate are compared with existing experiments, and those for one plate are compared with another approximate theory. Beam computations are compared with exact theory. On the basis of the comparisons it is concluded that the method is accurate and general in predicting plate flexures, and additional applications are suggested. An appendix is devoted to computing procedures which evolved in the course of the applications and which facilitate use of the method in conjunction with high-speed computing equipment.

  10. A novel method of forceps biopsy improves the diagnosis of proximal biliary malignancies.

    PubMed

    Kulaksiz, Hasan; Strnad, Pavel; Römpp, Achim; von Figura, Guido; Barth, Thomas; Esposito, Irene; Schirmacher, Peter; Henne-Bruns, Doris; Adler, Guido; Stiehl, Adolf

    2011-02-01

    Tissue specimen collection represents a cornerstone in the diagnosis of proximal biliary tract malignancies, offering great specificity but only limited sensitivity. To improve the tumor detection rate, we developed a new method of forceps biopsy and compared it prospectively with endoscopic transpapillary brush cytology. Forty-three patients with proximal biliary stenoses suspicious for malignancy who were undergoing endoscopic retrograde cholangiography were prospectively recruited and subjected to both biopsy [using a double-balloon enteroscopy (DBE) forceps under the guidance of a pusher and guiding catheter with guidewire] and transpapillary brush cytology. The cytological/histological findings were compared with the final clinical diagnosis. Thirty-five of the 43 patients had a malignant disease (33 cholangiocarcinomas, 1 hepatocellular carcinoma, 1 gallbladder carcinoma). The sensitivity of cytology and biopsy in these patients was 49% and 69%, respectively. The method with the DBE forceps allowed a pinpoint biopsy of the biliary stenoses. Both methods had 100% specificity, and, when combined, they detected 80% of malignant processes. All patients with non-malignant conditions were correctly assigned by both methods. No clinically relevant complications were observed. The combination of forceps biopsy and transpapillary brush cytology is safe and offers superior detection rates compared with either method alone, and therefore represents a promising approach in the evaluation of proximal biliary tract processes.

  11. Seismic data fusion anomaly detection

    NASA Astrophysics Data System (ADS)

    Harrity, Kyle; Blasch, Erik; Alford, Mark; Ezekiel, Soundararajan; Ferris, David

    2014-06-01

    Detecting anomalies in non-stationary signals has valuable applications in many fields, including medicine and meteorology, such as identifying possible heart conditions from electrocardiography (ECG) signals or predicting earthquakes via seismographic data. Given the many choices of anomaly detection algorithms, it is important to compare candidate methods. In this paper, we examine and compare two approaches to anomaly detection and see how data fusion methods may improve performance. The first approach uses an artificial neural network (ANN) to detect anomalies in a wavelet de-noised signal. The other method uses a perspective neural network (PNN) to analyze an arbitrary number of "perspectives", or transformations, of the observed signal for anomalies. Possible perspectives may include wavelet de-noising, the Fourier transform, peak filtering, etc. In order to evaluate these techniques via signal fusion metrics, we apply signal preprocessing techniques such as de-noising to the original signal and then use a neural network to find anomalies in the generated signal. From this secondary result it is possible to use data fusion techniques that can be evaluated via existing data fusion metrics for single and multiple perspectives. The result shows which anomaly detection method, according to the metrics, is better suited overall for anomaly detection applications. The method used in this study could be applied to compare other signal processing algorithms.
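    A bare-bones version of the first pipeline stage (wavelet de-noising followed by a simple z-score flag standing in for the ANN stage; the synthetic signal, wavelet choice, and thresholds are illustrative assumptions) might look like this, using the PyWavelets package:

        import numpy as np
        import pywt

        def wavelet_denoise(signal, wavelet='db4', level=4):
            """Soft-threshold the detail coefficients using a universal threshold estimate."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise scale from finest level
            thresh = sigma * np.sqrt(2 * np.log(len(signal)))
            coeffs[1:] = [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[:len(signal)]

        # Synthetic non-stationary signal with an injected anomalous burst
        rng = np.random.default_rng(0)
        t = np.linspace(0, 10, 2048)
        signal = np.sin(2 * np.pi * 1.5 * t) + 0.3 * rng.normal(size=t.size)
        signal[1200:1220] += 4.0                                       # the "event" to detect

        denoised = wavelet_denoise(signal)
        z = (denoised - denoised.mean()) / denoised.std()
        anomalies = np.where(np.abs(z) > 3)[0]                         # simple z-score flag
        print(anomalies)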

  12. Substance abuse treatment for women who are under correctional supervision in the community: a systematic review of qualitative findings.

    PubMed

    Finfgeld-Connett, Deborah; Johnson, E Diane

    2011-01-01

    This systematic review was conducted to more fully analyze qualitative research findings relating to community-based court-supervised substance abuse treatment for women and to make recommendations regarding treatment enhancement. Five reports of qualitative research met the inclusion criteria. Findings from these reports were extracted and analyzed using constant comparative methods. Women who are referred to court-sanctioned substance abuse treatment programs may initially be reluctant to participate. Once engaged, however, they advocate for a full complement of well-financed comprehensive services. To optimize treatment effectiveness, women recommend gender-specific programs in which ambivalence is diminished, hope is instilled, and care is individualized.

  13. Quantitative analysis of regional myocardial performance in coronary artery disease

    NASA Technical Reports Server (NTRS)

    Stewart, D. K.; Dodge, H. T.; Frimer, M.

    1975-01-01

    Findings are presented from a group of subjects with significant coronary artery stenosis and from a group of controls, obtained using a quantitative method for the study of regional myocardial performance based on frame-by-frame analysis of biplane left ventricular angiograms. Particular emphasis was placed upon the analysis of wall motion in terms of normalized segment dimensions and the timing and velocity of contraction. The results were compared with the method of subjective assessment used clinically.

  14. Apparatus and method for the determination of grain size in thin films

    DOEpatents

    Maris, Humphrey J

    2000-01-01

    A method for the determination of grain size in a thin film sample comprising the steps of measuring first and second changes in the optical response of the thin film, comparing the first and second changes to find the attenuation of a propagating disturbance in the film and associating the attenuation of the disturbance to the grain size of the film. The second change in optical response is time delayed from the first change in optical response.

  15. Apparatus and method for the determination of grain size in thin films

    DOEpatents

    Maris, Humphrey J

    2001-01-01

    A method for the determination of grain size in a thin film sample comprising the steps of measuring first and second changes in the optical response of the thin film, comparing the first and second changes to find the attenuation of a propagating disturbance in the film and associating the attenuation of the disturbance to the grain size of the film. The second change in optical response is time delayed from the first change in optical response.

  16. U.S. Citizen Children of Undocumented Parents: The Link Between State Immigration Policy and the Health of Latino Children

    PubMed Central

    Vargas, Edward D.; Ybarra, Vickie D.

    2016-01-01

    Background We examine Latino citizen children in mixed-status families and how their physical health status compares with that of their U.S. citizen, co-ethnic counterparts. We also examine Latino parents' perceptions of state immigration policy and its implications for child health status. Methods Using the 2015 Latino National Health and Immigration Survey (n=1493), we estimate a series of multivariate ordered logistic regression models with mixed-status family and perceptions of state immigration policy as primary predictors. Results We find that mixed-status families report worse physical health for their children as compared to their U.S. citizen co-ethnics. We also find that parental perceptions of their state's immigration policy further exacerbate health disparities between families. Discussion These findings have implications for scholars and policy makers interested in immigrant health, family wellbeing, and health disparities in complex family structures. They contribute to the scholarship on Latino child health and on the erosion of the Latino immigrant health advantage. PMID:27435476

  17. Finding Influential Users in Social Media Using Association Rule Learning

    NASA Astrophysics Data System (ADS)

    Erlandsson, Fredrik; Bródka, Piotr; Borg, Anton; Johnson, Henric

    2016-04-01

    Influential users play an important role in online social networks, since users tend to have an impact on one another. The proposed work therefore analyzes users and their behavior in order to identify influential users and predict user participation. Normally, the success of a social media site depends on the activity level of the participating users. For both online social networking sites and individual users, it is of interest to find out whether a topic will be interesting or not. In this article, we propose association rule learning to detect relationships between users. In order to verify the findings, several experiments were executed based on social network analysis, in which the most influential users identified by association rule learning were compared to the results from degree centrality and PageRank centrality. The results clearly indicate that it is possible to identify the most influential users using association rule learning. In addition, the results also indicate a lower execution time compared to state-of-the-art methods.
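    A stripped-down illustration of mining user-to-user association rules from post participation (a sketch only; the interaction data, thresholds, and the notion of "influence" below are made-up placeholders, not the authors' pipeline):

        from itertools import combinations
        from collections import Counter

        # Hypothetical data: for each post, the set of users who interacted with it
        posts = [
            {"ana", "bob", "eve"},
            {"ana", "bob"},
            {"bob", "eve", "dan"},
            {"ana", "bob", "dan"},
            {"eve", "dan"},
        ]
        min_support, min_confidence = 0.4, 0.7

        user_count = Counter(u for post in posts for u in post)
        pair_count = Counter(frozenset(p) for post in posts
                             for p in combinations(sorted(post), 2))

        n = len(posts)
        rules = []
        for pair, cnt in pair_count.items():
            if cnt / n < min_support:
                continue
            a, b = tuple(pair)
            for lhs, rhs in ((a, b), (b, a)):
                confidence = cnt / user_count[lhs]     # P(rhs participates | lhs participates)
                if confidence >= min_confidence:
                    rules.append((lhs, rhs, cnt / n, confidence))

        # Users appearing on the left-hand side of many high-confidence rules are
        # candidates for "influential" users in this toy setting.
        for lhs, rhs, support, confidence in sorted(rules, key=lambda r: -r[3]):
            print(f"{lhs} -> {rhs}: support={support:.2f}, confidence={confidence:.2f}")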

  18. The non-parametric Parzen's window in stereo vision matching.

    PubMed

    Pajares, G; de la Cruz, J

    2002-01-01

    This paper presents an approach to the local stereo vision matching problem using edge segments as features with four attributes. From these attributes we compute a matching probability between pairs of features of the stereo images. A correspondence is declared true when this probability is maximal. We introduce a nonparametric strategy based on Parzen's window (1962) to estimate the probability density function (PDF) that is used to obtain the matching probability. This is the main contribution of the paper. A comparative analysis of other recent matching methods is included to show that this contribution can be justified theoretically. The proposed method is generalized in order to give guidelines about its use with the similarity constraint and also in different environments where other features and attributes are more suitable.
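    The core of a Parzen-window estimate is compact enough to show directly (a generic one-dimensional Gaussian-kernel sketch; the sample data and bandwidth are assumptions, and this is not the paper's four-attribute matching implementation):

        import numpy as np

        def parzen_pdf(x, samples, h):
            """Parzen-window estimate of the PDF at points x with a Gaussian kernel of width h."""
            u = (np.asarray(x) - samples[:, None]) / h
            kernels = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
            return kernels.mean(axis=0) / h

        rng = np.random.default_rng(0)
        # Hypothetical attribute differences observed for known true correspondences
        true_match_diffs = rng.normal(0.0, 0.2, 500)

        grid = np.linspace(-1, 1, 11)
        print(np.round(parzen_pdf(grid, true_match_diffs, h=0.1), 3))

        # A candidate pair whose attribute difference lands in a high-density region of this
        # PDF receives a high matching probability; the maximum over candidates is kept.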

  19. Annoyance survey by means of social media.

    PubMed

    Silva, Bruno; Santos, Gustavo; Eller, Rogeria; Gjestland, Truls

    2017-02-01

    Social surveys have been the conventional means of evaluating the annoyance caused by transportation noise. Sampling and interviewing by telephone, mail, or in person are often costly and time consuming, however. Data collection by web-based survey methods is less costly and may be completed more quickly, and hence could be conducted in countries with fewer resources. Such methods, however, raise issues about the generalizability and comparability of findings. These issues were investigated in a study of the annoyance of aircraft noise exposure around Brazil's Guarulhos Airport. The findings of 547 interviews obtained with the aid of Facebook advertisements and web-based forms were analysed with respect to estimated aircraft noise exposure levels at respondents' residences. The results were analysed to assess whether and how web-based surveys might yield generalizable noise dose-response relationships.

  20. Relative contributions of three descriptive methods: implications for behavioral assessment.

    PubMed

    Pence, Sacha T; Roscoe, Eileen M; Bourret, Jason C; Ahearn, William H

    2009-01-01

    This study compared the outcomes of three descriptive analysis methods-the ABC method, the conditional probability method, and the conditional and background probability method-to each other and to the results obtained from functional analyses. Six individuals who had been diagnosed with developmental delays and exhibited problem behavior participated. Functional analyses indicated that participants' problem behavior was maintained by social positive reinforcement (n = 2), social negative reinforcement (n = 2), or automatic reinforcement (n = 2). Results showed that for all but 1 participant, descriptive analysis outcomes were similar across methods. In addition, for all but 1 participant, the descriptive analysis outcome differed substantially from the functional analysis outcome. This supports the general finding that descriptive analysis is a poor means of determining functional relations.

  1. Quantifying and Comparing Effects of Climate Engineering Methods on the Earth System

    NASA Astrophysics Data System (ADS)

    Sonntag, Sebastian; Ferrer González, Miriam; Ilyina, Tatiana; Kracher, Daniela; Nabel, Julia E. M. S.; Niemeier, Ulrike; Pongratz, Julia; Reick, Christian H.; Schmidt, Hauke

    2018-02-01

    To contribute to a quantitative comparison of climate engineering (CE) methods, we assess atmosphere-, ocean-, and land-based CE measures with respect to Earth system effects consistently within one comprehensive model. We use the Max Planck Institute Earth System Model (MPI-ESM) with a prognostic carbon cycle to compare solar radiation management (SRM) by stratospheric sulfur injection and two carbon dioxide removal methods: afforestation and ocean alkalinization. The CE model experiments are designed to offset the effect of fossil-fuel burning on global mean surface air temperature under the RCP8.5 scenario to follow or get closer to the RCP4.5 scenario. Our results show the importance of feedbacks in the CE effects. For example, as a response to SRM, the land carbon uptake is enhanced by 92 Gt by the year 2100 compared to the reference RCP8.5 scenario due to reduced soil respiration, thus reducing atmospheric CO2. Furthermore, we show that normalizations allow for a better comparability of different CE methods. For example, we find that due to compensating processes, such as the biogeophysical effects of afforestation, more carbon needs to be removed from the atmosphere by afforestation than by alkalinization to reach the same global warming reduction. Overall, we illustrate how different CE methods affect the components of the Earth system; we identify challenges arising in a CE comparison, and thereby contribute to developing a framework for a comparative assessment of CE.

  2. [Correlates between Munich Functional Development Diagnostics and postural reactivity findings based on seven provoked postural reactions modus Vojta during the first period of the child's life].

    PubMed

    Gajewska, Ewa; Sobieska, Magdalena; Samborski, Włodzimierz

    2006-01-01

    This work presents two diagnostic methods which were used to examine 57 children during their first three months of life. By classifying abnormalities of central nervous coordination we compared seven postural reactions according to Vojta with spontaneous behaviour of the child according to Munich Functional Development Diagnostics. It was demonstrated that both methods for the detection of early lesions in the central nervous system are sensitive. Good coherence of the results suggests that both methods may be used interchangeably.

  3. Computational methods for vortex dominated compressible flows

    NASA Technical Reports Server (NTRS)

    Murman, Earll M.

    1987-01-01

    The principal objectives were to: understand the mechanisms by which Euler equation computations model leading edge vortex flows; understand the vortical and shock wave structures that may exist for different wing shapes, angles of incidence, and Mach numbers; and compare calculations with experiments in order to ascertain the limitations and advantages of Euler equation models. The initial approach utilized the cell centered finite volume Jameson scheme. The final calculation utilized a cell vertex finite volume method on an unstructured grid. Both methods used Runge-Kutta four stage schemes for integrating the equations. The principal findings are briefly summarized.

  4. Cosmic 21 cm delensing of microwave background polarization and the minimum detectable energy scale of inflation.

    PubMed

    Sigurdson, Kris; Cooray, Asantha

    2005-11-18

    We propose a new method for removing gravitational lensing from maps of cosmic microwave background (CMB) polarization anisotropies. Using observations of anisotropies or structures in the cosmic 21 cm radiation, emitted or absorbed by neutral hydrogen atoms at redshifts 10 to 200, the CMB can be delensed. We find this method could allow CMB experiments to have increased sensitivity to a background of inflationary gravitational waves (IGWs) compared to methods relying on the CMB alone and may constrain models of inflation which were heretofore considered to have undetectable IGW amplitudes.

  5. Improving hot region prediction by parameter optimization of density clustering in PPI.

    PubMed

    Hu, Jing; Zhang, Xiaolong

    2016-11-01

    This paper proposes an optimized algorithm that combines parameter-selected density clustering with feature-based classification for hot region prediction. First, all residues are classified by an SVM to remove non-hot-spot residues; then density clustering with parameter selection is used to find hot regions. For the density clustering, this paper studies how to select the input parameters. Density-based incremental clustering has two parameters, radius and density. We first fix the density and enumerate the radius to find a pair of parameters which leads to the maximum number of clusters, and then we fix the radius and enumerate the density to find another pair of parameters which leads to the maximum number of clusters. Experimental results show that the proposed method using both pairs of parameters provides better prediction performance than the alternative; comparing the two predictive results, fixing the radius and enumerating the density gives slightly higher prediction accuracy than fixing the density and enumerating the radius. Copyright © 2016. Published by Elsevier Inc.
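    The parameter-selection idea can be sketched with an off-the-shelf density clustering routine (a sketch only, using scikit-learn's DBSCAN in place of the authors' incremental clustering; the residue coordinates, parameter grids, and fixed values are assumptions):

        import numpy as np
        from sklearn.cluster import DBSCAN

        def n_clusters(labels):
            """Number of clusters found, ignoring the noise label -1."""
            return len(set(labels)) - (1 if -1 in labels else 0)

        rng = np.random.default_rng(0)
        # Hypothetical 3-D residue coordinates forming three loose groups
        coords = np.vstack([rng.normal(c, 0.6, (20, 3))
                            for c in ((0, 0, 0), (5, 0, 0), (0, 6, 0))])

        radii = np.arange(0.5, 3.01, 0.25)      # candidate "radius" (eps) values
        densities = range(2, 7)                 # candidate "density" (min points) values

        # Pair 1: fix the density, enumerate the radius, keep the setting with the most clusters
        fixed_density = 3
        pair1 = max(((r, fixed_density) for r in radii),
                    key=lambda p: n_clusters(DBSCAN(eps=p[0], min_samples=p[1]).fit(coords).labels_))

        # Pair 2: fix the radius, enumerate the density, keep the setting with the most clusters
        fixed_radius = 1.5
        pair2 = max(((fixed_radius, d) for d in densities),
                    key=lambda p: n_clusters(DBSCAN(eps=p[0], min_samples=p[1]).fit(coords).labels_))

        print("pair 1 (radius, density):", pair1)
        print("pair 2 (radius, density):", pair2)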

  6. Flow and Force Equations for a Body Revolving in a Fluid

    NASA Technical Reports Server (NTRS)

    Zahm, A F

    1930-01-01

    Part I gives a general method for finding the steady-flow velocity relative to a body in plane curvilinear motion, whence the pressure is found by Bernoulli's energy principle. Integration of the pressure supplies basic formulas for the zonal forces and moments on the revolving body. Part II, applying this steady-flow method, finds the velocity and pressure at all points of the flow inside and outside an ellipsoid and some of its limiting forms, and graphs those quantities for the latter forms. Part III finds the pressure, and thence the zonal force and moment, on hulls in plane curvilinear flight. Part IV derives general equations for the resultant fluid forces and moments on trisymmetrical bodies moving through a perfect fluid, and in some cases compares the moment values with those found for bodies moving in air. Part V furnishes ready formulas for potential coefficients and inertia coefficients for an ellipsoid and its limiting forms. Thence are derived tables giving numerical values of those coefficients for a comprehensive range of shapes.

  7. Phase I Design for Completely or Partially Ordered Treatment Schedules

    PubMed Central

    Wages, Nolan A.; O’Quigley, John; Conaway, Mark R.

    2013-01-01

    The majority of methods for the design of Phase I trials in oncology are based upon a single course of therapy, yet in actual practice it may be the case that there is more than one treatment schedule for any given dose. Therefore, the probability of observing a dose-limiting toxicity (DLT) may depend upon both the total amount of the dose given, as well as the frequency with which it is administered. The objective of the study then becomes to find an acceptable combination of both dose and schedule. Past literature on designing these trials has entailed the assumption that toxicity increases monotonically with both dose and schedule. In this article, we relax this assumption for schedules and present a dose-schedule finding design that can be generalized to situations in which we know the ordering between all schedules and those in which we do not. We present simulation results that compare our method to other suggested dose-schedule finding methodology. PMID:24114957

  8. Ensemble stacking mitigates biases in inference of synaptic connectivity.

    PubMed

    Chambers, Brendan; Levy, Maayan; Dechery, Joseph B; MacLean, Jason N

    2018-01-01

    A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
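
    A minimal sketch of the stacking idea described above, under the assumption that each inference method produces a score matrix over candidate connections and that ground-truth connectivity is available for fitting the weights; scikit-learn's LogisticRegression stands in for the paper's linear-combination procedure, and the method names and arrays are placeholders.

```python
# Combine per-connection scores from several inference methods into a single
# stacked predictor. Each column of X holds one method's scores for every
# candidate (pre, post) pair; y holds ground-truth connectivity (0/1).
import numpy as np
from sklearn.linear_model import LogisticRegression

def stack_inference_scores(score_matrices, ground_truth):
    # score_matrices: dict of name -> (n_neurons, n_neurons) score array
    X = np.column_stack([m.ravel() for m in score_matrices.values()])
    y = ground_truth.ravel().astype(int)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    ensemble = model.predict_proba(X)[:, 1].reshape(ground_truth.shape)
    return ensemble, dict(zip(score_matrices, model.coef_[0]))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    truth = rng.random((50, 50)) < 0.1            # hypothetical ground-truth synapses
    scores = {"cross_corr": truth + 0.5 * rng.random((50, 50)),
              "mutual_info": truth + 0.7 * rng.random((50, 50))}
    ens, weights = stack_inference_scores(scores, truth)
    print(weights)                                # learned weight per method
```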

  9. Use of the landmark method to address immortal person-time bias in comparative effectiveness research: a simulation study.

    PubMed

    Mi, Xiaojuan; Hammill, Bradley G; Curtis, Lesley H; Lai, Edward Chia-Cheng; Setoguchi, Soko

    2016-11-20

    Observational comparative effectiveness and safety studies are often subject to immortal person-time, a period of follow-up during which outcomes cannot occur because of the treatment definition. Common approaches, like excluding immortal time from the analysis or naïvely including immortal time in the analysis, are known to result in biased estimates of treatment effect. Other approaches, such as the Mantel-Byar and landmark methods, have been proposed to handle immortal time. Little is known about the performance of the landmark method in different scenarios. We conducted extensive Monte Carlo simulations to assess the performance of the landmark method compared with other methods in settings that reflect realistic scenarios. We considered four landmark times for the landmark method. We found that the Mantel-Byar method provided unbiased estimates in all scenarios, whereas the exclusion and naïve methods resulted in substantial bias when the hazard of the event was constant or decreased over time. The landmark method performed well in correcting immortal person-time bias in all scenarios when the treatment effect was small, and provided unbiased estimates when there was no treatment effect. The bias associated with the landmark method tended to be small when the treatment rate was higher in the early follow-up period than it was later. These findings were confirmed in a case study of chronic obstructive pulmonary disease. Copyright © 2016 John Wiley & Sons, Ltd.
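
    A minimal sketch of a landmark analysis under assumed column names (time, event, treat_time); the lifelines Cox model is used only for illustration and is not the simulation setup of the study.

```python
# Landmark analysis sketch: choose a landmark time, keep only subjects still at
# risk at that time, classify exposure by whether treatment started before the
# landmark, and measure follow-up from the landmark onward. This avoids
# counting pre-treatment (immortal) person-time as treated.
import pandas as pd
from lifelines import CoxPHFitter

def landmark_analysis(df, landmark):
    # df columns (hypothetical): time, event, treat_time (NaN if never treated)
    at_risk = df[df["time"] > landmark].copy()                  # still at risk
    at_risk["treated"] = (at_risk["treat_time"] <= landmark).astype(int)
    at_risk["time_from_landmark"] = at_risk["time"] - landmark
    cph = CoxPHFitter()
    cph.fit(at_risk[["time_from_landmark", "event", "treated"]],
            duration_col="time_from_landmark", event_col="event")
    return cph.hazard_ratios_["treated"]
```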

  10. Science knowledge and cognitive strategy use among culturally and linguistically diverse students

    NASA Astrophysics Data System (ADS)

    Lee, Okhee; Fradd, Sandra H.; Sutman, Frank X.

    Science performance is determined, to a large extent, by what students already know about science (i.e., science knowledge) and what techniques or methods students use in performing science tasks (i.e., cognitive strategies). This study describes and compares science knowledge, science vocabulary, and cognitive strategy use among four diverse groups of elementary students: (a) monolingual English Caucasian, (b) African-American, (c) bilingual Spanish, and (d) bilingual Haitian Creole. To facilitate science performance in culturally and linguistically congruent settings, the study included student dyads and teachers of the same language, culture, and gender. Science performance was observed using three science tasks: weather phenomena, simple machines, and buoyancy. Data analysis involved a range of qualitative methods focusing on major themes and patterns, and quantitative methods using coding systems to summarize frequencies and total scores. The findings reveal distinct patterns of science knowledge, science vocabulary, and cognitive strategy use among the four language and culture groups. The findings also indicate relationships among science knowledge, science vocabulary, and cognitive strategy use. These findings raise important issues about science instruction for culturally and linguistically diverse groups of students.

  11. The Impact of Childhood Obesity on Health and Health Service Use.

    PubMed

    Kinge, Jonas Minet; Morris, Stephen

    2018-06-01

    To test the impact of obesity on health and health care use in children, by the use of various methods to account for reverse causality and omitted variables. Fifteen rounds of the Health Survey for England (1998-2013), which is representative of children and adolescents in England. We use three methods to account for reverse causality and omitted variables in the relationship between BMI and health/health service use: regression with individual, parent, and household control variables; sibling fixed effects; and instrumental variables based on genetic variation in weight. We include all children and adolescents aged 4-18 years old. We find that obesity has a statistically significant and negative impact on self-rated health and a positive impact on health service use in girls, boys, younger children (aged 4-12), and adolescents (aged 13-18). The findings are comparable in each model in both boys and girls. Using econometric methods, we have mitigated several confounding factors affecting the impact of obesity in childhood on health and health service use. Our findings suggest that obesity has severe consequences for health and health service use even among children. © Health Research and Educational Trust.

  12. A comparative study on effect of e-learning and instructor-led methods on nurses’ documentation competency

    PubMed Central

    Abbaszadeh, Abbas; Sabeghi, Hakimeh; Borhani, Fariba; Heydari, Abbas

    2011-01-01

    BACKGROUND: Accurate recording of nursing care reflects the care performed and its quality, so any failure in documentation can be a cause of inadequate patient care. Improving nurses' skills in this area through effective educational methods is therefore of high importance. Since traditional teaching methods are poorly suited to settings of rapid knowledge expansion and constant change, e-learning can be a viable alternative. To examine the effect of e-learning on nurses' care reporting skills, this study compared e-learning with traditional instructor-led teaching. METHODS: This was a quasi-experimental study that compared the effect of two teaching methods (e-learning and lecture) on nursing documentation and examined differences in documentation competency between nurses who participated in e-learning (n = 30) and nurses in a lecture group (n = 31). RESULTS: There was no statistically significant difference between the two groups, and no statistically significant association between the groups with respect to demographic variables. However, given the benefits of e-learning over traditional instructor-led teaching, and their equal effect on nurses' documentation competency, e-learning can be a suitable substitute for the instructor-led method. CONCLUSIONS: E-learning, as a student-centered method, promotes nurses' documentation competency as effectively as the lecture method and can therefore be used to facilitate the implementation of nursing educational programs. PMID:22224113

  13. A Comparison of Different Methods for Evaluating Diet, Physical Activity, and Long-Term Weight Gain in 3 Prospective Cohort Studies123

    PubMed Central

    Smith, Jessica D; Hou, Tao; Hu, Frank B; Rimm, Eric B; Spiegelman, Donna; Willett, Walter C; Mozaffarian, Dariush

    2015-01-01

    Background: The insidious pace of long-term weight gain (∼1 lb/y or 0.45 kg/y) makes it difficult to study in trials; long-term prospective cohorts provide crucial evidence on its key contributors. Most previous studies have evaluated how prevalent lifestyle habits relate to future weight gain rather than to lifestyle changes, which may be more temporally and physiologically relevant. Objective: Our objective was to evaluate and compare different methodological approaches for investigating diet, physical activity (PA), and long-term weight gain. Methods: In 3 prospective cohorts (total n = 117,992), we assessed how lifestyle relates to long-term weight change (up to 24 y of follow-up) in 4-y periods by comparing 3 analytic approaches: 1) prevalent diet and PA and 4-y weight change (prevalent analysis); 2) 4-y changes in diet and PA with a 4-y weight change (change analysis); and 3) 4-y change in diet and PA with weight change in the subsequent 4 y (lagged-change analysis). We compared these approaches and evaluated the consistency across cohorts, magnitudes of associations, and biological plausibility of findings. Results: Across the 3 methods, consistent, robust, and biologically plausible associations were seen only for the change analysis. Results for prevalent or lagged-change analyses were less consistent across cohorts, smaller in magnitude, and biologically implausible. For example, for each serving of a sugar-sweetened beverage, the observed weight gain was 0.01 lb (95% CI: −0.08, 0.10) [0.005 kg (95% CI: −0.04, 0.05)] based on prevalent analysis; 0.99 lb (95% CI: 0.83, 1.16) [0.45 kg (95% CI: 0.38, 0.53)] based on change analysis; and 0.05 lb (95% CI: −0.10, 0.21) [0.02 kg (95% CI: −0.05, 0.10)] based on lagged-change analysis. Findings were similar for other foods and PA. Conclusions: Robust, consistent, and biologically plausible relations between lifestyle and long-term weight gain are seen when evaluating lifestyle changes and weight changes in discrete periods rather than in prevalent lifestyle or lagged changes. These findings inform the optimal methods for evaluating lifestyle and long-term weight gain and the potential for bias when other methods are used. PMID:26377763
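
    A minimal sketch contrasting the prevalent and change analyses on a hypothetical long-format panel (one row per participant per 4-y period); the column names, the simple OLS models, and the simulated data stand in for the cohort-specific multivariable models used in the study.

```python
# Contrast the "prevalent" and "change" analyses of diet and 4-y weight change.
# df (hypothetical) has one row per participant per 4-y period, with columns:
# id, period, ssb_servings (sugar-sweetened beverages/day), weight_change_lb.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def prevalent_vs_change(df):
    df = df.sort_values(["id", "period"]).copy()
    df["ssb_change"] = df.groupby("id")["ssb_servings"].diff()
    prevalent = smf.ols("weight_change_lb ~ ssb_servings", data=df).fit()
    change = smf.ols("weight_change_lb ~ ssb_change", data=df.dropna()).fit()
    return prevalent.params["ssb_servings"], change.params["ssb_change"]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, periods = 500, 4
    df = pd.DataFrame({"id": np.repeat(np.arange(n), periods),
                       "period": np.tile(np.arange(periods), n),
                       "ssb_servings": rng.uniform(0, 3, n * periods)})
    # Simulated so that weight change tracks the *change* in intake.
    true_change = df.groupby("id")["ssb_servings"].diff().fillna(0)
    df["weight_change_lb"] = true_change + rng.normal(0, 1, n * periods)
    print(prevalent_vs_change(df))   # change coefficient ≈ 1; prevalent typically smaller
```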

  14. Myocardial infarction size and location: a comparative study of epicardial isopotential mapping, thallium-201 scintigraphy, electrocardiography and vectorcardiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toyama, S.; Suzuki, K.; Takahashi, T.

    1987-07-01

    Based on epicardial isopotential mapping (the Ep Map), which was calculated from body surface isopotential mapping (the Body Map) with Yamashita's method, using the finite element technique, we predicted the location and size of the abnormal depolarized area (the infarcted area) in 19 clinical cases of anterior and 18 cases of inferoposterior infarction. The prediction was done using Toyama's diagnostic method, previously reported. The accuracy of the prediction by the Ep Map was assessed by comparing it with findings from thallium-201 scintigraphy (SCG), electrocardiography (ECG) and vectorcardiography (VCG). In all cases of anterior infarction, the location of the abnormal depolarized areas determined on the Ep Map, which was localized at the anterior wall along the anterior intraventricular septum, agreed with the location of the abnormal findings obtained by SCG, ECG and VCG. For all inferoposterior infarction cases, the abnormal depolarized areas were localized at the posterior wall and the location also coincided with that of the abnormal findings obtained by SCG, ECG and VCG. Furthermore, we ranked and ordered the size of the abnormal depolarized areas, which were predicted by the Ep Map for both anterior and inferoposterior infarction cases. In the cases of anterior infarction, the order of the size of the abnormal depolarized area by the Ep Map was correlated to the size of the abnormal findings by SCG, as well as to the results from Selvester's QRS scoring system in ECG and to the angle of the maximum QRS vector in the horizontal plane in VCG.

  15. Rapid Diagnosis of Tuberculosis with the Xpert MTB/RIF Assay in High Burden Countries: A Cost-Effectiveness Analysis

    PubMed Central

    Vassall, Anna; van Kampen, Sanne; Sohn, Hojoon; Michael, Joy S.; John, K. R.; den Boon, Saskia; Davis, J. Lucian; Whitelaw, Andrew; Nicol, Mark P.; Gler, Maria Tarcela; Khaliqov, Anar; Zamudio, Carlos; Perkins, Mark D.; Boehme, Catharina C.; Cobelens, Frank

    2011-01-01

    Background Xpert MTB/RIF (Xpert) is a promising new rapid diagnostic technology for tuberculosis (TB) that has characteristics that suggest large-scale roll-out. However, because the test is expensive, there are concerns among TB program managers and policy makers regarding its affordability for low- and middle-income settings. Methods and Findings We estimate the impact of the introduction of Xpert on the costs and cost-effectiveness of TB care using decision analytic modelling, comparing the introduction of Xpert to a base case of smear microscopy and clinical diagnosis in India, South Africa, and Uganda. The introduction of Xpert increases TB case finding in all three settings; from 72%–85% to 95%–99% of the cohort of individuals with suspected TB, compared to the base case. Diagnostic costs (including the costs of testing all individuals with suspected TB) also increase: from US$28–US$49 to US$133–US$146 and US$137–US$151 per TB case detected when Xpert is used “in addition to” and “as a replacement of” smear microscopy, respectively. The incremental cost effectiveness ratios (ICERs) for using Xpert “in addition to” smear microscopy, compared to the base case, range from US$41–$110 per disability adjusted life year (DALY) averted. Likewise the ICERs for using Xpert “as a replacement of” smear microscopy range from US$52–$138 per DALY averted. These ICERs are below the World Health Organization (WHO) willingness to pay threshold. Conclusions Our results suggest that Xpert is a cost-effective method of TB diagnosis, compared to a base case of smear microscopy and clinical diagnosis of smear-negative TB in low- and middle-income settings where, with its ability to substantially increase case finding, it has important potential for improving TB diagnosis and control. The extent of cost-effectiveness gain to TB programmes from deploying Xpert is primarily dependent on current TB diagnostic practices. Further work is required during scale-up to validate these findings. PMID:22087078
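
    A minimal sketch of the incremental cost-effectiveness ratio (ICER) arithmetic referred to above; the numbers below are placeholders, not figures from the study.

```python
# Incremental cost-effectiveness ratio: extra cost per extra DALY averted when
# comparing a new strategy (e.g., adding Xpert) with the base case.
def icer(cost_new, cost_base, dalys_averted_new, dalys_averted_base):
    return (cost_new - cost_base) / (dalys_averted_new - dalys_averted_base)

# Placeholder figures only:
print(icer(cost_new=140.0, cost_base=40.0,
           dalys_averted_new=3.1, dalys_averted_base=1.9))  # US$ per DALY averted
```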

  16. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    PubMed

    Kholeif, S A

    2001-06-01

    A new method that belongs to the differential category for determining the end points from potentiometric titration curves is presented. It uses a preprocess to find first derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locate the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis are covered. The new method is generally applied to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method in almost all factors levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves compatible with the equivalence point category of methods, such as Gran or Fortuin, are also compared with the new method.
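
    A minimal sketch of the two numerical steps described: estimating first-derivative values of the titration curve and locating the extremum by inverse parabolic interpolation through three bracketing points; the titration data here are synthetic, and the simple numerical gradient stands in for the paper's four-point non-linear fit.

```python
# End-point location sketch: numerical first derivative of the titration curve
# (pH vs. titrant volume), then the vertex of a parabola through the three
# points bracketing the derivative maximum (inverse parabolic interpolation).
import numpy as np

def endpoint_volume(volume, ph):
    dph = np.gradient(ph, volume)                 # first-derivative estimate
    i = int(np.argmax(dph))
    i = min(max(i, 1), len(volume) - 2)           # keep a three-point bracket
    xa, xb, xc = volume[i - 1], volume[i], volume[i + 1]
    ya, yb, yc = dph[i - 1], dph[i], dph[i + 1]
    num = (xb - xa) ** 2 * (yb - yc) - (xb - xc) ** 2 * (yb - ya)
    den = (xb - xa) * (yb - yc) - (xb - xc) * (yb - ya)
    return xb - 0.5 * num / den                   # analytical vertex of the parabola

if __name__ == "__main__":
    v = np.linspace(0.0, 20.0, 81)                # synthetic titration curve
    ph = 7.0 + 5.0 * np.tanh(1.5 * (v - 10.0))
    print(endpoint_volume(v, ph))                 # ~10.0 mL
```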

  17. Is Best-Worst Scaling Suitable for Health State Valuation? A Comparison with Discrete Choice Experiments.

    PubMed

    Krucien, Nicolas; Watson, Verity; Ryan, Mandy

    2017-12-01

    Health utility indices (HUIs) are widely used in economic evaluation. The best-worst scaling (BWS) method is being used to value dimensions of HUIs. However, little is known about the properties of this method. This paper investigates the validity of the BWS method to develop HUI, comparing it to another ordinal valuation method, the discrete choice experiment (DCE). Using a parametric approach, we find a low level of concordance between the two methods, with evidence of preference reversals. BWS responses are subject to decision biases, with significant effects on individuals' preferences. Non-parametric tests indicate that BWS data has lower stability, monotonicity and continuity compared to DCE data, suggesting that the BWS provides lower quality data. As a consequence, for both theoretical and technical reasons, practitioners should be cautious both about using the BWS method to measure health-related preferences, and using HUI based on BWS data. Given existing evidence, it seems that the DCE method is a better method, at least because its limitations (and measurement properties) have been extensively researched. Copyright © 2016 John Wiley & Sons, Ltd.

  18. Validation of automated white matter hyperintensity segmentation.

    PubMed

    Smart, Sean D; Firbank, Michael J; O'Brien, John T

    2011-01-01

    Introduction. White matter hyperintensities (WMHs) are a common finding on MRI scans of older people and are associated with vascular disease. We compared 3 methods for automatically segmenting WMHs from MRI scans. Method. An operator manually segmented WMHs on MRI images from a 3T scanner. The scans were also segmented in a fully automated fashion by three different programmes. The voxel overlap between manual and automated segmentation was compared. Results. The between-observer overlap ratio was 63%. Using our previously described in-house software, we obtained an overlap of 62.2%. We investigated the use of a modified version of SPM segmentation; however, this was not successful, with only 14% overlap. Discussion. Using our previously reported software, we demonstrated good segmentation of WMHs in a fully automated fashion.
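
    A minimal sketch of a voxel-overlap comparison between two binary WMH masks; Jaccard (intersection over union) and Dice are shown as common overlap measures, and the paper's exact overlap definition may differ.

```python
# Overlap between two binary lesion masks (e.g., manual vs. automated WMH
# segmentations). Jaccard = |A ∩ B| / |A ∪ B|; Dice = 2|A ∩ B| / (|A| + |B|).
import numpy as np

def overlap_ratios(mask_a, mask_b):
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    jaccard = inter / union if union else 1.0
    dice = 2 * inter / (a.sum() + b.sum()) if (a.sum() + b.sum()) else 1.0
    return jaccard, dice
```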

  19. A Comparison between Different Methods of Estimating Anaerobic Energy Production

    PubMed Central

    Andersson, Erik P.; McGawley, Kerry

    2018-01-01

    Purpose: The present study aimed to compare four methods of estimating anaerobic energy production during supramaximal exercise. Methods: Twenty-one junior cross-country skiers competing at a national and/or international level were tested on a treadmill during uphill (7°) diagonal-stride (DS) roller-skiing. After a 4-minute warm-up, a 4 × 4-min continuous submaximal protocol was performed followed by a 600-m time trial (TT). For the maximal accumulated O2 deficit (MAOD) method the V̇O2-speed regression relationship was used to estimate the V̇O2 demand during the TT, either including (4+Y, method 1) or excluding (4-Y, method 2) a fixed Y-intercept for baseline V̇O2. The gross efficiency (GE) method (method 3) involved calculating metabolic rate during the TT by dividing power output by submaximal GE, which was then converted to a V̇O2 demand. An alternative method based on submaximal energy cost (EC, method 4) was also used to estimate V̇O2 demand during the TT. Results: The GE/EC remained constant across the submaximal stages and the supramaximal TT was performed in 185 ± 24 s. The GE and EC methods produced identical V̇O2 demands and O2 deficits. The V̇O2 demand was ~3% lower for the 4+Y method compared with the 4-Y and GE/EC methods, with corresponding O2 deficits of 56 ± 10, 62 ± 10, and 63 ± 10 mL·kg−1, respectively (P < 0.05 for 4+Y vs. 4-Y and GE/EC). The mean differences between the estimated O2 deficits were −6 ± 5 mL·kg−1 (4+Y vs. 4-Y, P < 0.05), −7 ± 1 mL·kg−1 (4+Y vs. GE/EC, P < 0.05) and −1 ± 5 mL·kg−1 (4-Y vs. GE/EC), with respective typical errors of 5.3, 1.9, and 6.0%. The mean difference between the O2 deficit estimated with GE/EC based on the average of four submaximal stages compared with the last stage was 1 ± 2 mL·kg−1, with a typical error of 3.2%. Conclusions: These findings demonstrate a disagreement in the O2 deficits estimated using current methods. In addition, the findings suggest that a valid estimate of the O2 deficit may be possible using data from only one submaximal stage in combination with the GE/EC method. PMID:29472871
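
    A minimal sketch of a MAOD-style O2-deficit calculation, assuming submaximal V̇O2 and speed pairs, a measured time-trial V̇O2, and an optional fixed baseline intercept; all numbers are hypothetical and the simple regression is a simplification of the study's protocol.

```python
# O2-deficit sketch in the spirit of the MAOD method: regress submaximal VO2 on
# speed (with or without a fixed Y-intercept), extrapolate the VO2 demand at the
# time-trial speed, and subtract the accumulated measured VO2.
import numpy as np

def o2_deficit(sub_speed, sub_vo2, tt_speed, tt_vo2_mean, tt_duration_min,
               baseline_vo2=None):
    if baseline_vo2 is None:                      # "4-Y": free intercept
        slope, intercept = np.polyfit(sub_speed, sub_vo2, 1)
    else:                                         # "4+Y": fixed baseline intercept
        slope = np.linalg.lstsq(sub_speed[:, None],
                                sub_vo2 - baseline_vo2, rcond=None)[0][0]
        intercept = baseline_vo2
    demand = slope * tt_speed + intercept         # mL·kg−1·min−1
    return (demand - tt_vo2_mean) * tt_duration_min   # accumulated deficit, mL·kg−1

# Hypothetical numbers only:
speeds = np.array([2.0, 2.5, 3.0, 3.5])           # m/s, submaximal stages
vo2 = np.array([35.0, 42.0, 49.0, 56.0])          # mL·kg−1·min−1
print(o2_deficit(speeds, vo2, tt_speed=4.5, tt_vo2_mean=60.0,
                 tt_duration_min=3.1, baseline_vo2=5.0))
```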

  20. Aligning Metabolic Pathways Exploiting Binary Relation of Reactions.

    PubMed

    Huang, Yiran; Zhong, Cheng; Lin, Hai Xiang; Huang, Jing

    2016-01-01

    Metabolic pathway alignment has been widely used to find one-to-one and/or one-to-many reaction mappings to identify the alternative pathways that have similar functions through different sets of reactions, which has important applications in reconstructing phylogeny and understanding metabolic functions. The existing alignment methods exhaustively search reaction sets, which may become infeasible for large pathways. To address this problem, we present an effective alignment method for accurately extracting reaction mappings between two metabolic pathways. We show that connected relation between reactions can be formalized as binary relation of reactions in metabolic pathways, and the multiplications of zero-one matrices for binary relations of reactions can be accomplished in finite steps. By utilizing the multiplications of zero-one matrices for binary relation of reactions, we efficiently obtain reaction sets in a small number of steps without exhaustive search, and accurately uncover biologically relevant reaction mappings. Furthermore, we introduce a measure of topological similarity of nodes (reactions) by comparing the structural similarity of the k-neighborhood subgraphs of the nodes in aligning metabolic pathways. We employ this similarity metric to improve the accuracy of the alignments. The experimental results on the KEGG database show that when compared with other state-of-the-art methods, in most cases, our method obtains better performance in the node correctness and edge correctness, and the number of the edges of the largest common connected subgraph for one-to-one reaction mappings, and the number of correct one-to-many reaction mappings. Our method is scalable in finding more reaction mappings with better biological relevance in large metabolic pathways.
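
    A minimal sketch of the zero-one matrix idea described above: composing the Boolean adjacency matrix of the reaction relation a bounded number of times to collect connected reaction pairs without exhaustive search; the small adjacency matrix is a made-up example.

```python
# Binary relation of reactions as a zero-one matrix: R[i, j] = 1 if reaction i
# feeds reaction j. Repeated Boolean matrix multiplication gives the pairs
# connected by paths of growing length, so connected reaction sets can be
# collected in a bounded number of matrix products instead of exhaustive search.
import numpy as np

def reachable_within(R, k):
    R = R.astype(bool)
    power, reach = R.copy(), R.copy()
    for _ in range(k - 1):
        power = (power.astype(int) @ R.astype(int)) > 0   # one more composition step
        reach |= power
    return reach

R = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
print(reachable_within(R, 3).astype(int))   # pairs linked by paths of length <= 3
```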

  1. Actigraphic assessment of motor activity in acutely admitted inpatients with bipolar disorder.

    PubMed

    Krane-Gartiser, Karoline; Henriksen, Tone Elise Gjotterud; Morken, Gunnar; Vaaler, Arne; Fasmer, Ole Bernt

    2014-01-01

    Mania is associated with increased activity, whereas psychomotor retardation is often found in bipolar depression. Actigraphy is a promising tool for monitoring phase shifts and changes following treatment in bipolar disorder. The aim of this study was to compare recordings of motor activity in mania, bipolar depression and healthy controls, using linear and nonlinear analytical methods. Recordings from 18 acutely hospitalized inpatients with mania were compared to 12 recordings from bipolar depression inpatients and 28 healthy controls. 24-hour actigraphy recordings and 64-minute periods of continuous motor activity in the morning and evening were analyzed. Mean activity and several measures of variability and complexity were calculated. Patients with depression had a lower mean activity level compared to controls, but higher variability shown by increased standard deviation (SD) and root mean square successive difference (RMSSD) over 24 hours and in the active morning period. The patients with mania had lower first lag autocorrelation compared to controls, and Fourier analysis showed higher variance in the high frequency part of the spectrum corresponding to the period from 2-8 minutes. Both patient groups had a higher RMSSD/SD ratio compared to controls. In patients with mania we found an increased complexity of time series in the active morning period, compared to patients with depression. The findings in the patients with mania are similar to previous findings in patients with schizophrenia and healthy individuals treated with a glutamatergic antagonist. We have found distinctly different activity patterns in hospitalized patients with bipolar disorder in episodes of mania and depression, assessed by actigraphy and analyzed with linear and nonlinear mathematical methods, as well as clear differences between the patients and healthy comparison subjects.
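
    A minimal sketch of two of the variability measures mentioned (SD, RMSSD, and their ratio) applied to a series of minute-by-minute activity counts; the synthetic series below only illustrates the computation.

```python
# Variability measures used above on an activity-count series: the standard
# deviation (SD), the root mean square of successive differences (RMSSD), and
# the RMSSD/SD ratio.
import numpy as np

def activity_variability(counts):
    counts = np.asarray(counts, dtype=float)
    sd = counts.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(counts) ** 2))
    return sd, rmssd, rmssd / sd

rng = np.random.default_rng(0)
minute_counts = rng.poisson(lam=40, size=64)      # synthetic 64-minute recording
print(activity_variability(minute_counts))
```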

  2. Validation of a Tablet Application for Assessing Dietary Intakes Compared with the Measured Food Intake/Food Waste Method in Military Personnel Consuming Field Rations.

    PubMed

    Ahmed, Mavra; Mandic, Iva; Lou, Wendy; Goodman, Len; Jacobs, Ira; L'Abbé, Mary R

    2017-02-27

    The collection of accurate dietary intakes using traditional dietary assessment methods (e.g., food records) from military personnel is challenging due to the demanding physiological and psychological conditions of training or operations. In addition, these methods are burdensome, time consuming, and prone to measurement errors. Adopting smart-phone/tablet technology could overcome some of these barriers. The objective was to assess the validity of a tablet app, modified to contain detailed nutritional composition data, in comparison to a measured food intake/waste method. A sample of Canadian Armed Forces personnel, randomized to either a tablet app ( n = 9) or a weighed food record (wFR) ( n = 9), recorded the consumption of standard military rations for a total of 8 days. Compared to the gold standard measured food intake/waste method, the difference in mean energy intake was small (-73 kcal/day for tablet app and -108 kcal/day for wFR) ( p > 0.05). Repeated Measures Bland-Altman plots indicated good agreement for both methods (tablet app and wFR) with the measured food intake/waste method. These findings demonstrate that the tablet app, with added nutritional composition data, is comparable to the traditional dietary assessment method (wFR) and performs satisfactorily in relation to the measured food intake/waste method to assess energy, macronutrient, and selected micronutrient intakes in a sample of military personnel.

  3. Validation of a Tablet Application for Assessing Dietary Intakes Compared with the Measured Food Intake/Food Waste Method in Military Personnel Consuming Field Rations

    PubMed Central

    Ahmed, Mavra; Mandic, Iva; Lou, Wendy; Goodman, Len; Jacobs, Ira; L’Abbé, Mary R.

    2017-01-01

    The collection of accurate dietary intakes using traditional dietary assessment methods (e.g., food records) from military personnel is challenging due to the demanding physiological and psychological conditions of training or operations. In addition, these methods are burdensome, time consuming, and prone to measurement errors. Adopting smart-phone/tablet technology could overcome some of these barriers. The objective was to assess the validity of a tablet app, modified to contain detailed nutritional composition data, in comparison to a measured food intake/waste method. A sample of Canadian Armed Forces personnel, randomized to either a tablet app (n = 9) or a weighed food record (wFR) (n = 9), recorded the consumption of standard military rations for a total of 8 days. Compared to the gold standard measured food intake/waste method, the difference in mean energy intake was small (−73 kcal/day for tablet app and −108 kcal/day for wFR) (p > 0.05). Repeated Measures Bland-Altman plots indicated good agreement for both methods (tablet app and wFR) with the measured food intake/waste method. These findings demonstrate that the tablet app, with added nutritional composition data, is comparable to the traditional dietary assessment method (wFR) and performs satisfactorily in relation to the measured food intake/waste method to assess energy, macronutrient, and selected micronutrient intakes in a sample of military personnel. PMID:28264428

  4. A comparative study on effect of e-learning and instructor-led methods on nurses' documentation competency.

    PubMed

    Abbaszadeh, Abbas; Sabeghi, Hakimeh; Borhani, Fariba; Heydari, Abbas

    2011-01-01

    Accurate recording of nursing care reflects the care performed and its quality, so any failure in documentation can be a cause of inadequate patient care. Improving nurses' skills in this area through effective educational methods is therefore of high importance. Since traditional teaching methods are poorly suited to settings of rapid knowledge expansion and constant change, e-learning can be a viable alternative. To examine the effect of e-learning on nurses' care reporting skills, this study compared e-learning with traditional instructor-led teaching. This was a quasi-experimental study that compared the effect of two teaching methods (e-learning and lecture) on nursing documentation and examined differences in documentation competency between nurses who participated in e-learning (n = 30) and nurses in a lecture group (n = 31). There was no statistically significant difference between the two groups, and no statistically significant association between the groups with respect to demographic variables. However, given the benefits of e-learning over traditional instructor-led teaching, and their equal effect on nurses' documentation competency, e-learning can be a suitable substitute for the instructor-led method. E-learning, as a student-centered method, promotes nurses' documentation competency as effectively as the lecture method and can therefore be used to facilitate the implementation of nursing educational programs.

  5. Well-Being of Divorced Elderly and Their Dependency on Adult Children.

    ERIC Educational Resources Information Center

    Hennon, Charles B.; Burton, John R.

    Given current and projected divorce rates, many people will find themselves faced with being divorced in their later years. To determine if the way of becoming single affects well being, a matched sample of 40 divorced and widowed elderly women were compared. Results showed that the method of becoming single had some differential impact but…

  6. Nanostructured bioactive polymers used in food-packaging.

    PubMed

    Mateescu, Andreea L; Dimov, Tatiana V; Grumezescu, Alexandru M; Gestal, Monica C; Chifiriuc, Mariana C

    2015-01-01

    The development of effective packaging materials is crucial, because microorganisms in food give rise to economic and public health issues. The current paper describes some of the most recent findings regarding food preservation through novel packaging methods, using biodegradable polymers, efficient antimicrobial agents, and nanocomposites with improved mechanical and oxidation stability, increased biodegradability, and a better barrier effect compared with conventional polymeric matrices.

  7. A Mathematics Education Comparative Analysis of ALEKS Technology and Direct Classroom Instruction

    ERIC Educational Resources Information Center

    Mertes, Emily Sue

    2013-01-01

    Assessment and LEarning in Knowledge Spaces (ALEKS), a technology-based mathematics curriculum, was piloted in the 2012-2013 school year at a Minnesota rural public middle school. The goal was to find an equivalent or more effective mathematics teaching method than traditional direct instruction. The purpose of this quantitative study was to…

  8. How Generalizable Is Your Experiment? An Index for Comparing Samples and Populations

    ERIC Educational Resources Information Center

    Tipton, Elizabeth

    2013-01-01

    Recent research on the design of social experiments has highlighted the effects of different design choices on research findings. Since experiments rarely collect their samples using random selection, in order to address these external validity problems and design choices, recent research has focused on two areas. The first area is on methods for…

  9. Surrogacy Families: Parental Functioning, Parent-Child Relationships and Children's Psychological Development at Age 2

    ERIC Educational Resources Information Center

    Golombok, Susan; MacCallum, Fiona; Murray, Clare; Lycett, Emma; Jadva, Vasanti

    2006-01-01

    Background: Findings are presented of the second phase of a longitudinal study of families created through surrogacy. Methods: At the time of the child's 2nd birthday, 37 surrogacy families were compared with 48 egg donation families and 68 natural conception families on standardised interview and questionnaire measures of the psychological…

  10. Cognitive and Neuroimaging Findings in Physically Abused Preschoolers

    ERIC Educational Resources Information Center

    Prasad, M. R.; Kramer, L. A.; Ewing-Cobbs, L.

    2005-01-01

    Aims: To characterise the cognitive, motor, and language skills of toddlers and preschoolers who had been physically abused and to obtain concurrent MRIs of the brain. Methods: A between groups design was used to compare of sample of 19 children, aged 14-77 months, who had been hospitalised for physical abuse with no evidence of neurological…

  11. Oral Health Condition and Treatment Needs of a Group of Nigerian Individuals with Down Syndrome

    ERIC Educational Resources Information Center

    Oredugba, Folakemi A.

    2007-01-01

    Objective: This study was carried out to determine the oral health condition and treatment needs of a group of individuals with Down syndrome in Nigeria. Method: Participants were examined for oral hygiene status, dental caries, malocclusion, hypoplasia, missing teeth, crowding and treatment needs. Findings were compared with controls across age…

  12. Impact of the Olweus Bullying Prevention Program on a Middle School Environment

    ERIC Educational Resources Information Center

    Purugulla, Vijay

    2011-01-01

    This mixed methods research study sought to find if the implementation of the Olweus Bullying Prevention Program (OBPP) would reduce incidences of bullying in a suburban Atlanta middle school. Data was collected and compared over a two year period with Year 1 data representing pre-implementation of the OBPP. Discipline records associated with…

  13. An Empirical Study of the Career Paths of Senior Educational Administrators in Manitoba, Canada: Implications for Career Development

    ERIC Educational Resources Information Center

    Wallin, Dawn C.

    2012-01-01

    This paper conceptualizes queue theory (Tallerico & Blount, 2004) to discuss a mixed-methods study that determined the career patterns of senior educational administrators in public school divisions in Manitoba, Canada, compared by position, context and sex. Findings indicate that queue theory has merit for describing the career paths of…

  14. Wisdom for the Ages from the Sages: Manitoba Senior Administrators Offer Advice to Aspirants

    ERIC Educational Resources Information Center

    Wallin, Dawn C.

    2010-01-01

    This paper discusses a portion of the findings of a mixed-methods study that examined the career patterns of senior educational administrators in public school divisions in Manitoba, Canada. Data based on the career paths of senior administrators from both a survey and interviews of senior administrators were analyzed and compared along three…

  15. World Wide Web Indexes and Hierarchical Lists: Finding Tools for the Internet.

    ERIC Educational Resources Information Center

    Munson, Kurt I.

    1996-01-01

    In World Wide Web indexing: (1) the creation process is automated; (2) the indexes are merely descriptive, not analytical of document content; (3) results may be sorted differently depending on the search engine; and (4) indexes link directly to the resources. This article compares the indexing methods and querying options of the search engines…

  16. Measuring Disability: Application of the Rasch Model to Activities of Daily Living (ADL/IADL).

    ERIC Educational Resources Information Center

    Sheehan, T. Joseph; DeChello, Laurie M.; Garcia, Ramon; Fifield, Judith; Rothfield, Naomi; Reisine, Susan

    2001-01-01

    Performed a comparative analysis of Activities of Daily Living (ADL) items administered to 4,430 older adults and Instrumental Activities of Daily Living administered to 605 people with rheumatoid arthritis scoring both with Likert and Rasch measurement models. Findings show the superiority of the Rasch approach over the Likert method. (SLD)

  17. Vocabulary Development in European Portuguese: A Replication Study Using the Language Development Survey

    ERIC Educational Resources Information Center

    Rescorla, Leslie; Nyame, Josephine; Dias, Pedro

    2016-01-01

    Purpose: Our objective was to replicate previous cross-linguistic findings by comparing Portuguese and U.S. children with respect to (a) effects of language, gender, and age on vocabulary size; (b) lexical composition; and (c) late talking. Method: We used the Language Development Survey (LDS; Rescorla, 1989) with children (18-35 months) learning…

  18. Bacterial whole genome-based phylogeny: construction of a new benchmarking dataset and assessment of some existing methods.

    PubMed

    Ahrenfeldt, Johanne; Skaarup, Carina; Hasman, Henrik; Pedersen, Anders Gorm; Aarestrup, Frank Møller; Lund, Ole

    2017-01-05

    Whole genome sequencing (WGS) is increasingly used in diagnostics and surveillance of infectious diseases. A major application for WGS is to use the data for identifying outbreak clusters, and there is therefore a need for methods that can accurately and efficiently infer phylogenies from sequencing reads. In the present study we describe a new dataset that we have created for the purpose of benchmarking such WGS-based methods for epidemiological data, and also present an analysis where we use the data to compare the performance of some current methods. Our aim was to create a benchmark data set that mimics sequencing data of the sort that might be collected during an outbreak of an infectious disease. This was achieved by letting an E. coli hypermutator strain grow in the lab for 8 consecutive days, each day splitting the culture in two while also collecting samples for sequencing. The result is a data set consisting of 101 whole genome sequences with known phylogenetic relationship. Among the sequenced samples 51 correspond to internal nodes in the phylogeny because they are ancestral, while the remaining 50 correspond to leaves. We also used the newly created data set to compare three different online available methods that infer phylogenies from whole-genome sequencing reads: NDtree, CSI Phylogeny and REALPHY. One complication when comparing the output of these methods with the known phylogeny is that phylogenetic methods typically build trees where all observed sequences are placed as leaves, even though some of them are in fact ancestral. We therefore devised a method for post-processing the inferred trees by collapsing short branches (thus relocating some leaves to internal nodes), and also present two new measures of tree similarity that take into account the identity of both internal and leaf nodes. Based on this analysis we find that, among the investigated methods, CSI Phylogeny had the best performance, correctly identifying 73% of all branches in the tree and 71% of all clades. We have made all data from this experiment (raw sequencing reads, consensus whole-genome sequences, as well as descriptions of the known phylogeny in a variety of formats) publicly available, with the hope that other groups may find this data useful for benchmarking and exploring the performance of epidemiological methods. All data is freely available at: https://cge.cbs.dtu.dk/services/evolution_data.php.

  19. Comparison of Conventional Versus Spiral Computed Tomography with Three Dimensional Reconstruction in Chronic Otitis Media with Ossicular Chain Destruction.

    PubMed

    Naghibi, Saeed; Seifirad, Sirous; Adami Dehkordi, Mahboobeh; Einolghozati, Sasan; Ghaffarian Eidgahi Moghadam, Nafiseh; Akhavan Rezayat, Amir; Seifirad, Soroush

    2016-01-01

    Chronic otitis media (COM) can be treated with tympanoplasty with or without mastoidectomy. In patients who have undergone middle ear surgery, three-dimensional spiral computed tomography (CT) scan plays an important role in optimizing surgical planning. This study was performed to compare the findings of three-dimensional reconstructed spiral CT and conventional CT in the study of the ossicular chain in patients with COM. Fifty patients enrolled in the study underwent plain and three-dimensional CT scans (PHILIPS-MX 8000). Ossicular changes, mastoid cavity, tympanic cavity, and presence of cholesteatoma were evaluated. Results of the two methods were then compared and interpreted by a radiologist, recorded in questionnaires, and analyzed. Logistic regression test and Kappa coefficient of agreement were used for statistical analyses. Sixty-two ears with COM were found on physical examination. A significant difference was observed between the findings of the two methods in ossicle erosion (11.3% in conventional CT vs. 37.1% in spiral CT, P = 0.0001), decrease of mastoid air cells (82.3% in conventional CT vs. 93.5% in spiral CT, P = 0.001), and tympanic cavity opacity (12.9% in conventional CT vs. 40.3% in spiral CT, P = 0.0001). No significant difference was observed between the findings of the two methods in ossicle destruction (6.5% conventional CT vs. 56.4% in spiral CT, P = 0.125), and presence of cholesteatoma (3.2% in conventional CT vs. 42% in spiral CT, P = 0.172). In this study, spiral CT scan demonstrated ossicle dislocation in 9.6%, decrease of mastoid air cells in 4.8%, and decrease of volume in the tympanic cavity in 1.6%; whereas, none of these findings were reported in the patients' conventional CT scans. Spiral CT is superior to conventional CT in the diagnosis of lesions in COM before operation. It can be used for detailed evaluation of the ossicular chain in such patients.

  20. Dynamic sequence analysis of a decision making task of multielement target tracking and its usage as a learning method

    NASA Astrophysics Data System (ADS)

    Kang, Ziho

    This dissertation is divided into four parts: 1) Development of effective methods for comparing visual scanning paths (or scanpaths) for a dynamic task of multiple moving targets, 2) application of the methods to compare the scanpaths of experts and novices for a conflict detection task of multiple aircraft on radar screen, 3) a post-hoc analysis of other eye movement characteristics of experts and novices, and 4) finding out whether the scanpaths of experts can be used to teach the novices. In order to compare experts' and novices' scanpaths, two methods are developed. The first proposed method is matrix comparison using the Mantel test. The second proposed method is the maximum transition-based agglomerative hierarchical clustering (MTAHC), in which comparisons of multi-level visual groupings are carried out. The matrix comparison method was useful for a small number of targets during the preliminary experiment, but turned out to be inapplicable to a realistic case when tens of aircraft were presented on screen; however, MTAHC was effective with a large number of aircraft on screen. The experiments with experts and novices on the aircraft conflict detection task showed that their scanpaths are different. The MTAHC result was able to explicitly show how experts visually grouped multiple aircraft based on similar altitudes while novices tended to group them based on convergence. Also, the MTAHC results showed that novices paid much attention to the converging aircraft groups even if they were safely separated by altitude; therefore, less attention was given to the actual conflicting pairs, resulting in low correct conflict detection rates. Since the analysis showed the scanpath differences, experts' scanpaths were shown to novices in order to evaluate the effectiveness of this training approach. The scanpath treatment group showed indications that they changed their visual movements from trajectory-based to altitude-based movements. Between the treatment and the non-treatment group, there were no significant differences in terms of number of correct detections; however, the treatment group made significantly fewer false alarms.
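
    A minimal sketch of a permutation-based Mantel test for comparing two scanpath-derived distance matrices; the random symmetric matrices are placeholders and the implementation is a generic sketch, not the dissertation's code.

```python
# Mantel test sketch: Pearson correlation between the off-diagonal entries of
# two distance matrices, with significance from random row/column permutations.
import numpy as np

def mantel_test(D1, D2, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(D1, k=1)
    r_obs = np.corrcoef(D1[iu], D2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(D1.shape[0])
        r_perm = np.corrcoef(D1[np.ix_(p, p)][iu], D2[iu])[0, 1]
        count += r_perm >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)      # one-sided permutation p-value

rng = np.random.default_rng(1)
A = rng.random((10, 10))
A = (A + A.T) / 2
np.fill_diagonal(A, 0)
B = rng.random((10, 10))
B = (B + B.T) / 2
np.fill_diagonal(B, 0)
print(mantel_test(A, B))
```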

  1. A Novel Hybrid Classification Model of Genetic Algorithms, Modified k-Nearest Neighbor and Developed Backpropagation Neural Network

    PubMed Central

    Salari, Nader; Shohaimi, Shamarina; Najafi, Farid; Nallappan, Meenakshii; Karishnarajah, Isthrinayagy

    2014-01-01

    Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered among the most common and effective methods in classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above-mentioned methods are presented. The purpose is to benefit from the synergies obtained by combining these technologies for the development of classification models. Such a combination creates an opportunity to invest in the strength of each algorithm, and is an approach to make up for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, first, feature-ranking techniques such as Fisher's discriminant ratio and class separability criteria were used to prioritize features. Second, the obtained results that included arrays of the top-ranked features were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved method of backpropagation neural networks, the classification process was advanced based on optimum arrays of the features selected by genetic algorithms. The performance of the proposed model was compared with thirteen well-known classification models based on seven datasets. Furthermore, the statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance compared with all 13 classification methods. Finally, the performance results of the proposed model were benchmarked against the best ones reported as the state-of-the-art classifiers in terms of classification accuracy for the same data sets. The substantial findings of the comprehensive comparative study revealed that performance of the proposed model in terms of classification accuracy is desirable, promising, and competitive with the existing state-of-the-art classification models. PMID:25419659
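
    A minimal sketch of the first stage described above, feature ranking by Fisher's discriminant ratio, paired with a plain kNN classifier for evaluation; the genetic-algorithm search and the modified kNN/backpropagation components are not reproduced, and the synthetic data are illustrative only.

```python
# Rank features by Fisher's discriminant ratio (two-class case) and evaluate a
# top-k feature subset with a plain kNN classifier via cross-validation.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fisher_ratio(X, y):
    c0, c1 = X[y == 0], X[y == 1]
    num = (c0.mean(axis=0) - c1.mean(axis=0)) ** 2
    den = c0.var(axis=0) + c1.var(axis=0) + 1e-12
    return num / den

def top_k_accuracy(X, y, k_features=10, k_neighbors=5):
    order = np.argsort(fisher_ratio(X, y))[::-1][:k_features]   # best-ranked features
    knn = KNeighborsClassifier(n_neighbors=k_neighbors)
    return cross_val_score(knn, X[:, order], y, cv=5).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = (X[:, 0] + X[:, 3] + rng.normal(size=200) > 0).astype(int)  # synthetic labels
print(top_k_accuracy(X, y))
```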

  2. Comparing the performance of FA, DFA and DMA using different synthetic long-range correlated time series

    PubMed Central

    Shao, Ying-Hui; Gu, Gao-Feng; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Sornette, Didier

    2012-01-01

    Notwithstanding the significant efforts to develop estimators of long-range correlations (LRC) and to compare their performance, no clear consensus exists on what is the best method and under which conditions. In addition, synthetic tests suggest that the performance of LRC estimators varies when using different generators of LRC time series. Here, we compare the performances of four estimators [Fluctuation Analysis (FA), Detrended Fluctuation Analysis (DFA), Backward Detrending Moving Average (BDMA), and Centred Detrending Moving Average (CDMA)]. We use three different generators [Fractional Gaussian Noises, and two ways of generating Fractional Brownian Motions]. We find that CDMA has the best performance and DFA is only slightly worse in some situations, while FA performs the worst. In addition, CDMA and DFA are less sensitive to the scaling range than FA. Hence, CDMA and DFA remain “The Methods of Choice” in determining the Hurst index of time series. PMID:23150785
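
    A minimal sketch of detrended fluctuation analysis (order-1 detrending) for estimating the scaling exponent of a time series; the window sizes and white-noise test signal are arbitrary choices, not the generators used in the study.

```python
# Detrended fluctuation analysis (order-1 detrending): integrate the demeaned
# series, compute RMS fluctuations around a local linear trend in windows of
# size s, and estimate the scaling exponent as the slope of log F(s) vs log s.
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128, 256)):
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for s in scales:
        n_win = len(y) // s
        segs = y[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        flucts = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)          # local linear trend
            flucts.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(flucts)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(0)
print(dfa_exponent(rng.normal(size=4096)))        # ≈ 0.5 for white noise
```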

  3. Shot boundary detection and label propagation for spatio-temporal video segmentation

    NASA Astrophysics Data System (ADS)

    Piramanayagam, Sankaranaryanan; Saber, Eli; Cahill, Nathan D.; Messinger, David

    2015-02-01

    This paper proposes a two stage algorithm for streaming video segmentation. In the first stage, shot boundaries are detected within a window of frames by comparing dissimilarity between 2-D segmentations of each frame. In the second stage, the 2-D segments are propagated across the window of frames in both spatial and temporal direction. The window is moved across the video to find all shot transitions and obtain spatio-temporal segments simultaneously. As opposed to techniques that operate on entire video, the proposed approach consumes significantly less memory and enables segmentation of lengthy videos. We tested our segmentation based shot detection method on the TRECVID 2007 video dataset and compared it with block-based technique. Cut detection results on the TRECVID 2007 dataset indicate that our algorithm has comparable results to the best of the block-based methods. The streaming video segmentation routine also achieves promising results on a challenging video segmentation benchmark database.

  4. STAKEHOLDER INVOLVEMENT THROUGHOUT HEALTH TECHNOLOGY ASSESSMENT: AN EXAMPLE FROM PALLIATIVE CARE.

    PubMed

    Brereton, Louise; Wahlster, Philip; Mozygemba, Kati; Lysdahl, Kristin Bakke; Burns, Jake; Polus, Stephanie; Tummers, Marcia; Refolo, Pietro; Sacchini, Dario; Leppert, Wojciech; Chilcott, James; Ingleton, Christine; Gardiner, Clare; Goyder, Elizabeth

    2017-01-01

    Internationally, funders require stakeholder involvement throughout health technology assessment (HTA). We report successes, challenges, and lessons learned from extensive stakeholder involvement throughout a palliative care case study that demonstrates new concepts and methods for HTA. A 5-step "INTEGRATE-HTA Model" developed within the INTEGRATE-HTA project guided the case study. Using convenience or purposive sampling, or by directly or indirectly identifying and approaching individuals and groups, stakeholders participated in qualitative research or consultation meetings. During scoping, 132 stakeholders, aged ≥ 18 years in seven countries (England, Italy, Germany, The Netherlands, Norway, Lithuania, and Poland), highlighted key issues in palliative care that assisted identification of the intervention and comparator. Subsequently, stakeholders in four countries participated in face-to-face, telephone, and/or video (Skype) meetings to inform evidence collection and/or review assessment results. An applicability assessment to identify contextual and implementation barriers and enablers for the case study findings involved twelve professionals in three countries. Finally, thirteen stakeholders participated in a mock decision-making meeting in England. Views about the best methods of stakeholder involvement vary internationally. Stakeholders make valuable contributions in all stages of HTA; assisting decision making about interventions, comparators, research questions; providing evidence and insights into findings, gap analyses and applicability assessments. Key challenges exist regarding inclusivity, time, and resource use. Stakeholder involvement is feasible and worthwhile throughout HTA, sometimes providing unique insights. Various methods can be used to include stakeholders, although challenges exist. Recognition of stakeholder expertise and further guidance about stakeholder consultation methods is needed.

  5. Incorporating networks in a probabilistic graphical model to find drivers for complex human diseases.

    PubMed

    Mezlini, Aziz M; Goldenberg, Anna

    2017-10-01

    Discovering genetic mechanisms driving complex diseases is a hard problem. Existing methods often lack power to identify the set of responsible genes. Protein-protein interaction networks have been shown to boost power when detecting gene-disease associations. We introduce a Bayesian framework, Conflux, to find disease-associated genes from exome sequencing data using networks as a prior. There are two main advantages to using networks within a probabilistic graphical model. First, networks are noisy and incomplete, a substantial impediment to gene discovery. Incorporating networks into the structure of a probabilistic model for gene inference has less impact on the solution than relying on the noisy network structure directly. Second, using a Bayesian framework we can keep track of the uncertainty of each gene being associated with the phenotype rather than returning a fixed list of genes. We first show that using networks clearly improves gene detection compared to individual gene testing. We then show consistently improved performance of Conflux compared to the state-of-the-art diffusion network-based method Hotnet2 and a variety of other network and variant aggregation methods, using randomly generated and literature-reported gene sets. We test Hotnet2 and Conflux on several network configurations to reveal biases and patterns of false positives and false negatives in each case. Our experiments show that our novel Bayesian framework Conflux incorporates many of the advantages of the current state-of-the-art methods, while offering more flexibility and improved power in many gene-disease association scenarios.

  6. Emergence of linguistic laws in human voice

    PubMed Central

    Torre, Iván González; Luque, Bartolo; Lacasa, Lucas; Luque, Jordi; Hernández-Fernández, Antoni

    2017-01-01

    Linguistic laws constitute one of the quantitative cornerstones of modern cognitive sciences and have been routinely investigated in written corpora, or in the equivalent transcription of oral corpora. This means that inferences about statistical patterns of language in acoustics are biased by the arbitrary, language-dependent segmentation of the signal, and virtually precludes the possibility of making comparative studies between human voice and other animal communication systems. Here we bridge this gap by proposing a method that allows such patterns to be measured in acoustic signals of arbitrary origin, without the need to access the underlying language corpus. The method has been applied to sixteen different human languages, successfully recovering some well-known laws of human communication at timescales even below the phoneme and finding yet another link between complexity and criticality in a biological system. These methods further pave the way for new comparative studies in animal communication or the analysis of signals of unknown code. PMID:28272418

  7. Emergence of linguistic laws in human voice

    NASA Astrophysics Data System (ADS)

    Torre, Iván González; Luque, Bartolo; Lacasa, Lucas; Luque, Jordi; Hernández-Fernández, Antoni

    2017-03-01

    Linguistic laws constitute one of the quantitative cornerstones of modern cognitive sciences and have been routinely investigated in written corpora, or in the equivalent transcription of oral corpora. This means that inferences about statistical patterns of language in acoustics are biased by the arbitrary, language-dependent segmentation of the signal, and virtually precludes the possibility of making comparative studies between human voice and other animal communication systems. Here we bridge this gap by proposing a method that allows such patterns to be measured in acoustic signals of arbitrary origin, without the need to access the underlying language corpus. The method has been applied to sixteen different human languages, successfully recovering some well-known laws of human communication at timescales even below the phoneme and finding yet another link between complexity and criticality in a biological system. These methods further pave the way for new comparative studies in animal communication or the analysis of signals of unknown code.
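
    The segmentation idea described above can be illustrated with a toy sketch: cut the signal into "events" wherever the amplitude exceeds a threshold and inspect the rank-ordered event durations for Zipf-like scaling. This is only a minimal illustration under stated assumptions, not the authors' pipeline; the thresholding rule, the synthetic burst signal and the function names are invented for the example (Python with NumPy).

        import numpy as np

        def segment_events(signal, rate, threshold_frac=0.05):
            """Split an acoustic signal into 'events': maximal runs of samples
            whose absolute amplitude exceeds a fraction of the peak amplitude."""
            above = np.abs(signal) > threshold_frac * np.max(np.abs(signal))
            edges = np.flatnonzero(np.diff(above.astype(int)))      # run boundaries
            bounds = np.concatenate(([0], edges + 1, [len(signal)]))
            durations = [(stop - start) / rate                       # seconds
                         for start, stop in zip(bounds[:-1], bounds[1:])
                         if above[start]]
            return np.array(durations)

        # toy usage: synthetic noise bursts instead of real speech
        rng = np.random.default_rng(0)
        rate = 16000
        sig = rng.normal(size=10 * rate) * (rng.random(10 * rate) > 0.7)
        durations = segment_events(sig, rate)
        ranks = np.arange(1, len(durations) + 1)
        zipf = np.sort(durations)[::-1]                              # rank-ordered durations
        slope = np.polyfit(np.log(ranks), np.log(zipf), 1)[0]
        print(f"{len(durations)} events, rank-size log-log slope ~ {slope:.2f}")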

  8. Using Gamification to Improve Productivity and Increase Knowledge Retention During Orientation.

    PubMed

    Brull, Stacey; Finlayson, Susan; Kostelec, Teresa; MacDonald, Ryan; Krenzischeck, Dina

    2017-09-01

    Nursing administrators must provide cost-effective and efficient ways of orientation training. Traditional methods including classroom lecture can be costly with low retention of the information. Gamification engages the user, provides a level of enjoyment, and uses critical thinking skills. The aim of this study is to explore the effectiveness, during orientation, of 3 different teaching methods: didactic, online modules, and gamification. Specifically, is there a difference in nurses' clinical knowledge postorientation using these learning approaches? A quasi-experimental study design with a 115-person convenience sample was used to split nurses into 3 groups for evaluation of clinical knowledge before and after orientation. The gamification orientation group had the highest mean scores postorientation compared with the didactic and online module groups. Findings demonstrate gamification as an effective way to teach when compared with more traditional methods. Staff enjoyed this type of learning and retained more knowledge when using gaming elements.

  9. Identifying the 630 nm auroral arc emission height: A comparison of the triangulation, FAC profile, and electron density methods

    NASA Astrophysics Data System (ADS)

    Megan Gillies, D.; Knudsen, D.; Donovan, E.; Jackel, B.; Gillies, R.; Spanswick, E.

    2017-08-01

    We present a comprehensive survey of 630 nm (red-line) emission discrete auroral arcs using the newly deployed Redline Emission Geospace Observatory. In this study we discuss the need for observations of 630 nm aurora and issues associated with the large altitude range of the red-line aurora. We compare field-aligned currents (FACs) measured by the Swarm constellation of satellites with the location of 10 red-line (630 nm) auroral arcs observed by all-sky imagers (ASIs) and find that a characteristic emission height of 200 km applied to the ASI maps gives optimal agreement between the two observations. We also compare the new FAC method against the traditional triangulation method using pairs of ASIs, and against electron density profiles obtained from the Resolute Bay Incoherent Scatter Radar-Canadian radar, both of which are consistent with a characteristic emission height of 200 km.

  10. The effect of different calculation methods of flywheel parameters on the Wingate Anaerobic Test.

    PubMed

    Coleman, S G; Hale, T

    1998-08-01

    Researchers compared different methods of calculating kinetic parameters of friction-braked cycle ergometers, and the subsequent effects on calculating power outputs in the Wingate Anaerobic Test (WAnT). Three methods of determining flywheel moment of inertia and frictional torque were investigated, requiring "run-down" tests and segmental geometry. Parameters were used to calculate corrected power outputs from 10 males in a 30-s WAnT against a load related to body mass (0.075 kg.kg-1). Wingate Indices of maximum (5 s) power, work, and fatigue index were also compared. Significant differences were found between uncorrected and corrected power outputs and between correction methods (p < .05). The same finding was evident for all Wingate Indices (p < .05). Results suggest that WAnT must be corrected to give true power outputs and that choosing an appropriate correction calculation is important. Determining flywheel moment of inertia and frictional torque using unloaded run-down tests is recommended.
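
    The correction discussed above amounts to adding the power needed to accelerate the flywheel to the frictional power, P(t) = T_f*w(t) + I*w(t)*dw/dt. A minimal Python sketch of that calculation follows; the moment of inertia, frictional torque and velocity profile are made-up illustrative values, not parameters from the study.

        import numpy as np

        def corrected_power(time, omega, inertia, friction_torque):
            """Corrected instantaneous power on a friction-braked ergometer:
            frictional power plus the power needed to accelerate the flywheel.
            time in s, omega in rad/s, inertia in kg*m^2, friction_torque in N*m."""
            domega_dt = np.gradient(omega, time)
            return friction_torque * omega + inertia * omega * domega_dt

        # illustrative numbers only (not taken from the study)
        t = np.linspace(0, 30, 301)                           # 30-s test, dt = 0.1 s
        omega = 60 * (1 - np.exp(-t / 3)) * np.exp(-t / 40)   # rise then fatigue
        p = corrected_power(t, omega, inertia=0.9, friction_torque=25.0)
        peak5s = np.convolve(p, np.ones(50) / 50, "valid").max()   # best 5-s average
        print(f"peak 5-s power ~ {peak5s:.0f} W")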

  11. Similarity Measures for Protein Ensembles

    PubMed Central

    Lindorff-Larsen, Kresten; Ferkinghoff-Borg, Jesper

    2009-01-01

    Analyses of similarities and changes in protein conformation can provide important information regarding protein function and evolution. Many scores, including the commonly used root mean square deviation, have therefore been developed to quantify the similarities of different protein conformations. However, instead of examining individual conformations it is in many cases more relevant to analyse ensembles of conformations that have been obtained either through experiments or from methods such as molecular dynamics simulations. We here present three approaches that can be used to compare conformational ensembles in the same way as the root mean square deviation is used to compare individual pairs of structures. The methods are based on the estimation of the probability distributions underlying the ensembles and subsequent comparison of these distributions. We first validate the methods using a synthetic example from molecular dynamics simulations. We then apply the algorithms to revisit the problem of ensemble averaging during structure determination of proteins, and find that an ensemble refinement method is able to recover the correct distribution of conformations better than standard single-molecule refinement. PMID:19145244
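
    As a toy illustration of the general idea of comparing distributions rather than single structures, the Python sketch below estimates the distribution of one structural coordinate in two ensembles and scores their difference with a Jensen-Shannon divergence. The authors' ensemble scores are more sophisticated; the histogram estimator and the Gaussian toy ensembles are assumptions made for the example.

        import numpy as np

        def js_divergence(samples_a, samples_b, bins=50):
            """Jensen-Shannon divergence between two 1-D ensembles of a structural
            coordinate (e.g. a distance or dihedral), via shared histograms."""
            lo = min(samples_a.min(), samples_b.min())
            hi = max(samples_a.max(), samples_b.max())
            p, _ = np.histogram(samples_a, bins=bins, range=(lo, hi), density=True)
            q, _ = np.histogram(samples_b, bins=bins, range=(lo, hi), density=True)
            p, q = p / p.sum(), q / q.sum()
            m = 0.5 * (p + q)

            def kl(a, b):
                mask = a > 0
                return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

            return 0.5 * kl(p, m) + 0.5 * kl(q, m)

        rng = np.random.default_rng(1)
        ens1 = rng.normal(10.0, 0.5, 5000)   # e.g. a residue-residue distance, in angstrom
        ens2 = rng.normal(10.3, 0.8, 5000)
        print(f"Jensen-Shannon divergence ~ {js_divergence(ens1, ens2):.3f} bits")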

  12. Multidimensional upwind hydrodynamics on unstructured meshes using graphics processing units - I. Two-dimensional uniform meshes

    NASA Astrophysics Data System (ADS)

    Paardekooper, S.-J.

    2017-08-01

    We present a new method for numerical hydrodynamics which uses a multidimensional generalization of the Roe solver and operates on an unstructured triangular mesh. The main advantage over traditional methods based on Riemann solvers, which commonly use one-dimensional flux estimates as building blocks for a multidimensional integration, is its inherently multidimensional nature, and as a consequence its ability to recognize multidimensional stationary states that are not hydrostatic. A second novelty is the focus on graphics processing units (GPUs). By tailoring the algorithms specifically to GPUs, we are able to get speedups of 100-250 compared to a desktop machine. We compare the multidimensional upwind scheme to a traditional, dimensionally split implementation of the Roe solver on several test problems, and we find that the new method significantly outperforms the Roe solver in almost all cases. This comes with increased computational costs per time-step, which makes the new method approximately a factor of 2 slower than a dimensionally split scheme acting on a structured grid.

  13. Intubation Methods by Novice Intubators in a Manikin Model

    PubMed Central

    O'Carroll, Darragh C; Aratani, Ashley K; Lee, Dane C; Lau, Christopher A; Morton, Paul N; Yamamoto, Loren G; Berg, Benjamin W

    2013-01-01

    Tracheal intubation is an important yet difficult skill to learn, with many possible methods and techniques. Direct laryngoscopy is the standard method of tracheal intubation, but several instruments have been shown to be less difficult and have better performance characteristics than the traditional direct method. We compared 4 different intubation methods performed by novice intubators on manikins: conventional direct laryngoscopy, video laryngoscopy, Airtraq® laryngoscopy, and fiberoptic laryngoscopy. In addition, we attempted to find a correlation between playing videogames and intubation times in novice intubators. Video laryngoscopy had the best results for both our normal and difficult airway (cervical spine immobilization) manikin scenarios. When video was compared to direct in the normal airway scenario, it had a significantly higher success rate (100% vs 83%, P=.02) and shorter intubation times (29.1±27.4 sec vs 45.9±39.5 sec, P=.03). In the difficult airway scenario video laryngoscopy maintained a significantly higher success rate (91% vs 71%, P=.04) and likelihood of success (3.2±1.0 95%CI [2.9–3.5] vs 2.4±0.9 95%CI [2.1–2.7]) when compared to direct laryngoscopy. Participants also reported significantly higher rates of self-confidence (3.5±0.6 95%CI [3.3–3.7]) and ease of use (1.5±0.7 95%CI [1.3–1.8]) with video laryngoscopy compared to all other methods. We found no correlation between videogame playing and intubation times. PMID:24167768

  14. A Method of DTM Construction Based on Quadrangular Irregular Networks and Related Error Analysis

    PubMed Central

    Kang, Mengjun

    2015-01-01

    A new method of DTM construction based on quadrangular irregular networks (QINs) that considers all the original data points and has a topological matrix is presented. A numerical test and a real-world example are used to comparatively analyse the accuracy of QINs against classical interpolation methods and other DTM representation methods, including SPLINE, KRIGING and triangulated irregular networks (TINs). The numerical test finds that the QIN method is the second-most accurate of the four methods. In the real-world example, DTMs are constructed using QINs and the three classical interpolation methods. The results indicate that the QIN method is the most accurate method tested. The difference in accuracy rank seems to be caused by the locations of the data points sampled. Although the QIN method has drawbacks, it is an alternative method for DTM construction. PMID:25996691

  15. The Application of Continuous Wavelet Transform Based Foreground Subtraction Method in 21 cm Sky Surveys

    NASA Astrophysics Data System (ADS)

    Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen

    2013-08-01

    We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. This method is based on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, so their characteristic scales are significantly different and they can easily be distinguished in wavelet coefficient space for foreground subtraction. Compared with the traditional spectral-fitting based method, our method is more tolerant of complex foregrounds. Furthermore, we find that when the instrument has uncorrected response error, our method also works significantly better than the spectral-fitting based method. Our method obtains results similar to those of the Wp smoothing method, which is also non-parametric, but consumes much less computing time.
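
    A crude stand-in for this scale-separation idea can be sketched with a discrete wavelet transform: drop the coarse (smooth, foreground-dominated) scales and keep the fine scales where saw-tooth-like structure lives. The Python sketch assumes the PyWavelets package and a toy power-law foreground; it is not the authors' continuous-wavelet implementation, and the wavelet choice and kept levels are arbitrary.

        import numpy as np
        import pywt  # PyWavelets, assumed available

        def remove_smooth_foreground(spectrum, wavelet="db8", keep_levels=3):
            """Zero the approximation and coarse detail coefficients (the smooth,
            foreground-like part) and keep only the finest 'keep_levels' scales."""
            coeffs = pywt.wavedec(spectrum, wavelet)
            for i in range(max(len(coeffs) - keep_levels, 0)):
                coeffs[i] = np.zeros_like(coeffs[i])
            return pywt.waverec(coeffs, wavelet)[: len(spectrum)]

        freq = np.linspace(100, 200, 1024)                    # toy frequency axis (MHz)
        foreground = 1e3 * (freq / 150.0) ** -2.6             # smooth power-law foreground
        signal = 5e-3 * np.sin(2 * np.pi * freq / 1.5)        # toy small-scale structure
        recovered = remove_smooth_foreground(foreground + signal)
        print(f"rms residual ~ {np.std(recovered - signal):.3e} (input units)")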

  16. Blood culture gram stain, acridine orange stain and direct sensitivity-based antimicrobial therapy of bloodstream infection in patients with trauma.

    PubMed

    Behera, B; Mathur, P; Gupta, B

    2010-01-01

    The purpose of this study was to ascertain if the simple practice of Gram stain, acridine orange stain and direct sensitivity determination of positive blood culture bottles could be used to guide early and appropriate treatment in trauma patients with clinical suspicion of sepsis. The study also aimed to evaluate the error in interpreting antimicrobial sensitivity by the direct method compared to the standard method, and to find out whether specific antibiotic-organism combinations had more discrepancies. Findings from consecutive episodes of bloodstream infection at an Apex Trauma centre over a 12-month period are summarized. A total of 509 consecutive positive blood cultures were subjected to Gram staining. AO staining was done in BacT/ALERT-positive, Gram-stain-negative blood cultures. Direct sensitivity was performed from 369 blood culture broths showing a single type of growth on Gram and acridine orange staining. Results of direct sensitivity were compared to conventional sensitivity for errors. No 'very major' discrepancy was found in this study. Minor error rates of about 5.2% and 1.8% were noted for gram-positive and gram-negative bacteria, respectively, when comparing the two methods. Most of the discrepancies in gram-negative bacteria were noted in beta-lactam/beta-lactamase inhibitor combinations. Direct sensitivity testing was not reliable for reporting of methicillin and vancomycin resistance in Staphylococci. The Gram stain result together with direct sensitivity testing is required for optimizing initial antimicrobial therapy in trauma patients with clinical suspicion of sepsis. Gram staining and AO staining proved particularly helpful in the early detection of candidaemia.

  17. Interventions of the nursing diagnosis "Acute Pain" – Evaluation of patients' experiences after total hip arthroplasty compared with the nursing record by using Q-DIO-Pain: a mixed methods study

    PubMed

    Zanon, David C; Gralher, Dieter; Müller-Staub, Maria

    2017-01-01

    Background: Pain affects patients' rehabilitation after hip replacement surgery. Aim: The study aim was to compare patients' responses about the pain-relieving nursing interventions they received after hip replacement surgery with the interventions documented in their nursing records. Method: A mixed methods design was applied. In order to evaluate quantitative data, the instrument "Quality of Diagnoses, Interventions and Outcomes" (Q-DIO) was further developed to measure pain interventions in nursing records (Q-DIO-Pain). Patients (n = 37) answered a survey on the third postoperative day. The patients' survey findings were then compared with the Q-DIO-Pain results and cross-validated by qualitative interviews. Results: The most reported pain level was "no pain" (NRS 0-10 points). However, 17-50 % of patients reported pain levels of three or higher, and 11-22 % of five or higher, in situations of motion/ambulation. A significant match between patients' findings and Q-DIO-Pain results was found for the intervention "helping to adapt medications" (n = 32, ICC = 0.111, p = 0.042, CI 95 % 2-sided). Otherwise no significant matches were found. Interviews with patients and nurses confirmed that far more pain-relieving interventions for "Acute Pain" were carried out than were documented. Conclusions: Based on the results, pain assessments and effective pain-relieving interventions, especially before or after motion/ambulation, should be improved and documented. It is recommended to implement a nursing standard for pain control.

  18. Biodegradable stent or balloon dilatation for benign oesophageal stricture: Pilot randomised controlled trial

    PubMed Central

    Dhar, Anjan; Close, Helen; Viswanath, Yirupaiahgari K; Rees, Colin J; Hancock, Helen C; Dwarakanath, A Deepak; Maier, Rebecca H; Wilson, Douglas; Mason, James M

    2014-01-01

    AIM: To undertake a randomised pilot study comparing biodegradable stents and endoscopic dilatation in patients with strictures. METHODS: This British multi-site study recruited seventeen symptomatic adult patients with refractory strictures. Patients were randomised using a multicentre, blinded assessor design, comparing a biodegradable stent (BS) with endoscopic dilatation (ED). The primary endpoint was the average dysphagia score during the first 6 mo. Secondary endpoints included repeat endoscopic procedures, quality of life, and adverse events. Secondary analysis included follow-up to 12 mo. Sensitivity analyses explored alternative estimation methods for dysphagia and multiple imputation of missing values. Nonparametric tests were used. RESULTS: Although both groups improved, the average dysphagia scores for patients receiving stents were higher after 6 mo: BS-ED 1.17 (95%CI: 0.63-1.78) P = 0.029. The finding was robust under different estimation methods. Use of additional endoscopic procedures and quality of life (QALY) estimates were similar for BS and ED patients at 6 and 12 mo. Concomitant use of gastrointestinal prescribed medication was greater in the stent group (BS 5.1, ED 2.0 prescriptions; P < 0.001), as were related adverse events (BS 1.4, ED 0.0 events; P = 0.024). Groups were comparable at baseline and findings were statistically significant but numbers were small due to under-recruitment. The oesophageal tract has somatic sensitivity and the process of the stent dissolving, possibly unevenly, might promote discomfort or reflux. CONCLUSION: Stenting was associated with greater dysphagia, co-medication and adverse events. Rigorously conducted and adequately powered trials are needed before widespread adoption of this technology. PMID:25561787

  19. Preventive effect of ginsenoid on chronic bacterial prostatitis.

    PubMed

    Kim, Sang Hoon; Ha, U-Syn; Sohn, Dong Wan; Lee, Seung-Ju; Kim, Hyun Woo; Han, Chang Hee; Cho, Yong-Hyun

    2012-10-01

    Empirical antibiotic therapy is the preferred primary treatment modality for chronic bacterial prostatitis (CBP). However, this method of treatment has a low success rate and long-term therapy may result in complications and the appearance of resistant strains. Therefore a new alternative method for the prevention of CBP is necessary. There are several reports that ginsenoid has a preventive effect on urinary tract infection (UTI). To evaluate the preventive effect of ginsenoid on CBP compared to conventional antibiotics, we carried out an experiment in a rat model of the disease. Four groups of adult male Wistar rats were treated with the following medications: (1) control (no medication), (2) ciprofloxacin, (3) ginsenoid, and (4) ciprofloxacin/ginsenoid. All medications were given for 4 weeks, and then we created a CBP model in the animals by injecting an Escherichia coli Z17 (O2:K1;H(-)) suspension into the prostatic urethra. After 4 weeks, results of microbiological cultures of prostate and urine samples, as well as histological findings of the prostate in each group were analyzed. The microbiological cultures of the prostate samples demonstrated reduced bacterial growth in all experimental groups compared with the control group. Histopathological examination showed a significantly decreased rate of infiltration of inflammatory cells into prostatic tissue and decreased interstitial fibrosis in the ginsenoid group compared with the control group. Inhibition of prostate infection was greater in the group receiving both ginsenoid and antibiotic than in the single-medication groups. Although the findings of this study suggest a preventive effect of ginsenoid, preventive methods for CBP are still controversial.

  20. Bayesian data augmentation methods for the synthesis of qualitative and quantitative research findings

    PubMed Central

    Crandell, Jamie L.; Voils, Corrine I.; Chang, YunKyung; Sandelowski, Margarete

    2010-01-01

    The possible utility of Bayesian methods for the synthesis of qualitative and quantitative research has been repeatedly suggested but insufficiently investigated. In this project, we developed and used a Bayesian method for synthesis, with the goal of identifying factors that influence adherence to HIV medication regimens. We investigated the effect of 10 factors on adherence. Recognizing that not all factors were examined in all studies, we considered standard methods for dealing with missing data and chose a Bayesian data augmentation method. We were able to summarize, rank, and compare the effects of each of the 10 factors on medication adherence. This is a promising methodological development in the synthesis of qualitative and quantitative research. PMID:21572970

  1. Binarization of Gray-Scaled Digital Images Via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A.; Klinko, Steve; Voska, Ned (Technical Monitor)

    2002-01-01

    A new fast-computational technique based on a fuzzy entropy measure has been developed to find an optimal binary image threshold. In this method, the image pixel membership functions are dependent on the threshold value and reflect the distribution of pixel values in two classes; thus, this technique minimizes the classification error. This new method is compared with two of the best-known threshold selection techniques, Otsu and Huang-Wang. The performance of the proposed method surpasses that of the Huang-Wang and Otsu methods when the image consists of textured background and poor printing quality. The three methods perform well but yield different binarization approaches if the background and foreground of the image have well-separated gray-level ranges.

  2. Binarization of Gray-Scaled Digital Images Via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A.; Klinko, Steve; Voska, Ned (Technical Monitor)

    2002-01-01

    A new fast-computational technique based on a fuzzy entropy measure has been developed to find an optimal binary image threshold. In this method, the image pixel membership functions are dependent on the threshold value and reflect the distribution of pixel values in two classes; thus, this technique minimizes the classification error. This new method is compared with two of the best-known threshold selection techniques, Otsu and Huang-Wang. The performance of the proposed method surpasses that of the Huang-Wang and Otsu methods when the image consists of textured background and poor printing quality. The three methods perform well but yield different binarization approaches if the background and foreground of the image have well-separated gray-level ranges.
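
    A simplified Python sketch of the fuzzy-entropy thresholding idea follows, in the spirit of the Huang-Wang criterion rather than the exact algorithm described above: for each candidate threshold, each gray level gets a membership based on its distance to its class mean, and the threshold minimizing the resulting fuzzy entropy is selected. The membership constant and the synthetic bimodal image are assumptions for the example.

        import numpy as np

        def fuzzy_entropy_threshold(image):
            """Gray-level threshold that minimizes Shannon fuzzy entropy, with
            memberships based on distance to the class mean (Huang-Wang style)."""
            levels = np.arange(int(image.max()) + 1)
            hist = np.bincount(image.astype(int).ravel(), minlength=levels.size)
            c = float(image.max() - image.min()) + 1e-9          # normalizing constant
            best_t, best_h = None, np.inf
            for t in levels[1:-1]:
                m0 = np.average(levels[: t + 1], weights=np.maximum(hist[: t + 1], 1e-12))
                m1 = np.average(levels[t + 1 :], weights=np.maximum(hist[t + 1 :], 1e-12))
                mu = np.where(levels <= t,
                              1.0 / (1.0 + np.abs(levels - m0) / c),
                              1.0 / (1.0 + np.abs(levels - m1) / c))
                s = -(mu * np.log(mu) + (1 - mu) * np.log(np.clip(1 - mu, 1e-12, None)))
                h = np.sum(hist * s) / hist.sum()                # mean fuzzy entropy
                if h < best_h:
                    best_t, best_h = int(t), h
            return best_t

        rng = np.random.default_rng(2)
        img = np.concatenate([rng.normal(60, 15, 5000), rng.normal(180, 20, 5000)])
        img = np.clip(img, 0, 255).reshape(100, 100)
        print("fuzzy-entropy threshold:", fuzzy_entropy_threshold(img))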

  3. Mean-field approximation for spacing distribution functions in classical systems.

    PubMed

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T L

    2012-01-01

    We propose a mean-field method to calculate approximately the spacing distribution functions p^(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p^(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed. © 2012 American Physical Society

  4. Detecting Corresponding Vertex Pairs between Planar Tessellation Datasets with Agglomerative Hierarchical Cell-Set Matching.

    PubMed

    Huh, Yong; Yu, Kiyun; Park, Woojin

    2016-01-01

    This paper proposes a method to detect corresponding vertex pairs between planar tessellation datasets. Applying agglomerative hierarchical co-clustering, the method finds geometrically corresponding cell-set pairs, from which corresponding vertex pairs are detected. The map transformation is then performed with the vertex pairs. Since these pairs are detected independently for each corresponding cell-set pair, the method provides improved matching performance regardless of locally uneven positional discrepancies between the datasets. The proposed method was applied to complicated synthetic cell datasets assumed to represent a cadastral map and a topographical map, and showed an improved result with an F-measure of 0.84, compared to an F-measure of 0.48 for a previous matching method.

  5. Shear wave speed estimation by adaptive random sample consensus method.

    PubMed

    Lin, Haoming; Wang, Tianfu; Chen, Siping

    2014-01-01

    This paper describes a new method for shear wave velocity estimation that is capable of excluding outliers automatically without a preset threshold. The proposed method is an adaptive random sample consensus (ARANDSAC), and the criterion used here is finding a certain percentage of inliers according to the closest-distance criterion. To evaluate the method, the simulation and phantom experiment results were compared with linear regression with all points (LRWAP) and the Radon sum transform (RS) method. The assessment reveals that the relative biases of the mean estimation are 20.00%, 4.67% and 5.33% for LRWAP, ARANDSAC and RS, respectively, for the simulation, and 23.53%, 4.08% and 1.08% for the phantom experiment. The results suggest that the proposed ARANDSAC algorithm is accurate in shear wave speed estimation.
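
    The Python sketch below illustrates the underlying idea with a plain RANSAC-style fit of arrival time versus lateral position (shear wave speed is the inverse slope). It uses a fixed inlier fraction rather than the adaptive criterion described above, and all numbers are synthetic.

        import numpy as np

        def ransac_speed(positions, times, n_iter=500, inlier_frac=0.6, seed=0):
            """Fit time vs. position with repeated 2-point models, keep the model
            whose closest `inlier_frac` of points has the smallest total residual,
            refit a line on those inliers and return speed = 1 / slope."""
            rng = np.random.default_rng(seed)
            n_keep = int(inlier_frac * len(positions))
            best_idx, best_cost = None, np.inf
            for _ in range(n_iter):
                i, j = rng.choice(len(positions), size=2, replace=False)
                if positions[i] == positions[j]:
                    continue
                slope = (times[j] - times[i]) / (positions[j] - positions[i])
                resid = np.abs(times - (times[i] + slope * (positions - positions[i])))
                idx = np.argsort(resid)[:n_keep]
                cost = resid[idx].sum()
                if cost < best_cost:
                    best_idx, best_cost = idx, cost
            slope, _ = np.polyfit(positions[best_idx], times[best_idx], 1)
            return 1.0 / slope

        x = np.linspace(0, 0.02, 30)                                     # 20 mm lateral span (m)
        t = x / 2.5 + np.random.default_rng(3).normal(0, 2e-5, x.size)   # true speed 2.5 m/s
        t[[5, 17]] += 5e-4                                               # two gross outliers
        print(f"estimated shear wave speed ~ {ransac_speed(x, t):.2f} m/s")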

  6. Selection of suitable NDT methods for building inspection

    NASA Astrophysics Data System (ADS)

    Pauzi Ismail, Mohamad

    2017-11-01

    Construction of modern structures requires good quality concrete with adequate strength and durability. Several accidents have occurred in civil construction and been reported in the media. Such accidents were due to poor workmanship and a lack of systematic monitoring during construction. In addition, water leakage and cracking in residential houses are commonly reported. Based on these facts, monitoring the quality of concrete in structures is becoming an increasingly important subject. This paper describes major non-destructive testing (NDT) methods for evaluating the structural integrity of concrete buildings. Some interesting findings from actual NDT inspections on site are presented. The NDT methods used are explained, compared and discussed, and suitable methods are suggested as the minimum set of NDT methods needed to cover the parameters required in the inspection.

  7. Robust phase retrieval of complex-valued object in phase modulation by hybrid Wirtinger flow method

    NASA Astrophysics Data System (ADS)

    Wei, Zhun; Chen, Wen; Yin, Tiantian; Chen, Xudong

    2017-09-01

    This paper presents a robust iterative algorithm, known as hybrid Wirtinger flow (HWF), for phase retrieval (PR) of complex objects from noisy diffraction intensities. Numerical simulations indicate that the HWF method consistently outperforms conventional PR methods in terms of both accuracy and convergence rate in multiple phase modulations. The proposed algorithm is also more robust to low oversampling ratios, loose constraints, and noisy environments. Furthermore, compared with traditional Wirtinger flow, sample complexity is largely reduced. It is expected that the proposed HWF method will find applications in the rapidly growing coherent diffractive imaging field for high-quality image reconstruction with multiple modulations, as well as other disciplines where PR is needed.
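
    For orientation, the Python sketch below implements plain Wirtinger flow (spectral initialization followed by gradient descent on the intensity residual) under a Gaussian measurement model. It is the standard baseline that hybrid variants build on, not the authors' HWF algorithm, and the problem sizes and step size are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(4)
        n, m = 64, 8 * 64                        # signal length, number of measurements
        x_true = rng.normal(size=n) + 1j * rng.normal(size=n)
        A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
        y = np.abs(A @ x_true) ** 2              # intensity-only measurements

        # spectral initialization: leading eigenvector of (1/m) A^H diag(y) A
        Y = (A.conj().T * y) @ A / m
        _, v = np.linalg.eigh(Y)
        z = v[:, -1] * np.sqrt(y.mean())

        # Wirtinger flow: gradient descent on the squared intensity residual
        mu = 0.2
        for _ in range(500):
            Az = A @ z
            grad = A.conj().T @ ((np.abs(Az) ** 2 - y) * Az) / m
            z = z - mu / np.sum(np.abs(z) ** 2) * grad

        # the solution is defined only up to a global phase
        c = np.vdot(z, x_true)
        c /= np.abs(c)
        err = np.linalg.norm(x_true - c * z) / np.linalg.norm(x_true)
        print(f"relative reconstruction error ~ {err:.2e}")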

  8. Analysis of multi lobe journal bearings with surface roughness using finite difference method

    NASA Astrophysics Data System (ADS)

    PhaniRaja Kumar, K.; Bhaskar, SUdaya; Manzoor Hussain, M.

    2018-04-01

    Multi lobe journal bearings are used for high operating speeds and high loads in machines. In this paper, symmetrical multi lobe journal bearings are analyzed to find out the effect of surface roughness during nonlinear loading. Using the fourth-order Runge-Kutta method, time transient analysis was performed to calculate and plot the journal centre trajectories. The flow factor method is used to evaluate the roughness, and the finite difference method (FDM) is used to predict the pressure distribution over the bearing surface. The transient analysis is done on the multi lobe journal bearings for three different surface roughness orientations. Longitudinal surface roughness is more effective when compared with isotropic and transverse surface roughness.

  9. Experimental and Monte Carlo evaluation of Eclipse treatment planning system for effects on dose distribution of the hip prostheses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Çatlı, Serap, E-mail: serapcatli@hotmail.com; Tanır, Güneş

    2013-10-01

    The present study aimed to investigate the effects of titanium, titanium alloy, and stainless steel hip prostheses on dose distribution based on the Monte Carlo simulation method, as well as the accuracy of the Eclipse treatment planning system (TPS) at 6 and 18 MV photon energies. In the present study the pencil beam convolution (PBC) method implemented in the Eclipse TPS was compared to the Monte Carlo method and ionization chamber measurements. The present findings show that if high-Z material is used in a prosthesis, large dose changes can occur due to scattering. The variance in dose observed in the present study was dependent on material type, density, and atomic number, as well as photon energy; as photon energy increased, backscattering decreased. The dose perturbation effect of hip prostheses was significant and could not be predicted accurately by the PBC method. The findings show that for accurate dose calculation the Monte Carlo-based TPS should be used in patients with hip prostheses.

  10. Coloc-stats: a unified web interface to perform colocalization analysis of genomic features.

    PubMed

    Simovski, Boris; Kanduri, Chakravarthi; Gundersen, Sveinung; Titov, Dmytro; Domanska, Diana; Bock, Christoph; Bossini-Castillo, Lara; Chikina, Maria; Favorov, Alexander; Layer, Ryan M; Mironov, Andrey A; Quinlan, Aaron R; Sheffield, Nathan C; Trynka, Gosia; Sandve, Geir K

    2018-06-05

    Functional genomics assays produce sets of genomic regions as one of their main outputs. To biologically interpret such region-sets, researchers often use colocalization analysis, where the statistical significance of colocalization (overlap, spatial proximity) between two or more region-sets is tested. Existing colocalization analysis tools vary in the statistical methodology and analysis approaches, thus potentially providing different conclusions for the same research question. As the findings of colocalization analysis are often the basis for follow-up experiments, it is helpful to use several tools in parallel and to compare the results. We developed the Coloc-stats web service to facilitate such analyses. Coloc-stats provides a unified interface to perform colocalization analysis across various analytical methods and method-specific options (e.g. colocalization measures, resolution, null models). Coloc-stats helps the user to find a method that supports their experimental requirements and allows for a straightforward comparison across methods. Coloc-stats is implemented as a web server with a graphical user interface that assists users with configuring their colocalization analyses. Coloc-stats is freely available at https://hyperbrowser.uio.no/coloc-stats/.
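
    One of the simplest colocalization tests such tools offer is a permutation test: relocate one region set at random while preserving region lengths and ask how often the shuffled overlap reaches the observed overlap. The Python sketch below illustrates that idea on toy intervals; the single-chromosome null model and the function names are assumptions for illustration, not Coloc-stats' actual methods.

        import numpy as np

        def overlap_bp(a, b):
            """Total base-pair overlap between two lists of (start, end) intervals."""
            return sum(max(0, min(e1, e2) - max(s1, s2))
                       for s1, e1 in a for s2, e2 in b)

        def permutation_pvalue(a, b, genome_len, n_perm=1000, seed=0):
            """How often does randomly relocating set `a` (keeping region lengths)
            give at least the observed overlap with set `b`?"""
            rng = np.random.default_rng(seed)
            observed = overlap_bp(a, b)
            lengths = [e - s for s, e in a]
            hits = 0
            for _ in range(n_perm):
                starts = rng.integers(0, genome_len - max(lengths), size=len(lengths))
                shuffled = [(int(s), int(s) + L) for s, L in zip(starts, lengths)]
                hits += overlap_bp(shuffled, b) >= observed
            return (hits + 1) / (n_perm + 1)

        peaks = [(100, 200), (1000, 1100), (5000, 5200)]
        annotations = [(150, 260), (5100, 5300), (8000, 8500)]
        print("empirical p-value ~", permutation_pvalue(peaks, annotations, genome_len=10000))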

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stark, Christopher C.; Roberge, Aki; Mandell, Avi

    ExoEarth yield is a critical science metric for future exoplanet imaging missions. Here we estimate exoEarth candidate yield using single visit completeness for a variety of mission design and astrophysical parameters. We review the methods used in previous yield calculations and show that the method choice can significantly impact yield estimates as well as how the yield responds to mission parameters. We introduce a method, called Altruistic Yield Optimization, that optimizes the target list and exposure times to maximize mission yield, adapts maximally to changes in mission parameters, and increases exoEarth candidate yield by up to 100% compared to previous methods. We use Altruistic Yield Optimization to estimate exoEarth candidate yield for a large suite of mission and astrophysical parameters using single visit completeness. We find that exoEarth candidate yield is most sensitive to telescope diameter, followed by coronagraph inner working angle, followed by coronagraph contrast, and finally coronagraph contrast noise floor. We find a surprisingly weak dependence of exoEarth candidate yield on exozodi level. Additionally, we provide a quantitative approach to defining a yield goal for future exoEarth-imaging missions.

  12. Coupled dimensionality reduction and classification for supervised and semi-supervised multilabel learning

    PubMed Central

    Gönen, Mehmet

    2014-01-01

    Coupled training of dimensionality reduction and classification is proposed previously to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and semi-supervised learning tasks. PMID:24532862

  13. Coupled dimensionality reduction and classification for supervised and semi-supervised multilabel learning.

    PubMed

    Gönen, Mehmet

    2014-03-01

    Coupled training of dimensionality reduction and classification is proposed previously to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and semi-supervised learning tasks.

  14. A comprehensive framework for data quality assessment in CER.

    PubMed

    Holve, Erin; Kahn, Michael; Nahm, Meredith; Ryan, Patrick; Weiskopf, Nicole

    2013-01-01

    The panel addresses the urgent need to ensure that comparative effectiveness research (CER) findings derived from diverse and distributed data sources are based on credible, high-quality data, and that the methods used to assess and report data quality are consistent, comprehensive, and available to data consumers. The panel consists of representatives from four teams leveraging electronic clinical data for CER, patient centered outcomes research (PCOR), and quality improvement (QI), and seeks to change the current paradigm where data quality assessment (DQA) is performed "behind the scenes" using one-off, project-specific methods. The panelists will present their process of harmonizing existing models for describing and measuring clinical data quality and will describe a comprehensive integrated framework for assessing and reporting DQA findings. The collaborative project is supported by the Electronic Data Methods (EDM) Forum, a three-year grant from the Agency for Healthcare Research and Quality (AHRQ) to facilitate learning and foster collaboration across a set of CER, PCOR, and QI projects designed to build infrastructure and methods for collecting and analyzing prospective data from electronic clinical data.

  15. A NEW METHOD FOR FINDING POINT SOURCES IN HIGH-ENERGY NEUTRINO DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Ke; Miller, M. Coleman

    The IceCube collaboration has reported the first detection of high-energy astrophysical neutrinos, including ∼50 high-energy starting events, but no individual sources have been identified. It is therefore important to develop the most sensitive and efficient possible algorithms to identify the point sources of these neutrinos. The most popular current method works by exploring a dense grid of possible directions to individual sources, and identifying the single direction with the maximum probability of having produced multiple detected neutrinos. This method has numerous strengths, but it is computationally intensive and, because it focuses on the single best location for a point source, additional point sources are not included in the evidence. We propose a new maximum likelihood method that uses the angular separations between all pairs of neutrinos in the data. Unlike existing autocorrelation methods for this type of analysis, which also use angular separations between neutrino pairs, our method incorporates information about the point-spread function and can identify individual point sources. We find that if the angular resolution is a few degrees or better, then this approach reduces both false positive and false negative errors compared to the current method, and is also more computationally efficient up to, potentially, hundreds of thousands of detected neutrinos.
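
    The basic quantity such a pairwise likelihood is built on, the angular separation between every pair of detected events, is easy to compute from unit vectors on the sphere. The Python sketch below does this for synthetic equatorial coordinates and counts close pairs; the injected "source" and the 2-degree cut are illustrative assumptions, not the authors' analysis.

        import numpy as np

        def pairwise_separations(ra_deg, dec_deg):
            """Angular separations (radians) between all unique pairs of sky positions."""
            ra, dec = np.radians(ra_deg), np.radians(dec_deg)
            xyz = np.stack([np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)], axis=1)
            cosang = np.clip(xyz @ xyz.T, -1.0, 1.0)
            iu = np.triu_indices(len(ra_deg), k=1)          # unique pairs only
            return np.arccos(cosang[iu])

        rng = np.random.default_rng(5)
        ra = rng.uniform(0, 360, 200)                       # isotropic toy "events"
        dec = np.degrees(np.arcsin(rng.uniform(-1, 1, 200)))
        ra[:20] = 83.6 + rng.normal(0, 1, 20)               # inject a fake clustered source
        dec[:20] = 22.0 + rng.normal(0, 1, 20)
        sep = np.degrees(pairwise_separations(ra, dec))
        print(f"{np.sum(sep < 2.0)} pairs separated by less than 2 degrees")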

  16. High-tech or field techs: Radio-telemetry is a cost-effective method for reducing bias in songbird nest searching

    USGS Publications Warehouse

    Peterson, Sean M.; Streby, Henry M.; Lehman, Justin A.; Kramer, Gunnar R.; Fish, Alexander C.; Andersen, David E.

    2015-01-01

    We compared the efficacy of standard nest-searching methods with finding nests via radio-tagged birds to assess how search technique influenced our determination of nest-site characteristics and nest success for Golden-winged Warblers (Vermivora chrysoptera). We also evaluated the cost-effectiveness of using radio-tagged birds to find nests. Using standard nest-searching techniques for 3 populations, we found 111 nests in locations with habitat characteristics similar to those described in previous studies: edges between forest and relatively open areas of early successional vegetation or shrubby wetlands, with 43% within 5 m of forest edge. The 83 nests found using telemetry were about half as likely (23%) to be within 5 m of forest edge. We spent little time searching >25 m into forest because published reports state that Golden-winged Warblers do not nest there. However, 14 nests found using telemetry (18%) were >25 m into forest. We modeled nest success using nest-searching method, nest age, and distance to forest edge as explanatory variables. Nest-searching method explained nest success better than nest age alone; we estimated that nests found using telemetry were 10% more likely to fledge young than nests found using standard nest-searching methods. Although radio-telemetry was more expensive than standard nest searching, the cost-effectiveness of both methods differed depending on searcher experience, amount of equipment owned, and bird population density. Our results demonstrate that telemetry can be an effective method for reducing bias in Golden-winged Warbler nest samples, can be cost competitive with standard nest-searching methods in some situations, and is likely to be a useful approach for finding nests of other forest-nesting songbirds.

  17. A comparative analysis of the statistical properties of large mobile phone calling networks.

    PubMed

    Li, Ming-Xia; Jiang, Zhi-Qiang; Xie, Wen-Jie; Miccichè, Salvatore; Tumminello, Michele; Zhou, Wei-Xing; Mantegna, Rosario N

    2014-05-30

    Mobile phone calling is one of the most widely used communication methods in modern society. The records of calls among mobile phone users provide us a valuable proxy for the understanding of human communication patterns embedded in social networks. Mobile phone users call each other forming a directed calling network. If only reciprocal calls are considered, we obtain an undirected mutual calling network. The preferential communication behavior between two connected users can be statistically tested and it results in two Bonferroni networks with statistically validated edges. We perform a comparative analysis of the statistical properties of these four networks, which are constructed from the calling records of more than nine million individuals in Shanghai over a period of 110 days. We find that these networks share many common structural properties and also exhibit idiosyncratic features when compared with previously studied large mobile calling networks. The empirical findings provide us an intriguing picture of a representative large social network that might shed new light on the modelling of large social networks.

  18. How Medicaid Enrollees Fare Compared with Privately Insured and Uninsured Adults: Findings from the Commonwealth Fund Biennial Health Insurance Survey, 2016.

    PubMed

    Gunja, Munira Z; Collins, Sara R; Blumenthal, David; Doty, Michelle M; Beutel, Sophie

    2017-04-01

    ISSUE: The number of Americans insured by Medicaid has climbed to more than 70 million, with an estimated 12 million gaining coverage under the Affordable Care Act’s Medicaid expansion. Still, some policymakers have questioned whether Medicaid coverage actually improves access to care, quality of care, or financial protection. GOALS: To compare the experiences of working-age adults who were either: covered all year by private employer or individual insurance; covered by Medicaid for the full year; or uninsured for some time during the year. METHOD: Analysis of the Commonwealth Fund Biennial Health Insurance Survey, 2016. FINDINGS AND CONCLUSIONS: The level of access to health care that Medicaid coverage provides is comparable to that afforded by private insurance. Adults with Medicaid coverage reported better care experiences than those who had been uninsured during the year. Medicaid enrollees have fewer problems paying medical bills than either the privately insured or the uninsured.

  19. Computation of breast ptosis from 3D surface scans of the female torso

    PubMed Central

    Li, Danni; Cheong, Audrey; Reece, Gregory P.; Crosby, Melissa A.; Fingeret, Michelle C.; Merchant, Fatima A.

    2016-01-01

    Stereophotography is now finding a niche in clinical breast surgery, and several methods for quantitatively measuring breast morphology from 3D surface images have been developed. Breast ptosis (sagging of the breast), which refers to the extent by which the nipple is lower than the inframammary fold (the contour along which the inferior part of the breast attaches to the chest wall), is an important morphological parameter that is frequently used for assessing the outcome of breast surgery. This study presents a novel algorithm that utilizes three-dimensional (3D) features such as surface curvature and orientation for the assessment of breast ptosis from 3D scans of the female torso. The performance of the computational approach proposed was compared against the consensus of manual ptosis ratings by nine plastic surgeons, and that of current 2D photogrammetric methods. Compared to the 2D methods, the average accuracy for 3D features was ~13% higher, with an increase in precision, recall, and F-score of 37%, 29%, and 33%, respectively. The computational approach proposed provides an improved and unbiased objective method for rating ptosis when compared to qualitative visualization by observers, and distance based 2D photogrammetry approaches. PMID:27643463

  20. RRW: repeated random walks on genome-scale protein networks for local cluster discovery

    PubMed Central

    Macropol, Kathy; Can, Tolga; Singh, Ambuj K

    2009-01-01

    Background We propose an efficient and biologically sensitive algorithm based on repeated random walks (RRW) for discovering functional modules, e.g., complexes and pathways, within large-scale protein networks. Compared to existing cluster identification techniques, RRW implicitly makes use of network topology, edge weights, and long range interactions between proteins. Results We apply the proposed technique on a functional network of yeast genes and accurately identify statistically significant clusters of proteins. We validate the biological significance of the results using known complexes in the MIPS complex catalogue database and well-characterized biological processes. We find that 90% of the created clusters have the majority of their catalogued proteins belonging to the same MIPS complex, and about 80% have the majority of their proteins involved in the same biological process. We compare our method to various other clustering techniques, such as the Markov Clustering Algorithm (MCL), and find a significant improvement in the RRW clusters' precision and accuracy values. Conclusion RRW, which is a technique that exploits the topology of the network, is more precise and robust in finding local clusters. In addition, it has the added flexibility of being able to find multi-functional proteins by allowing overlapping clusters. PMID:19740439
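
    The core computation behind such repeated random walks is a restart walk on the weighted network, which can be written as a short power iteration; nodes with high stationary probability around a start protein are candidates for its local cluster. The toy network, restart probability and convergence rule in the Python sketch below are assumptions for illustration, not the RRW implementation itself.

        import numpy as np

        def random_walk_with_restart(adj, start, restart=0.3, tol=1e-10):
            """Stationary visiting probabilities of a walk that returns to the
            start node with probability `restart` at every step (power iteration)."""
            w = adj / np.maximum(adj.sum(axis=0, keepdims=True), 1e-12)  # column-normalize
            e = np.zeros(adj.shape[0])
            e[start] = 1.0
            p = e.copy()
            while True:
                p_new = (1 - restart) * (w @ p) + restart * e
                if np.abs(p_new - p).sum() < tol:
                    return p_new
                p = p_new

        # toy weighted protein-protein network (symmetric edge confidences)
        adj = np.array([[0.0, 0.9, 0.8, 0.0, 0.0],
                        [0.9, 0.0, 0.7, 0.1, 0.0],
                        [0.8, 0.7, 0.0, 0.0, 0.1],
                        [0.0, 0.1, 0.0, 0.0, 0.9],
                        [0.0, 0.0, 0.1, 0.9, 0.0]])
        p = random_walk_with_restart(adj, start=0)
        print("affinity to the start protein:", np.round(p, 3))  # high values suggest one local cluster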

  1. An overview of very high level software design methods

    NASA Technical Reports Server (NTRS)

    Asdjodi, Maryam; Hooper, James W.

    1988-01-01

    Very high level design methods emphasize automatic transfer of requirements to formal design specifications, and/or may concentrate on automatic transformation of formal design specifications that include some semantic information of the system into machine-executable form. Very high level design methods range from general domain-independent methods to approaches implementable for specific applications or domains. Using AI techniques, abstract programming methods, domain heuristics, software engineering tools, library-based programming and other methods, different approaches to higher level software design are being developed. Though a given approach does not always fall exactly into any specific class, this paper provides a classification for very high level design methods, including examples for each class. These methods are analyzed and compared based on their basic approaches, strengths, and feasibility for future expansion toward automatic development of software systems.

  2. Using tree diversity to compare phylogenetic heuristics.

    PubMed

    Sul, Seung-Jin; Matthews, Suzanne; Williams, Tiffani L

    2009-04-29

    Evolutionary trees are family trees that represent the relationships between a group of organisms. Phylogenetic heuristics are used to search stochastically for the best-scoring trees in tree space. Given that better tree scores are believed to be better approximations of the true phylogeny, traditional evaluation techniques have used tree scores to determine the heuristics that find the best scores in the fastest time. We develop new techniques to evaluate phylogenetic heuristics based on both tree scores and topologies to compare Pauprat and Rec-I-DCM3, two popular Maximum Parsimony search algorithms. Our results show that although Pauprat and Rec-I-DCM3 find trees with the same best scores, topologically these trees are quite different. Furthermore, the Rec-I-DCM3 trees cluster distinctly from the Pauprat trees. In addition to our heatmap visualizations that use parsimony scores and the Robinson-Foulds distance to compare the best-scoring trees found by the two heuristics, we also develop entropy-based methods to show the diversity of the trees found. Overall, Pauprat identifies more diverse trees than Rec-I-DCM3. Our work shows that there is value in comparing heuristics beyond the parsimony scores that they find. Pauprat is a slower heuristic than Rec-I-DCM3. However, our work shows that there is tremendous value in using Pauprat to reconstruct trees, especially since it finds identically scoring but topologically distinct trees. Hence, instead of discounting Pauprat, effort should go into improving its implementation. Ultimately, improved performance measures lead to better phylogenetic heuristics and will result in better approximations of the true evolutionary history of the organisms of interest.

  3. Improved sampling and analysis of images in corneal confocal microscopy.

    PubMed

    Schaldemose, E L; Fontain, F I; Karlsson, P; Nyengaard, J R

    2017-10-01

    Corneal confocal microscopy (CCM) is a noninvasive clinical method to analyse and quantify corneal nerve fibres in vivo. Although the CCM technique is in constant progress, there are methodological limitations in terms of sampling of images and objectivity of the nerve quantification. The aim of this study was to present a randomized sampling method for the CCM images and to develop an adjusted area-dependent image analysis. Furthermore, a manual nerve fibre analysis method was compared to a fully automated method. Twenty-three idiopathic small-fibre neuropathy patients were investigated using CCM. Corneal nerve fibre length density (CNFL) and corneal nerve fibre branch density (CNBD) were determined in both a manual and an automatic manner. Differences in CNFL and CNBD between (1) the randomized and the most common sampling method, (2) the adjusted and the unadjusted area and (3) the manual and automated quantification method were investigated. The CNFL values were significantly lower when using the randomized sampling method compared to the most common method (p = 0.01). There was not a statistically significant difference in the CNBD values between the randomized and the most common sampling method (p = 0.85). CNFL and CNBD values were increased when using the adjusted area compared to the standard area. Additionally, the study found a significant increase in the CNFL and CNBD values when using the manual method compared to the automatic method (p ≤ 0.001). The study demonstrated a significant difference in the CNFL values between the randomized and common sampling method, indicating the importance of clear guidelines for image sampling. The increase in CNFL and CNBD values when using the adjusted cornea area is not surprising. The observed increases in both CNFL and CNBD values when using the manual method of nerve quantification compared to the automatic method are consistent with earlier findings. This study underlines the importance of improving the analysis of the CCM images in order to obtain more objective corneal nerve fibre measurements. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  4. Comparing data mining methods on the VAERS database.

    PubMed

    Banks, David; Woo, Emily Jane; Burwen, Dale R; Perucci, Phil; Braun, M Miles; Ball, Robert

    2005-09-01

    Data mining may enhance traditional surveillance of vaccine adverse events by identifying events that are reported more commonly after administering one vaccine than other vaccines. Data mining methods find signals as the proportion of times a condition or group of conditions is reported soon after the administration of a vaccine; thus it is a relative proportion compared across vaccines, and not an absolute rate for the condition. The Vaccine Adverse Event Reporting System (VAERS) contains approximately 150 000 reports of adverse events that are possibly associated with vaccine administration. We studied four data mining techniques: empirical Bayes geometric mean (EBGM), lower-bound of the EBGM's 90% confidence interval (EB05), proportional reporting ratio (PRR), and screened PRR (SPRR). We applied these to the VAERS database and compared the agreement among methods and other performance properties, particularly focusing on the vaccine-event combinations with the highest numerical scores in the various methods. The vaccine-event combinations with the highest numerical scores varied substantially among the methods. Not all combinations representing known associations appeared in the top 100 vaccine-event pairs for all methods. The four methods differ in their ranking of vaccine-COSTART pairs. A given method may be superior in certain situations but inferior in others. This paper examines the statistical relationships among the four estimators. Determining which method is best for public health will require additional analysis that focuses on the true alarm and false alarm rates using known vaccine-event associations. Evaluating the properties of these data mining methods will help determine the value of such methods in vaccine safety surveillance. (c) 2005 John Wiley & Sons, Ltd.
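
    Of the four scores, the proportional reporting ratio is the simplest to state: it compares how often an event is reported after the vaccine of interest with how often it is reported after all other vaccines. A minimal Python sketch with made-up counts follows; the EBGM scores add Bayesian shrinkage and are not shown here.

        def proportional_reporting_ratio(a, b, c, d):
            """PRR for one vaccine-event pair from the usual 2x2 report counts:
            a = reports of the event after the vaccine of interest
            b = reports of other events after that vaccine
            c = reports of the event after all other vaccines
            d = reports of other events after all other vaccines"""
            return (a / (a + b)) / (c / (c + d))

        # illustrative counts only, not VAERS data
        print(f"PRR = {proportional_reporting_ratio(a=25, b=975, c=50, d=9950):.2f}")  # 5.00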

  5. Isotonic Regression Based-Method in Quantitative High-Throughput Screenings for Genotoxicity

    PubMed Central

    Fujii, Yosuke; Narita, Takeo; Tice, Raymond Richard; Takeda, Shunich

    2015-01-01

    Quantitative high-throughput screenings (qHTSs) for genotoxicity are conducted as part of comprehensive toxicology screening projects. The most widely used method is to compare the dose-response data of a wild-type and DNA repair gene knockout mutants, using model-fitting to the Hill equation (HE). However, this method performs poorly when the observed viability does not fit the equation well, as frequently happens in qHTS. More capable methods must be developed for qHTS where large data variations are unavoidable. In this study, we applied an isotonic regression (IR) method and compared its performance with HE under multiple data conditions. When dose-response data were suitable to draw HE curves with upper and lower asymptotes and experimental random errors were small, HE was better than IR, but when random errors were big, there was no difference between HE and IR. However, when the drawn curves did not have two asymptotes, IR showed better performance (p < 0.05, exact paired Wilcoxon test) with higher specificity (65% in HE vs. 96% in IR). In summary, IR performed similarly to HE when dose-response data were optimal, whereas IR clearly performed better in suboptimal conditions. These findings indicate that IR would be useful in qHTS for comparing dose-response data. PMID:26673567
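
    As a minimal sketch of the isotonic alternative, the Python example below fits monotone non-increasing viability curves to noisy synthetic dose-response data for a "wild-type" and a "repair-deficient" line, using scikit-learn's isotonic regression (assumed available). The paper's actual statistical comparison of the two fitted curves is not reproduced here.

        import numpy as np
        from sklearn.isotonic import IsotonicRegression  # assumed available

        doses = np.log10([0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30])   # log10 concentration
        rng = np.random.default_rng(6)
        wt = 100 / (1 + (10 ** doses / 8) ** 1.2) + rng.normal(0, 8, doses.size)
        ko = 100 / (1 + (10 ** doses / 1) ** 1.2) + rng.normal(0, 8, doses.size)

        # monotone non-increasing fits: viability should not rise with dose
        iso = IsotonicRegression(increasing=False)
        wt_hat = iso.fit_transform(doses, wt)
        ko_hat = iso.fit_transform(doses, ko)

        # a crude readout: the repair-deficient line should lose viability sooner
        print(f"viability gap at the top dose ~ {wt_hat[-1] - ko_hat[-1]:.1f} %")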

  6. Application of the dual-kinetic-balance sets in the relativistic many-body problem of atomic structure

    NASA Astrophysics Data System (ADS)

    Beloy, Kyle; Derevianko, Andrei

    2008-09-01

    The dual-kinetic-balance (DKB) finite basis set method for solving the Dirac equation for hydrogen-like ions [V.M. Shabaev et al., Phys. Rev. Lett. 93 (2004) 130405] is extended to problems with a non-local spherically-symmetric Dirac-Hartree-Fock potential. We implement the DKB method using B-spline basis sets and compare its performance with the widely-employed approach of Notre Dame (ND) group [W.R. Johnson, S.A. Blundell, J. Sapirstein, Phys. Rev. A 37 (1988) 307-315]. We compare the performance of the ND and DKB methods by computing various properties of Cs atom: energies, hyperfine integrals, the parity-non-conserving amplitude of the 6s-7s transition, and the second-order many-body correction to the removal energy of the valence electrons. We find that for a comparable size of the basis set the accuracy of both methods is similar for matrix elements accumulated far from the nuclear region. However, for atomic properties determined by small distances, the DKB method outperforms the ND approach. In addition, we present a strategy for optimizing the size of the basis sets by choosing progressively smaller number of basis functions for increasingly higher partial waves. This strategy exploits suppression of contributions of high partial waves to typical many-body correlation corrections.

  7. A Comparison of What Is Part of Usability Testing in Three Countries

    NASA Astrophysics Data System (ADS)

    Clemmensen, Torkil

    The cultural diversity of users of technology challenges our methods for usability evaluation. In this paper we report and compare three ethnographic interview studies of what is part of a standard (typical) usability test in a company in Mumbai, Beijing and Copenhagen. At each of these three locations, we use structural and contrast questions to do a taxonomic and paradigm analysis of how a company performs a usability test. We find similar parts across the three locations. We also find different results for each location. In Mumbai, most parts of the usability test are not related to the interactive application that is tested, but to differences in user characteristics, test preparation, method, and location. In Copenhagen, considerations about the client's needs are part of a usability test. In Beijing, the only varying factor is the communication pattern and relation to the user. These results are then contrasted in a cross-cultural matrix to identify cultural themes that can help interpret results from existing laboratory research in usability test methods.

  8. Information Filtering via a Scaling-Based Function

    PubMed Central

    Qiu, Tian; Zhang, Zi-Ke; Chen, Guang

    2013-01-01

    Finding a universal description of the algorithm optimization is one of the key challenges in personalized recommendation. In this article, for the first time, we introduce a scaling-based algorithm (SCL), independent of recommendation list length, built on a hybrid algorithm of heat conduction and mass diffusion, by finding the scaling function that relates the tunable parameter to the object average degree. The optimal value of the tunable parameter can be read off from the scaling function and is heterogeneous across individual objects. Experimental results obtained from three real datasets, Netflix, MovieLens and RYM, show that SCL is highly accurate in recommendation. More importantly, compared with a number of strong algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm markedly improves personalized recommendation in three further aspects: it eases the accuracy-diversity dilemma, achieves high novelty, and addresses the key challenge of the cold-start problem. PMID:23696829
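
    The hybrid heat-conduction/mass-diffusion scoring that SCL builds on can be sketched for a tiny user-object matrix as follows; the matrix and the value of the tunable parameter lam are invented for illustration, and the SCL scaling function itself is not reproduced.

      import numpy as np

      # Rows = users, columns = objects; 1 means the user has collected the object.
      A = np.array([[1, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 1, 1]], dtype=float)

      k_user = A.sum(axis=1)            # user degrees
      k_obj = A.sum(axis=0)             # object degrees
      lam = 0.5                         # tunable parameter: 1 = mass diffusion, 0 = heat conduction

      # Hybrid object-to-object transition matrix W (Zhou et al. style hybrid)
      n_obj = A.shape[1]
      W = np.zeros((n_obj, n_obj))
      for a_idx in range(n_obj):
          for b_idx in range(n_obj):
              s = np.sum(A[:, a_idx] * A[:, b_idx] / k_user)
              W[a_idx, b_idx] = s / (k_obj[a_idx] ** (1 - lam) * k_obj[b_idx] ** lam)

      user = 0
      scores = W @ A[user]              # resource received by each object for this user
      scores[A[user] == 1] = -np.inf    # do not recommend already-collected objects
      print("recommended object index:", int(np.argmax(scores)))

    In the approach described above, lam would not be a single global constant but would be set per object from its average degree via the scaling function.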

  9. Comparison of clinical and paraclinical findings among patients with Kawasaki disease in Bandar Abbas Koodakan Hospital in 2011-14

    NASA Astrophysics Data System (ADS)

    Borjali, Davood

    Title: Comparison of clinical and paraclinical findings among patients with Kawasaki disease in Bandar Abbas Koodakan Hospital in 2011-14. Kawasaki disease (KD) is a form of vasculitis diagnosed by clinical manifestations, and it causes acquired heart disease in children because of coronary artery involvement. Method: Patients were divided into three groups (American, Japanese, and incomplete) and were also studied in two groups according to the number of fever days; clinical features and laboratory data were then checked. Result: A total of 150 patients were enrolled during the study period. The number of patients with incomplete Kawasaki disease was 128, the American group comprised 28 patients, and the Japanese group 4 patients. The most prevalent symptom was scaling of the extremities (61); bladder findings were most often seen in the group with fever lasting more than five days. Keywords: Kawasaki, epidemiology, criteria

  10. Bipolar electrode selection for a motor imagery based brain computer interface

    NASA Astrophysics Data System (ADS)

    Lou, Bin; Hong, Bo; Gao, Xiaorong; Gao, Shangkai

    2008-09-01

    A motor imagery based brain-computer interface (BCI) provides a non-muscular communication channel that enables people with paralysis to control external devices using their motor imagination. Reducing the number of electrodes is critical to improving the portability and practicability of the BCI system. A novel method is proposed to reduce the number of electrodes to a total of four by finding the optimal positions of two bipolar electrodes. Independent component analysis (ICA) is applied to find the source components of mu and alpha rhythms, and optimal electrodes are chosen by comparing the projection weights of sources on each channel. The results of eight subjects demonstrate the better classification performance of the optimal layout compared with traditional layouts, and the stability of this optimal layout over a one week interval was further verified.
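
    A highly simplified sketch of the selection idea, assuming synthetic multichannel EEG and that the mu-like component can be identified (here simply by its 10 Hz power); the channel indices are placeholders rather than the montage used in the paper.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(1)
      fs, n_samples, n_channels = 250, 2000, 8
      t = np.arange(n_samples) / fs

      # Synthetic sources: one 10 Hz "mu-like" rhythm plus noise sources.
      sources_true = rng.normal(size=(n_samples, n_channels))
      sources_true[:, 0] = np.sin(2 * np.pi * 10 * t)

      mixing_true = rng.normal(size=(n_channels, n_channels))   # unknown mixing onto the scalp
      eeg = sources_true @ mixing_true.T                         # samples x channels

      ica = FastICA(n_components=n_channels, random_state=0, max_iter=1000)
      sources = ica.fit_transform(eeg)
      mixing = ica.mixing_                                       # channels x components

      # Assume the mu-like component is identified, here by its power at 10 Hz.
      power10 = np.abs(np.fft.rfft(sources, axis=0))[int(10 * n_samples / fs)]
      mu_idx = int(np.argmax(power10))

      # Channels with the largest projection weights form one bipolar electrode pair.
      weights = np.abs(mixing[:, mu_idx])
      pair = np.argsort(weights)[::-1][:2]
      print("channels for one bipolar electrode pair:", pair)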

  11. High-speed high-accuracy three-dimensional shape measurement using digital binary defocusing method versus sinusoidal method

    NASA Astrophysics Data System (ADS)

    Hyun, Jae-Sang; Li, Beiwen; Zhang, Song

    2017-07-01

    This paper presents our research findings on high-speed high-accuracy three-dimensional shape measurement using digital light processing (DLP) technologies. In particular, we compare two different sinusoidal fringe generation techniques using the DLP projection devices: direct projection of computer-generated 8-bit sinusoidal patterns (a.k.a., the sinusoidal method), and the creation of sinusoidal patterns by defocusing binary patterns (a.k.a., the binary defocusing method). This paper mainly examines their performance on high-accuracy measurement applications under precisely controlled settings. Two different projection systems were tested in this study: a commercially available inexpensive projector and the DLP development kit. Experimental results demonstrated that the binary defocusing method always outperforms the sinusoidal method if a sufficient number of phase-shifted fringe patterns can be used.
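
    The two fringe-generation strategies compared above can be illustrated with a toy one-dimensional simulation: an ideal 8-bit sinusoid versus a binary pattern that only approximates a sinusoid after defocus, modeled here as a Gaussian blur; the fringe pitch and blur width are arbitrary illustration values.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      width, pitch = 512, 32                      # pixels per line, pixels per fringe period
      x = np.arange(width)

      # Sinusoidal method: directly project an 8-bit sinusoid.
      sinusoid = np.round(127.5 + 127.5 * np.cos(2 * np.pi * x / pitch))

      # Binary defocusing method: project a square wave and let projector defocus blur it.
      binary = np.where(np.cos(2 * np.pi * x / pitch) >= 0, 255.0, 0.0)
      defocused = gaussian_filter1d(binary, sigma=6.0)   # defocus modeled as a Gaussian blur

      # How close is the defocused binary pattern to an ideal sinusoid?
      rms_err = np.sqrt(np.mean((defocused - sinusoid) ** 2))
      print(f"RMS difference to the ideal 8-bit sinusoid: {rms_err:.1f} gray levels")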

  12. High-speed 3D imaging using digital binary defocusing method vs sinusoidal method

    NASA Astrophysics Data System (ADS)

    Zhang, Song; Hyun, Jae-Sang; Li, Beiwen

    2017-02-01

    This paper presents our research findings on high-speed 3D imaging using digital light processing (DLP) technologies. In particular, we compare two different sinusoidal fringe generation techniques using the DLP projection devices: direct projection of 8-bit computer generated sinusoidal patterns (a.k.a, the sinusoidal method), and the creation of sinusoidal patterns by defocusing binary patterns (a.k.a., the binary defocusing method). This paper mainly examines their performance on high-accuracy measurement applications under precisely controlled settings. Two different projection systems were tested in this study: the commercially available inexpensive projector, and the DLP development kit. Experimental results demonstrated that the binary defocusing method always outperforms the sinusoidal method if a sufficient number of phase-shifted fringe patterns can be used.

  13. Using artificial neural networks (ANN) for open-loop tomography

    NASA Astrophysics Data System (ADS)

    Osborn, James; De Cos Juez, Francisco Javier; Guzman, Dani; Butterley, Timothy; Myers, Richard; Guesalaga, Andres; Laine, Jesus

    2011-09-01

    The next generation of adaptive optics (AO) systems requires tomographic techniques in order to correct for atmospheric turbulence along lines of sight separated from the guide stars. Multi-object adaptive optics (MOAO) is one such technique. Here, we present a method which uses an artificial neural network (ANN) to reconstruct the target phase given off-axis reference sources. This method does not require any input of the turbulence profile and is therefore less susceptible to changing conditions than some existing methods. We compare our ANN method with a standard least-squares type matrix multiplication method (MVM) in simulation and find that the tomographic error is similar to that of the MVM method. In changing conditions the tomographic error increases for MVM but remains constant with the ANN model, and no large matrix inversions are required.

  14. Inferring Mechanisms of Compensation from E-MAP and SGA Data Using Local Search Algorithms for Max Cut

    NASA Astrophysics Data System (ADS)

    Leiserson, Mark D. M.; Tatar, Diana; Cowen, Lenore J.; Hescott, Benjamin J.

    A new method based on a mathematically natural local search framework for max cut is developed to uncover functionally coherent module and BPM motifs in high-throughput genetic interaction data. Unlike previous methods which also consider physical protein-protein interaction data, our method utilizes genetic interaction data only; this becomes increasingly important as high-throughput genetic interaction data is becoming available in settings where less is known about physical interaction data. We compare modules and BPMs obtained to previous methods and across different datasets. Despite needing no physical interaction information, the BPMs produced by our method are competitive with previous methods. Biological findings include a suggested global role for the prefoldin complex and a SWR subcomplex in pathway buffering in the budding yeast interactome.
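
    A minimal sketch of a 1-flip local-search heuristic for max cut, of the general kind referred to above, on a small weighted graph whose edge weights stand in for genetic-interaction scores; the graph is invented and the module/BPM post-processing of the paper is not reproduced.

      import numpy as np

      def local_search_max_cut(W, seed=0):
          """1-flip local search: move vertices between sides while the cut weight improves."""
          rng = np.random.default_rng(seed)
          n = W.shape[0]
          side = rng.integers(0, 2, n)               # random initial partition
          improved = True
          while improved:
              improved = False
              for v in range(n):
                  # Gain of flipping v = (weight to own side) - (weight to other side)
                  same = side == side[v]
                  gain = W[v, same].sum() - W[v, ~same].sum()
                  if gain > 0:
                      side[v] = 1 - side[v]
                      improved = True
          cut = sum(W[i, j] for i in range(n) for j in range(i + 1, n) if side[i] != side[j])
          return side, cut

      # Toy symmetric interaction matrix (weights invented for the example)
      W = np.array([[0, 3, 1, 0],
                    [3, 0, 0, 2],
                    [1, 0, 0, 4],
                    [0, 2, 4, 0]], dtype=float)
      print(local_search_max_cut(W))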

  15. Inferring mechanisms of compensation from E-MAP and SGA data using local search algorithms for max cut.

    PubMed

    Leiserson, Mark D M; Tatar, Diana; Cowen, Lenore J; Hescott, Benjamin J

    2011-11-01

    A new method based on a mathematically natural local search framework for max cut is developed to uncover functionally coherent module and BPM motifs in high-throughput genetic interaction data. Unlike previous methods, which also consider physical protein-protein interaction data, our method utilizes genetic interaction data only; this becomes increasingly important as high-throughput genetic interaction data is becoming available in settings where less is known about physical interaction data. We compare modules and BPMs obtained to previous methods and across different datasets. Despite needing no physical interaction information, the BPMs produced by our method are competitive with previous methods. Biological findings include a suggested global role for the prefoldin complex and a SWR subcomplex in pathway buffering in the budding yeast interactome.

  16. Comparison between the standard and a new alternative format of the Summary-of-Findings tables in Cochrane review users: study protocol for a randomized controlled trial.

    PubMed

    Carrasco-Labra, Alonso; Brignardello-Petersen, Romina; Santesso, Nancy; Neumann, Ignacio; Mustafa, Reem A; Mbuagbaw, Lawrence; Ikobaltzeta, Itziar Etxeandia; De Stio, Catherine; McCullagh, Lauren J; Alonso-Coello, Pablo; Meerpohl, Joerg J; Vandvik, Per Olav; Brozek, Jan L; Akl, Elie A; Bossuyt, Patrick; Churchill, Rachel; Glenton, Claire; Rosenbaum, Sarah; Tugwell, Peter; Welch, Vivian; Guyatt, Gordon; Schünemann, Holger

    2015-04-16

    Systematic reviews represent one of the most important tools for knowledge translation but users often struggle with understanding and interpreting their results. GRADE Summary-of-Findings tables have been developed to display results of systematic reviews in a concise and transparent manner. The current format of the Summary-of-Findings tables for presenting risks and quality of evidence improves understanding and assists users with finding key information from the systematic review. However, it has been suggested that additional methods to present risks and display results in the Summary-of-Findings tables are needed. We will conduct a non-inferiority parallel-armed randomized controlled trial to determine whether an alternative format to present risks and display Summary-of-Findings tables is not inferior compared to the current standard format. We will measure participant understanding, accessibility of the information, satisfaction, and preference for both formats. We will invite systematic review users to participate (that is clinicians, guideline developers, and researchers). The data collection process will be undertaken using the online 'Survey Monkey' system. For the primary outcome understanding, non-inferiority of the alternative format (Table A) to the current standard format (Table C) of Summary-of-Findings tables will be claimed if the upper limit of a 1-sided 95% confidence interval (for the difference of proportion of participants answering correctly a given question) excluded a difference in favor of the current format of more than 10%. This study represents an effort to provide systematic reviewers with additional options to display review results using Summary-of-Findings tables. In this way, review authors will have a variety of methods to present risks and more flexibility to choose the most appropriate table features to display (that is optional columns, risks expressions, complementary methods to display continuous outcomes, and so on). NCT02022631 (21 December 2013).
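
    As a numerical illustration of the non-inferiority rule stated above, the sketch below checks whether the upper limit of a one-sided 95% confidence interval for the difference in the proportion answering correctly excludes a difference of more than 10% in favor of the current format; the proportions are invented and are not trial data.

      import numpy as np

      # Hypothetical results (not trial data): proportion answering a question correctly.
      n_current, correct_current = 150, 120          # current SoF format (Table C)
      n_alt, correct_alt = 150, 114                  # alternative format (Table A)

      p_c = correct_current / n_current
      p_a = correct_alt / n_alt
      diff = p_c - p_a                               # difference in favor of the current format

      se = np.sqrt(p_c * (1 - p_c) / n_current + p_a * (1 - p_a) / n_alt)
      upper_1sided_95 = diff + 1.645 * se            # upper limit of the one-sided 95% CI

      margin = 0.10
      print(f"difference = {diff:.3f}, upper limit = {upper_1sided_95:.3f}")
      print("non-inferiority claimed" if upper_1sided_95 < margin else "non-inferiority not shown")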

  17. Ontology based molecular signatures for immune cell types via gene expression analysis

    PubMed Central

    2013-01-01

    Background New technologies are focusing on characterizing cell types to better understand their heterogeneity. With large volumes of cellular data being generated, innovative methods are needed to structure the resulting data analyses. Here, we describe an ‘Ontologically BAsed Molecular Signature’ (OBAMS) method that identifies novel cellular biomarkers and infers biological functions as characteristics of particular cell types. This method finds molecular signatures for immune cell types based on mapping biological samples to the Cell Ontology (CL) and navigating the space of all possible pairwise comparisons between cell types to find genes whose expression is core to a particular cell type’s identity. Results We illustrate this ontological approach by evaluating expression data available from the Immunological Genome project (IGP) to identify unique biomarkers of mature B cell subtypes. We find that using OBAMS, candidate biomarkers can be identified at every strata of cellular identity from broad classifications to very granular. Furthermore, we show that Gene Ontology can be used to cluster cell types by shared biological processes in order to find candidate genes responsible for somatic hypermutation in germinal center B cells. Moreover, through in silico experiments based on this approach, we have identified genes sets that represent genes overexpressed in germinal center B cells and identify genes uniquely expressed in these B cells compared to other B cell types. Conclusions This work demonstrates the utility of incorporating structured ontological knowledge into biological data analysis – providing a new method for defining novel biomarkers and providing an opportunity for new biological insights. PMID:24004649

  18. Analysis of street drugs in seized material without primary reference standards.

    PubMed

    Laks, Suvi; Pelander, Anna; Vuori, Erkki; Ali-Tolppa, Elisa; Sippola, Erkki; Ojanperä, Ilkka

    2004-12-15

    A novel approach was used to analyze street drugs in seized material without primary reference standards. Identification was performed by liquid chromatography/time-of-flight mass spectrometry (LC/TOFMS), essentially based on accurate mass determination using a target library of 735 exact monoisotopic masses. Quantification was carried out by liquid chromatography/chemiluminescence nitrogen detection (LC/CLND) with a single secondary standard (caffeine), utilizing the detector's equimolar response to nitrogen. Sample preparation comprised dilution, first with methanol and further with the LC mobile phase. Altogether 21 seized drug samples were analyzed blind by the present method, and results were compared to accredited reference methods utilizing identification by gas chromatography/mass spectrometry and quantification by gas chromatography or liquid chromatography. The 31 drug findings by LC/TOFMS comprised 19 different drugs-of-abuse, byproducts, and adulterants, including amphetamine and tryptamine designer drugs, with one unresolved pair of compounds having an identical mass. By the reference methods, 27 findings could be confirmed, and among the four unconfirmed findings, only 1 apparent false positive was found. In the quantitative analysis of 11 amphetamine, heroin, and cocaine findings, mean relative difference between the results of LC/CLND and the reference methods was 11% (range 4.2-21%), without any observable bias. Mean relative standard deviation for three parallel LC/CLND results was 6%. Results suggest that the present combination of LC/TOFMS and LC/CLND offers a simple solution for the analysis of scheduled and designer drugs in seized material, independent of the availability of primary reference standards.
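
    The identification step described above, matching an accurate measured mass against a target library of exact monoisotopic masses, can be sketched roughly as follows; the library entries, tolerance, and measured value are illustrative only and do not reproduce the 735-entry library used in the study.

      # Hypothetical target library of exact monoisotopic masses (Da); values illustrative only.
      library = {
          "amphetamine": 135.1048,
          "caffeine": 194.0804,
          "cocaine": 303.1471,
          "heroin": 369.1576,
      }

      def match_mass(measured, tol_ppm=5.0):
          """Return library compounds whose exact mass lies within tol_ppm of the measurement."""
          hits = []
          for name, exact in library.items():
              ppm_error = (measured - exact) / exact * 1e6
              if abs(ppm_error) <= tol_ppm:
                  hits.append((name, round(ppm_error, 2)))
          return hits

      print(match_mass(303.1466))   # e.g. [('cocaine', -1.65)]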

  19. Direct application of Padé approximant for solving nonlinear differential equations.

    PubMed

    Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Garcia-Gervacio, Jose Luis; Huerta-Chua, Jesus; Morales-Mendoza, Luis Javier; Gonzalez-Lee, Mario

    2014-01-01

    This work presents a direct procedure for applying the Padé method to find approximate solutions of nonlinear differential equations. Moreover, we present several case studies showing the strength of the method to generate highly accurate rational approximate solutions compared to other semi-analytical methods. The types of nonlinear equations tested are: a highly nonlinear boundary value problem, a differential-algebraic oscillator problem, and an asymptotic problem. The highly accurate, handy approximations obtained by the direct application of the Padé method show the high potential of the proposed scheme to approximate a wide variety of problems. What is more, the direct application of the Padé approximant avoids the prior application of an approximative method, such as the Taylor series method, homotopy perturbation method, Adomian decomposition method, homotopy analysis method, or variational iteration method, as a tool to obtain a power series solution to post-treat with the Padé approximant.
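
    A minimal sketch of the underlying idea, building a Padé approximant from the power-series coefficients of a solution: here the initial value problem y' = 1 + y^2, y(0) = 0, whose exact solution is tan(x). The example is ours and is not one of the paper's case studies.

      import numpy as np
      from scipy.interpolate import pade

      # Taylor coefficients of tan(x) (the solution of y' = 1 + y^2, y(0) = 0) up to x^7.
      taylor = [0.0, 1.0, 0.0, 1.0 / 3.0, 0.0, 2.0 / 15.0, 0.0, 17.0 / 315.0]

      p, q = pade(taylor, 3)        # [4/3] Padé approximant: numerator degree 4, denominator degree 3

      x = 1.3                       # near the pole of tan at pi/2, where the truncated series degrades
      series = sum(c * x ** k for k, c in enumerate(taylor))
      print(f"truncated series: {series:.4f}")
      print(f"Pade approximant: {p(x) / q(x):.4f}")
      print(f"exact tan(x):     {np.tan(x):.4f}")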

  20. Low self-esteem during adolescence predicts poor health, criminal behavior, and limited economic prospects during adulthood.

    PubMed

    Trzesniewski, Kali H; Donnellan, M Brent; Moffitt, Terrie E; Robins, Richard W; Poulton, Richie; Caspi, Avshalom

    2006-03-01

    Using prospective data from the Dunedin Multidisciplinary Health and Development Study birth cohort, the authors found that adolescents with low self-esteem had poorer mental and physical health, worse economic prospects, and higher levels of criminal behavior during adulthood, compared with adolescents with high self-esteem. The long-term consequences of self-esteem could not be explained by adolescent depression, gender, or socioeconomic status. Moreover, the findings held when the outcome variables were assessed using objective measures and informant reports; therefore, the findings cannot be explained by shared method variance in self-report data. The findings suggest that low self-esteem during adolescence predicts negative real-world consequences during adulthood. Copyright (c) 2006 APA, all rights reserved.

  1. A comparison of diagnostic performance of vacuum-assisted biopsy and core needle biopsy for breast microcalcification: a systematic review and meta-analysis.

    PubMed

    Huang, Xu Chen; Hu, Xu Hua; Wang, Xiao Ran; Zhou, Chao Xi; Wang, Fei Fei; Yang, Shan; Wang, Gui Ying

    2018-03-16

    Core needle biopsy (CNB) and vacuum-assisted biopsy (VAB) are both widely used breast percutaneous biopsy techniques, and both have become reliable alternatives to open surgical biopsy (OSB) for breast microcalcification (BM). It remains controversial which biopsy method is more accurate and safer for BM. Hence, we conducted this meta-analysis to compare the diagnostic performance of CNB and VAB for BM, aiming to identify the better method. Articles meeting the inclusion and exclusion criteria were collected from the PubMed, Embase, and Cochrane Library databases. Preset outcomes were abstracted and pooled to find potential advantages of CNB or VAB. Seven of the 138 initially identified studies entered the final meta-analysis. The rate of ductal carcinoma in situ (DCIS) underestimation was significantly lower in the VAB than in the CNB group [risk ratio (RR) = 1.83, 95% confidence interval (CI) 1.40 to 2.40, p < 0.001]. The microcalcification retrieval rate was significantly higher in the VAB than in the CNB group (RR = 0.89, 95% CI 0.81 to 0.98, p = 0.02), while CNB had a significantly lower complication rate than VAB (RR = 0.18, 95% CI 0.03 to 0.93, p = 0.04). The atypical ductal hyperplasia (ADH) underestimation rates were not compared because of the limited number of studies reporting this outcome. Compared with CNB, VAB shows better diagnostic performance in terms of the DCIS underestimation rate and the microcalcification retrieval rate. However, CNB shows a significantly lower complication rate. More studies are needed to verify these findings.
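
    Pooled risk ratios such as those reported above are typically obtained by inverse-variance weighting on the log scale; the sketch below shows a fixed-effect version with invented per-study 2x2 counts, not the data of the seven included studies.

      import numpy as np

      # Hypothetical (events, total) for the two arms of three studies (illustration only).
      studies = [
          ((18, 60), (10, 62)),
          ((25, 90), (15, 88)),
          ((12, 45), (9, 50)),
      ]

      log_rr, weights = [], []
      for (e1, n1), (e2, n2) in studies:
          rr = (e1 / n1) / (e2 / n2)
          var = 1 / e1 - 1 / n1 + 1 / e2 - 1 / n2        # variance of log RR
          log_rr.append(np.log(rr))
          weights.append(1 / var)

      log_rr, weights = np.array(log_rr), np.array(weights)
      pooled = np.sum(weights * log_rr) / np.sum(weights)
      se = np.sqrt(1 / np.sum(weights))
      ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
      print(f"pooled RR = {np.exp(pooled):.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")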

  2. Dependence of paracentric inversion rate on tract length.

    PubMed

    York, Thomas L; Durrett, Rick; Nielsen, Rasmus

    2007-04-03

    We develop a Bayesian method based on MCMC for estimating the relative rates of pericentric and paracentric inversions from marker data from two species. The method also allows estimation of the distribution of inversion tract lengths. We apply the method to data from Drosophila melanogaster and D. yakuba. We find that pericentric inversions occur at a much lower rate compared to paracentric inversions. The average paracentric inversion tract length is approx. 4.8 Mb with small inversions being more frequent than large inversions. If the two breakpoints defining a paracentric inversion tract are uniformly and independently distributed over chromosome arms there will be more short tract-length inversions than long; we find an even greater preponderance of short tract lengths than this would predict. Thus there appears to be a correlation between the positions of breakpoints which favors shorter tract lengths. The method developed in this paper provides the first statistical estimator for estimating the distribution of inversion tract lengths from marker data. Application of this method for a number of data sets may help elucidate the relationship between the length of an inversion and the chance that it will get accepted.

  3. Applying mixed methods to pretest the Pressure Ulcer Quality of Life (PU-QOL) instrument.

    PubMed

    Gorecki, C; Lamping, D L; Nixon, J; Brown, J M; Cano, S

    2012-04-01

    Pretesting is key in the development of patient-reported outcome (PRO) instruments. We describe a mixed-methods approach based on interviews and Rasch measurement methods in the pretesting of the Pressure Ulcer Quality of Life (PU-QOL) instrument. We used cognitive interviews to pretest the PU-QOL in 35 patients with pressure ulcers with the view to identifying problematic items, followed by Rasch analysis to examine response options, appropriateness of the item series and biases due to question ordering (item fit). We then compared findings in an interactive and iterative process to identify potential strengths and weaknesses of PU-QOL items, and guide decision-making about further revisions to items and design/layout. Although cognitive interviews largely supported items, they highlighted problems with layout, response options and comprehension. Findings from the Rasch analysis identified problems with response options through reversed thresholds. The use of a mixed-methods approach in pretesting the PU-QOL instrument proved beneficial for identifying problems with scale layout, response options and framing/wording of items. Rasch measurement methods are a useful addition to standard qualitative pretesting for evaluating strengths and weaknesses of early stage PRO instruments.

  4. Dependence of paracentric inversion rate on tract length

    PubMed Central

    York, Thomas L; Durrett, Rick; Nielsen, Rasmus

    2007-01-01

    Background We develop a Bayesian method based on MCMC for estimating the relative rates of pericentric and paracentric inversions from marker data from two species. The method also allows estimation of the distribution of inversion tract lengths. Results We apply the method to data from Drosophila melanogaster and D. yakuba. We find that pericentric inversions occur at a much lower rate compared to paracentric inversions. The average paracentric inversion tract length is approx. 4.8 Mb with small inversions being more frequent than large inversions. If the two breakpoints defining a paracentric inversion tract are uniformly and independently distributed over chromosome arms there will be more short tract-length inversions than long; we find an even greater preponderance of short tract lengths than this would predict. Thus there appears to be a correlation between the positions of breakpoints which favors shorter tract lengths. Conclusion The method developed in this paper provides the first statistical estimator for estimating the distribution of inversion tract lengths from marker data. Application of this method for a number of data sets may help elucidate the relationship between the length of an inversion and the chance that it will get accepted. PMID:17407601

  5. What is the potential for interventions designed to prevent violence against women to reduce children's exposure to violence? Findings from the SASA! study, Kampala, Uganda.

    PubMed

    Kyegombe, Nambusi; Abramsky, Tanya; Devries, Karen M; Michau, Lori; Nakuti, Janet; Starmann, Elizabeth; Musuya, Tina; Heise, Lori; Watts, Charlotte

    2015-12-01

    Intimate partner violence (IPV) and child maltreatment often co-occur in households and lead to negative outcomes for children. This article explores the extent to which SASA!, an intervention to prevent violence against women, impacted children's exposure to violence. Between 2007 and 2012 a cluster randomized controlled trial was conducted in Kampala, Uganda. An adjusted cluster-level intention to treat analysis, compares secondary outcomes in intervention and control communities at follow-up. Under the qualitative evaluation, 82 in-depth interviews were audio recorded at follow-up, transcribed verbatim, and analyzed using thematic analysis complemented by constant comparative methods. This mixed-methods article draws mainly on the qualitative data. The findings suggest that SASA! impacted on children's experience of violence in three main ways. First, quantitative data suggest that children's exposure to IPV was reduced. We estimate that reductions in IPV combined with reduced witnessing by children when IPV did occur, led to a 64% reduction in prevalence of children witnessing IPV in their home (aRR 0.36, 95% CI 0.06-2.20). Second, among couples who experienced reduced IPV, qualitative data suggests parenting and discipline practices sometimes also changed-improving parent-child relationships and for a few parents, resulting in the complete rejection of corporal punishment as a disciplinary method. Third, some participants reported intervening to prevent violence against children. The findings suggest that interventions to prevent IPV may also impact on children's exposure to violence, and improve parent-child relationships. They also point to potential synergies for violence prevention, an area meriting further exploration. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  6. Efficacy of Strain Elastography in Diagnosis and Staging of Acute Appendicitis in Pediatric Patients.

    PubMed

    Arslan, Harun; Akdemir, Zülküf; Yavuz, Alpaslan; Gökçal, Fahri; Parlakgümüş, Cemal; İslamoglu, Necat; Akdeniz, Hüseyin

    2018-02-11

    BACKGROUND In the present study, the role and efficiency of strain elastography (SE) were evaluated in diagnosis and staging of acute appendicitis in pediatric patients. MATERIAL AND METHODS We enrolled 225 pediatric patients with suspected clinical and laboratory findings of acute appendicitis. Gray-scale sonographic findings were recorded and staging was made by the colorization method of SE imaging. Appendectomy was performed in all patients and the results of the surgical pathology were compared with the imaging findings. The sensitivity, specificity, and accuracy of SE imaging were determined in terms of evaluating the "acute appendicitis". RESULTS Sonographic evaluation revealed acute appendicitis in 100 patients. Regarding the SE analysis, cases with appendicitis were classified into 3 groups as: mild (n=17), moderate (n=39), and severe (n=44). The pathological evaluation revealed 95 different stages of appendicitis and normal appendix in 5 cases: acute focal (n=10), acute suppurative (n=46), phlegmonous (n=27), and perforated (n=12), regarding the results of surgical pathology. Five patients with pathologically proven "normal" appendix were noted as "mild stage appendicitis" based on gray scale and SE analysis. In total, when gray-scale and SE results were compared with pathology results regardless of the stage of appendicitis, sensitivity, specificity, positive predictive value, negative predictive value, and accuracy rates were 96%, 96%, 95%, 96.8%, and 96%, respectively. No statistically significant difference was detected between other groups (P<0.05). CONCLUSIONS In acute appendicitis, the use of SE imaging as a supportive method for the clinical approach can be useful in diagnosis, and its results are closely correlated with the histopathologic stage of appendix inflammation.

  7. Australian Public Preferences for the Funding of New Health Technologies: A Comparison of Discrete Choice and Profile Case Best-Worst Scaling Methods.

    PubMed

    Whitty, Jennifer A; Ratcliffe, Julie; Chen, Gang; Scuffham, Paul A

    2014-07-01

    Ethical, economic, political, and legitimacy arguments support the consideration of public preferences in health technology decision making. The objective was to assess public preferences for funding new health technologies and to compare a profile case best-worst scaling (BWS) and traditional discrete choice experiment (DCE) method. An online survey consisting of a DCE and BWS task was completed by 930 adults recruited via an Internet panel. Respondents traded between 7 technology attributes. Participation quotas broadly reflected the population of Queensland, Australia, by gender and age. Choice data were analyzed using a generalized multinomial logit model. The findings from both the BWS and DCE were generally consistent in that respondents exhibited stronger preferences for technologies offering prevention or early diagnosis over other benefit types. Respondents also prioritized technologies that benefit younger people, larger numbers of people, those in rural areas, or indigenous Australians; that provide value for money; that have no available alternative; or that upgrade an existing technology. However, the relative preference weights and consequent preference orderings differed between the DCE and BWS models. Further, poor correlation between the DCE and BWS weights was observed. While only a minority of respondents reported difficulty completing either task (22.2% DCE, 31.9% BWS), the majority (72.6%) preferred the DCE over BWS task. This study provides reassurance that many criteria routinely used for technology decision making are considered to be relevant by the public. The findings clearly indicate the perceived importance of prevention and early diagnosis. The dissimilarity observed between DCE and profile case BWS weights is contrary to the findings of previous comparisons and raises uncertainty regarding the comparative merits of these stated preference methods in a priority-setting context. © The Author(s) 2014.

  8. Automatically finding relevant citations for clinical guideline development.

    PubMed

    Bui, Duy Duc An; Jonnalagadda, Siddhartha; Del Fiol, Guilherme

    2015-10-01

    Literature database search is a crucial step in the development of clinical practice guidelines and systematic reviews. In the age of information technology, the process of literature search is still conducted manually, therefore it is costly, slow and subject to human errors. In this research, we sought to improve the traditional search approach using innovative query expansion and citation ranking approaches. We developed a citation retrieval system composed of query expansion and citation ranking methods. The methods are unsupervised and easily integrated over the PubMed search engine. To validate the system, we developed a gold standard consisting of citations that were systematically searched and screened to support the development of cardiovascular clinical practice guidelines. The expansion and ranking methods were evaluated separately and compared with baseline approaches. Compared with the baseline PubMed expansion, the query expansion algorithm improved recall (80.2% vs. 51.5%) with small loss on precision (0.4% vs. 0.6%). The algorithm could find all citations used to support a larger number of guideline recommendations than the baseline approach (64.5% vs. 37.2%, p<0.001). In addition, the citation ranking approach performed better than PubMed's "most recent" ranking (average precision +6.5%, recall@k +21.1%, p<0.001), PubMed's rank by "relevance" (average precision +6.1%, recall@k +14.8%, p<0.001), and the machine learning classifier that identifies scientifically sound studies from MEDLINE citations (average precision +4.9%, recall@k +4.2%, p<0.001). Our unsupervised query expansion and ranking techniques are more flexible and effective than PubMed's default search engine behavior and the machine learning classifier. Automated citation finding is promising to augment the traditional literature search. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Identification and analysis of multigene families by comparison of exon fingerprints.

    PubMed

    Brown, N P; Whittaker, A J; Newell, W R; Rawlings, C J; Beck, S

    1995-06-02

    Gene families are often recognised by sequence homology using similarity searching to find relationships, however, genomic sequence data provides gene architectural information not used by conventional search methods. In particular, intron positions and phases are expected to be relatively conserved features, because mis-splicing and reading frame shifts should be selected against. A fast search technique capable of detecting possible weak sequence homologies apparent at the intron/exon level of gene organization is presented for comparing spliceosomal genes and gene fragments. FINEX compares strings of exons delimited by intron/exon boundary positions and intron phases (exon fingerprint) using a global dynamic programming algorithm with a combined intron phase identity and exon size dissimilarity score. Exon fingerprints are typically two orders of magnitude smaller than their nucleic acid sequence counterparts giving rise to fast search times: a ranked search against a library of 6755 fingerprints for a typical three exon fingerprint completes in under 30 seconds on an ordinary workstation, while a worst case largest fingerprint of 52 exons completes in just over one minute. The short "sequence" length of exon fingerprints in comparisons is compensated for by the large exon alphabet compounded of intron phase types and a wide range of exon sizes, the latter contributing the most information to alignments. FINEX performs better in some searches than conventional methods, finding matches with similar exon organization, but low sequence homology. A search using a human serum albumin finds all members of the multigene family in the FINEX database at the top of the search ranking, despite very low amino acid percentage identities between family members. The method should complement conventional sequence searching and alignment techniques, offering a means of identifying otherwise hard to detect homologies where genomic data are available.
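
    The scoring idea behind FINEX, global dynamic programming over exon strings that rewards matching intron phases and penalizes exon-size dissimilarity, can be sketched roughly as follows; the scoring constants and the two toy fingerprints are invented and do not reproduce the published FINEX parameters.

      # Each exon is (phase_of_following_intron, exon_length_in_bp); fingerprints are invented.
      fp_a = [(0, 120), (1, 90), (2, 210)]
      fp_b = [(0, 115), (1, 150), (0, 200)]

      GAP = -3.0

      def exon_score(x, y):
          """Reward identical intron phases, penalize exon-size dissimilarity."""
          phase_term = 2.0 if x[0] == y[0] else -2.0
          size_term = -abs(x[1] - y[1]) / max(x[1], y[1])   # 0 (same size) down to -1 (very different)
          return phase_term + size_term

      def align(a, b):
          """Needleman-Wunsch style global alignment score of two exon fingerprints."""
          n, m = len(a), len(b)
          dp = [[0.0] * (m + 1) for _ in range(n + 1)]
          for i in range(1, n + 1):
              dp[i][0] = i * GAP
          for j in range(1, m + 1):
              dp[0][j] = j * GAP
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  dp[i][j] = max(dp[i - 1][j - 1] + exon_score(a[i - 1], b[j - 1]),
                                 dp[i - 1][j] + GAP,
                                 dp[i][j - 1] + GAP)
          return dp[n][m]

      print(f"fingerprint alignment score: {align(fp_a, fp_b):.2f}")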

  10. A Penalized Likelihood Framework For High-Dimensional Phylogenetic Comparative Methods And An Application To New-World Monkeys Brain Evolution.

    PubMed

    Clavel, Julien; Aristide, Leandro; Morlon, Hélène

    2018-06-19

    Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from low statistical performances as the number of traits p approaches the number of species n and because some computational complications occur when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large p small n scenario but their use and performances are limited. Here we develop a penalized likelihood framework to deal with high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows for their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA in the R package RPANDA and mvMORPH. Finally, we illustrate the utility of the new proposed framework by evaluating evolutionary models fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in the New World monkeys. We find a clear support for an Early-burst model suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.

  11. Comparative analysis of autofocus functions in digital in-line phase-shifting holography.

    PubMed

    Fonseca, Elsa S R; Fiadeiro, Paulo T; Pereira, Manuela; Pinheiro, António

    2016-09-20

    Numerical reconstruction of digital holograms relies on a precise knowledge of the original object position. However, there are a number of relevant applications where this parameter is not known in advance and an efficient autofocusing method is required. This paper addresses the problem of finding optimal focusing methods for use in reconstruction of digital holograms of macroscopic amplitude and phase objects, using digital in-line phase-shifting holography in transmission mode. Fifteen autofocus measures, including spatial-, spectral-, and sparsity-based methods, were evaluated for both synthetic and experimental holograms. The Fresnel transform and the angular spectrum reconstruction methods were compared. Evaluation criteria included unimodality, accuracy, resolution, and computational cost. Autofocusing under angular spectrum propagation tends to perform better with respect to accuracy and unimodality criteria. Phase objects are, generally, more difficult to focus than amplitude objects. The normalized variance, the standard correlation, and the Tenenbaum gradient are the most reliable spatial-based metrics, combining computational efficiency with good accuracy and resolution. A good trade-off between focus performance and computational cost was found for the Fresnelet sparsity method.
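
    Two of the better-performing spatial metrics named above, the normalized variance and the Tenenbaum (Tenengrad) gradient, can be sketched as follows for a reconstructed amplitude image; the test images here are synthetic stand-ins rather than holographic reconstructions.

      import numpy as np
      from scipy.ndimage import sobel

      def normalized_variance(img):
          """Variance of intensities normalized by the mean intensity."""
          return img.var() / img.mean()

      def tenengrad(img):
          """Sum of squared Sobel gradient magnitudes."""
          gx, gy = sobel(img, axis=0), sobel(img, axis=1)
          return float(np.sum(gx ** 2 + gy ** 2))

      rng = np.random.default_rng(0)
      sharp = rng.random((128, 128))                       # stand-in for an in-focus reconstruction
      blurred = 0.5 * (np.roll(sharp, 1, 0) + sharp)       # crude stand-in for a defocused one

      for name, metric in [("normalized variance", normalized_variance), ("Tenengrad", tenengrad)]:
          print(name, round(metric(sharp), 2), ">", round(metric(blurred), 2))

    In an autofocus loop, such a metric would be evaluated over a sweep of reconstruction distances and the distance that maximizes it taken as the focus position.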

  12. A method for measuring different classes of human immunoglobulins specific for the penicilloyl group

    PubMed Central

    Wheeler, A. W.

    1971-01-01

    A method is described for the detection of human immunoglobulins of the four main classes specific for the penicilloyl group. The technique is an adaptation of the red cell linked antigen antiglobulin reaction based on the finding that benzyl penicilloylated rabbit γ-globulin, specific for human erythrocytes, reacted specifically with erythrocytes but did not agglutinate them. In turn this complex reacted specifically with human penicilloyl antibody and it was then possible to titrate each immunoglobulin class by the addition of anti-immunoglobulin sera. The method described here was used to compare titres of penicilloyl specific immunoglobulins of the same class between different sera. The test was found to be less sensitive than the hapten modified bacteriophage reduction test but had the advantage that individual immunoglobulin classes could be compared. In the absence of a reliable method for the diagnosis of pencillin allergy, it is hoped that the technique described will be a useful addition to existing in vivo and in vitro methods of determining the antibody response of the patient to the penicilloyl group. PMID:4105475

  13. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    PubMed

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation. Copyright © 2015 Elsevier Ltd. All rights reserved.
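
    The core of sparse representation based classification, without the adaptive dictionary updates proposed in the paper, can be sketched as follows: code a test sample sparsely over a dictionary of training samples and assign the class with the smallest class-wise reconstruction residual. The data are synthetic placeholders for EEG feature vectors.

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(0)
      n_features, n_per_class = 30, 20

      # Synthetic two-class training features (stand-ins for EEG features); columns are dictionary atoms.
      class0 = rng.normal(0.0, 1.0, (n_features, n_per_class))
      class1 = rng.normal(1.0, 1.0, (n_features, n_per_class))
      D = np.hstack([class0, class1])
      labels = np.array([0] * n_per_class + [1] * n_per_class)

      test = rng.normal(1.0, 1.0, n_features)              # a test sample drawn from class 1

      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False)
      omp.fit(D, test)                                     # sparse code of the test sample over D
      coefs = omp.coef_

      residuals = []
      for c in (0, 1):
          x_c = np.where(labels == c, coefs, 0.0)          # keep only this class's coefficients
          residuals.append(np.linalg.norm(test - D @ x_c))
      print("predicted class:", int(np.argmin(residuals)), "residuals:", np.round(residuals, 3))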

  14. The Mixed Finite Element Multigrid Method for Stokes Equations

    PubMed Central

    Muzhinji, K.; Shateyi, S.; Motsa, S. S.

    2015-01-01

    The stable finite element discretization of the Stokes problem produces a symmetric indefinite system of linear algebraic equations. A variety of iterative solvers have been proposed for such systems in an attempt to construct efficient, fast, and robust solution techniques. This paper investigates one such iterative solver, the geometric multigrid solver, to find the approximate solution of the indefinite systems. The main ingredient of the multigrid method is the choice of an appropriate smoothing strategy. This study considers the application of different smoothers and compares their effects on the overall performance of the multigrid solver. We study the multigrid method with the following smoothers: distributed Gauss-Seidel, inexact Uzawa, preconditioned MINRES, and Braess-Sarazin type smoothers. A comparative study of the smoothers shows that the Braess-Sarazin smoothers enhance good performance of the multigrid method. We study the problem in a two-dimensional domain using the stable Hood-Taylor Q2-Q1 pair of finite rectangular elements. We also give the main theoretical convergence results. We present the numerical results to demonstrate the efficiency and robustness of the multigrid method and confirm the theoretical results. PMID:25945361

  15. Equivalence of the equilibrium and the nonequilibrium molecular dynamics methods for thermal conductivity calculations: From bulk to nanowire silicon

    NASA Astrophysics Data System (ADS)

    Dong, Haikuan; Fan, Zheyong; Shi, Libin; Harju, Ari; Ala-Nissila, Tapio

    2018-03-01

    Molecular dynamics (MD) simulations play an important role in studying heat transport in complex materials. The lattice thermal conductivity can be computed either using the Green-Kubo formula in equilibrium MD (EMD) simulations or using Fourier's law in nonequilibrium MD (NEMD) simulations. These two methods have not been systematically compared for materials with different dimensions and inconsistencies between them have been occasionally reported in the literature. Here we give an in-depth comparison of them in terms of heat transport in three allotropes of Si: three-dimensional bulk silicon, two-dimensional silicene, and quasi-one-dimensional silicon nanowire. By multiplying the correlation time in the Green-Kubo formula with an appropriate effective group velocity, we can express the running thermal conductivity in the EMD method as a function of an effective length and directly compare it to the length-dependent thermal conductivity in the NEMD method. We find that the two methods quantitatively agree with each other for all the systems studied, firmly establishing their equivalence in computing thermal conductivity.
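
    The EMD route referred to above rests on the Green-Kubo formula, kappa(t) = V / (k_B T^2) * integral_0^t <J(0) J(t')> dt' for one Cartesian component of the heat flux J. The sketch below computes the running integral from a heat-current time series; the series is synthetic noise, so only the mechanics are illustrated, not a physical conductivity, and normalization conventions for J differ between simulation codes.

      import numpy as np

      def running_kappa(J, dt, volume, temperature, kB=1.380649e-23):
          """Running thermal conductivity from the Green-Kubo formula (one Cartesian component)."""
          n = len(J)
          # Heat-current autocorrelation function <J(0) J(t)>, averaged over time origins.
          acf = np.array([np.mean(J[: n - lag] * J[lag:]) for lag in range(n // 4)])
          integral = np.cumsum(acf) * dt                   # simple rectangle-rule time integral
          return volume / (kB * temperature ** 2) * integral

      rng = np.random.default_rng(0)
      J = rng.normal(0.0, 1e-12, 4000)                     # synthetic heat-current samples (arbitrary units)
      kappa_t = running_kappa(J, dt=1e-15, volume=1e-26, temperature=300.0)
      print("running kappa at the longest correlation time:", kappa_t[-1])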

  16. A Cluster-then-label Semi-supervised Learning Approach for Pathology Image Classification.

    PubMed

    Peikari, Mohammad; Salama, Sherine; Nofech-Mozes, Sharon; Martel, Anne L

    2018-05-08

    Completely labeled pathology datasets are often challenging and time-consuming to obtain. Semi-supervised learning (SSL) methods are able to learn from fewer labeled data points with the help of a large number of unlabeled data points. In this paper, we investigated the possibility of using clustering analysis to identify the underlying structure of the data space for SSL. A cluster-then-label method was proposed to identify high-density regions in the data space which were then used to help a supervised SVM in finding the decision boundary. We have compared our method with other supervised and semi-supervised state-of-the-art techniques using two different classification tasks applied to breast pathology datasets. We found that compared with other state-of-the-art supervised and semi-supervised methods, our SSL method is able to improve classification performance when a limited number of labeled data instances are made available. We also showed that it is important to examine the underlying distribution of the data space before applying SSL techniques to ensure semi-supervised learning assumptions are not violated by the data.
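
    A rough sketch of the cluster-then-label idea, not the authors' exact pipeline: cluster all points, give each cluster the majority label of its few labeled members, and train a supervised SVM on the resulting expanded label set; the data here are synthetic blobs rather than pathology features.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_blobs
      from sklearn.svm import SVC

      X, y_true = make_blobs(n_samples=300, centers=2, random_state=0)
      labeled = np.zeros(len(X), dtype=bool)
      labeled[np.random.default_rng(0).choice(len(X), 10, replace=False)] = True  # only 10 labels known

      clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

      # Give every point the majority label of the labeled points in its cluster.
      y_expanded = np.empty(len(X), dtype=int)
      for c in np.unique(clusters):
          members = clusters == c
          votes = y_true[members & labeled]
          y_expanded[members] = np.bincount(votes).argmax() if votes.size else 0

      clf = SVC(kernel="rbf").fit(X, y_expanded)          # supervised SVM on the expanded labels
      print("accuracy against the true labels:", round(clf.score(X, y_true), 3))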

  17. An Assessment of Phylogenetic Tools for Analyzing the Interplay Between Interspecific Interactions and Phenotypic Evolution.

    PubMed

    Drury, J P; Grether, G F; Garland, T; Morlon, H

    2018-05-01

    Much ecological and evolutionary theory predicts that interspecific interactions often drive phenotypic diversification and that species phenotypes in turn influence species interactions. Several phylogenetic comparative methods have been developed to assess the importance of such processes in nature; however, the statistical properties of these methods have gone largely untested. Focusing mainly on scenarios of competition between closely-related species, we assess the performance of available comparative approaches for analyzing the interplay between interspecific interactions and species phenotypes. We find that many currently used statistical methods often fail to detect the impact of interspecific interactions on trait evolution, that sister-taxa analyses are particularly unreliable in general, and that recently developed process-based models have more satisfactory statistical properties. Methods for detecting predictors of species interactions are generally more reliable than methods for detecting character displacement. In weighing the strengths and weaknesses of different approaches, we hope to provide a clear guide for empiricists testing hypotheses about the reciprocal effect of interspecific interactions and species phenotypes and to inspire further development of process-based models.

  18. Orthorectification by Using Gpgpu Method

    NASA Astrophysics Data System (ADS)

    Sahin, H.; Kulur, S.

    2012-07-01

    Thanks to the nature of graphics processing, newly released products offer highly parallel processing units with high memory bandwidth and computational power of more than a teraflop. Modern GPUs are not only powerful graphics engines but also highly parallel programmable processors with very fast computing capabilities and high memory bandwidth compared to central processing units (CPUs). Data-parallel computation can be described briefly as mapping data elements to parallel processing threads. The rapid development of GPU programmability and capability has attracted the attention of researchers dealing with complex problems that require large amounts of computation, and this interest gave rise to the concepts of "General Purpose Computation on Graphics Processing Units (GPGPU)" and "stream processing". Graphics processors are powerful hardware that is cheap and widely available, so they have become an alternative to conventional processors: graphics chips that were once standard application hardware have been transformed into modern, powerful and programmable processors. The biggest problem is that graphics processing units use programming models that differ from conventional programming methods. Therefore, efficient GPU programming requires re-coding the existing algorithm with the limitations and structure of the graphics hardware in mind; multi-core graphics processors cannot be programmed with traditional methods, and event-driven programming approaches do not carry over to them. GPUs are especially effective when the same computing steps are repeated over many data elements and high accuracy is needed, which makes the computation both faster and more accurate, whereas CPUs, which execute essentially one computation at a time under flow control, are slower for such workloads. This study covers how general-purpose parallel programming and the computational power of GPUs can be used in photogrammetric applications, especially direct georeferencing. The direct georeferencing algorithm was coded using the GPGPU method and the CUDA (Compute Unified Device Architecture) programming language, and the results were compared with a traditional CPU implementation. In a second application, projective rectification was coded using the GPGPU method and CUDA; sample images of various sizes were processed and the results evaluated. The GPGPU method is particularly useful when the same computations are repeated over highly dense data, allowing the solution to be found quickly.

  19. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve fitting approach. Both model-based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally in focus, which facilitates finding stereo correspondences. In contrast to monocular visual odometry approaches, due to the calibration of the individual depth maps, the scale of the scene can be observed. Furthermore, due to the light-field information better tracking capabilities compared to the monocular case can be expected. As a result, the depth information gained by the plenoptic camera based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
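
    For a single scalar depth, the Kalman-like update mentioned above, merging per-micro-image virtual-depth estimates weighted by their variances, reduces to an inverse-variance fusion; the numbers below are invented for illustration.

      def fuse(depth_a, var_a, depth_b, var_b):
          """Kalman-like fusion of two scalar depth estimates with known variances."""
          w = var_b / (var_a + var_b)               # weight of estimate a
          depth = w * depth_a + (1.0 - w) * depth_b
          var = (var_a * var_b) / (var_a + var_b)   # fused variance is smaller than either input
          return depth, var

      # Hypothetical virtual-depth estimates of one scene point from two micro-images.
      depth, var = fuse(2.10, 0.04, 2.25, 0.09)
      print(f"fused virtual depth = {depth:.3f}, variance = {var:.4f}")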

  20. A comparative study of alumina-supported Ni catalysts prepared by photodeposition and impregnation methods on the catalytic ozonation of 2,4-dichlorophenoxyacetic acid

    NASA Astrophysics Data System (ADS)

    Rodríguez, Julia L.; Valenzuela, Miguel A.; Tiznado, Hugo; Poznyak, Tatiana; Chairez, Isaac; Magallanes, Diana

    2017-02-01

    The heterogeneous catalytic ozonation on unsupported and supported oxides has been successfully tested for the removal of several refractory compounds in aqueous solution. In this work, alumina-supported nickel catalysts prepared by photodeposition and impregnation methods were compared in the catalytic ozonation of 2,4-dichlorophenoxyacetic acid (2,4-D). The catalysts were characterized by high-resolution electron microscopy and X-ray photoelectron spectroscopy. The photochemical decomposition of Ni acetylacetonate to produce Ni(OH)2, NiO, and traces of Ni° deposited on alumina was achieved in the presence of benzophenone as a sensitizer. A similar surface composition was found with the impregnated catalyst after its reduction with hydrogen at 500 °C and exposed to ambient air. Results indicated a higher initial activity and maleic acid (byproduct) concentration with the photodeposited catalyst (1 wt% Ni) compared to the impregnated catalyst (3 wt% Ni). These findings suggest the use of the photodeposition method as a simple and reliable procedure for the preparation of supported metal oxide/metal catalysts under mild operating conditions.
