Sample records for difference method results

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rowe, M.D.; Pierce, B.L.

    This report presents results of tests of different final site selection methods used for siting large-scale facilities such as nuclear power plants. Test data are adapted from a nuclear power plant siting study conducted on Long Island, New York. The purpose of the tests is to determine whether or not different final site selection methods produce different results, and to obtain some understanding of the nature of any differences found. Decision rules and weighting methods are included. Decision rules tested are Weighting Summation, Power Law, Decision Analysis, Goal Programming, and Goal Attainment; weighting methods tested are Categorization, Ranking, Rating, Ratio Estimation, Metfessel Allocation, Indifference Tradeoff, Decision Analysis lottery, and Global Evaluation. Results show that different methods can, indeed, produce different results, but that the probability that they will do so is controlled by the structure of differences among the sites being evaluated. Differences in weights and suitability scores attributable to methods have reduced significance if the alternatives include one or two sites that are superior to all others in many attributes. The more tradeoffs there are among good and bad levels of different attributes at different sites, the more important are the specifics of methods to the final decision. 5 refs., 14 figs., 19 tabs.
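    As a rough illustration of how the decision rules above can disagree, the sketch below (with invented suitability scores and weights, not the report's data) ranks four candidate sites by Weighting Summation and by a Power Law rule:

```python
import numpy as np

# Hypothetical suitability scores for 4 candidate sites on 3 attributes
# (rows: sites, columns: attributes), each scaled to [0, 1].
scores = np.array([
    [0.9, 0.4, 0.7],
    [0.6, 0.8, 0.5],
    [0.7, 0.7, 0.6],
    [0.3, 0.9, 0.8],
])
weights = np.array([0.5, 0.3, 0.2])  # importance weights, summing to 1

# Weighting Summation: overall suitability is the weighted sum of scores.
additive = scores @ weights
print("additive ranking (best first):", np.argsort(-additive))

# Power Law (multiplicative) rule applied to the same data can rank sites
# differently, which is the kind of method-dependence the report tests.
multiplicative = np.prod(scores ** weights, axis=1)
print("power-law ranking (best first):", np.argsort(-multiplicative))
```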

  2. Examining mixing methods in an evaluation of a smoking cessation program.

    PubMed

    Betzner, Anne; Lawrenz, Frances P; Thao, Mao

    2016-02-01

    Three different methods were used in an evaluation of a smoking cessation study: surveys, focus groups, and phenomenological interviews. The results of each method were analyzed separately and then combined using both a pragmatic and dialectic stance to examine the effects of different approaches to mixing methods. Results show that the further apart the methods are philosophically, the more diverse the findings. Comparisons of decision maker opinions and costs of the different methods are provided along with recommendations for evaluators' uses of different methods. Copyright © 2015. Published by Elsevier Ltd.

  3. Improved Estimation of Cardiac Function Parameters Using a Combination of Independent Automated Segmentation Results in Cardiovascular Magnetic Resonance Imaging.

    PubMed

    Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe; Frouin, Frederique; Garreau, Mireille

    2015-01-01

    This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert.
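    STAPLE itself estimates each method's sensitivity and specificity by expectation-maximization; as a much simpler stand-in for the idea of combining independent segmentations, the sketch below fuses hypothetical binary masks by majority vote:

```python
import numpy as np

# Three hypothetical binary left-ventricle masks from different automated
# methods (1 = inside the contour). STAPLE weights each method by its
# estimated performance via EM; plain majority voting is the simplest
# combination baseline.
masks = np.array([
    [[0, 1, 1], [0, 1, 1], [0, 0, 1]],
    [[0, 1, 1], [1, 1, 1], [0, 0, 0]],
    [[0, 0, 1], [0, 1, 1], [0, 0, 1]],
])

votes = masks.sum(axis=0)
consensus = (votes >= 2).astype(int)  # pixel kept if most methods agree
print(consensus)
```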

  4. Improved Estimation of Cardiac Function Parameters Using a Combination of Independent Automated Segmentation Results in Cardiovascular Magnetic Resonance Imaging

    PubMed Central

    Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe

    2015-01-01

    This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert. PMID:26287691

  5. Financial time series analysis based on information categorization method

    NASA Astrophysics Data System (ADS)

    Tian, Qiang; Shang, Pengjian; Feng, Guochen

    2014-12-01

    This paper applies the information categorization method to the analysis of financial time series. The method examines the similarity of different sequences by calculating the distances between them. We apply it to quantify the similarity of different stock markets, and we report results for the US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results show how the similarity between different stock markets varies across time periods, and that the similarity of the two stock markets becomes larger after these two crises. We also obtain similarity results for 10 stock indices in three regions, showing that the method can distinguish the markets of different regions from the resulting phylogenetic trees. These results show that satisfactory information can be extracted from financial markets by this method, which can be applied not only to physiologic time series but also to financial time series.

  6. Estimating the mediating effect of different biomarkers on the relation of alcohol consumption with the risk of type 2 diabetes.

    PubMed

    Beulens, Joline W J; van der Schouw, Yvonne T; Moons, Karel G M; Boshuizen, Hendriek C; van der A, Daphne L; Groenwold, Rolf H H

    2013-04-01

    Moderate alcohol consumption is associated with a reduced risk of type 2 diabetes, but the biomarkers that explain this relation are unknown. The most commonly used method to estimate the proportion explained by a biomarker is the difference method; however, the influence of alcohol-biomarker interaction on its results is unclear. The G-estimation method has been proposed to assess the proportion explained more accurately, but how it compares with the difference method is unknown. In a case-cohort study of 2498 controls and 919 incident diabetes cases, we estimated the proportion of the relation between alcohol consumption and diabetes explained by different biomarkers, using the difference method and the sequential G-estimation method. Using the difference method, high-density lipoprotein cholesterol explained 78% (95% confidence interval [CI], 41-243) of the relation between alcohol and diabetes, whereas high-sensitivity C-reactive protein (-7.5%; CI, -36.4 to 1.8) and blood pressure (-6.9%; CI, -26.3 to -0.6) did not explain the relation. Interaction between alcohol and liver enzymes biased the proportion explained, with different outcomes for different levels of liver enzymes. The G-estimation method showed comparable results, although the proportions explained were lower. The relation between alcohol consumption and diabetes may be largely explained by increased high-density lipoprotein cholesterol but not by the other biomarkers. Ignoring exposure-mediator interactions may result in bias. The difference and G-estimation methods provide similar results. Copyright © 2013 Elsevier Inc. All rights reserved.
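    The arithmetic of the difference method can be shown with a toy linear model (a simplification: the study itself uses hazard ratios from a case-cohort design, and all coefficients below are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
alcohol = rng.normal(size=n)                       # exposure
hdl = 0.5 * alcohol + rng.normal(size=n)           # mediator raised by exposure
risk = -0.4 * hdl - 0.1 * alcohol + rng.normal(size=n)  # outcome

def exposure_coef(y, covariates):
    """Least-squares coefficient on the first covariate, with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(covariates))
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

b_crude = exposure_coef(risk, [alcohol])        # total effect of alcohol
b_adj = exposure_coef(risk, [alcohol, hdl])     # direct effect, mediator fixed

# Difference method: proportion explained = 1 - adjusted/crude.
print("proportion explained by HDL:", round(1 - b_adj / b_crude, 2))  # ~0.67
```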

  7. [Comparison of different methods in dealing with HIV viral load data with diversified missing value mechanism on HIV positive MSM].

    PubMed

    Jiang, Z; Dou, Z; Song, W L; Xu, J; Wu, Z Y

    2017-11-10

    Objective: To compare the results of different methods for handling HIV viral load (VL) data with various missing value mechanisms. Methods: We used SPSS 17.0 to simulate complete and missing data sets, with different missing value mechanisms, from HIV viral load data collected from MSM in 16 cities in China in 2013. Maximum likelihood estimation using the expectation-maximization (EM) algorithm, the regression method, mean imputation, the deletion method, and Markov Chain Monte Carlo (MCMC) were each used to fill in the missing data, and their results were compared with respect to distribution characteristics, accuracy and precision. Results: The HIV VL data could not be transformed to a normal distribution. All methods performed well on data that were missing completely at random (MCAR). For the other types of missing data, the regression and MCMC methods preserved the main characteristics of the original data. The means of the imputed data sets produced by all methods were close to that of the original data; the EM, regression, mean imputation and deletion methods underestimated VL, whereas MCMC overestimated it. Conclusion: MCMC can be used as the main imputation method for missing HIV viral load data, and the imputed data can serve as a reference for estimating the mean HIV VL in the investigated population.
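    A minimal sketch of the kind of simulation described, comparing mean imputation with regression imputation on data made missing completely at random (numpy stand-ins with invented parameters; the study used SPSS, EM, and MCMC):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
cd4 = rng.normal(500, 150, n)                       # fully observed covariate
log_vl = 6 - 0.004 * cd4 + rng.normal(0, 0.5, n)    # viral load depends on CD4

# MCAR: drop 30% of VL values completely at random.
miss = rng.random(n) < 0.3
vl_obs = np.where(miss, np.nan, log_vl)

# Mean imputation: replace every missing value with the observed mean.
mean_imp = np.where(miss, np.nanmean(vl_obs), vl_obs)

# Regression imputation: predict missing VL from CD4 using observed pairs.
b1, b0 = np.polyfit(cd4[~miss], log_vl[~miss], 1)
reg_imp = np.where(miss, b0 + b1 * cd4, vl_obs)

# Under MCAR both recover the mean well, consistent with the abstract.
print("true mean:           ", round(log_vl.mean(), 3))
print("mean imputation:     ", round(mean_imp.mean(), 3))
print("regression imputation:", round(reg_imp.mean(), 3))
```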

  8. Radiometer calibration methods and resulting irradiance differences: Radiometer calibration methods and resulting irradiance differences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Accurate solar radiation measured by radiometers depends on instrument performance specifications, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methodologies and resulting differences provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these methods calibrate radiometers indoors and some outdoors. To establish or understand the differences in calibration methodologies, we processed and analyzed field-measured data from radiometers deployed for 10 months at NREL's Solar Radiation Research Laboratory. These different methods of calibration resulted in a difference of +/-1% to +/-2% in solar irradiance measurements. Analyzing these differences will ultimately assist in determining the uncertainties of the field radiometer data and will help develop a consensus on a standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainties will help the accurate prediction of the output of planned solar conversion projects and improve the bankability of financing solar projects.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr

    Previous studies have proposed several methods for integrating characterized environmental impacts as a single index in life cycle assessment. Each of them, however, may lead to different results. This study presents internal and external normalization methods, weighting factors proposed by panel methods, and a monetary valuation based on an endpoint life cycle impact assessment method as the integration methods. Furthermore, this study investigates the differences among the integration methods and identifies the causes of the differences through a case study in which five elementary school buildings were used. As a result, when using internal normalization with weighting factors, the weighting factors had a significant influence on the total environmental impacts whereas the normalization had little influence on the total environmental impacts. When using external normalization with weighting factors, the normalization had a more significant influence on the total environmental impacts than the weighting factors. Due to such differences, the ranking of the five buildings varied depending on the integration methods. The ranking calculated by the monetary valuation method was significantly different from that calculated by the normalization and weighting process. The results aid decision makers in understanding the differences among these integration methods and, finally, help them select the method most appropriate for the goal at hand.

  10. [Do different interpretative methods used for evaluation of checkerboard synergy test affect the results?].

    PubMed

    Ozseven, Ayşe Gül; Sesli Çetin, Emel; Ozseven, Levent

    2012-07-01

    In recent years, owing to the prevalence of multi-drug resistant nosocomial bacteria, combination therapies are more frequently applied, and there is thus a growing need to investigate the in vitro activity of drug combinations against multi-drug resistant bacteria. Checkerboard synergy testing is among the most widely used standard techniques to determine the activity of antibiotic combinations; it is based on microdilution susceptibility testing of antibiotic combinations. Although this test has a standardised procedure, there are many different methods for interpreting the results. In many previous studies carried out with multi-drug resistant bacteria, different rates of synergy have been reported for various antibiotic combinations using the checkerboard technique. These differences might be attributed to different features of the strains; however, different synergy rates detected by the checkerboard method have also been reported in studies using the same drug combinations and the same bacterial species, suggesting that the differences might be due to the different methods used to interpret the synergy test results. In recent years, multi-drug resistant Acinetobacter baumannii has been the most commonly encountered nosocomial pathogen, especially in intensive-care units, and it has therefore been the subject of a considerable amount of research on antimicrobial combinations. In the present study, the in vitro activities of combinations frequently preferred for A.baumannii infections, imipenem plus ampicillin/sulbactam and meropenem plus ampicillin/sulbactam, were tested by the checkerboard synergy method against 34 multi-drug resistant A.baumannii isolates. Minimum inhibitory concentration (MIC) values for imipenem, meropenem and ampicillin/sulbactam were determined by the broth microdilution method. Subsequently, the activities of the two combinations were tested over the dilution range of 4 x MIC to 0.03 x MIC in 96-well checkerboard plates. The results were interpreted separately using four different interpretation methods frequently preferred by researchers, in order to detect to what extent the rates of synergistic, indifferent and antagonistic interactions were affected by the interpretation method. The differences between the interpretation methods were tested by chi-square analysis for each combination used. Statistically significant differences were detected between the four interpretation methods for the determination of synergistic and indifferent interactions (p< 0.0001). The highest rates of synergy were observed with both combinations using the method that takes the lowest fractional inhibitory concentration index of all the non-turbid wells along the turbidity/non-turbidity interface. There was no statistically significant difference between the four methods for the detection of antagonism (p> 0.05). In conclusion, although there is a standard procedure for checkerboard synergy testing, it fails to yield standard results owing to the different methods used to interpret its results. There is therefore a need to standardise the interpretation of checkerboard synergy testing. To determine the most appropriate interpretation method, further studies are required that investigate the clinical benefits of synergistic combinations and compare the consistency of the results with those of other standard combination tests, such as time-kill studies.
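    The fractional inhibitory concentration (FIC) index that underlies checkerboard interpretation is simple to compute; the cut-offs below are commonly used conventions (exact definitions vary between studies, which is precisely the interpretation problem the paper describes), and the MIC values are hypothetical:

```python
def fic_index(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    """Fractional inhibitory concentration index for one checkerboard well."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(fici):
    # Commonly used cut-offs; synergy <= 0.5, antagonism > 4.
    if fici <= 0.5:
        return "synergy"
    if fici <= 4:
        return "indifference"
    return "antagonism"

# Hypothetical MICs (mg/L): drug A alone 32, drug B alone 64,
# inhibition in combination at 4 and 16.
fici = fic_index(4, 32, 16, 64)
print(fici, "->", interpret(fici))  # 0.375 -> synergy
```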

  11. Different methods to analyze stepped wedge trial designs revealed different aspects of intervention effects.

    PubMed

    Twisk, J W R; Hoogendijk, E O; Zwijsen, S A; de Boer, M R

    2016-04-01

    Within epidemiology, a stepped wedge trial design (i.e., a one-way crossover trial in which several arms start the intervention at different time points) is increasingly popular as an alternative to a classical cluster randomized controlled trial. Despite this increasing popularity, there is wide variation in the methods used to analyze data from a stepped wedge trial design. Four linear mixed models, chosen because they have been used frequently in practice, were applied to two example data sets. Method 1 compares all the intervention measurements with the control measurements. Method 2 treats the intervention variable as a time-independent categorical variable, comparing the different arms with each other. In method 3, the intervention variable is a time-dependent categorical variable comparing groups with different numbers of intervention measurements, whereas in method 4, the changes in the outcome variable between subsequent measurements are analyzed. In the first example data set, methods 1 and 3 showed a strong positive intervention effect, which disappeared after adjusting for time; method 2 showed an inverse intervention effect, whereas method 4 did not show a significant effect at all. In the second example data set, the results were the opposite: methods 2 and 4 showed significant intervention effects, whereas the other two methods did not, and for method 4 the intervention effect attenuated after adjustment for time. Different methods to analyze data from a stepped wedge trial design reveal different aspects of a possible intervention effect. The choice of method partly depends on the type of intervention and its possible time-dependent effect. Furthermore, it is advisable to combine the results of the different methods to obtain an interpretable overall result. Copyright © 2016 Elsevier Inc. All rights reserved.
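    A sketch of method 1 on simulated stepped wedge data (hypothetical cluster counts and effect sizes), showing how adjusting for calendar time can change the estimated intervention effect when a secular trend is present:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
clusters, periods = 6, 4
rows = []
for c in range(clusters):
    step = c % (periods - 1) + 1        # period at which cluster c crosses over
    u = rng.normal(0, 0.5)              # cluster random effect
    for t in range(periods):
        treated = int(t >= step)
        for _ in range(20):             # 20 subjects per cluster-period
            y = 1.0 * treated + 0.3 * t + u + rng.normal()
            rows.append({"cluster": c, "time": t, "treated": treated, "y": y})
df = pd.DataFrame(rows)

# Method 1: all intervention vs all control measurements, no time adjustment.
m1 = smf.mixedlm("y ~ treated", df, groups=df["cluster"]).fit()
# Same comparison adjusted for calendar time, which removes the secular
# trend that the unadjusted model mistakes for intervention effect.
m1t = smf.mixedlm("y ~ treated + C(time)", df, groups=df["cluster"]).fit()
print("unadjusted:", round(m1.params["treated"], 2))
print("time-adjusted:", round(m1t.params["treated"], 2))  # closer to true 1.0
```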

  12. Finite difference and Runge-Kutta methods for solving vibration problems

    NASA Astrophysics Data System (ADS)

    Lintang Renganis Radityani, Scolastika; Mungkasi, Sudi

    2017-11-01

    The vibration of a storey building can be modelled as a system of second order ordinary differential equations. If the number of floors of a building is large, the result is a large scale system of second order ordinary differential equations, which is difficult to solve; and even when it can be solved, the solution may not be accurate. In this paper we therefore seek accurate methods for solving vibration problems. We compare the performance of numerical finite difference and Runge-Kutta methods for solving large scale systems of second order ordinary differential equations. The finite difference methods include the forward and central differences; the Runge-Kutta methods include the Euler and Heun methods. Our results show that the central finite difference and the Heun methods produce more accurate solutions than the forward finite difference and the Euler methods.
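    A minimal sketch comparing a central finite difference scheme with the Heun method on a single-storey (one degree of freedom) version of the problem, where the exact solution is known (all parameters invented):

```python
import numpy as np

# Undamped single-storey model: m x'' + k x = 0, exact solution x = cos(w t).
m, k = 1.0, 4.0
w = np.sqrt(k / m)
dt, steps = 0.01, 1000
t = np.arange(steps + 1) * dt

# Central finite difference on x'' = -(k/m) x.
x_fd = np.empty(steps + 1)
x_fd[0] = 1.0
x_fd[1] = 1.0 - 0.5 * (k / m) * dt**2       # Taylor start with x'(0) = 0
for i in range(1, steps):
    x_fd[i + 1] = 2 * x_fd[i] - x_fd[i - 1] - (k / m) * x_fd[i] * dt**2

# Heun (explicit trapezoidal) method on the first-order system [x, v].
def f(s):
    return np.array([s[1], -(k / m) * s[0]])

s = np.array([1.0, 0.0])
x_heun = [s[0]]
for _ in range(steps):
    pred = s + dt * f(s)                    # Euler predictor
    s = s + 0.5 * dt * (f(s) + f(pred))     # trapezoidal corrector
    x_heun.append(s[0])

exact = np.cos(w * t)
print("max error, central FD:", np.abs(x_fd - exact).max())
print("max error, Heun:      ", np.abs(np.array(x_heun) - exact).max())
```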

  13. Testing and Validation of Computational Methods for Mass Spectrometry.

    PubMed

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets (http://compms.org/RefData) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  14. Evaluation of finite difference and FFT-based solutions of the transport of intensity equation.

    PubMed

    Zhang, Hongbo; Zhou, Wen-Jing; Liu, Ying; Leber, Donald; Banerjee, Partha; Basunia, Mahmudunnabi; Poon, Ting-Chung

    2018-01-01

    A finite difference method is proposed for solving the transport of intensity equation. Simulation results show that although slower than fast Fourier transform (FFT)-based methods, finite difference methods are able to reconstruct the phase with better accuracy due to relaxed assumptions for solving the transport of intensity equation relative to FFT methods. Finite difference methods are also more flexible than FFT methods in dealing with different boundary conditions.

  15. Comparison of preprocessing methods and storage times for touch DNA samples

    PubMed Central

    Dong, Hui; Wang, Jing; Zhang, Tao; Ge, Jian-ye; Dong, Ying-qiang; Sun, Qi-fan; Liu, Chao; Li, Cai-xia

    2017-01-01

    Aim: To select appropriate preprocessing methods for different substrates by comparing the effects of four preprocessing methods on touch DNA samples, and to determine the effect of various storage times on the results of touch DNA sample analysis. Methods: Hand touch DNA samples were used to investigate detection and inspection results for DNA on different substrates. Four preprocessing methods were compared: the direct cutting method, the stubbing procedure, the double swab technique, and the vacuum cleaner method. DNA was extracted from mock samples with each of the four methods, and the best-performing protocol was further used to compare performance after various storage times. DNA extracted from all samples was quantified and amplified using standard procedures. Results: The amounts of DNA and the numbers of alleles detected on porous substrates were greater than those on non-porous substrates. The performance of the four preprocessing methods varied with the substrate: the direct cutting method displayed advantages for porous substrates, and the vacuum cleaner method was advantageous for non-porous substrates. No significant degradation trend was observed as storage time increased. Conclusion: Different substrates require different preprocessing methods in order to obtain the highest DNA amount and allele number from touch DNA samples. This study provides a theoretical basis for exploring touch DNA samples and may be used as a reference when dealing with touch DNA samples in casework. PMID:28252870

  16. Towards the optimal fusion of high-resolution Digital Elevation Models for detailed urban flood assessment

    NASA Astrophysics Data System (ADS)

    Leitão, J. P.; de Sousa, L. M.

    2018-06-01

    Newly available, more detailed and accurate elevation data sets, such as Digital Elevation Models (DEMs) generated on the basis of imagery from terrestrial LiDAR (Light Detection and Ranging) systems or Unmanned Aerial Vehicles (UAVs), can be used to improve flood-model input data and consequently increase the accuracy of flood modelling results. This paper presents the first application of the MBlend merging method and assesses the impact of combining different DEMs on flood modelling results. It was demonstrated that different raster merging methods can have different and substantial impacts on these results. In addition to the influence of the method used to merge the original DEMs, the magnitude of the impact also depends on (i) the systematic horizontal and vertical differences between the DEMs, and (ii) the orientation between the DEM boundary and the terrain slope. The water depth and flow velocity differences between the flood modelling results obtained using the reference DEM and the merged DEMs ranged from -9.845 to 0.002 m and from 0.003 to 0.024 m s⁻¹, respectively; differences of this size can have a significant impact on flood hazard estimates. In most of the cases investigated in this study, the differences from the reference DEM results were smaller for the MBlend method than for the two conventional methods. This study highlighted the importance of DEM merging when conducting flood modelling and provided hints on the best DEM merging methods to use.

  17. Comparison of risk assessment procedures used in OCRA and ULRA methods

    PubMed Central

    Roman-Liu, Danuta; Groborz, Anna; Tokarski, Tomasz

    2013-01-01

    The aim of this study was to analyse the convergence of two methods by comparing exposure and the assessed risk of developing musculoskeletal disorders at 18 repetitive task workstations. The already established occupational repetitive actions (OCRA) method and the recently developed upper limb risk assessment (ULRA) method produce correlated results (R = 0.84, p = 0.0001). A discussion of the factors that influence the values of the OCRA index and ULRA's repetitive task indicator shows that both similarities and differences in the results produced by the two methods can arise from the concepts that underlie them. The assessment procedure and the mathematical calculations that the basic parameters are subjected to are crucial to the results of risk assessment; the way the basic parameters are defined influences the assessment of exposure and risk to a lesser degree. The analysis also showed that large differences in load indicator values do not always result in different risk zones. Practitioner Summary: We focused on comparing methods that, even though based on different concepts, serve the same purpose. The results proved that different methods with different assumptions can produce similar assessments of upper limb load; sharp criteria in risk assessment are not the best solution. PMID:24041375

  18. Ensemble Methods for MiRNA Target Prediction from Expression Data

    PubMed Central

    Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong

    2015-01-01

    Background: microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved to outperform each of their individual component methods in theory. Results: In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and the ensemble methods perform better than the individual methods across different datasets. The ensemble method, Pearson+IDA+Lasso, which combines methods in different approaches, including a correlation method, a causal inference method, and a regression method, is the best performed ensemble method in this study. Further analysis of the results of this ensemble method shows that the ensemble method can obtain more targets which could not be found by any of the single methods, and the discovered targets are more statistically significant and functionally enriched. The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials. PMID:26114448
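    A reduced stand-in for the ensemble idea (the paper combines Pearson, IDA, and Lasso; here two simple correlation-based scorers are combined by averaging ranks on simulated expression data with invented parameters):

```python
import numpy as np

rng = np.random.default_rng(3)
samples, genes = 100, 30
mirna = rng.normal(size=samples)
mrna = rng.normal(size=(samples, genes))
mrna[:, :5] -= 0.8 * mirna[:, None]      # genes 0-4 are true targets (repressed)

def rank(v):                              # rank 0 = strongest candidate
    return np.argsort(np.argsort(-v))

# Individual method 1: Pearson correlation strength.
pearson = np.abs([np.corrcoef(mirna, mrna[:, g])[0, 1] for g in range(genes)])

# Individual method 2: Spearman (rank) correlation strength, a stand-in
# for a second predictor with different assumptions.
r = np.argsort(np.argsort(mirna))
spearman = np.abs([np.corrcoef(r, np.argsort(np.argsort(mrna[:, g])))[0, 1]
                   for g in range(genes)])

# Borda-style ensemble: average the ranks given by the individual methods.
ensemble = (rank(pearson) + rank(spearman)) / 2
print("top 5 ensemble candidates:", np.argsort(ensemble)[:5])
```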

  19. Comparability of river suspended-sediment sampling and laboratory analysis methods

    USGS Publications Warehouse

    Groten, Joel T.; Johnson, Gregory D.

    2018-03-06

    Accurate measurements of suspended sediment, a leading water-quality impairment in many Minnesota rivers, are important for managing and protecting water resources; however, water-quality standards for suspended sediment in Minnesota are based on grab field sampling and total suspended solids (TSS) laboratory analysis methods that have underrepresented concentrations of suspended sediment in rivers compared to U.S. Geological Survey equal-width-increment or equal-discharge-increment (EWDI) field sampling and suspended sediment concentration (SSC) laboratory analysis methods. Because of this underrepresentation, the U.S. Geological Survey, in collaboration with the Minnesota Pollution Control Agency, collected concurrent grab and EWDI samples at eight sites to compare results obtained using different combinations of field sampling and laboratory analysis methods. Study results determined that grab field sampling and TSS laboratory analysis results were biased substantially low compared to EWDI sampling and SSC laboratory analysis results, respectively. Differences in both field sampling and laboratory analysis methods caused grab and TSS results to be biased substantially low, with the contribution of the laboratory analysis methods slightly greater than that of the field sampling methods. Sand-sized particles had a strong effect on the comparability of the field sampling and laboratory analysis methods: grab field sampling and TSS laboratory analysis methods fail to capture most of the sand being transported by the stream, and the difference is smaller between samples collected with grab field sampling and analyzed for TSS and the fine fraction of SSC. Even though differences are present, the strong correlations between SSC and TSS concentrations provide the opportunity to develop site-specific relations to address transport processes not captured by grab field sampling and TSS laboratory analysis methods.
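    A sketch of fitting such a site-specific relation (hypothetical paired concentrations; the log-log form is an assumption for illustration, not the USGS protocol):

```python
import numpy as np

# Hypothetical paired results (mg/L) from concurrent sampling at one site.
tss = np.array([40, 85, 120, 210, 300, 420])    # grab sampling + TSS analysis
ssc = np.array([55, 118, 170, 295, 430, 610])   # EWDI sampling + SSC analysis

# Site-specific relation SSC = a * TSS^b, fit as a line in log space,
# a common choice for concentration data (an assumption here).
b, log_a = np.polyfit(np.log(tss), np.log(ssc), 1)
print(f"SSC ~ {np.exp(log_a):.2f} * TSS^{b:.2f}")
```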

  20. Pirogow's Amputation: A Modification of the Operation Method

    PubMed Central

    Bueschges, M.; Muehlberger, T.; Mauss, K. L.; Bruck, J. C.; Ottomann, C.

    2013-01-01

    Introduction. Pirogow's amputation at the ankle presents a valuable alternative to lower leg amputation for patients with the corresponding indications. Although this method offers the ability to stay mobile without the use of a prosthesis, it is rarely performed. This paper proposes a modification of the operation method of the Pirogow amputation. The results of the modified operation method in ten patients were objectified 12 months after the operation using a patient questionnaire (Ankle Score). Material and Methods. We modified the original method by rotating the calcaneus. To fix the calcaneus to the tibia, Kirschner wire and a 3/0 spongiosa tension screw as well as an external fixator were used. Results. 70% of those questioned who were amputated following the modified Pirogow method indicated an excellent or very good overall result, whereas in the control group (original Pirogow amputation) only 40% reported an excellent or very good result. In addition, the level of pain experienced one year after the operation differed in favour of the group operated on with the modified method. Furthermore, the two groups showed differences in radiological results, postoperative leg length difference, and postoperative mobility. Conclusion. The modified Pirogow amputation presents a valuable alternative to the original amputation method for patients with the corresponding indications. The benefits are the significantly reduced pain, the reduction in radiological complications, the increase in mobility without a prosthesis, and the reduction of postoperative leg length difference. PMID:23606976

  21. Methods for using clinical laboratory test results as baseline confounders in multi-site observational database studies when missing data are expected.

    PubMed

    Raebel, Marsha A; Shetterly, Susan; Lu, Christine Y; Flory, James; Gagne, Joshua J; Harrell, Frank E; Haynes, Kevin; Herrinton, Lisa J; Patorno, Elisabetta; Popovic, Jennifer; Selvan, Mano; Shoaibi, Azadeh; Wang, Xingmei; Roy, Jason

    2016-07-01

    Our purpose was to quantify missing baseline laboratory results, assess predictors of missingness, and examine the performance of missing data methods. Using the Mini-Sentinel Distributed Database from three sites, we selected three exposure-outcome scenarios with laboratory results as baseline confounders. We compared hazard ratios (HRs) or risk differences (RDs) and 95% confidence intervals (CIs) from models that omitted laboratory results, included only available results (complete cases), and included results after applying missing data methods (multiple imputation [MI] regression, MI predictive mean matching [PMM], indicator). Scenario 1 considered glucose among second-generation antipsychotic users and diabetes. Across sites, glucose was available for 27.7-58.9%. Results differed between complete case and missing data models (e.g., olanzapine: HR 0.92 [CI 0.73, 1.12] vs 1.02 [0.90, 1.16]). Across-site models employing different MI approaches provided similar HRs and CIs; site-specific models provided differing estimates. Scenario 2 evaluated creatinine among individuals starting high versus low dose lisinopril and hyperkalemia. Creatinine availability: 44.5-79.0%. Results differed between complete case and missing data models (e.g., HR 0.84 [CI 0.77, 0.92] vs 0.88 [0.83, 0.94]). HRs and CIs were identical across MI methods. Scenario 3 examined international normalized ratio (INR) among warfarin users starting interacting versus noninteracting antimicrobials and bleeding. INR availability: 20.0-92.9%. Results differed between ignoring INR and including INR using missing data methods (e.g., RD 0.05 [CI -0.03, 0.13] vs 0.09 [0.00, 0.18]). Indicator and PMM methods gave similar estimates. Multi-site studies must consider site variability in missing data. Different missing data methods performed similarly. Copyright © 2016 John Wiley & Sons, Ltd.
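    A sketch of the missing-indicator approach on simulated data (hypothetical variables and effect sizes; the study's analyses used survival and risk-difference models):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 3000
dose_high = rng.integers(0, 2, n)                  # exposure
creat = rng.normal(1.0, 0.25, n)                   # baseline lab confounder
y = 0.5 * dose_high + 0.8 * creat + rng.normal(0, 1, n)

observed = rng.random(n) < 0.6                     # lab result present for ~60%

# Missing-indicator method: impute a constant (here the observed mean) and
# add a missingness flag, so complete and incomplete cases both contribute.
obs_mean = np.nanmean(np.where(observed, creat, np.nan))
creat_filled = np.where(observed, creat, obs_mean)
flag = (~observed).astype(float)

X = np.column_stack([np.ones(n), dose_high, creat_filled, flag])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
# With data missing completely at random this recovers the true effect (0.5).
print("exposure effect, indicator method:", round(beta[1], 2))
```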

  22. How to Quantify Penile Corpus Cavernosum Structures with Histomorphometry: Comparison of Two Methods

    PubMed Central

    Felix-Patrício, Bruno; De Souza, Diogo Benchimol; Gregório, Bianca Martins; Costa, Waldemar Silva; Sampaio, Francisco José

    2015-01-01

    The use of morphometrical tools in biomedical research permits the accurate comparison of specimens subjected to different conditions, and the surface density of structures is commonly used for this purpose. The traditional point-counting method is reliable but time-consuming, with computer-aided methods being proposed as an alternative. The aim of this study was to compare the surface density data of penile corpus cavernosum trabecular smooth muscle in different groups of rats, measured by two observers using the point-counting or color-based segmentation method. Ten normotensive and 10 hypertensive male rats were used in this study. Rat penises were processed to obtain smooth muscle immunostained histological slices and photomicrographs captured for analysis. The smooth muscle surface density was measured in both groups by two different observers by the point-counting method and by the color-based segmentation method. Hypertensive rats showed an increase in smooth muscle surface density by the two methods, and no difference was found between the results of the two observers. However, surface density values were higher by the point-counting method. The use of either method did not influence the final interpretation of the results, and both proved to have adequate reproducibility. However, as differences were found between the two methods, results obtained by either method should not be compared. PMID:26413547
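    The two surface density estimators can be mimicked on a synthetic binary image: pixel counting stands in for color-based segmentation, and a coarse regular grid stands in for point counting (image and grid spacing are invented):

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical binary image: 1 = immunostained smooth muscle, 0 = background.
img = (rng.random((512, 512)) < 0.35).astype(int)

# Color-based segmentation estimate: fraction of positive pixels.
pixel_fraction = img.mean()

# Point counting: overlay a coarse regular grid and count hits on muscle.
grid = img[::32, ::32]                    # one test point every 32 pixels
point_fraction = grid.mean()

print(f"pixel counting: {pixel_fraction:.3f}")
print(f"point counting: {point_fraction:.3f} (n={grid.size} points)")
```

    On random data the two estimates agree in expectation; with real tissue they can differ systematically, which is why the paper warns against comparing results across methods.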

  23. How to Quantify Penile Corpus Cavernosum Structures with Histomorphometry: Comparison of Two Methods.

    PubMed

    Felix-Patrício, Bruno; De Souza, Diogo Benchimol; Gregório, Bianca Martins; Costa, Waldemar Silva; Sampaio, Francisco José

    2015-01-01

    The use of morphometrical tools in biomedical research permits the accurate comparison of specimens subjected to different conditions, and the surface density of structures is commonly used for this purpose. The traditional point-counting method is reliable but time-consuming, with computer-aided methods being proposed as an alternative. The aim of this study was to compare the surface density data of penile corpus cavernosum trabecular smooth muscle in different groups of rats, measured by two observers using the point-counting or color-based segmentation method. Ten normotensive and 10 hypertensive male rats were used in this study. Rat penises were processed to obtain smooth muscle immunostained histological slices and photomicrographs captured for analysis. The smooth muscle surface density was measured in both groups by two different observers by the point-counting method and by the color-based segmentation method. Hypertensive rats showed an increase in smooth muscle surface density by the two methods, and no difference was found between the results of the two observers. However, surface density values were higher by the point-counting method. The use of either method did not influence the final interpretation of the results, and both proved to have adequate reproducibility. However, as differences were found between the two methods, results obtained by either method should not be compared.

  24. Simultaneous determination of binary mixture of amlodipine besylate and atenolol based on dual wavelengths

    NASA Astrophysics Data System (ADS)

    Lamie, Nesrine T.

    2015-10-01

    Four accurate, precise, and sensitive spectrophotometric methods are developed for the simultaneous determination of a binary mixture of amlodipine besylate (AM) and atenolol (AT). AM is determined at its λmax of 360 nm in the zero-order spectrum (0D), while AT can be determined by four different methods. Method (A) is absorption factor (AF). Method (B) is the new ratio difference method (RD), which measures the difference in amplitudes between 210 and 226 nm. Method (C) is the novel constant center spectrophotometric method (CC). Method (D) is mean centering of the ratio spectra (MCR) at 284 nm. The methods are tested by analyzing synthetic mixtures of the cited drugs, and they are applied to their commercial pharmaceutical preparation. The validity of the results is assessed by applying the standard addition technique. The results obtained are found to agree statistically with those obtained by official methods, showing no significant difference with respect to accuracy and precision.
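    A sketch of the ratio difference idea on synthetic spectra (the Gaussian band shapes and concentrations are invented): dividing the mixture spectrum by the AM spectrum turns the AM contribution into a constant, which an amplitude difference at two wavelengths cancels exactly:

```python
import numpy as np

wl = np.arange(200, 320)                       # wavelength grid, nm

def band(center, width):                       # synthetic absorptivity spectrum
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

eps_am, eps_at = band(238, 25), band(224, 12)  # hypothetical AM and AT bands
c_am, c_at = 0.8, 1.5                          # "unknown" concentrations
mix = c_am * eps_am + c_at * eps_at

# Ratio spectrum: mix / eps_am = c_am + c_at * (eps_at / eps_am).
ratio = mix / eps_am
i1, i2 = np.argmin(np.abs(wl - 210)), np.argmin(np.abs(wl - 226))
rd_mix = ratio[i1] - ratio[i2]                 # the constant c_am cancels
rd_unit = (eps_at / eps_am)[i1] - (eps_at / eps_am)[i2]  # per unit of AT
print("recovered AT concentration:", round(rd_mix / rd_unit, 3))  # 1.5
```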

  25. The Comparison of Matching Methods Using Different Measures of Balance: Benefits and Risks Exemplified within a Study to Evaluate the Effects of German Disease Management Programs on Long-Term Outcomes of Patients with Type 2 Diabetes.

    PubMed

    Fullerton, Birgit; Pöhlmann, Boris; Krohn, Robert; Adams, John L; Gerlach, Ferdinand M; Erler, Antje

    2016-10-01

    To present a case study on how to compare various matching methods applying different measures of balance, and to point out some pitfalls involved in relying on such measures. Administrative claims data from a German statutory health insurance fund covering the years 2004-2008 were used. We applied three different covariate balance diagnostics to a choice of 12 different matching methods used to evaluate the effectiveness of the German disease management program for type 2 diabetes (DMPDM2), and further compared the effect estimates resulting from applying these different matching techniques in the evaluation of the DMPDM2. The choice of balance measure leads to different conclusions about the performance of the applied matching methods. Exact matching methods performed well across all measures of balance but resulted in the exclusion of many observations, leading to a change in the baseline characteristics of the study sample and also in the effect estimate of the DMPDM2. All propensity-score-based methods showed similar effect estimates. Applying a higher matching ratio and using a larger variable set generally resulted in better balance. Using a generalized boosted model instead of logistic regression showed slightly better performance for balance diagnostics that take into account imbalances at higher moments. Best practice should include the application of several matching methods and thorough balance diagnostics. Applying matching techniques can provide a useful preprocessing step to reveal areas of the data that lack common support, and the use of different balance diagnostics can be helpful for interpreting the different effect estimates found with different matching methods. © Health Research and Educational Trust.
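    One of the balance diagnostics commonly used in such comparisons is the standardized mean difference; a minimal sketch with hypothetical age distributions:

```python
import numpy as np

def smd(x_treated, x_control):
    """Standardized mean difference, a common covariate balance diagnostic."""
    pooled_sd = np.sqrt((x_treated.var(ddof=1) + x_control.var(ddof=1)) / 2)
    return (x_treated.mean() - x_control.mean()) / pooled_sd

rng = np.random.default_rng(5)
age_dmp = rng.normal(66, 9, 800)     # hypothetical DMP participants
age_ctrl = rng.normal(62, 10, 800)   # hypothetical unmatched controls

print("SMD before matching:", round(smd(age_dmp, age_ctrl), 2))
# After a good match the same diagnostic should be near 0; |SMD| < 0.1 is
# a commonly used rule of thumb for acceptable balance.
```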

  26. Task exposures in an office environment: a comparison of methods.

    PubMed

    Van Eerd, Dwayne; Hogg-Johnson, Sheilah; Mazumder, Anjali; Cole, Donald; Wells, Richard; Moore, Anne

    2009-10-01

    Task-related factors such as frequency and duration are associated with musculoskeletal disorders in office settings. The primary objective was to compare various task recording methods as measures of exposure in an office workplace. A total of 41 workers from different jobs were recruited from a large urban newspaper (71% female, mean age 41 years, SD 9.6). Questionnaire, task diary, direct observation, and video methods were used to record tasks, with a common set of task codes used across methods. The different methods yielded different estimates of task duration, number of tasks, and task transitions. Self-report methods did not consistently produce longer task duration estimates, and methodological issues could explain some of the differences in estimates observed between methods. It was concluded that different task recording methods produce different estimates of exposure, likely because they capture different exposure constructs. This work addresses issues of exposure measurement in office environments and is of relevance to ergonomists and researchers interested in how best to assess the risk of injury among office workers. The paper discusses the trade-offs between precision, accuracy, and burden in the collection of computer task-based exposure measures and the different underlying constructs captured by each method.

  27. A simulation-based evaluation of methods for inferring linear barriers to gene flow

    Treesearch

    Christopher Blair; Dana E. Weigel; Matthew Balazik; Annika T. H. Keeley; Faith M. Walker; Erin Landguth; Sam Cushman; Melanie Murphy; Lisette Waits; Niko Balkenhol

    2012-01-01

    Different analytical techniques used on the same data set may lead to different conclusions about the existence and strength of genetic structure. Therefore, reliable interpretation of the results from different methods depends on the efficacy and reliability of different statistical methods. In this paper, we evaluated the performance of multiple analytical methods to...

  28. Wavelet analysis in ecology and epidemiology: impact of statistical tests

    PubMed Central

    Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario

    2014-01-01

    Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the ‘beta-surrogate’ method. PMID:24284892
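    The point about resampling choice can be reproduced in a few lines: for an autocorrelated series with no true periodicity, white-noise surrogates make the largest spectral peak look significant while red-noise (AR(1)) surrogates do not (a sketch with invented parameters, not the paper's wavelet-based statistics):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 512

def ar1(a, size):
    """AR(1) series with unit innovations."""
    e = rng.normal(size=size)
    s = np.empty(size)
    s[0] = e[0]
    for i in range(1, size):
        s[i] = a * s[i - 1] + e[i]
    return s

series = ar1(0.8, n)            # autocorrelated noise, no true periodicity

def peak_power(x):
    return np.abs(np.fft.rfft(x - x.mean()))[1:].max()

stat = peak_power(series)

# Null 1: white-noise surrogates (shuffling keeps only the marginal
# distribution and destroys the autocorrelation).
white = np.array([peak_power(rng.permutation(series)) for _ in range(500)])

# Null 2: red-noise surrogates (AR(1) matched to the series' lag-1
# autocorrelation and variance, preserving its broad spectral shape).
a = np.corrcoef(series[:-1], series[1:])[0, 1]
scale = series.std() * np.sqrt(1 - a**2)
red = np.array([peak_power(scale * ar1(a, n)) for _ in range(500)])

print("p vs white noise:", (white >= stat).mean())  # spuriously 'significant'
print("p vs red noise:  ", (red >= stat).mean())    # correctly unremarkable
```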

  29. Wavelet analysis in ecology and epidemiology: impact of statistical tests.

    PubMed

    Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario

    2014-02-06

    Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the 'beta-surrogate' method.

  30. A Preliminary Study of the Effectiveness of Different Recitation Teaching Methods

    NASA Astrophysics Data System (ADS)

    Endorf, Robert J.; Koenig, Kathleen M.; Braun, Gregory A.

    2006-02-01

    We present preliminary results from a comparative study of student understanding for students who attended recitation classes which used different teaching methods. Student volunteers from our introductory calculus-based physics course attended a special recitation class that was taught using one of four different teaching methods. A total of 272 students were divided into approximately equal groups for each method. Students in each class were taught the same topic, "Changes in energy and momentum," from Tutorials in Introductory Physics. The different teaching methods varied in the amount of student and teacher engagement. Student understanding was evaluated through pretests and posttests given at the recitation class. Our results demonstrate the importance of the instructor's role in teaching recitation classes. The most effective teaching method was for students working in cooperative learning groups with the instructors questioning the groups using Socratic dialogue. These results provide guidance and evidence for the teaching methods which should be emphasized in training future teachers and faculty members.

  31. Ensemble Methods for MiRNA Target Prediction from Expression Data.

    PubMed

    Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong

    2015-01-01

    microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved to outperform each of their individual component methods in theory. In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and the ensemble methods perform better than the individual methods across different datasets. The ensemble method, Pearson+IDA+Lasso, which combines methods in different approaches, including a correlation method, a causal inference method, and a regression method, is the best performed ensemble method in this study. Further analysis of the results of this ensemble method shows that the ensemble method can obtain more targets which could not be found by any of the single methods, and the discovered targets are more statistically significant and functionally enriched. The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials.

  32. Evaluation of Lysis Methods for the Extraction of Bacterial DNA for Analysis of the Vaginal Microbiota.

    PubMed

    Gill, Christina; van de Wijgert, Janneke H H M; Blow, Frances; Darby, Alistair C

    2016-01-01

    Recent studies on the vaginal microbiota have employed molecular techniques such as 16S rRNA gene sequencing to describe the bacterial community as a whole. These techniques require the lysis of bacterial cells to release DNA before purification and PCR amplification of the 16S rRNA gene. Currently, methods for the lysis of bacterial cells are not standardised, and there is potential for introducing bias into the results if some bacterial species are lysed less efficiently than others. This study aimed to compare the results of vaginal microbiota profiling using four different pretreatment methods for the lysis of bacterial samples (30 min of lysis with lysozyme; 16 hours of lysis with lysozyme; 60 min of lysis with a mixture of lysozyme, mutanolysin, and lysostaphin; and 30 min of lysis with lysozyme followed by bead beating) prior to chemical and enzyme-based DNA extraction with a commercial kit. After extraction, DNA yield did not differ significantly between methods, with the exception of lysis with lysozyme combined with bead beating, which produced significantly lower yields than lysis with the enzyme cocktail or 30 min of lysis with lysozyme only. However, this did not result in a statistically significant difference in the observed alpha diversity of samples. The beta diversity (Bray-Curtis dissimilarity) between different lysis methods was statistically significantly different, but this difference was small compared with differences between samples and did not affect the grouping of samples with similar vaginal bacterial community structure by hierarchical clustering. An understanding of how laboratory methods affect the results of microbiota studies is vital in order to interpret results accurately and make valid comparisons between studies. Our results indicate that the choice of lysis method does not prevent the detection of effects relating to the type of vaginal bacterial community, one of the main outcome measures of epidemiological studies. However, we recommend that the same method be used on all samples within a particular study.
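    Bray-Curtis dissimilarity, the beta diversity measure used above, is straightforward to compute; the taxon counts below are hypothetical:

```python
import numpy as np

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two community count profiles."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return np.abs(u - v).sum() / (u + v).sum()

# Hypothetical read counts per taxon for the same sample processed with two
# lysis methods (e.g., Lactobacillus, Gardnerella, Prevotella, other).
lysozyme_30min = [900, 60, 25, 15]
bead_beating = [850, 90, 40, 20]
print(round(bray_curtis(lysozyme_30min, bead_beating), 3))  # 0.05
```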

  33. Exploration of Analysis Methods for Diagnostic Imaging Tests: Problems with ROC AUC and Confidence Scores in CT Colonography

    PubMed Central

    Mallett, Susan; Halligan, Steve; Collins, Gary S.; Altman, Doug G.

    2014-01-01

    Background: Different methods of evaluating diagnostic performance when comparing diagnostic tests may lead to different results. We compared two such approaches, sensitivity and specificity versus area under the Receiver Operating Characteristic curve (ROC AUC), for the evaluation of CT colonography for the detection of polyps, either with or without computer-assisted detection. Methods: In a multireader, multicase study of 10 readers and 107 cases, we compared sensitivity and specificity, based on radiological reporting of the presence or absence of polyps, with ROC AUC calculated from confidence scores concerning the presence of polyps. Both measures were assessed against a reference standard. Here we focus on five readers, selected to illustrate issues in design and analysis, and compare diagnostic measures within readers, showing that differences in results are due to the statistical methods. Results: Reader performance varied widely depending on whether sensitivity and specificity or ROC AUC was used. There were several problems with the confidence scores: in assigning scores to all cases; in the use of zero scores when no polyps were identified; in the bimodal, non-normal distribution of scores; in fitting ROC curves, owing to extrapolation beyond the study data; and in the undue influence of a few false positive results. Variation due to the use of different ROC methods exceeded the differences between test results for ROC AUC. Conclusions: The confidence scores recorded in our study violated many assumptions of ROC AUC methods, rendering these methods inappropriate. The problems we identified will apply to other detection studies using confidence scores. We found sensitivity and specificity to be a more reliable and clinically appropriate way to compare diagnostic tests. PMID:25353643
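    The contrast between the two analyses can be sketched directly: a rank-based (Mann-Whitney) AUC on tie-heavy confidence scores versus sensitivity and specificity from the binary report (all scores below are invented):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Rank-based (Mann-Whitney) ROC AUC; no curve fitting or extrapolation."""
    scores_pos, scores_neg = np.asarray(scores_pos), np.asarray(scores_neg)
    wins = (scores_pos[:, None] > scores_neg[None, :]).sum()
    ties = (scores_pos[:, None] == scores_neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Hypothetical reader confidence scores (0 = definitely no polyp). Many zero
# scores for cases with no reported polyp produce the bimodal, tie-heavy
# distributions that make parametric ROC curve fitting fragile.
polyp = [0, 0, 60, 80, 90, 95, 100, 100]      # cases with a true polyp
no_polyp = [0, 0, 0, 0, 0, 10, 70, 0]         # cases without

print("AUC:", round(auc(polyp, no_polyp), 3))
sens = np.mean(np.array(polyp) > 50)          # binary report: score > 50
spec = np.mean(np.array(no_polyp) <= 50)
print("sensitivity:", sens, "specificity:", spec)
```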

  14. Advances in NMR Spectroscopy for Lipid Oxidation Assessment

    USDA-ARS's Scientific Manuscript database

    Although there are many analytical methods developed for the assessment of lipid oxidation, different analytical methods often give different, sometimes even contradictory, results. The reason for this inconsistency is that although there are many different kinds of oxidation products, most methods ...

  15. Relation between financial market structure and the real economy: comparison between clustering methods.

    PubMed

    Musmeci, Nicoló; Aste, Tomaso; Di Matteo, T.

    2015-01-01

    We quantify the amount of information filtered by different hierarchical clustering methods on correlations between stock returns, comparing the clustering structure with the underlying industrial activity classification. We apply, for the first time to financial data, a novel hierarchical clustering approach, the Directed Bubble Hierarchical Tree, and we compare it with other methods including the Linkage and k-medoids. By taking the industrial sector classification of stocks as a benchmark partition, we evaluate how the different methods retrieve this classification. The results show that the Directed Bubble Hierarchical Tree can outperform other methods, being able to retrieve more information with fewer clusters. Moreover, we show that the economic information is hidden at different levels of the hierarchical structures depending on the clustering method. The dynamical analysis on a rolling window also reveals that the different methods show different degrees of sensitivity to events affecting financial markets, like crises. These results can be of interest for all the applications of clustering methods to portfolio optimization and risk hedging.
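
    As a small illustration of the benchmark-partition step described above, the sketch below (Python; invented toy returns, with ordinary average-linkage clustering as a stand-in for the Directed Bubble Hierarchical Tree, which is not reimplemented here) scores a clustering against a sector classification with the adjusted Rand index:

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from sklearn.metrics import adjusted_rand_score

      rng = np.random.default_rng(0)
      # Toy returns for 12 "stocks" in 3 invented industrial sectors,
      # each driven by a common sector factor plus idiosyncratic noise.
      sectors = np.repeat([0, 1, 2], 4)
      factors = rng.normal(size=(250, 3))
      returns = factors[:, sectors] + rng.normal(size=(250, 12))

      corr = np.corrcoef(returns.T)
      dist = np.sqrt(2.0 * (1.0 - corr))        # Mantegna distance
      condensed = dist[np.triu_indices(12, k=1)]
      labels = fcluster(linkage(condensed, method="average"),
                        t=3, criterion="maxclust")
      print("ARI vs sector benchmark:", adjusted_rand_score(sectors, labels))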

  16. Relation between Financial Market Structure and the Real Economy: Comparison between Clustering Methods

    PubMed Central

    Musmeci, Nicoló; Aste, Tomaso; Di Matteo, T.

    2015-01-01

    We quantify the amount of information filtered by different hierarchical clustering methods on correlations between stock returns comparing the clustering structure with the underlying industrial activity classification. We apply, for the first time to financial data, a novel hierarchical clustering approach, the Directed Bubble Hierarchical Tree and we compare it with other methods including the Linkage and k-medoids. By taking the industrial sector classification of stocks as a benchmark partition, we evaluate how the different methods retrieve this classification. The results show that the Directed Bubble Hierarchical Tree can outperform other methods, being able to retrieve more information with fewer clusters. Moreover, we show that the economic information is hidden at different levels of the hierarchical structures depending on the clustering method. The dynamical analysis on a rolling window also reveals that the different methods show different degrees of sensitivity to events affecting financial markets, like crises. These results can be of interest for all the applications of clustering methods to portfolio optimization and risk hedging. PMID:25786703

  17. Quantifying distinct associations on different temporal scales: comparison of DCCA and Pearson methods

    NASA Astrophysics Data System (ADS)

    Piao, Lin; Fu, Zuntao

    2016-11-01

    Cross-correlation between pairs of variables has a multi-time-scale character and can be totally different on different time scales (changing, for example, from positive correlation to negative), as in the associations between mean air temperature and relative humidity over regions to the east of the Taihang mountains in China. Correctly unveiling these correlations on different time scales is therefore of great importance, since we do not know in advance whether the correlation varies with scale. Here, we compare two methods, Detrended Cross-Correlation Analysis (DCCA) and Pearson correlation, in quantifying scale-dependent correlations directly from raw observed records and from artificially generated sequences with known cross-correlation features. The studies show that 1) DCCA-related methods can indeed quantify scale-dependent correlations, but the Pearson method cannot; 2) the correlation features from DCCA-related methods are robust to contaminating noise, whereas the results from the Pearson method are sensitive to noise; and 3) the scale-dependent correlation results from DCCA-related methods are robust to the amplitude ratio between slow and fast components, while the Pearson method may be sensitive to this ratio. All these features indicate that DCCA-related methods have advantages in correctly quantifying scale-dependent correlations that result from different physical processes.
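
    A minimal sketch of the DCCA cross-correlation coefficient at a single scale (Python/numpy; linear detrending in non-overlapping windows, a simplification of the full method) may help make the procedure concrete:

      import numpy as np

      def dcca_coefficient(x, y, scale):
          """rho_DCCA at one window size: integrate both series, detrend
          each non-overlapping window linearly, and form the ratio
          F2_xy / sqrt(F2_xx * F2_yy)."""
          X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
          n = (len(X) // scale) * scale
          t = np.arange(scale)
          f_xx = f_yy = f_xy = 0.0
          for s in range(0, n, scale):
              xs, ys = X[s:s + scale], Y[s:s + scale]
              rx = xs - np.polyval(np.polyfit(t, xs, 1), t)
              ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
              f_xx += (rx ** 2).mean()
              f_yy += (ry ** 2).mean()
              f_xy += (rx * ry).mean()
          return f_xy / np.sqrt(f_xx * f_yy)

      rng = np.random.default_rng(1)
      common = rng.normal(size=4000)
      a = common + rng.normal(size=4000)
      b = common + rng.normal(size=4000)
      print(dcca_coefficient(a, b, scale=32))

    Evaluating the coefficient over a range of scales, rather than at one, is what reveals the scale dependence the paper is concerned with.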

  18. Quantifying distinct associations on different temporal scales: comparison of DCCA and Pearson methods.

    PubMed

    Piao, Lin; Fu, Zuntao

    2016-11-09

    Cross-correlation between pairs of variables has a multi-time-scale character and can be totally different on different time scales (changing, for example, from positive correlation to negative), as in the associations between mean air temperature and relative humidity over regions to the east of the Taihang mountains in China. Correctly unveiling these correlations on different time scales is therefore of great importance, since we do not know in advance whether the correlation varies with scale. Here, we compare two methods, Detrended Cross-Correlation Analysis (DCCA) and Pearson correlation, in quantifying scale-dependent correlations directly from raw observed records and from artificially generated sequences with known cross-correlation features. The studies show that 1) DCCA-related methods can indeed quantify scale-dependent correlations, but the Pearson method cannot; 2) the correlation features from DCCA-related methods are robust to contaminating noise, whereas the results from the Pearson method are sensitive to noise; and 3) the scale-dependent correlation results from DCCA-related methods are robust to the amplitude ratio between slow and fast components, while the Pearson method may be sensitive to this ratio. All these features indicate that DCCA-related methods have advantages in correctly quantifying scale-dependent correlations that result from different physical processes.

  19. Methods for Synthesizing Findings on Moderation Effects Across Multiple Randomized Trials

    PubMed Central

    Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana

    2011-01-01

    This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design. PMID:21360061

  20. Methods for synthesizing findings on moderation effects across multiple randomized trials.

    PubMed

    Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana

    2013-04-01

    This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design.

  1. Performance assessment of methods for estimation of fractal dimension from scanning electron microscope images.

    PubMed

    Risović, Dubravko; Pavlović, Zivko

    2013-01-01

    Processing of gray-scale images in order to determine the corresponding fractal dimension is very important due to the widespread use of imaging technologies and the application of fractal analysis in many areas of science, technology, and medicine. To this end, many methods for estimation of fractal dimension from gray-scale images have been developed and are routinely used. Unfortunately, different methods (dimension estimators) often yield significantly different results, in a manner that makes interpretation difficult. Here, we report the results of a comparative assessment of the performance of several of the most frequently used algorithms/methods for estimation of fractal dimension. To that purpose, we used scanning electron microscope images of aluminum oxide surfaces with different fractal dimensions. The performance of the algorithms/methods was evaluated using the statistical Z-score approach. The differences between the performances of six methods are discussed and further compared with results obtained by electrochemical impedance spectroscopy (EIS) on the same samples. The analysis shows that the performance of the investigated algorithms varies considerably and that systematically erroneous fractal dimensions could be estimated using certain methods. The differential cube counting, triangulation, and box counting algorithms showed satisfactory performance over the whole investigated range of fractal dimensions. The Difference statistic proved less reliable, generating 4% unsatisfactory results. The performances of the Power spectrum, Partitioning, and EIS methods were unsatisfactory in 29%, 38%, and 75% of estimations, respectively. The results of this study should be useful and provide guidelines to researchers attempting fractal analysis of images obtained by scanning microscopy or atomic force microscopy. © Wiley Periodicals, Inc.
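
    For readers unfamiliar with the estimators named above, a minimal box-counting sketch (Python/numpy, applied to an invented binary test image rather than the SEM data) shows the basic idea shared by several of them:

      import numpy as np

      def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16, 32)):
          """Count occupied boxes N(s) at each box size s and fit
          log N(s) ~ -D log s to estimate the fractal dimension D."""
          counts = []
          for s in sizes:
              h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
              blocks = img[:h, :w].reshape(h // s, s, w // s, s)
              counts.append(blocks.any(axis=(1, 3)).sum())
          slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
          return -slope

      # Sanity check: a filled square should give a dimension close to 2.
      img = np.zeros((128, 128), dtype=bool)
      img[32:96, 32:96] = True
      print(box_counting_dimension(img))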

  2. [Comprehensive weighted recognition method for hydrological abrupt change: With the runoff series of Jiajiu hydrological station in Lancang River as an example].

    PubMed

    Gu, Hai Ting; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi

    2018-04-01

    Abrupt change is an important manifestation of hydrological processes undergoing dramatic variation in the context of global climate change, and its accurate recognition is of great significance for understanding changes in hydrological processes and for practical hydrological and water resources work. Traditional methods are not reliable near either end of a sample series, and the results of different methods are often inconsistent. To solve this problem, we propose a comprehensive weighted recognition method for hydrological abrupt change, based on weighting and comparison of 12 commonly used change-point tests. The reliability of the method was verified by Monte Carlo statistical testing. The results showed that the efficiency of the 12 methods was influenced by factors including the coefficient of variation (Cv) and deviation coefficient (Cs) before the change point, the mean value difference coefficient, the Cv difference coefficient, and the Cs difference coefficient, but had no significant relationship with the mean value of the sequence. Based on the performance of each method in the statistical test, a weight was assigned to each test method. The sliding rank-sum test and the sliding run test had the highest weights, whereas the RS test had the lowest weight. By this means, the change point with the largest comprehensive weight can be selected as the final result when the results of the different methods are inconsistent. The method was used to analyze the maximum sequences (1-day, 3-day, 5-day, 7-day and 1-month) of Jiajiu station in the lower reaches of the Lancang River. The results showed that each sequence had an obvious jump in 2004, which was in agreement with the physical causes of hydrological process change and with water conservancy construction. The rationality and reliability of the proposed method were thus verified.
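
    A loose sketch of the weighted-voting idea (Python with numpy/scipy; only two of the twelve component tests, and invented weights rather than the Monte Carlo-derived ones) is given below:

      import numpy as np
      from scipy import stats

      def candidate_change_points(x):
          """Best split index under two simple tests: a rank-sum scan
          (related to the sliding rank-sum test) and a CUSUM-style
          mean-shift statistic."""
          n = len(x)
          z = [abs(stats.mannwhitneyu(x[:k], x[k:]).statistic
                   - k * (n - k) / 2.0) for k in range(10, n - 10)]
          return {"rank_sum": 10 + int(np.argmax(z)),
                  "cusum": int(np.argmax(np.abs(np.cumsum(x - x.mean()))))}

      def weighted_change_point(x, weights=None):
          """Pick the candidate accumulating the largest total weight."""
          weights = weights or {"rank_sum": 0.6, "cusum": 0.4}
          votes = {}
          for name, k in candidate_change_points(x).items():
              votes[k] = votes.get(k, 0.0) + weights[name]
          return max(votes, key=votes.get)

      rng = np.random.default_rng(2)
      series = np.concatenate([rng.normal(0, 1, 100), rng.normal(1.5, 1, 100)])
      print(weighted_change_point(series))  # near the true break at index 100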

  3. Evaluation of Lysis Methods for the Extraction of Bacterial DNA for Analysis of the Vaginal Microbiota

    PubMed Central

    Gill, Christina; Blow, Frances; Darby, Alistair C.

    2016-01-01

    Background: Recent studies on the vaginal microbiota have employed molecular techniques such as 16S rRNA gene sequencing to describe the bacterial community as a whole. These techniques require the lysis of bacterial cells to release DNA before purification and PCR amplification of the 16S rRNA gene. Currently, methods for the lysis of bacterial cells are not standardised, and there is potential for introducing bias into the results if some bacterial species are lysed less efficiently than others. This study aimed to compare the results of vaginal microbiota profiling using four different pretreatment methods for the lysis of bacterial samples (30 min of lysis with lysozyme; 16 hours of lysis with lysozyme; 60 min of lysis with a mixture of lysozyme, mutanolysin and lysostaphin; and 30 min of lysis with lysozyme followed by bead beating) prior to chemical and enzyme-based DNA extraction with a commercial kit. Results: After extraction, DNA yield did not differ significantly between methods, with the exception of lysis with lysozyme combined with bead beating, which produced significantly lower yields than lysis with the enzyme cocktail or 30 min of lysis with lysozyme only. However, this did not result in a statistically significant difference in the observed alpha diversity of samples. The beta diversity (Bray-Curtis dissimilarity) between different lysis methods was statistically significantly different, but this difference was small compared to differences between samples and did not affect the grouping of samples with similar vaginal bacterial community structure by hierarchical clustering. Conclusions: An understanding of how laboratory methods affect the results of microbiota studies is vital in order to accurately interpret the results and make valid comparisons between studies. Our results indicate that the choice of lysis method does not prevent the detection of effects relating to the type of vaginal bacterial community, one of the main outcome measures of epidemiological studies. However, we recommend that the same method be used on all samples within a particular study. PMID:27643503
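
    The beta-diversity and clustering step described above can be sketched in a few lines (Python with scipy; the abundance table is invented, loosely mimicking Lactobacillus-dominated versus diverse community profiles):

      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.cluster.hierarchy import linkage, fcluster

      # Invented relative abundances: rows = samples, columns = taxa.
      abund = np.array([
          [0.90, 0.05, 0.03, 0.02],   # dominated profile
          [0.85, 0.08, 0.04, 0.03],   # dominated profile
          [0.20, 0.30, 0.25, 0.25],   # diverse profile
          [0.15, 0.35, 0.30, 0.20],   # diverse profile
      ])
      bc = pdist(abund, metric="braycurtis")   # Bray-Curtis dissimilarity
      groups = fcluster(linkage(bc, method="average"),
                        t=2, criterion="maxclust")
      print(groups)   # the dominated and diverse samples separate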

  4. A collaborative design method to support integrated care. An ICT development method containing continuous user validation improves the entire care process and the individual work situation

    PubMed Central

    Scandurra, Isabella; Hägglund, Maria

    2009-01-01

    Introduction Integrated care involves different professionals, belonging to different care provider organizations and requires immediate and ubiquitous access to patient-oriented information, supporting an integrated view on the care process [1]. Purpose To present a method for development of usable and work process-oriented information and communication technology (ICT) systems for integrated care. Theory and method Based on Human-computer Interaction Science and in particular Participatory Design [2], we present a new collaborative design method in the context of health information systems (HIS) development [3]. This method implies a thorough analysis of the entire interdisciplinary cooperative work and a transformation of the results into technical specifications, via user validated scenarios, prototypes and use cases, ultimately leading to the development of appropriate ICT for the variety of occurring work situations for different user groups, or professions, in integrated care. Results and conclusions Application of the method in homecare of the elderly resulted in an HIS that was well adapted to the intended user groups. Conducted in multi-disciplinary seminars, the method captured and validated user needs and system requirements for different professionals, work situations, and environments not only for current work; it also aimed to improve collaboration in future (ICT supported) work processes. A holistic view of the entire care process was obtained and supported through different views of the HIS for different user groups, resulting in improved work in the entire care process as well as for each collaborating profession [4].

  5. Effectiveness of different tutorial recitation teaching methods and its implications for TA training

    NASA Astrophysics Data System (ADS)

    Koenig, Kathleen M.; Endorf, Robert J.; Braun, Gregory A.

    2007-06-01

    We present results from a comparative study of student understanding for students who attended recitation classes that used different teaching methods. Student volunteers from our introductory calculus-based physics course attended a special recitation class that was taught using one of four different teaching methods. A total of 272 students were divided into approximately equal groups for each method. Students in each class were taught the same topic, “Changes in Energy and Momentum,” from Tutorials in Introductory Physics. The different teaching methods varied in the amount of student and teacher engagement. Student understanding was evaluated through pre- and post-tests. Our results demonstrate the importance of the instructor’s role in teaching recitation classes. The most effective teaching method was for students working in cooperative learning groups with the instructors questioning the groups using Socratic dialogue. In addition, we investigated student preferences for modes of instruction through an open-ended survey. Our results provide guidance and evidence for the teaching methods that should be emphasized in training course instructors.

  6. Interpretation of biological and mechanical variations between the Lowry versus Bradford method for protein quantification.

    PubMed

    Lu, Tzong-Shi; Yiao, Szu-Yu; Lim, Kenneth; Jensen, Roderick V; Hsiao, Li-Li

    2010-07-01

    The identification of differences in protein expression resulting from methodical variation is an essential component of the interpretation of true, biologically significant results. We used the Lowry and Bradford methods, the two most commonly used methods for protein quantification, to assess whether differential protein expression is a result of true biological or methodical variation. Material & Methods: Differential protein expression patterns were assessed by western blot following protein quantification by the Lowry and Bradford methods. We observed significant variations in protein concentrations following assessment with the Lowry versus the Bradford method, using identical samples. Greater variations in protein concentration readings were observed over time, and in samples with higher concentrations, with the Bradford method. Identical samples quantified using both methods yielded significantly different expression patterns on western blot. We show for the first time that the methodical variations observed in these protein assay techniques can translate into differential protein expression patterns that may falsely be taken to be biologically significant. Our study therefore highlights the pivotal need to carefully consider the methodical approach to protein quantification in techniques that report quantitative differences.

  7. Models of convection-driven tectonic plates - A comparison of methods and results

    NASA Technical Reports Server (NTRS)

    King, Scott D.; Gable, Carl W.; Weinstein, Stuart A.

    1992-01-01

    Recent numerical studies of convection in the earth's mantle have included various features of plate tectonics. This paper describes three methods of modeling plates: through material properties, through force balance, and through a thin power-law sheet approximation. The results obtained with each method on a series of simple calculations are compared. From these results, scaling relations between the different parameterizations are developed. While each method produces different degrees of deformation within the surface plate, the surface heat flux and average plate velocity agree to within a few percent. The main results are not dependent upon the plate modeling method and therefore are representative of the physical system modeled.

  8. Development of Gold Standard Ion-Selective Electrode-Based Methods for Fluoride Analysis

    PubMed Central

    Martínez-Mier, E.A.; Cury, J.A.; Heilman, J.R.; Katz, B.P.; Levy, S.M.; Li, Y.; Maguire, A.; Margineda, J.; O’Mullane, D.; Phantumvanit, P.; Soto-Rojas, A.E.; Stookey, G.K.; Villa, A.; Wefel, J.S.; Whelton, H.; Whitford, G.M.; Zero, D.T.; Zhang, W.; Zohouri, V.

    2011-01-01

    Background/Aims: Currently available techniques for fluoride analysis are not standardized. Therefore, this study was designed to develop standardized methods for analyzing fluoride in biological and nonbiological samples used for dental research. Methods: A group of nine laboratories analyzed a set of standardized samples for fluoride concentration using their own methods. The group then reviewed existing analytical techniques for fluoride analysis, identified inconsistencies in the use of these techniques and conducted testing to resolve differences. Based on the results of the testing undertaken to define the best approaches for the analysis, the group developed recommendations for direct and microdiffusion methods using the fluoride ion-selective electrode. Results: Initial results demonstrated that there was no consensus regarding the choice of analytical techniques for different types of samples. Although for several types of samples the results of the fluoride analyses were similar among some laboratories, greater differences were observed for saliva, food and beverage samples. In spite of these initial differences, precise and true values of fluoride concentration, as well as smaller differences between laboratories, were obtained once the standardized methodologies were used. Intraclass correlation coefficients ranged from 0.90 to 0.93 for the analysis of a certified reference material using the standardized methodologies. Conclusion: The results of this study demonstrate that the development and use of standardized protocols for fluoride analysis significantly decreased differences among laboratories and resulted in more precise and true values. PMID:21160184

  9. Effect of joint spacing and joint dip on the stress distribution around tunnels using different numerical methods

    NASA Astrophysics Data System (ADS)

    Nikadat, Nooraddin; Fatehi Marji, Mohammad; Rahmannejad, Reza; Yarahmadi Bafghi, Alireza

    2016-11-01

    Different conditions, including the geometry (spacing and orientation) of joints in the surrounding rock mass, may affect the stability of tunnels. In this study, by comparing the results obtained with three numerical methods, i.e. the finite element method (Phase2), the distinct element method (UDEC) and the indirect boundary element method (TFSDDM), the effects of joint spacing and joint dip on the stress distribution around rock tunnels are studied numerically. These comparisons indicate the validity of the stress analyses around circular rock tunnels. The analyses also reveal that, for a semi-continuous environment, the boundary element method gives more accurate results than the finite element and distinct element methods. In the indirect boundary element method, the displacements due to joints of different spacing and dips are estimated by using displacement discontinuity (DD) formulations, and the total stress distribution around the tunnel is obtained by using fictitious stress (FS) formulations.

  10. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural and convenient advantages, has received more and more attention. This paper investigates a face recognition system comprising face detection, feature extraction and recognition, mainly by studying the related theory and key technology of various preprocessing methods in the face detection process and, using the KPCA method, focusing on the different recognition results obtained with different preprocessing methods. We choose the YCbCr color space for skin segmentation and integral projection for face location. We preprocess face images using erosion and dilation (the opening and closing operations) and an illumination compensation method, and then apply a face recognition method based on kernel principal component analysis; experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that, under certain conditions, integrating the kernel method with the PCA algorithm makes the extracted features represent the original image information better, owing to the nonlinear feature extraction, and thus yields a higher recognition rate. In the image preprocessing stage, we found that different operations on the images may produce different results, and hence different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the power of the polynomial kernel function can affect the recognition result.

  11. Atmospheric Blocking and Intercomparison of Objective Detection Methods: Flow Field Characteristics

    NASA Astrophysics Data System (ADS)

    Pinheiro, M. C.; Ullrich, P. A.; Grotjahn, R.

    2017-12-01

    A number of objective methods for identifying and quantifying atmospheric blocking have been developed over the last couple of decades, but there is variable consensus on the resultant blocking climatology. This project examines blocking climatologies as produced by three different methods: two anomaly-based methods, and the geopotential height gradient method of Tibaldi and Molteni (1990). The results highlight the differences in blocking that arise from the choice of detection method, with emphasis on the physical characteristics of the flow field and the subsequent effects on the blocking patterns that emerge.

  12. Comparison of histomorphometrical data obtained with two different image analysis methods.

    PubMed

    Ballerini, Lucia; Franke-Stenport, Victoria; Borgefors, Gunilla; Johansson, Carina B

    2007-08-01

    A common way to determine tissue acceptance of biomaterials is to perform histomorphometrical analysis on histologically stained sections from retrieved samples with surrounding tissue, using various methods. The time- and money-consuming methods and techniques used are often in-house standards. We address light microscopic investigations of bone tissue reactions on undecalcified cut and ground sections of threaded implants. In order to screen sections and generate results faster, the aim of this pilot project was to compare the results generated with the in-house standard visual image analysis tool (i.e., quantifications and judgements made by the naked eye) with a custom-made automatic image analysis program. The histomorphometrical bone area measurements revealed no significant differences between the methods, but the bony contact results varied significantly. The raw results were in relative agreement, i.e., the values from the two methods were proportional to each other: low bony contact values in the visual method corresponded to low values with the automatic method. With similar-resolution images and further improvements to the automatic method, this difference should become insignificant. A great advantage of the new automatic image analysis method is that it saves time: analysis time can be significantly reduced.

  13. Investigation of Super Learner Methodology on HIV-1 Small Sample: Application on Jaguar Trial Data.

    PubMed

    Houssaïni, Allal; Assoumou, Lambert; Marcelin, Anne Geneviève; Molina, Jean Michel; Calvez, Vincent; Flandre, Philippe

    2012-01-01

    Background. Many statistical models have been tested to predict phenotypic or virological response from genotypic data. A statistical framework called Super Learner has been introduced either to compare different methods/learners (discrete Super Learner) or to combine them in a Super Learner prediction method. Methods. The Jaguar trial is used to apply the Super Learner framework. The Jaguar study is an "add-on" trial comparing the efficacy of adding didanosine to an ongoing failing regimen. Our aim was also to investigate the impact of using different cross-validation strategies and different loss functions. Four different repartitions between training set and validation set were tested with two loss functions. Six statistical methods were compared. We assessed performance by evaluating R² values and accuracy by calculating the rates of patients correctly classified. Results. Our results indicated that the more recent Super Learner methodology of building a new predictor based on a weighted combination of different methods/learners provided good performance. A simple linear model provided results similar to those of this new predictor. A slight discrepancy arises between the two loss functions investigated, and a slight difference also arises between results based on cross-validated risks and results from the full dataset. The Super Learner methodology and the linear model correctly classified around 80% of patients. The difference between the lowest and highest rates is around 10 percent. The number of mutations retained in different learners also varies from 1 to 41. Conclusions. The more recent Super Learner methodology, combining the predictions of many learners, provided good performance on our small dataset.
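
    The weighted-combination idea is closely related to cross-validated stacking, which the sketch below uses as a stand-in (Python/scikit-learn on synthetic data; this is not the authors' implementation):

      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.ensemble import StackingRegressor, RandomForestRegressor
      from sklearn.linear_model import LinearRegression, LassoCV
      from sklearn.model_selection import cross_val_score

      # Small synthetic dataset, echoing the small-sample setting.
      X, y = make_regression(n_samples=80, n_features=30, noise=10,
                             random_state=0)
      stacked = StackingRegressor(
          estimators=[("lm", LinearRegression()),
                      ("lasso", LassoCV()),
                      ("rf", RandomForestRegressor(n_estimators=100,
                                                   random_state=0))],
          final_estimator=LinearRegression(), cv=5)
      for name, model in [("linear", LinearRegression()),
                          ("stacked", stacked)]:
          r2 = cross_val_score(model, X, y, cv=4, scoring="r2").mean()
          print(name, "cross-validated R^2:", round(r2, 2))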

  14. Analysis of drift correction in different simulated weighing schemes

    NASA Astrophysics Data System (ADS)

    Beatrici, A.; Rebelo, A.; Quintão, D.; Cacais, F. L.; Loayza, V. M.

    2015-10-01

    In the calibration of high-accuracy mass standards, weighing schemes are used to reduce or eliminate the effects of zero drift in mass comparators. There are different sources of drift and different methods for its treatment. Using numerical methods, drift functions were simulated and a random term was included in each function. A comparison between the results obtained from ABABAB and ABBA weighing series was carried out. The results show a better efficacy of the ABABAB method for drift with smooth variation and small randomness.
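
    How such schemes are compared numerically can be sketched as follows (Python/numpy; invented masses, a purely linear drift, and simplified estimators, so this illustrates the simulation approach rather than reproducing the paper's results):

      import numpy as np

      def reading(mass, t, drift=0.002, noise=0.0005, rng=None):
          """One comparator reading at time t: mass + linear drift + noise."""
          rng = rng or np.random.default_rng(0)
          return mass + drift * t + rng.normal(scale=noise)

      A, B = 100.000, 100.010       # invented reference and test masses (g)
      rng = np.random.default_rng(3)

      # ABBA: (A1 - B1 - B2 + A2) / 2 cancels a purely linear drift.
      r = [reading(m, t, rng=rng) for t, m in enumerate([A, B, B, A])]
      abba = (r[0] - r[1] - r[2] + r[3]) / 2.0

      # ABABAB: mean of successive A-B differences (under a linear drift
      # this estimator retains a bias of one drift step).
      r = [reading(m, t, rng=rng) for t, m in enumerate([A, B, A, B, A, B])]
      ababab = np.mean([r[0] - r[1], r[2] - r[3], r[4] - r[5]])

      print("true A-B:", A - B, "ABBA:", round(abba, 4),
            "ABABAB:", round(ababab, 4))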

  15. Comparison of discrete ordinate and Monte Carlo simulations of polarized radiative transfer in two coupled slabs with different refractive indices.

    PubMed

    Cohen, D; Stamnes, S; Tanikawa, T; Sommersten, E R; Stamnes, J J; Lotsberg, J K; Stamnes, K

    2013-04-22

    A comparison is presented of two different methods for polarized radiative transfer in coupled media consisting of two adjacent slabs with different refractive indices, each slab being a stratified medium with no change in optical properties except in the direction of stratification. One of the methods is based on solving the integro-differential radiative transfer equation for the two coupled slabs using the discrete ordinate approximation. The other method is based on probabilistic and statistical concepts and simulates the propagation of polarized light using the Monte Carlo approach. The emphasis is on non-Rayleigh scattering for particles in the Mie regime. Comparisons with benchmark results available for a slab with constant refractive index show that both methods reproduce these benchmark results when the refractive index is set to be the same in the two slabs. Computed results for test cases with coupling (different refractive indices in the two slabs) show that the two methods produce essentially identical results for identical input in terms of absorption and scattering coefficients and scattering phase matrices.

  16. Sensitivity of different Trypanosoma vivax specific primers for the diagnosis of livestock trypanosomosis using different DNA extraction methods.

    PubMed

    Gonzales, J L; Loza, A; Chacon, E

    2006-03-15

    Several T. vivax-specific primers have been developed for PCR diagnosis. Most of these primers were validated under different DNA extraction methods and study designs, leading to heterogeneity of results. The objective of the present study was to validate PCR as a diagnostic test for T. vivax trypanosomosis by determining the test sensitivity of different published specific primers with different sample preparations. Four different DNA extraction methods were used to test the sensitivity of PCR with four different primer sets. DNA was extracted directly from whole blood samples, blood dried on filter papers or blood dried on FTA cards. The results showed that the sensitivity of PCR with each primer set was highly dependent on the sample preparation and DNA extraction method. The highest sensitivities for all the primers tested were obtained using DNA extracted from whole blood samples, while the lowest sensitivities were obtained when DNA was extracted from filter paper preparations. To conclude, the obtained results are discussed and a protocol for diagnosis and surveillance of T. vivax trypanosomosis is recommended.

  17. Improved methods of vibration analysis of pretwisted, airfoil blades

    NASA Technical Reports Server (NTRS)

    Subrahmanyam, K. B.; Kaza, K. R. V.

    1984-01-01

    Vibration analysis of pretwisted blades of asymmetric airfoil cross section is performed by using two mixed variational approaches. Numerical results obtained from these two methods are compared to those obtained from an improved finite difference method and also to those given by the ordinary finite difference method. The relative merits, convergence properties and accuracies of all four methods are studied and discussed. The effects of asymmetry and pretwist on natural frequencies and mode shapes are investigated. The improved finite difference method is shown to be far superior to the conventional finite difference method in several respects. Close lower bound solutions are provided by the improved finite difference method for untwisted blades with a relatively coarse mesh while the mixed methods have not indicated any specific bound.

  18. Influence of the Extractive Method on the Recovery of Phenolic Compounds in Different Parts of Hymenaea martiana Hayne

    PubMed Central

    Oliveira, Fernanda Granja da Silva; de Lima-Saraiva, Sarah Raquel Gomes; Oliveira, Ana Paula; Rabêlo, Suzana Vieira; Rolim, Larissa Araújo; Almeida, Jackson Roberto Guedes da Silva

    2016-01-01

    Background: Popularly known as "jatobá," Hymenaea martiana Hayne is a medicinal plant widely used in the Brazilian Northeast for the treatment of various diseases. Objective: The aim of this study was to evaluate the influence of different extractive methods on the production of phenolic compounds from different parts of H. martiana. Materials and Methods: The leaves, bark, fruits, and seeds were dried, pulverized, and submitted to maceration, ultrasound, and percolation extractive methods, which were evaluated for yield, visual aspects, qualitative phytochemical screening, phenolic compound content, and total flavonoids. Results: The highest yields were obtained from maceration of the leaves, which may be related to the contact time between the plant drug and the solvent. The visual aspects of the extracts presented some differences between the extractive methods. The phytochemical screening showed data consistent with other studies of the genus. Both the plant part and the different extractive methods significantly influenced the levels of phenolic compounds, and the highest content was found in the maceration of the bark, even higher than the content found previously. Differences between the levels of total flavonoids were not significant. The highest concentration of total flavonoids was found in the ultrasound extraction of the bark, followed by maceration of this drug. According to the results, the bark of H. martiana presented the highest total flavonoid contents. Conclusion: The results demonstrate that both the plant part and the different extractive methods significantly influenced various parameters obtained in the various extracts, demonstrating the importance of systematic comparative studies for the development of pharmaceuticals and cosmetics. Summary: The phytochemical screening showed data consistent with other studies of the genus Hymenaea. Both the plant part and the different extractive methods significantly influenced various parameters obtained in the various extracts, including the levels of phenolic compounds. The bark of H. martiana presented the highest total phenolic and flavonoid contents. PMID:27695267

  19. Determination of the pure silicon monocarbide content of silicon carbide and products based on silicon carbide

    NASA Technical Reports Server (NTRS)

    Prost, L.; Pauillac, A.

    1978-01-01

    Experience has shown that different methods of analysis of SiC products give different results. The methods identified as AFNOR, FEPA, and manufacturer P, currently used to determine SiC, free C, free Si, free Fe, and SiO2, are reviewed. The AFNOR method gives a lower SiC content, attributed to destruction of SiC by grinding. Two products sent to independent labs for analysis by the AFNOR and FEPA methods showed somewhat different results, especially for SiC, SiO2, and Al2O3 content, whereas an X-ray analysis showed an SiC content approximately 10 points lower than that given by the chemical methods.

  20. A comparative study of cultural methods for the detection of Salmonella in feed and feed ingredients

    PubMed Central

    Koyuncu, Sevinc; Haggblom, Per

    2009-01-01

    Background: Animal feed as a source of infection for food-producing animals is much debated. In order to increase our present knowledge about possible feed transmission, it is important to know that the present isolation methods for Salmonella are reliable also for feed materials. In a comparative study, the ability of the standard method used for isolation of Salmonella in feed in the Nordic countries, the NMKL71 method (Nordic Committee on Food Analysis), was compared to the Modified Semisolid Rappaport Vassiliadis method (MSRV) and the international standard method (EN ISO 6579:2002). Five different feed materials were investigated, namely wheat grain, soybean meal, rape seed meal, palm kernel meal and pellets of pig feed, as well as scrapings from a feed mill elevator. Four different levels of the Salmonella serotypes S. Typhimurium, S. Cubana and S. Yoruba were added to each feed material. For all methods, pre-enrichment in Buffered Peptone Water (BPW) was carried out, followed by enrichment in the different selective media and finally plating on selective agar media. Results: The results obtained with all three methods showed no differences in detection levels, with an accuracy and sensitivity of 65% and 56%, respectively. However, Müller-Kauffmann tetrathionate-novobiocin broth (MKTTn) performed less well, due to many false-negative results on Brilliant Green agar (BGA) plates. Compared to other feed materials, palm kernel meal showed a higher detection level with all serotypes and methods tested. Conclusion: The results of this study showed that the accuracy, sensitivity and specificity of the investigated cultural methods were equivalent. However, the detection levels for different feed and feed ingredients varied considerably. PMID:19192298

  1. Novel two wavelength spectrophotometric methods for simultaneous determination of binary mixtures with severely overlapping spectra

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam M.; Saleh, Sarah S.; Hassan, Nagiba Y.; Salem, Hesham

    2015-02-01

    This work presents the application of different spectrophotometric techniques based on two wavelengths for the determination of severely overlapped spectral components in a binary mixture without prior separation. Four novel spectrophotometric methods were developed, namely: the induced dual wavelength method (IDW), the dual wavelength resolution technique (DWRT), the advanced amplitude modulation method (AAM) and the induced amplitude modulation method (IAM). The results of the novel methods were compared to those of three well-established methods: the dual wavelength method (DW), Vierordt's method (VD) and the bivariate method (BV). The developed methods were applied to the analysis of the binary mixture of hydrocortisone acetate (HCA) and fusidic acid (FSA) formulated as a topical cream, accompanied by the determination of methyl paraben and propyl paraben present as preservatives. The specificity of the novel methods was investigated by analyzing laboratory-prepared mixtures and the combined dosage form. The methods were validated as per ICH guidelines, where accuracy, repeatability, inter-day precision and robustness were found to be within the acceptable limits. The results obtained from the proposed methods were statistically compared with those of official methods, and no significant difference was observed. No difference was observed between the obtained results and those of a reported HPLC method, which showed that the developed methods could be an alternative to HPLC techniques in quality control laboratories.

  2. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    NASA Astrophysics Data System (ADS)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

    The 3D Poisson equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from the FDM is solved iteratively by using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods for different grid sizes, and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using the parallel Jacobi (PJ) method is examined in relation to the SGS method. The MATLAB Parallel/Distributed computing environment is used, and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take prohibitively long to converge. The PJ method, however, reduces the computational time to some extent for large grid sizes.
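
    A two-dimensional stand-in for the iteration comparison (Python/numpy, not the authors' MATLAB code; the Jacobi update vectorizes naturally, which is the source of its parallelism, while Gauss-Seidel must sweep in order) is sketched below:

      import numpy as np, time

      def solve_laplace(n=30, tol=1e-5, method="jacobi"):
          """Five-point finite-difference Laplace solve on a square grid
          with one boundary held at 1.0; returns the iteration count."""
          u = np.zeros((n, n))
          u[0, :] = 1.0
          iters = 0
          while True:
              iters += 1
              if method == "jacobi":
                  new = u.copy()
                  new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                            + u[1:-1, :-2] + u[1:-1, 2:])
                  diff = np.abs(new - u).max()
                  u = new
              else:  # gauss-seidel: in-place sweep using updated values
                  diff = 0.0
                  for i in range(1, n - 1):
                      for j in range(1, n - 1):
                          v = 0.25 * (u[i-1, j] + u[i+1, j]
                                      + u[i, j-1] + u[i, j+1])
                          diff = max(diff, abs(v - u[i, j]))
                          u[i, j] = v
              if diff < tol:
                  return iters

      for m in ("jacobi", "gauss-seidel"):
          t0 = time.perf_counter()
          it = solve_laplace(method=m)
          print(m, it, "iterations,", round(time.perf_counter() - t0, 2), "s")

    Gauss-Seidel typically needs roughly half as many sweeps, but each Jacobi sweep updates every point independently, which is what makes it attractive for parallel execution.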

  3. Comparison study of two procedures for the determination of emamectin benzoate in medicated fish feed.

    PubMed

    Farer, Leslie J; Hayes, John M

    2005-01-01

    A new method has been developed for the determination of emamectin benzoate in fish feed. The method uses a wet extraction, cleanup by solid-phase extraction, and quantitation and separation by liquid chromatography (LC). In this paper, we compare the performance of this method with that of a previously reported LC assay for the determination of emamectin benzoate in fish feed. Although similar to the previous method, the new procedure uses a different sample pretreatment, wet extraction, and quantitation method. The performance of the new method was compared with that of the previously reported method by analyses of 22 medicated feed samples from various commercial sources. A comparison of the results presented here reveals slightly lower assay values obtained with the new method. Although a paired sample t-test indicates the difference in results is significant, this difference is within the method precision of either procedure.

  4. Comparison of different classification methods for analyzing electronic nose data to characterize sesame oils and blends.

    PubMed

    Shao, Xiaolong; Li, Hui; Wang, Nan; Zhang, Qiang

    2015-10-21

    An electronic nose (e-nose) was used to characterize sesame oils processed by three different methods (hot-pressed, cold-pressed, and refined), as well as blends of the sesame oils and soybean oil. Seven classification and prediction methods, namely PCA, LDA, PLS, KNN, SVM, LASSO and RF, were used to analyze the e-nose data. Classification accuracy and MAUC were employed to evaluate the performance of these methods. The results indicated that sesame oils processed with different methods produced different sensor responses, with cold-pressed sesame oil producing the strongest sensor signals, followed by hot-pressed sesame oil. Blends of pressed sesame oils with refined sesame oil were more difficult to distinguish than blends of pressed sesame oils with refined soybean oil. LDA, KNN, and SVM outperformed the other classification methods in distinguishing the sesame oil blends. KNN, LASSO, PLS, SVM (with linear kernel), and RF models could adequately predict the adulteration level (% of added soybean oil) in the sesame oil blends. Among the prediction models, KNN with k = 1 and 2 yielded the best prediction results.
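
    The classifier-comparison loop can be sketched generically (Python/scikit-learn on synthetic stand-in "sensor" features; not the study's data or tuning):

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in: 10 "sensor" features, 3 oil classes.
      X, y = make_classification(n_samples=150, n_features=10,
                                 n_informative=6, n_classes=3,
                                 random_state=0)
      models = {
          "LDA": LinearDiscriminantAnalysis(),
          "KNN (k=2)": KNeighborsClassifier(n_neighbors=2),
          "SVM (linear)": SVC(kernel="linear"),
          "RF": RandomForestClassifier(random_state=0),
      }
      for name, model in models.items():
          acc = cross_val_score(model, X, y, cv=5).mean()
          print(name, "accuracy:", round(acc, 2))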

  5. Segmentation of mouse dynamic PET images using a multiphase level set method

    NASA Astrophysics Data System (ADS)

    Cheng-Liao, Jinxiu; Qi, Jinyi

    2010-11-01

    Image segmentation plays an important role in medical diagnosis. Here we propose an image segmentation method for four-dimensional mouse dynamic PET images. We consider that voxels inside each organ have similar time activity curves. The use of tracer dynamic information allows us to separate regions that have similar integrated activities in a static image but different temporal responses. We develop a multiphase level set method that utilizes both the spatial and temporal information in a dynamic PET data set. Different weighting factors are assigned to each image frame based on the noise level and the activity differences among the organs of interest. We used a weighted absolute difference function in the data matching term to increase the robustness of the estimate and to avoid over-partitioning of regions with high contrast. We validated the proposed method using computer-simulated dynamic PET data, as well as real mouse data from a microPET scanner, and compared the results with those of a dynamic clustering method. The results show that the proposed method produces smoother segments with fewer misclassified voxels.

  6. Evaluation of angiogram visualization methods for fast and reliable aneurysm diagnosis

    NASA Astrophysics Data System (ADS)

    Lesar, Žiga; Bohak, Ciril; Marolt, Matija

    2015-03-01

    In this paper we present the results of an evaluation of different visualization methods for angiogram volumetric data: ray casting, marching cubes, and multi-level partition of unity implicits. Several options are available with ray casting: isosurface extraction, maximum intensity projection and alpha compositing, each producing fundamentally different results. Different visualization methods are suitable for different needs, so this choice is crucial in diagnosis and decision-making processes. We also evaluate visual effects such as ambient occlusion, screen-space ambient occlusion, and depth of field. Some visualization methods include transparency, so we address the question of the relevance of this additional visual information. We employ transfer functions to map data values to color and transparency, allowing us to view or hide particular tissues. All the methods presented in this paper were developed using OpenCL, striving for real-time rendering and quality interaction. An evaluation was conducted to assess the suitability of the visualization methods. The results show the superiority of isosurface extraction with ambient occlusion effects. Visual effects may positively or negatively affect the perception of depth, motion, and relative positions in space.

  7. Comparison of a New Cobinamide-Based Method to a Standard Laboratory Method for Measuring Cyanide in Human Blood

    PubMed Central

    Swezey, Robert; Shinn, Walter; Green, Carol; Drover, David R.; Hammer, Gregory B.; Schulman, Scott R.; Zajicek, Anne; Jett, David A.; Boss, Gerry R.

    2013-01-01

    Most hospital laboratories do not measure blood cyanide concentrations, and samples must be sent to reference laboratories. A simple method is needed for measuring cyanide in hospitals. The authors previously developed a method to quantify cyanide based on the high binding affinity of the vitamin B12 analog, cobinamide, for cyanide and a major spectral change observed for cyanide-bound cobinamide. This method is now validated in human blood, and the findings include a mean inter-assay accuracy of 99.1%, precision of 8.75% and a lower limit of quantification of 3.27 µM cyanide. The method was applied to blood samples from children treated with sodium nitroprusside and it yielded measurable results in 88 of 172 samples (51%), whereas the reference laboratory yielded results in only 19 samples (11%). In all 19 samples, the cobinamide-based method also yielded measurable results. The two methods showed reasonable agreement when analyzed by linear regression, but not when analyzed by a standard error of the estimate or paired t-test. Differences in results between the two methods may be because samples were assayed at different times on different sample types. The cobinamide-based method is applicable to human blood, and can be used in hospital laboratories and emergency rooms. PMID:23653045

  8. Tracking of Ball and Players in Beach Volleyball Videos

    PubMed Central

    Gomez, Gabriel; Herrera López, Patricia; Link, Daniel; Eskofier, Bjoern

    2014-01-01

    This paper presents methods for the determination of players' positions and contact time points by tracking the players and the ball in beach volleyball videos. Two player tracking methods are compared: a classical particle filter and a rigid grid integral histogram tracker. Due to mutual occlusion of the players and the camera perspective, results are best for the front players, with 74.6% and 82.6% of correctly tracked frames for the particle method and the integral histogram method, respectively. The results suggest improved robustness against player confusion between different particle sets when tracking with a rigid grid approach. Faster processing and fewer player confusions make this method superior to the classical particle filter. Two different ball tracking methods are used that detect ball candidates from movement difference images using a background subtraction algorithm. Ball trajectories are estimated and interpolated from parabolic flight equations. The tracking accuracy of the ball is 54.2% for the trajectory growth method and 42.1% for the Hough line detection method. Tracking results of over 90% reported in the literature could not be confirmed. Ball contact frames were estimated from parabolic trajectory intersections, resulting in 48.9% of ball contact points correctly estimated. PMID:25426936
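
    The parabolic-trajectory step lends itself to a very short sketch (Python/numpy; the pixel coordinates are invented):

      import numpy as np

      # Invented ball candidates: (frame index, vertical pixel position).
      t = np.array([0, 1, 2, 3, 4, 5], dtype=float)
      y = np.array([400, 352, 314, 285, 267, 258], dtype=float)

      # Fit y = a*t^2 + b*t + c, the parabolic flight model.
      coeffs = np.polyfit(t, y, 2)

      # Extrapolate to later frames to bridge gaps between detections.
      print(np.polyval(coeffs, np.arange(6, 10)))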

  9. Efficient color correction method for smartphone camera-based health monitoring application.

    PubMed

    Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong

    2017-07-01

    Smartphone health monitoring applications have recently been highlighted due to the rapid development of the hardware and software performance of smartphones. However, the color characteristics of images captured by different smartphone models differ from one another, and this difference may give non-identical health monitoring results when smartphone health monitoring applications monitor physiological information using their embedded cameras. In this paper, we investigate the differences in the color properties of images captured by different smartphone models and apply a color correction method to adjust the dissimilar color values obtained from different smartphone cameras. Experimental results show that the color-corrected images obtained using the correction method have much smaller color intensity errors compared to the uncorrected images. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing the color intensity errors among images obtained from different smartphones.
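
    A minimal version of such a correction (Python/numpy; an affine fit on invented calibration patches, one simple choice among many colour-correction models) looks like this:

      import numpy as np

      # Invented RGB values of the same patches from two hypothetical cameras.
      source = np.array([[200, 60, 50], [50, 180, 70], [60, 70, 210],
                         [120, 120, 120], [230, 230, 40], [20, 20, 20]], float)
      target = np.array([[190, 70, 55], [55, 170, 80], [70, 75, 200],
                         [115, 118, 122], [220, 225, 50], [25, 22, 18]], float)

      # Least-squares affine correction: a 3x3 matrix plus an offset.
      A = np.hstack([source, np.ones((len(source), 1))])
      M, *_ = np.linalg.lstsq(A, target, rcond=None)
      corrected = A @ M

      print("mean abs error before:", np.abs(source - target).mean())
      print("mean abs error after: ", np.abs(corrected - target).mean())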

  10. Synthesizing Regression Results: A Factored Likelihood Method

    ERIC Educational Resources Information Center

    Wu, Meng-Jia; Becker, Betsy Jane

    2013-01-01

    Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…

  11. An Exponential Finite Difference Technique for Solving Partial Differential Equations. M.S. Thesis - Toledo Univ., Ohio

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert F.

    1987-01-01

    An exponential finite difference algorithm, as first presented by Bhattacharya for one-dimensional, steady-state heat conduction in Cartesian coordinates, has been extended. The finite difference algorithm developed was used to solve the diffusion equation in one-dimensional cylindrical coordinates and applied to two- and three-dimensional problems in Cartesian coordinates. The method was also used to solve nonlinear partial differential equations in one (Burgers' equation) and two (boundary layer equations) dimensions in Cartesian coordinates. Predicted results were compared to exact solutions where available, or to results obtained by other numerical methods. It was found that the exponential finite difference method produced results that were more accurate than those obtained by other numerical methods, especially during the initial transient portion of the solution. Other applications of the exponential finite difference technique included unsteady one-dimensional heat transfer with temperature-varying thermal conductivity and the development of the temperature field in a laminar Couette flow.
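
    One commonly cited form of the exponential update for the heat equation is sketched below (Python/numpy, under that assumption; it reduces to the classical explicit scheme for small arguments, since exp(x) ≈ 1 + x, and requires a positive field):

      import numpy as np

      def exponential_fd_step(u, r):
          """One exponential finite-difference step for u_t = alpha*u_xx:
          u_new = u * exp(r * (u_left - 2u + u_right) / u),
          with r = alpha*dt/dx**2; assumes u > 0."""
          lap = u[:-2] - 2.0 * u[1:-1] + u[2:]
          new = u.copy()
          new[1:-1] = u[1:-1] * np.exp(r * lap / u[1:-1])
          return new

      # 1D rod with both ends held at 1.0, interior starting at 0.1.
      u = np.full(51, 0.1)
      u[0] = u[-1] = 1.0
      for _ in range(2000):
          u = exponential_fd_step(u, 0.25)
      print(u[25])   # approaches the steady-state value 1.0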

  12. Exponential finite difference technique for solving partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Handschuh, R.F.

    1987-01-01

    An exponential finite difference algorithm, as first presented by Bhattacharya for one-dimensional, steady-state heat conduction in Cartesian coordinates, has been extended. The finite difference algorithm developed was used to solve the diffusion equation in one-dimensional cylindrical coordinates and applied to two- and three-dimensional problems in Cartesian coordinates. The method was also used to solve nonlinear partial differential equations in one (Burgers' equation) and two (boundary layer equations) dimensions in Cartesian coordinates. Predicted results were compared to exact solutions where available, or to results obtained by other numerical methods. It was found that the exponential finite difference method produced results that were more accurate than those obtained by other numerical methods, especially during the initial transient portion of the solution. Other applications of the exponential finite difference technique included unsteady one-dimensional heat transfer with temperature-varying thermal conductivity and the development of the temperature field in a laminar Couette flow.

  13. Clustering Scientific Publications Based on Citation Relations: A Systematic Comparison of Different Methods.

    PubMed

    Šubelj, Lovro; van Eck, Nees Jan; Waltman, Ludo

    2016-01-01

    Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between different properties that may be considered desirable for a good clustering of publications. Overall, map equation methods appear to perform best in our analysis, suggesting that these methods deserve more attention from the bibliometric community.

  14. Clustering Scientific Publications Based on Citation Relations: A Systematic Comparison of Different Methods

    PubMed Central

    Šubelj, Lovro; van Eck, Nees Jan; Waltman, Ludo

    2016-01-01

    Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between different properties that may be considered desirable for a good clustering of publications. Overall, map equation methods appear to perform best in our analysis, suggesting that these methods deserve more attention from the bibliometric community. PMID:27124610
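
    As a toy illustration of the workflow (not of the map equation methods the study favors, which are implemented in tools such as Infomap), the sketch below clusters a small invented citation graph with the modularity-based community detection that ships with networkx.

      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      # Toy sketch: cluster a small citation graph (treated as undirected). The
      # study favors map equation methods (e.g. Infomap); modularity-based
      # clustering is used here only because it ships with networkx.
      edges = [("p1", "p2"), ("p1", "p3"), ("p2", "p3"),  # one group of papers
               ("p4", "p5"), ("p4", "p6"), ("p5", "p6"),  # a second group
               ("p3", "p4")]                              # one bridging citation
      G = nx.Graph(edges)

      for i, community in enumerate(greedy_modularity_communities(G)):
          print(f"cluster {i}:", sorted(community))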

  15. A multi-strategy approach to informative gene identification from gene expression data.

    PubMed

    Liu, Ziying; Phan, Sieu; Famili, Fazel; Pan, Youlian; Lenferink, Anne E G; Cantin, Christiane; Collins, Catherine; O'Connor-McCourt, Maureen D

    2010-02-01

    An unsupervised multi-strategy approach has been developed to identify informative genes from high-throughput genomic data. Several statistical methods have been used in the field to identify differentially expressed genes; since different methods generate different lists of genes, it is very challenging to determine the most reliable gene list and the appropriate method. This paper presents a multi-strategy method in which a combination of several data analysis techniques is applied to a given dataset and a confidence measure is established to select genes from the gene lists generated by these techniques to form the core of our final selection. The remaining genes, which form the peripheral region, are subject to exclusion from or inclusion into the final selection. This paper demonstrates the methodology through its application to an in-house cancer genomics dataset and a public dataset. The results indicate that our method provides a more reliable list of genes, validated using biological knowledge, biological experiments, and literature search. We further evaluated the multi-strategy method by consolidating two pairs of independent datasets, each pair concerning the same disease but generated by different labs using different platforms; here, too, our method produced far better results.
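
    The abstract does not specify the confidence measure. The sketch below illustrates the general voting idea on simulated expression data: genes flagged by at least two of three simple criteria (t-test, rank-sum test, and absolute mean difference as a stand-in for log fold change) form the core selection. All thresholds are invented.

      import numpy as np
      from scipy import stats

      # Toy sketch of multi-strategy selection on simulated (log-scale) data:
      # flag genes by three criteria and keep as the "core" those selected by
      # at least two strategies. Thresholds are illustrative only.
      rng = np.random.default_rng(1)
      n_genes, n_a, n_b = 200, 10, 10
      group_a = rng.normal(0.0, 1.0, size=(n_genes, n_a))
      group_b = rng.normal(0.0, 1.0, size=(n_genes, n_b))
      group_b[:20] += 2.0                       # first 20 genes truly differential

      t_p = stats.ttest_ind(group_a, group_b, axis=1).pvalue
      u_p = np.array([stats.mannwhitneyu(a, b).pvalue
                      for a, b in zip(group_a, group_b)])
      diff = np.abs(group_b.mean(axis=1) - group_a.mean(axis=1))

      votes = (t_p < 0.01).astype(int) + (u_p < 0.01) + (diff > 1.0)
      core = np.flatnonzero(votes >= 2)         # genes flagged by >= 2 strategies
      print("core genes:", core)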

  16. Exploration of analysis methods for diagnostic imaging tests: problems with ROC AUC and confidence scores in CT colonography.

    PubMed

    Mallett, Susan; Halligan, Steve; Collins, Gary S; Altman, Doug G

    2014-01-01

    Different methods of evaluating diagnostic performance when comparing diagnostic tests may lead to different results. We compared two such approaches, sensitivity and specificity versus the area under the Receiver Operating Characteristic curve (ROC AUC), for the evaluation of CT colonography for the detection of polyps, either with or without computer-assisted detection. In a multireader multicase study of 10 readers and 107 cases we compared sensitivity and specificity, using radiological reporting of the presence or absence of polyps, to ROC AUC calculated from confidence scores concerning the presence of polyps. Both methods were assessed against a reference standard. Here we focus on five readers, selected to illustrate issues in design and analysis. We compared diagnostic measures within readers, showing that differences in results are due to the statistical methods. Reader performance varied widely depending on whether sensitivity and specificity or ROC AUC was used. There were problems using confidence scores: in assigning scores to all cases; in the use of zero scores when no polyps were identified; in the bimodal, non-normal distribution of scores; in fitting ROC curves, owing to extrapolation beyond the study data; and in the undue influence of a few false positive results. Variation due to the use of different ROC methods exceeded the differences between test results for ROC AUC. The confidence scores recorded in our study violated many assumptions of ROC AUC methods, rendering those methods inappropriate. The problems we identified will apply to other detection studies using confidence scores. We found sensitivity and specificity to be a more reliable and clinically appropriate way to compare these diagnostic tests.
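
    The contrast between the two summaries is easy to reproduce. In the sketch below, invented reader data mimic the bimodal confidence scores described above: binary calls yield sensitivity and specificity, while the ordinal scores feed scikit-learn's ROC AUC.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      # Illustrative sketch: the same reader's ratings summarized two ways.
      # Binary calls give sensitivity/specificity; ordinal confidence scores
      # give ROC AUC. Data are invented, with the many-zeros pattern noted above.
      truth  = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])      # polyp present?
      calls  = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])      # reader's binary report
      scores = np.array([90, 80, 95, 0, 0, 0, 5, 0, 60, 0])  # confidence scores

      tp = np.sum((calls == 1) & (truth == 1))
      fn = np.sum((calls == 0) & (truth == 1))
      tn = np.sum((calls == 0) & (truth == 0))
      fp = np.sum((calls == 1) & (truth == 0))
      print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
      print("ROC AUC from confidence scores:", roc_auc_score(truth, scores))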

  17. A Decision-Based Modified Total Variation Diffusion Method for Impulse Noise Removal

    PubMed Central

    Zhu, Qingxin; Song, Xiuli; Tao, Jinsong

    2017-01-01

    Impulse noise removal usually employs median filtering, switching median filtering, the total variation L1 method, and variants. These approaches, however, often introduce excessive smoothing and can result in extensive blurring of visual features; they are thus suitable only for images with low-density noise. A new method to remove noise is proposed in this paper to overcome this limitation, which divides pixels into different categories based on different noise characteristics. If an image is corrupted by salt-and-pepper noise, the pixels are divided into corrupted and noise-free; if the image is corrupted by random-valued impulses, the pixels are divided into corrupted, noise-free, and possibly corrupted. Pixels falling into different categories are processed differently: if a pixel is corrupted, modified total variation diffusion is applied; if the pixel is possibly corrupted, weighted total variation diffusion is applied; otherwise, the pixel is left unchanged. Experimental results show that the proposed method is robust to different noise strengths and suitable for different images, with strong noise removal capability as shown by PSNR/SSIM results as well as the visual quality of the restored images. PMID:28536602
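
    The paper's modified total variation diffusion is not reproduced here. The sketch below shows only the decision-based skeleton for the salt-and-pepper case: pixels at the extreme values 0 or 255 are treated as corrupted and replaced by a local median, while noise-free pixels are left unchanged, echoing the categorization step described above.

      import numpy as np
      from scipy.ndimage import median_filter

      # Minimal decision-based baseline (not the paper's modified TV diffusion):
      # for salt-and-pepper noise, pixels at 0 or 255 are treated as corrupted
      # and replaced by a local median; all other pixels are left untouched.
      def decision_based_median(img: np.ndarray, size: int = 3) -> np.ndarray:
          corrupted = (img == 0) | (img == 255)
          restored = img.copy()
          restored[corrupted] = median_filter(img, size=size)[corrupted]
          return restored

      rng = np.random.default_rng(2)
      clean = rng.integers(40, 200, size=(64, 64)).astype(np.uint8)
      noisy = clean.copy()
      mask = rng.random(clean.shape) < 0.1            # 10% impulse noise
      noisy[mask] = rng.choice([0, 255], size=mask.sum()).astype(np.uint8)

      restored = decision_based_median(noisy)
      print("noisy MAE:   ", np.abs(noisy.astype(int) - clean).mean())
      print("restored MAE:", np.abs(restored.astype(int) - clean).mean())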

  18. State of the art of immunoassay methods for B-type natriuretic peptides: An update.

    PubMed

    Clerico, Aldo; Franzini, Maria; Masotti, Silvia; Prontera, Concetta; Passino, Claudio

    2015-01-01

    The aim of this review article is to give an update on the state of the art of the immunoassay methods for the measurement of B-type natriuretic peptide (BNP) and its related peptides. Using chromatographic procedures, several studies reported an increasing number of circulating peptides related to BNP in human plasma of patients with heart failure. These peptides may have reduced or even no biological activity. Furthermore, other studies have suggested that, using immunoassays that are considered specific for BNP, the precursor of the peptide hormone, proBNP, constitutes a major portion of the peptide measured in plasma of patients with heart failure. Because BNP immunoassay methods show large (up to 50%) systematic differences in values, the use of identical decision values for all immunoassay methods, as suggested by the most recent international guidelines, seems unreasonable. Since proBNP significantly cross-reacts with all commercial immunoassay methods considered specific for BNP, manufacturers should test and clearly declare the degree of cross-reactivity of glycosylated and non-glycosylated proBNP in their BNP immunoassay methods. Clinicians should take into account that there are large systematic differences between methods when they compare results from different laboratories that use different BNP immunoassays. On the other hand, clinical laboratories should take part in external quality assessment (EQA) programs to evaluate the bias of their method in comparison to other BNP methods. Finally, the authors believe that the development of more specific methods for the active peptide, BNP1-32, should reduce the systematic differences between methods and result in better harmonization of results.

  19. Combination of different methods to assess the fate of lignin in decomposing needle and leave litter

    NASA Astrophysics Data System (ADS)

    Klotzbücher, Thimo; Filley, Timothy; Kaiser, Klaus; Kalbitz, Karsten

    2010-05-01

    Lignin is a major component of plant litter, yet its fate during litter decay is still poorly understood, partly because its analysis is difficult. Commonly used methods rely on different methodological approaches and focus on different aspects, e.g., the content of lignin and/or of lignin-derived phenols and the degree of oxidation, and their comparability and feasibility have not been tested so far. Our aims were (1) to compare different methods with respect to tracking lignin degradation during plant litter decay and (2) to evaluate the possible advantages of combining their results. We assessed lignin degradation in decaying litter by 13C-TMAH thermochemolysis and CuO oxidation (each combined with GC/MS) and by determination of acid-detergent lignin (ADL) combined with near-infrared spectroscopy. Furthermore, water-extractable organic matter produced during litter decay was examined for indicators of lignin-derived compounds by UV absorbance at 280 nm, fluorescence spectroscopy, and 13C-TMAH GC/MS. The study included litter samples from 5 different tree species (acer, ash, beech, pine, spruce), exposed in litterbags to degradation in a spruce stand for 27 months. First results suggested stronger lignin degradation in coniferous than in deciduous litter. This was indicated by complementary results from several methods: conifer litter showed a more pronounced decrease in ADL content and a stronger increase in the oxidation degree of side chains (Ac/Al ratios of CuO oxidation and 13C-TMAH products), and water-extractable organic matter from needles showed higher aromaticity and molecular complexity. Thus, properties of water-extractable organic matter seemed to reflect the extent of lignin degradation in solid litter samples. Contents of lignin-derived phenols determined with the CuO method (VSC content) hardly changed during decay of needles and leaves; these results did not match the trends found with the ADL method. Our results suggest that water-soluble phenolic acids, which are included in the CuO oxidation products, accumulated during decay of litter with less stable lignin and then contributed to VSC contents and to the pool of water-extractable organic matter. By combining results from different methods we gained a better understanding of the differences in lignin degradation between the litter species.

  20. A New Moving Object Detection Method Based on Frame-difference and Background Subtraction

    NASA Astrophysics Data System (ADS)

    Guo, Jiajia; Wang, Junping; Bai, Ruixue; Zhang, Yao; Li, Yong

    2017-09-01

    Although many methods of moving object detection have been proposed, moving object extraction remains the core of video surveillance. In complex real-world scenes, however, false detections, missed detections, and cavities inside the detected object still occur. In order to solve the problem of incomplete detection of moving objects, a new moving object detection method combining an improved frame difference with Gaussian mixture background subtraction is proposed in this paper. To make the detection more complete and accurate, image repair and morphological processing techniques, which act as spatial compensation, are applied in the proposed method. Experimental results show that our method can effectively eliminate ghosts and noise and fill the cavities of the moving object. Compared with four other moving object detection methods (GMM, ViBe, frame difference, and a method from the literature), the proposed method improves the efficiency and accuracy of detection.
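
    A minimal sketch of the two cues the method builds on, using OpenCV: an absolute frame difference and the Gaussian mixture background subtractor (MOG2), fused by a bitwise AND and cleaned with morphological closing. The video path is a placeholder, and the fusion rule and thresholds are illustrative rather than the paper's exact pipeline (which also includes image repair).

      import cv2

      # Hypothetical sketch: fuse a frame-difference mask with OpenCV's MOG2
      # background-subtraction mask, then close small holes morphologically.
      # "video.avi" is a placeholder path.
      cap = cv2.VideoCapture("video.avi")
      mog2 = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
      kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

      ok, prev = cap.read()
      prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          _, diff_mask = cv2.threshold(cv2.absdiff(gray, prev_gray), 25, 255,
                                       cv2.THRESH_BINARY)
          bg_mask = mog2.apply(frame)
          fused = cv2.bitwise_and(diff_mask, bg_mask)   # agreement of both cues
          fused = cv2.morphologyEx(fused, cv2.MORPH_CLOSE, kernel)  # fill cavities
          # "fused" now holds the binary moving-object mask for this frame.
          prev_gray = gray
      cap.release()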

  1. Two Different Points of View through Artificial Intelligence and Vector Autoregressive Models for Ex Post and Ex Ante Forecasting

    PubMed Central

    Aydin, Alev Dilek; Caliskan Cavdar, Seyma

    2015-01-01

    The ANN method has been applied by means of multilayered feedforward neural networks (MLFNs) using different macroeconomic variables, namely the USD/TRY exchange rate, gold prices, and the Borsa Istanbul (BIST) 100 index, based on monthly data over the period of January 2000 to September 2014 for Turkey. The vector autoregressive (VAR) method has also been applied with the same variables for the same period. In this study, unlike earlier studies, the ENCOG machine learning framework has been used together with the Java programming language to construct the ANN, and the network has been trained by the resilient propagation method. The ex post and ex ante estimates obtained by the ANN method have been compared with the results obtained by the econometric forecasting method of VAR. Strikingly, our findings based on the ANN method reveal the possibility of financial distress or a financial crisis in Turkey starting from October 2017. The results obtained with the VAR method also support the results of the ANN method. Additionally, our results indicate that the ANN approach has superior prediction performance compared with the VAR method. PMID:26550010

  2. Two Different Points of View through Artificial Intelligence and Vector Autoregressive Models for Ex Post and Ex Ante Forecasting.

    PubMed

    Aydin, Alev Dilek; Caliskan Cavdar, Seyma

    2015-01-01

    The ANN method has been applied by means of multilayered feedforward neural networks (MLFNs) using different macroeconomic variables, namely the USD/TRY exchange rate, gold prices, and the Borsa Istanbul (BIST) 100 index, based on monthly data over the period of January 2000 to September 2014 for Turkey. The vector autoregressive (VAR) method has also been applied with the same variables for the same period. In this study, unlike earlier studies, the ENCOG machine learning framework has been used together with the Java programming language to construct the ANN, and the network has been trained by the resilient propagation method. The ex post and ex ante estimates obtained by the ANN method have been compared with the results obtained by the econometric forecasting method of VAR. Strikingly, our findings based on the ANN method reveal the possibility of financial distress or a financial crisis in Turkey starting from October 2017. The results obtained with the VAR method also support the results of the ANN method. Additionally, our results indicate that the ANN approach has superior prediction performance compared with the VAR method.
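
    The VAR side of such a comparison can be sketched with statsmodels on synthetic monthly series; the column names mirror the paper's variables, but the numbers and the lag order are invented.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.api import VAR

      # Sketch of the VAR side of the comparison on synthetic monthly data; the
      # column names mirror the paper's variables, but the numbers are invented
      # and the lag order (2) is fixed for the sketch.
      rng = np.random.default_rng(3)
      n = 177  # monthly observations, January 2000 to September 2014
      data = pd.DataFrame({
          "usd_try": np.cumsum(rng.normal(0, 0.02, n)) + 1.5,
          "gold":    np.cumsum(rng.normal(0, 5.0, n)) + 300.0,
          "bist100": np.cumsum(rng.normal(0, 500.0, n)) + 10000.0,
      })

      fitted = VAR(data).fit(2)                        # order-2 VAR
      forecast = fitted.forecast(data.values[-fitted.k_ar:], steps=12)
      print(forecast[:3])                              # first 3 months, ex ante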

  3. Effectiveness of different tutorial recitation teaching methods and its implications for TA training

    NASA Astrophysics Data System (ADS)

    Endorf, Robert

    2008-04-01

    We present results from a comparative study of student understanding for students who attended recitation classes that used different teaching methods. The purpose of the study was to evaluate which teaching methods would be the most effective for recitation classes associated with large lectures in introductory physics courses. Student volunteers from our introductory calculus-based physics course at the University of Cincinnati attended a special recitation class that was taught using one of four different teaching methods. A total of 272 students were divided into approximately equal groups for each method. Students in each class were taught the same topic, ``Changes in Energy and Momentum,'' from ``Tutorials in Introductory Physics'' by Lillian McDermott, Peter Shaffer and the Physics Education Group at the University of Washington. The different teaching methods varied in the amount of student and teacher engagement. Student understanding was evaluated through pretests and posttests. Our results demonstrate the importance of the instructor's role in teaching recitation classes. The most effective teaching method was for students working in cooperative learning groups with the instructors questioning the groups using Socratic dialogue. In addition, we investigated student preferences of modes of instruction through an open-ended survey. Our results provide guidance and evidence for the teaching methods which should be emphasized in training course instructors.

  4. Experimental research on showing automatic disappearance pen handwriting based on spectral imaging technology

    NASA Astrophysics Data System (ADS)

    Su, Yi; Xu, Lei; Liu, Ningning; Huang, Wei; Xu, Xiaojing

    2016-10-01

    Purpose: to find an efficient, non-destructive examination method for revealing words that have faded after being written with an automatically disappearing ink pen. Method: an imaging spectrometer was used to reveal the latent words on the paper surface, exploiting the different reflection and absorption properties of the various substances in different spectral bands. Results: words that had disappeared, whether written with different disappearing-ink pens on the same paper or with the same pen on different papers, could be revealed clearly by the spectral imaging examination method. Conclusion: spectral imaging technology can reveal words that have faded after being written with an automatically disappearing ink pen.

  5. Statistical evaluation of fatty acid profile and cholesterol content in fish (common carp) lipids obtained by different sample preparation procedures.

    PubMed

    Spiric, Aurelija; Trbovic, Dejana; Vranic, Danijela; Djinovic, Jasna; Petronijevic, Radivoj; Matekalo-Sverak, Vesna

    2010-07-05

    Studies of lipid extraction from animal and fish tissues rarely report the influence of the extraction procedure on the fatty acid composition of the extracted lipids or on the cholesterol content. Data presented in this paper indicate the impact of extraction procedures on the fatty acid profile of fish lipids extracted by the modified Soxhlet and ASE (accelerated solvent extraction) procedures. Cholesterol was also determined by the direct saponification method. Student's paired t-test, used to compare the total fat content in the carp population obtained by the two extraction methods, shows that the differences between the total fat content determined by ASE and by the modified Soxhlet method are not statistically significant. Values obtained by three different methods (direct saponification, ASE, and modified Soxhlet), used to determine the cholesterol content in carp, were compared by one-way analysis of variance (ANOVA). The results show that the modified Soxhlet method gives results that differ significantly from those obtained by direct saponification and ASE, whereas the results obtained by direct saponification and ASE do not differ significantly from each other. The highest cholesterol contents in the analyzed fish muscle (37.65 to 65.44 mg/100 g) were obtained by the direct saponification method, the less destructive one, followed by ASE (34.16 to 52.60 mg/100 g) and the modified Soxhlet extraction method (10.73 to 30.83 mg/100 g). The modified Soxhlet method gives higher values for n-6 fatty acids than the ASE method (t(paired)=3.22, t(c)=2.36), while there is no statistically significant difference in n-3 content between the methods (t(paired)=1.31). The UNSFA/SFA ratio obtained using the modified Soxhlet method is also higher than that obtained using the ASE method (t(paired)=4.88, t(c)=2.36). Principal Component Analysis (PCA) showed that the highest positive contributions to the second principal component (PC2) came from C18:3 n-3 and C20:3 n-6, present in higher amounts in the samples treated by modified Soxhlet extraction, while C22:5 n-3, C20:3 n-3, C22:1, C20:4, C16, and C18 negatively influenced the PC2 scores, showing significantly increased levels in the samples treated by the ASE method. Hotelling's paired T-square test, applied to the first three principal components to confirm differences in individual fatty acid content obtained by the ASE and Soxhlet methods in carp muscle, showed a statistically significant difference between the two data sets (T(2)=161.308, p<0.001). Copyright 2010 Elsevier B.V. All rights reserved.
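
    The two tests named above are standard and easy to reproduce. The sketch below applies a paired t-test (ASE vs. modified Soxhlet total fat on the same fish) and a one-way ANOVA (three cholesterol methods) to invented numbers chosen to echo the reported pattern.

      import numpy as np
      from scipy import stats

      # Sketch of the two comparisons described above, on invented numbers.
      rng = np.random.default_rng(4)
      fat_ase     = rng.normal(5.0, 0.8, 30)
      fat_soxhlet = fat_ase + rng.normal(0.0, 0.3, 30)   # same fish, second method
      print(stats.ttest_rel(fat_ase, fat_soxhlet))        # paired t-test

      chol_sapon   = rng.normal(50.0, 8.0, 30)            # direct saponification
      chol_ase     = rng.normal(45.0, 6.0, 30)
      chol_soxhlet = rng.normal(20.0, 6.0, 30)            # much lower, as reported
      print(stats.f_oneway(chol_sapon, chol_ase, chol_soxhlet))  # one-way ANOVA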

  6. Technical note: comparison of 3 methods for analyzing areas under the curve for glucose and nonesterified fatty acids concentrations following epinephrine challenge in dairy cows.

    PubMed

    Cardoso, F C; Sears, W; LeBlanc, S J; Drackley, J K

    2011-12-01

    The objective of the study was to compare 3 methods for calculating the area under the curve (AUC) for plasma glucose and nonesterified fatty acids (NEFA) after an intravenous epinephrine (EPI) challenge in dairy cows. Cows were assigned to 1 of 6 dietary niacin treatments in a completely randomized 6 × 6 Latin square with an extra period to measure carryover effects. Periods consisted of a 7-d (d 1 to 7) adaptation period followed by a 7-d (d 8 to 14) measurement period. On d 12, cows received an i.v. infusion of EPI (1.4 μg/kg of BW). Blood was sampled at -45, -30, -20, -10, and -5 min before EPI infusion and 2.5, 5, 10, 15, 20, 30, 45, 60, 90, and 120 min after. The AUC was calculated as the incremental area, the positive incremental area, and the total area, each using the trapezoidal rule. The 3 methods resulted in different statistical inferences. When comparing the 3 methods for the NEFA and glucose responses, no significant differences among treatments and no interactions between treatment and AUC method were observed, but the effect of method was statistically significant for both responses. Our results suggest that the positive incremental method and the total area method gave similar results and interpretations but differed from the incremental area method. Thus, the 3 methods evaluated can lead to different results and statistical inferences for glucose and NEFA AUC after an EPI challenge. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
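
    The three AUC definitions compared in the study can be written down directly. The sketch below applies them, via the trapezoidal rule, to an invented post-challenge glucose curve, with the baseline taken as the pre-infusion mean.

      import numpy as np

      # Sketch of the three AUC definitions on an invented glucose curve sampled
      # at the study's time points; the baseline is the pre-infusion mean.
      t = np.array([-45, -30, -20, -10, -5,
                    2.5, 5, 10, 15, 20, 30, 45, 60, 90, 120.0])
      glucose = np.array([60, 61, 59, 60, 60,
                          75, 85, 90, 82, 74, 68, 63, 61, 60, 60.0])

      baseline = glucose[t < 0].mean()
      x, y = t[t >= 0], glucose[t >= 0]

      def trapz(y, x):
          # trapezoidal rule, written out to stay NumPy-version agnostic
          return np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x))

      total_area = trapz(y, x)                         # raw area under the curve
      incremental = trapz(y - baseline, x)             # signed area above baseline
      positive_incremental = trapz(np.clip(y - baseline, 0, None), x)  # >= 0 only
      print(total_area, incremental, positive_incremental)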

  7. Solitary traveling wave solutions of pressure equation of bubbly liquids with examination for viscosity and heat transfer

    NASA Astrophysics Data System (ADS)

    Khater, Mostafa M. A.; Seadawy, Aly R.; Lu, Dianchen

    2018-03-01

    In this research, we investigate one of the most popular models in nature and industry: the pressure equation for bubbly liquids, with allowance for viscosity and heat transfer, which has many applications in nature and engineering. Understanding the physical meaning of exact and solitary traveling wave solutions of this equation gives researchers in the field a clear picture of pressure waves in a mixture of liquid and gas bubbles, taking into consideration the viscosity of the liquid and heat transfer, and also of the dynamics of contrast agents in the blood flow in ultrasonic studies. To achieve our goal, we apply three different methods to this equation: the extended tanh-function method, the extended simple equation method, and a new auxiliary equation method. We obtain exact and solitary traveling wave solutions, discuss the similarities and differences among the three methods, and compare our results with results obtained by other researchers using different methods. This comparison indicates that our new auxiliary equation method is the most general, powerful, and result-oriented of the three. Such solutions allow for an understanding of the phenomenon and its intrinsic properties, and the method is straightforward to apply to other phenomena.
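
    The abstract does not reproduce the ansatz itself. In the widely used form of the extended tanh-function method, a traveling-wave reduction u(x, t) = U(\xi) with \xi = x - ct is sought as a finite series in Y = \tanh(\mu\xi):

      U(\xi) = a_0 + \sum_{i=1}^{N} \left( a_i Y^i + b_i Y^{-i} \right),
      \qquad Y = \tanh(\mu \xi), \qquad \xi = x - ct,

    where N is fixed by balancing the highest-order derivative against the strongest nonlinear term, and the coefficients a_i, b_i, \mu, and the wave speed c follow from setting the coefficient of each power of Y to zero.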

  8. Comparison of Sensible Heat Flux from Eddy Covariance and Scintillometer over different land surface conditions

    NASA Astrophysics Data System (ADS)

    Zeweldi, D. A.; Gebremichael, M.; Summis, T.; Wang, J.; Miller, D.

    2008-12-01

    A large source of uncertainty in satellite-based evapotranspiration algorithms results from the estimation of sensible heat flux H. Traditionally eddy covariance sensors, and more recently large-aperture scintillometers, have been used as ground truth to evaluate satellite-based H estimates. The two methods rely on different physical measurement principles and represent different footprint sizes. In New Mexico, we conducted a field campaign during summer 2008 to compare H estimates obtained from the eddy covariance and scintillometer methods. During this field campaign, we installed sonic anemometers; one propeller eddy covariance (OPEC) system equipped with a net radiometer and soil heat flux sensors; a large-aperture scintillometer (LAS); and a weather station consisting of wind speed, direction, and radiation sensors over three experimental areas with different roughness conditions (desert, irrigated area, and lake). Our results show the similarities and differences in H estimates obtained from these methods over the different land surface conditions. Further, our results show that the H estimates obtained from the LAS agree with those obtained from the eddy covariance method when high-frequency thermocouple temperature, instead of the typical weather station temperature measurement, is used in the LAS analysis.

  9. Input respiratory impedance in mice: comparison between the flow-based and the wavetube method to perform the forced oscillation technique.

    PubMed

    Mori, V; Oliveira, M A; Vargas, M H M; da Cunha, A A; de Souza, R G; Pitrez, P M; Moriya, H T

    2017-06-01

    Objective and approach: In this study, we estimated the constant phase model (CPM) parameters from the respiratory impedance of male BALB/c mice by performing the forced oscillation technique (FOT) in a control group (n = 8) and in a murine model of asthma (OVA) (n = 10). We then compared the results obtained by two different methods: a commercial device (flexiVent-flexiWare 7.X; SCIREQ, Montreal, Canada) (FXV) and a wavetube method device (Sly et al 2003 J. Appl. Physiol. 94 1460-6) (WVT), since results from different methods may not be directly comparable. First, we compared the results by performing a two-way analysis of variance (ANOVA) on the resistance, elastance, and tissue damping. We found statistically significant differences in all CPM parameters except resistance when comparing the Control and OVA groups. When comparing devices, we found statistically significant differences in resistance, while differences in elastance were not observed; for tissue damping, the results from WVT were higher than those from FXV. Finally, when comparing the relative variation of the CPM parameters between the Control and OVA groups in both devices, no significant differences were observed for any parameter. We therefore conclude that this assessment can compensate for the effect of using different cannulas. Furthermore, tissue damping differences between groups can be compensated for, since bronchoconstrictors were not used. We believe that relative variations in the results between groups can serve as a comparison parameter when using different equipment without bronchoconstrictor administration.
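
    For reference (stated here from the standard formulation, not from the abstract), the constant phase model fitted in such studies is conventionally written as

      Z_{rs}(\omega) = R_N + j\omega I + \frac{G - jH}{\omega^{\alpha}},
      \qquad \alpha = \frac{2}{\pi}\arctan\left(\frac{H}{G}\right),

    where R_N is the Newtonian (airway) resistance, I the inertance, G the tissue damping, and H the tissue elastance.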

  10. Efficacy Evaluation of Different Wavelet Feature Extraction Methods on Brain MRI Tumor Detection

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel; Kubat, Miroslav

    2014-03-01

    Automated brain tumor detection and segmentation in Magnetic Resonance Imaging is a challenging task. Among the available methods, feature-based methods are dominant, and while many feature extraction techniques have been employed, it is still not clear which should be preferred. To help improve the situation, we present the results of a study evaluating the efficiency of different wavelet-transform feature extraction methods for brain MRI abnormality detection. Using T1-weighted brain images, Discrete Wavelet Transform (DWT), Discrete Wavelet Packet Transform (DWPT), Dual-Tree Complex Wavelet Transform (DTCWT), and Complex Morlet Wavelet Transform (CMWT) methods are applied to construct the feature pool. Three classifiers, a Support Vector Machine (SVM), K-Nearest Neighbors, and a Sparse Representation-Based Classifier, are applied and compared for classifying the selected features. The results show that DTCWT and CMWT features classified with the SVM yield the highest classification accuracy, demonstrating that wavelet transform features are informative in this application.
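
    One branch of the study's pipeline can be sketched with pywt and scikit-learn: one-level 2-D DWT coefficients summarized into simple statistics and fed to a linear SVM. The images below are random stand-ins for T1-weighted slices, and the feature summary is illustrative, not the paper's exact feature pool.

      import numpy as np
      import pywt
      from sklearn.svm import SVC

      # Sketch: 2-D DWT coefficients as features for an SVM. Images are random
      # stand-ins for T1-weighted slices; the feature summary is illustrative.
      def dwt_features(image: np.ndarray) -> np.ndarray:
          cA, (cH, cV, cD) = pywt.dwt2(image, "db4")   # one-level 2-D DWT
          return np.array([c.mean() for c in (cA, cH, cV, cD)] +
                          [c.std() for c in (cA, cH, cV, cD)])

      rng = np.random.default_rng(5)
      normal   = rng.normal(100, 10, size=(20, 64, 64))
      abnormal = rng.normal(100, 10, size=(20, 64, 64)) + 15  # crude "lesion" shift

      X = np.array([dwt_features(img)
                    for img in np.concatenate([normal, abnormal])])
      y = np.array([0] * 20 + [1] * 20)
      clf = SVC(kernel="linear").fit(X, y)
      print("training accuracy:", clf.score(X, y))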

  11. Paradigm Diagnostics Salmonella Indicator Broth (PDX-SIB) for detection of Salmonella on selected environmental surfaces.

    PubMed

    Olstein, Alan; Griffith, Leena; Feirtag, Joellen; Pearson, Nicole

    2013-01-01

    The Paradigm Diagnostics Salmonella Indicator Broth (PDX-SIB) is intended as a single-step selective enrichment indicator broth to be used as a simple screening test for the presence of Salmonella spp. in environmental samples. This method permits the end user to avoid multistep sample processing to identify presumptively positive samples, as exemplified by standard U.S. reference methods. PDX-SIB permits the outgrowth of Salmonella while inhibiting the growth of competitive Gram-negative and -positive microflora. Growth of Salmonella-positive cultures results in a visual color change of the medium from purple to yellow when the sample is grown at 37 ± 1 °C. Performance of PDX-SIB has been evaluated in five different categories: inclusivity-exclusivity, methods comparison, ruggedness, lot-to-lot variability, and shelf stability. The inclusivity panel included 100 different Salmonella serovars, 98 of which were SIB-positive during the 30 to 48 h incubation period. The exclusivity panel included 33 different non-Salmonella microorganisms, 31 of which were SIB-negative during the incubation period. Methods comparison studies included four different surfaces: S. Newport on plastic, S. Anatum on sealed concrete, S. Abaetetuba on ceramic tile, and S. Typhimurium in the presence of 1 log excess of Citrobacter freundii. Results of the methods comparison studies demonstrated no statistical difference between the SIB method and the U.S. Food and Drug Administration-Bacteriological Analytical Manual reference method, as measured by the Mantel-Haenszel Chi-square test. Ruggedness studies demonstrated little variation in test results when SIB incubation temperatures were varied over a 34-40 °C range. Lot-to-lot consistency results suggest no detectable differences in manufactured goods using two reference Salmonella serovars and one non-Salmonella microorganism.

  12. Two Measurement Methods of Leaf Dry Matter Content Produce Similar Results in a Broad Range of Species

    PubMed Central

    Vaieretti, María Victoria; Díaz, Sandra; Vile, Denis; Garnier, Eric

    2007-01-01

    Background and Aims Leaf dry matter content (LDMC) is widely used as an indicator of plant resource use in plant functional trait databases. Two main methods have been proposed to measure LDMC, which basically differ in the rehydration procedure to which leaves are subjected after harvesting. These are the ‘complete rehydration’ protocol of Garnier et al. (2001, Functional Ecology 15: 688–695) and the ‘partial rehydration’ protocol of Vendramini et al. (2002, New Phytologist 154: 147–157). Methods To test differences in LDMC due to the use of different methods, LDMC was measured on 51 native and cultivated species representing a wide range of plant families and growth forms from central-western Argentina, following the complete rehydration and partial rehydration protocols. Key Results and Conclusions The LDMC values obtained by both methods were strongly and positively correlated, clearly showing that LDMC is highly conserved between the two procedures. These trends were not altered by the exclusion of plants with non-laminar leaves. Although the complete rehydration method is the safest to measure LDMC, the partial rehydration procedure produces similar results and is faster. It therefore appears as an acceptable option for those situations in which the complete rehydration method cannot be applied. Two notes of caution are given for cases in which different datasets are compared or combined: (1) the discrepancy between the two rehydration protocols is greatest in the case of high-LDMC (succulent or tender) leaves; (2) the results suggest that, when comparing many studies across unrelated datasets, differences in the measurement protocol may be less important than differences among seasons, years and the quality of local habitats. PMID:17353207

  13. Colorimetric characterization of digital cameras with unrestricted capture settings applicable for different illumination circumstances

    NASA Astrophysics Data System (ADS)

    Fang, Jingyu; Xu, Haisong; Wang, Zhehong; Wu, Xiaomin

    2016-05-01

    With colorimetric characterization, digital cameras can be used as image-based tristimulus colorimeters for color communication. In order to overcome the restriction to fixed capture settings adopted in conventional colorimetric characterization procedures, a novel method was proposed that takes the capture settings into account. The method for calculating colorimetric values of a captured image comprises five main steps, including conversion of the RGB values to equivalent values under the training settings, via factors based on an imaging-system model, so as to bridge the different settings, and scaling factors applied in preparation for the transformation mapping, to avoid errors resulting from the nonlinearity of polynomial mapping across different ranges of illumination levels. The experimental results indicate that the prediction error of the proposed method, measured by the CIELAB color difference formula, is less than 2 CIELAB units under different illumination levels and different correlated color temperatures. This prediction accuracy for different capture settings matches that of the conventional method for a particular lighting condition.

  14. Prototype Procedures to Describe Army Jobs

    DTIC Science & Technology

    2010-07-01

    ratings for the same MOS. Consistent with a multi-trait multi-method framework, high profile similarities (or low mean differences) among different ... rater types for the same MOS would indicate convergent validity. That is, different methods (i.e., rater types) yield converging results for the same... different methods of data collection depends upon the type of data collected. For example, it could be that data on work-oriented descriptors are most

  15. Application of the BMWP-Costa Rica biotic index in aquatic biomonitoring: sensitivity to collection method and sampling intensity.

    PubMed

    Gutiérrez-Fonseca, Pablo E; Lorion, Christopher M

    2014-04-01

    The use of aquatic macroinvertebrates as bio-indicators in water quality studies has increased considerably over the last decade in Costa Rica, and standard biomonitoring methods have now been formulated at the national level. Nevertheless, questions remain about the effectiveness of different methods of sampling freshwater benthic assemblages, and how sampling intensity may influence biomonitoring results. In this study, we compared the results of qualitative sampling using commonly applied methods with a more intensive quantitative approach at 12 sites in small, lowland streams on the southern Caribbean slope of Costa Rica. Qualitative samples were collected following the official protocol using a strainer during a set time period and macroinvertebrates were field-picked. Quantitative sampling involved collecting ten replicate Surber samples and picking out macroinvertebrates in the laboratory with a stereomicroscope. The strainer sampling method consistently yielded fewer individuals and families than quantitative samples. As a result, site scores calculated using the Biological Monitoring Working Party-Costa Rica (BMWP-CR) biotic index often differed greatly depending on the sampling method. Site water quality classifications using the BMWP-CR index differed between the two sampling methods for 11 of the 12 sites in 2005, and for 9 of the 12 sites in 2006. Sampling intensity clearly had a strong influence on BMWP-CR index scores, as well as perceived differences between reference and impacted sites. Achieving reliable and consistent biomonitoring results for lowland Costa Rican streams may demand intensive sampling and requires careful consideration of sampling methods.

  16. The effect of stochastic technique on estimates of population viability from transition matrix models

    USGS Publications Warehouse

    Kaye, T.N.; Pyke, David A.

    2003-01-01

    Population viability analysis is an important tool for conservation biologists, and matrix models that incorporate stochasticity are commonly used for this purpose. However, stochastic simulations may require assumptions about the distribution of matrix parameters, and modelers often select a statistical distribution that seems reasonable without sufficient data to test its fit. We used data from long-term (5–10 year) studies with 27 populations of five perennial plant species to compare seven methods of incorporating environmental stochasticity. We estimated stochastic population growth rate (a measure of viability) using a matrix-selection method, in which whole observed matrices were selected at random at each time step of the model. In addition, we drew matrix elements (transition probabilities) at random using various statistical distributions: beta, truncated-gamma, truncated-normal, triangular, uniform, or discontinuous/observed. Recruitment rates were held constant at their observed mean values. Two methods of constraining stage-specific survival to ≤100% were also compared. Different methods of incorporating stochasticity and constraining matrix column sums interacted in their effects and resulted in different estimates of stochastic growth rate (differing by up to 16%). Modelers should be aware that when constraining stage-specific survival to 100%, different methods may introduce different levels of bias in transition element means, and when this happens, different distributions for generating random transition elements may result in different viability estimates. There was no species effect on the results and the growth rates derived from all methods were highly correlated with one another. We conclude that the absolute value of population viability estimates is sensitive to model assumptions, but the relative ranking of populations (and management treatments) is robust. Furthermore, these results are applicable to a range of perennial plants and possibly other life histories.
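
    One of the element-selection variants is easy to sketch: draw each transition probability from a beta distribution matched by the method of moments to an assumed mean and standard deviation, and estimate the stochastic growth rate as the average one-step log growth. The matrix and its variability below are invented.

      import numpy as np

      # Sketch of one element-selection variant: beta-distributed transition
      # probabilities (method-of-moments parameters) with constant recruitment,
      # and the stochastic growth rate as the mean one-step log growth.
      rng = np.random.default_rng(6)
      mean_A = np.array([[0.00, 0.00, 2.00],   # top row: constant recruitment
                         [0.30, 0.40, 0.00],
                         [0.00, 0.35, 0.80]])
      sd = 0.05                                 # common SD for transition elements

      def beta_draw(mean, sd):
          common = mean * (1 - mean) / sd**2 - 1   # method-of-moments parameters
          return rng.beta(mean * common, (1 - mean) * common)

      n = np.array([1.0, 1.0, 1.0]) / 3.0
      logs = []
      for _ in range(5000):
          A = mean_A.copy()
          for i, j in [(1, 0), (1, 1), (2, 1), (2, 2)]:  # stochastic elements only
              A[i, j] = beta_draw(mean_A[i, j], sd)
          n = A @ n
          logs.append(np.log(n.sum()))          # one-step log growth (n summed to 1)
          n = n / n.sum()                       # renormalize to avoid overflow

      print("stochastic log growth rate:", np.mean(logs))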

  17. Personalized Privacy-Preserving Frequent Itemset Mining Using Randomized Response

    PubMed Central

    Sun, Chongjing; Fu, Yan; Zhou, Junlin; Gao, Hui

    2014-01-01

    Frequent itemset mining is the important first step of association rule mining, which discovers interesting patterns from massive data. There are increasing concerns about the privacy problem in frequent itemset mining, and some works have been proposed to handle this kind of problem. In this paper, we introduce a personalized privacy problem, in which different attributes may need different levels of privacy protection. To solve this problem, we give a personalized privacy-preserving method using the randomized response technique. By providing different privacy levels for different attributes, this method can achieve higher accuracy on frequent itemset mining than the traditional method providing the same privacy level. Finally, our experimental results show that our method can obtain better results on frequent itemset mining while preserving personalized privacy. PMID:25143989

  18. Personalized privacy-preserving frequent itemset mining using randomized response.

    PubMed

    Sun, Chongjing; Fu, Yan; Zhou, Junlin; Gao, Hui

    2014-01-01

    Frequent itemset mining is the important first step of association rule mining, which discovers interesting patterns from massive data. There are increasing concerns about the privacy problem in frequent itemset mining, and some works have been proposed to handle this kind of problem. In this paper, we introduce a personalized privacy problem, in which different attributes may need different levels of privacy protection. To solve this problem, we give a personalized privacy-preserving method using the randomized response technique. By providing different privacy levels for different attributes, this method can achieve higher accuracy on frequent itemset mining than the traditional method providing the same privacy level. Finally, our experimental results show that our method can obtain better results on frequent itemset mining while preserving personalized privacy.
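
    The randomized response primitive behind the method is a one-line perturbation. In the sketch below, each binary attribute value is kept with probability p and flipped otherwise, with p acting as the per-attribute privacy level; since P(report = 1) = pi(2p - 1) + (1 - p), the true frequency pi can be recovered from the observed frequency.

      import numpy as np

      # Sketch of the randomized response perturbation: keep each binary value
      # with probability p, flip it otherwise, then invert the bias to estimate
      # the true frequency. Values of pi and p are invented.
      rng = np.random.default_rng(7)
      n = 100_000
      true = rng.random(n) < 0.3              # 30% of users truly have the attribute
      p = 0.8                                 # privacy level for this attribute

      keep = rng.random(n) < p
      reported = np.where(keep, true, ~true)  # flip with probability 1 - p

      pi_obs = reported.mean()
      pi_est = (pi_obs + p - 1) / (2 * p - 1)  # corrected frequency estimate
      print("observed:", pi_obs, "corrected:", pi_est)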

  19. Associations between Family Adversity and Brain Volume in Adolescence: Manual vs. Automated Brain Segmentation Yields Different Results.

    PubMed

    Lyden, Hannah; Gimbel, Sarah I; Del Piero, Larissa; Tsai, A Bryna; Sachs, Matthew E; Kaplan, Jonas T; Margolin, Gayla; Saxbe, Darby

    2016-01-01

    Associations between brain structure and early adversity have been inconsistent in the literature. These inconsistencies may be partially due to methodological differences. Different methods of brain segmentation may produce different results, obscuring the relationship between early adversity and brain volume. Moreover, adolescence is a time of significant brain growth and certain brain areas have distinct rates of development, which may compromise the accuracy of automated segmentation approaches. In the current study, 23 adolescents participated in two waves of a longitudinal study. Family aggression was measured when the youths were 12 years old, and structural scans were acquired an average of 4 years later. Bilateral amygdalae and hippocampi were segmented using three different methods (manual tracing, FSL, and NeuroQuant). The segmentation estimates were compared, and linear regressions were run to assess the relationship between early family aggression exposure and all three volume segmentation estimates. Manual tracing results showed a positive relationship between family aggression and right amygdala volume, whereas FSL segmentation showed negative relationships between family aggression and both the left and right hippocampi. However, results indicate poor overlap between methods, and different associations were found between early family aggression exposure and brain volume depending on the segmentation method used.

  20. Associations between Family Adversity and Brain Volume in Adolescence: Manual vs. Automated Brain Segmentation Yields Different Results

    PubMed Central

    Lyden, Hannah; Gimbel, Sarah I.; Del Piero, Larissa; Tsai, A. Bryna; Sachs, Matthew E.; Kaplan, Jonas T.; Margolin, Gayla; Saxbe, Darby

    2016-01-01

    Associations between brain structure and early adversity have been inconsistent in the literature. These inconsistencies may be partially due to methodological differences. Different methods of brain segmentation may produce different results, obscuring the relationship between early adversity and brain volume. Moreover, adolescence is a time of significant brain growth and certain brain areas have distinct rates of development, which may compromise the accuracy of automated segmentation approaches. In the current study, 23 adolescents participated in two waves of a longitudinal study. Family aggression was measured when the youths were 12 years old, and structural scans were acquired an average of 4 years later. Bilateral amygdalae and hippocampi were segmented using three different methods (manual tracing, FSL, and NeuroQuant). The segmentation estimates were compared, and linear regressions were run to assess the relationship between early family aggression exposure and all three volume segmentation estimates. Manual tracing results showed a positive relationship between family aggression and right amygdala volume, whereas FSL segmentation showed negative relationships between family aggression and both the left and right hippocampi. However, results indicate poor overlap between methods, and different associations were found between early family aggression exposure and brain volume depending on the segmentation method used. PMID:27656121

  1. Comparison of methods used to estimate conventional undiscovered petroleum resources: World examples

    USGS Publications Warehouse

    Ahlbrandt, T.S.; Klett, T.R.

    2005-01-01

    Various methods for assessing undiscovered oil, natural gas, and natural gas liquid resources were compared in support of the USGS World Petroleum Assessment 2000. Discovery process, linear fractal, parabolic fractal, engineering estimates, PETRIMES, Delphi, and the USGS 2000 methods were compared. Three comparisons of these methods were made in: (1) the Neuquen Basin province, Argentina (different assessors, same input data); (2) provinces in North Africa, Oman, and Yemen (same assessors, different methods); and (3) the Arabian Peninsula, Arabian (Persian) Gulf, and North Sea (different assessors, different methods). A fourth comparison (same assessors, same assessment methods, but different geologic models), between results from structural and stratigraphic assessment units in the North Sea, used only the USGS 2000 method, and hence compared the type of assessment unit rather than the method. In comparing methods, differences arise from inherent differences in assumptions regarding: (1) the underlying distribution of the parent field population (all fields, discovered and undiscovered); (2) the population of fields being estimated, that is, the entire parent distribution or the undiscovered resource distribution; (3) inclusion or exclusion of large outlier fields; (4) inclusion or exclusion of field (reserve) growth; (5) deterministic or probabilistic models; (6) data requirements; and (7) scale and time frame of the assessment. Discovery process, Delphi subjective consensus, and the USGS 2000 method yield comparable results because similar procedures are employed. In mature areas such as the Neuquen Basin province in Argentina, the linear and parabolic fractal and engineering methods were conservative compared to the other five methods and relative to new reserve additions there since 1995. The PETRIMES method gave the most optimistic estimates in the Neuquen Basin. In less mature areas, the linear fractal method yielded larger estimates relative to other methods. A geologically based model, such as one using the total petroleum system approach, is preferred in that it combines the elements of petroleum source, reservoir, trap and seal with the tectono-stratigraphic history of basin evolution with petroleum resource potential. Care must be taken to demonstrate that homogeneous populations in terms of geology, geologic risk, exploration, and discovery processes are used in the assessment process. The USGS 2000 method (7th Approximation Model, EMC computational program) is robust; that is, it can be used in both mature and immature areas, and provides comparable results when using different geologic models (e.g. stratigraphic or structural) with differing amounts of subdivisions, assessment units, within the total petroleum system. © 2005 International Association for Mathematical Geology.

  2. Approximation of the exponential integral (well function) using sampling methods

    NASA Astrophysics Data System (ADS)

    Baalousha, Husam Musa

    2015-04-01

    Exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximating the exponential integral, based on sampling methods. Three different sampling methods have been used to approximate the function: Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH). Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by the Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
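
    The sampling idea can be sketched using the identity E1(x) = \int_0^1 e^{-x/u}/u \, du (obtained from the defining integral by the substitution t = x/u): average the integrand over a Latin Hypercube sample of u in (0, 1) and compare against scipy's exp1 as the benchmark. The sample size is arbitrary.

      import numpy as np
      from scipy.special import exp1
      from scipy.stats import qmc

      # Sketch: estimate the well function E1(x) as the LHS average of the
      # integrand exp(-x/u)/u over u in (0, 1); scipy's exp1 is the benchmark.
      x = 0.5
      u = qmc.LatinHypercube(d=1, seed=8).random(n=10_000).ravel()

      estimate = np.mean(np.exp(-x / u) / u)
      print("LHS estimate:", estimate, "benchmark exp1:", exp1(x))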

  3. The spa typing of methicillin-resistant Staphylococcus aureus isolates by High Resolution Melting (HRM) analysis.

    PubMed

    Fasihi, Yasser; Fooladi, Saba; Mohammadi, Mohammad Ali; Emaneini, Mohammad; Kalantar-Neyestanaki, Davood

    2017-09-06

    Molecular typing is an important tool for the control and prevention of infection. A suitable molecular typing method for epidemiological investigation must be easy to perform, highly reproducible, inexpensive, rapid, and easy to interpret. In this study, two molecular typing methods, the conventional PCR-sequencing method and high resolution melting (HRM) analysis, were used for staphylococcal protein A (spa) typing of 30 methicillin-resistant Staphylococcus aureus (MRSA) isolates recovered from clinical samples. Based on the PCR-sequencing results, 16 different spa types were identified among the 30 MRSA isolates. Of these 16 spa types, 14 were separated by the HRM method; two spa types, t4718 and t2894, were not separated from each other. According to our results, spa typing based on HRM analysis is very rapid, easy to perform, and cost-effective, but the method must be standardized for different regions, spa types, and real-time PCR instruments.

  4. Test methods for environment-assisted cracking

    NASA Astrophysics Data System (ADS)

    Turnbull, A.

    1992-03-01

    The test methods for assessing environment-assisted cracking of metals in aqueous solution are described. The advantages and disadvantages are examined and the interrelationship between results from different test methods is discussed. Differences in susceptibility to cracking occasionally observed between the various mechanical test methods often arise from variation in environmental parameters between the different test conditions and from the lack of adequate specification, monitoring, and control of environmental variables. Time is also a significant factor when comparing results from short-term tests with long-exposure tests. In addition to these factors, intrinsic differences in important mechanical variables, such as strain rate, associated with the various mechanical test methods can change the apparent sensitivity of the material to stress corrosion cracking. The increasing economic pressure for more accelerated testing is in conflict with the characteristic time dependence of corrosion processes. Unreliable results may be inevitable in some cases, but improved understanding of mechanisms and the development of mechanistically based models of environment-assisted cracking which incorporate the key mechanical, material, and environmental variables can provide the framework for a more realistic interpretation of short-term data.

  5. [Research on Time-frequency Characteristics of Magneto-acoustic Signal of Different Thickness Medium Based on Wave Summing Method].

    PubMed

    Zhang, Shunqi; Yin, Tao; Ma, Ren; Liu, Zhipeng

    2015-08-01

    Functional imaging of the electrical characteristics of biological tissue based on the magneto-acoustic effect provides valuable information for early tumor diagnosis; within it, analysis of the time and frequency characteristics of the magneto-acoustic signal is important for image reconstruction. This paper proposes a wave-summing method based on the Green's function solution for the acoustic source of the magneto-acoustic effect. Simulations and analysis of the time and frequency characteristics of the magneto-acoustic signal are carried out under quasi-1D transmission conditions for models of different thickness, and the simulated signals are verified through experiments. The simulation results for different thicknesses show that the time-frequency characteristics of the magneto-acoustic signal reflect the thickness of the sample. Thin samples, less than one wavelength of the pulse, and thick samples, larger than one wavelength, show different summed waveforms and frequency characteristics owing to the difference in summing thickness. Experimental results verify the theoretical analysis and simulation results. This research lays a foundation for acoustic source and conductivity reconstruction in media of different thickness in magneto-acoustic imaging.

  6. Comparison of Different Classification Methods for Analyzing Electronic Nose Data to Characterize Sesame Oils and Blends

    PubMed Central

    Shao, Xiaolong; Li, Hui; Wang, Nan; Zhang, Qiang

    2015-01-01

    An electronic nose (e-nose) was used to characterize sesame oils processed by three different methods (hot-pressed, cold-pressed, and refined), as well as blends of the sesame oils and soybean oil. Seven classification and prediction methods, namely PCA, LDA, PLS, KNN, SVM, LASSO, and RF, were used to analyze the e-nose data, and classification accuracy and MAUC were employed to evaluate their performance. The results indicated that sesame oils processed with different methods produced different sensor responses, with cold-pressed sesame oil producing the strongest sensor signals, followed by hot-pressed sesame oil. Blends of pressed sesame oils with refined sesame oil were more difficult to distinguish than blends of pressed sesame oils with refined soybean oil. LDA, KNN, and SVM outperformed the other classification methods in distinguishing the sesame oil blends. KNN, LASSO, PLS, SVM (with a linear kernel), and RF models could adequately predict the adulteration level (% of added soybean oil) in the sesame oil blends; among these, KNN with k = 1 and 2 yielded the best predictions. PMID:26506350
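
    The best-performing configuration reported above (KNN with k = 1) is simple to sketch with scikit-learn on invented e-nose readings; the number of sensors, the class separations, and the cross-validation setup are all assumptions.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier

      # Sketch of the reported best configuration (KNN, k = 1) on invented
      # e-nose data: rows are oil samples, columns are sensor responses, and
      # labels encode the three processing methods.
      rng = np.random.default_rng(9)
      X = np.vstack([rng.normal(loc, 0.5, size=(30, 10))   # 10 hypothetical sensors
                     for loc in (3.0, 2.0, 1.0)])          # cold, hot, refined
      y = np.repeat(["cold", "hot", "refined"], 30)

      knn = KNeighborsClassifier(n_neighbors=1)
      print("CV accuracy:", cross_val_score(knn, X, y, cv=5).mean())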

  7. Effects of Problem-Based Learning on Attitude: A Meta-Analysis Study

    ERIC Educational Resources Information Center

    Demirel, Melek; Dagyar, Miray

    2016-01-01

    To date, researchers have frequently investigated students' attitudes toward courses supported by problem-based learning. There are several studies with different results in the literature. It is necessary to combine and interpret the findings of these studies through a meta-analysis method. This method aims to combine different results of similar…

  8. Sediment laboratory quality-assurance project: studies of methods and materials

    USGS Publications Warehouse

    Gordon, J.D.; Newland, C.A.; Gray, J.R.

    2001-01-01

    In August 1996 the U.S. Geological Survey initiated the Sediment Laboratory Quality-Assurance project, part of the National Sediment Laboratory Quality-Assurance program. This paper addresses the findings of the sand/fine separation analysis completed for the single-blind reference sediment-sample project and differences in reported results between two different analytical procedures. From the results it is evident that an incomplete separation of fine- and sand-size material commonly occurs, resulting in the classification of some of the fine-size material as sand-size material. Electron microscopy analysis supported the hypothesis that the negative bias for fine-size material and the positive bias for sand-size material are largely due to aggregation of some of the fine-size material into sand-size particles and adherence of fine-size material to the sand-size grains. Electron microscopy analysis showed that preserved river water that was low in dissolved solids and specific conductance, with neutral pH, showed less aggregation and adhesion than preserved river water that was higher in dissolved solids and specific conductance, with basic pH. Bacteria were also found growing in the matrix, which may enhance fine-size material aggregation through their adhesive properties. Differences between sediment-analysis methods were also investigated as part of this study. Suspended-sediment concentration results obtained from one participating laboratory that used a total-suspended solids (TSS) method had greater variability and larger negative biases than results obtained when this laboratory used a suspended-sediment concentration method. When TSS methods were used to analyze the reference samples, the median suspended-sediment concentration percent difference was -18.04 percent; when the laboratory used a suspended-sediment concentration method, it was -2.74 percent. The percent difference was calculated as follows: percent difference = ((reported mass - known mass)/known mass) x 100.
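
    The abstract's percent-difference formula is simple enough to state directly in code; the masses in the example call are invented for illustration.

      def percent_difference(reported_mass, known_mass):
          # Percent difference = ((reported mass - known mass) / known mass) x 100
          return (reported_mass - known_mass) / known_mass * 100.0

      print(percent_difference(8.2, 10.0))  # -> -18.0, a negative bias for fines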

  9. Antioxidant Properties of Brazilian Tropical Fruits by Correlation between Different Assays

    PubMed Central

    Pereira Lima, Giuseppina Pace; Fabris, Sabrina

    2013-01-01

    Four different assays (Folin-Ciocalteu, DPPH, an enzymatic method, and inhibitory activity on lipid peroxidation), based on radically different physicochemical principles and normally used to determine the antioxidant activity of food, were compared and used to investigate the antioxidant activity of fruits originating from Brazil, with particular attention to the more exotic and less-studied species (jurubeba, Solanum paniculatum; pequi, Caryocar brasiliense; pitaya, Hylocereus undatus; siriguela, Spondias purpurea; umbu, Spondias tuberosa), in order to (i) verify the correlations between results obtained by the different assays, with the final purpose of obtaining more reliable results and avoiding possible method-linked errors, and (ii) identify the most active fruit species. As expected, the different methods gave different responses, depending on the specific assay reaction. Nevertheless, all results indicate high antioxidant properties for siriguela and jurubeba and poor values for pitaya, umbu, and pequi. Considering that no marked difference in ascorbic acid content was detected among the different fruits, the experimental data suggest that the antioxidant activities of the investigated Brazilian fruits are poorly correlated with this molecule and depend principally on their total polyphenolic content. PMID:24106692

  10. A new method of Quickbird own image fusion

    NASA Astrophysics Data System (ADS)

    Han, Ying; Jiang, Hong; Zhang, Xiuying

    2009-10-01

    With the rapid development of remote sensing technology, the means of acquiring remote sensing data have become increasingly abundant, so the same area can be covered by a large number of multi-temporal images at different resolutions. The main fusion methods at present include HPF, the IHS transform, PCA, Brovey, the Mallat algorithm, and the wavelet transform. The IHS transform suffers from serious spectral distortion, while the Mallat algorithm omits the low-frequency information of the high-spatial-resolution image, so its fusion results show obvious blocking effects. Wavelet multi-scale decomposition achieves very good results for different sizes, directions, details, and edges, but different fusion rules and algorithms achieve different effects. This article takes the fusion of Quickbird imagery with itself as an example, comparing fusion based on wavelet transform with HVS against fusion based on wavelet transform with IHS; the results show that the former performs better. The correlation coefficient, the relative average spectral error index, and other commonly used indices are introduced to evaluate image quality.

  11. Efficacy of Conventional Laser Irradiation Versus a New Method for Gingival Depigmentation (Sieve Method): A Clinical Trial.

    PubMed

    Houshmand, Behzad; Janbakhsh, Noushin; Khalilian, Fatemeh; Talebi Ardakani, Mohammad Reza

    2017-01-01

    Introduction: Diode laser irradiation has recently shown promising results for the treatment of gingival pigmentation. This study sought to compare the efficacy of 2 diode laser irradiation protocols for the treatment of gingival pigmentation, namely the conventional method and the sieve method. Methods: In this split-mouth clinical trial, 15 patients with gingival pigmentation were selected and their pigmentation intensity was determined using Dummett's oral pigmentation index (DOPI) in different dental regions. A diode laser (980 nm wavelength, 2 W power) was applied in a stipple pattern (sieve method) on one side of the mouth and conventionally on the other side. Level of pain and satisfaction with the outcome (both patient and periodontist) were measured using a 0-10 visual analog scale (VAS) for both methods. Patients were followed up at 2 weeks, one month and 3 months. Pigmentation levels were compared using repeated measures analysis of variance (ANOVA). Differences in pain and satisfaction between the 2 groups were analyzed by t test and a general estimating equation model. Results: No significant differences were found between the 2 groups in the reduction of pigmentation scores or in pain scores. The difference in satisfaction with the results across the three time points was significant for both the conventional and sieve methods among patients ( P = 0.001) and periodontists ( P = 0.015). Conclusion: Diode laser irradiation with both methods successfully eliminated gingival pigmentation. The sieve method was comparable to the conventional technique, offering no additional advantage.

  12. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    DOE PAGES

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; ...

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  13. Evidential analysis of difference images for change detection of multitemporal remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Yin; Peng, Lijuan; Cremers, Armin B.

    2018-03-01

    In this article, we develop two methods for unsupervised change detection in multitemporal remote sensing images based on Dempster-Shafer theory of evidence (DST). In most unsupervised change detection methods, the probability distribution of the difference image is assumed to be characterized by mixture models, whose parameters are estimated by the expectation maximization (EM) method. However, the main drawback of the EM method is that it does not consider spatial contextual information, which may entail rather noisy detection results with numerous spurious alarms. To remedy this, we first develop an evidence-theory-based EM method (EEM) which incorporates spatial contextual information into EM by iteratively fusing the belief assignments of neighboring pixels into the central pixel. Second, an evidential labeling method in the sense of maximizing a posteriori probability (MAP) is proposed to further enhance the detection result. It first uses the parameters estimated by EEM to initialize the class labels of a difference image; it then iteratively fuses class-conditional information and spatial contextual information, updating labels and class parameters, and finally converges to a fixed state which gives the detection result. A simulated image set and two real remote sensing data sets are used to evaluate the two evidential change detection methods. Experimental results show that the new evidential methods are comparable to other prevalent methods in terms of total error rate.
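
    The EEM itself fuses Dempster-Shafer belief assignments; as a loose stand-in, the sketch below runs a two-component Gaussian-mixture EM on a difference image and simply averages the pixel responsibilities over a 3x3 neighbourhood each iteration. This conveys the idea of injecting spatial context into EM without reproducing the paper's evidential calculus.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def spatial_em(diff_img, n_iter=30):
          x = diff_img.astype(float)
          mu = np.percentile(x, [30.0, 90.0])   # initial unchanged / changed means
          sd = np.array([x.std(), x.std()]) + 1e-9
          pi = np.array([0.9, 0.1])
          for _ in range(n_iter):
              # E-step: Gaussian responsibilities per pixel
              lik = np.stack([pi[k] / sd[k] * np.exp(-(x - mu[k])**2 / (2 * sd[k]**2))
                              for k in range(2)])
              resp = lik / (lik.sum(axis=0) + 1e-300)
              # spatial step: blend each pixel's responsibilities with its neighbours
              resp = np.stack([uniform_filter(resp[k], size=3) for k in range(2)])
              resp /= resp.sum(axis=0)
              # M-step: update means, spreads, and mixing weights
              for k in range(2):
                  w = resp[k]
                  mu[k] = (w * x).sum() / w.sum()
                  sd[k] = np.sqrt((w * (x - mu[k])**2).sum() / w.sum()) + 1e-9
                  pi[k] = w.mean()
          return resp[1] > 0.5                  # boolean change map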

  14. Calculation methods study on hot spot stress of new girder structure detail

    NASA Astrophysics Data System (ADS)

    Liao, Ping; Zhao, Renda; Jia, Yi; Wei, Xing

    2017-10-01

    To study modeling and calculation methods for the hot spot stress of a new girder structure detail, several finite element models of this welded detail were built in ANSYS, based on the surface extrapolation variant of the hot spot stress method. The influence of element type, mesh density, local weld-toe modeling method, and extrapolation method on the calculated hot spot stress at the weld toe was analyzed. The results show that the difference in normal stress between the thickness direction and the surface direction among the different models grows as the distance from the weld toe shrinks. When the distance from the toe is greater than 0.5t, the normal stress of solid models, shell models with welds, and shell models without welds tends to be consistent along the surface direction; it is therefore recommended that the extrapolation points be selected beyond 0.5t for this new girder welded detail. According to the calculation and analysis results, shell models have good grid stability, and the extrapolated hot spot stress of solid models is smaller than that of shell models, so formula 2 and solid45 elements are suggested for the hot spot stress extrapolation of this welded detail. For each finite element model under the different shell modeling methods, the results calculated by formula 2 are smaller than those of the other two methods, and the results of shell models with welds are the largest. Under the same local mesh density, the extrapolated hot spot stress decreases gradually as the number of element layers through the thickness of the main plate increases, with a variation range within 7.5%.
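
    The abstract refers to "formula 2" without defining it; as a sketch of the general shape such rules take, the function below performs a generic two-point linear surface extrapolation of stresses back to the weld toe (whether this matches the paper's formula 2 is not stated). The thickness and stress values in the example are invented.

      def hot_spot_stress(s1, x1, s2, x2):
          # Linear extrapolation of surface stresses read at distances x1 < x2
          # from the weld toe back to the toe (x = 0).
          return s1 - x1 * (s2 - s1) / (x2 - x1)

      # e.g. stresses read at 0.5t and 1.5t, consistent with the abstract's
      # advice to place extrapolation points beyond 0.5t:
      t = 12.0  # mm, hypothetical plate thickness
      print(hot_spot_stress(100.0, 0.5 * t, 80.0, 1.5 * t))  # -> 110.0 MPa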

  15. Military Housing Privatization Initiative (MHPI), Eglin AFB, Florida and Hurlburt Field, Florida. Final Environmental Impact Statement

    DTIC Science & Technology

    2011-05-01

    There are several different methods available for determining stormwater runoff peak flows. Two of the most widely used methods are the Rational...environmental factors between the alternatives differ in terms of their respective potential for adverse effects relative to their location. ENVIRONMENTAL...Force selects a development proposal. As a result, the actual project scope may result in different numbers of units constructed or demolished, or

  16. An Unsupervised Change Detection Method Using Time-Series of PolSAR Images from Radarsat-2 and GaoFen-3.

    PubMed

    Liu, Wensong; Yang, Jie; Zhao, Jinqi; Shi, Hongtao; Yang, Le

    2018-02-12

    Traditional unsupervised change detection methods based on the pixel level can only detect changes between two different times with the same sensor, and the results are easily affected by speckle noise. In this paper, a novel method is proposed to detect change based on time-series data from different sensors. First, the overall difference image of the time-series PolSAR data is calculated by omnibus test statistics, and difference images between any two images at different times are acquired by R_j test statistics. Second, the difference images are segmented with a Generalized Statistical Region Merging (GSRM) algorithm, which can suppress the effect of speckle noise. A Generalized Gaussian Mixture Model (GGMM) is then used to obtain the time-series change detection maps in the final step of the proposed method. To verify the effectiveness of the proposed method, we carried out change detection experiments using time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, China. Results show that the proposed method can not only detect time-series change from different sensors, but can also better suppress the influence of speckle noise and improve the overall accuracy and Kappa coefficient.

  17. Assessment of changing interdependencies between human electroencephalograms using nonlinear methods

    NASA Astrophysics Data System (ADS)

    Pereda, E.; Rial, R.; Gamundi, A.; González, J.

    2001-01-01

    We investigate the problems that might arise when two recently developed methods for detecting interdependencies between time series using state space embedding are applied to signals of different complexity. With this aim, these methods were used to assess the interdependencies between two electroencephalographic channels from 10 adult human subjects during different vigilance states. The significance and nature of the measured interdependencies were checked by comparing the results of the original data with those of different types of surrogates. We found that even with proper reconstructions of the dynamics of the time series, both methods may give wrong statistical evidence of decreasing interdependencies during deep sleep due to changes in the complexity of each individual channel. The main factor responsible for this result was the use of an insufficient number of neighbors in the calculations. Once this problem was surmounted, both methods showed the existence of a significant relationship between the channels which was mostly of linear type and increased from wakefulness to slow wave sleep. We conclude that the significance of the qualitative results provided by both methods must be carefully tested before drawing any conclusion about the implications of such results.

  18. Comparison of membrane filtration and multiple tube methods for the enumeration of coliform organisms in water

    PubMed Central

    1972-01-01

    The membrane methods described in Report 71 on the bacteriological examination of water supplies (Report, 1969) for the enumeration of coliform organisms and Escherichia coli in waters, together with a glutamate membrane method, were compared with the glutamate multiple tube method recommended in Report 71 and an incubation procedure similar to that used for membranes with the first 4 hr. at 30° C., and with MacConkey broth in multiple tubes. Although there were some differences between individual laboratories, the combined results from all participating laboratories showed that standard and extended membrane methods gave significantly higher results than the glutamate tube method for coliform organisms in both chlorinated and unchlorinated waters, but significantly lower results for Esch. coli with chlorinated waters and equivocal results with unchlorinated waters. Extended membranes gave higher results than glutamate tubes in larger proportions of samples than did standard membranes. Although transport membranes did not do so well as standard membrane methods, the results were usually in agreement with glutamate tubes except for Esch. coli in chlorinated waters. The glutamate membranes were unsatisfactory. Preliminary incubation of glutamate at 30° C. made little difference to the results. PMID:4567313

  19. Comparison of gestational dating methods and implications ...

    EPA Pesticide Factsheets

    OBJECTIVES: Estimating gestational age is usually based on date of last menstrual period (LMP) or clinical estimation (CE); both approaches introduce potential bias. Differences in methods of estimation may lead to misclassification and inconsistencies in risk estimates, particularly if exposure assignment is also gestation-dependent. This paper examines a 'what-if' scenario in which alternative methods are used and attempts to elucidate how method choice affects observed results. METHODS: We constructed two 20-week gestational age cohorts of pregnancies between 2000 and 2005 (New Jersey, Pennsylvania, Ohio, USA) using live birth certificates: one defined preterm birth (PTB) status using CE and one using LMP. Within these, we estimated risk for 4 categories of preterm birth (PTBs per 10^6 pregnancies) and risk differences (RD (95% CIs)) associated with exposure to particulate matter (PM2.5). RESULTS: More births were classified preterm using LMP (16%) compared with CE (8%). RD divergences increased between cohorts as the exposure period approached delivery. Among births between 28 and 31 weeks, week 7 PM2.5 exposure conveyed RDs of 44 (21 to 67) for CE and 50 (18 to 82) for LMP populations, while week 24 exposure conveyed RDs of 33 (11 to 56) and -20 (-50 to 10), respectively. CONCLUSIONS: Different results from analyses restricted to births with both CE and LMP are most likely due to differences in dating methods rather than selection issues. Results are sensitive t
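
    The risk-difference unit used in the abstract (PTBs per 10^6 pregnancies) reduces to simple arithmetic; the counts below are illustrative, not the study's.

      def rd_per_million(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
          # risk difference, expressed as preterm births per 10**6 pregnancies
          rd = cases_exposed / n_exposed - cases_unexposed / n_unexposed
          return rd * 1_000_000

      print(rd_per_million(520, 1_000_000, 476, 1_000_000))  # -> 44.0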

  20. Simulation of one-sided heating of boiler unit membrane-type water walls

    NASA Astrophysics Data System (ADS)

    Kurepin, M. P.; Serbinovskiy, M. Yu.

    2017-03-01

    This study describes the results of simulating the temperature field and the stress-strain state of membrane-type gastight water walls of boiler units using the finite element method. Methods of analytical and standard calculation of one-sided heating of fin-tube water walls by a radiative heat flux are analyzed. Methods and software for input-data calculation in the finite element simulation, including the thermoelastic moments in welded panels that result from one-sided heating, are proposed. The method and software modules are used for water wall simulation in ANSYS. The simulated temperature field, stress field, deformations, and displacement of the membrane-type panel of the boiler furnace water wall are presented, along with the panel tube temperature, stresses, and deformations calculated by the known methods. A comparison of the known experimental results on heating and bending by given moments of membrane-type water walls with the numerical simulations is performed; the numerical results agree closely with the experimental data. The relative temperature difference does not exceed 1%. The relative difference between the experimental fin mutual turning angle caused by one-sided heating by radiative heat flux and the finite element simulation does not exceed 8.5% for nondisplaced fins and 7% for fins with displacement; the same difference between the theoretical results and the simulation does not exceed 3% and 7.1%, respectively. The proposed method and software modules for simulating the temperature field and stress-strain state of the water walls are verified, and the feasibility of their application in practical design is proven.

  1. K-nearest neighbors based methods for identification of different gear crack levels under different motor speeds and loads: Revisited

    NASA Astrophysics Data System (ADS)

    Wang, Dong

    2016-03-01

    Gears are the most commonly used components in mechanical transmission systems. Their failures may cause transmission system breakdown and result in economic loss. Identification of different gear crack levels is important to prevent unexpected gear failure, because gear cracks lead to gear tooth breakage. Signal-processing-based methods mainly require expertise to interpret gear fault signatures, which ordinary users usually cannot easily provide, so intelligent methods for automatically identifying different gear crack levels are needed. Previous case studies experimentally proved that K-nearest neighbors based methods exhibit high prediction accuracies for identification of 3 different gear crack levels under different motor speeds and loads. In this short communication, to further enhance the prediction accuracies of existing K-nearest neighbors based methods and to extend identification from 3 to 5 different gear crack levels, redundant statistical features are constructed using the Daubechies 44 (db44) binary wavelet packet transform at different wavelet decomposition levels, prior to the use of a K-nearest neighbors method. The dimensionality of the redundant statistical features is 620, which provides richer gear fault signatures. Since many of these statistical features are redundant and highly correlated with each other, dimensionality reduction is conducted to obtain new significant statistical features. Finally, the K-nearest neighbors method is used to identify 5 different gear crack levels under different motor speeds and loads. A case study including 3 experiments demonstrates that the developed method provides higher prediction accuracies than the existing K-nearest neighbors based methods for recognizing different gear crack levels under different motor speeds and loads. Based on the new significant statistical features, some other popular statistical models, including linear discriminant analysis, quadratic discriminant analysis, classification and regression trees, and the naive Bayes classifier, are compared with the developed method; the results show that the developed method has the highest prediction accuracies among these statistical models. Additionally, selection of the number of new significant features and parameter selection for K-nearest neighbors are thoroughly investigated.
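
    A reduced sketch of the feature pipeline the abstract describes (statistical features from wavelet packet coefficients feeding a KNN classifier) might look as follows; db8 is used in place of the paper's db44, the feature set is much smaller than the paper's 620, and the signals and labels are synthetic.

      import numpy as np
      import pywt
      from sklearn.neighbors import KNeighborsClassifier

      def wp_stat_features(signal, wavelet="db8", level=3):
          # statistical summaries of each wavelet packet node at the given level
          wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
          feats = []
          for node in wp.get_level(level, order="natural"):
              c = node.data
              feats += [c.mean(), c.std(), np.abs(c).max(), (c**2).sum()]
          return np.array(feats)

      rng = np.random.default_rng(1)
      signals = rng.normal(size=(60, 1024))   # hypothetical vibration segments
      labels = rng.integers(0, 5, size=60)    # 5 crack levels, as in the paper
      X = np.array([wp_stat_features(s) for s in signals])
      knn = KNeighborsClassifier(n_neighbors=5).fit(X, labels)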

  2. Vehicle Speed and Length Estimation Using Data from Two Anisotropic Magneto-Resistive (AMR) Sensors

    PubMed Central

    Markevicius, Vytautas; Navikas, Dangirutis; Valinevicius, Algimantas; Zilys, Mindaugas

    2017-01-01

    Methods for estimating a car's length are presented in this paper, together with the results achieved using a self-designed system equipped with two anisotropic magneto-resistive (AMR) sensors placed on a road lane. The purpose of the research was to compare the lengths of mid-size cars, i.e., family cars (hatchbacks), saloons (sedans), station wagons and SUVs. Four methods were used in the research: a simple threshold-based method, a threshold method based on moving average and standard deviation, a two-extreme-peak detection method, and a method based on amplitude and time normalization using linear extrapolation (or interpolation). The results were obtained by analyzing changes in the magnitude and in the absolute z-component of the magnetic field. The tests, which were performed in four different Earth directions, show differences in the values of the estimated lengths: the magnitude-based results for cars driving from south to north were up to 1.2 m higher than the other results achieved using the threshold methods. Smaller differences in length were observed when the distances were measured between the two extreme peaks in the cars' magnetic signatures. The results were summarized in tables and the errors of the estimated lengths were presented; the maximal errors, relative to real lengths, were up to 22%. PMID:28771171
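
    A minimal version of the simplest of the four methods (pure thresholding with two sensors a known distance apart) might look as follows; the sampling rate, sensor spacing, and threshold are placeholders, and real signatures would need the moving-average or peak-based refinements the paper compares.

      import numpy as np

      def speed_and_length(sig_a, sig_b, fs, spacing, threshold):
          # samples where each sensor's signal magnitude exceeds the threshold
          above_a = np.flatnonzero(np.abs(sig_a) > threshold)
          above_b = np.flatnonzero(np.abs(sig_b) > threshold)
          travel_time = (above_b[0] - above_a[0]) / fs   # first-detection lag (s)
          speed = spacing / travel_time                  # m/s
          occupancy = (above_a[-1] - above_a[0]) / fs    # dwell time over sensor A (s)
          return speed, speed * occupancy                # (speed, estimated length)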

  3. SU-E-I-96: A Study About the Influence of ROI Variation On Tumor Segmentation in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, L; Tan, S; Lu, W

    2014-06-01

    Purpose: To study the influence of different regions of interest (ROI) on tumor segmentation in PET. Methods: The experiments were conducted on a cylindrical phantom. Six spheres of different volumes (0.5 ml, 1 ml, 6 ml, 12 ml, 16 ml and 20 ml) were placed inside a cylindrical container to mimic tumors of different sizes. The spheres were filled with 11C solution as sources and the cylindrical container was filled with 18F-FDG solution as the background. The phantom was continuously scanned in a Biograph-40 True Point/True View PET/CT scanner, and 42 images were reconstructed with source-to-background ratio (SBR) ranging from 16:1 to 1.8:1. We took a large and a small ROI for each sphere, both of which contain the whole sphere and do not contain any other spheres. Six other ROIs of different sizes were then taken between the large and the small ROI. For each ROI, all images were segmented by eight thresholding methods and eight advanced methods, respectively. The segmentation results were evaluated by the dice similarity index (DSI), classification error (CE) and volume error (VE). The robustness of the different methods to ROI variation was quantified using the inter-run variation and a generalized Cohen's kappa. Results: With the change of ROI, the segmentation results of all tested methods changed to some degree. Compared with the advanced methods, the thresholding methods were less affected by the ROI change; in addition, most of the thresholding methods gave more accurate segmentation results for all sphere sizes. Conclusion: The results showed that the segmentation performance of all tested methods was affected by the change of ROI. Thresholding methods were more robust to this change and can segment the PET image more accurately. This work was supported in part by the National Natural Science Foundation of China (NNSFC), under Grant Nos. 60971112 and 61375018, and the Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086. Wei Lu was supported in part by the National Institutes of Health (NIH) Grant No. R01 CA172638.
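
    Of the three evaluation indices listed, the dice similarity index is the simplest to state directly; a minimal sketch for boolean segmentation masks:

      import numpy as np

      def dice_similarity(seg, ref):
          # DSI = 2|A intersect B| / (|A| + |B|) for boolean masks
          seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
          return 2.0 * (seg & ref).sum() / (seg.sum() + ref.sum())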

  4. The variability of software scoring of the CDMAM phantom associated with a limited number of images

    NASA Astrophysics Data System (ADS)

    Yang, Chang-Ying J.; Van Metter, Richard

    2007-03-01

    Software scoring approaches provide an attractive alternative to human evaluation of CDMAM images from digital mammography systems, particularly for annual quality control testing as recommended by the European Protocol for the Quality Control of the Physical and Technical Aspects of Mammography Screening (EPQCM). Methods for correlating CDCOM-based results with human observer performance have been proposed; a common feature of all of them is the use of a small number (at most eight) of CDMAM images to evaluate the system. This study focuses on the potential variability in the estimated system performance associated with these methods. Sets of 36 CDMAM images were acquired under carefully controlled conditions from three different digital mammography systems. The threshold visibility thickness (TVT) for each disk diameter was determined, using previously reported post-analysis methods, from the CDCOM scorings of a randomly selected group of eight images for one measurement trial. This random selection process was repeated 3000 times to estimate the variability in the resulting TVT values for each disk diameter. The results from different post-analysis methods, different random selection strategies, and different digital systems were compared. Additional variability for the 0.1 mm disk diameter was explored by comparing the results from two different image data sets acquired under the same conditions on the same system. The magnitude and type of error estimated for the experimental data were explained through modeling. The modeled results also suggest a limitation in the current phantom design for the 0.1 mm diameter disks. Through modeling, it was also found that, because of the binomial nature of the CDMAM scoring, the true variability of the test can be underestimated by the commonly used method of random re-sampling.
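
    The resampling scheme described (draw 8 of the 36 images, compute a TVT estimate, repeat 3000 times, and examine the spread) is easy to sketch; the per-image "scores" below are random stand-ins for real CDCOM scorings, and the mean is a placeholder for the actual TVT post-analysis.

      import numpy as np

      rng = np.random.default_rng(2)
      image_scores = rng.normal(loc=1.0, scale=0.15, size=36)  # hypothetical
      tvt_trials = [image_scores[rng.choice(36, size=8, replace=False)].mean()
                    for _ in range(3000)]
      print(np.std(tvt_trials))  # variability attributable to the 8-image draw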

  5. Evaluation of methods for detection of fluorescence labeled subcellular objects in microscope images.

    PubMed

    Ruusuvuori, Pekka; Aijö, Tarmo; Chowdhury, Sharif; Garmendia-Torres, Cecilia; Selinummi, Jyrki; Birbaumer, Mirko; Dudley, Aimée M; Pelkmans, Lucas; Yli-Harja, Olli

    2010-05-13

    Several algorithms have been proposed for detecting fluorescently labeled subcellular objects in microscope images. Many of these algorithms have been designed for specific tasks and validated with limited image data, and despite the potential of extensive comparisons between algorithms to guide method selection and thus yield more accurate results, relatively few such studies have been performed. To better understand algorithm performance under different conditions, we carried out a comparative study including eleven spot detection or segmentation algorithms from various application fields. We used microscope images from well plate experiments with a human osteosarcoma cell line and frames from image stacks of yeast cells in different focal planes. These experimentally derived images permit a comparison of method performance in realistic situations where the number of objects varies within the image set. We also used simulated microscope images in order to compare the methods and validate them against a ground truth reference result. Our study finds major differences in the performance of the different algorithms, in terms of both object counts and segmentation accuracy. These results suggest that the selection of detection algorithms for image-based screens should be done carefully and take into account different conditions, such as the possibility of acquiring empty images or images with very few spots. Our inclusion of methods that have not been used before in this context broadens the set of available detection methods and compares them against the current state-of-the-art methods for subcellular particle detection.

  6. Testing Different Model Building Procedures Using Multiple Regression.

    ERIC Educational Resources Information Center

    Thayer, Jerome D.

    The stepwise regression method of selecting predictors for computer assisted multiple regression analysis was compared with forward, backward, and best subsets regression, using 16 data sets. The results indicated the stepwise method was preferred because of its practical nature, when the models chosen by different selection methods were similar…

  7. The Role of Psychological and Physiological Factors in Decision Making under Risk and in a Dilemma

    PubMed Central

    Fooken, Jonas; Schaffner, Markus

    2016-01-01

    Different methods of eliciting individuals' risk attitudes often provide differing results despite a common underlying theory. One reason for such inconsistencies may be that underlying factors influence risk-taking decisions differently across methods; a better understanding of these factors across methods and decision contexts is therefore desirable. In this paper we study the difference in results between two risk elicitation methods by linking estimates of risk attitudes to gender, age, and personality traits, which have been shown to be related to risk taking. We also investigate the role of these factors during decision-making in a dilemma situation. For both decision contexts we additionally examine the decision-maker's physiological state during the decision, measured by heart rate variability (HRV), which we use as an indicator of emotional involvement. We found that the two elicitation methods provide different individual risk attitude measures, which is partly reflected in a different gender effect between the methods. Personality traits explain relatively little of the variation in risk attitudes and of the difference between methods. We also found that risk taking and the physiological state are related for one of the methods, suggesting that more emotionally involved individuals were more risk averse in the experiment. Finally, we found evidence that personality traits are connected to whether individuals made a decision in the dilemma situation, but risk attitudes and the physiological state were not indicative of the ability to decide in this context. PMID:26834591

  8. Microphone Array

    NASA Astrophysics Data System (ADS)

    Bader, Rolf

    This chapter deals with microphone arrays. It is arranged according to the different methods available for the different problems and the different mathematical tools involved. After discussing general properties of different array types, such as plane arrays, spherical arrays, or scanning arrays, it proceeds to the signal processing tools most used in speech processing. The third section discusses backpropagation methods based on the Helmholtz-Kirchhoff integral, which yield spatial radiation patterns of vibrating bodies or air.

  9. The effect of different methods and image analyzers on the results of the in vivo comet assay.

    PubMed

    Kyoya, Takahiro; Iwamoto, Rika; Shimanura, Yuko; Terada, Megumi; Masuda, Shuichi

    2018-01-01

    The in vivo comet assay is a widely used genotoxicity test that can detect DNA damage in a range of organs. It is included in the Organisation for Economic Co-operation and Development Guidelines for the Testing of Chemicals. However, various protocols are still used for this assay, and several different image analyzers are used routinely to evaluate the results. Here, we verified a protocol that largely contributes to the equivalence of results, and we assessed the effect on the results when slides made from the same sample were analyzed using two different image analyzers (Comet Assay IV vs Comet Analyzer). Standardizing the agarose concentrations and DNA unwinding and electrophoresis times had a large impact on the equivalence of the results between the different methods used for the in vivo comet assay. In addition, there was some variation in the sensitivity of the two different image analyzers tested; however this variation was considered to be minor and became negligible when the test conditions were standardized between the two different methods. By standardizing the concentrations of low melting agarose and DNA unwinding and electrophoresis times between both methods used in the current study, the sensitivity to detect the genotoxicity of a positive control substance in the in vivo comet assay became generally comparable, independently of the image analyzer used. However, there may still be the possibility that other conditions, except for the three described here, could affect the reproducibility of the in vivo comet assay.

  10. Social network extraction based on Web: 1. Related superficial methods

    NASA Astrophysics Data System (ADS)

    Khairuddin Matyuso Nasution, Mahyuddin

    2018-01-01

    Often the nature of an object affects the methods needed to resolve issues related to it. This also holds for methods of extracting social networks from the Web, which involve differently structured data types. This paper reviews several methods of social network extraction from the same source, the Web: the basic superficial method, the underlying superficial method, the description superficial method, and related superficial methods. We derive complexity inequalities between the methods and their computations. In this case, we find that different results from the same tools mark the difference from the more complex to the simpler: extraction of a social network involving co-occurrence is more complex than extraction using occurrences alone.

  11. Comparison of Manual Refraction Versus Autorefraction in 60 Diabetic Retinopathy Patients

    PubMed Central

    Shirzadi, Keyvan; Shahraki, Kourosh; Yahaghi, Emad; Makateb, Ali; Khosravifard, Keivan

    2016-01-01

    Aim: The purpose of the study was to compare manual refraction versus autorefraction in diabetic retinopathy patients. Material and Methods: The study was conducted at the Be'sat Army Hospital from 2013-2015. Differences between two common refractometry methods (manual refractometry and autorefractometry) in the diagnosis and follow-up of retinopathy in patients with diabetes were investigated. Results: Our results showed a significant difference in the visual acuity scores of patients between manual and autorefractometry. Despite this, the spherical equivalent scores of the two refractometry methods did not show a statistically significant difference. Conclusion: Manual refraction is comparable with autorefraction for evaluating spherical equivalent scores in diabetic patients with retinopathy, but the visual acuity results of the two methods are not comparable. PMID:27703289

  12. News video story segmentation method using fusion of audio-visual features

    NASA Astrophysics Data System (ADS)

    Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang

    2007-11-01

    News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Different from prior works, which are based on visual feature transforms, the proposed technique uses audio features as a baseline and fuses visual features with them to refine the results. First, it selects silence clips as audio candidate points, and shot boundaries and anchor shots as two kinds of visual candidate points. It then uses the audio candidates as cues and develops a fusion method that effectively uses the diverse visual candidates to refine the audio candidates and obtain story boundaries. Experimental results show that this method has high efficiency and adaptability to different kinds of news video.

  13. Analysis of financial time series using multiscale entropy based on skewness and kurtosis

    NASA Astrophysics Data System (ADS)

    Xu, Meng; Shang, Pengjian

    2018-01-01

    There is great interest in studying the dynamic characteristics of financial time series of daily stock closing prices in different regions. Multi-scale entropy (MSE) is effective mainly in quantifying the complexity of time series at different time scales. This paper applies a new MSE method based on skewness and kurtosis to assess financial stability. To better understand which coarse-graining method suits which kind of stock index, we take into account the developmental characteristics of the stock markets of three continents: Asia, North America and Europe. We study the volatility of the different financial time series and analyze the similarities and differences of the coarse-grained time series from the perspective of skewness and kurtosis. A correspondence between the entropy values of the stock sequences and the degree of stability of the financial markets was observed. Three of the eight stock sequences with particular characteristics were discussed, and their behavior matches the graphical results of applying the MSE method. A comparative study is conducted on synthetic and real-world data. Results show that the modified method is more sensitive to changes in dynamics and carries more valuable information, and that the skewness- and kurtosis-based discrimination is both evident and stable.
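
    The coarse-graining step the abstract modifies is easy to sketch: instead of averaging each non-overlapping window as in classical MSE, the window is summarized by skewness or kurtosis before the entropy computation (omitted here); the toy return series is synthetic.

      import numpy as np
      from scipy.stats import kurtosis, skew

      def coarse_grain(series, scale, stat):
          # split into non-overlapping windows of length `scale`, summarize each
          n = len(series) // scale
          windows = series[:n * scale].reshape(n, scale)
          return np.array([stat(w) for w in windows])

      returns = np.random.default_rng(3).standard_t(df=4, size=2000)  # heavy-tailed toy returns
      cg_skew = coarse_grain(returns, scale=5, stat=skew)
      cg_kurt = coarse_grain(returns, scale=5, stat=kurtosis)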

  14. Effects of various light curing methods on the leachability of uncured substances and hardness of a composite resin.

    PubMed

    Moon, H-J; Lee, Y-K; Lim, B-S; Kim, C-W

    2004-03-01

    The purpose of this study was to evaluate the effect of various light curing units (plasma arc, halogen and light-emitting diodes) and irradiation methods (one-step, two-step and pulse) using different light energy densities on the leachability of unreacted monomers (Bis-GMA and UDMA) and the surface hardness of a composite resin (Z250, 3M). Leachability from specimens immersed for 7 days in ethanol was analysed by HPLC. Vickers hardness number (VHN) was measured immediately after curing (IC) and after immersion in ethanol for 7 days. The various irradiation methods with the three curing units resulted in differences in the amount of leached monomers and in the VHN at IC when the light energy density was lower than 17.0 J cm(-2) (P = 0.05). However, regardless of curing unit and irradiation method, these results did not differ when the time or light energy density increased. At similar irradiated light energy densities (15.6-17.7 J cm(-2)), the efficiency of the irradiation methods differed in the following order: one-step >= two-step > pulse. These results suggest that the amount of leached monomers and the VHN were influenced by how the polymer structure forms in the activation and initiation stages of the polymerization process with different light source energies and curing times.

  15. Validation of odor concentration from mechanical-biological treatment piles using static chamber and wind tunnel with different wind speed values.

    PubMed

    Szyłak-Szydłowski, Mirosław

    2017-09-01

    The basic principle of odor sampling from surface sources is based primarily on the amount of air obtained from a specific area of the ground, which acts as a source of malodorous compounds. Wind tunnels and flux chambers are often the only available direct method of evaluating odor fluxes from small area sources. There are currently no widely accepted chamber-based methods; thus, there is still a need for standardization of these methods to ensure accuracy and comparability. Previous research has established that there is a significant difference between the odor concentration values obtained using the Lindvall chamber and those obtained by a dynamic flow chamber. Thus, the present study compares sampling methods using a streaming chamber modeled on the Lindvall cover (at different wind speeds), a static chamber, and a direct sampling method without any screens. The chamber volumes in the current work were similar, ~0.08 m^3. The study was conducted at a mechanical-biological treatment plant in Poland; samples were taken from a pile covered by a membrane. Measured odor concentration values were between 2 and 150 ou_E/m^3. Results of the study demonstrated that both chambers can be used interchangeably under the following conditions: odor concentration below 60 ou_E/m^3, wind speed inside the Lindvall chamber below 0.2 m/sec, and flow below 0.011 m^3/sec; increasing the wind speed above these values produces significant differences between the results obtained by the two methods. In all experiments, the odor concentrations measured with the static chamber were consistently higher than those measured in the Lindvall chamber. Lastly, the experimental results were used to determine a model function relating wind speed and odor concentration. The practical usefulness of the results lies in demonstrating the conditions under which the two examined chambers can be used interchangeably, improving the standardization of area-source sampling.

  16. A spring system method for a mesh generation problem

    NASA Astrophysics Data System (ADS)

    Romanov, A.

    2018-04-01

    A new direct method for 2D mesh generation for a simply-connected domain using a spring system is presented. The method can be combined with other methods to modify a mesh for growing-solid problems. Advantages and disadvantages of the method are shown, different types of boundary conditions are explored, and modelling results for different target domains are given. Some applications to composite materials are also studied.

  17. Comparison of Three Different Methods for Pile Integrity Testing on a Cylindrical Homogeneous Polyamide Specimen

    NASA Astrophysics Data System (ADS)

    Lugovtsova, Y. D.; Soldatov, A. I.

    2016-01-01

    Three different methods for pile integrity testing are compared on a cylindrical homogeneous polyamide specimen: low strain pile integrity testing, multichannel pile integrity testing, and testing with a shaker system. Since low strain pile integrity testing is a well-established and standardized method, its results are used as a reference for the other two methods.

  18. Comparing and improving reconstruction methods for proxies based on compositional data

    NASA Astrophysics Data System (ADS)

    Nolan, C.; Tipton, J.; Booth, R.; Jackson, S. T.; Hooten, M.

    2017-12-01

    Many types of studies in paleoclimatology and paleoecology involve compositional data. Often, these studies aim to use compositional data to reconstruct an environmental variable of interest; the reconstruction is usually done via the development of a transfer function. Transfer functions have been developed using many different methods, but existing methods tend to relate the compositional data to the reconstruction target in very simple ways, and the results from different methods are rarely compared. Here we seek to address these two issues. First, we introduce a new hierarchical Bayesian multivariate Gaussian process model; this model allows the relationship between each species in the compositional dataset and the environmental variable to be modeled in a way that captures the underlying complexities. We then compare this new method to machine learning techniques and commonly used existing methods. The comparisons are based on reconstructing the water table depth history of Caribou Bog (an ombrotrophic Sphagnum peat bog in Old Town, Maine, USA) from a new 7500-year record of testate amoebae assemblages. The resulting reconstructions from different methods diverge in both their means and their uncertainties; in particular, uncertainty tends to be drastically underestimated by some common methods. These results will help to improve inference of water table depth from testate amoebae, and the approach can be applied to test and improve inferences of past environmental conditions from a broad array of paleo-proxies based on compositional data.

  19. Digital photography and transparency-based methods for measuring wound surface area.

    PubMed

    Bhedi, Amul; Saxena, Atul K; Gadani, Ravi; Patel, Ritesh

    2013-04-01

    To compare linear, transparency, and photographic methods of measuring wound surface area, to determine a credible method for accurately monitoring the progress of wound healing, and to ascertain whether these methods are significantly different. From April 2005 to December 2006, 40 patients (30 men, 5 women, 5 children) admitted to the surgical ward of Shree Sayaji General Hospital, Baroda, had clean as well as infected wounds following trauma, debridement, pressure sores, venous ulcers, and incision and drainage. Wound surface areas were measured by the three methods (linear, transparency, and photographic) simultaneously on alternate days. The linear method is statistically significantly different from the transparency and photographic methods (P value <0.05), but there is no significant difference between the transparency and photographic methods (P value >0.05). The photographic and transparency methods provided equivalent measurements of wound surface area, with no statistically significant difference between them.

  20. [Reconsidering children's dreams. A critical review of methods and results in developmental dream research from Freud to contemporary works].

    PubMed

    Sándor, Piroska; Bódizs, Róbert

    2014-01-01

    Examining children's dream development is a significant challenge for researchers. Results from studies on children's dreaming may enlighten us on the nature and role of dreaming as well as broaden our knowledge of consciousness and cognitive development. This review summarizes the main questions and historical progress in developmental dream research, with the aim of shedding light on the advantages, disadvantages and effects of different settings and methods on research outcomes. A typical example would be the dreams of 3 to 5 year-olds: they are simple and static, with a relative absence of emotions and active self participation according to laboratory studies; studies using different methodology however found them to be vivid, rich in emotions, with the self as an active participant. Questions about the validity of different methods arise, and are considered within this review. Given that methodological differences can result in highly divergent outcomes, it is strongly recommended for future research to select methodology and treat results more carefully.

  1. Evaluating fMRI methods for assessing hemispheric language dominance in healthy subjects.

    PubMed

    Baciu, Monica; Juphard, Alexandra; Cousin, Emilie; Bas, Jean François Le

    2005-08-01

    We evaluated two methods for quantifying hemispheric language dominance in healthy subjects, using a rhyme detection task (deciding whether a pair of words rhyme) and a word fluency task (generating words starting with a given letter). One of the methods, called the "flip method" (FM), was based on a direct statistical comparison between the hemispheres' activity. The second, the classical lateralization indices method (LIM), was based on calculating lateralization indices from the number of activated pixels within each hemisphere. The main difference between the methods is the statistical assessment of the inter-hemispheric difference: while FM shows whether the difference between the hemispheres' activity is statistically significant, LIM shows only whether there is a difference between hemispheres. The robustness of LIM and FM was assessed by calculating correlation coefficients between the LIs obtained with each of these methods and manual lateralization indices (MLI) obtained with the Edinburgh inventory. Our results showed significant correlation between the LIs provided by each method and the MLI, suggesting that both methods are robust for quantifying hemispheric dominance for language in healthy subjects. In the present study we also evaluated the effect of spatial normalization, smoothing and "clustering" (NSC) on the intra-hemispheric location of activated regions and the inter-hemispheric asymmetry of the activation. Our results showed that NSC did not affect the hemispheric specialization but increased the value of the inter-hemispheric difference.
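
    The abstract does not spell out LIM's exact formula; the commonly used activated-voxel-count form, assumed here, is LI = (L - R) / (L + R).

      def lateralization_index(n_left, n_right):
          # LI from activated-pixel counts per hemisphere: (L - R) / (L + R)
          return (n_left - n_right) / (n_left + n_right)

      print(lateralization_index(850, 430))  # > 0 suggests left-hemisphere dominance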

  2. Calculation of compressible boundary layer flow about airfoils by a finite element/finite difference method

    NASA Technical Reports Server (NTRS)

    Strong, Stuart L.; Meade, Andrew J., Jr.

    1992-01-01

    Preliminary results are presented of a finite element/finite difference method (semidiscrete Galerkin method) used to calculate compressible boundary layer flow about airfoils, in which the group finite element scheme is applied to the Dorodnitsyn formulation of the boundary layer equations. The semidiscrete Galerkin (SDG) method promises to be fast, accurate and computationally efficient. The SDG method can also be applied to any smoothly connected airfoil shape without modification and possesses the potential capability of calculating boundary layer solutions beyond flow separation. Results are presented for low speed laminar flow past a circular cylinder and past a NACA 0012 airfoil at zero angle of attack at a Mach number of 0.5. Also shown are results for compressible flow past a flat plate for a Mach number range of 0 to 10 and results for incompressible turbulent flow past a flat plate. All numerical solutions assume an attached boundary layer.

  3. Effects of phone versus mail survey methods on the measurement of health-related quality of life and emotional and behavioural problems in adolescents.

    PubMed

    Erhart, Michael; Wetzel, Ralf M; Krügel, André; Ravens-Sieberer, Ulrike

    2009-12-30

    Telephone interviews have become established as an alternative to traditional mail surveys for collecting epidemiological data in public health research. However, the use of telephone and mail surveys raises the question of to what extent the results of different data collection methods deviate from one another. We therefore set out to study possible differences in using telephone and mail survey methods to measure health-related quality of life and emotional and behavioural problems in children and adolescents. A total of 1700 German children aged 8-18 years and their parents were interviewed randomly either by telephone or by mail. Health-related quality of life (HRQoL) and mental health problems (MHP) were assessed using the KINDL-R quality of life instrument and the Strengths and Difficulties Questionnaire (SDQ), children's self-report and parent proxy report versions. Mean differences ("d" effect size) and differences in Cronbach alpha were examined across modes of administration. Pearson correlations between children's and parents' scores were calculated within a multi-trait-multi-method (MTMM) analysis and compared across survey modes using the Fisher-Z transformation. Telephone and mail survey methods resulted in similar completion rates and similar socio-demographic and socio-economic makeups of the samples. Telephone methods resulted in more positive self- and parent proxy reports of children's HRQoL (SMD <= 0.27) and MHP (SMD <= 0.32) on many scales. For the phone-administered KINDL, lower Cronbach alpha values (self/proxy Total: 0.79/0.84) were observed (mail survey self/proxy Total: 0.84/0.87). KINDL MTMM results were weaker for the phone surveys: mono-trait-multi-method mean r = 0.31 (mail: r = 0.45); multi-trait-mono-method mean (self/parents) r = 0.29/0.36 (mail: r = 0.34/0.40); multi-trait-multi-method mean r = 0.14 (mail: r = 0.21). Weaker MTMM results were also observed for the phone-administered SDQ: mono-trait-multi-method mean r = 0.32 (mail: r = 0.40); multi-trait-mono-method mean (self/parents) r = 0.24/0.30 (mail: r = 0.20/0.32); multi-trait-multi-method mean r = 0.14 (mail: r = 0.14). The SDQ classification into borderline and abnormal for some scales was affected by the method (OR = 0.36-1.55). The observed differences between phone and mail surveys are small but should be regarded as relevant in certain settings. Therefore, while both methods are valid, some changes are necessary: the weaker reliability and MTMM validity associated with phone methods necessitates improved phone adaptations of paper-and-pencil questionnaires. The effects of phone versus mail survey modes are partly different across constructs/measures.
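
    The Fisher-Z comparison of two independent correlations used for the mode comparisons reduces to a few lines; the r and n values in the example are illustrative, not the study's.

      import math

      def fisher_z_test(r1, n1, r2, n2):
          # z-transform each correlation, then compare against the standard normal
          z1, z2 = math.atanh(r1), math.atanh(r2)
          se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
          return (z1 - z2) / se

      print(fisher_z_test(0.31, 850, 0.45, 850))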

  4. Influence of ROI definition on the heart-to-mediastinum ratio in planar 123I-MIBG imaging.

    PubMed

    Klene, Christiane; Jungen, Christiane; Okuda, Koichi; Kobayashi, Yuske; Helberg, Annabelle; Mester, Janos; Meyer, Christian; Nakajima, Kenichi

    2018-02-01

Iodine-123-metaiodobenzylguanidine (123I-MIBG) imaging with estimation of the heart-to-mediastinum ratio (HMR) has been established for risk assessment in patients with chronic heart failure. Our aim was to evaluate the effect of different methods of ROI definition on the classification of HMR as indicating normal or decreased sympathetic innervation. The results of three different methods of ROI definition (clinical routine (CLI), simple standardization (STA), and semi-automated (AUT)) were compared. Ranges of the 95% limits of agreement (LoA) of inter-observer variability were 0.28 and 0.13 for STA and AUT, respectively. Considering an HMR of 1.60 as the lower limit of normal, 13 of 32 (41%) of all HMR measurements for method STA and 5 of 32 (16%) for method AUT could not be classified as normal or pathologic. Ranges of the 95% LoA of inter-method variability were 0.72 for CLI vs AUT, 0.65 for CLI vs STA, and 0.31 for STA vs AUT. Different methods of ROI definition result in different ranges of the LoA of the measured HMR, with consequences for classifying the results as normal or pathological innervation. We demonstrate that standardized protocols can help limit methodological variability, narrowing the gray zone in which no classification is possible.
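
    The inter-observer and inter-method figures quoted above are ranges of Bland-Altman 95% limits of agreement (LoA). A minimal sketch of that computation; the paired HMR readings are hypothetical, not the study's data:

    ```python
    import numpy as np

    def limits_of_agreement(hmr_a, hmr_b):
        """Bland-Altman bias, 95% LoA and LoA range for two series of paired readings."""
        diff = np.asarray(hmr_a, float) - np.asarray(hmr_b, float)
        bias = diff.mean()
        sd = diff.std(ddof=1)
        lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
        return bias, (lower, upper), upper - lower  # the range is what the abstract reports

    # Hypothetical paired observer readings:
    obs1 = [1.45, 1.72, 1.58, 1.91, 1.34, 1.66]
    obs2 = [1.50, 1.69, 1.62, 1.88, 1.39, 1.60]
    bias, loa, loa_range = limits_of_agreement(obs1, obs2)
    print(f"bias = {bias:.3f}, LoA = {loa[0]:.3f}..{loa[1]:.3f}, range = {loa_range:.2f}")
    ```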

  5. A novel method for calculating the dynamic capillary force and correcting the pressure error in micro-tube experiment.

    PubMed

    Wang, Shuoliang; Liu, Pengcheng; Zhao, Hui; Zhang, Yuan

    2017-11-29

The micro-tube experiment has been implemented to understand the mechanisms governing microscopic fluid percolation and is extensively used in both microelectromechanical engineering and petroleum engineering. However, the pressure difference measured in such an experiment is not equal to the actual pressure difference across the microtube. Taking into account the additional pressure losses between the outlet of the microtube and the outlet of the entire setup, we propose a new method for predicting the dynamic capillary pressure using the Level-set method. We first demonstrate that the Level-set method reliably describes microscopic flow by comparing micro-model flow-test results against its predictions. In the proposed approach, the Level-set method is applied to predict the pressure distribution along the microtube when the fluids flow through it at a given flow rate; the microtube used in the calculation has the same size as the one used in the experiment. From the simulation results, the pressure difference across a curved interface (i.e., the dynamic capillary pressure) can be directly obtained. We also show that the dynamic capillary force should be properly evaluated in the micro-tube experiment in order to obtain the actual pressure difference across the microtube.

  6. Diurnal temperature asymmetries and fog at Churchill, Manitoba

    NASA Astrophysics Data System (ADS)

    Gough, William A.; He, Dianze

    2015-07-01

A variety of methods are available to calculate daily mean temperature. We explore how the difference between two commonly used methods provides insight into the local climate of Churchill, Manitoba. In particular, we found that these differences related closely to seasonal fog. A strong statistically significant correlation was found between the fog frequency (hours per day) and the diurnal temperature asymmetries of the surface temperature using the difference between the min/max and 24-h methods of daily temperature calculation. The relationship was particularly strong for winter, spring and summer. Autumn appears to experience the joint effect of fog formation and the radiative effect of snow cover. The results of this study suggest that subtle variations in the diurnality of temperature, as measured by the difference between the two mean temperature calculation methods, may be used as a proxy for fog detection in the Hudson Bay region. These results also provide a cautionary note for the spatial analysis of mean temperatures using data derived from the two different methods, particularly in areas that are fog-prone.
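
    The asymmetry proxy used here is simply the difference between the two standard daily-mean definitions. A small sketch with a hypothetical hourly series:

    ```python
    import numpy as np

    def daily_mean_methods(hourly_temps):
        """Compare the min/max and 24-h methods of daily mean temperature."""
        t = np.asarray(hourly_temps, float)  # 24 hourly readings for one day
        mean_minmax = (t.min() + t.max()) / 2.0
        mean_24h = t.mean()
        return mean_minmax, mean_24h, mean_minmax - mean_24h  # difference = asymmetry proxy

    # Hypothetical day: a smooth diurnal curve (synthetic data, not station records)
    hours = np.arange(24)
    temps = 2.0 + 3.0 * np.sin((hours - 8) * np.pi / 24)
    mm, m24, asym = daily_mean_methods(temps)
    print(f"min/max = {mm:.2f} C, 24-h = {m24:.2f} C, difference = {asym:.2f} C")
    ```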

  7. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review.

    PubMed

    Mathes, Tim; Klaßen, Pauline; Pieper, Dawid

    2017-11-28

Our objective was to assess the frequency of data extraction errors and their potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, the Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted into standardized tables by one reviewer and verified by a second. The analysis included six studies: four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had a moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to gain deeper insights into the influence of different extraction methods.

  8. Comparison of the applicability of Demirjian and Willems methods for dental age estimation in children from the Thrace region, Turkey.

    PubMed

    Ozveren, N; Serindere, G

    2018-04-01

Dental age (DA) estimation is frequently used in the fields of orthodontics, paediatric dentistry and forensic science. DA estimation methods are based on radiographs and are reliable and non-destructive according to the literature. The Demirjian method is currently the most frequently used method, but recently, the Willems method was reported to have given results that were more accurate for some regions. The aim of this study was to detect and compare the accuracy of DA estimation methods for children and adolescents from the Thrace region, Turkey. The mean difference between the chronological age (CA) and the DA was selected as the primary outcome measure, and the difference range according to sex and age group was selected as the secondary outcome. Panoramic radiographs (n=766) from a Thrace region population (380 males and 386 females) ranging in age from 6 to 14.99 years old were evaluated. DA was calculated using both the Demirjian and the Willems methods. The mean CA of the subjects was 11.39±2.34 years (males=11.08±2.42 years and females=11.70±2.23 years). The mean difference values between the CA and the DA (CA-DA) using the Demirjian method and the Willems method were -0.87 and -0.17 for females, respectively, and -1.04 and -0.40 for males, respectively. For the different age groups, the differences between the CA and the DA calculated using the Demirjian method (CA-DA) ranged from -0.53 to -1.46 years for males and from -0.19 to -1.20 years for females, while the mean differences between the CA and the DA calculated by the Willems method (CA-DA) ranged from -0.19 to -0.50 years for males and from 0.20 to -0.49 years for females. The results suggest that the Willems method produced more accurate results for almost all age groups of both sexes and is better suited than the Demirjian method for children from the Thrace region of Turkey. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Improving the numerical integration solution of satellite orbits in the presence of solar radiation pressure using modified back differences

    NASA Technical Reports Server (NTRS)

    Lundberg, J. B.; Feulner, M. R.; Abusali, P. A. M.; Ho, C. S.

    1991-01-01

    The method of modified back differences, a technique that significantly reduces the numerical integration errors associated with crossing shadow boundaries using a fixed-mesh multistep integrator without a significant increase in computer run time, is presented. While Hubbard's integral approach can produce significant improvements to the trajectory solution, the interpolation method provides the best overall results. It is demonstrated that iterating on the point mass term correction is also important for achieving the best overall results. It is also shown that the method of modified back differences can be implemented with only a small increase in execution time.

  10. Image scanning fluorescence emission difference microscopy based on a detector array.

    PubMed

    Li, Y; Liu, S; Liu, D; Sun, S; Kuang, C; Ding, Z; Liu, X

    2017-06-01

We propose a novel imaging method that significantly enhances the three-dimensional resolution of confocal microscopy and, for the first time, experimentally achieves a new fluorescence emission difference method based on parallel detection with a detector array. Following the principles of photon reassignment in image scanning microscopy, the images captured by the detector array are rearranged; by selecting appropriate reassignment patterns, an imaging result with enhanced resolution can be achieved with the fluorescence emission difference method. Two specific methods are proposed in this paper, showing that the difference between an image scanning microscopy image and a confocal image achieves an improvement in transverse resolution of approximately 43% compared with confocal microscopy, while the axial resolution can also be enhanced by at least 22% experimentally and 35% theoretically. Moreover, the methods presented in this paper improve the lateral resolution by around 10% compared with fluorescence emission difference and by 15% compared with Airyscan. The mechanism of our methods is verified by numerical simulations and experimental results, and it has significant potential in biomedical applications. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
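
    The core FED step named above is a weighted subtraction of the confocal image from the photon-reassigned ISM image. A minimal sketch under stated assumptions: the subtraction weight gamma is a free parameter chosen here for illustration, and the 1-D point-spread profiles are synthetic:

    ```python
    import numpy as np

    def fed_image(ism_img, confocal_img, gamma=0.7):
        """Fluorescence emission difference: subtract a weighted confocal image
        from the (photon-reassigned) ISM image and clip negative values,
        which carry no physical meaning."""
        diff = ism_img.astype(float) - gamma * confocal_img.astype(float)
        return np.clip(diff, 0.0, None)

    # Hypothetical 1-D point images: ISM has a narrower effective PSF than confocal.
    x = np.linspace(-2, 2, 401)
    ism = np.exp(-(x / 0.35) ** 2)
    confocal = np.exp(-(x / 0.50) ** 2)
    result = fed_image(ism, confocal)  # narrower central peak than either input
    ```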

  11. An Investigation of the Overlap Between the Statistical Discrete Gust and the Power Spectral Density Analysis Methods

    NASA Technical Reports Server (NTRS)

    Perry, Boyd, III; Pototzky, Anthony S.; Woods, Jessica A.

    1989-01-01

The results of a NASA investigation of a claimed overlap between two gust response analysis methods, the Statistical Discrete Gust (SDG) method and the Power Spectral Density (PSD) method, are presented. The claim is that the ratio of an SDG response to the corresponding PSD response is 10.4. Analytical results presented for several different airplanes at several different flight conditions indicate that such an overlap does appear to exist. However, the claim was not met precisely: a scatter of up to about 10 percent about the 10.4 factor can be expected.

  12. Recovery of Bacillus Spore Contaminants from Rough Surfaces: a Challenge to Space Mission Cleanliness Control

    PubMed Central

    Probst, Alexander; Facius, Rainer; Wirth, Reinhard; Wolf, Marco; Moissl-Eichinger, Christine

    2011-01-01

Microbial contaminants on spacecraft can threaten the scientific integrity of space missions due to probable interference with life detection experiments. Therefore, space agencies measure the cultivable spore load (“bioburden”) of a spacecraft. A recent study has reported an insufficient recovery of Bacillus atrophaeus spores from Vectran fabric, a typical spacecraft airbag material (A. Probst, R. Facius, R. Wirth, and C. Moissl-Eichinger, Appl. Environ. Microbiol. 76:5148-5158, 2010). Here, 10 different sampling methods were compared for B. atrophaeus spore recovery from this rough textile, revealing significantly different efficiencies (0.5 to 15.4%). The most efficient method, based on the wipe-rinse technique (foam-spatula protocol; 13.2% efficiency), was then compared to the current European Space Agency (ESA) standard wipe assay in sampling four different kinds of spacecraft-related surfaces. Results indicate that the novel protocol outperformed the standard method, with an average efficiency of 41.1% compared to 13.9% for the standard method. Additional experiments were performed by sampling Vectran fabric seeded with seven different spore concentrations and five different Bacillus species (B. atrophaeus, B. anthracis Sterne, B. megaterium, B. thuringiensis, and B. safensis). Among these, B. atrophaeus spores were recovered with the highest (13.2%) efficiency and B. anthracis Sterne spores were recovered with the lowest (0.3%) efficiency. Different inoculation methods of seeding spores on test surfaces (spotting and aerosolization) resulted in different spore recovery efficiencies. The results of this study provide a step forward in understanding the spore distribution on and recovery from rough surfaces. The results presented will contribute relevant knowledge to the fields of astrobiology and B. anthracis research. PMID:21216908

  13. Reference interval computation: which method (not) to choose?

    PubMed

    Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C

    2012-07-11

When different methods are applied to reference interval (RI) calculation, the results can sometimes be substantially different, especially for small reference groups. If there are no reliable RI data available, there is no way to confirm which method generates results closest to the true RI. We randomly drew samples from a public database for 33 markers. For each sample, RIs were calculated by bootstrapping, parametric, and Box-Cox-transformed parametric methods. Results were compared to the values of the population RI. For approximately half of the 33 markers, the results of all 3 methods were within 3% of the true reference value. For the other markers, parametric results were either unavailable or deviated considerably from the true values. The transformed parametric method was more accurate than bootstrapping for a sample size of 60, very close to bootstrapping for a sample size of 120, but in some cases unavailable. We recommend against using parametric calculations to determine RIs. The transformed parametric method utilizing the Box-Cox transformation would be the preferable way of calculating RIs, provided the transformed data satisfy a normality test. If not, bootstrapping is always available and is almost as accurate and precise as the transformed parametric method. Copyright © 2012 Elsevier B.V. All rights reserved.
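
    A minimal sketch of the nonparametric bootstrap RI estimate discussed above, in one common variant (averaging the bootstrap estimates of the 2.5th and 97.5th percentiles); the reference data are synthetic:

    ```python
    import numpy as np

    def bootstrap_reference_interval(values, n_boot=5000, seed=0):
        """Nonparametric bootstrap estimate of the central 95% reference interval."""
        rng = np.random.default_rng(seed)
        values = np.asarray(values, float)
        lows, highs = [], []
        for _ in range(n_boot):
            resample = rng.choice(values, size=values.size, replace=True)
            lows.append(np.percentile(resample, 2.5))
            highs.append(np.percentile(resample, 97.5))
        return np.mean(lows), np.mean(highs)

    # Hypothetical reference group of 60 subjects:
    sample = np.random.default_rng(1).normal(loc=100.0, scale=10.0, size=60)
    print(bootstrap_reference_interval(sample))
    ```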

  14. Chemometrics Methods for Specificity, Authenticity and Traceability Analysis of Olive Oils: Principles, Classifications and Applications

    PubMed Central

    Messai, Habib; Farman, Muhammad; Sarraj-Laabidi, Abir; Hammami-Semmar, Asma; Semmar, Nabil

    2016-01-01

    Background. Olive oils (OOs) show high chemical variability due to several factors of genetic, environmental and anthropic types. Genetic and environmental factors are responsible for natural compositions and polymorphic diversification resulting in different varietal patterns and phenotypes. Anthropic factors, however, are at the origin of different blends’ preparation leading to normative, labelled or adulterated commercial products. Control of complex OO samples requires their (i) characterization by specific markers; (ii) authentication by fingerprint patterns; and (iii) monitoring by traceability analysis. Methods. These quality control and management aims require the use of several multivariate statistical tools: specificity highlighting requires ordination methods; authentication checking calls for classification and pattern recognition methods; traceability analysis implies the use of network-based approaches able to separate or extract mixed information and memorized signals from complex matrices. Results. This chapter presents a review of different chemometrics methods applied for the control of OO variability from metabolic and physical-chemical measured characteristics. The different chemometrics methods are illustrated by different study cases on monovarietal and blended OO originated from different countries. Conclusion. Chemometrics tools offer multiple ways for quantitative evaluations and qualitative control of complex chemical variability of OO in relation to several intrinsic and extrinsic factors. PMID:28231172

  15. A hybrid method for accurate star tracking using star sensor and gyros.

    PubMed

    Lu, Jiazhen; Yang, Lie; Zhang, Hao

    2017-10-01

    Star tracking is the primary operating mode of star sensors. To improve tracking accuracy and efficiency, a hybrid method using a star sensor and gyroscopes is proposed in this study. In this method, the dynamic conditions of an aircraft are determined first by the estimated angular acceleration. Under low dynamic conditions, the star sensor is used to measure the star vector and the vector difference method is adopted to estimate the current angular velocity. Under high dynamic conditions, the angular velocity is obtained by the calibrated gyros. The star position is predicted based on the estimated angular velocity and calibrated gyros using the star vector measurements. The results of the semi-physical experiment show that this hybrid method is accurate and feasible. In contrast with the star vector difference and gyro-assisted methods, the star position prediction result of the hybrid method is verified to be more accurate in two different cases under the given random noise of the star centroid.
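
    The vector difference idea, estimating angular velocity from two successive star vectors, can be sketched with an axis-angle construction; this is a generic illustration, and the paper's exact formulation may differ in detail:

    ```python
    import numpy as np

    def angular_velocity_from_vectors(v1, v2, dt):
        """Estimate angular velocity (rad/s) from two successive unit star vectors:
        rotation axis from the cross product, rotation angle from the dot product."""
        v1 = np.asarray(v1, float) / np.linalg.norm(v1)
        v2 = np.asarray(v2, float) / np.linalg.norm(v2)
        axis = np.cross(v1, v2)
        sin_theta = np.linalg.norm(axis)
        if sin_theta < 1e-12:                  # vectors coincide: no rotation seen
            return np.zeros(3)
        theta = np.arctan2(sin_theta, np.dot(v1, v2))  # robust for small angles
        return (theta / dt) * (axis / sin_theta)

    # Hypothetical star vector rotated 0.01 rad about z between frames 0.1 s apart:
    v1 = np.array([1.0, 0.0, 0.0])
    v2 = np.array([np.cos(0.01), np.sin(0.01), 0.0])
    print(angular_velocity_from_vectors(v1, v2, dt=0.1))  # approx. [0, 0, 0.1]
    ```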

  16. Effect of different cooking methods on total phenolic contents and antioxidant activities of four Boletus mushrooms.

    PubMed

    Sun, Liping; Bai, Xue; Zhuang, Yongliang

    2014-11-01

The influences of cooking methods (steaming, pressure-cooking, microwaving, frying and boiling) on the total phenolic contents and antioxidant activities of the fruit bodies of Boletus mushrooms (B. aereus, B. badius, B. pinophilus and B. edulis) have been evaluated. The results showed that microwaving was better at retaining total phenolics than the other cooking methods, while boiling significantly decreased the total phenolic contents of the samples under study. The effects of the different cooking methods on the phenolic acid profiles of the Boletus mushrooms varied with both the mushroom species and the cooking method. The effects of the cooking treatments on the antioxidant activities of the mushrooms were evaluated by in vitro assays of hydroxyl radical (OH·)-scavenging activity, reducing power and 1,1-diphenyl-2-picrylhydrazyl radical (DPPH·)-scavenging activity. The results indicated that the changes in antioxidant activities of the four Boletus mushrooms differed across the five cooking methods. This study could provide information to encourage the food industry to recommend particular cooking methods.

  17. The comparative analysis of the current-meter method and the pressure-time method used for discharge measurements in the Kaplan turbine penstocks

    NASA Astrophysics Data System (ADS)

    Adamkowski, A.; Krzemianowski, Z.

    2012-11-01

The paper presents experiences gathered during many years of utilizing the current-meter and pressure-time methods for flow rate measurements in many hydropower plants. The integration techniques used in both of these methods differ from the recommendations contained in the relevant international standards, mainly from the graphical and arithmetical ones. The results of a comparative analysis of both methods, applied at the same time during the hydraulic performance tests of two Kaplan turbines in one of the Polish hydropower plants, are presented in the final part of the paper. In the case of the pressure-time method, the concrete penstocks of the tested turbines required installing special measuring instrumentation inside the penstock. The comparison has shown satisfactory agreement between the results of discharge measurements executed using both methods: maximum differences between the discharge values did not exceed 1.0%, and the average differences were not greater than 0.5%.
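
    For reference, the pressure-time (Gibson) method recovers the initial discharge by integrating the measured pressure difference over the valve-closure interval, Q = (A / ρL) ∫ Δp dt plus a leakage term. A simplified sketch that omits the friction and dynamic-pressure corrections a real test would include; all numbers are hypothetical:

    ```python
    import numpy as np

    def pressure_time_discharge(dp, t, area, length, rho=1000.0, q_leak=0.0):
        """Simplified pressure-time (Gibson) estimate of the initial discharge:
        integrate the pressure difference over the closure interval (Pa*s),
        scale by pipe area (m^2) over rho*length, add any leakage flow."""
        integral = np.trapz(dp, t)
        return (area / (rho * length)) * integral + q_leak  # m^3/s

    # Hypothetical closure transient: 10 s record with a triangular pressure pulse
    t = np.linspace(0.0, 10.0, 1001)
    dp = np.where(t < 5.0, 4000.0 * t / 5.0, 4000.0 * (10.0 - t) / 5.0)
    print(pressure_time_discharge(dp, t, area=3.0, length=25.0))  # ~2.4 m^3/s
    ```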

  18. A real-time TV logo tracking method using template matching

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Sang, Xinzhu; Yan, Binbin; Leng, Junmin

    2012-11-01

A fast and accurate TV logo detection method is presented based on real-time image filtering, noise elimination and recognition of image features including edge and gray-level information. It is important to accurately extract the logo template from the sample video stream using the time-averaging method; different templates are then used to match different logos in separate video streams with different resolutions, based on the topological features of the logos. Twelve video streams with different logos were used to verify the proposed method, and the experimental results demonstrate that the achieved accuracy can be up to 99%.
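
    Template matching of this kind is commonly implemented with normalized cross-correlation; a minimal OpenCV sketch (the file names and the 0.8 threshold are illustrative, not values from the paper):

    ```python
    import cv2

    def detect_logo(frame_gray, template_gray, threshold=0.8):
        """Locate a TV logo with normalized cross-correlation template matching.
        Returns the top-left corner of the best match, or None below the threshold."""
        result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        return max_loc if max_val >= threshold else None

    # Hypothetical usage on one frame of a stream:
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
    logo = cv2.imread("logo_template.png", cv2.IMREAD_GRAYSCALE)
    print(detect_logo(frame, logo))
    ```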

  19. Aggregation of Sentinel-2 time series classifications as a solution for multitemporal analysis

    NASA Astrophysics Data System (ADS)

    Lewiński, Stanislaw; Nowakowski, Artur; Malinowski, Radek; Rybicki, Marcin; Kukawska, Ewa; Krupiński, Michał

    2017-10-01

The general aim of this work was to develop an efficient and reliable aggregation method that could be used for creating a land cover map at a global scale from multitemporal satellite imagery. The study described in this paper presents methods for combining the results of land cover/land use classifications performed on single-date Sentinel-2 images acquired at different time periods. For that purpose, different aggregation methods were proposed and tested on study sites spread across different continents. The initial classifications were performed with a Random Forest classifier on individual Sentinel-2 images from a time series. In the following step, the resulting land cover maps were aggregated pixel by pixel using three different combinations of information on the number of occurrences of a certain land cover class within a time series and the posterior probability of particular classes resulting from the Random Forest classification. Of the proposed methods, two proved superior and in most cases were able to reach or outperform the accuracy of the best individual classifications of single-date images. Moreover, the aggregation results are very stable when used on data with varying cloudiness. They also considerably reduce the number of cloudy pixels in the resulting land cover map, which is a significant advantage for mapping areas with frequent cloud cover.
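
    One plausible reading of such an aggregation rule, combining class occurrence counts with Random Forest posterior probabilities, is a probability-weighted vote; the paper tested three variants, and this sketch shows only one possible form with synthetic inputs:

    ```python
    import numpy as np

    def aggregate_timeseries(labels, probs):
        """Aggregate per-date classifications pixel by pixel: for each candidate
        class, sum the posterior probabilities over the dates on which it was
        predicted, and keep the class with the largest sum.
        labels: (T, H, W) int class maps; probs: (T, H, W) posterior of the
        winning class on each date. Cloudy pixels can be masked with label -1,
        which never matches a class index and so contributes nothing."""
        n_classes = labels.max() + 1
        scores = np.zeros((n_classes,) + labels.shape[1:], float)
        for c in range(n_classes):
            scores[c] = np.where(labels == c, probs, 0.0).sum(axis=0)
        return scores.argmax(axis=0)

    # Hypothetical 3-date, 2x2-pixel example:
    labels = np.array([[[0, 1], [1, 1]], [[0, 1], [0, 1]], [[2, 1], [0, 0]]])
    probs = np.array([[[0.9, 0.6], [0.5, 0.7]],
                      [[0.8, 0.9], [0.6, 0.8]],
                      [[0.4, 0.5], [0.7, 0.3]]])
    print(aggregate_timeseries(labels, probs))  # [[0, 1], [0, 1]]
    ```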

  20. Metrological characterization of X-ray diffraction methods at different acquisition geometries for determination of crystallite size in nano-scale materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uvarov, Vladimir, E-mail: vladimiru@savion.huji.ac.il; Popov, Inna

    2013-11-15

Crystallite size values were determined by X-ray diffraction methods for 183 powder samples. The tested size range was from a few to several hundred nanometers. Crystallite size was calculated with direct use of the Scherrer equation, the Williamson–Hall method and the Rietveld procedure via the application of a series of commercial and free software packages. The results were statistically treated to estimate the significance of the differences in size resulting from these methods. We also estimated the effect of acquisition conditions (Bragg–Brentano and parallel-beam geometry, step size, counting time) and data processing on the calculated crystallite size values. On the basis of the obtained results, it is possible to conclude that direct use of the Scherrer equation, the Williamson–Hall method and the Rietveld refinement employed by a series of software packages (EVA, PCW and TOPAS, respectively) yield very close results for crystallite sizes less than 60 nm for parallel-beam geometry and less than 100 nm for Bragg–Brentano geometry. However, we found that although the differences between the crystallite sizes calculated by the various methods are small in absolute value, they are statistically significant in some cases. The crystallite size values determined from XRD were compared with those obtained by imaging in transmission (TEM) and scanning electron microscopes (SEM). A good correlation in size was found only for crystallites smaller than 50–60 nm. Highlights: • The crystallite sizes of 183 nanopowders were calculated using different XRD methods. • The obtained results were subject to statistical treatment. • Results obtained with Bragg–Brentano and parallel-beam geometries were compared. • The influence of XRD pattern acquisition conditions on the results was estimated. • Crystallite sizes calculated by XRD were compared with those obtained by TEM and SEM.
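
    The Scherrer equation used directly here is D = Kλ / (β cos θ). A minimal sketch with the usual Cu K-alpha wavelength and K = 0.9 defaults; the peak values are illustrative:

    ```python
    import math

    def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
        """Crystallite size D = K*lambda / (beta*cos(theta)), with beta the peak
        FWHM in radians (instrumental broadening assumed already removed)."""
        beta = math.radians(fwhm_deg)
        theta = math.radians(two_theta_deg / 2.0)
        return k * wavelength_nm / (beta * math.cos(theta))  # nm

    # Hypothetical peak: FWHM of 0.25 deg at 2-theta = 38.2 deg
    print(f"D = {scherrer_size(0.25, 38.2):.1f} nm")  # ~34 nm
    ```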

  1. Analysis of full disc Ca II K spectroheliograms. I. Photometric calibration and centre-to-limb variation compensation

    NASA Astrophysics Data System (ADS)

    Chatzistergos, Theodosios; Ermolli, Ilaria; Solanki, Sami K.; Krivova, Natalie A.

    2018-01-01

    Context. Historical Ca II K spectroheliograms (SHG) are unique in representing long-term variations of the solar chromospheric magnetic field. They usually suffer from numerous problems and lack photometric calibration. Thus accurate processing of these data is required to get meaningful results from their analysis. Aims: In this paper we aim at developing an automatic processing and photometric calibration method that provides precise and consistent results when applied to historical SHG. Methods: The proposed method is based on the assumption that the centre-to-limb variation of the intensity in quiet Sun regions does not vary with time. We tested the accuracy of the proposed method on various sets of synthetic images that mimic problems encountered in historical observations. We also tested our approach on a large sample of images randomly extracted from seven different SHG archives. Results: The tests carried out on the synthetic data show that the maximum relative errors of the method are generally <6.5%, while the average error is <1%, even if rather poor quality observations are considered. In the absence of strong artefacts the method returns images that differ from the ideal ones by <2% in any pixel. The method gives consistent values for both plage and network areas. We also show that our method returns consistent results for images from different SHG archives. Conclusions: Our tests show that the proposed method is more accurate than other methods presented in the literature. Our method can also be applied to process images from photographic archives of solar observations at other wavelengths than Ca II K.

  2. Comparative Results of Using Different Methods for Discovery of Microorganisms in Very Ancient Layers of the Central Antarctic Glacier above Lake Vostok

    NASA Technical Reports Server (NTRS)

    Abyzov, S. S.; Hoover, R. B.; Imura, S.; Mitskevich, I. N.; Naganuma, T.; Poglazova, M. N.; Ivanov, M. V.

    2002-01-01

The ice sheet of the Central Antarctic is considered by the scientific community worldwide as a model for elaborating different methods to search for life outside Earth. This became especially significant in connection with the discovery of the subglacial lake in the vicinity of the Russian Antarctic Station Vostok. Lake Vostok is considered by many scientists as an analog of the ice-covered seas of Jupiter's satellite Europa. According to the opinion of many researchers, there is the possibility that relict forms of microorganisms, well preserved since the Ice Age, may be present in this lake. Investigations throughout the thickness of the ice sheet above Lake Vostok show the presence of microorganisms belonging to different well-known taxonomic groups, even in the very ancient horizons near the floor of the glacier. Different methods were used to search for microorganisms that are rarely found in the deep ancient layers of an ice sheet. The method of aseptic sampling from the ice cores, and the maintenance of controlled sterile conditions at all stages of these investigations, are described in detail in previous reports. Primary investigations tried the usual methods of sowing samples onto different nutrient media, with the result that only a few microorganisms grew on the media used. Isolating the organisms obtained for further investigation with modern methods, including DNA analysis, appears to be the preferred approach. Further investigations of the very ancient layers of the ice sheet by radioisotopic, luminescence, and scanning electron microscopy methods in different modifications revealed the quantity and morphological diversity of the microbial cells distributed across the different horizons. Investigations over many years of the microflora in the very ancient strata of the Antarctic ice cover nearest to the bedrock support the effectiveness of using a combination of different methods to search for signs of life in ancient icy formations, which might play a role in the long-term preservation and transportation of microbial life throughout the Universe.

  3. Comparing Performance of Methods to Deal with Differential Attrition in Lottery Based Evaluations

    ERIC Educational Resources Information Center

    Zamarro, Gema; Anderson, Kaitlin; Steele, Jennifer; Miller, Trey

    2016-01-01

The purpose of this study is to examine the performance of different methods (inverse probability weighting and estimation of informative bounds) for controlling differential attrition, by comparing their results on two datasets: an original dataset from Portland Public Schools (PPS) subject to high rates of differential…

  4. EIT Imaging of admittivities with a D-bar method and spatial prior: experimental results for absolute and difference imaging.

    PubMed

    Hamilton, S J

    2017-05-22

Electrical impedance tomography (EIT) is an emerging imaging modality that uses harmless electrical measurements taken on electrodes at a body's surface to recover information about the internal electrical conductivity and/or permittivity. The image reconstruction task of EIT is a highly nonlinear inverse problem that is sensitive to noise and modeling errors, which makes it challenging. D-bar methods solve the nonlinear problem directly, bypassing the need for detailed and time-intensive forward models, to provide absolute (static) as well as time-difference EIT images. Coupling the D-bar methodology with the inclusion of high-confidence a priori data results in a noise-robust regularized image reconstruction method. In this work, the a priori D-bar method for complex admittivities is demonstrated to be effective on experimental tank data for absolute imaging for the first time. Additionally, the method is adjusted for, and tested on, time-difference imaging scenarios. The ability of the method to be used for conductivity, permittivity, absolute as well as time-difference imaging provides the user with great flexibility without a high computational cost.

  5. Investigating the Importance of the Pocket-estimation Method in Pocket-based Approaches: An Illustration Using Pocket-ligand Classification.

    PubMed

    Caumes, Géraldine; Borrel, Alexandre; Abi Hussein, Hiba; Camproux, Anne-Claude; Regad, Leslie

    2017-09-01

Small molecules interact with their protein target on surface cavities known as binding pockets. Pocket-based approaches are very useful in all of the phases of drug design. Their first step is estimating the binding pocket based on the protein structure. The available pocket-estimation methods produce different pockets for the same target. The aim of this work is to investigate the effects of different pocket-estimation methods on the results of pocket-based approaches. We focused on the effect of three pocket-estimation methods on a pocket-ligand (PL) classification. This pocket-based approach is useful for understanding the correspondence between the pocket and ligand spaces and for developing pharmacological profiling models. We found that pocket-estimation methods yield different binding pockets in terms of boundaries and properties. These differences are responsible for the variation in the PL classification results, which can have an impact on the detected correspondence between pocket and ligand profiles. Thus, we highlighted the importance of the choice of pocket-estimation method in pocket-based approaches. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. [Determination of metals in waste bag filter of steel works by microwave digestion-flame atomic absorption spectrometry].

    PubMed

    Ning, Xun-An; Zhou, Yun; Liu, Jing-Yong; Wang, Jiang-Hui; Li, Lei; Ma, Xiao-Guo

    2011-09-01

A microwave digestion-flame atomic absorption spectrometry method was proposed to determine the total contents of Cu, Zn, Pb, Cd, Cr and Ni in five different kinds of waste bag filters from a steel plant. The effects of six acid systems on the digestion of the heavy metals were studied for the first time. The relative standard deviation (RSD) of the method was between 1.02% and 9.35%, and the recovery rates obtained by the standard addition method ranged from 87.7% to 105.6%. The results indicated that the proposed method exhibited the advantages of simplicity, speed, accuracy and repeatability, and was suitable for determining the metal elements in waste bag filters. The results also showed that different digestion systems should be used for different waste bag filters. Waste bag filter samples from different production processes had different metal element contents: Pb and Zn were the highest, while Cu, Ni, Cd and Cr were relatively low. These results provide scientific data for the further treatment and disposal of waste bag filters.

  7. A comparison of five partial volume correction methods for Tau and Amyloid PET imaging with [18F]THK5351 and [11C]PIB.

    PubMed

    Shidahara, Miho; Thomas, Benjamin A; Okamura, Nobuyuki; Ibaraki, Masanobu; Matsubara, Keisuke; Oyama, Senri; Ishikawa, Yoichi; Watanuki, Shoichi; Iwata, Ren; Furumoto, Shozo; Tashiro, Manabu; Yanai, Kazuhiko; Gonda, Kohsuke; Watabe, Hiroshi

    2017-08-01

Many algorithms have been proposed to suppress the partial volume effect (PVE) in brain PET. However, each methodology has different properties due to its assumptions and algorithms. The aim of this study was to investigate the differences among partial volume correction (PVC) methods for tau and amyloid PET studies. We investigated two of the most commonly used PVC methods, Müller-Gärtner (MG) and geometric transfer matrix (GTM), and three other methods for clinical tau and amyloid PET imaging. PET studies of one healthy control (HC) and one Alzheimer's disease (AD) patient were performed with both [18F]THK5351 and [11C]PIB using an Eminence STARGATE scanner (Shimadzu Inc., Kyoto, Japan). All PET images were corrected for PVE by the MG, GTM, Labbé (LABBE), regional voxel-based (RBV), and iterative Yang (IY) methods, with segmented or parcellated anatomical information derived from individual MR images processed by FreeSurfer. The PVC results of the 5 algorithms were compared with the uncorrected data. In regions of high uptake of [18F]THK5351 and [11C]PIB, different PVCs yielded different SUVRs. The degree of difference between PVE-uncorrected and corrected data depends not only on the PVC algorithm but also on the tracer and the subject's condition. The presented PVC methods are straightforward to implement, but the corrected images require careful interpretation, as different methods result in different levels of recovery.

  8. A Method of Time-Series Change Detection Using Full PolSAR Images from Different Sensors

    NASA Astrophysics Data System (ADS)

    Liu, W.; Yang, J.; Zhao, J.; Shi, H.; Yang, L.

    2018-04-01

Most existing change detection methods using full polarimetric synthetic aperture radar (PolSAR) are limited to detecting change between two points in time. In this paper, a novel method is proposed to detect change based on time-series data from different sensors. First, the overall difference image of a time-series PolSAR stack is calculated by an omnibus statistical test. Second, difference images between any two images at different times are acquired by the Rj statistical test. In the last step, a generalized Gaussian mixture model (GGMM) is used to obtain time-series change detection maps. To verify the effectiveness of the proposed method, we carried out change detection experiments using time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, China. Results show that the proposed method can detect time-series change from different sensors.

  9. Analytical investigation of different mathematical approaches utilizing manipulation of ratio spectra

    NASA Astrophysics Data System (ADS)

    Osman, Essam Eldin A.

    2018-01-01

This work presents a comparative study of different approaches to manipulating ratio spectra, applied to a binary mixture of ciprofloxacin HCl and dexamethasone sodium phosphate co-formulated as ear drops. The proposed new spectrophotometric methods are: the ratio difference spectrophotometric method (RDSM), the amplitude center method (ACM), the first derivative of the ratio spectra (1DD) and mean centering of ratio spectra (MCR). The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of a pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitations and sensitivity. The obtained results were statistically compared with those obtained from the reported HPLC method, showing no significant difference with respect to accuracy and precision.
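
    The ratio-spectra manipulations compared here share one core step: dividing the mixture spectrum by the spectrum of one component so that its contribution becomes a constant. A minimal sketch of the ratio difference idea (RDSM) with synthetic Gaussian bands; wavelengths, band shapes and concentrations are all hypothetical:

    ```python
    import numpy as np

    def ratio_difference(mix_abs, divisor_abs, idx1, idx2):
        """RDSM sketch: divide the mixture spectrum by the unit spectrum of the
        interfering component; its contribution becomes a constant in the ratio
        spectrum, so the amplitude difference between two wavelengths cancels it
        and is proportional to the analyte concentration."""
        ratio = np.asarray(mix_abs, float) / np.asarray(divisor_abs, float)
        return ratio[idx1] - ratio[idx2]

    wl = np.linspace(220, 320, 501)
    x_band = np.exp(-((wl - 250) / 12.0) ** 2)   # unit spectrum of analyte X
    y_band = np.exp(-((wl - 285) / 15.0) ** 2)   # unit spectrum of interferent Y (divisor)
    mixture = 0.8 * x_band + 0.5 * y_band
    signal = ratio_difference(mixture, y_band, idx1=150, idx2=400)  # wl = 250 and 300 nm
    # signal scales linearly with the 0.8 concentration factor of X; the 0.5 of Y cancels
    ```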

  10. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys

    PubMed Central

    Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis. PMID:26125967

  12. Trend Change Detection in NDVI Time Series: Effects of Inter-Annual Variability and Methodology

    NASA Technical Reports Server (NTRS)

    Forkel, Matthias; Carvalhais, Nuno; Verbesselt, Jan; Mahecha, Miguel D.; Neigh, Christopher S.R.; Reichstein, Markus

    2013-01-01

    Changing trends in ecosystem productivity can be quantified using satellite observations of Normalized Difference Vegetation Index (NDVI). However, the estimation of trends from NDVI time series differs substantially depending on analyzed satellite dataset, the corresponding spatiotemporal resolution, and the applied statistical method. Here we compare the performance of a wide range of trend estimation methods and demonstrate that performance decreases with increasing inter-annual variability in the NDVI time series. Trend slope estimates based on annual aggregated time series or based on a seasonal-trend model show better performances than methods that remove the seasonal cycle of the time series. A breakpoint detection analysis reveals that an overestimation of breakpoints in NDVI trends can result in wrong or even opposite trend estimates. Based on our results, we give practical recommendations for the application of trend methods on long-term NDVI time series. Particularly, we apply and compare different methods on NDVI time series in Alaska, where both greening and browning trends have been previously observed. Here, the multi-method uncertainty of NDVI trends is quantified through the application of the different trend estimation methods. Our results indicate that greening NDVI trends in Alaska are more spatially and temporally prevalent than browning trends. We also show that detected breakpoints in NDVI trends tend to coincide with large fires. Overall, our analyses demonstrate that seasonal trend methods need to be improved against inter-annual variability to quantify changing trends in ecosystem productivity with higher accuracy.
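
    One of the better-performing strategies reported, fitting the trend on an annually aggregated series, can be sketched as follows; the monthly NDVI series is synthetic:

    ```python
    import numpy as np

    def annual_ndvi_trend(ndvi, obs_per_year=12):
        """Trend slope on an annually aggregated NDVI series: average the
        observations within each year, then fit an ordinary least-squares line
        against the year index. Returns NDVI units per year."""
        ndvi = np.asarray(ndvi, float)
        n_years = ndvi.size // obs_per_year
        annual = ndvi[: n_years * obs_per_year].reshape(n_years, obs_per_year).mean(axis=1)
        slope, _ = np.polyfit(np.arange(n_years), annual, deg=1)
        return slope

    # Hypothetical 20-year monthly series: seasonal cycle + weak greening trend + noise
    rng = np.random.default_rng(0)
    months = np.arange(240)
    series = (0.4 + 0.2 * np.sin(2 * np.pi * months / 12)
              + 0.002 * (months / 12) + rng.normal(0, 0.03, months.size))
    print(f"slope = {annual_ndvi_trend(series):.4f} per year")  # ~0.002
    ```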

  13. Assessing muscular oxygenation during incremental exercise using near-infrared spectroscopy: comparison of three different methods.

    PubMed

    Agbangla, N F; Audiffren, M; Albinet, C T

    2017-12-20

Using continuous-wave near-infrared spectroscopy (NIRS), this study compared three different methods, namely the slope method (SM), the amplitude method (AM), and the area under the curve (AUC) method, to determine the variations of intramuscular oxygenation level as a function of workload. Ten right-handed subjects (22 ± 4 years) performed one isometric contraction at each of three different workloads (30%, 50% and 90% of maximal voluntary strength) during a period of twenty seconds. Changes in oxyhemoglobin (Δ[HbO2]) and deoxyhemoglobin (Δ[HHb]) concentrations in the superficial flexor of the fingers were recorded using continuous-wave NIRS. The results showed a strong consistency between the three methods, with standardized Cronbach alphas of 0.87 for Δ[HHb] and 0.95 for Δ[HbO2]. No significant differences between the three methods were observed concerning Δ[HHb] as a function of workload. However, only the SM showed sufficient sensitivity to detect a significant decrease in Δ[HbO2] between 30% and 50% of workload (p < 0.01). Among these three methods, the SM appeared to be the only method that was well adapted and sensitive enough to determine slight changes in Δ[HbO2]. Theoretical and methodological implications of these results are discussed.
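
    The slope method (SM) singled out here amounts to a straight-line fit of the oxygenation signal over the contraction window. A minimal sketch with a synthetic desaturation ramp:

    ```python
    import numpy as np

    def nirs_slope(signal, t, t_start, t_end):
        """Slope method (SM) sketch: fit a straight line to the oxygenation
        signal over the contraction window and report its slope, which tracks
        the rate of change of the HbO2 or HHb concentration."""
        mask = (t >= t_start) & (t <= t_end)
        slope, _ = np.polyfit(t[mask], signal[mask], deg=1)
        return slope  # concentration units per second

    # Hypothetical 20-s contraction sampled at 10 Hz with a desaturation ramp:
    t = np.linspace(0, 20, 201)
    hbo2 = -0.05 * t + np.random.default_rng(2).normal(0, 0.02, t.size)
    print(f"slope = {nirs_slope(hbo2, t, 0, 20):.3f} units/s")  # ~ -0.05
    ```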

  14. Hands-Off and Hands-On Casting Consistency of Amputee below Knee Sockets Using Magnetic Resonance Imaging

    PubMed Central

Safari, Mohammad Reza; Rowe, Philip; McFadyen, Angus; Buis, Arjan

    2013-01-01

Residual limb shape capturing (Casting) consistency has a great influence on the quality of socket fit. Magnetic Resonance Imaging was used to establish a reliable reference grid for intercast and intracast shape and volume consistency of two common casting methods, Hands-off and Hands-on. Residual limbs were cast for twelve people with a unilateral below knee amputation and scanned twice for each casting concept. Subsequently, all four volume images of each amputee were semiautomatically segmented and registered to a common coordinate system using the tibia and then the shape and volume differences were calculated. The results show that both casting methods have intracast volume consistency and there is no significant volume difference between the two methods. Inter- and intracast mean volume differences were not clinically significant based on the volume of one sock criteria. Neither the Hands-off nor the Hands-on method resulted in a consistent residual limb shape as the coefficient of variation of shape differences was high. The resultant shape of the residual limb in the Hands-off casting was variable but the differences were not clinically significant. For the Hands-on casting, shape differences were equal to the maximum acceptable limit for a poor socket fit. PMID:24348164

  16. Quantifying the quality of medical x-ray images: An evaluation based on normal anatomy for lumbar spine and chest radiography

    NASA Astrophysics Data System (ADS)

    Tingberg, Anders Martin

Optimisation in diagnostic radiology requires accurate methods for determination of patient absorbed dose and clinical image quality. Simple methods for evaluation of clinical image quality are at present scarce and this project aims at developing such methods. Two methods are used and further developed: fulfillment of image criteria (IC) and visual grading analysis (VGA). Clinical image quality descriptors are defined based on these two methods: image criteria score (ICS) and visual grading analysis score (VGAS), respectively. For both methods the basis is the Image Criteria of the "European Guidelines on Quality Criteria for Diagnostic Radiographic Images". Both methods have proved to be useful for evaluation of clinical image quality. The two methods complement each other: IC is an absolute method, which means that the quality of images of different patients and produced with different radiographic techniques can be compared with each other. The separating power of IC is, however, weaker than that of VGA. VGA is the best method for comparing images produced with different radiographic techniques and has strong separating power, but the results are relative, since the quality of an image is compared to the quality of a reference image. The usefulness of the two methods has been verified by comparing the results from both of them with results from a generally accepted method for evaluation of clinical image quality, receiver operating characteristics (ROC). The results of the comparison between the two methods based on visibility of anatomical structures and the method based on detection of pathological structures (free-response forced error) indicate that the former two methods can be used for evaluation of clinical image quality as efficiently as the method based on ROC. More studies are, however, needed for us to be able to draw a general conclusion, including studies of other organs, using other radiographic techniques, etc. The results of the experimental evaluation of clinical image quality are compared with physical quantities calculated with a theoretical model based on a voxel phantom, and correlations are found. The results demonstrate that the computer model can be a useful tool in planning further experimental studies.

  17. Wavefront reconstruction for multi-lateral shearing interferometry using difference Zernike polynomials fitting

    NASA Astrophysics Data System (ADS)

    Liu, Ke; Wang, Jiannian; Wang, Hai; Li, Yanqiu

    2018-07-01

For multi-lateral shearing interferometers (multi-LSIs), the measurement accuracy can be enhanced by estimating the wavefront under test with the multidirectional phase information encoded in the shearing interferogram. Usually, multi-LSIs reconstruct the test wavefront from the phase derivatives in multiple directions using the discrete Fourier transform (DFT) method, which is only suitable for small shear ratios and is relatively sensitive to noise. To improve the accuracy of multi-LSIs, wavefront reconstruction from the multidirectional phase differences using the difference Zernike polynomials fitting (DZPF) method is proposed in this paper. For the DZPF method applied in the quadriwave LSI, difference Zernike polynomials in only two orthogonal shear directions are required to represent the phase differences in multiple shear directions. In this way, the test wavefront can be reconstructed from the phase differences in multiple shear directions using a noise-variance weighted least-squares method with almost no extra computational burden, compared with the usual recovery from the phase differences in two orthogonal directions. Numerical simulation results show that the DZPF method can maintain high reconstruction accuracy in a wider range of shear ratios and has much better anti-noise performance than the DFT method. A null test experiment of the quadriwave LSI has been conducted and the experimental results show that the measurement accuracy of the quadriwave LSI can be improved from 0.0054 λ rms to 0.0029 λ rms (λ = 632.8 nm) by substituting the DFT method with the proposed DZPF method in the wavefront reconstruction process.
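
    The DZPF idea can be sketched generically: evaluate a Zernike basis at the sheared and unsheared grid positions, form the difference basis in the two orthogonal shear directions, and solve one least-squares system for the coefficients. The five unnormalized terms below are illustrative, not the paper's basis, and the unweighted least squares stands in for the paper's noise-variance weighted version:

    ```python
    import numpy as np

    def zernike_basis(x, y):
        """A few low-order, unnormalized Zernike-like terms on a unit pupil."""
        r2 = x**2 + y**2
        return np.stack([2*x*y,            # oblique astigmatism
                         2*r2 - 1,         # defocus
                         x**2 - y**2,      # vertical astigmatism
                         3*x*r2 - 2*x,     # x coma
                         3*y*r2 - 2*y],    # y coma
                        axis=-1)

    def fit_from_shears(dwx, dwy, x, y, s):
        """Build the sheared-difference basis in two orthogonal directions and
        solve a single least-squares problem for the wavefront coefficients."""
        dx_basis = zernike_basis(x + s, y) - zernike_basis(x, y)
        dy_basis = zernike_basis(x, y + s) - zernike_basis(x, y)
        a = np.vstack([dx_basis.reshape(-1, 5), dy_basis.reshape(-1, 5)])
        b = np.concatenate([dwx.ravel(), dwy.ravel()])
        coeffs, *_ = np.linalg.lstsq(a, b, rcond=None)
        return coeffs

    # Hypothetical wavefront with known coefficients, shear s = 0.1:
    g = np.linspace(-0.7, 0.7, 41)
    x, y = np.meshgrid(g, g)
    true_c = np.array([0.05, 0.20, -0.10, 0.03, 0.00])
    w = lambda xx, yy: zernike_basis(xx, yy) @ true_c
    s = 0.1
    dwx, dwy = w(x + s, y) - w(x, y), w(x, y + s) - w(x, y)
    print(fit_from_shears(dwx, dwy, x, y, s))  # recovers true_c
    ```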

  18. Are LOD and LOQ Reliable Parameters for Sensitivity Evaluation of Spectroscopic Methods?

    PubMed

    Ershadi, Saba; Shayanfar, Ali

    2018-03-22

The limit of detection (LOD) and the limit of quantification (LOQ) are common parameters to assess the sensitivity of analytical methods. In this study, the LOD and LOQ of previously reported terbium-sensitized analysis methods were calculated by different methods, and the results were compared with the sensitivity parameter [lower limit of quantification (LLOQ)] of the U.S. Food and Drug Administration guidelines. The details of the calibration curve and the standard deviation of blank samples of three different terbium-sensitized luminescence methods for the quantification of mycophenolic acid, enrofloxacin, and silibinin were used for the calculation of LOD and LOQ. A comparison of the LOD and LOQ values calculated by the various methods with the LLOQ shows considerable differences. The significant difference between the LOD and LOQ values calculated by various methods and the LLOQ should be considered in the sensitivity evaluation of spectroscopic methods.
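
    For context, the most common calculation behind these parameters is LOD = 3.3σ/S and LOQ = 10σ/S, with σ from blank replicates and S the calibration slope; variants differ mainly in how σ is estimated, which is one source of the discrepancies discussed. A minimal sketch with hypothetical data:

    ```python
    import numpy as np

    def lod_loq_from_calibration(conc, response, blanks):
        """ICH-style estimates: LOD = 3.3*sigma/S and LOQ = 10*sigma/S, with
        sigma the standard deviation of blank responses and S the slope of the
        calibration curve."""
        slope, _ = np.polyfit(conc, response, deg=1)
        sigma = np.std(blanks, ddof=1)
        return 3.3 * sigma / slope, 10.0 * sigma / slope

    # Hypothetical calibration points and blank readings:
    conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # ug/mL
    resp = np.array([52.0, 101.0, 204.0, 399.0, 801.0])  # instrument response
    blanks = np.array([1.8, 2.4, 2.1, 1.6, 2.2, 1.9])
    lod, loq = lod_loq_from_calibration(conc, resp, blanks)
    print(f"LOD = {lod:.3f}, LOQ = {loq:.3f} ug/mL")
    ```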

  19. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population

    PubMed Central

    CARVALHO, Suzana Papile Maciel; BRITO, Liz Magalhães; de PAIVA, Luiz Airton Saavedra; BICUDO, Lucilene Arilho Ribeiro; CROSATO, Edgard Michel; de OLIVEIRA, Rogério Nogueira

    2013-01-01

Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which the population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. Objective This study aimed to estimate the gender of skeletons by applying the method of Oliveira et al. (1995), previously used in a population sample from Northeast Brazil. Material and Methods The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. Results The results demonstrated that the application of the method of Oliveira et al. (1995) in this population achieved very different outcomes between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of the measurement data for the population analyzed allowed an accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. Conclusion It was concluded that methods involving physical anthropology present a high rate of accuracy for human identification, together with easy application, low cost and simplicity; however, the methodologies must be validated for the different populations due to differences in ethnic patterns, which are directly related to the phenotypic aspects. In this specific case, the method of Oliveira et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), previous methodological adjustment is recommended, as demonstrated in this study. PMID:24037076

  20. Comparison of microcrystalline characterization results from oil palm midrib alpha cellulose using different delignization method

    NASA Astrophysics Data System (ADS)

    Yuliasmi, S.; Pardede, T. R.; Nerdy; Syahputra, H.

    2017-03-01

Oil palm midrib is one of the wastes generated by oil palm plants, containing 34.89% cellulose. This cellulose has the potential to produce microcrystalline cellulose, which can be used as an excipient in tablet formulations for direct compression. Microcrystalline cellulose is the result of a controlled hydrolysis of alpha cellulose, so the process of extracting alpha cellulose from oil palm midrib greatly affects the quality of the resulting microcrystalline cellulose. The purpose of this study was to compare the microcrystalline cellulose produced from alpha cellulose extracted from oil palm midrib by two different methods. The first delignization method uses sodium hydroxide. The second method uses a mixture of nitric acid and sodium nitrite, followed by sodium hydroxide and sodium sulfite. The microcrystalline cellulose obtained by each method was characterized separately, including an organoleptic test, color reagent tests, a dissolution test, a pH test and determination of functional groups by FTIR. The results were compared with microcrystalline cellulose available on the market. The characterization results showed that the microcrystalline cellulose obtained by the first method has the characteristics most similar to the commercially available microcrystalline cellulose.

  1. The combination of the error correction methods of GAFCHROMIC EBT3 film

    PubMed Central

    Li, Yinghui; Chen, Lixin; Zhu, Jinhan; Liu, Xiaowei

    2017-01-01

    Purpose The aim of this study was to combine a set of methods for radiochromic film dosimetry, including calibration, correction for lateral effects and a proposed triple-channel analysis. These methods can be applied to GAFCHROMIC EBT3 film dosimetry for radiation field analysis and verification of IMRT plans. Methods A single-film exposure was used to achieve dose calibration, and the accuracy was verified by comparison with the square-field calibration method. Before performing the dose analysis, the lateral effects on pixel values were corrected. The position dependence of the lateral effect was fitted by a parabolic function, and the curvature factors at different dose levels were obtained using a quadratic formula. After lateral effect correction, a triple-channel analysis was used to reduce disturbances and convert scanned images from films into dose maps. The dose profiles of open fields were measured using EBT3 films and compared with the data obtained using an ionization chamber. Eighteen IMRT plans with different field sizes were measured and verified with EBT3 films using our methods, and compared to TPS dose maps to check the correct implementation of the film dosimetry proposed here. Results The uncertainty due to lateral effects can be reduced to ±1 cGy. Compared with the results of Micke et al., the residual disturbances of the proposed triple-channel method at 48, 176 and 415 cGy are 5.3%, 20.9% and 31.4% smaller, respectively. Compared with the ionization chamber results, the differences in the off-axis ratio and percentage depth dose are within 1% and 2%, respectively. For the application of IMRT verification, there was no difference between the two triple-channel methods. Compared with correction by the triple-channel method alone, the IMRT results of the combined method (lateral effect correction plus the present triple-channel method) show a 2% improvement for large IMRT fields with the 3%/3 mm criteria. PMID:28750023
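
    The lateral-effect correction described above can be sketched as a parabolic fit along the scanner axis. A minimal, hypothetical version (invented pixel values, a single dose level) might look like this:

      # Hedged sketch: fit the scanner's lateral response with a parabola and
      # flatten it out. Positions and pixel values are made up.
      import numpy as np

      x = np.linspace(-10, 10, 21)      # lateral position on the scanner (cm)
      pv = 30000 - 12.0 * x**2 + np.random.normal(0, 30, x.size)  # pixel values

      coeffs = np.polyfit(x, pv, 2)     # parabolic fit: a*x^2 + b*x + c
      fitted = np.polyval(coeffs, x)
      center = np.polyval(coeffs, 0.0)  # reference response at the scanner axis

      pv_corrected = pv * (center / fitted)   # flatten the lateral profile
      print(pv_corrected.round(0))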

  2. Virtual and stereoscopic anatomy: when virtual reality meets medical education.

    PubMed

    de Faria, Jose Weber Vieira; Teixeira, Manoel Jacobsen; de Moura Sousa Júnior, Leonardo; Otoch, Jose Pinhata; Figueiredo, Eberval Gadelha

    2016-11-01

    OBJECTIVE The authors sought to construct, implement, and evaluate an interactive and stereoscopic resource for teaching neuroanatomy, accessible from personal computers. METHODS Forty fresh brains (80 hemispheres) were dissected. Images of areas of interest were captured using a manual turntable and processed and stored in a 5337-image database. Pedagogic evaluation was performed with 84 graduate medical students, divided into 3 groups: 1 (conventional method), 2 (interactive nonstereoscopic), and 3 (interactive and stereoscopic). The method was evaluated through a written theory test and a lab practicum. RESULTS Groups 2 and 3 showed the highest mean scores in pedagogic evaluations and differed significantly from Group 1 (p < 0.05). Group 2 did not differ statistically from Group 3 (p > 0.05). Effect sizes, measured as differences in scores before and after lectures, indicate the effectiveness of the method. ANOVA results showed a significant difference (p < 0.05) between groups, and the Tukey test showed statistical differences between Group 1 and the other 2 groups (p < 0.05). No statistical differences between Groups 2 and 3 were found in the practicum. However, there were significant differences when Groups 2 and 3 were compared with Group 1 (p < 0.05). CONCLUSIONS The authors conclude that this method promoted further improvement in knowledge for students and fostered significantly higher learning when compared with traditional teaching resources.

  3. Mobile micro-colorimeter and micro-spectrometer sensor modules as enablers for the replacement of subjective inspections by objective measurements for optically clear colored liquids in-field

    NASA Astrophysics Data System (ADS)

    Dittrich, Paul-Gerald; Grunert, Fred; Ehehalt, Jörg; Hofmann, Dietrich

    2015-03-01

    The aim of this paper is to show that the colorimetric characterization of optically clear colored liquids can be performed with different measurement methods and their application-specific multichannel spectral sensors. The possible measurement methods are differentiated by the applied types of multichannel spectral sensors and therefore by their spectral resolution, measurement speed, measurement accuracy and measurement costs. The paper describes how different types of multichannel spectral sensors are calibrated with different calibration methods and how the measurement values can be used for further colorimetric calculations. The different measurement methods and the different application-specific calibration methods are explained methodically and theoretically. The paper demonstrates that, and how, different multichannel spectral sensor modules with different calibration methods can be used with smartpads to calculate measurement results both in the laboratory and in the field. A practical example given is the application of different multichannel spectral sensors for the colorimetric characterization of petroleum oils and fuels by the Saybolt color scale.
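
    One common way to calibrate a multichannel spectral sensor for colorimetric use, consistent with (though not necessarily identical to) the calibration methods discussed above, is a least-squares linear map from channel readings to tristimulus values. The sketch below uses synthetic data throughout.

      # Illustrative sketch, not the paper's procedure: fit a linear map from
      # sensor channel readings to CIE XYZ reference values. Numbers invented.
      import numpy as np

      rng = np.random.default_rng(0)
      n_samples, n_channels = 20, 6
      S = rng.uniform(0.0, 1.0, (n_samples, n_channels))   # channel readings
      M_true = rng.uniform(0.0, 1.0, (n_channels, 3))      # unknown response
      XYZ_ref = S @ M_true + rng.normal(0, 0.01, (n_samples, 3))  # references

      M, *_ = np.linalg.lstsq(S, XYZ_ref, rcond=None)      # calibration matrix
      XYZ_est = S @ M                                      # colorimetric estimates
      print(np.abs(XYZ_est - XYZ_ref).max())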

  4. Evaluation of Alternative Altitude Scaling Methods for Thermal Ice Protection System in NASA Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Lee, Sam; Addy, Harold; Broeren, Andy P.; Orchard, David M.

    2017-01-01

    A test was conducted at the NASA Icing Research Tunnel to evaluate altitude scaling methods for thermal ice protection systems. Two scaling methods based on the Weber number were compared against a method based on the Reynolds number. The results generally agreed with the previous set of tests conducted in the NRCC Altitude Icing Wind Tunnel. The Weber number based scaling methods resulted in smaller runback ice mass than the Reynolds number based scaling method. The ice accretions from the Weber number based scaling methods also formed farther upstream. However, there were large differences in the accreted ice mass between the two Weber number based scaling methods. The difference became greater as the speed was increased. This indicates that there may be some Reynolds number effects that are not fully accounted for, which warrants further study.
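
    For reference, the two scaling parameters compared above can be computed as follows. The values, and the choice of water density in We and air density in Re with droplet size as the characteristic length, are one common convention used purely for illustration, not the actual test conditions.

      # Toy calculation of the Weber and Reynolds numbers (illustrative values).
      rho_w = 1000.0   # water density, kg/m^3
      sigma = 0.072    # water surface tension, N/m
      rho_a = 1.2      # air density, kg/m^3
      mu_a = 1.8e-5    # air dynamic viscosity, Pa*s
      V = 70.0         # airspeed, m/s
      d = 20e-6        # droplet diameter as characteristic length, m

      weber = rho_w * V**2 * d / sigma    # We = rho * V^2 * L / sigma
      reynolds = rho_a * V * d / mu_a     # Re = rho * V * L / mu
      print(f"We = {weber:.0f}, Re = {reynolds:.0f}")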

  5. Validation of a combi oven cooking method for preparation of chicken breast meat for quality assessment.

    PubMed

    Zhuang, H; Savage, E M

    2008-10-01

    Quality assessment results of cooked meat can be significantly affected by sample preparation with different cooking techniques. The combi oven is a relatively new cooking technique in the U.S. market; however, there was a lack of published data about its effect on quality measurements of chicken meat. Broiler breast fillets deboned at 24 h postmortem were cooked with one of 3 methods (combi oven, commercial oven, or hot water) to a core temperature of 80 degrees C. Cooking methods were evaluated based on cooking operation requirements, sensory profiles, Warner-Bratzler (WB) shear and cooking loss. Our results show that the average cooking time for the combi oven was 17 min, compared with 31 min for the commercial oven method and 16 min for the hot water method. The combi oven did not result in a significant difference in WB shear force values, although the cooking loss of the combi oven samples was significantly lower than that of the commercial oven and hot water samples. Sensory profiles of the combi oven samples did not differ significantly from those of the commercial oven and hot water samples. These results demonstrate that combi oven cooking did not significantly affect sensory profiles and WB shear force measurements of chicken breast muscle compared to the other 2 cooking methods. The combi oven method appears to be an acceptable alternative for preparing chicken breast fillets for quality assessment.

  6. Assessment of Different Discrete Particle Methods Ability To Predict Gas-Particle Flow in a Small-Scale Fluidized Bed

    DOE PAGES

    Lu, Liqiang; Gopalan, Balaji; Benyahia, Sofiane

    2017-06-21

    Several discrete particle methods exist in the open literature to simulate fluidized bed systems, such as the discrete element method (DEM), time driven hard sphere (TDHS), the coarse-grained particle method (CGPM), coarse grained hard sphere (CGHS), and multi-phase particle-in-cell (MP-PIC). These approaches usually solve the fluid phase in a Eulerian fixed frame of reference and the particle phase using the Lagrangian method. The first difference between these models lies in tracking either real particles or lumped parcels. The second difference is in the treatment of particle-particle interactions: by calculating collision forces (DEM and CGPM), using momentum conservation laws (TDHS and CGHS), or based on a particle stress model (MP-PIC). These major model differences lead to a wide range of accuracy and computation speed. However, these models have never been compared directly using the same experimental dataset. In this research, a small-scale fluidized bed is simulated with these methods using the same open-source code MFIX. The results indicate that modeling particle-particle collisions by TDHS increases the computation speed while maintaining good accuracy. Also, lumping a few particles into a parcel increases the computation speed with little loss in accuracy. However, modeling particle-particle interactions with solids stress leads to a large loss in accuracy for only a small increase in computation speed. The MP-PIC method predicts an unphysical particle-particle overlap, which results in an incorrect voidage distribution and incorrect overall bed hydrodynamics. Based on this study, we recommend using the CGHS method for fluidized bed simulations due to its computational speed, which rivals that of MP-PIC while maintaining much better accuracy.
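
    The momentum-conservation collision update underlying hard-sphere methods such as TDHS and CGHS can be sketched in its generic textbook form (this is not MFIX's actual implementation):

      # Post-collision velocities of two smooth spheres from impulse balance.
      import numpy as np

      def hard_sphere_collision(v1, v2, x1, x2, m1, m2, e=0.9):
          """Return post-collision velocities; e is the restitution coefficient."""
          n = (x2 - x1) / np.linalg.norm(x2 - x1)     # unit normal at contact
          v_rel_n = np.dot(v1 - v2, n)                # closing speed along n
          if v_rel_n <= 0.0:                          # separating: no impulse
              return v1, v2
          J = (1.0 + e) * v_rel_n / (1.0/m1 + 1.0/m2)  # impulse magnitude
          return v1 - (J / m1) * n, v2 + (J / m2) * n

      v1p, v2p = hard_sphere_collision(np.array([1.0, 0.0]), np.array([0.0, 0.0]),
                                       np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                                       m1=1e-6, m2=1e-6)
      print(v1p, v2p)   # head-on, e=0.9: most momentum transferred to sphere 2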


  8. An evaluation of total starch and starch gelatinization methodologies in pelleted animal feed.

    PubMed

    Zhu, L; Jones, C; Guo, Q; Lewis, L; Stark, C R; Alavi, S

    2016-04-01

    The quantification of total starch content (TS) or degree of starch gelatinization (DG) in animal feed is always challenging because of the potential interference from other ingredients. In this study, the differences in TS or DG measurement in pelleted swine feed due to variations in analytical methodology were quantified. Pelleted swine feed was used to create 6 different diets manufactured with various processing conditions in a 2 × 3 factorial design (2 conditioning temperatures, 77 or 88°C, and 3 conditioning retention times, 15, 30, or 60 s). Samples at each processing stage (cold mash, hot mash, hot pelletized feed, and final cooled pelletized feed) were collected for each of the 6 treatments and analyzed for TS and DG. Two different methodologies were evaluated for TS determination (the AOAC International method 996.11 vs. the modified glucoamylase method) and DG determination (the modified glucoamylase method vs. differential scanning calorimetry [DSC]). For TS determination, the AOAC International method 996.11 measured lower TS values in cold pellets compared with the modified glucoamylase method. The AOAC International method resulted in lower TS in cold mash than cooled pelletized feed, whereas the modified glucoamylase method showed no significant differences in TS content before or after pelleting. For DG, the modified glucoamylase method demonstrated increased DG with each processing step. Furthermore, increasing the conditioning temperature and time resulted in a greater DG when evaluated by the modified glucoamylase method. However, results demonstrated that DSC is not suitable as a quantitative tool for determining DG in multicomponent animal feeds due to interferences from nonstarch transformations, such as protein denaturation.

  9. Plant selection for ethnobotanical uses on the Amalfi Coast (Southern Italy).

    PubMed

    Savo, V; Joy, R; Caneva, G; McClatchey, W C

    2015-07-15

    Many ethnobotanical studies have investigated selection criteria for medicinal and non-medicinal plants. In this paper we test several statistical methods on different ethnobotanical datasets in order to 1) define to what extent the nature of the datasets can affect the interpretation of results; and 2) determine whether the selection of plants for different uses is based on phylogeny or on other selection criteria. We considered three different ethnobotanical datasets: two datasets of medicinal plants and a dataset of non-medicinal plants (handicraft production, domestic and agro-pastoral practices), together with two floras of the Amalfi Coast. We performed residual analysis from linear regression, the binomial test and a Bayesian approach to identify under-used and over-used plant families within the ethnobotanical datasets. Percentages of agreement were calculated to compare the results of the analyses. We also analyzed the relationship between plant selection and phylogeny, chorology, life form and habitat using the chi-square test. Pearson's residuals for each of the significant chi-square analyses were examined to investigate alternative hypotheses about plant selection criteria. The results of the three statistical methods differed within the same dataset, and between datasets and floras, but with some similarities. In the two medicinal datasets, only Lamiaceae was identified in both floras as an over-used family by all three statistical methods. All statistical methods in one flora agreed that Malvaceae was over-used and Poaceae under-used, but this was not consistent with the results for the second flora, in which one statistical result was non-significant. All other families showed some discrepancy in significance across methods or floras. Significant over- or under-use was observed in only a minority of cases. The chi-square analyses were significant for phylogeny, life form and habitat. Pearson's residuals indicated a non-random selection of woody species for non-medicinal uses and an under-use of plants of temperate forests for medicinal uses. Our study showed that selection criteria for plant uses (including medicinal ones) are not always based on phylogeny. The comparison of different statistical methods (regression, binomial and Bayesian) under different conditions led to the conclusion that the most conservative results are obtained using regression analysis.
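
    The binomial-test step described above can be sketched as follows, with the family's share of the flora as the null proportion; all counts are invented.

      # Is a family over-represented among used plants relative to the flora?
      from scipy.stats import binomtest

      flora_total = 1500       # species in the flora (hypothetical)
      family_in_flora = 80     # species of the family in the flora
      used_total = 300         # species with recorded uses in the dataset
      family_used = 30         # ... of which belong to the family

      expected_p = family_in_flora / flora_total
      result = binomtest(family_used, used_total, expected_p,
                         alternative='greater')
      print(f"p-value for over-use = {result.pvalue:.4f}")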

  10. Olive oil polyphenols: A quantitative method by high-performance liquid-chromatography-diode-array detection for their determination and the assessment of the related health claim.

    PubMed

    Ricciutelli, Massimo; Marconi, Shara; Boarelli, Maria Chiara; Caprioli, Giovanni; Sagratini, Gianni; Ballini, Roberto; Fiorini, Dennis

    2017-01-20

    In order to assess whether an extra virgin olive oil (EVOO) qualifies for the health claim related to olive oil polyphenols (Reg. EU n.432/2012), a new method to quantify these species in EVOO, by means of liquid-liquid extraction followed by HPLC-DAD/MS/MS of the hydroalcoholic extract, has been developed and validated. Different extraction procedures, different types of reverse-phase analytical columns (Synergi Polar, Spherisorb ODS2 and Kinetex) and different eluents were tested. The Synergi Polar column (250×4.6 mm, 4 μm), never used before in this kind of application, provided the best results, with water and methanol/isopropanol (9/1) as eluents. The method allows the quantification of the phenolic alcohols tyrosol and hydroxytyrosol; the phenolic acids vanillic, p-coumaric and ferulic acids; secoiridoid derivatives; the lignans pinoresinol and acetoxypinoresinol; and the flavonoids luteolin and apigenin. The new method was applied to 20 commercial EVOOs belonging to two different price-range categories (3.78-5.80 euros/L and 9.5-25.80 euros/L) and to 5 olive oils. The results highlight that acetoxypinoresinol, ferulic acid, vanillic acid and the total non-secoiridoid phenolic substances were significantly higher in the higher-priced EVOOs than in the lower-priced ones (P=0.0026, 0.0217, 0.0092 and 0.0003, respectively). For most of the samples analysed there is excellent agreement between the results obtained by applying the HPLC method adopted by the International Olive Council and those obtained by applying the presented HPLC method. Results obtained by the HPLC methods were also compared with those obtained by the colorimetric Folin-Ciocalteu method. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. [Optimized application of nested PCR method for detection of malaria].

    PubMed

    Yao-Guang, Z; Li, J; Zhen-Yu, W; Li, C

    2017-04-28

    Objective To optimize the application of the nested PCR method for the detection of malaria in routine practice, so as to improve the efficiency of malaria detection. Methods A PCR premix, internal primers for further amplification, and newly designed primers targeting the two Plasmodium ovale subspecies were employed to optimize the reaction system, reaction conditions and P. ovale-specific primers on the basis of routine nested PCR. The specificity and sensitivity of the optimized method were then analyzed. Positive blood samples and malaria examination samples were tested by the routine nested PCR and the optimized method simultaneously, and the detection results were compared and analyzed. Results The optimized method showed good specificity, and its sensitivity reached the pg to fg level. When the two methods were used to test the same positive malarial blood samples, the PCR products of the two methods showed no significant difference, but with the optimized method non-specific amplification was markedly reduced, the detection rate of P. ovale subspecies improved, and the overall specificity increased. The detection results of 111 malarial blood samples showed that the sensitivity and specificity of the routine nested PCR were 94.57% and 86.96%, respectively, while those of the optimized method were both 93.48%; the difference between the two methods in sensitivity was not statistically significant ( P > 0.05), but the difference in specificity was ( P < 0.05). Conclusion The optimized PCR improves specificity without reducing sensitivity relative to routine nested PCR; it also saves costs and increases the efficiency of malaria detection by requiring fewer experimental steps.
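
    For reference, reported sensitivity and specificity values of this kind derive from a 2x2 confusion table; a minimal sketch with invented counts (not the study's 111 samples):

      # Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).
      def sens_spec(tp, fn, tn, fp):
          return tp / (tp + fn), tn / (tn + fp)

      sens, spec = sens_spec(tp=86, fn=6, tn=17, fp=2)   # hypothetical counts
      print(f"sensitivity = {sens:.2%}, specificity = {spec:.2%}")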

  12. Estimating Soil Organic Carbon Stocks and Spatial Patterns with Statistical and GIS-Based Methods

    PubMed Central

    Zhi, Junjun; Jing, Changwei; Lin, Shengpan; Zhang, Cao; Liu, Qiankun; DeGloria, Stephen D.; Wu, Jiaping

    2014-01-01

    Accurately quantifying soil organic carbon (SOC) is considered fundamental to studying soil quality, modeling the global carbon cycle, and assessing global climate change. This study evaluated the uncertainties caused by up-scaling of soil properties from the county scale to the provincial scale and from the lower-level classification of Soil Species to Soil Group, using four methods: the mean, median, Soil Profile Statistics (SPS), and pedological professional knowledge-based (PKB) methods. For the SPS method, SOC stock is calculated at the county scale by multiplying the mean SOC density value of each soil type in a county by its corresponding area. For the mean or median method, the SOC density value of each soil type is calculated using the provincial arithmetic mean or median. For the PKB method, the SOC density value of each soil type is calculated at the county scale considering soil parent materials and the spatial locations of all soil profiles. A newly constructed 1:50,000 soil survey geographic database of Zhejiang Province, China, was used for the evaluation. Results indicated that as soil classification levels were up-scaled from Soil Species to Soil Group, the variation of estimated SOC stocks among different soil classification levels was clearly lower than that among different methods. The difference in the estimated SOC stocks among the four methods was lowest at the Soil Species level. The differences in SOC stocks among the mean, median, and PKB methods for different Soil Groups resulted from differences in the procedure of aggregating soil profile properties to represent the attributes of one soil type. Compared with the other three estimation methods (i.e., the SPS, mean and median methods), the PKB method holds significant promise for characterizing spatial differences in SOC distribution because the spatial locations of all soil profiles are considered during the aggregation procedure. PMID:24840890
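
    The SPS-style aggregation described above reduces to stock = sum over soil types of (SOC density x area); a toy sketch with invented soil types and figures:

      # Stock = sum of (mean SOC density x mapped area) over soil types.
      soil_types = {
          # soil type: (mean SOC density, kg C/m^2; area, m^2) -- hypothetical
          "Red Soil":    (6.1, 2.4e9),
          "Paddy Soil":  (8.3, 1.1e9),
          "Yellow Soil": (7.0, 0.8e9),
      }

      stock_kg = sum(density * area for density, area in soil_types.values())
      print(f"SOC stock ~ {stock_kg / 1e9:.1f} Tg C")   # 1 Tg = 1e9 kg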

  13. Nicotine Metabolite Ratio (3-hydroxycotinine/cotinine) in Plasma and Urine by Different Analytical Methods and Laboratories: Implications for Clinical Implementation

    PubMed Central

    Tanner, Julie-Anne; Novalen, Maria; Jatlow, Peter; Huestis, Marilyn A.; Murphy, Sharon E.; Kaprio, Jaakko; Kankaanpää, Aino; Galanti, Laurence; Stefan, Cristiana; George, Tony P.; Benowitz, Neal L.; Lerman, Caryn; Tyndale, Rachel F.

    2015-01-01

    Background The highly genetically variable enzyme CYP2A6 metabolizes nicotine to cotinine (COT) and COT to trans-3′-hydroxycotinine (3HC). The nicotine metabolite ratio (NMR, 3HC/COT) is commonly used as a biomarker of CYP2A6 enzymatic activity, rate of nicotine metabolism, and total nicotine clearance; NMR is associated with numerous smoking phenotypes, including smoking cessation. Our objective was to investigate the impact of different measurement methods, at different sites, on plasma and urinary NMR measures from ad libitum smokers. Methods Plasma (n=35) and urine (n=35) samples were sent to eight different laboratories, which employed similar and different methods of COT and 3HC measurement to derive the NMR. We used Bland-Altman analysis to assess agreement, and Pearson correlations to evaluate associations, between NMR measured by different methods. Results Measures of plasma NMR were in strong agreement between methods according to Bland-Altman analysis (ratios 0.82–1.16) and were highly correlated (all Pearson r>0.96, P<0.0001). Measures of urinary NMR were in relatively weaker agreement (ratios 0.62–1.71) and less strongly correlated (Pearson r values of 0.66–0.98, P<0.0001) between different methods. Agreement for plasma and urinary COT and 3HC concentrations, while weaker than for NMR, was likewise better in plasma than in urine. Conclusions Plasma is a very reliable biological source for the determination of NMR, robust to differences in these analytical protocols and assessment sites. Impact Together this indicates a reduced need for differential interpretation of plasma NMR results based on the approach used, allowing for direct comparison of different studies. PMID:26014804
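
    A generic Bland-Altman computation of bias and 95% limits of agreement, of the kind used above, with simulated paired measurements rather than the study's data:

      # Bland-Altman agreement between two measurement methods (simulated).
      import numpy as np

      rng = np.random.default_rng(1)
      method_a = rng.lognormal(mean=-1.0, sigma=0.5, size=35)   # NMR, method A
      method_b = method_a * rng.normal(1.0, 0.05, size=35)      # method B

      diff = method_a - method_b
      bias = diff.mean()
      loa = 1.96 * diff.std(ddof=1)     # half-width of the limits of agreement
      print(f"bias = {bias:.4f}, 95% LoA = "
            f"[{bias - loa:.4f}, {bias + loa:.4f}]")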

  14. A Comparison of the Kernel Equating Method with Traditional Equating Methods Using SAT[R] Data

    ERIC Educational Resources Information Center

    Liu, Jinghua; Low, Albert C.

    2008-01-01

    This study applied kernel equating (KE) in two scenarios: equating to a very similar population and equating to a very different population, referred to as a distant population, using SAT[R] data. The KE results were compared to the results obtained from analogous traditional equating methods in both scenarios. The results indicate that KE results…

  15. BAYESIAN BICLUSTERING FOR PATIENT STRATIFICATION.

    PubMed

    Khakabimamaghani, Sahand; Ester, Martin

    2016-01-01

    The move from Empirical Medicine towards Personalized Medicine has attracted attention to Stratified Medicine (SM). Some methods for patient stratification, the central task of SM, are available in the literature; however, significant open issues remain. First, it is still unclear whether integrating different datatypes will help in detecting disease subtypes more accurately and, if not, which datatype(s) are most useful for this task. Second, it is not clear how we can compare different methods of patient stratification. Third, as most of the proposed stratification methods are deterministic, there is a need to investigate the potential benefits of applying probabilistic methods. To address these issues, we introduce a novel integrative Bayesian biclustering method, called B2PS, for patient stratification, and propose methods for evaluating the results. Our experimental results demonstrate the superiority of B2PS over a popular state-of-the-art method and the benefits of Bayesian approaches. Our results agree with the intuition that transcriptomic data forms a better basis for patient stratification than genomic data.

  16. A method of investigating the phase response characteristic of the ionospheric scattering communications channel

    NASA Technical Reports Server (NTRS)

    Yakovets, A. F.

    1972-01-01

    A method is proposed for measuring the phase difference fluctuations between vibrations at different frequencies that result from scattering properties of the medium. The measurement equipment is described, along with an ideal communication channel.

  17. Automatic 3D kidney segmentation based on shape constrained GC-OAAM

    NASA Astrophysics Data System (ADS)

    Chen, Xinjian; Summers, Ronald M.; Yao, Jianhua

    2011-03-01

    The kidney can be classified into three main tissue types: renal cortex, renal medulla and renal pelvis (or collecting system). Dysfunction of different renal tissue types may cause different kidney diseases. Therefore, accurate and efficient segmentation of the kidney into different tissue types plays a very important role in clinical research. In this paper, we propose an automatic 3D kidney segmentation method which segments the kidney into the three different tissue types: renal cortex, medulla and pelvis. The proposed method synergistically combines the active appearance model (AAM), live wire (LW) and graph cut (GC) methods, GC-OAAM for short. Our method consists of two main steps. First, a pseudo-3D segmentation method is employed for kidney initialization, in which the segmentation is performed slice-by-slice via a multi-object oriented active appearance model (OAAM) method. An improved iterative model refinement algorithm, which synergistically combines the AAM and LW methods, is proposed for the AAM optimization. A multi-object strategy is applied to help the object initialization, and 3D model constraints are applied to the initialization result. Second, the object shape information generated from the initialization step is integrated into the GC cost computation. A multi-label GC method is used to segment the kidney into cortex, medulla and pelvis. The proposed method was tested on 19 clinical arterial-phase CT data sets. The preliminary results showed the feasibility and efficiency of the proposed method.

  18. Thermal comfort of aeroplane seats: influence of different seat materials and the use of laboratory test methods.

    PubMed

    Bartels, Volkmar T

    2003-07-01

    This study determined the influence of different cover and cushion materials on the thermal comfort of aeroplane seats. Different materials as well as ready-made seats were investigated using the physiological laboratory test methods Skin Model and seat comfort tester. Additionally, seat trials with human test subjects were performed in a climatic chamber. Results show that a fabric cover produces considerably higher sweat transport than leather. A three-dimensional knitted spacer fabric turns out to be the better cushion alternative in comparison to a moulded foam pad. Results from the physiological laboratory test methods correspond well with the seat trials with human test subjects.

  19. Revealed and stated preference valuation and transfer: A within-sample comparison of water quality improvement values

    NASA Astrophysics Data System (ADS)

    Ferrini, Silvia; Schaafsma, Marije; Bateman, Ian

    2014-06-01

    Benefit transfer (BT) methods are becoming increasingly important for environmental policy, but the empirical findings regarding transfer validity are mixed. A novel valuation survey was designed to obtain both stated preference (SP) and revealed preference (RP) data concerning river water quality values from a large sample of households. Both dichotomous choice and payment card contingent valuation (CV) and travel cost (TC) data were collected. The resulting valuations were directly compared and used for BT analyses using both unit value and function transfer approaches. WTP estimates are found to pass the convergence validity test. BT results show that the CV data produce lower transfer errors than the TC data (below 20% for both unit value and function transfer), with the advantage greatest when using function transfer. Further, comparison of WTP estimates suggests that in all cases, differences between methods are larger than differences between study areas. Results show that when multiple studies are available, using welfare estimates from the same area but based on a different method consistently results in larger errors than transfers across space keeping the method constant.
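
    The transfer error underlying such comparisons is conventionally the relative deviation of the transferred estimate from the study-site estimate; a one-line sketch with invented numbers:

      # Relative benefit-transfer error of a transferred WTP estimate.
      def transfer_error(wtp_transferred, wtp_observed):
          return abs(wtp_transferred - wtp_observed) / wtp_observed

      print(f"{transfer_error(wtp_transferred=24.0, wtp_observed=30.0):.0%}")  # 20%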

  20. An investigation of the impact of using different methods for network meta-analysis: a protocol for an empirical evaluation.

    PubMed

    Karahalios, Amalia Emily; Salanti, Georgia; Turner, Simon L; Herbison, G Peter; White, Ian R; Veroniki, Areti Angeliki; Nikolakopoulou, Adriani; Mckenzie, Joanne E

    2017-06-24

    Network meta-analysis, a method to synthesise evidence from multiple treatments, has increased in popularity in the past decade. Two broad approaches are available to synthesise data across networks, namely, arm- and contrast-synthesis models, with a range of models that can be fitted within each. There has been recent debate about the validity of the arm-synthesis models, but to date, there has been limited empirical evaluation comparing results using the methods applied to a large number of networks. We aim to address this gap through the re-analysis of a large cohort of published networks of interventions using a range of network meta-analysis methods. We will include a subset of networks from a database of network meta-analyses of randomised trials that have been identified and curated from the published literature. The subset of networks will include those where the primary outcome is binary, the number of events and participants are reported for each direct comparison, and there is no evidence of inconsistency in the network. We will re-analyse the networks using three contrast-synthesis methods and two arm-synthesis methods. We will compare the estimated treatment effects, their standard errors, treatment hierarchy based on the surface under the cumulative ranking (SUCRA) curve, the SUCRA value, and the between-trial heterogeneity variance across the network meta-analysis methods. We will investigate whether differences in the results are affected by network characteristics and baseline risk. The results of this study will inform whether, in practice, the choice of network meta-analysis method matters, and if it does, in what situations differences in the results between methods might arise. The results from this research might also inform future simulation studies.
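
    The SUCRA statistic mentioned above can be computed from a matrix of ranking probabilities; a small sketch with invented probabilities for three treatments:

      # SUCRA_k = sum over j=1..a-1 of P(treatment k ranks <= j) / (a - 1).
      import numpy as np

      p_rank = np.array([[0.6, 0.3, 0.1],    # treatment A (ranks 1..3)
                         [0.3, 0.5, 0.2],    # treatment B
                         [0.1, 0.2, 0.7]])   # treatment C

      a = p_rank.shape[1]
      cum = np.cumsum(p_rank, axis=1)[:, :-1]    # P(rank <= j) for j = 1..a-1
      sucra = cum.sum(axis=1) / (a - 1)
      print(sucra)   # ~[0.75, 0.55, 0.20]; higher = better hierarchy position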

  1. Influence of the antagonist material on the wear of different composites using two different wear simulation methods.

    PubMed

    Heintze, S D; Zellweger, G; Cavalleri, A; Ferracane, J

    2006-02-01

    The aim of the study was to evaluate two ceramic materials as possible substitutes for enamel using two wear simulation methods, and to compare both methods with regard to the wear results for different materials. Flat specimens (OHSU n=6, Ivoclar n=8) of one compomer and three composite materials (Dyract AP, Tetric Ceram, Z250, experimental composite) were fabricated and subjected to wear using two different wear testing methods, with two pressable ceramic materials as styli (Empress, experimental ceramic). For the OHSU method, enamel styli of the same dimensions as the ceramic styli were additionally fabricated. The two wear testing methods differ with regard to loading force, lateral movement of the stylus, stylus dimension, number of cycles, thermocycling and abrasive medium. In the OHSU method, the wear facets (mean vertical loss) were measured using a contact profilometer, while in the Ivoclar method (maximal vertical loss) a laser scanner was used for this purpose. Additionally, the vertical loss of the ceramic stylus was quantified for the Ivoclar method. The results obtained from each method were compared by ANOVA and Tukey's test (p<0.05). To compare the two wear methods, the log-transformed data were used to establish relative ranks between material/stylus combinations, assessed by applying the Pearson correlation coefficient. The experimental ceramic material generated significantly less wear on Tetric Ceram and Z250 specimens than the Empress stylus in the Ivoclar method, whereas with the OHSU method no difference between the two ceramic antagonists was found with regard to abrasion or attrition. The wear generated by the enamel stylus was not statistically different from that generated by the other two ceramic materials in the OHSU method. With the Ivoclar method, wear of the ceramic stylus was only statistically different when in contact with Tetric Ceram. There was a close correlation between the attrition wear of the OHSU method and the wear of the Ivoclar method (Pearson coefficient 0.83, p=0.01). Pressable ceramic materials can be used as a substitute for enamel in wear testing machines; however, material ranking may be affected by the type of ceramic material chosen. The attrition wear of the OHSU method was comparable with the wear generated with the Ivoclar method.

  2. RCWA and FDTD modeling of light emission from internally structured OLEDs.

    PubMed

    Callens, Michiel Koen; Marsman, Herman; Penninck, Lieven; Peeters, Patrick; de Groot, Harry; ter Meulen, Jan Matthijs; Neyts, Kristiaan

    2014-05-05

    We report on the fabrication and simulation of a green OLED with an Internal Light Extraction (ILE) layer. The optical behavior of these devices is simulated using both Rigorous Coupled Wave Analysis (RCWA) and Finite Difference Time-Domain (FDTD) methods. Results obtained using these two different techniques show excellent agreement and predict the experimental results with good precision. By verifying the validity of both simulation methods on the internal light extraction structure we pave the way to optimization of ILE layers using either of these methods.
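
    To give a flavor of the FDTD method named above, here is a bare-bones 1D Yee update loop in normalized units; actual OLED simulations are 3D, dispersive and far more involved than this sketch.

      # Minimal 1D FDTD (Yee) loop: leapfrog E/H updates plus a soft source.
      import numpy as np

      nz, nt = 200, 500
      ez = np.zeros(nz)          # electric field on the primary grid
      hy = np.zeros(nz - 1)      # magnetic field on the staggered grid
      c = 0.5                    # Courant number (c0*dt/dz), <= 1 for stability

      for t in range(nt):
          hy += c * (ez[1:] - ez[:-1])           # update H from the curl of E
          ez[1:-1] += c * (hy[1:] - hy[:-1])     # update E from the curl of H
          ez[nz // 2] += np.exp(-((t - 30) / 10.0) ** 2)   # Gaussian source

      print(float(ez.max()))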

  3. Temperature Profiles of Different Cooling Methods in Porcine Pancreas Procurement

    PubMed Central

    Weegman, Brad P.; Suszynski, Thomas M.; Scott, William E.; Ferrer, Joana; Avgoustiniatos, Efstathios S.; Anazawa, Takayuki; O’Brien, Timothy D.; Rizzari, Michael D.; Karatzas, Theodore; Jie, Tun; Sutherland, David ER.; Hering, Bernhard J.; Papas, Klearchos K.

    2014-01-01

    Background Porcine islet xenotransplantation is a promising alternative to human islet allotransplantation. Porcine pancreas cooling needs to be optimized to reduce the warm ischemia time (WIT) following donation after cardiac death, which is associated with poorer islet isolation outcomes. Methods This study examines the effect of 4 different cooling methods on core porcine pancreas temperature (n=24) and histopathology (n=16). All methods involved surface cooling with crushed ice and chilled irrigation. Method A, which is the standard for porcine pancreas procurement, used only surface cooling. Method B involved an intravascular flush with cold solution through the pancreas arterial system. Method C involved an intraductal infusion with cold solution through the major pancreatic duct, and Method D combined all 3 cooling methods. Results Surface cooling alone (Method A) gradually decreased core pancreas temperature to < 10 °C after 30 minutes. Using an intravascular flush (Method B) improved cooling during the entire duration of procurement, but incorporating an intraductal infusion (Method C) rapidly reduced core temperature by 15–20 °C within the first 2 minutes of cooling. Combining all methods (Method D) was the most effective at rapidly reducing temperature and providing sustained cooling throughout the duration of procurement, although the recorded WIT was not different between methods (p=0.36). Histological scores differed between the cooling methods (p=0.02) and were worst with Method A. There were differences in histological scores between Methods A and C (p=0.02) and Methods A and D (p=0.02), but not between Methods C and D (p=0.95), which may highlight the importance of early cooling using an intraductal infusion. Conclusions In conclusion, surface cooling alone cannot rapidly cool large (porcine or human) pancreata. Additional cooling with an intravascular flush and intraductal infusion improves core porcine pancreas temperature profiles and histopathology scores during procurement. These data may also have implications for human pancreas procurement, since use of an intraductal infusion is not common practice. PMID:25040217

  4. Testing an automated method to estimate ground-water recharge from streamflow records

    USGS Publications Warehouse

    Rutledge, A.T.; Daniel, C.C.

    1994-01-01

    The computer program RORA allows automated analysis of streamflow hydrographs to estimate ground-water recharge. Output from the program, which is based on the recession-curve-displacement method (often referred to as the Rorabaugh method, for whom the program is named), was compared to estimates of recharge obtained from a manual analysis of 156 years of streamflow record from 15 streamflow-gaging stations in the eastern United States. Statistical tests showed that there was no significant difference between paired estimates of annual recharge by the two methods. Tests of results produced by the four workers who performed the manual method showed that results can differ significantly between workers. Twenty-two percent of the variation between manual and automated estimates could be attributed to having different workers perform the manual method. The program RORA will produce estimates of recharge equivalent to those produced manually, greatly increase the speed of analysis, and reduce the subjectivity inherent in manual analysis.

  5. Morbidity and chronic pain following different techniques of caesarean section: A comparative study.

    PubMed

    Belci, D; Di Renzo, G C; Stark, M; Đurić, J; Zoričić, D; Belci, M; Peteh, L L

    2015-01-01

    Research examining long-term outcomes after childbirth performed with different techniques of caesarean section has been limited and does not provide information on morbidity and neuropathic pain. This study compares patients submitted to the 'Traditional' method using a Pfannenstiel incision with patients submitted to the 'Misgav Ladach' method, ≥ 5 years after the operation. We find better long-term postoperative results in the patients treated with the Misgav Ladach method compared with the Traditional method. The results were statistically better regarding the intensity of pain, the presence of neuropathic and chronic pain, and the level of satisfaction with the cosmetic appearance of the scar.

  6. An investigation of the 'Overlap' between the Statistical-Discrete-Gust and the Power-Spectral-Density analysis methods

    NASA Technical Reports Server (NTRS)

    Perry, Boyd, III; Pototzky, Anthony S.; Woods, Jessica A.

    1989-01-01

    This paper presents the results of a NASA investigation of a claimed 'Overlap' between two gust response analysis methods: the Statistical Discrete Gust (SDG) method and the Power Spectral Density (PSD) method. The claim is that the ratio of an SDG response to the corresponding PSD response is 10.4. Analytical results presented in this paper for several different airplanes at several different flight conditions indicate that such an 'Overlap' does appear to exist. However, the claim was not met precisely: a scatter of up to about 10 percent about the 10.4 factor can be expected.

  7. Comparison of methods for determination of total oil sands-derived naphthenic acids in water samples.

    PubMed

    Hughes, Sarah A; Huang, Rongfu; Mahaffey, Ashley; Chelme-Ayala, Pamela; Klamerth, Nikolaus; Meshref, Mohamed N A; Ibrahim, Mohamed D; Brown, Christine; Peru, Kerry M; Headley, John V; Gamal El-Din, Mohamed

    2017-11-01

    There are several established methods for the determination of naphthenic acids (NAs) in waters associated with oil sands mining operations. Due to their highly complex nature, the measured concentration and composition of NAs vary depending on the method used. This study compared different common sample preparation techniques, analytical instrument methods, and analytical standards to measure NAs in groundwater and process water samples collected from an active oil sands operation. In general, the high- and ultrahigh-resolution methods, namely ultra-performance liquid chromatography time-of-flight mass spectrometry (UPLC-TOF-MS) and Orbitrap mass spectrometry (Orbitrap-MS), were within an order of magnitude of the Fourier transform infrared spectroscopy (FTIR) methods. The gas chromatography mass spectrometry (GC-MS) methods consistently gave the highest NA concentrations and the greatest standard error. Total NA concentration was not statistically different between sample preparation by solid phase extraction and by liquid-liquid extraction. Calibration standards influenced quantitation results. This work provides a comprehensive understanding of the inherent differences between the various techniques available to measure NAs, and hence of the potential differences in measured amounts of NAs in samples. Results from this study will contribute to analytical method standardization for NA analysis in oil sands related water samples. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. How different are the results acquired from mathematical and subjective methods in dendrogeomorphology? Insights from landslide movements

    NASA Astrophysics Data System (ADS)

    Šilhán, Karel

    2016-01-01

    Knowledge of past landslide activity is crucial for understanding landslide behaviour and for modelling potential future landslide occurrence. Dendrogeomorphic approaches represent the most precise methods of landslide dating (trees create annual tree-rings over timescales of up to several hundred years). Despite the advantages of these methods, many open questions remain. One of the less researched uncertainties, and the focus of this study, is the impact of two common methods of geomorphic signal extraction on the spatial and temporal results of landslide reconstruction. In total, 93 Norway spruce (Picea abies (L.) Karst.) trees were sampled at one landslide location dominated by block-type movements in the forefield of the Orlické hory Mts., Bohemian Massif. Landslide signals were examined by the classical subjective method based on reaction (compression) wood analysis and by a numerical method based on eccentric growth analysis. The chronology of landslide movements obtained by the mathematical method contained twice the number of detected events compared to the subjective method. This finding indicates that eccentric growth is a more accurate indicator of landslide movements than the classical analysis of reaction wood. The reconstructed spatial activity of landslide movements shows a similar distribution of recurrence intervals (Ri) for both methods. The differences (at most 30% of the total Ri range) between the results of the two methods may be caused by differences in the ability of trees to react to tilting of their stems with a specific growth response (reaction wood formation or eccentric growth). Finally, the ability of trees to record tilting events (by both growth responses) in their tree-ring series was analysed for different decades of tree life. The highest sensitivity to external tilting events occurred at tree ages from 70 to 80 years for reaction wood formation and from 80 to 90 years for the eccentric growth response. This means that the ability of P. abies to record geomorphic signals varies not only with the type of growth response but also with age.

  9. Study of EEG during Sternberg Tasks with Different Direction of Arrangement for Letters

    NASA Astrophysics Data System (ADS)

    Kamihoriuchi, Kenji; Nuruki, Atsuo; Matae, Tadashi; Kurono, Asutsugu; Yunokuchi, Kazutomo

    In a previous study, we recorded the electroencephalogram (EEG) of patients with dementia and of healthy subjects during a Sternberg task. However, only one presentation method for the Sternberg task was considered in that study. In the present study, we therefore examined whether the EEG differed between two presentation methods, with letters arranged horizontally or vertically. We recorded the EEG of six healthy subjects during Sternberg tasks using the two presentation methods. EEG topography did not differ between the two methods in any subject. In all subjects, however, the correct-response rate was higher for vertically arranged letters.

  10. ICASE Semiannual Report, October 1, 1992 through March 31, 1993

    DTIC Science & Technology

    1993-06-01

    NUMERICAL MATHEMATICS (Saul Abarbanel): Further results have been obtained regarding long time integration of high order compact finite difference schemes ... overall accuracy. These problems are common to all numerical methods: finite differences, finite elements and spectral methods. It should be noted that ... fourth order finite difference scheme. * In the same case, the D6 wavelets provide a sixth order finite difference, noncompact formula. * The wavelets ...

  11. Conduct of a personal radiofrequency electromagnetic field measurement study: proposed study protocol

    PubMed Central

    2010-01-01

    Background The development of new wireless communication technologies that emit radio frequency electromagnetic fields (RF-EMF) is ongoing, but little is known about the RF-EMF exposure distribution in the general population. Previous attempts to measure personal exposure to RF-EMF have used different measurement protocols and analysis methods, making comparisons of exposure situations across different study populations very difficult. As a result, observed differences in exposure levels between study populations may not reflect real exposure differences but may be partly or wholly due to methodological differences. Methods The aim of this paper is to develop a study protocol for future personal RF-EMF exposure studies based on experience drawn from previous research. Using the current knowledge base, we propose procedures for the measurement of personal exposure to RF-EMF, data collection, data management and analysis, and methods for the selection and instruction of study participants. Results We have identified two basic types of personal RF-EMF measurement studies: population surveys and microenvironmental measurements. In the case of a population survey, the unit of observation is the individual, and a randomly selected representative sample of the population is needed to obtain reliable results. For microenvironmental measurements, study participants are selected to represent typical behaviours in different microenvironments. These two study types require different methods and procedures. Conclusion Applying our proposed common core procedures in future personal measurement studies will allow direct comparisons of personal RF-EMF exposures in different populations and study areas. PMID:20487532

  12. Meta‐analysis of test accuracy studies using imputation for partial reporting of multiple thresholds

    PubMed Central

    Deeks, J.J.; Martin, E.C.; Riley, R.D.

    2017-01-01

    Introduction For tests reporting continuous results, primary studies usually provide test performance at multiple but often different thresholds. This creates missing data when performing a meta-analysis at each threshold. A standard meta-analysis (no imputation [NI]) ignores such missing data. A single imputation (SI) approach was recently proposed to recover missing threshold results. Here, we propose a new method that performs multiple imputation of the missing threshold results using discrete combinations (MIDC). Methods The new MIDC method imputes missing threshold results by randomly selecting from the set of all possible discrete combinations which lie between the results for 2 known bounding thresholds. Imputed and observed results are then synthesised at each threshold. This is repeated multiple times, and the multiple pooled results at each threshold are combined using Rubin's rules to give final estimates. We compared the NI, SI, and MIDC approaches via simulation. Results Both imputation methods outperform the NI method in simulations. There was generally little difference between the SI and MIDC methods, but the latter was noticeably better in terms of estimating the between-study variances and generally gave better coverage, due to slightly larger standard errors of the pooled estimates. Given selective reporting of thresholds, the imputation methods also reduced bias in the summary receiver operating characteristic curve. Simulations demonstrate that the imputation methods rely on an assumption of equal threshold spacing. A real example is presented. Conclusions The SI and, in particular, MIDC methods can be used to examine the impact of missing threshold results in meta-analysis of test accuracy studies. PMID:29052347
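
    Rubin's rules, referenced above for combining multiply-imputed results, pool a set of per-imputation estimates and within-imputation variances as follows; the numbers are invented:

      # Rubin's rules: pooled estimate and total variance across m imputations.
      import numpy as np

      est = np.array([0.82, 0.78, 0.85, 0.80, 0.79])       # per-imputation estimates
      var = np.array([0.010, 0.011, 0.009, 0.010, 0.012])  # within-imputation variances

      m = len(est)
      pooled = est.mean()                 # pooled point estimate
      w = var.mean()                      # average within-imputation variance
      b = est.var(ddof=1)                 # between-imputation variance
      total_var = w + (1 + 1/m) * b       # Rubin's total variance
      print(f"pooled = {pooled:.3f}, SE = {np.sqrt(total_var):.3f}")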

  13. Evaluation of 99mTc-MIBI in thyroid gland imaging for the diagnosis of amiodarone-induced thyrotoxicosis

    PubMed Central

    Zhang, Ruiguo

    2017-01-01

    Objective: Amiodarone-induced thyrotoxicosis (AIT) is caused by amiodarone as a side effect of cardiovascular disease treatment. Based on the differences in their pathological and physiological mechanisms, many methods have been developed to differentiate AIT subtypes, such as colour flow Doppler sonography (CFDS) and 24-h radioiodine uptake (RAIU). However, these methods suffer from inadequate accuracy in distinguishing different types of AIT and sometimes lead to misdiagnosis and delayed treatment. Therefore, there is an unmet demand for an efficient method for accurate classification of AIT. Methods: Technetium-99m methoxyisobutylisonitrile (99mTc-MIBI) thyroid imaging was performed on 15 patients for AIT classification, and the results were compared with those of other conventional methods such as CFDS, RAIU and 99mTcO4 imaging. Results: High uptake and retention of MIBI in thyroid tissue is characteristic of Type I AIT, while in sharp contrast, low uptake of MIBI in the thyroid tissue was observed in Type II AIT. Mixed-type AIT shows uptake values between Types I and II. MIBI imaging outperformed the other methods, with a lower misdiagnosis rate. Conclusion: Among the methods evaluated, MIBI imaging enables an accurate identification of Type I, Type II and mixed-type AIT by showing distinct images for the different types. The results obtained from our subjects revealed that MIBI imaging is a reliable method for the diagnosis and classification of AIT and has potential in the treatment of thyroid diseases. Advances in knowledge: 99mTc-MIBI imaging is a useful method for the diagnosis of AIT. It can distinguish different types of AIT, especially mixed-type AIT, which is usually difficult to treat. 99mTc-MIBI has potential advantages over conventional methods in the efficient treatment of AIT. PMID:28106465

  14. Application of age estimation methods based on teeth eruption: how easy is Olze method to use?

    PubMed

    De Angelis, D; Gibelli, D; Merelli, V; Botto, M; Ventura, F; Cattaneo, C

    2014-09-01

    The development of new methods for age estimation has become an urgent issue because of increasing immigration and the need to accurately estimate the age of subjects who lack valid identity documents. Methods of age estimation are divided into skeletal and dental ones; among the latter, Olze's method is one of the most recent, introduced in 2010 with the aim of identifying the legal ages of 18 and 21 years by evaluating the different stages of development of the periodontal ligament of third molars with closed root apices. The present study aims at verifying the applicability of the method in daily forensic practice, with special focus on interobserver repeatability. Olze's method was applied by three different observers (two physicians and one dentist without specific training in Olze's method) to 61 orthopantomograms from subjects of mixed ethnicity aged between 16 and 51 years. The analysis took into consideration the lower third molars. The results provided by the different observers were then compared in order to verify the interobserver error. Results showed that the interobserver error varies between 43 and 57% for the right lower third molar (M48) and between 23 and 49% for the left lower third molar (M38). The chi-square test did not show significant differences according to the side of the teeth or the type of professional figure. The results prove that Olze's method is not easy to apply when used by inadequately trained personnel, because of an intrinsic interobserver error. Since it is nevertheless a crucial method in age determination, it should be used only by experienced observers after intensive and specific training.

  15. Development of a Double Glass Mounting Method Using Formaldehyde Alcohol Azocarmine Lactophenol (FAAL) and its Evaluation for Permanent Mounting of Small Nematodes

    PubMed Central

    ZAHABIUN, Farzaneh; SADJJADI, Seyed Mahmoud; ESFANDIARI, Farideh

    2015-01-01

    Background: Permanent slide preparation of nematodes, especially small ones, is time consuming and difficult, and the specimens develop scarious margins. To address this problem, a modified double glass mounting method was developed and compared with the classic method. Methods: A total of 209 nematode samples of human and animal origin were fixed and stained with Formaldehyde Alcohol Azocarmine Lactophenol (FAAL), followed by double glass mounting or the classic dehydration method, using Canada balsam as the mounting medium. The slides were evaluated at different dates and times over more than four years. Photographs were taken at different magnifications during the evaluation period. Results: The double glass mounting method was stable during this time and comparable with the classic method. There were no changes in the morphologic structures of nematodes mounted by the double glass method, which showed well-defined and clear differentiation between the different organs of the nematodes. Conclusion: This method is cost-effective and fast for mounting small nematodes compared to the classic method. PMID:26811729

  16. Using ontologies to model human navigation behavior in information networks: A study based on Wikipedia.

    PubMed

    Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis; Nyulas, Csongor; Tudorache, Tania; Noy, Natalya F; Musen, Mark A

    The need to examine the behavior of different user groups is a fundamental requirement when building information systems. In this paper, we present Ontology-based Decentralized Search (OBDS), a novel method to model the navigation behavior of users equipped with different types of background knowledge. Ontology-based Decentralized Search combines decentralized search, an established method for navigation in social networks, and ontologies to model navigation behavior in information networks. The method uses ontologies as an explicit representation of background knowledge to inform the navigation process and guide it towards navigation targets. By using different ontologies, users equipped with different types of background knowledge can be represented. We demonstrate our method using four biomedical ontologies and their associated Wikipedia articles. We compare our simulation results with baseline approaches and with results obtained from a user study. We find that our method produces click paths that have properties similar to those originating from human navigators. The results suggest that our method can be used to model human navigation behavior in systems that are based on information networks, such as Wikipedia. This paper makes the following contributions: (i) To the best of our knowledge, this is the first work to demonstrate the utility of ontologies in modeling human navigation and (ii) it yields new insights and understanding about the mechanisms of human navigation in information networks.

  17. Using ontologies to model human navigation behavior in information networks: A study based on Wikipedia

    PubMed Central

    Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis; Nyulas, Csongor; Tudorache, Tania; Noy, Natalya F.; Musen, Mark A.

    2015-01-01

    The need to examine the behavior of different user groups is a fundamental requirement when building information systems. In this paper, we present Ontology-based Decentralized Search (OBDS), a novel method to model the navigation behavior of users equipped with different types of background knowledge. Ontology-based Decentralized Search combines decentralized search, an established method for navigation in social networks, and ontologies to model navigation behavior in information networks. The method uses ontologies as an explicit representation of background knowledge to inform the navigation process and guide it towards navigation targets. By using different ontologies, users equipped with different types of background knowledge can be represented. We demonstrate our method using four biomedical ontologies and their associated Wikipedia articles. We compare our simulation results with baseline approaches and with results obtained from a user study. We find that our method produces click paths that have properties similar to those originating from human navigators. The results suggest that our method can be used to model human navigation behavior in systems that are based on information networks, such as Wikipedia. This paper makes the following contributions: (i) To the best of our knowledge, this is the first work to demonstrate the utility of ontologies in modeling human navigation and (ii) it yields new insights and understanding about the mechanisms of human navigation in information networks. PMID:26568745

  18. Fourth order difference methods for hyperbolic IBVP's

    NASA Technical Reports Server (NTRS)

    Gustafsson, Bertil; Olsson, Pelle

    1994-01-01

    Fourth order difference approximations of initial-boundary value problems for hyperbolic partial differential equations are considered. We use the method of lines approach with both explicit and compact implicit difference operators in space. The explicit operator satisfies an energy estimate leading to strict stability. For the implicit operator we develop boundary conditions and give a complete proof of strong stability using the Laplace transform technique. We also present numerical experiments for the linear advection equation and Burgers' equation with discontinuities in the solution or in its derivative. The first equation is used for modeling contact discontinuities in fluid dynamics, the second one for modeling shocks and rarefaction waves. The time discretization is done with a third order Runge-Kutta TVD method. For solutions with discontinuities in the solution itself we add a filter based on second order viscosity. In the case of the non-linear Burgers' equation we use a flux splitting technique that results in an energy estimate for certain difference approximations, in which case an entropy condition is also fulfilled. In particular we shall demonstrate that the unsplit conservative form produces a non-physical shock instead of the physically correct rarefaction wave. In the numerical experiments we compare our fourth order methods with a standard second order one and with a third order TVD method. The results show that the fourth order methods are the only ones that give good results for all the considered test problems.
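
    As a concrete illustration of the ingredients named in this abstract, the sketch below combines a fourth order central difference in space with a third order TVD (Shu-Osher) Runge-Kutta step for the linear advection equation u_t + a u_x = 0 on a periodic domain. It is a minimal sketch only: the boundary closures, the viscosity filter and the flux splitting discussed above are omitted, and all names and parameters are illustrative.

      import numpy as np

      def dudx4(u, dx):
          # 4th order central difference on a periodic grid:
          # (-u[i+2] + 8u[i+1] - 8u[i-1] + u[i-2]) / (12 dx)
          return (-np.roll(u, -2) + 8*np.roll(u, -1)
                  - 8*np.roll(u, 1) + np.roll(u, 2)) / (12.0*dx)

      def ssp_rk3_step(u, dt, dx, a=1.0):
          # Third order TVD (SSP) Runge-Kutta for u_t = -a u_x
          rhs = lambda v: -a*dudx4(v, dx)
          u1 = u + dt*rhs(u)
          u2 = 0.75*u + 0.25*(u1 + dt*rhs(u1))
          return u/3.0 + (2.0/3.0)*(u2 + dt*rhs(u2))

      # usage: advect a smooth profile one time step
      n = 200; dx = 1.0/n
      x = np.arange(n)*dx
      u = ssp_rk3_step(np.sin(2*np.pi*x), dt=0.4*dx, dx=dx)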

  19. Simultaneous quantitative analysis of main components in linderae reflexae radix with one single marker.

    PubMed

    Wang, Li-Li; Zhang, Yun-Bin; Sun, Xiao-Ya; Chen, Sui-Qing

    2016-05-08

    A quantitative analysis of multi-components by single marker (QAMS) method was established for quality evaluation, and its feasibility was validated by the simultaneous quantitative assay of four main components in Linderae Reflexae Radix. Four main components, pinostrobin, pinosylvin, pinocembrin, and 3,5-dihydroxy-2-(1-p-mentheneyl)-trans-stilbene, were selected as analytes to evaluate the quality by RP-HPLC coupled with a UV detector. The method was evaluated by comparing the quantitative results between the external standard method and QAMS on different HPLC systems. The results showed that no significant differences were found between the contents of the four components of Linderae Reflexae Radix determined by the external standard method and by QAMS (RSD <3%). The contents of the four analytes (pinosylvin, pinocembrin, pinostrobin, and Reflexanbene I) in Linderae Reflexae Radix were determined against the single marker pinosylvin. The fingerprints were determined on Shimadzu LC-20AT and Waters e2695 HPLC systems equipped with three different columns.

  20. The legal aspects of parental rights in assisted reproductive technology.

    PubMed

    Ciccarelli, John K; Ciccarelli, Janice C

    2005-03-01

    This paper provides an overview of the different legal approaches that are used in various jurisdictions to determine parental rights and obligations of the parties involved in third party assisted reproduction. Additionally, the paper explores the differing legal models that are used depending on the method of surrogacy being utilized. The data demonstrates that a given method of surrogacy may well result in different procedures and outcomes regarding parental rights in different jurisdictions. This suggests the need for a uniform method to resolve parental rights where assisted reproductive technology is involved.

  1. Comparing the NIOSH Method 5040 to a Diesel Particulate Matter Meter for Elemental Carbon

    NASA Astrophysics Data System (ADS)

    Ayers, David Matthew

    Introduction: The sampling of elemental carbon has been associated with monitoring exposures in the trucking and mining industries. Recently, in the field of engineered nanomaterials, single-wall and multi-wall carbon nanotubes (MWCNTs) are being produced in ever increasing quantities. The only approved atmospheric sampling method for multi-wall carbon nanotubes is NIOSH Method 5040. Its results are accurate, but it can take up to 30 days for sample results to be received. Objectives: Compare the results of elemental carbon (EC) sampling from NIOSH Method 5040 to a Diesel Particulate Matter (DPM) meter. Methods: MWCNTs were transferred and weighed between several trays placed on a scale. The NIOSH Method 5040 and DPM sampling train was hung 6 inches above the receiving tray. The transferring and weighing of the MWCNTs created an aerosol containing elemental carbon. Twenty-one total samples were collected using both meter types. Results: The assumptions for a two-way ANOVA were violated; therefore, Mann-Whitney U tests and a Kruskal-Wallis test were performed. The hypotheses for both research questions were rejected. There was a significant difference in the EC concentrations obtained by NIOSH Method 5040 and the DPM meter. There were also significant differences in elemental carbon concentrations when sampled using a DPM meter versus a sampling pump at the three concentration levels (low, medium and high). Conclusions: The differences in the EC concentrations were statistically significant; therefore, the two methods (NIOSH Method 5040 and DPM) are not equivalent. NIOSH Method 5040 should remain the only authorized method of establishing an EC concentration for MWCNTs until a MWCNT-specific method or an instantaneous meter is developed.

  2. Comparison of Anaerobic Susceptibility Results Obtained by Different Methods

    PubMed Central

    Rosenblatt, J. E.; Murray, P. R.; Sonnenwirth, A. C.; Joyce, J. L.

    1979-01-01

    Susceptibility tests using 7 antimicrobial agents (carbenicillin, chloramphenicol, clindamycin, penicillin, cephalothin, metronidazole, and tetracycline) were run against 35 anaerobes including Bacteroides fragilis (17), other gram-negative bacilli (7), clostridia (5), peptococci (4), and eubacteria (2). Results in triplicate obtained by the microbroth dilution method and the aerobic modification of the broth disk method were compared with those obtained with an agar dilution method using Wilkins-Chalgren agar. Media used in the microbroth dilution method included Wilkins-Chalgren broth, brain heart infusion broth, brucella broth, tryptic soy broth, thioglycolate broth, and Schaedler's broth. A result differing by more than one dilution from the Wilkins-Chalgren agar result was considered a discrepancy, and when there was a change in susceptibility status this was termed a significant discrepancy. The microbroth dilution method using Wilkins-Chalgren broth and thioglycolate broth produced the fewest total discrepancies (22 and 24, respectively), and Wilkins-Chalgren broth, thioglycolate, and Schaedler's broth had the fewest significant discrepancies (6, 5, and 5, respectively). With the broth disk method, there were 15 significant discrepancies, although half of these were with tetracycline, which was the antimicrobial agent associated with the highest number of significant discrepancies (33), considering all of the test methods and media. PMID:464560

  3. Comparability among four invertebrate sampling methods, Fountain Creek Basin, Colorado, 2010-2012

    USGS Publications Warehouse

    Zuellig, Robert E.; Bruce, James F.; Stogner, Sr., Robert W.; Brown, Krystal D.

    2014-01-01

    The U.S. Geological Survey, in cooperation with Colorado Springs City Engineering and Colorado Springs Utilities, designed a study to determine if sampling method and sample timing resulted in comparable samples and assessments of biological condition. To accomplish this task, annual invertebrate samples were collected concurrently using four sampling methods at 15 U.S. Geological Survey streamflow gages in the Fountain Creek basin from 2010 to 2012. Collectively, the four methods are used by local (U.S. Geological Survey cooperative monitoring program) and State monitoring programs (Colorado Department of Public Health and Environment) in the Fountain Creek basin to produce two distinct sample types for each program, targeting single and multiple habitats. This study found distinguishable differences between single- and multi-habitat sample types using both community similarities and multi-metric index values, while methods from each program within a sample type were comparable. This indicates that the Colorado Department of Public Health and Environment methods were compatible with the cooperative monitoring program methods within multi- and single-habitat sample types. Comparisons between September and October samples found distinguishable differences based on community similarities for both sample types, whereas differences based on multi-metric index values were found only for single-habitat samples. At one site, differences between September and October index values from single-habitat samples resulted in opposing assessments of biological condition. Direct application of the results to inform the revision of the existing Fountain Creek basin U.S. Geological Survey cooperative monitoring program is discussed.

  4. A Comparison of the Sensitivity and Fecal Egg Counts of the McMaster Egg Counting and Kato-Katz Thick Smear Methods for Soil-Transmitted Helminths

    PubMed Central

    Levecke, Bruno; Behnke, Jerzy M.; Ajjampur, Sitara S. R.; Albonico, Marco; Ame, Shaali M.; Charlier, Johannes; Geiger, Stefan M.; Hoa, Nguyen T. V.; Kamwa Ngassam, Romuald I.; Kotze, Andrew C.; McCarthy, James S.; Montresor, Antonio; Periago, Maria V.; Roy, Sheela; Tchuem Tchuenté, Louis-Albert; Thach, D. T. C.; Vercruysse, Jozef

    2011-01-01

    Background The Kato-Katz thick smear (Kato-Katz) is the diagnostic method recommended for monitoring large-scale treatment programs implemented for the control of soil-transmitted helminths (STH) in public health, yet it is difficult to standardize. A promising alternative is the McMaster egg counting method (McMaster), commonly used in veterinary parasitology, but rarely so for the detection of STH in human stool. Methodology/Principal Findings The Kato-Katz and McMaster methods were compared for the detection of STH in 1,543 subjects resident in five countries across Africa, Asia and South America. The consistency of the performance of both methods in different trials, the validity of the fixed multiplication factor employed in the Kato-Katz method and the accuracy of these methods for estimating ‘true’ drug efficacies were assessed. The Kato-Katz method detected significantly more Ascaris lumbricoides infections (88.1% vs. 75.6%, p<0.001), whereas the difference in sensitivity between the two methods was non-significant for hookworm (78.3% vs. 72.4%) and Trichuris trichiura (82.6% vs. 80.3%). The sensitivity of the methods varied significantly across trials and magnitude of fecal egg counts (FEC). Quantitative comparison revealed a significant correlation (Rs >0.32) in FEC between both methods, and indicated no significant difference in FEC, except for A. lumbricoides, where the Kato-Katz resulted in significantly higher FEC (14,197 eggs per gram of stool (EPG) vs. 5,982 EPG). For the Kato-Katz, the fixed multiplication factor resulted in significantly higher FEC than the multiplication factor adjusted for mass of feces examined for A. lumbricoides (16,538 EPG vs. 15,396 EPG) and T. trichiura (1,490 EPG vs. 1,363 EPG), but not for hookworm. The McMaster provided more accurate efficacy results (absolute difference to ‘true’ drug efficacy: 1.7% vs. 4.5%). Conclusions/Significance The McMaster is an alternative method for monitoring large-scale treatment programs. It is a robust (accurate multiplication factor) and accurate (reliable efficacy results) method, which can be easily standardized. PMID:21695104
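
    The multiplication factors discussed in this abstract reduce to simple arithmetic: an egg count from a known mass of stool is scaled to eggs per gram (EPG). The sketch below shows that arithmetic; the 41.7 mg Kato-Katz template (fixed factor 24) and the McMaster factor of 50 are commonly quoted values used here for illustration, not numbers taken from the study.

      def kato_katz_epg(eggs_counted, smear_mass_mg=41.7):
          """EPG from a Kato-Katz smear. The default 41.7 mg template
          gives the fixed factor of about 24; passing the actual mass
          examined yields the adjusted factor discussed above."""
          return eggs_counted * (1000.0 / smear_mass_mg)

      def mcmaster_epg(eggs_counted, factor=50):
          """EPG from a McMaster chamber count; the factor depends on
          dilution and chamber volume (50 is a common choice)."""
          return eggs_counted * factor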

  5. DYNAMIC PLANE-STRAIN SHEAR RUPTURE WITH A SLIP-WEAKENING FRICTION LAW CALCULATED BY A BOUNDARY INTEGRAL METHOD.

    USGS Publications Warehouse

    Andrews, D.J.

    1985-01-01

    A numerical boundary integral method, relating slip and traction on a plane in an elastic medium by convolution with a discretized Green function, can be linked to a slip-dependent friction law on the fault plane. Such a method is developed here in two-dimensional plane-strain geometry. Spontaneous plane-strain shear ruptures can make a transition from sub-Rayleigh to near-P propagation velocity. Results from the boundary integral method agree with earlier results from a finite difference method on the location of this transition in parameter space. The methods differ in their prediction of rupture velocity following the transition. The trailing edge of the cohesive zone propagates at the P-wave velocity after the transition in the boundary integral calculations.

  6. A finite-volume Eulerian-Lagrangian Localized Adjoint Method for solution of the advection-dispersion equation

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1993-01-01

    A new mass-conservative method for solution of the one-dimensional advection-dispersion equation is derived and discussed. Test results demonstrate that the finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) outperforms standard finite-difference methods, in terms of accuracy and efficiency, for solute transport problems that are dominated by advection. For dispersion-dominated problems, the performance of the method is similar to that of standard methods. Like previous ELLAM formulations, FVELLAM systematically conserves mass globally with all types of boundary conditions. FVELLAM differs from other ELLAM approaches in that integrated finite differences, instead of finite elements, are used to approximate the governing equation. This approach, in conjunction with a forward tracking scheme, greatly facilitates mass conservation. The mass storage integral is numerically evaluated at the current time level, and quadrature points are then tracked forward in time to the next level. Forward tracking permits straightforward treatment of inflow boundaries, thus avoiding the inherent problem in backtracking, as used by most characteristic methods, of characteristic lines intersecting inflow boundaries. FVELLAM extends previous ELLAM results by obtaining mass conservation locally on Lagrangian space-time elements. Details of the integration, tracking, and boundary algorithms are presented. Test results are given for problems in Cartesian and radial coordinates.
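
    FVELLAM itself tracks quadrature points along characteristics, which is not reproduced here; for orientation, the sketch below shows the standard finite-difference treatment of the one-dimensional advection-dispersion equation c_t + v c_x = D c_xx against which such methods are compared (first order upwind advection plus central-difference dispersion, periodic domain for brevity; all names are illustrative).

      import numpy as np

      def ade_step(c, v, D, dx, dt):
          """One explicit step of c_t + v*c_x = D*c_xx.
          Stable roughly when v*dt/dx <= 1 and 2*D*dt/dx**2 <= 1."""
          adv = -v*(c - np.roll(c, 1))/dx                        # upwind, v > 0
          disp = D*(np.roll(c, -1) - 2*c + np.roll(c, 1))/dx**2  # central
          return c + dt*(adv + disp)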

  7. Different equation-of-motion coupled cluster methods with different reference functions: The formyl radical

    NASA Astrophysics Data System (ADS)

    Kuś, Tomasz; Bartlett, Rodney J.

    2008-09-01

    The doublet and quartet excited states of the formyl radical have been studied by the equation-of-motion (EOM) coupled cluster (CC) method. The Sz spin-conserving singles and doubles (EOM-EE-CCSD) and singles, doubles, and triples (EOM-EE-CCSDT) approaches, as well as the spin-flipped singles and doubles (EOM-SF-CCSD) method, have been applied, subject to unrestricted Hartree-Fock (HF), restricted open-shell HF, and quasirestricted HF references. The structural parameters, vertical and adiabatic excitation energies, and harmonic vibrational frequencies have been calculated. The issue of the reference function choice for the spin-flipped (SF) method, and its impact on the results, is discussed using the available experimental data and theoretical results. The results show that if the reference function is chosen so that the target states differ from the reference by only single excitations, then the EOM-EE-CCSD and EOM-SF-CCSD methods give a very good description of the excited states. For states with a non-negligible contribution from doubly excited configurations, a reference function can still be chosen for the SF method such that, in most cases, the performance of the EOM-SF-CCSD method is better than that of the EOM-EE-CCSD approach.

  8. Imperial County baseline health survey potential impact of geothermal energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deane, M.

    The survey purpose, methods, and statistical methods are presented. Results are discussed according to: area differences in background variables, area differences in health variables, area differences in annoyance reactions, and comparison of symptom frequencies with age, smoking, and drinking. Included in appendices are tables of data, enumeration forms, the questionnaire, interviewer cards, and interviewer instructions. (MHR)

  9. Arrival-time picking method based on approximate negentropy for microseismic data

    NASA Astrophysics Data System (ADS)

    Li, Yue; Ni, Zhuo; Tian, Yanan

    2018-05-01

    Accurate and dependable picking of the first arrival time for microseismic data is an important part of microseismic monitoring and directly affects the analysis results of post-processing. This paper presents a new method based on approximate negentropy (AN) theory for microseismic arrival time picking under conditions of very low signal-to-noise ratio (SNR). According to the differences in information characteristics between microseismic data and random noise, an appropriate approximation of the negentropy function is selected to minimize the effect of SNR. At the same time, a weighted function of the differences between the maximum and minimum values of the AN spectrum curve is designed to obtain a proper threshold function. In this way, the regions of signal and noise are distinguished and the first arrival time is picked accurately. To demonstrate the effectiveness of the AN method, we ran many experiments on a series of synthetic data with SNR from -1 dB to -12 dB and compared it with the previously published Akaike information criterion (AIC) and short/long time average ratio (STA/LTA) methods. Experimental results indicate that all three methods pick well when the SNR is between -1 dB and -8 dB. However, when the SNR is as low as -8 dB to -12 dB, the proposed AN method yields more accurate and stable picks than the AIC and STA/LTA methods. Furthermore, application to real three-component microseismic data also shows that the new method is superior to the other two methods in accuracy and stability.
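
    Of the three pickers compared above, STA/LTA is the simplest to state: the ratio of a short-term average to a long-term average of signal energy is thresholded to flag the first arrival. The sketch below is a minimal version of that reference method (window lengths and the threshold are illustrative; the AN and AIC pickers are not reproduced):

      import numpy as np

      def sta_lta_pick(x, fs, sta_win=0.05, lta_win=0.5, threshold=3.0):
          """First sample index where the short-term/long-term average
          energy ratio exceeds the threshold, or None if it never does."""
          e = np.asarray(x, dtype=float)**2
          ns, nl = int(sta_win*fs), int(lta_win*fs)
          for k in range(nl, len(e)):
              lta = e[k-nl:k].mean()   # long-term (background) level
              sta = e[k-ns:k].mean()   # short-term (current) level
              if lta > 0.0 and sta/lta > threshold:
                  return k
          return None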

  10. Deformation of two-phase aggregates using standard numerical methods

    NASA Astrophysics Data System (ADS)

    Duretz, Thibault; Yamato, Philippe; Schmalholz, Stefan M.

    2013-04-01

    Geodynamic problems often involve large deformation of material encompassing material boundaries. In geophysical fluids, such boundaries often coincide with a discontinuity in the viscosity (or effective viscosity) field and consequently in the pressure field. Here, we employ popular implementations of the finite difference and finite element methods for solving viscous flow problems. On one hand, we implemented a finite difference method coupled with a Lagrangian marker-in-cell technique to represent the deforming fluid. Owing to its Eulerian nature, this method has limited geometric flexibility but is characterized by a light and stable discretization. On the other hand, we employ the Lagrangian finite element method, which offers full geometric flexibility at the cost of a relatively heavier discretization. In order to test the accuracy of the finite difference scheme, we ran large-strain simple shear deformation of aggregates containing either a weak or a strong circular inclusion (1e6 viscosity ratio). The results, obtained for different grid resolutions, are compared to Lagrangian finite element results, which are considered the reference solution. The comparison is then used to establish up to which strain finite difference simulations can be run, given the nature of the inclusions (dimensions, viscosity) and the resolution of the Eulerian mesh.

  11. The long-term strength of Europe and its implications for plate-forming processes.

    PubMed

    Pérez-Gussinyé, M; Watts, A B

    2005-07-21

    Field-based geological studies show that continental deformation preferentially occurs in young tectonic provinces rather than in old cratons. This partitioning of deformation suggests that the cratons are stronger than surrounding younger Phanerozoic provinces. However, although Archaean and Phanerozoic lithosphere differ in their thickness and composition, their relative strength is a matter of much debate. One proxy of strength is the effective elastic thickness of the lithosphere, Te. Unfortunately, spatial variations in Te are not well understood, as different methods yield different results. The differences are most apparent in cratons, where the 'Bouguer coherence' method yields large Te values (> 60 km) whereas the 'free-air admittance' method yields low values (< 25 km). Here we present estimates of the variability of Te in Europe using both methods. We show that when they are consistently formulated, both methods yield comparable Te values that correlate with geology, and that the strength of old lithosphere (> or = 1.5 Gyr old) is much larger (mean Te > 60 km) than that of younger lithosphere (mean Te < 30 km). We propose that this strength difference reflects changes in lithospheric plate structure (thickness, geothermal gradient and composition) that result from mantle temperature and volatile content decrease through Earth's history.

  12. MEASUREMENTS OF GAMMA-RAY DOSES OF DIFFERENT RADIOISOTOPES BY THE TEST-FILM METHOD (in German)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Domanus, J.; Halski, L.

    The test-film method seems to be most suitable for systematic, periodic measurements of individual doses of ionizing radiation. Persons handling radioisotopes are irradiated with gamma rays of different energies. The energy of gamma radiation lies within much broader limits than is the case with x rays. Therefore it was necessary to check whether the test-film method is suitable for measuring doses of gamma rays of such different energies and to choose the proper combination of film and screen to reach the necessary measuring range. Polish films, Foton Rentgen and Foton Rentgen Super, and films from the German Democratic Republic, Agfa Texo R and Agfa Texo S, were tested. Exposures were made without intensifying screens as well as with lead and fluorescent screens. The investigations showed that for dosimetric purposes the Foton Rentgen Super films are most suitable. However, no film-screen combination gave satisfactory results for radioisotopes with radiation of different energies. In such a case the test-film method gives only approximate results. If, on the contrary, the gamma energies do not differ greatly, the test-film method proves to be quite good. (auth)

  13. Comparison of Different Approach of Back Projection Method in Retrieving the Rupture Process of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Tan, F.; Wang, G.; Chen, C.; Ge, Z.

    2016-12-01

    Back-projection of teleseismic P waves [Ishii et al., 2005] has been widely used to image the rupture of earthquakes. Besides the conventional narrowband beamforming in the time domain, approaches in the frequency domain, such as MUSIC back projection (Meng, 2011) and compressive sensing (Yao et al., 2011), have been proposed to improve the resolution. Each method has its advantages and disadvantages and should be used appropriately in different cases. Therefore, thorough research comparing and testing these methods is needed. We wrote a GUI program that puts the three methods together so that users can conveniently apply different methods to the same data and compare the results. We then used all the methods to process several earthquake datasets, including the 2008 Wenchuan Mw 7.9 earthquake and the 2011 Tohoku-Oki Mw 9.0 earthquake, as well as theoretical seismograms of both simple sources and complex ruptures. Our results show differences in efficiency, accuracy and stability among the methods. Quantitative and qualitative analyses are applied to measure their dependence on data and parameters, such as station number, station distribution, grid size, and calculation window length. In general, back projection makes it possible to get a good result in a very short time using fewer than 20 high-quality traces with a proper station distribution, but the swimming artifact can be significant. Some measures, for instance combining global seismic data, can help ameliorate this artifact. MUSIC back projection needs relatively more data to obtain a better and more stable result, which also takes considerably more time, since its runtime grows much faster than that of back projection as the station number increases. Compressive sensing deals more effectively with multiple sources in the same time window but costs the most time, owing to repeated matrix solves. The resolution of all the methods is complicated and depends on many factors; an important one is the grid size, which in turn influences runtime significantly. More detailed results from this research may help users choose proper data, methods and parameters.
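
    In its conventional time-domain form, back projection shifts each station's trace by the predicted travel time from a trial grid point and stacks; the grid point maximizing the stack power in a window is taken as the radiator at that time. The sketch below shows only that kernel, assuming travel times are supplied by some external 1-D model (the MUSIC and compressive sensing variants are not shown, and all names are illustrative):

      import numpy as np

      def bp_stack_power(traces, fs, travel_times, t0, win=2.0):
          """Linear back-projection stack power for one trial source.
          traces: (nsta, nsamp) array; travel_times: (nsta,) predicted
          travel times in seconds; t0: trial origin time in seconds."""
          nsta, nsamp = traces.shape
          n = int(win*fs)
          stack = np.zeros(n)
          for i in range(nsta):
              k = int(round((t0 + travel_times[i])*fs))  # predicted arrival
              if 0 <= k and k + n <= nsamp:
                  stack += traces[i, k:k+n]
          return float(np.sum(stack**2))

      # scanning bp_stack_power over a grid of trial points and origin
      # times traces the rupture through the stack-power maxima.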

  14. Memory-optimized shift operator alternating direction implicit finite difference time domain method for plasma

    NASA Astrophysics Data System (ADS)

    Song, Wanjun; Zhang, Hou

    2017-11-01

    By introducing the alternating direction implicit (ADI) technique and a memory-optimized algorithm into the shift operator (SO) finite difference time domain (FDTD) method, a memory-optimized SO-ADI FDTD method for nonmagnetized collisional plasma is proposed, and the corresponding formulae for programming are deduced. To further improve computational efficiency, an iteration method rather than Gaussian elimination is employed to solve the equation set in the derivation of the formulae. Complicated transformations and convolutions are avoided in the proposed method compared with the Z-transform (ZT) ADI FDTD method and the piecewise linear JE recursive convolution (PLJERC) ADI FDTD method. The numerical dispersion of the SO-ADI FDTD method with different plasma frequencies and electron collision frequencies is analyzed and the appropriate ratio of grid size to the minimum wavelength is given. The accuracy of the proposed method is validated by a reflection coefficient test on a nonmagnetized collisional plasma sheet. The test results show that the proposed method is advantageous for improving computational efficiency and saving computer memory. The reflection coefficient of a perfect electric conductor (PEC) sheet covered by multilayer plasma and the RCS of objects coated with plasma are calculated with the proposed method and the simulation results are analyzed.

  15. Standardization of glycohemoglobin results and reference values in whole blood studied in 103 laboratories using 20 methods.

    PubMed

    Weykamp, C W; Penders, T J; Miedema, K; Muskiet, F A; van der Slik, W

    1995-01-01

    We investigated the effect of calibration with lyophilized calibrators on whole-blood glycohemoglobin (glyHb) results. One hundred three laboratories, using 20 different methods, determined glyHb in two lyophilized calibrators and two whole-blood samples. For whole-blood samples with low (5%) and high (9%) glyHb percentages, respectively, calibration decreased overall interlaboratory variation (CV) from 16% to 9% and from 11% to 6% and decreased intermethod variation from 14% to 6% and from 12% to 5%. Forty-seven laboratories, using 14 different methods, determined mean glyHb percentages in self-selected groups of 10 nondiabetic volunteers each. With calibration their overall mean (2SD) was 5.0% (0.5%), very close to the 5.0% (0.3%) derived from the reference method used in the Diabetes Control and Complications Trial. In both experiments the Abbott IMx and Vision showed deviating results. We conclude that, irrespective of the analytical method used, calibration enables standardization of glyHb results, reference values, and interpretation criteria.
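
    Calibration with two lyophilized calibrators amounts to a two-point linear rescaling of each laboratory's raw reading onto the assigned values. The sketch below shows that arithmetic; the example numbers are illustrative, not values from the study.

      def calibrate(raw, raw_lo, raw_hi, assigned_lo, assigned_hi):
          """Map a raw glyHb reading onto the calibrator-assigned scale
          by linear interpolation through the two calibrator points."""
          slope = (assigned_hi - assigned_lo)/(raw_hi - raw_lo)
          return assigned_lo + slope*(raw - raw_lo)

      # example: a laboratory reads the calibrators at 5.6% and 10.2%
      # while the assigned values are 5.0% and 9.0%
      print(calibrate(7.8, 5.6, 10.2, 5.0, 9.0))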

  16. Method for computing energy release rate using the elastic work factor approach

    NASA Astrophysics Data System (ADS)

    Rhee, K. Y.; Ernst, H. A.

    1992-01-01

    The elastic work factor eta(el) concept was applied to composite structures for the calculation of total energy release rate using a single specimen. Cracked lap shear specimens with four different unidirectional fiber orientations were used to examine the dependence of eta(el) on material properties. Also, three different thickness ratios (lap/strap) were used to determine how geometric conditions affect eta(el). The eta(el) values were calculated in two different ways: the compliance method and the crack closure method. The results show that the two methods produce comparable eta(el) values and that, while eta(el) is affected significantly by geometric conditions, it is reasonably independent of material properties for a given geometry. The results also showed that the elastic work factor can be used to calculate total energy release rate using a single specimen.

  17. Precise Hypocenter Determination around Palu Koro Fault: a Preliminary Results

    NASA Astrophysics Data System (ADS)

    Fawzy Ismullah, M. Muhammad; Nugraha, Andri Dian; Ramdhan, Mohamad; Wandono

    2017-04-01

    Sulawesi is located in a complex tectonic setting. High seismicity in central Sulawesi is related to the Palu Koro fault (PKF). In this study, we determined precise hypocenters around the PKF by applying the double-difference method. We investigate the seismicity rate, the geometry of the fault, and the distribution of focal depths around the PKF. We first re-picked P- and S-wave arrival times of the PKF events to determine initial hypocenter locations using the Hypoellipse method with an updated 1-D seismic velocity model. We then relocated the events using the double-difference method. Our preliminary results show that the relocated events are distributed around the PKF and have smaller residual times than the initial locations. We will enhance the hypocenter locations by updating the arrival times with the waveform cross-correlation method as input for double-difference relocation.

  18. An Artificial Neural Networks Method for Solving Partial Differential Equations

    NASA Astrophysics Data System (ADS)

    Alharbi, Abir

    2010-09-01

    While there already exist many analytical and numerical techniques for solving PDEs, this paper introduces an approach using artificial neural networks. The approach consists of a technique developed by combining the standard numerical method of finite differences with the Hopfield neural network. The method is denoted Hopfield-finite-difference (HFD). The architecture of the nets, energy function, updating equations, and algorithms are developed for the method. The HFD method has been used successfully to approximate the solution of classical PDEs, such as the wave, heat, Poisson and diffusion equations, and of a system of PDEs. The software Matlab is used to obtain the results in both tabular and graphical form. The results are similar in terms of accuracy to those obtained by standard numerical methods. In terms of speed, the parallel nature of Hopfield nets makes them easy to implement on fast parallel computers, while some numerical methods need extra effort for parallelization.

  19. Evaluating the High Risk Groups for Suicide: A Comparison of Logistic Regression, Support Vector Machine, Decision Tree and Artificial Neural Network

    PubMed Central

    AMINI, Payam; AHMADINIA, Hasan; POOROLAJAL, Jalal; MOQADDASI AMIRI, Mohammad

    2016-01-01

    Background: We aimed to assess the high-risk group for suicide using different classification methods, including logistic regression (LR), decision tree (DT), artificial neural network (ANN), and support vector machine (SVM). Methods: We used the dataset of a study conducted to predict risk factors of completed suicide in Hamadan Province, in the west of Iran, in 2010. To evaluate the high-risk groups for suicide, LR, SVM, DT and ANN were performed. The applied methods were compared using sensitivity, specificity, positive predictive value, negative predictive value, accuracy and the area under the curve. The Cochran Q test was applied to check differences in proportion among methods. To assess the association between the observed and predicted values, the phi coefficient, contingency coefficient, and Kendall tau-b were calculated. Results: Gender, age, and job were the most important risk factors for fatal suicide attempts in common across the four methods. The SVM method showed the highest accuracy, 0.68 and 0.67 for the training and testing samples, respectively. This method also gave the highest specificity (0.67 for the training and 0.68 for the testing sample) and the highest sensitivity for the training sample (0.85), but the lowest sensitivity for the testing sample (0.53). The Cochran Q test revealed differences between proportions among the methods (P<0.001). For the association between SVM predictions and observed values, the phi coefficient, contingency coefficient, and Kendall tau-b were 0.239, 0.232 and 0.239, respectively. Conclusion: SVM had the best performance in classifying fatal suicide attempts compared to DT, LR and ANN. PMID:27957463
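
    The comparison described above rests on standard confusion-matrix summaries. The sketch below shows how such a four-method comparison is typically scored, assuming scikit-learn and some labeled arrays X, y; it is illustrative only and does not reproduce the study's data or tuning.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.linear_model import LogisticRegression
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.svm import SVC
      from sklearn.neural_network import MLPClassifier
      from sklearn.metrics import confusion_matrix, roc_auc_score

      def evaluate(model, X, y):
          Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                                random_state=0)
          model.fit(Xtr, ytr)
          tn, fp, fn, tp = confusion_matrix(yte, model.predict(Xte)).ravel()
          sens, spec = tp/(tp + fn), tn/(tn + fp)
          auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
          return sens, spec, auc

      models = {"LR": LogisticRegression(max_iter=1000),
                "DT": DecisionTreeClassifier(),
                "SVM": SVC(probability=True),
                "ANN": MLPClassifier(max_iter=2000)}
      # for name, m in models.items(): print(name, evaluate(m, X, y))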

  20. Impact of enumeration method on diversity of Escherichia coli genotypes isolated from surface water.

    PubMed

    Martin, E C; Gentry, T J

    2016-11-01

    There are numerous regulatory-approved Escherichia coli enumeration methods, but it is not known whether differences in media composition and incubation conditions impact the diversity of E. coli populations detected by these methods. A study was conducted to determine if three standard water quality assessments, Colilert®, USEPA Method 1603 (modified mTEC) and USEPA Method 1604 (MI), detect different populations of E. coli. Samples were collected from six watersheds and analysed using the three enumeration approaches followed by E. coli isolation and genotyping. Results indicated that the three methods generally produced similar enumeration data across the sites, although there were some differences on a site-by-site basis. The Colilert® method consistently generated the least diverse collection of E. coli genotypes as compared to modified mTEC and MI, with those two methods being roughly equal to each other. Although the three media assessed in this study were designed to enumerate E. coli, the differences in media composition, incubation temperature, and growth platform appear to have a strong selective influence on the populations of E. coli isolated. This study suggests that standardized methods of enumeration and isolation may be warranted if researchers intend to obtain individual E. coli isolates for further characterization. This study characterized the impact of three USEPA-approved Escherichia coli enumeration methods on observed E. coli population diversity in surface water samples. Results indicated that these methods produced similar E. coli enumeration data but were more variable in the diversity of E. coli genotypes observed. Although the three methods enumerate the same species, differences in media composition, growth platform, and incubation temperature likely contribute to the selection of different cultivable populations of E. coli, and thus caution should be used when implementing these methods interchangeably for downstream applications which require cultivated isolates. © 2016 The Society for Applied Microbiology.

  1. Applications of an exponential finite difference technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Handschuh, R.F.; Keith, T.G. Jr.

    1988-07-01

    An exponential finite difference scheme, first presented by Bhattacharya for one-dimensional unsteady heat conduction problems in Cartesian coordinates, was extended. The finite difference algorithm developed was used to solve the unsteady diffusion equation in one-dimensional cylindrical coordinates and was applied to two- and three-dimensional conduction problems in Cartesian coordinates. Heat conduction involving variable thermal conductivity was also investigated. The method was used to solve nonlinear partial differential equations in one- and two-dimensional Cartesian coordinates. Predicted results are compared to exact solutions where available, or to results obtained by other numerical methods.
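
    The report itself is not reproduced here, but exponential schemes of the Bhattacharya type are usually written by replacing the linear FTCS increment for u_t = alpha*u_xx with an exponential factor, which reduces to ordinary FTCS when the exponent is small. The sketch below shows that form as an assumption for illustration; it requires nonzero nodal values (for example, absolute temperatures).

      import numpy as np

      def exponential_fd_step(u, alpha, dx, dt):
          """One exponential finite-difference step for u_t = alpha*u_xx
          on interior nodes; the end values are held fixed (Dirichlet).
          Assumes u != 0 everywhere, e.g. absolute temperatures."""
          r = alpha*dt/dx**2
          un = u.copy()
          lap = u[:-2] - 2.0*u[1:-1] + u[2:]   # discrete second difference
          un[1:-1] = u[1:-1]*np.exp(r*lap/u[1:-1])
          return un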

  2. A comparison of two central difference schemes for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Maksymiuk, C. M.; Swanson, R. C.; Pulliam, T. H.

    1990-01-01

    Five viscous transonic airfoil cases were computed by two significantly different computational fluid dynamics codes: an explicit finite-volume algorithm with multigrid, and an implicit finite-difference approximate-factorization method with eigenvector diagonalization. Both methods are described in detail, and their performance on the test cases is compared. The codes utilized the same grids, turbulence model, and computer to provide the truest test of the algorithms. The two approaches produce very similar results, which, for attached flows, also agree well with experimental results; however, the explicit code is considerably faster.

  3. An Improved Image Ringing Evaluation Method with Weighted Sum of Gray Extreme Value

    NASA Astrophysics Data System (ADS)

    Yang, Ling; Meng, Yanhua; Wang, Bo; Bai, Xu

    2018-03-01

    Blind image restoration algorithms usually produce ringing that is most obvious at edges. The ringing phenomenon is mainly affected by noise, by the type of restoration algorithm, and by errors in blur kernel estimation during restoration. Based on the physical mechanism of ringing, a method for evaluating the ringing in blindly restored images is proposed. The method extracts the overshoot and ripple regions of the ringing image and computes weighted statistics of the regional gradient values. With weights set through multiple experiments, edge information is used to characterize edge detail, determine the weights, and quantify the severity of the ringing effect, yielding an evaluation method for the ringing caused by blind restoration. The experimental results show that the method can effectively evaluate the ringing effect in restored images under different restoration algorithms and different restoration parameters. The evaluation results are consistent with visual evaluation results.

  4. A new, fast and semi-automated size determination method (SASDM) for studying multicellular tumor spheroids

    PubMed Central

    Monazzam, Azita; Razifar, Pasha; Lindhe, Örjan; Josephsson, Raymond; Långström, Bengt; Bergström, Mats

    2005-01-01

    Background Considering the breadth and importance of using Multicellular Tumor Spheroids (MTS) in oncology research, size determination of MTSs by an accurate and fast method is essential. In the present study an effective, fast and semi-automated method, SASDM, was developed to determine the size of MTSs. The method was applied and tested on MTSs of three different cell lines. Frozen-section autoradiography and hematoxylin-eosin (H&E) staining were used for further confirmation. Results SASDM was shown to be effective, user-friendly, and time-efficient, to be more precise than the traditional methods, and to be applicable to MTSs of different cell lines. Furthermore, the results of the image analysis showed high correspondence with the results of autoradiography and staining. Conclusion The combination of assessment of metabolic condition and image analysis in MTSs provides a good model to evaluate the effect of various anti-cancer treatments. PMID:16283948

  5. A diffuse-interface method for two-phase flows with soluble surfactants

    PubMed Central

    Teigen, Knut Erik; Song, Peng; Lowengrub, John; Voigt, Axel

    2010-01-01

    A method is presented to solve two-phase problems involving soluble surfactants. The incompressible Navier–Stokes equations are solved along with equations for the bulk and interfacial surfactant concentrations. A non-linear equation of state is used to relate the surface tension to the interfacial surfactant concentration. The method is based on the use of a diffuse interface, which allows a simple implementation using standard finite difference or finite element techniques. Here, finite difference methods on a block-structured adaptive grid are used, and the resulting equations are solved using a non-linear multigrid method. Results are presented for a drop in shear flow in both 2D and 3D, and the effect of solubility is discussed. PMID:21218125

  6. Cloud field classification based upon high spatial resolution textural features. II - Simplified vector approaches

    NASA Technical Reports Server (NTRS)

    Chen, D. W.; Sengupta, S. K.; Welch, R. M.

    1989-01-01

    This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
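
    The savings come from replacing the two-dimensional co-occurrence matrix with two one-dimensional histograms of pixel-pair sums and differences at a fixed displacement, from which texture features are read off directly. The sketch below is a minimal version of that idea; the particular features shown (mean from the sum histogram, contrast from the difference histogram) are standard choices, not necessarily those used in the paper.

      import numpy as np

      def sadh_features(img, dx=1, dy=0, levels=256):
          """Sum and difference histograms of pixel pairs at
          displacement (dx, dy), plus two derived texture features."""
          h, w = img.shape
          a = img[0:h-dy, 0:w-dx].astype(int)
          b = img[dy:h, dx:w].astype(int)
          s = (a + b).ravel()                    # sums in [0, 2*levels-2]
          d = (a - b).ravel() + (levels - 1)     # differences, shifted >= 0
          hs = np.bincount(s, minlength=2*levels - 1)/s.size
          hd = np.bincount(d, minlength=2*levels - 1)/d.size
          i = np.arange(2*levels - 1)
          mean = 0.5*np.sum(i*hs)                        # texture mean
          contrast = np.sum((i - (levels - 1))**2 * hd)  # texture contrast
          return hs, hd, mean, contrast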

  7. Unsupervised fuzzy segmentation of 3D magnetic resonance brain images

    NASA Astrophysics Data System (ADS)

    Velthuizen, Robert P.; Hall, Lawrence O.; Clarke, Laurence P.; Bensaid, Amine M.; Arrington, J. A.; Silbiger, Martin L.

    1993-07-01

    Unsupervised fuzzy methods are proposed for segmentation of 3D Magnetic Resonance images of the brain. Fuzzy c-means (FCM) has shown promising results for segmentation of single slices. FCM has been investigated for volume segmentations, both by combining results of single slices and by segmenting the full volume. Different strategies and initializations have been tried. In particular, two approaches have been used: (1) a method by which, iteratively, the furthest sample is split off to form a new cluster center, and (2) the traditional FCM in which the membership grade matrix is initialized in some way. Results have been compared with volume segmentations by k-means and with two supervised methods, k-nearest neighbors and region growing. Results of individual segmentations are presented as well as comparisons on the application of the different methods to a number of tumor patient data sets.
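
    The fuzzy c-means core is compact: alternate between recomputing cluster centers from membership-weighted means and updating memberships from distances to the centers. The sketch below assumes the common fuzzifier m = 2 and random initialization rather than the split-off initialization described above; it is a minimal illustration, not the authors' implementation.

      import numpy as np

      def fcm(X, c, m=2.0, iters=100, seed=0):
          """Fuzzy c-means on the rows of X; returns (centers, u),
          where u[i, j] is the membership of sample i in cluster j."""
          rng = np.random.default_rng(seed)
          u = rng.random((len(X), c))
          u /= u.sum(axis=1, keepdims=True)        # memberships sum to 1
          for _ in range(iters):
              w = u**m
              centers = (w.T @ X)/w.sum(axis=0)[:, None]
              d = np.linalg.norm(X[:, None, :] - centers[None, :, :],
                                 axis=2) + 1e-12
              p = 2.0/(m - 1.0)
              u = d**(-p)/np.sum(d**(-p), axis=1, keepdims=True)
          return centers, u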

  8. Effects of different preservation methods on inter simple sequence repeat (ISSR) and random amplified polymorphic DNA (RAPD) molecular markers in botanic samples.

    PubMed

    Wang, Xiaolong; Li, Lin; Zhao, Jiaxin; Li, Fangliang; Guo, Wei; Chen, Xia

    2017-04-01

    To evaluate the effects of different preservation methods (storage in a -20°C ice chest, preservation in liquid nitrogen, and drying in silica gel) on inter simple sequence repeat (ISSR) and random amplified polymorphic DNA (RAPD) analyses in various botanical specimens (including broad-leaved plants, needle-leaved plants and succulent plants) over different times (three weeks and three years), we used a statistical analysis based on the number of bands, genetic index and cluster analysis. The results demonstrate that all of these preservation methods can provide sufficient amounts of genomic DNA for ISSR and RAPD analyses; however, the effects of the different preservation methods on these analyses vary significantly, whereas the preservation time has little effect. Our results provide a reference for researchers to select the most suitable preservation method, depending on their study subject, for the analysis of molecular markers based on genomic DNA. Copyright © 2017 Académie des sciences. Published by Elsevier Masson SAS. All rights reserved.

  9. Chosen interval methods for solving linear interval systems with special type of matrix

    NASA Astrophysics Data System (ADS)

    Szyszka, Barbara

    2013-10-01

    The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix: a band matrix with a parameter, obtained from a finite difference problem. Such linear systems occur while solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) by the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; therefore the presented linear interval systems contain elements that represent the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they introduce no method error of their own. All calculations were performed in floating-point interval arithmetic.

  10. Particle Density Substitution Method for Trafficability of Soil in Different Gravity Environments

    NASA Astrophysics Data System (ADS)

    Huang, Chuan; Gao, Feng; Xie, Xiaolin; Jiang, Hui; Zeng, Wen

    2017-12-01

    By selecting metal powders with comparable particle size class, similar shape and material, and almost the same void ratio but different particle densities, the influence of different gravitational fields on the trafficability of soil is found to be equivalent to a change in particle density. This method is named particle density substitution. The shearing and bearing characteristics of the simulated soil were studied. The influence of different factors on the experimental results was analyzed, and factors other than particle density were found to have minimal influence. Regression models of the shearing and bearing characteristics of the simulated soil were designed. The relationship between particle density and the mechanical parameters of the soil was fitted with curves, and a formulation relating particle density to maximal static thrust was established. Analysis of these data shows that the maximal static thrust slowly decreased with increasing particle density, reached a minimum at a particle density of 3 g/cm3, and then sharply increased. This trend is consistent with the theoretical result and supports the reasonableness of the particle density substitution method established here.

  11. Mutual information based feature selection for medical image retrieval

    NASA Astrophysics Data System (ADS)

    Zhi, Lijia; Zhang, Shaomin; Li, Yan

    2018-04-01

    In this paper, the authors propose a mutual information based method for lung CT image retrieval. The method is designed to adapt to different datasets and different retrieval tasks. For practical applicability, the method avoids using a large amount of training data. Instead, with a well-designed training process and robust fundamental features and measurements, the method can achieve promising performance while keeping the training computation economical. Experimental results show that the method has potential practical value for clinical routine application.
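
    Mutual information based selection of the kind described reduces to scoring each candidate feature by its mutual information with the labels and keeping the top-ranked ones. The sketch below uses a simple histogram estimate; the binning and the ranking-only design are assumptions for illustration, not the authors' exact procedure.

      import numpy as np

      def mutual_information(x, y, bins=16):
          """Histogram estimate of MI (in nats) between a continuous
          feature x and discrete labels y."""
          joint, _, _ = np.histogram2d(x, y, bins=(bins, len(np.unique(y))))
          pxy = joint/joint.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float(np.sum(pxy[nz]*np.log(pxy[nz]/(px @ py)[nz])))

      def rank_features(X, y, k=10, bins=16):
          scores = [mutual_information(X[:, j], y, bins)
                    for j in range(X.shape[1])]
          return np.argsort(scores)[::-1][:k]   # indices of top-k features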

  12. Applications of cluster analysis to the creation of perfectionism profiles: a comparison of two clustering approaches.

    PubMed

    Bolin, Jocelyn H; Edwards, Julianne M; Finch, W Holmes; Cassady, Jerrell C

    2014-01-01

    Although traditional clustering methods (e.g., K-means) have been shown to be useful in the social sciences it is often difficult for such methods to handle situations where clusters in the population overlap or are ambiguous. Fuzzy clustering, a method already recognized in many disciplines, provides a more flexible alternative to these traditional clustering methods. Fuzzy clustering differs from other traditional clustering methods in that it allows for a case to belong to multiple clusters simultaneously. Unfortunately, fuzzy clustering techniques remain relatively unused in the social and behavioral sciences. The purpose of this paper is to introduce fuzzy clustering to these audiences who are currently relatively unfamiliar with the technique. In order to demonstrate the advantages associated with this method, cluster solutions of a common perfectionism measure were created using both fuzzy clustering and K-means clustering, and the results compared. Results of these analyses reveal that different cluster solutions are found by the two methods, and the similarity between the different clustering solutions depends on the amount of cluster overlap allowed for in fuzzy clustering.

  13. Applications of cluster analysis to the creation of perfectionism profiles: a comparison of two clustering approaches

    PubMed Central

    Bolin, Jocelyn H.; Edwards, Julianne M.; Finch, W. Holmes; Cassady, Jerrell C.

    2014-01-01

    Although traditional clustering methods (e.g., K-means) have been shown to be useful in the social sciences it is often difficult for such methods to handle situations where clusters in the population overlap or are ambiguous. Fuzzy clustering, a method already recognized in many disciplines, provides a more flexible alternative to these traditional clustering methods. Fuzzy clustering differs from other traditional clustering methods in that it allows for a case to belong to multiple clusters simultaneously. Unfortunately, fuzzy clustering techniques remain relatively unused in the social and behavioral sciences. The purpose of this paper is to introduce fuzzy clustering to these audiences who are currently relatively unfamiliar with the technique. In order to demonstrate the advantages associated with this method, cluster solutions of a common perfectionism measure were created using both fuzzy clustering and K-means clustering, and the results compared. Results of these analyses reveal that different cluster solutions are found by the two methods, and the similarity between the different clustering solutions depends on the amount of cluster overlap allowed for in fuzzy clustering. PMID:24795683

  14. Laboratory Evaluation of Acoustic Backscatter and LISST Methods for Measurements of Suspended Sediments

    PubMed Central

    Meral, Ramazan

    2008-01-01

    The limitation of traditional sampling methods in providing detailed spatial and temporal profiles of suspended sediment concentration has led to interest in alternative devices and methods based on the scattering of underwater sound and light. In the present work, acoustic backscatter and LISST (Laser In Situ Scattering Transmissometry) devices and methodologies are described. In addition, a laboratory study was conducted to compare them with the pumping method for different sediment radii at the same concentration. Glass spheres (ballotini) of three different radii (115, 137 and 163 μm) were used to obtain suspensions in a laboratory sediment tower. Quite good agreement was obtained between these methods and the pumping results, in the range of 60.6-94.2% for sediment concentration and 91.3-100% for radius measurements. These results and those of other studies show that these methods have potential as research tools for sediment studies. Further studies are needed to determine the ability of these methods to measure sediment under different water and sediment material conditions. PMID:27879747

  15. Divergence correction schemes in finite difference method for 3D tensor CSAMT in axial anisotropic media

    NASA Astrophysics Data System (ADS)

    Wang, Kunpeng; Tan, Handong; Zhang, Zhiyong; Li, Zhiqiang; Cao, Meng

    2017-05-01

    Resistivity anisotropy and full-tensor controlled-source audio-frequency magnetotellurics (CSAMT) have gradually become hot research topics. However, much of the current anisotropy research for tensor CSAMT focuses only on the one-dimensional (1D) solution. As the subsurface is rarely 1D, it is necessary to study three-dimensional (3D) model responses. The staggered-grid finite difference method is an effective simulation method for 3D electromagnetic forward modelling. Previous studies have suggested using a divergence correction to constrain the iterative process when using a staggered-grid finite difference model, so as to accelerate the 3D forward computation and enhance its accuracy. However, the traditional divergence correction method was developed assuming an isotropic medium. This paper improves the traditional isotropic divergence correction method and its derivation to meet the tensor CSAMT requirements for anisotropy, using the volume integral of the divergence equation. This approach is more intuitive, enabling a simple derivation of a discrete equation and then calculation of the coefficients of the anisotropic divergence correction equation. We validate our 3D computational results by comparing them to results computed using an anisotropic controlled-source 2.5D program. The 3D resistivity anisotropy model allows us to evaluate the consequences of using the divergence correction at different frequencies and for two orthogonal finite-length sources. Our results show that the divergence correction plays an important role in 3D tensor CSAMT resistivity anisotropy research and offers a solid foundation for inversion of CSAMT data collected over an anisotropic body.

  16. Does thorax EIT image analysis depend on the image reconstruction method?

    NASA Astrophysics Data System (ADS)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2013-04-01

    Different methods have been proposed to analyze the resulting images of electrical impedance tomography (EIT) measurements during ventilation. The aim of our study was to examine whether analysis methods based on back-projection deliver the same results when applied to images based on other reconstruction algorithms. Seven mechanically ventilated patients with ARDS were examined by EIT. The thorax contours were determined from routine CT images. EIT raw data were reconstructed offline with (1) filtered back-projection with a circular forward model (BPC); (2) the GREIT reconstruction method with a circular forward model (GREITC) and (3) GREIT with individual thorax geometry (GREITT). Three parameters were calculated on the resulting images: linearity, global ventilation distribution and regional ventilation distribution. The results of the linearity test were 5.03±2.45, 4.66±2.25 and 5.32±2.30 for BPC, GREITC and GREITT, respectively (median ± interquartile range). The differences among the three methods were not significant (p = 0.93, Kruskal-Wallis test). The proportions of ventilation in the right lung were 0.58±0.17, 0.59±0.20 and 0.59±0.25 for BPC, GREITC and GREITT, respectively (p = 0.98). The differences in the GI index based on the different reconstruction methods (0.53±0.16, 0.51±0.25 and 0.54±0.16 for BPC, GREITC and GREITT, respectively) were also not significant (p = 0.93). We conclude that the parameters developed for images generated with GREITT are comparable with those from filtered back-projection and GREITC.
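
    The Kruskal-Wallis comparison reported above can be reproduced directly with SciPy. A minimal sketch with made-up per-patient GI-index values (the real values come from the reconstructed EIT images):

        from scipy import stats

        # Hypothetical per-patient GI-index values for the three reconstructions.
        gi_bpc    = [0.53, 0.61, 0.49, 0.55, 0.47, 0.66, 0.52]
        gi_greitc = [0.51, 0.58, 0.48, 0.57, 0.45, 0.70, 0.50]
        gi_greitt = [0.54, 0.60, 0.50, 0.56, 0.48, 0.68, 0.53]

        # Kruskal-Wallis tests whether the samples share a common distribution;
        # a large p-value (as in the study) means no detectable difference.
        h, p = stats.kruskal(gi_bpc, gi_greitc, gi_greitt)
        print(f"H = {h:.2f}, p = {p:.2f}")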

  17. Determination Instructions Efficiency of Teaching Methods in Teaching Physics in the Case of Teaching Unit "Viscosity. Newtonian and Stokes Law"

    ERIC Educational Resources Information Center

    Radulovic, Branka; Stojanovic, Maja

    2015-01-01

    The use of different teaching methods results in different quality and quantity of students' knowledge. For this reason, it is important to constantly review the teaching methods applied and identify the most effective ones. One way of determining instructional efficiency is by using cognitive load and student achievement. Cognitive load can be generally…

  18. Integrated Data Collection Analysis (IDCA) Program - Statistical Analysis of RDX Standard Data Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandstrom, Mary M.; Brown, Geoffrey W.; Preston, Daniel N.

    2015-10-30

    The Integrated Data Collection Analysis (IDCA) program is conducting a Proficiency Test for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are statistical analyses of the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of the RDX Type II Class 5 standard. The material was tested as a well-characterized standard several times during the proficiency study to assess differences among participants and the range of results that may arise for well-behaved explosive materials. The analyses show that there are detectable differences among the results from IDCA participants. While these differences are statistically significant, most of them can be disregarded for comparison purposes to assess potential variability when laboratories attempt to measure identical samples using methods assumed to be nominally the same. The results presented in this report include the average sensitivity results for the IDCA participants and the ranges of values obtained. The ranges represent variation about the mean values of the tests of between 26% and 42%. The magnitude of this variation is attributed to differences in operator, method, and environment as well as the use of different instruments that are also of varying age. The results appear to be a good representation of the broader safety testing community based on the range of methods, instruments, and environments included in the IDCA Proficiency Test.

  19. An Improved Method for Demonstrating Visual Selection by Wild Birds.

    ERIC Educational Resources Information Center

    Allen, J. A.; And Others

    1990-01-01

    An activity simulating natural selection, in which wild birds are predators, green and brown pastry "baits" are prey, and trays of colored stones serve as backgrounds, is presented. Two different methods of measuring selection are used to describe the results. The materials and methods, results, and discussion are included. (KR)

  20. Using spectral methods to obtain particle size information from optical data: applications to measurements from CARES 2010

    NASA Astrophysics Data System (ADS)

    Atkinson, Dean B.; Pekour, Mikhail; Chand, Duli; Radney, James G.; Kolesar, Katheryn R.; Zhang, Qi; Setyan, Ari; O'Neill, Norman T.; Cappa, Christopher D.

    2018-04-01

    Multi-wavelength in situ aerosol extinction, absorption and scattering measurements made at two ground sites during the 2010 Carbonaceous Aerosols and Radiative Effects Study (CARES) are analyzed using a spectral deconvolution method that allows extraction of particle-size-related information, including the fraction of extinction produced by the fine-mode particles and the effective radius of the fine mode. The spectral deconvolution method is typically applied to analysis of remote sensing measurements. Here, its application to in situ measurements allows for comparison with more direct measurement methods and validation of the retrieval approach. Overall, the retrieved fine-mode fraction and effective radius compare well with other in situ measurements, including size distribution measurements and scattering and absorption measurements made separately for PM1 and PM10, although there were some periods during which the different methods yielded different results. One key contributor to differences between the results obtained is the alternative, spectrally based definitions of fine and coarse modes from the optical methods, relative to instruments that use a physically defined cut point. These results indicate that for campaigns where size, composition and multi-wavelength optical property measurements are made, comparison of the results can result in closure or can identify unusual circumstances. The comparison here also demonstrates that in situ multi-wavelength optical property measurements can be used to determine information about particle size distributions in situations where direct size distribution measurements are not available.
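
    The spectral deconvolution approach rests on the fact that fine and coarse modes imprint different slopes and curvatures on the extinction spectrum. A much-simplified sketch of that first step (a quadratic fit in log-log space; not the full retrieval, and the numbers are hypothetical):

        import numpy as np

        # Hypothetical multi-wavelength extinction coefficients (Mm^-1).
        wavelengths = np.array([405.0, 532.0, 781.0])   # nm
        extinction  = np.array([120.0, 85.0, 48.0])

        # Fit ln(extinction) as a quadratic in ln(wavelength): the slope gives
        # the Angstrom exponent and the curvature carries the fine/coarse-mode
        # information that the spectral deconvolution method exploits.
        a2, a1, a0 = np.polyfit(np.log(wavelengths), np.log(extinction), 2)
        alpha_500 = -(2.0 * a2 * np.log(500.0) + a1)    # local Angstrom exponent at 500 nm
        print(f"Angstrom exponent at 500 nm: {alpha_500:.2f}, curvature: {a2:.2f}")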

  4. Comparison of nine brands of membrane filter and the most-probable-number methods for total coliform enumeration in sewage-contaminated drinking water.

    PubMed Central

    Tobin, R S; Lomax, P; Kushner, D J

    1980-01-01

    Nine different brands of membrane filter were compared in the membrane filtration (MF) method, and those with the highest yields were compared against the most-probable-number (MPN) multiple-tube method for total coliform enumeration in simulated sewage-contaminated tap water. The water was chlorinated for 30 min to subject the organisms to stresses similar to those encountered during treatment and distribution of drinking water. Significant differences were observed among membranes in four of the six experiments, with two- to four-times-higher recoveries between the membranes at each extreme of recovery. When results from the membranes with the highest total coliform recovery rate were compared with the MPN results, the MF results were found significantly higher in one experiment and equivalent to the MPN results in the other five experiments. A comparison was made of the species enumerated by these methods; in general the two methods enumerated a similar spectrum of organisms, with some indication that the MF method was subject to greater interference by Aeromonas. PMID:7469407

  5. Selected problems with boron determination in water treatment processes. Part I: comparison of the reference methods for ICP-MS and ICP-OES determinations.

    PubMed

    Kmiecik, Ewa; Tomaszewska, Barbara; Wątor, Katarzyna; Bodzek, Michał

    2016-06-01

    The aim of the study was to compare two reference methods for the determination of boron in water samples and to assess the impact of the method of sample preparation on the results obtained. Samples were collected during different desalination processes: ultrafiltration and a double reverse osmosis system connected in series. From each point, samples were prepared in four different ways: filtered (through a 0.45 μm membrane filter) and acidified (using 1 mL ultrapure nitric acid per 100 mL of sample) (FA); unfiltered and not acidified (UFNA); filtered but not acidified (FNA); and unfiltered but acidified (UFA). All samples were analysed using two analytical methods: inductively coupled plasma mass spectrometry (ICP-MS) and inductively coupled plasma optical emission spectrometry (ICP-OES). The results obtained were compared and correlated, and the differences between them were studied. The results show that there are statistically significant differences between the concentrations obtained using the ICP-MS and ICP-OES techniques, regardless of the method of sample preparation (filtration and preservation). Both the ICP-MS and ICP-OES methods can nevertheless be used for determination of the boron concentration in water. The differences in the boron concentrations obtained using the two methods can be caused by high analyte concentrations in selected whole-water digestates and by matrix effects. Higher concentrations of iron (1-20 mg/L) than of chromium (0.02-1 mg/L) in the analysed samples can influence boron determination; when iron concentrations are high, the emission spectrum shows a double, overlapping peak.

  6. Unconditionally stable finite-difference time-domain methods for modeling the Sagnac effect

    NASA Astrophysics Data System (ADS)

    Novitski, Roman; Scheuer, Jacob; Steinberg, Ben Z.

    2013-02-01

    We present two unconditionally stable finite-difference time-domain (FDTD) methods for modeling the Sagnac effect in rotating optical microsensors. The methods are based on the implicit Crank-Nicolson scheme, adapted to hold in the rotating system reference frame—the rotating Crank-Nicolson (RCN) methods. The first method (RCN-2) is second order accurate in space whereas the second method (RCN-4) is fourth order accurate. Both methods are second order accurate in time. We show that the RCN-4 scheme is more accurate and has better dispersion isotropy. The numerical results show good correspondence with the expression for the classical Sagnac resonant frequency splitting when using group refractive indices of the resonant modes of a microresonator. Also we show that the numerical results are consistent with the perturbation theory for the rotating degenerate microcavities. We apply our method to simulate the effect of rotation on an entire Coupled Resonator Optical Waveguide (CROW) consisting of a set of coupled microresonators. Preliminary results validate the formation of a rotation-induced gap at the center of a transfer function of a CROW.
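
    The essence of a Crank-Nicolson scheme is that each time step solves a tridiagonal linear system instead of advancing explicitly, which is what removes the stability limit on the time step. A minimal 1D sketch for a scalar diffusion-type equation (not the rotating-frame Maxwell system of the paper; all names are illustrative):

        import numpy as np
        from scipy.linalg import solve_banded

        def crank_nicolson_step(u, r):
            """One Crank-Nicolson step for u_t = u_xx with fixed end values.

            r = dt / (2 * dx**2). The scheme is implicit: each step solves a
            tridiagonal system and remains stable for any time step size.
            """
            n = len(u)
            rhs = u.copy()
            rhs[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])   # (I + r*L) u
            ab = np.zeros((3, n))             # (I - r*L) in banded storage
            ab[1, :] = 1.0 + 2.0 * r          # main diagonal
            ab[0, 2:] = -r                    # superdiagonal (interior rows)
            ab[2, :-2] = -r                   # subdiagonal (interior rows)
            ab[1, 0] = ab[1, -1] = 1.0        # boundary rows stay identity
            return solve_banded((1, 1), ab, rhs)

        x = np.linspace(0.0, 1.0, 101)
        u = np.exp(-200.0 * (x - 0.5) ** 2)
        for _ in range(50):
            u = crank_nicolson_step(u, r=1.0)   # a step this large would blow up an explicit scheme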

  7. A demonstration of the antimicrobial effectiveness of various copper surfaces

    PubMed Central

    2013-01-01

    Background Bacterial contamination on touch surfaces results in an increased risk of infection. In the last few decades, work has been done on the antimicrobial properties of copper and its alloys against a range of micro-organisms threatening public health in food processing, healthcare and air conditioning applications; however, an optimal method of copper surface deposition and mass structure has not been identified. Results A proof-of-concept study of the disinfection effectiveness of three copper surfaces was performed. The surfaces were produced by the deposition of copper using three methods of thermal spray, namely plasma spray, wire arc spray and cold spray. The surfaces were then inoculated with meticillin-resistant Staphylococcus aureus (MRSA). After a two-hour exposure to the surfaces, the surviving MRSA were assayed and the results compared. The differences in the copper depositions produced by the three thermal spray methods were examined in order to explain the mechanism that causes the observed differences in MRSA killing efficiencies. The cold spray deposition method was significantly more effective than the other methods. It was determined that work hardening caused by the high-velocity particle impacts created by the cold spray technique results in a copper microstructure that enhances ionic diffusion, and copper ions are principally responsible for antimicrobial activity. Conclusions This test showed significant microbiologic differences between coatings produced by different spray techniques and demonstrates the importance of the copper application technique. The cold spray technique shows superior antimicrobial effectiveness caused by the high impact velocity imparted to the sprayed particles, which results in high dislocation density and high ionic diffusivity. PMID:23537176

  8. Validated univariate and multivariate spectrophotometric methods for the determination of pharmaceuticals mixture in complex wastewater

    NASA Astrophysics Data System (ADS)

    Riad, Safaa M.; Salem, Hesham; Elbalkiny, Heba T.; Khattab, Fatma I.

    2015-04-01

    Five accurate, precise, and sensitive univariate and multivariate spectrophotometric methods were developed for the simultaneous determination of a ternary mixture containing Trimethoprim (TMP), Sulphamethoxazole (SMZ) and Oxytetracycline (OTC) in wastewater samples collected from different sites (either production wastewater or livestock wastewater) after solid phase extraction using OASIS HLB cartridges. In the univariate methods OTC was determined at its λmax 355.7 nm (0D), while TMP and SMZ were determined by three different univariate methods. Method (A) is based on the successive spectrophotometric resolution technique (SSRT); the technique starts with the ratio subtraction method followed by the ratio difference method for the determination of TMP and SMZ. Method (B) is the successive derivative ratio technique (SDR). Method (C) is mean centering of the ratio spectra (MCR). The developed multivariate methods are principal component regression (PCR) and partial least squares (PLS). The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures containing different ratios of the three drugs. The obtained results were statistically compared with those obtained by the official methods, showing no significant difference with respect to accuracy and precision at p = 0.05.
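
    The multivariate branch of such a method (here PLS) can be sketched with scikit-learn; the spectra below are synthetic placeholders for the measured absorbance matrix and the known calibration concentrations:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        X_train = rng.random((30, 200))   # 30 calibration mixtures x 200 wavelengths
        Y_train = rng.random((30, 3))     # known TMP, SMZ, OTC concentrations

        # The number of latent variables would be chosen by cross-validation.
        pls = PLSRegression(n_components=5)
        pls.fit(X_train, Y_train)

        # Predict the three concentrations for a new (here synthetic) spectrum.
        y_pred = pls.predict(rng.random((1, 200)))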

  10. Noise Reduction Design of the Volute for a Centrifugal Compressor

    NASA Astrophysics Data System (ADS)

    Song, Zhen; Wen, Huabing; Hong, Liangxing; Jin, Yudong

    2017-08-01

    In order to effectively control the aerodynamic noise of a compressor, this paper considers a marine exhaust turbocharger compressor as the research object. According to different design concepts for the volute section, tongue and exit cone, six different volute models were established. The finite volume method is used to calculate the flow field, while the finite element method is used for the acoustic calculation. The different structural designs are compared and analyzed from three aspects: noise level, isentropic efficiency and static pressure recovery coefficient. The results showed that model 1 yielded the best result for the volute section design, model 3 the best result for the tongue design, and model 6 the best result for the exit cone design.

  11. Personality, Assessment Methods and Academic Performance

    ERIC Educational Resources Information Center

    Furnham, Adrian; Nuygards, Sarah; Chamorro-Premuzic, Tomas

    2013-01-01

    This study examines the relationship between personality and two different academic performance (AP) assessment methods, namely exams and coursework. It aimed to examine whether the relationship between traits and AP was consistent across self-reported versus documented exam results, two different assessment techniques and across different…

  12. Skeletal age estimation for forensic purposes: A comparison of GP, TW2 and TW3 methods on an Italian sample.

    PubMed

    Pinchi, Vilma; De Luca, Federica; Ricciardi, Federico; Focardi, Martina; Piredda, Valentina; Mazzeo, Elena; Norelli, Gian-Aristide

    2014-05-01

    Paediatricians, radiologists, anthropologists and medico-legal specialists are often called as experts to provide age estimation (AE) for forensic purposes. The literature recommends performing X-rays of the left hand and wrist (HW-XR) for skeletal age estimation. The method most frequently employed is the Greulich and Pyle (GP) method. In addition, so-called bone-specific techniques are also applied, including the Tanner-Whitehouse (TW) method in its latest versions, TW2 and TW3. The aim was to compare skeletal age and chronological age in a large sample of children and adolescents using the GP, TW2 and TW3 methods in order to establish which of these is the most reliable for forensic purposes. The sample consisted of 307 HW-XRs of Italian children and adolescents, 145 females and 162 males, aged between 6 and 20 years. The radiographs were scored according to the GP, TW2RUS and TW3RUS methods by one investigator. The reliability of the results was assessed using the intraclass correlation coefficient. The Wilcoxon signed-rank test and Student's t-test were performed to search for significant differences between skeletal and chronological ages. The distributions of the differences between estimated and chronological age, shown by means of boxplots, indicate that the median differences for the TW3 and GP methods are generally very close to 0. Hypothesis test results were obtained, with respect to sex, both for the entire group of individuals and for people grouped by age. Results show no significant differences between estimated and chronological age for TW3 and, to a lesser extent, GP. TW2 proved to be the worst of the three methods. Our results support the conclusion that the TW2 method is not reliable for AE for forensic purposes. The GP and TW3 methods proved to be reliable in males; for females, the best method was TW3. When performing forensic age estimation in subjects around 14 years of age, it may be advisable to combine the TW3 and GP methods. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  13. Testing high SPF sunscreens: a demonstration of the accuracy and reproducibility of the results of testing high SPF formulations by two methods and at different testing sites.

    PubMed

    Agin, Patricia Poh; Edmonds, Susan H

    2002-08-01

    The goals of this study were (i) to demonstrate that existing and widely used sun protection factor (SPF) test methodologies can produce accurate and reproducible results for high SPF formulations and (ii) to provide data on the number of test-subjects needed, the variability of the data, and the appropriate exposure increments needed for testing high SPF formulations. Three high SPF formulations were tested, according to the Food and Drug Administration's (FDA) 1993 tentative final monograph (TFM) 'very water resistant' test method and/or the 1978 proposed monograph 'waterproof' test method, within one laboratory. A fourth high SPF formulation was tested at four independent SPF testing laboratories, using the 1978 waterproof SPF test method. All laboratories utilized xenon arc solar simulators. The data illustrate that the testing conducted within one laboratory, following either the 1978 proposed or the 1993 TFM SPF test method, was able to reproducibly determine the SPFs of the formulations tested, using either the statistical analysis method in the proposed monograph or the statistical method described in the TFM. When one formulation was tested at four different laboratories, the anticipated variation in the data owing to the equipment and other operational differences was minimized through the use of the statistical method described in the 1993 monograph. The data illustrate that either the 1978 proposed monograph SPF test method or the 1993 TFM SPF test method can provide accurate and reproducible results for high SPF formulations. Further, these results can be achieved with panels of 20-25 subjects with an acceptable level of variability. Utilization of the statistical controls from the 1993 sunscreen monograph can help to minimize lab-to-lab variability for well-formulated products.

  14. Validation of a modification to Performance-Tested Method 070601: Reveal Listeria Test for detection of Listeria spp. in selected foods and selected environmental samples.

    PubMed

    Alles, Susan; Peng, Linda X; Mozola, Mark A

    2009-01-01

    A modification to Performance-Tested Method (PTM) 070601, Reveal Listeria Test (Reveal), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there was a statistically significant difference in performance between the Reveal and reference culture [U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA/BAM) or U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS)] methods for only a single food in one trial (pasteurized crab meat) at the 27 h enrichment time point, with more positive results obtained with the FDA/BAM reference method. No foods showed statistically significant differences in method performance at the 30 h time point. Independent laboratory testing of 3 foods again produced a statistically significant difference in results for crab meat at the 27 h time point; otherwise results of the Reveal and reference methods were statistically equivalent. Overall, considering both internal and independent laboratory trials, sensitivity of the Reveal method relative to the reference culture procedures in testing of foods was 85.9% at 27 h and 97.1% at 30 h. Results from 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the Reveal method was more productive than the reference USDA-FSIS culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the Reveal method at the 24 h time point. Overall, sensitivity of the Reveal method at 24 h relative to that of the USDA-FSIS method was 153%. The Reveal method exhibited extremely high specificity, with only a single false-positive result in all trials combined for overall specificity of 99.5%.

  15. Optimization Based Efficiencies in First Order Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Peck, Jeffrey A.; Mahadevan, Sankaran

    2003-01-01

    This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank-one updating technique. In problems that use commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
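
    The core of the approach is the rank-one secant update: after each step, the single new function value corrects the gradient estimate instead of n fresh finite-difference evaluations. A minimal sketch of a generic secant update for a scalar limit state function (not the authors' exact code):

        import numpy as np

        def broyden_gradient_update(grad, dx, dg):
            """Rank-one (secant) update of an approximate gradient of g(x).

            grad : current gradient estimate, shape (n,)
            dx   : step taken in design space, x_new - x_old
            dg   : observed change in g, g(x_new) - g(x_old)

            The update enforces the secant condition grad_new @ dx == dg, so a
            FORM iteration reuses one new function value instead of n
            finite-difference evaluations of an expensive black-box model.
            """
            denom = dx @ dx
            if denom == 0.0:
                return grad
            return grad + ((dg - grad @ dx) / denom) * dx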

  16. An Ensemble Framework Coping with Instability in the Gene Selection Process.

    PubMed

    Castellanos-Garzón, José A; Ramos, Juan; López-Sánchez, Daniel; de Paz, Juan F; Corchado, Juan M

    2018-03-01

    This paper proposes an ensemble framework for gene selection, aimed at addressing instability problems in the gene filtering task. The complex process of gene selection from gene expression data faces instability problems because different filter methods find different informative gene subsets, which makes the identification of significant genes by experts difficult. Instability of results can come from filter methods, gene classifier methods, different datasets of the same disease and multiple valid groups of biomarkers. Even though there is a wide range of proposals, the complexity imposed by this problem remains a challenge today. This work proposes a framework involving five stages of gene filtering to discover biomarkers for diagnosis and classification tasks. The framework performs a process of stable feature selection, facing the problems above and thus providing a more suitable and reliable solution for clinical and research purposes. Our proposal involves a process of multistage gene filtering, in which several ensemble strategies for gene selection are added in such a way that different classifiers simultaneously assess gene subsets to counter instability. First, we apply an ensemble of recent gene selection methods to obtain diversity in the genes found (stability with respect to filter methods). Next, we apply an ensemble of known classifiers to filter genes relevant to all classifiers at a time (stability with respect to classification methods). The achieved results were evaluated on two different datasets of the same disease (pancreatic ductal adenocarcinoma), in search of stability with respect to the disease, for which promising results were achieved.
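
    The first ensemble stage (combining several filter methods) can be sketched with scikit-learn's univariate filters; this rank-averaging version is only an illustration of the idea, not the authors' pipeline:

        import numpy as np
        from sklearn.feature_selection import chi2, f_classif, mutual_info_classif

        def ensemble_filter(X, y, top_k=100):
            """Rank genes with several filter methods and keep those that score
            well on average (a simple rank-aggregation ensemble)."""
            scores = [
                f_classif(X, y)[0],          # ANOVA F statistic
                mutual_info_classif(X, y),   # mutual information
                chi2(X, y)[0],               # chi-squared (needs non-negative X)
            ]
            # Convert each score vector to ranks (0 = best) and average them.
            ranks = [np.argsort(np.argsort(-s)) for s in scores]
            mean_rank = np.mean(ranks, axis=0)
            return np.argsort(mean_rank)[:top_k]   # indices of retained genes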

  17. An analysis of the optimal multiobjective inventory clustering decision with small quantity and great variety inventory by applying a DPSO.

    PubMed

    Wang, Shen-Tsu; Li, Meng-Hua

    2014-01-01

    When an enterprise has thousands of varieties in its inventory, a single management method is not a feasible approach. A better way to manage this problem is to categorise inventory items into several clusters according to inventory decisions and to use different management methods for different clusters. The present study applies DPSO (dynamic particle swarm optimisation) to the problem of clustering inventory items. Without requiring prior inventory knowledge, inventory items are automatically clustered into a near-optimal number of clusters. The obtained clustering results should satisfy the inventory objective equation, which consists of different objectives such as total cost, backorder rate, demand relevance, and inventory turnover rate. This study integrates the above four objectives into a multiobjective equation and inputs the actual inventory items of the enterprise into DPSO. In comparison with other clustering methods, the proposed method can consider different objectives simultaneously and obtains better convergence results and inventory decisions.
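
    The weighted-sum folding of the four objectives into a single fitness value, which the DPSO swarm then minimizes, can be sketched as follows (weights and field names are hypothetical; the objectives are assumed normalized to [0, 1]):

        def inventory_fitness(objectives, weights=(0.4, 0.2, 0.2, 0.2)):
            """Weighted-sum fitness of a candidate clustering of inventory items.

            Lower total cost and backorder rate are better, while higher demand
            relevance and inventory turnover are better, hence the signs.
            """
            w1, w2, w3, w4 = weights
            return (w1 * objectives["total_cost"]
                    + w2 * objectives["backorder_rate"]
                    - w3 * objectives["demand_relevance"]
                    - w4 * objectives["inventory_turnover"])

        candidate = {"total_cost": 0.62, "backorder_rate": 0.10,
                     "demand_relevance": 0.45, "inventory_turnover": 0.71}
        print(inventory_fitness(candidate))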

  18. An improved cellular automaton method to model multispecies biofilms.

    PubMed

    Tang, Youneng; Valocchi, Albert J

    2013-10-01

    Biomass-spreading rules used in previous cellular automaton methods to simulate multispecies biofilm introduced extensive mixing between different biomass species or resulted in spatially discontinuous biomass concentration and distribution; this caused results based on the cellular automaton methods to deviate from experimental results and those from the more computationally intensive continuous method. To overcome the problems, we propose new biomass-spreading rules in this work: Excess biomass spreads by pushing a line of grid cells that are on the shortest path from the source grid cell to the destination grid cell, and the fractions of different biomass species in the grid cells on the path change due to the spreading. To evaluate the new rules, three two-dimensional simulation examples are used to compare the biomass distribution computed using the continuous method and three cellular automaton methods, one based on the new rules and the other two based on rules presented in two previous studies. The relationship between the biomass species is syntrophic in one example and competitive in the other two examples. Simulation results generated using the cellular automaton method based on the new rules agree much better with the continuous method than do results using the other two cellular automaton methods. The new biomass-spreading rules are no more complex to implement than the existing rules. Copyright © 2013 Elsevier Ltd. All rights reserved.
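
    The new spreading rule can be sketched as a shift of cell contents along the path from the overfull source to an empty destination. A single-species simplification (the actual model tracks per-species fractions in each cell; names are illustrative):

        def push_biomass(grid, path, capacity):
            """Spread excess biomass by pushing the line of cells on `path`.

            grid     : 2D numpy array of biomass amounts
            path     : list of (row, col) indices; path[0] is the overfull
                       source, path[-1] an empty cell on the biofilm surface
            capacity : maximum biomass a grid cell may hold
            """
            # Shift every cell one position toward the destination, starting
            # from the far end so nothing is overwritten. Each cell's content
            # moves as a block, so species are displaced rather than blended.
            for i in range(len(path) - 1, 0, -1):
                grid[path[i]] = grid[path[i - 1]]
            # The source keeps a full cell; only its excess has moved on.
            excess = grid[path[0]] - capacity
            grid[path[0]] = capacity
            grid[path[1]] = excess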

  19. Results obtained by the application of two different methods for the calculation of optimal coplanar orbital maneuvers with time limit

    NASA Astrophysics Data System (ADS)

    Rocco, Emr; Prado, Afbap; Souza, Mlos

    In this work, the problem of bi-impulsive orbital transfers between coplanar elliptical orbits with minimum fuel consumption, but with a time limit for the transfer, is studied. As a first method, the equations presented by Lawden (1993) were used. Those equations furnish the optimal transfer orbit with fixed transfer time between two elliptical coplanar orbits, considering fixed terminal points. The method was adapted to cases with free terminal points, and the equations were solved to develop a software tool for orbital maneuvers. As a second method, the equations presented by Eckel and Vinh (1984) were used; those equations provide the transfer orbit between non-coplanar elliptical orbits with minimum fuel and fixed transfer time, or with minimum time for a prescribed fuel consumption, considering free terminal points. In this work only the fixed-time transfer problem was considered; the case of minimum time for a prescribed fuel consumption was already studied in Rocco et al. (2000). The method was then modified to consider cases of coplanar orbital transfer, and a second software tool for orbital maneuvers was developed. Therefore, two programs that solve the same problem using different methods were developed. The first method, presented by Lawden, uses primer vector theory. The second method, presented by Eckel and Vinh, uses the ordinary theory of maxima and minima. To test the methods we chose the same terminal orbits and the same time as input, and we verified that the two programs did not produce exactly the same results. In this work, which is an extension of Rocco et al. (2002), these differences in the results are explored with the objective of determining the reason for their occurrence and which modifications should be made to eliminate them.

  20. [Mahalanobis distance based hyperspectral characteristic discrimination of leaves of different desert tree species].

    PubMed

    Lin, Hai-jun; Zhang, Hui-fang; Gao, Ya-qi; Li, Xia; Yang, Fan; Zhou, Yan-fei

    2014-12-01

    The hyperspectral reflectance of Populus euphratica, Tamarix hispida, Haloxylon ammodendron and Calligonum mongolicum in the lower reaches of the Tarim River and in the Turpan Desert Botanical Garden was measured using the HR-768 field-portable spectroradiometer. Continuum removal, first derivative reflectance and second derivative reflectance were used to process the original spectral data of the four tree species. The Mahalanobis distance method was used to select the bands with significant differences in the original and transformed spectral data in order to identify the different tree species, and progressive discriminant analyses were used to test the selected bands. The results showed that the Mahalanobis distance method is an effective method for feature band extraction. The bands for identifying different tree species were mostly near-infrared bands. The recognition accuracy of the four methods was 85%, 93.8%, 92.4% and 95.5%, respectively. Spectrum transformation could improve the recognition accuracy, and the recognition accuracy differed between research objects and between spectrum transformation methods. The research provides evidence for desert tree species classification, biodiversity monitoring and area analysis in deserts using large-scale remote sensing methods.
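
    For band selection between two species, the Mahalanobis distance per band reduces to the mean reflectance difference scaled by the pooled standard deviation. A small sketch of this scoring (array names are hypothetical):

        import numpy as np

        def band_separability(spectra_a, spectra_b):
            """Per-band Mahalanobis-type distance between two species.

            spectra_a, spectra_b : arrays (n_samples, n_bands) of reflectance.
            """
            mu_a, mu_b = spectra_a.mean(0), spectra_b.mean(0)
            var_a = spectra_a.var(0, ddof=1)
            var_b = spectra_b.var(0, ddof=1)
            na, nb = len(spectra_a), len(spectra_b)
            pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
            return np.abs(mu_a - mu_b) / np.sqrt(pooled + 1e-12)

        # Bands with the largest distance separate the species best; in the
        # study these were mostly near-infrared bands:
        # best_bands = np.argsort(-band_separability(populus, tamarix))[:10]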

  1. A practical material decomposition method for x-ray dual spectral computed tomography.

    PubMed

    Hu, Jingjing; Zhao, Xing

    2016-03-17

    X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform the material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated in two groups: image-based and rawdata-based. The image-based method is an approximative method, and beam hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but this method requires geometrically consistent rawdata. However, today's clinical dual energy CT scanners usually measure different rays for different energy spectra and acquire geometrically inconsistent rawdata sets, and thus cannot meet the requirement. This paper proposes a practical material decomposition method to perform rawdata-based material decomposition in the case of inconsistent measurement. This method first yields the desired consistent rawdata sets from the measured inconsistent rawdata sets, and then employs rawdata-based technique to perform material decomposition and reconstruct material-selective images. The proposed method was evaluated by use of simulated FORBILD thorax phantom rawdata and dental CT rawdata, and simulation results indicate that this method can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.
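
    At its core, two-material decomposition solves a small linear system per ray or pixel once consistent low- and high-energy data are available. An image-domain sketch with hypothetical effective attenuation coefficients (the paper's method works on rawdata, but the decomposition step has the same shape):

        import numpy as np

        # Rows: low/high spectrum; columns: basis materials (e.g. water, bone).
        M = np.array([[0.28, 0.45],
                      [0.20, 0.30]])   # hypothetical effective mu values (1/cm)

        def decompose(mu_low, mu_high):
            """Solve M @ [c_water, c_bone] = [mu_low, mu_high]."""
            return np.linalg.solve(M, np.array([mu_low, mu_high]))

        c_water, c_bone = decompose(0.26, 0.21)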

  2. Comparison of epifluorescent viable bacterial count methods

    NASA Technical Reports Server (NTRS)

    Rodgers, E. B.; Huff, T. L.

    1992-01-01

    Two methods, the 2-(4-Iodophenyl) 3-(4-nitrophenyl) 5-phenyltetrazolium chloride (INT) method and the direct viable count (DVC), were tested and compared for their efficiency for the determination of the viability of bacterial populations. Use of the INT method results in the formation of a dark spot within each respiring cell. The DVC method results in elongation or swelling of growing cells that are rendered incapable of cell division. Although both methods are subjective and can result in false positive results, the DVC method is best suited to analysis of waters in which the number of different types of organisms present in the same sample is assumed to be small, such as processed waters. The advantages and disadvantages of each method are discussed.

  3. [Traceability of Wine Varieties Using Near Infrared Spectroscopy Combined with Cyclic Voltammetry].

    PubMed

    Li, Meng-hua; Li, Jing-ming; Li, Jun-hui; Zhang, Lu-da; Zhao, Long-lian

    2015-06-01

    To achieve traceability of wine varieties, a method is proposed that fuses near-infrared (NIR) spectra and cyclic voltammograms (CV), which contain different information, using D-S evidence theory. NIR spectra and CV curves of three different varieties of wine (cabernet sauvignon, merlot, cabernet gernischt) from seven different geographical origins were collected separately. Discriminant models were built using the PLS-DA method. On this basis, D-S evidence theory was applied to integrate the two kinds of discrimination results. After integration by D-S evidence theory, the accuracy rate for wine variety identification was 95.69% in cross-validation and 94.12% on the validation set. When considering only the wines from Yantai, the accuracy rate was 99.46% in cross-validation and 100% on the validation set. All the fused traceability models achieved better classification results than the individual methods. These results suggest that the proposed method, combining electrochemical information with spectral information via the D-S evidence combination formula, benefits the discrimination performance of the models and is a promising tool for discriminating different kinds of wine.
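
    When both classifiers emit probability-like masses over the three varieties, Dempster's combination rule has a particularly simple form. A minimal sketch with hypothetical classifier outputs:

        def combine_masses(m1, m2):
            """Dempster's rule for two mass functions whose focal elements are
            the singleton classes (here: the three wine varieties)."""
            classes = m1.keys() & m2.keys()
            agreement = {c: m1[c] * m2[c] for c in classes}
            k = 1.0 - sum(agreement.values())   # conflict between the sources
            if k >= 1.0:
                raise ValueError("totally conflicting evidence")
            return {c: v / (1.0 - k) for c, v in agreement.items()}

        # Hypothetical NIR-based and CV-based outputs for one sample:
        nir = {"cabernet sauvignon": 0.6, "merlot": 0.3, "cabernet gernischt": 0.1}
        cv  = {"cabernet sauvignon": 0.7, "merlot": 0.1, "cabernet gernischt": 0.2}
        print(combine_masses(nir, cv))   # fused beliefs favour cabernet sauvignon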

  4. SWECS tower dynamics analysis methods and results

    NASA Technical Reports Server (NTRS)

    Wright, A. D.; Sexton, J. H.; Butterfield, C. P.; Thresher, R. M.

    1981-01-01

    Several different tower dynamics analysis methods and computer codes were used to determine the natural frequencies and mode shapes of both guyed and freestanding wind turbine towers. These analysis methods are described and the results for two types of towers, a guyed tower and a freestanding tower, are shown. The advantages and disadvantages in the use of and the accuracy of each method are also described.

  5. Unified Least Squares Methods for the Evaluation of Diagnostic Tests With the Gold Standard

    PubMed Central

    Tang, Liansheng Larry; Yuan, Ao; Collins, John; Che, Xuan; Chan, Leighton

    2017-01-01

    The article proposes a unified least squares method to estimate the receiver operating characteristic (ROC) parameters for continuous and ordinal diagnostic tests, such as cancer biomarkers. The method is based on a linear model framework using the empirically estimated sensitivities and specificities as input "data." It gives consistent estimates for regression and accuracy parameters when the underlying continuous test results are normally distributed after some monotonic transformation. The key difference between the proposed method and the method of Tang and Zhou lies in the response variable. The response variable in the latter is the transformed empirical ROC curve at different thresholds. It takes on many values for continuous test results, but few values for ordinal test results. The limited number of values for the response variable makes it impractical for ordinal data. However, the response variable in the proposed method takes on many more distinct values, so that the method yields valid estimates for ordinal data. Extensive simulation studies are conducted to investigate and compare the finite sample performance of the proposed method with an existing method, and the method is then used to analyze two real cancer diagnostic examples as an illustration. PMID:28469385
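
    Under the binormal model the probit-transformed operating points lie on a straight line, so a least squares fit of the empirically estimated sensitivities and specificities recovers the ROC parameters. A simplified sketch of this idea (not the authors' full estimator; the operating points are hypothetical):

        import numpy as np
        from scipy.stats import norm

        def binormal_ls_fit(fpr, tpr):
            """Least squares fit of Phi^-1(TPR) = a + b * Phi^-1(FPR)."""
            x = norm.ppf(np.clip(fpr, 1e-6, 1 - 1e-6))
            y = norm.ppf(np.clip(tpr, 1e-6, 1 - 1e-6))
            b, a = np.polyfit(x, y, 1)
            auc = norm.cdf(a / np.sqrt(1.0 + b ** 2))   # binormal AUC
            return a, b, auc

        fpr = np.array([0.05, 0.10, 0.20, 0.40, 0.60])
        tpr = np.array([0.35, 0.50, 0.68, 0.85, 0.93])
        print(binormal_ls_fit(fpr, tpr))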

  6. Comparison of different wind data interpolation methods for a region with complex terrain in Central Asia

    NASA Astrophysics Data System (ADS)

    Reinhardt, Katja; Samimi, Cyrus

    2018-01-01

    While climatological data of high spatial resolution are largely available in most developed countries, the network of climatological stations in many other regions of the world still exhibits large gaps. Especially for those regions, interpolation methods are important tools to fill these gaps and to improve the data base indispensable for climatological research. Over the last years, new hybrid methods of machine learning and geostatistics have been developed which provide innovative prospects in spatial predictive modelling. This study focuses on evaluating the performance of 12 different interpolation methods for the wind components u and v in a mountainous region of Central Asia, with a special focus on applying new hybrid methods to the spatial interpolation of wind data. This study is the first to evaluate and compare the performance of several of these hybrid methods. The overall aim is to determine whether an optimal interpolation method exists which can equally be applied to all pressure levels, or whether different interpolation methods have to be used for different pressure levels. Deterministic (inverse distance weighting) and geostatistical interpolation methods (ordinary kriging) were explored, which take into account only the initial values of u and v. In addition, more complex methods (generalized additive model, support vector machine and neural networks, as single methods and as hybrid methods, as well as regression-kriging) that consider additional variables were applied. The analysis of the error indices revealed that regression-kriging provided the most accurate interpolation results for both wind components and all pressure heights. At 200 and 500 hPa, regression-kriging is followed by the different kinds of neural networks and support vector machines, and at 850 hPa it is followed by the different types of support vector machine and ordinary kriging. Overall, explanatory variables improve the interpolation results.
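
    The simplest of the compared methods, inverse distance weighting, makes the general setup concrete: each prediction is a weighted mean of station values. A minimal sketch with hypothetical stations (regression-kriging, the best performer, additionally models a trend from covariates and kriges the residuals):

        import numpy as np

        def idw(xy_obs, z_obs, xy_new, power=2.0):
            """Inverse distance weighting with weights 1 / d**power."""
            d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
            d = np.maximum(d, 1e-10)          # avoid division by zero
            w = 1.0 / d ** power
            return (w @ z_obs) / w.sum(axis=1)

        # Hypothetical stations (x, y in km) and observed u wind (m/s):
        xy_obs = np.array([[0.0, 0.0], [10.0, 2.0], [4.0, 8.0]])
        u_obs = np.array([3.2, 5.1, 4.4])
        print(idw(xy_obs, u_obs, np.array([[5.0, 5.0]])))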

  7. Optimization of capillary zone electrophoresis for charge heterogeneity testing of biopharmaceuticals using enhanced method development principles.

    PubMed

    Moritz, Bernd; Locatelli, Valentina; Niess, Michele; Bathke, Andrea; Kiessig, Steffen; Entler, Barbara; Finkler, Christof; Wegele, Harald; Stracke, Jan

    2017-12-01

    CZE is a well-established technique for charge heterogeneity testing of biopharmaceuticals. It is based on differences in the ratio of net charge to hydrodynamic radius. In an extensive intercompany study, it was recently shown that CZE is very robust and can be easily implemented in labs that did not perform it before. However, individual characteristics of some examined proteins resulted in suboptimal resolution. Therefore, enhanced method development principles were applied here to investigate possibilities for further method optimization. For this purpose, a large number of different method parameters was evaluated with the aim of improving CZE separation. For the relevant parameters, design of experiments (DoE) models were generated and optimized in several ways for different sets of responses such as resolution, peak width and number of peaks. Despite product-specific DoE optimization, it was found that the resulting combination of optimized parameters yielded significantly improved separation for 13 out of 16 different antibodies and other molecule formats. These results clearly demonstrate the generic applicability of the optimized CZE method. Adaptation to individual molecular properties may sometimes still be required in order to achieve optimal separation, but the key adjustable parameters discussed in this study [mainly pH, the identity of the polymer additive (HPC versus HPMC) and the concentrations of additives like acetonitrile, butanolamine and TETA] are expected to significantly reduce the effort for specific optimization. © 2017 The Authors. Electrophoresis published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. MPN estimation of qPCR target sequence recoveries from whole cell calibrator samples.

    PubMed

    Sivaganesan, Mano; Siefring, Shawn; Varma, Manju; Haugland, Richard A

    2011-12-01

    DNA extracts from enumerated target organism cells (calibrator samples) have been used for estimating Enterococcus cell equivalent densities in surface waters by a comparative cycle threshold (Ct) qPCR analysis method. To compare surface water Enterococcus density estimates from different studies by this approach, either a consistent source of calibrator cells must be used or the estimates must account for any differences in target sequence recoveries from different sources of calibrator cells. In this report we describe two methods for estimating target sequence recoveries from whole cell calibrator samples based on qPCR analyses of their serially diluted DNA extracts and most probable number (MPN) calculation. The first method employed a traditional MPN calculation approach. The second method employed a Bayesian hierarchical statistical modeling approach and a Monte Carlo Markov Chain (MCMC) simulation method to account for the uncertainty in these estimates associated with different individual samples of the cell preparations, different dilutions of the DNA extracts and different qPCR analytical runs. The two methods were applied to estimate mean target sequence recoveries per cell from two different lots of a commercially available source of enumerated Enterococcus cell preparations. The mean target sequence recovery estimates (and standard errors) per cell from Lot A and B cell preparations by the Bayesian method were 22.73 (3.4) and 11.76 (2.4), respectively, when the data were adjusted for potential false positive results. Means were similar for the traditional MPN approach which cannot comparably assess uncertainty in the estimates. Cell numbers and estimates of recoverable target sequences in calibrator samples prepared from the two cell sources were also used to estimate cell equivalent and target sequence quantities recovered from surface water samples in a comparative Ct method. Our results illustrate the utility of the Bayesian method in accounting for uncertainty, the high degree of precision attainable by the MPN approach and the need to account for the differences in target sequence recoveries from different calibrator sample cell sources when they are used in the comparative Ct method. Published by Elsevier B.V.
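
    The traditional MPN estimate is the maximum likelihood Poisson density for a dilution series: a tube receiving volume v is positive with probability 1 - exp(-lam*v), and the MPN is the lam solving the resulting score equation. A minimal sketch (counts are hypothetical; the Bayesian hierarchical version adds levels for samples, dilutions and analytical runs):

        import numpy as np
        from scipy.optimize import brentq

        def mpn(volumes, n_tubes, n_positive):
            """Most probable number per unit volume from a dilution series.

            Requires at least one negative tube, otherwise the maximum
            likelihood estimate is unbounded.
            """
            v = np.asarray(volumes, dtype=float)
            n = np.asarray(n_tubes, dtype=float)
            p = np.asarray(n_positive, dtype=float)

            def score(lam):   # derivative of the log-likelihood, rearranged
                return np.sum(p * v / (1.0 - np.exp(-lam * v))) - np.sum(n * v)

            return brentq(score, 1e-9, 1e9)

        # Hypothetical series: volumes 1.0, 0.1, 0.01 mL; 3 tubes per dilution.
        print(mpn([1.0, 0.1, 0.01], [3, 3, 3], [3, 1, 0]))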

  9. Characterization of Graphite Oxide and Reduced Graphene Oxide Obtained from Different Graphite Precursors and Oxidized by Different Methods Using Raman Spectroscopy.

    PubMed

    Muzyka, Roksana; Drewniak, Sabina; Pustelny, Tadeusz; Chrubasik, Maciej; Gryglewicz, Grażyna

    2018-06-21

    In this paper, the influences of the graphite precursor and the oxidation method on the resulting reduced graphene oxide (especially its composition and morphology) are shown. Three types of graphite were used to prepare samples for analysis, and each of the precursors was oxidized by two different methods (all samples were reduced by the same method of thermal reduction). Each obtained graphite oxide and reduced graphene oxide was analysed by X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS) and Raman spectroscopy (RS).

  10. Palmprint Recognition Across Different Devices.

    PubMed

    Jia, Wei; Hu, Rong-Xiang; Gui, Jie; Zhao, Yang; Ren, Xiao-Ming

    2012-01-01

    In this paper, the problem of Palmprint Recognition Across Different Devices (PRADD) is investigated, which has not been well studied so far. Since there is no publicly available PRADD image database, we created a non-contact PRADD image database containing 12,000 grayscale images captured from 100 subjects using three devices, i.e., one digital camera and two smartphones. Due to the non-contact image acquisition used, rotation and scale changes between different images captured from the same palm are inevitable. We propose a robust method to calculate the palm width, which can be effectively used for scale normalization of palmprints. On this PRADD image database, we evaluate the recognition performance of three different methods, i.e., a subspace learning method, a correlation method, and an orientation coding based method. Experimental results show that orientation coding based methods achieved promising recognition performance for PRADD.

  12. Comparing Methods for Estimating Direct Costs of Adverse Drug Events.

    PubMed

    Gyllensten, Hanna; Jönsson, Anna K; Hakkarainen, Katja M; Svensson, Staffan; Hägg, Staffan; Rehnberg, Clas

    2017-12-01

    To estimate how direct health care costs resulting from adverse drug events (ADEs) and cost distribution are affected by methodological decisions regarding identification of ADEs, assigning relevant resource use to ADEs, and estimating costs for the assigned resources. ADEs were identified from medical records and diagnostic codes for a random sample of 4970 Swedish adults during a 3-month study period in 2008 and were assessed for causality. Results were compared for five cost evaluation methods, including different methods for identifying ADEs, assigning resource use to ADEs, and for estimating costs for the assigned resources (resource use method, proportion of registered cost method, unit cost method, diagnostic code method, and main diagnosis method). Different levels of causality for ADEs and ADEs' contribution to health care resource use were considered. Using the five methods, the maximum estimated overall direct health care costs resulting from ADEs ranged from Sk10,000 (Sk = Swedish krona; ~€1,500 in 2016 values) using the diagnostic code method to more than Sk3,000,000 (~€414,000) using the unit cost method in our study population. The most conservative definitions for ADEs' contribution to health care resource use and the causality of ADEs resulted in average costs per patient ranging from Sk0 using the diagnostic code method to Sk4066 (~€500) using the unit cost method. The estimated costs resulting from ADEs varied considerably depending on the methodological choices. The results indicate that costs for ADEs need to be identified through medical record review and by using detailed unit cost data. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  13. Linear programming phase unwrapping for dual-wavelength digital holography.

    PubMed

    Wang, Zhaomin; Jiao, Jiannan; Qu, Weijuan; Yang, Fang; Li, Hongru; Tian, Ailing; Asundi, Anand

    2017-01-20

    A linear programming phase unwrapping method in dual-wavelength digital holography is proposed and verified experimentally. The proposed method uses the square of the height difference as a convergence criterion and theoretically gives the boundary condition for the search process. A simulation was performed by unwrapping step structures at different levels of Gaussian noise; the method recovered the discontinuities accurately, and it is robust and straightforward. In the experiment, a microelectromechanical systems sample and a cylindrical lens were measured separately. The test results were in good agreement with the true values. Moreover, the proposed method is applicable not only in digital holography but also in other dual-wavelength interferometric techniques.
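    The abstract does not spell out the linear programming formulation, but the standard dual-wavelength strategy it builds on is easy to illustrate: the phase difference of the two wavelengths behaves like a phase measured at a much longer synthetic wavelength, which resolves the 2π ambiguity, and the fine single-wavelength phase is then corrected to that coarse estimate. Below is a minimal numpy sketch of that hierarchical approach, with the reflection-geometry height convention h = λφ/(4π) stated as an assumption; it is not the paper's exact algorithm.

        import numpy as np

        def dual_wavelength_unwrap(phi1, phi2, lam1, lam2):
            """Hierarchical dual-wavelength unwrapping (standard approach).

            phi1, phi2 : wrapped phase maps (radians, in [-pi, pi)) at lam1, lam2.
            Returns a height map in the same units as lam1/lam2.
            """
            # Synthetic (beat) wavelength: much longer than either single
            # wavelength, so the difference phase is usually ambiguity-free.
            lam_s = lam1 * lam2 / abs(lam1 - lam2)

            # Difference phase, rewrapped into [-pi, pi)
            phi_s = np.angle(np.exp(1j * (phi1 - phi2)))

            # Coarse height from the synthetic wavelength (reflection setup)
            h_coarse = lam_s * phi_s / (4 * np.pi)

            # Refine: use the coarse height to pick the integer fringe order of
            # lam1, then recompute height from the less noisy fine phase.
            order = np.round((4 * np.pi * h_coarse / lam1 - phi1) / (2 * np.pi))
            return lam1 * (phi1 + 2 * np.pi * order) / (4 * np.pi)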

  14. The ADNI PET Core: 2015

    PubMed Central

    Jagust, William J.; Landau, Susan M.; Koeppe, Robert A.; Reiman, Eric M.; Chen, Kewei; Mathis, Chester A.; Price, Julie C.; Foster, Norman L.; Wang, Angela Y.

    2015-01-01

    INTRODUCTION: This paper reviews the work done in the ADNI PET core over the past 5 years, largely concerning techniques, methods, and results related to amyloid imaging in ADNI. METHODS: The PET Core has utilized [18F]florbetapir routinely on ADNI participants, with over 1600 scans available for download. Four different laboratories are involved in data analysis, and have examined factors such as longitudinal florbetapir analysis, use of FDG-PET in clinical trials, and relationships between different biomarkers and cognition. RESULTS: Converging evidence from the PET Core has indicated that cross-sectional and longitudinal florbetapir analyses require different reference regions. Studies have also examined the relationship between florbetapir data obtained immediately after injection, which reflects perfusion, and FDG-PET results. Finally, standardization has included the translation of florbetapir PET data to a centiloid scale. CONCLUSION: The PET Core has demonstrated a variety of methods for standardization of biomarkers such as florbetapir PET in a multicenter setting. PMID:26194311

  15. SU-F-T-687: Comparison of SPECT/CT-Based Methodologies for Estimating Lung Dose from Y-90 Radioembolization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kost, S; Yu, N; Lin, S

    2016-06-15

    Purpose: To compare mean lung dose (MLD) estimates from 99mTc macroaggregated albumin (MAA) SPECT/CT using two published methodologies for patients treated with 90Y radioembolization for liver cancer. Methods: MLD was estimated retrospectively using two methodologies for 40 patients from SPECT/CT images of 99mTc-MAA administered prior to radioembolization. In these two methods, lung shunt fractions (LSFs) were calculated as the ratio of scanned lung activity to the activity in the entire scan volume, or to the sum of activity in the lung and liver, respectively. Misregistration of liver activity into the lungs during SPECT acquisition was overcome by excluding lung counts within either 2 or 1.5 cm of the diaphragm apex, respectively. Patient lung density was assumed to be 0.3 g/cm³ or derived from CT densitovolumetry, respectively. Results from both approaches were compared to MLD determined by planar scintigraphy (PS). The effect of patient size on the difference between MLD from PS and SPECT/CT was also investigated. Results: Lung density from CT densitovolumetry is not different from the reference density (p = 0.68). The second method resulted in lung doses on average 1.5 times larger than the first method; however, the difference between the means of the two estimates was not significant (p = 0.07). Lung doses from both methods were statistically different from those estimated from 2D PS (p < 0.001). There was no correlation between patient size and the difference between MLD from PS and both SPECT/CT methods (r < 0.22, p > 0.17). Conclusion: There is no statistically significant difference between MLD estimated from the two techniques. Both methods are statistically different from conventional PS, with PS overestimating dose by a factor of three or larger. The difference between lung doses estimated from 2D planar or 3D SPECT/CT is not dependent on patient size.
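    For orientation, the two LSF definitions compared above differ only in the denominator, and the mean lung dose then follows from the shunt fraction, the planned 90Y activity, and the lung mass. A hedged sketch with hypothetical numbers; the ~49.7 Gy·kg/GBq constant is the commonly quoted MIRD-style value for 90Y, an assumption rather than a figure from this abstract.

        # Hedged sketch of the two lung-shunt-fraction (LSF) definitions
        # described above; counts and masses are hypothetical placeholders.
        lung_counts  = 1.2e5   # 99mTc-MAA counts in the lung VOI (diaphragm margin excluded)
        liver_counts = 2.3e6   # counts in the liver VOI
        total_counts = 2.6e6   # counts in the entire scan volume

        lsf_method1 = lung_counts / total_counts                  # lung / whole scan
        lsf_method2 = lung_counts / (lung_counts + liver_counts)  # lung / (lung + liver)

        activity_gbq = 1.5     # planned 90Y activity (hypothetical)
        lung_mass_kg = 1.0     # from CT densitovolumetry, or 0.3 g/cm3 * lung volume

        for name, lsf in [("method 1", lsf_method1), ("method 2", lsf_method2)]:
            mld_gy = 49.7 * activity_gbq * lsf / lung_mass_kg  # ~49.7 Gy*kg/GBq assumed
            print(f"{name}: LSF = {lsf:.3f}, MLD = {mld_gy:.2f} Gy")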

  16. Comparative evaluation of ultrasound scanner accuracy in distance measurement

    NASA Astrophysics Data System (ADS)

    Branca, F. P.; Sciuto, S. A.; Scorza, A.

    2012-10-01

    The aim of the present study is to develop and compare two different automatic methods for accuracy evaluation in ultrasound phantom measurements on B-mode images: both of them give as a result the relative error e between measured distances, performed by 14 brand new ultrasound medical scanners, and nominal distances, among nylon wires embedded in a reference test object. The first method is based on a least squares estimation, while the second one applies the mean value of the same distance evaluated at different locations in ultrasound image (same distance method). Results for both of them are proposed and explained.

  17. Teaching Business Simulation Games: Comparing Achievements Frontal Teaching vs. eLearning

    NASA Astrophysics Data System (ADS)

    Bregman, David; Keinan, Gila; Korman, Arik; Raanan, Yossi

    This paper addresses the issue of comparing results achieved by students taught the same course in two drastically different ways - a regular, frontal method and an eLearning method. The subject taught required intensive communications among the students, thus making the eLearning students, a priori, less likely to do well in it. The research, comparing the achievements of students in a business simulation game over three semesters, shows that the use of the eLearning method did not result in any differences in performance, grades or cooperation, thus strengthening the case for using eLearning in this type of course.

  18. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1992-01-01

    Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
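    As background for the multigrid concept referenced above, the sketch below shows a minimal V-cycle for a 1D Poisson problem with a weighted-Jacobi smoother. It is a generic textbook illustration of the technique, not the Proteus implementation.

        import numpy as np

        def jacobi(u, f, h, sweeps, w=2/3):
            """Weighted-Jacobi smoother for -u'' = f on a uniform 1D grid (Dirichlet BCs)."""
            for _ in range(sweeps):
                u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
            return u

        def v_cycle(u, f, h, pre=3, post=3):
            n = len(u) - 1
            u = jacobi(u, f, h, pre)
            if n > 2:
                # Residual r = f + u'' (for -u'' = f), restricted to the coarse grid
                r = np.zeros_like(u)
                r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
                rc = r[::2].copy()                 # injection restriction
                ec = v_cycle(np.zeros_like(rc), rc, 2 * h, pre, post)
                e = np.zeros_like(u)               # linear-interpolation prolongation
                e[::2] = ec
                e[1::2] = 0.5 * (ec[:-1] + ec[1:])
                u += e                             # coarse-grid correction
            return jacobi(u, f, h, post)

        # Example: solve -u'' = pi^2 sin(pi x), exact solution u = sin(pi x)
        n = 128
        x = np.linspace(0, 1, n + 1)
        f = np.pi**2 * np.sin(np.pi * x)
        u = np.zeros_like(x)
        for _ in range(10):
            u = v_cycle(u, f, 1 / n)
        print("max error:", np.abs(u - np.sin(np.pi * x)).max())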

  19. Tooth shape optimization of brushless permanent magnet motors for reducing torque ripples

    NASA Astrophysics Data System (ADS)

    Hsu, Liang-Yi; Tsai, Mi-Ching

    2004-11-01

    This paper presents a tooth shape optimization method based on a genetic algorithm to reduce the torque ripple of brushless permanent magnet motors under two different magnetization directions. The analysis of this design method mainly focuses on magnetic saturation and cogging torque, and the computation of the optimization process is based on an equivalent magnetic network circuit. Simulation results obtained from finite element analysis are used to confirm the accuracy and performance. Finite element analysis results for different tooth shapes are compared to show the effectiveness of the proposed method.

  20. Investigation of Attitudinal Differences among Individuals of Different Employment Status

    DTIC Science & Technology

    2010-10-28

    be included in order to statistically control for common method variance (see Podsakoff, MacKenzie, Lee, & Podsakoff, 2003). Results Hypotheses 1...social identity theory. Social Psychology Quarterly, 58, 255-269. Podsakoff, P. M., MacKenzie, S. B., Lee, J., & Podsakoff, N. P. (2003). Common method

  1. The Effects of Three Methods of Observation on Couples in Interactional Research.

    ERIC Educational Resources Information Center

    Carpenter, Linda J.; Merkel, William T.

    1988-01-01

    Assessed the effects of three different methods of observation of couples (one-way mirror, audio recording, and video recording) on 30 volunteer, nonclinical married couples. Results suggest that types of observation do not produce significantly different effects on nonclinical couples. (Author/ABL)

  2. A comparative study of different methods for calculating electronic transition rates

    NASA Astrophysics Data System (ADS)

    Kananenka, Alexei A.; Sun, Xiang; Schubert, Alexander; Dunietz, Barry D.; Geva, Eitan

    2018-03-01

    We present a comprehensive comparison of the following mixed quantum-classical methods for calculating electronic transition rates: (1) nonequilibrium Fermi's golden rule, (2) mixed quantum-classical Liouville method, (3) mean-field (Ehrenfest) mixed quantum-classical method, and (4) fewest switches surface-hopping method (in diabatic and adiabatic representations). The comparison is performed on the Garg-Onuchic-Ambegaokar benchmark charge-transfer model, over a broad range of temperatures and electronic coupling strengths, with different nonequilibrium initial states, in the normal and inverted regimes. Under weak to moderate electronic coupling, the nonequilibrium Fermi's golden rule rates are found to be in good agreement with the rates obtained via the mixed quantum-classical Liouville method that coincides with the fully quantum-mechanically exact results for the model system under study. Our results suggest that the nonequilibrium Fermi's golden rule can serve as an inexpensive yet accurate alternative to Ehrenfest and the fewest switches surface-hopping methods.

  3. Evaluating core technology capacity based on an improved catastrophe progression method: the case of automotive industry

    NASA Astrophysics Data System (ADS)

    Zhao, Shijia; Liu, Zongwei; Wang, Yue; Zhao, Fuquan

    2017-01-01

    Subjectivity usually causes large fluctuations in evaluation results. Many scholars attempt to establish new mathematical methods to make evaluation results consistent with actual objective situations. An improved catastrophe progression method (ICPM) is constructed to overcome the defects of the original method. The improved method combines the merits of principal component analysis' information coherence and the catastrophe progression method's freedom from index weighting, and has the advantage of highly objective comprehensive evaluation. Through systematic analysis of the factors influencing the automotive industry's core technology capacity, a comprehensive evaluation model with a hierarchical structure is established according to the different roles that different indices play in evaluating the overall goal. Moreover, ICPM is applied to evaluate the automotive industry's core technology capacity for seven typical countries, which demonstrates the effectiveness of the method.

  4. Unsupervised change detection of multispectral images based on spatial constraint chi-squared transform and Markov random field model

    NASA Astrophysics Data System (ADS)

    Shi, Aiye; Wang, Chao; Shen, Shaohong; Huang, Fengchen; Ma, Zhenli

    2016-10-01

    Chi-squared transform (CST), as a statistical method, can describe the degree of difference between vectors. CST-based methods operate directly on information stored in the difference image and are simple and effective methods for detecting changes in remotely sensed images that have been registered and aligned. However, the technique does not take spatial information into consideration, which leads to much noise in the change detection result. An improved unsupervised change detection method is proposed based on a spatial constraint CST (SCCST) in combination with a Markov random field (MRF) model. First, the mean and covariance matrix of the difference image of the bitemporal images are estimated by an iterative trimming method. In each iteration, spatial information is injected to reduce scattered changed points (also known as "salt and pepper" noise). To determine the key parameter of the SCCST method, the confidence level, a pseudotraining dataset is constructed to estimate the optimal value. Then, the result of SCCST, as an initial solution of change detection, is further improved by the MRF model. Experiments on simulated and real multitemporal and multispectral images indicate that the proposed method performs well on comprehensive indices compared with other methods.
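    The basic (non-spatial) chi-squared transform step that SCCST extends can be sketched compactly: each pixel's difference vector is reduced to a Mahalanobis distance which, under a Gaussian no-change model, follows a chi-squared law, so the confidence level fixes the change threshold. A sketch under those assumptions; the spatial constraint, iterative trimming, and MRF refinement are not shown.

        import numpy as np
        from scipy.stats import chi2

        def cst_change_map(img1, img2, confidence=0.99):
            """Chi-squared transform (CST) on the difference of two registered
            multispectral images of shape (rows, cols, bands)."""
            d = (img2.astype(float) - img1.astype(float)).reshape(-1, img1.shape[-1])
            mu = d.mean(axis=0)
            cov = np.cov(d, rowvar=False)
            # Mahalanobis distance of each difference vector; under a Gaussian
            # no-change model, y follows a chi-squared law with `bands` dof.
            dc = d - mu
            y = np.einsum('ij,jk,ik->i', dc, np.linalg.inv(cov), dc)
            thr = chi2.ppf(confidence, df=img1.shape[-1])
            return (y > thr).reshape(img1.shape[:2])   # True = changed pixel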

  5. Wave propagation in anisotropic elastic materials and curvilinear coordinates using a summation-by-parts finite difference method

    DOE PAGES

    Petersson, N. Anders; Sjogreen, Bjorn

    2015-07-20

    We develop a fourth order accurate finite difference method for solving the three-dimensional elastic wave equation in general heterogeneous anisotropic materials on curvilinear grids. The proposed method is an extension of the method for isotropic materials previously described by Sjögreen and Petersson (2012) [11]. The method discretizes the anisotropic elastic wave equation in second order formulation, using a node centered finite difference method that satisfies the principle of summation by parts. The summation by parts technique results in a provably stable numerical method that is energy conserving. Also, we generalize and evaluate the super-grid far-field technique for truncating unbounded domains. Unlike the commonly used perfectly matched layers (PML), the super-grid technique is stable for general anisotropic materials, because it is based on a coordinate stretching combined with an artificial dissipation. Moreover, the discretization satisfies an energy estimate, proving that the numerical approximation is stable. We demonstrate by numerical experiments that sufficiently wide super-grid layers result in very small artificial reflections. Applications of the proposed method are demonstrated by three-dimensional simulations of anisotropic wave propagation in crystals.
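    For readers unfamiliar with summation-by-parts operators, the classical second-order example below illustrates the principle the paper relies on: with a norm matrix H and an almost-skew-symmetric Q, the discrete identity u^T H (D v) + (D u)^T H v = u_n v_n - u_1 v_1 mimics integration by parts, which is what yields provable energy stability. This is the standard textbook operator, not the fourth-order operator of the paper.

        import numpy as np

        def sbp_first_derivative(n, h):
            """Second-order-accurate SBP first-derivative operator D = H^{-1} Q
            on n grid points with spacing h (a minimal classical example)."""
            H = h * np.eye(n)
            H[0, 0] = H[-1, -1] = h / 2           # boundary-weighted norm matrix
            Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
            Q[0, 0], Q[-1, -1] = -0.5, 0.5        # so Q + Q.T = diag(-1, 0, ..., 0, 1)
            return np.linalg.inv(H) @ Q, H

        n, h = 21, 0.05
        D, H = sbp_first_derivative(n, h)
        x = np.arange(n) * h

        # Discrete integration by parts: the two printed numbers agree to rounding.
        u, v = np.sin(x), np.cos(x)
        lhs = u @ H @ (D @ v) + (D @ u) @ H @ v
        print(lhs, u[-1] * v[-1] - u[0] * v[0])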

  6. Advantages and Disadvantages of Different Methods of Hospitals' Downsizing: A Narrative Systematic Review

    PubMed Central

    Mousazadeh, Yalda; Jannati, Ali; Jabbari Beiramy, Hossein; AsghariJafarabadi, Mohammad; Ebadi, Ali

    2013-01-01

    Background: Hospitals, as key actors in health systems, face growing pressures, especially cost cutting and the search for cost-effective ways to manage resources. Downsizing is one of these ways. This study was conducted to identify the advantages and disadvantages of different methods of hospital downsizing. Methods: The search was conducted in the Medlib, SID, PubMed, and Science Direct databases and the Google Scholar meta search engine using the keywords Downsizing, Hospital Downsizing, Hospital Rightsizing, Hospital Restructuring, Staff Downsizing, Hospital Merging, Hospital Reorganization, and their Persian equivalents. The resulting 815 articles were screened and refined step by step. Finally, 27 articles were selected for analysis. Results: Five hospital downsizing methods were identified: reducing the number of employees, reducing the number of beds, outsourcing, integration of hospital units, and combinations of these methods. The most important benefits were cost reduction, increased patient satisfaction, and increased home care and outpatient services. The most important disadvantages included reduced access, a reduced rate of hospital admissions, and increased employee workload and dissatisfaction. Conclusion: Each downsizing method has strengths and weaknesses. Choosing among downsizing methods according to circumstances, and applying appropriate interventions after implementation, is necessary for improvement. PMID:24688978

  7. Methods of scaling threshold color difference using printed samples

    NASA Astrophysics Data System (ADS)

    Huang, Min; Cui, Guihua; Liu, Haoxue; Luo, M. Ronnier

    2012-01-01

    A series of printed samples on a semi-gloss paper substrate, with color differences of threshold magnitude, were prepared for scaling visual color difference and evaluating the performance of different methods. The probabilities of perceptibility were normalized to Z-scores, and the different color differences were scaled against these Z-scores. The resulting visual color-difference scale was obtained and checked with the STRESS factor. The results indicated that only the scales changed; the relative scales between pairs in the data were preserved.
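    The Z-score normalization mentioned above is ordinary probit scaling: the proportion of observers who judge a pair's difference perceptible is mapped through the inverse normal CDF onto an interval scale. A minimal sketch with hypothetical proportions:

        import numpy as np
        from scipy.stats import norm

        # Hypothetical proportions of observers judging each sample pair as
        # having a perceptible color difference.
        p_perceptible = np.array([0.12, 0.31, 0.50, 0.69, 0.88])

        # Probit (Z-score) scaling: the inverse normal CDF maps proportions
        # onto an interval scale of visual difference.
        z = norm.ppf(p_perceptible)
        print(z)   # 0.5 maps to Z = 0 (the threshold); larger p to larger Z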

  8. Elongation measurement using 1-dimensional image correlation method

    NASA Astrophysics Data System (ADS)

    Phongwisit, Phachara; Kamoldilok, Surachart; Buranasiri, Prathan

    2016-11-01

    The aim of this paper was to study, set up, and calibrate an elongation measurement using the 1-Dimensional Image Correlation (1-DIC) method. To confirm the correctness of our method and setup, we calibrated it against another method. In this paper, we used a small spring as a sample, expressing the result in terms of the spring constant. Following the fundamentals of the image correlation method, images of the sample before and after deformation were compared to characterize the deformation process. By comparing the locations of reference points in the pixels of both images, the spring's elongation was calculated. The results were then compared with the spring constant obtained from Hooke's law, and an error of 5 percent was found. This DIC method would then be applied to measure the elongation of different kinds of small fiber samples.
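    The core 1-DIC step is a one-dimensional cross-correlation: the lag that maximizes the correlation between the reference and deformed intensity profiles gives the displacement in pixels, which a scale factor converts to elongation and Hooke's law converts to a spring constant. A sketch with synthetic profiles; the pixel pitch and load are assumed calibration values, not the paper's.

        import numpy as np

        def shift_1d(ref, cur):
            """Integer-pixel displacement between two 1D intensity profiles
            via cross-correlation (zero-mean to reduce lighting bias)."""
            a = ref - ref.mean()
            b = cur - cur.mean()
            corr = np.correlate(b, a, mode='full')
            return np.argmax(corr) - (len(a) - 1)   # lag of the correlation peak

        # Synthetic profiles of a marker on the spring before and after loading.
        x = np.arange(200.0)
        ref = np.exp(-0.5 * ((x - 80) / 3) ** 2)    # marker at pixel 80
        cur = np.exp(-0.5 * ((x - 95) / 3) ** 2)    # marker moved to pixel 95

        pixels = shift_1d(ref, cur)
        mm_per_pixel = 0.05                          # assumed scale factor
        force_n = 0.15                               # assumed applied load (N)
        k = force_n / (pixels * mm_per_pixel)        # Hooke's law: k = F / dx
        print(pixels, "px ->", k, "N/mm")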

  9. Different spectrophotometric methods applied for the analysis of simeprevir in the presence of its oxidative degradation product: A comparative study

    NASA Astrophysics Data System (ADS)

    Attia, Khalid A. M.; El-Abasawi, Nasr M.; El-Olemy, Ahmed; Serag, Ahmed

    2018-02-01

    Five simple spectrophotometric methods, namely ratio difference, mean centering, derivative ratio using the Savitzky-Golay filters, second derivative, and continuous wavelet transform, were developed for the determination of simeprevir in the presence of its oxidative degradation product. These methods are linear in the range of 2.5-40 μg/mL and were validated according to the ICH guidelines. The obtained results of accuracy, repeatability and precision were found to be within the acceptable limits. The specificity of the proposed methods was tested using laboratory-prepared mixtures and assessed by applying the standard addition technique. Furthermore, these methods were statistically comparable to an RP-HPLC method, and good results were obtained. Thus, they can be used for the routine analysis of simeprevir in quality-control laboratories.

  10. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu; Hsieh, Jiang

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered the gold standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solves the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of the Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and Monte Carlo methods. Results: The difference between the simulation results of the discrete ordinates method and those of the Monte Carlo method was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., the low dose region). Simulation with quadrature set 8 and the first-order Legendre polynomial expansion proved to be the most efficient computation method in the authors' study; its single-thread computation time was 21 min on a personal computer. Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and the primary benefit of the discrete ordinates method lies in its rapid computation speed. It is expected that further optimization of this method for routine clinical CT dose estimation will improve its accuracy and speed.

  11. Referenceless MR thermometry-a comparison of five methods.

    PubMed

    Zou, Chao; Tie, Changjun; Pan, Min; Wan, Qian; Liang, Changhong; Liu, Xin; Chung, Yiu-Cho

    2017-01-07

    Proton resonance frequency shift (PRFS) MR thermometry is commonly used to measure temperature in thermotherapy. The method requires a baseline temperature map and is therefore motion sensitive. Several referenceless MR thermometry methods were proposed to address this problem but their performances have never been compared. This study compared the performance of five referenceless methods through simulation, heating of ex vivo tissues and in vivo imaging of the brain and liver of healthy volunteers. Mean, standard deviation, root mean square, 2/98 percentiles of error were used as performance metrics. Probability density functions (PDF) of the error distribution for these methods in the different tests were also compared. The results showed that the phase gradient method (PG) exhibited largest error in all scenarios. The original method (ORG) and the complex field estimation method (CFE) had similar performance in all experiments. The phase finite difference method (PFD) and the near harmonic method (NH) were better than other methods, especially in the lower signal-to-noise ratio (SNR) and fast changing field cases. Except for PG, the PDFs of each method were very similar among the different experiments. Since phase unwrapping in ORG and NH is computationally demanding and subject to image SNR, PFD and CFE would be good choices as they do not need phase unwrapping. The results here would facilitate the choice of appropriate referenceless methods in various MR thermometry applications.

  12. Referenceless MR thermometry—a comparison of five methods

    NASA Astrophysics Data System (ADS)

    Zou, Chao; Tie, Changjun; Pan, Min; Wan, Qian; Liang, Changhong; Liu, Xin; Chung, Yiu-Cho

    2017-01-01

    Proton resonance frequency shift (PRFS) MR thermometry is commonly used to measure temperature in thermotherapy. The method requires a baseline temperature map and is therefore motion sensitive. Several referenceless MR thermometry methods were proposed to address this problem but their performances have never been compared. This study compared the performance of five referenceless methods through simulation, heating of ex vivo tissues and in vivo imaging of the brain and liver of healthy volunteers. Mean, standard deviation, root mean square, 2/98 percentiles of error were used as performance metrics. Probability density functions (PDF) of the error distribution for these methods in the different tests were also compared. The results showed that the phase gradient method (PG) exhibited largest error in all scenarios. The original method (ORG) and the complex field estimation method (CFE) had similar performance in all experiments. The phase finite difference method (PFD) and the near harmonic method (NH) were better than other methods, especially in the lower signal-to-noise ratio (SNR) and fast changing field cases. Except for PG, the PDFs of each method were very similar among the different experiments. Since phase unwrapping in ORG and NH is computationally demanding and subject to image SNR, PFD and CFE would be good choices as they do not need phase unwrapping. The results here would facilitate the choice of appropriate referenceless methods in various MR thermometry applications.
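    All five referenceless methods compared in the two records above ultimately feed the same PRFS conversion from phase change to temperature change; they differ in how the reference (baseline) phase is estimated from the image itself. A sketch of that conversion, with typical constants stated as assumptions:

        import numpy as np

        # PRFS thermometry constants; typical values, given here as assumptions.
        GAMMA = 2 * np.pi * 42.58e6   # proton gyromagnetic ratio (rad/s/T)
        ALPHA = -0.01e-6              # PRF thermal coefficient (~ -0.01 ppm/degC)

        def prfs_delta_t(phase, phase_ref, b0=3.0, te=0.012):
            """Temperature change (degC) from a phase map and a reference phase.
            In referenceless methods, phase_ref is *estimated* from the unheated
            periphery of the same image (e.g., by polynomial or harmonic fitting)
            instead of being acquired before heating."""
            dphi = np.angle(np.exp(1j * (phase - phase_ref)))  # wrap to [-pi, pi)
            return dphi / (GAMMA * ALPHA * b0 * te)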

  13. Magnetic Field Suppression of Flow in Semiconductor Melt

    NASA Technical Reports Server (NTRS)

    Fedoseyev, A. I.; Kansa, E. J.; Marin, C.; Volz, M. P.; Ostrogorsky, A. G.

    2000-01-01

    One of the most promising approaches for the reduction of convection during the crystal growth of conductive melts (semiconductor crystals) is the application of magnetic fields. Current technology allows experimentation with very intense static fields (up to 80 kGauss), for which nearly convection-free results are expected from simple scaling analysis in stabilized systems (vertical Bridgman method with axial magnetic field). However, controversial experimental results have been obtained. Computational methods are, therefore, a fundamental tool in understanding the phenomena occurring during the solidification of semiconductor materials. Moreover, effects like the bending of the isomagnetic lines, different aspect ratios, and misalignments between the directions of the gravity and magnetic field vectors cannot be analyzed with analytical methods. The earliest numerical results led to controversial conclusions and were not able to explain the experimental results. Although the generated flows are extremely low, the computational task is complicated because of the thin boundary layers; that is one of the reasons for the discrepancy in the results that numerical studies reported. Modeling of these magnetically damped crystal growth experiments requires advanced numerical methods. We used, for comparison, three different approaches to obtain the solution of the thermal convection flow problem: (1) a spectral method in a spectral superelement implementation, (2) a finite element method with regularization for boundary layers, and (3) the multiquadric method, a novel method with global radial basis functions that is proven to have exponential convergence. The results obtained by these three methods are presented for a wide range of Rayleigh and Hartmann numbers. Comparison and discussion of accuracy, efficiency, reliability and agreement with experimental results are presented as well.

  14. Automatic correction of intensity nonuniformity from sparseness of gradient distribution in medical images.

    PubMed

    Zheng, Yuanjie; Grossman, Murray; Awate, Suyash P; Gee, James C

    2009-01-01

    We propose to use the sparseness property of the gradient probability distribution to estimate the intensity nonuniformity in medical images, resulting in two novel automatic methods: a non-parametric method and a parametric method. Our methods are easy to implement because they both solve an iteratively re-weighted least squares problem. They are remarkably accurate as shown by our experiments on images of different imaged objects and from different imaging modalities.

  15. Automatic Correction of Intensity Nonuniformity from Sparseness of Gradient Distribution in Medical Images

    PubMed Central

    Zheng, Yuanjie; Grossman, Murray; Awate, Suyash P.; Gee, James C.

    2013-01-01

    We propose to use the sparseness property of the gradient probability distribution to estimate the intensity nonuniformity in medical images, resulting in two novel automatic methods: a non-parametric method and a parametric method. Our methods are easy to implement because they both solve an iteratively re-weighted least squares problem. They are remarkably accurate as shown by our experiments on images of different imaged objects and from different imaging modalities. PMID:20426191
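    The iteratively re-weighted least squares (IRLS) machinery that both of the records above reduce to is generic: minimizing a sub-quadratic penalty (which promotes the sparse gradient distribution) is replaced by a sequence of weighted least-squares solves. A toy sketch of IRLS for an Lp fitting problem, not the authors' full bias-field model:

        import numpy as np

        def irls(A, y, p=0.8, iters=30, eps=1e-6):
            """Iteratively re-weighted least squares for min_x sum |A x - y|^p,
            the kind of sparsity-promoting problem (p < 2) that bias-field
            estimation is reduced to in the papers above."""
            x = np.linalg.lstsq(A, y, rcond=None)[0]
            for _ in range(iters):
                r = A @ x - y
                w = (r**2 + eps) ** (p / 2 - 1)   # small residuals get large weight
                sw = np.sqrt(w)
                x = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
            return x

        # Toy illustration: fit a line under sparse, spiky noise.
        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 200)
        A = np.column_stack([t, np.ones_like(t)])
        y = 3 * t + 1
        y[rng.integers(0, 200, 15)] += rng.normal(0, 5, 15)   # sparse outliers
        print(irls(A, y))   # close to [3, 1], unlike plain least squares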

  16. Determining the depth of certain gravity sources without a priori specification of their structural index

    NASA Astrophysics Data System (ADS)

    Zhou, Shuai; Huang, Danian

    2015-11-01

    We have developed a new method for the interpretation of gravity tensor data based on the generalized Tilt-depth method. Cooper (2011, 2012) extended the magnetic Tilt-depth method to gravity data. We take the gradient-ratio method of Cooper (2011, 2012) and modify it so that the source type does not need to be specified a priori. We develop the new method by generalizing the Tilt-depth method for depth estimation to different types of source bodies. The new technique uses only the three vertical tensor components of the full gravity tensor data, observed or calculated at different height planes, to estimate the depth of the buried bodies without a priori specification of their structural index. For severely noise-corrupted data, our method utilizes data upward-continued to different heights, which can effectively reduce the influence of noise. Theoretical simulations of the gravity source model with and without noise illustrate the ability of the method to provide source depth information. Additionally, the simulations demonstrate that the new method is simple, computationally fast and accurate. Finally, we apply the method to the gravity data acquired over the Humble Salt Dome in the USA as an example. The results show a good correspondence to previous drilling and seismic interpretation results.

  17. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    PubMed Central

    De Esteban-Trivigno, Soledad; Püschel, Thomas A.; Fortuny, Josep

    2017-01-01

    Background: In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods: The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results: Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion: We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches. PMID:29043107
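    The variable-generation step of the intervals' method is straightforward to sketch: for each stress interval, sum the area of the elements whose stress falls inside it and express that as a percentage of total area. Hypothetical per-element FEA output is used below:

        import numpy as np

        def interval_variables(stress, area, bounds):
            """Intervals method: percentage of total area whose stress falls in
            each interval. `stress` and `area` are per-element arrays from an
            FEA output; `bounds` are the interval edges (MPa)."""
            out = []
            for lo, hi in zip(bounds[:-1], bounds[1:]):
                mask = (stress >= lo) & (stress < hi)
                out.append(100 * area[mask].sum() / area.sum())
            return np.array(out)

        # Hypothetical FEA output for one mandible model
        stress = np.array([0.5, 1.2, 2.8, 3.1, 0.9, 4.5, 2.2])   # MPa per element
        area   = np.array([1.0, 1.5, 0.8, 1.2, 1.0, 0.5, 1.1])   # element areas
        print(interval_variables(stress, area, bounds=[0, 1, 2, 3, 4, 5]))
        # Rows like this (one per specimen) then feed a PCA or discriminant analysis.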

  18. Measurement of susceptibility artifacts with histogram-based reference value on magnetic resonance images according to standard ASTM F2119.

    PubMed

    Heinrich, Andreas; Teichgräber, Ulf K; Güttler, Felix V

    2015-12-01

    The standard ASTM F2119 describes a test method for measuring the size of a susceptibility artifact, using a passive implant as an example. A pixel in an image is considered to be part of an image artifact if its intensity is changed by at least 30% in the presence of a test object, compared to a reference image in which the test object is absent (reference value). The aim of this paper is to simplify and accelerate the test method using a histogram-based reference value. Four test objects were scanned parallel and perpendicular to the main magnetic field, and the largest susceptibility artifacts were measured using two methods of reference value determination (reference image-based and histogram-based). The results of the two methods were compared using the Mann-Whitney U-test. The difference between the two reference values was 42.35 ± 23.66. The difference in artifact size was 0.64 ± 0.69 mm. The artifact sizes obtained with the two methods did not show significant differences; the p-value of the Mann-Whitney U-test was between 0.710 and 0.521. A standard-conformant method for a rapid, objective, and reproducible evaluation of susceptibility artifacts could thus be implemented. The result of the histogram-based method does not significantly differ from that of the ASTM-conformant method.
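    The 30% criterion itself is a one-liner; the two compared approaches differ only in whether the reference value is a per-pixel reference image or a single histogram-derived scalar. A sketch, where the specific histogram statistic (the mode of the non-background intensities) is an assumption rather than the paper's exact choice:

        import numpy as np

        def artifact_mask(img_test, reference_value):
            """ASTM F2119: a pixel belongs to the susceptibility artifact if its
            intensity deviates by at least 30% from the reference value. In the
            reference-image method the reference is per-pixel; in the histogram
            method it is a single scalar."""
            diff = np.abs(img_test.astype(float) - reference_value)
            return diff >= 0.3 * reference_value

        def histogram_reference(img, background_level=50):
            """Histogram-based scalar reference: the most frequent non-background
            intensity of the test image itself (assumed statistic)."""
            vals = img[img > background_level]
            hist, edges = np.histogram(vals, bins=256)
            k = np.argmax(hist)
            return 0.5 * (edges[k] + edges[k + 1])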

  19. Comparison of water extraction methods in Tibet based on GF-1 data

    NASA Astrophysics Data System (ADS)

    Jia, Lingjun; Shang, Kun; Liu, Jing; Sun, Zhongqing

    2018-03-01

    In this study, we compared four different water extraction methods on GF-1 data according to different water types in Tibet: Support Vector Machine (SVM), Principal Component Analysis (PCA), a Decision Tree Classifier based on a False Normalized Difference Water Index (FNDWI-DTC), and PCA-SVM. The results show that all four methods can extract large water bodies, but only SVM and PCA-SVM obtain satisfactory extraction results for small water bodies. The methods were evaluated by both overall accuracy (OAA) and Kappa coefficient (KC). The OAA of PCA-SVM, SVM, FNDWI-DTC, and PCA are 96.68%, 94.23%, 93.99%, and 93.01%, and the KCs are 0.9308, 0.8995, 0.8962, and 0.8842, respectively, consistent with visual inspection. In summary, SVM is better for narrow river extraction, and PCA-SVM is suitable for water extraction of various types. As for dark blue lakes, the PCA-based methods extract them more quickly and accurately.
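    For context, the water index underlying the FNDWI-DTC variant is the normalized difference water index, which exploits the high green and low near-infrared reflectance of water. A basic sketch follows; the paper's "false" NDWI modification is not reproduced here.

        import numpy as np

        def ndwi_water_mask(green, nir, threshold=0.0):
            """Normalized Difference Water Index (McFeeters): water pixels have
            high green and low NIR reflectance, so NDWI > threshold flags water."""
            g = green.astype(float)
            n = nir.astype(float)
            index = (g - n) / (g + n + 1e-12)   # avoid division by zero
            return index > threshold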

  20. Infrared target tracking via weighted correlation filter

    NASA Astrophysics Data System (ADS)

    He, Yu-Jie; Li, Min; Zhang, JinLi; Yao, Jun-Ping

    2015-11-01

    Design of an effective target tracker is an important and challenging task for many applications due to multiple factors which can cause disturbance in infrared video sequences. In this paper, an infrared target tracking method under tracking by detection framework based on a weighted correlation filter is presented. This method consists of two parts: detection and filtering. For the detection stage, we propose a sequential detection method for the infrared target based on low-rank representation. For the filtering stage, a new multi-feature weighted function which fuses different target features is proposed, which takes the importance of the different regions into consideration. The weighted function is then incorporated into a correlation filter to compute a confidence map more accurately, in order to indicate the best target location based on the detection results obtained from the first stage. Extensive experimental results on different video sequences demonstrate that the proposed method performs favorably for detection and tracking compared with baseline methods in terms of efficiency and accuracy.

  1. The effectiveness of digital microscopy as a teaching tool in medical laboratory science curriculum.

    PubMed

    Castillo, Demetra

    2012-01-01

    A fundamental component of the practice of Medical Laboratory Science (MLS) is the microscope. While traditional microscopy (TM) is the gold standard, the high cost of maintenance has led to an increased demand for alternative methods, such as digital microscopy (DM). Slides containing blood specimens are converted into a digital form that can be reviewed with computer-driven software. The aim of this study was to investigate the effectiveness of digital microscopy as a teaching tool in the field of Medical Laboratory Science. Participants reviewed known study slides using both traditional and digital microscopy methods and were assessed using both methods. Participants were randomly divided into two groups. Group 1 performed TM as the primary method and DM as the alternate. Group 2 performed DM as the primary and TM as the alternate. Participants performed differentials with their primary method, were assessed with both methods, and then performed differentials with their alternate method. A detailed assessment rubric was created to determine the accuracy of student responses through comparison with clinical laboratory and instructor results. Student scores were expressed as the percentage correct under each method. This assessment was done over two different classes. When comparing results between methods for each group, independent of the primary method used, results were not statistically different. However, when comparing methods between groups, Group 1 (n = 11) (TM = 73.79% +/- 9.19, DM = 81.43% +/- 8.30; paired t10 = 0.182, p < 0.001) showed a significant difference from Group 2 (n = 14) (TM = 85.64% +/- 5.30, DM = 85.91% +/- 7.62; paired t13 = 3.647, p = 0.860). In the subsequent class, results between both groups (n = 13 and n = 16, respectively) did not show any significant difference (Group 1 TM = 86.38% +/- 8.17, Group 1 DM = 88.69% +/- 3.86; paired t12 = 1.253, p = 0.234; Group 2 TM = 86.75% +/- 5.37, Group 2 DM = 86.25% +/- 7.01, paired t15 = 0.280, p = 0.784). The data suggest that DM is comparable to TM. DM could be used as an enhancement model after foundational information is provided using TM.

  2. Kinds of access: different methods for report reveal different kinds of metacognitive access

    PubMed Central

    Overgaard, Morten; Sandberg, Kristian

    2012-01-01

    In experimental investigations of consciousness, participants are asked to reflect upon their own experiences by issuing reports about them in different ways. For this reason, a participant needs some access to the content of her own conscious experience in order to report. In such experiments, the reports typically consist of some variety of ratings of confidence or direct descriptions of one's own experiences. Whereas different methods of reporting are typically used interchangeably, recent experiments indicate that different results are obtained with different kinds of reporting. We argue that there is not only a theoretical, but also an empirical difference between different methods of reporting. We hypothesize that differences in the sensitivity of different scales may reveal that different types of access are used to issue direct reports about experiences and metacognitive reports about the classification process. PMID:22492747

  3. No major differences found between the effects of microwave-based and conventional heat treatment methods on two different liquid foods.

    PubMed

    Géczi, Gábor; Horváth, Márk; Kaszab, Tímea; Alemany, Gonzalo Garnacho

    2013-01-01

    Extension of shelf life and preservation of products are both very important for the food industry. However, just as with other processes, speed and higher manufacturing performance are also beneficial. Although microwave heating is utilized in a number of industrial processes, there are many unanswered questions about its effects on foods. Here we analyze whether the effects of microwave heating with continuous flow are equivalent to those of traditional heat transfer methods. In our study, the effects of heating liquid foods by conventional and continuous flow microwave heating were studied. Among other properties, we compared the stability of the liquid foods between the two heat treatments. Our goal was to determine whether continuous flow microwave heating and conventional heating methods have the same effects on liquid foods, and, therefore, whether microwave heat treatment can effectively replace conventional heat treatments. We compared the colour and separation phenomena of the samples treated by the different methods. For milk, we also monitored the total viable cell count; for orange juice, the vitamin C content, in addition to the taste of the product by sensory analysis. The majority of the results indicate that the circulating coil microwave method used here is equivalent to the conventional heating method based on thermal conduction and convection. However, some results from the analysis of the milk samples show clear differences between heat transfer methods. According to our results, the colour parameters (lightness, red-green and blue-yellow values) of the microwave treated samples differed not only from the untreated control, but also from the traditionally heat treated samples. The differences are visually undetectable; however, they become evident through analytical measurement with a spectrophotometer. This finding suggests that besides thermal effects, microwave-based food treatment can alter product properties in other ways as well.

  4. No Major Differences Found between the Effects of Microwave-Based and Conventional Heat Treatment Methods on Two Different Liquid Foods

    PubMed Central

    Géczi, Gábor; Horváth, Márk; Kaszab, Tímea; Alemany, Gonzalo Garnacho

    2013-01-01

    Extension of shelf life and preservation of products are both very important for the food industry. However, just as with other processes, speed and higher manufacturing performance are also beneficial. Although microwave heating is utilized in a number of industrial processes, there are many unanswered questions about its effects on foods. Here we analyze whether the effects of microwave heating with continuous flow are equivalent to those of traditional heat transfer methods. In our study, the effects of heating liquid foods by conventional and continuous flow microwave heating were studied. Among other properties, we compared the stability of the liquid foods between the two heat treatments. Our goal was to determine whether continuous flow microwave heating and conventional heating methods have the same effects on liquid foods, and, therefore, whether microwave heat treatment can effectively replace conventional heat treatments. We compared the colour and separation phenomena of the samples treated by the different methods. For milk, we also monitored the total viable cell count; for orange juice, the vitamin C content, in addition to the taste of the product by sensory analysis. The majority of the results indicate that the circulating coil microwave method used here is equivalent to the conventional heating method based on thermal conduction and convection. However, some results from the analysis of the milk samples show clear differences between heat transfer methods. According to our results, the colour parameters (lightness, red-green and blue-yellow values) of the microwave treated samples differed not only from the untreated control, but also from the traditionally heat treated samples. The differences are visually undetectable; however, they become evident through analytical measurement with a spectrophotometer. This finding suggests that besides thermal effects, microwave-based food treatment can alter product properties in other ways as well. PMID:23341982

  5. Experiments to Evaluate and Implement Passive Tracer Gas Methods to Measure Ventilation Rates in Homes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lunden, Melissa; Faulkner, David; Heredia, Elizabeth

    2012-10-01

    This report documents experiments performed in three homes to assess the methodology used to determine air exchange rates using passive tracer techniques. The experiments used four different tracer gases emitted simultaneously but implemented with different spatial coverage in the home. Two different tracer gas sampling methods were used. The results characterize the factors in the execution and analysis of the passive tracer technique that affect the uncertainty in the calculated air exchange rates. These factors include uncertainties in tracer gas emission rates, differences in measured concentrations for different tracer gases, temporal and spatial variability of the concentrations, the comparison between different gas sampling methods, and the effect of different ventilation conditions.
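    The underlying arithmetic of the constant-emission passive tracer technique is simple: at quasi steady state the outdoor airflow is the known emission rate divided by the measured average concentration, and the air change rate is that flow over the house volume. A sketch with placeholder values:

        # Constant-emission passive tracer at (quasi) steady state. All values
        # below are hypothetical placeholders, not measurements from the report.
        emission_rate = 0.5      # mg/h of tracer from the passive sources
        concentration = 2.0e-3   # mg/m3 measured average indoor tracer concentration
        house_volume  = 350.0    # m3

        airflow = emission_rate / concentration      # outdoor air flow, m3/h
        ach = airflow / house_volume                 # air changes per hour
        print(f"Q = {airflow:.0f} m3/h, ACH = {ach:.2f} 1/h")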

  6. Emotion Recognition from EEG Signals Using Multidimensional Information in EMD Domain.

    PubMed

    Zhuang, Ning; Zeng, Ying; Tong, Li; Zhang, Chi; Zhang, Hanming; Yan, Bin

    2017-01-01

    This paper introduces a method for feature extraction and emotion recognition based on empirical mode decomposition (EMD). Using EMD, EEG signals are decomposed into Intrinsic Mode Functions (IMFs) automatically. Multidimensional information from the IMFs is utilized as features: the first difference of the time series, the first difference of the phase, and the normalized energy. The performance of the proposed method is verified on a publicly available emotional database. The results show that the three features are effective for emotion recognition. The role of each IMF is investigated, and we find that the high frequency component IMF1 has a significant effect on detecting different emotional states. The informative electrodes based on the EMD strategy are analyzed. In addition, the classification accuracy of the proposed method is compared with several classical techniques, including fractal dimension (FD), sample entropy, differential entropy, and discrete wavelet transform (DWT). Experimental results on the DEAP dataset demonstrate that our method can improve emotion recognition performance.

  7. Methods for specifying spatial boundaries of cities in the world: The impacts of delineation methods on city sustainability indices.

    PubMed

    Uchiyama, Yuta; Mori, Koichiro

    2017-08-15

    The purpose of this paper is to analyze how different definitions and methods for delineating the spatial boundaries of cities have an impact on the values of city sustainability indicators. It is necessary to distinguish the inside of cities from the outside when calculating the values of sustainability indicators that assess the impacts of human activities within cities on areas beyond their boundaries. For this purpose, spatial boundaries of cities should be practically detected on the basis of a relevant definition of a city. Although no definition of a city is commonly shared among academic fields, three practical methods for identifying urban areas are available in remote sensing science. Those practical methods are based on population density, landcover, and night-time lights. These methods are correlated, but non-negligible differences exist in their determination of urban extents and urban population. Furthermore, critical and statistically significant differences in some urban environmental sustainability indicators result from the three different urban detection methods. For example, the average values of CO2 emissions per capita and PM10 concentration in cities with more than 1 million residents are significantly different among the definitions. When analyzing city sustainability indicators and disseminating the implication of the results, the values based on the different definitions should be simultaneously investigated. It is necessary to carefully choose a relevant definition to analyze sustainability indicators for policy making. Otherwise, ineffective and inefficient policies will be developed. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Full-field stress determination in photoelasticity with phase shifting technique

    NASA Astrophysics Data System (ADS)

    Guo, Enhai; Liu, Yonggang; Han, Yongsheng; Arola, Dwayne; Zhang, Dongsheng

    2018-04-01

    Photoelasticity is an effective method for evaluating stress and its spatial variations within a stressed body. In the present study, a method to determine the stress distribution by means of phase shifting and a modified shear-difference method is proposed. First, the orientation of the first principal stress and the retardation between the principal stresses are determined over the full field through phase shifting. Then, through bicubic interpolation and derivation of a modified shear-difference method, the internal stress is calculated starting from a point on a free boundary along its normal direction. A method to reduce the integration error in the shear-difference scheme is proposed and compared to existing methods; the integration error is reduced when using theoretical photoelastic parameters to calculate the stress components at the same points. Results show that when the value of Δx/Δy approaches one, the error is minimal, and although interpolation error is inevitable, it has limited influence on the accuracy of the result. Finally, examples are presented for determining the stresses in a circular plate and a ring subjected to diametric loading. Results show that the proposed approach provides a complete solution for determining the full-field stresses in photoelastic models.

  9. Reproducibility of a silicone-based test food to masticatory performance evaluation by different sieve methods.

    PubMed

    Sánchez-Ayala, Alfonso; Vilanova, Larissa Soares Reis; Costa, Marina Abrantes; Farias-Neto, Arcelino

    2014-01-01

    The aim of this study was to evaluate the reproducibility of the condensation silicone Optosil Comfort® as an artificial test food for masticatory performance evaluation. Twenty dentate subjects with a mean age of 23.3±0.7 years were selected. Masticatory performance was evaluated using the single (MPI), double (IME), and multiple sieve methods. Trials were carried out five times by three examiners: three times by the first, and once each by the second and third examiners. Friedman's test was used to find differences among time trials. Reproducibility was determined by the intra-class correlation (ICC) test (α=0.05). No differences among time trials were found, except for MPI-4 mm (p=0.022) in the first examiner's results. The intra-examiner reproducibility (ICC) of almost all data was high (ICC≥0.92, p<0.001), being moderate only for MPI-0.50 mm (ICC=0.89, p<0.001). The inter-examiner reproducibility was high (ICC>0.93, p<0.001) for all results. For the multiple sieve method, the average absolute difference between repeated measurements was lower than 1 mm; this held only from MPI-0.50 to MPI-1.4 for the single sieve method, and from IME-0.71/0.50 to IME-1.40/1.00 for the double sieve method. The results suggest that, regardless of the method used, the reproducibility of Optosil Comfort® is high.

  10. The impact of the learning contract on self-directed learning and satisfaction in nursing students in a clinical setting.

    PubMed

    Sajadi, Mahboobeh; Fayazi, Neda; Fournier, Andrew; Abedi, Ahmad Reza

    2017-01-01

    Background: The most important responsibilities of an education system are to create self-directed learning opportunities and develop the skills required for taking responsibility for change. The present study aimed at determining the impact of a learning contract on self-directed learning and satisfaction of nursing students. Methods: A total of 59 nursing students participated in this experimental study. They were divided into six 10-member groups. To control communications among the groups, the first 3 groups were trained using conventional learning methods and the second 3 groups using the learning contract method. In the first session, a pretest was performed based on the educational objectives. At the end of the training, the students in each group completed the self-directed learning and satisfaction questionnaires. The results of descriptive and inferential statistical methods (dependent and independent t tests) were presented using SPSS. Results: There were no significant differences between the 2 groups in gender, grade point average of previous years, and interest toward nursing. However, the results revealed a significant difference between the 2 groups in the total score of self-directed learning (p = 0.019). Although the mean satisfaction score was higher in the intervention group, the difference was not statistically significant. Conclusion: This study suggested that the use of the learning contract method in clinical settings enhances self-directed learning among nursing students. Because this model focuses on individual differences, the researcher highly recommends the application of this new method by educators.

  11. Experimental Quasi-Microwave Whole-Body Averaged SAR Estimation Method Using Cylindrical-External Field Scanning

    NASA Astrophysics Data System (ADS)

    Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio

    The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider the anatomical European human phantoms and plane-wave exposure in the 2 GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.

  12. Divergence preserving discrete surface integral methods for Maxwell's curl equations using non-orthogonal unstructured grids

    NASA Technical Reports Server (NTRS)

    Madsen, Niel K.

    1992-01-01

    Several new discrete surface integral (DSI) methods for solving Maxwell's equations in the time-domain are presented. These methods, which allow the use of general nonorthogonal mixed-polyhedral unstructured grids, are direct generalizations of the canonical staggered-grid finite difference method. These methods are conservative in that they locally preserve divergence or charge. Employing mixed polyhedral cells, (hexahedral, tetrahedral, etc.) these methods allow more accurate modeling of non-rectangular structures and objects because the traditional stair-stepped boundary approximations associated with the orthogonal grid based finite difference methods can be avoided. Numerical results demonstrating the accuracy of these new methods are presented.

  13. New decision criteria for selecting delta check methods based on the ratio of the delta difference to the width of the reference range can be generally applicable for each clinical chemistry test item.

    PubMed

    Park, Sang Hyuk; Kim, So-Young; Lee, Woochang; Chun, Sail; Min, Won-Ki

    2012-09-01

    Many laboratories use 4 delta check methods: delta difference, delta percent change, rate difference, and rate percent change. However, guidelines regarding decision criteria for selecting delta check methods have not yet been provided. We present new decision criteria for selecting delta check methods for each clinical chemistry test item. We collected 811,920 and 669,750 paired (present and previous) test results for 27 clinical chemistry test items from inpatients and outpatients, respectively. We devised new decision criteria for the selection of delta check methods based on the ratio of the delta difference to the width of the reference range (DD/RR). Delta check methods based on these criteria were compared with those based on the CV% of the absolute delta difference (ADD) as well as those reported in 2 previous studies. The delta check methods suggested by new decision criteria based on the DD/RR ratio corresponded well with those based on the CV% of the ADD except for only 2 items each in inpatients and outpatients. Delta check methods based on the DD/RR ratio also corresponded with those suggested in the 2 previous studies, except for 1 and 7 items in inpatients and outpatients, respectively. The DD/RR method appears to yield more feasible and intuitive selection criteria and can easily explain changes in the results by reflecting both the biological variation of the test item and the clinical characteristics of patients in each laboratory. We suggest this as a measure to determine delta check methods.
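    The decision quantity itself is easy to state: the absolute delta difference divided by the width of the reference range. A minimal sketch with a hypothetical potassium result:

        def dd_rr(current, previous, ref_low, ref_high):
            """Ratio of the delta difference to the width of the reference range,
            the quantity the authors base their selection criteria on."""
            return abs(current - previous) / (ref_high - ref_low)

        # Hypothetical example: serum potassium, reference range 3.5-5.1 mmol/L.
        print(dd_rr(5.9, 4.1, 3.5, 5.1))   # 1.8 / 1.6 = 1.125 reference-range widths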

  14. Accuracy improvement of the H-drive air-levitating wafer inspection stage based on error analysis and compensation

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Liu, Pinkuan

    2018-04-01

    In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and error sources are initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most effective factor that affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.
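
    As a rough illustration of the best-performing approach, the sketch below fits a cubic B-spline to a measured error-versus-position map and subtracts the predicted error from each commanded target. The error shape and travel range are invented; a real compensation scheme would be built from the stage's calibrated error model.

    ```python
    import numpy as np
    from scipy.interpolate import make_interp_spline

    # Calibration data: commanded stage positions (mm) and measured
    # positioning errors (um); the error shape here is synthetic.
    cmd = np.linspace(0.0, 300.0, 31)
    err_um = 0.8 * np.sin(cmd / 40.0) + 0.5 * cmd / 300.0

    # Fit a cubic B-spline error model over the full travel range ...
    error_model = make_interp_spline(cmd, err_um, k=3)

    # ... and compensate: offset each target by the predicted error.
    targets = np.array([25.0, 137.5, 290.0])                 # mm
    corrected = targets - error_model(targets) * 1e-3        # um -> mm
    print(corrected)
    ```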

  15. Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images

    PubMed Central

    Lin, Chih-Lung; Wang, Shih-Hung; Cheng, Hsu-Yung; Fan, Kuo-Chin; Hsu, Wei-Lieh; Lai, Chin-Rong

    2015-01-01

    In this paper, we present a reliable and robust biometric verification method based on bimodal physiological characteristics of palms, including the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the digital wavelet transform and inverse wavelet transform to fuse palmprint and vein pattern images; (3) extracting the line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method possesses four advantages: first, both modal images are captured in peg-free scenarios to improve the user-friendliness of the verification device. Second, palmprint and vein pattern images are captured using a low-resolution digital scanner and infrared (IR) camera. The use of low-resolution images results in a smaller database. In addition, the vein pattern images are captured through the invisible IR spectrum, which improves antispoofing. Third, since the physiological characteristics of palmprint and vein pattern images are different, a hybrid fusing rule can be introduced to fuse the decomposition coefficients of different bands. The proposed method fuses decomposition coefficients at different decomposed levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically and hence no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%, demonstrating the validity and excellent performance of the proposed method compared with other methods. PMID:26703596

  16. Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images.

    PubMed

    Lin, Chih-Lung; Wang, Shih-Hung; Cheng, Hsu-Yung; Fan, Kuo-Chin; Hsu, Wei-Lieh; Lai, Chin-Rong

    2015-12-12

    In this paper, we present a reliable and robust biometric verification method based on bimodal physiological characteristics of palms, including the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the digital wavelet transform and inverse wavelet transform to fuse palmprint and vein pattern images; (3) extracting the line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method possesses four advantages: first, both modal images are captured in peg-free scenarios to improve the user-friendliness of the verification device. Second, palmprint and vein pattern images are captured using a low-resolution digital scanner and infrared (IR) camera. The use of low-resolution images results in a smaller database. In addition, the vein pattern images are captured through the invisible IR spectrum, which improves antispoofing. Third, since the physiological characteristics of palmprint and vein pattern images are different, a hybrid fusing rule can be introduced to fuse the decomposition coefficients of different bands. The proposed method fuses decomposition coefficients at different decomposed levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically and hence no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%, demonstrating the validity and excellent performance of the proposed method compared with other methods.

  17. SU-F-T-308: Mobius FX Evaluation and Comparison Against a Commercial 4D Detector Array for VMAT Plan QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vazquez Quino, L; Huerta Hernandez, C; Morrow, A

    2016-06-15

    Purpose: To evaluate the use of MobiusFX as a pre-treatment verification IMRT QA tool and to compare it with a commercial 4D detector array for VMAT plan QA. Methods: 15 VMAT plan QAs for different treatment sites were delivered and measured by traditional means with the 4D detector array ArcCheck (Sun Nuclear Corporation); at the same time, linac treatment logs (Varian Dynalog files) from the same deliveries were analyzed with the MobiusFX software (Mobius Medical Systems). VMAT plan QAs created in the Eclipse treatment planning system (Varian) for a TrueBeam linac (Varian) were delivered and analyzed with the gamma analysis routine of the SNPA software (Sun Nuclear Corporation). Results: Comparable results in terms of the gamma analysis, with an average gamma passing rate of 99.06% at the 3%/3 mm criterion, were observed among MobiusFX, the ArcCheck measurements, and the dose calculated by the treatment planning system. With a stricter criterion (1%/1 mm), larger discrepancies were observed in different regions of the measurements, with an average gamma passing rate of 66.24% between MobiusFX and ArcCheck. Conclusion: This work indicates the potential of MobiusFX as a routine pre-treatment patient-specific IMRT QA method and highlights its advantages as a phantom-less method that reduces the time needed for IMRT QA measurements. MobiusFX is capable of producing results similar to those of the traditional methods used for patient-specific pre-treatment VMAT QA verification. Although the gamma results against the TPS are similar, the analysis shows that the errors identified by each method lie in different regions: traditional methods such as ArcCheck are sensitive to setup errors and dose-difference errors arising from the linac output, whereas linac log-file analysis records errors in the VMAT QA associated with MLC and gantry motion that traditional methods cannot detect.
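
    The gamma analysis referred to throughout combines a dose-difference criterion with a distance-to-agreement criterion. A simplified 1-D global gamma computation is sketched below on synthetic profiles; commercial QA software implements the full 2-D/3-D version with interpolation.

    ```python
    import numpy as np

    def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=3.0):
        """Simplified 1-D global gamma index (dd: fractional dose
        criterion, dta: distance-to-agreement in the units of x)."""
        d_norm = dd * np.max(d_ref)                     # global dose criterion
        # Pairwise terms between every reference and evaluation point:
        dist = (x_ref[:, None] - x_eval[None, :]) / dta
        dose = (d_ref[:, None] - d_eval[None, :]) / d_norm
        gamma = np.sqrt(dist**2 + dose**2).min(axis=1)  # min over eval points
        return gamma, np.mean(gamma <= 1.0) * 100.0     # per-point gamma, pass %

    x = np.linspace(0, 100, 201)
    ref = np.exp(-((x - 50) / 15) ** 2)
    ev = np.exp(-((x - 51) / 15) ** 2) * 1.01           # shifted, scaled copy
    g, pass_rate = gamma_1d(x, ref, x, ev)
    print(f"gamma passing rate: {pass_rate:.2f}%")
    ```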

  18. Do the methods used to analyse missing data really matter? An examination of data from an observational study of Intermediate Care patients.

    PubMed

    Kaambwa, Billingsley; Bryan, Stirling; Billingham, Lucinda

    2012-06-27

    Missing data are a common statistical problem in healthcare datasets from populations of older people. Some argue that arbitrarily assuming the mechanism responsible for the missingness, and therefore the method for dealing with it, is not the best option; but is this always true? This paper explores what happens when extra information that suggests that a particular mechanism is responsible for missing data is disregarded and methods for dealing with the missing data are chosen arbitrarily. Regression models based on 2,533 intermediate care (IC) patients from the largest evaluation of IC conducted and published in the UK to date were used to explain variation in costs, EQ-5D and Barthel index. Three methods for dealing with missingness were utilised, each assuming a different mechanism as being responsible for the missing data: complete case analysis (assuming missing completely at random, MCAR), multiple imputation (assuming missing at random, MAR) and a Heckman selection model (assuming missing not at random, MNAR). Differences in results were gauged by examining the signs of coefficients as well as the sizes of both coefficients and associated standard errors. Extra information strongly suggested that missing cost data were MCAR. The results show that the MCAR- and MAR-based methods yielded similar results, with the sizes of most coefficients and standard errors differing by less than 3.4%, while those based on MNAR methods were statistically different (up to 730% larger). Significant variables in all regression models also had the same direction of influence on costs. All three mechanisms of missingness were shown to be potential causes of the missing EQ-5D and Barthel data. The method chosen to deal with missing data did not seem to have any significant effect on the results for these data, as the methods led to broadly similar conclusions, with the sizes of coefficients and standard errors differing by less than 54% and 322%, respectively. Arbitrary selection of methods to deal with missing data should be avoided. Using extra information gathered during the data collection exercise about the cause of missingness to guide this selection would be more appropriate.
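
    As a toy illustration of why the handling of missing data can matter, the sketch below compares a complete-case regression with naive single mean imputation (deliberately not one of the three methods used in the paper) on MCAR outcome data: complete-case analysis stays near the true slope, while mean imputation attenuates it.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 500
    x = rng.normal(size=n)
    y = 2.0 + 1.5 * x + rng.normal(size=n)       # true slope = 1.5

    # Make ~30% of y missing completely at random (MCAR).
    y_obs = y.copy()
    y_obs[rng.random(n) < 0.3] = np.nan

    def ols_slope(x, y):
        X = np.column_stack([np.ones_like(x), x])
        return np.linalg.lstsq(X, y, rcond=None)[0][1]

    mask = ~np.isnan(y_obs)
    # Complete case: ~1.5, unbiased under MCAR.
    print("complete case:  ", ols_slope(x[mask], y_obs[mask]))
    # Mean imputation: attenuated toward ~0.7 * 1.5.
    print("mean imputation:", ols_slope(x, np.where(mask, y_obs, np.nanmean(y_obs))))
    ```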

  19. Calculation of transonic flows using an extended integral equation method

    NASA Technical Reports Server (NTRS)

    Nixon, D.

    1976-01-01

    An extended integral equation method for transonic flows is developed. In the extended integral equation method velocities in the flow field are calculated in addition to values on the aerofoil surface, in contrast with the less accurate 'standard' integral equation method in which only surface velocities are calculated. The results obtained for aerofoils in subcritical flow and in supercritical flow when shock waves are present compare satisfactorily with the results of recent finite difference methods.

  20. Solving the dynamic rupture problem with different numerical approaches and constitutive laws

    USGS Publications Warehouse

    Bizzarri, A.; Cocco, M.; Andrews, D.J.; Boschi, Enzo

    2001-01-01

    We study the dynamic initiation, propagation and arrest of a 2-D in-plane shear rupture by solving the elastodynamic equation with both a boundary integral equation method and a finite difference approach. For both methods we adopt different constitutive laws: a slip-weakening (SW) law, with constant weakening rate, and rate- and state-dependent friction laws (Dieterich-Ruina). Our numerical procedures allow the use of heterogeneous distributions of constitutive parameters along the fault for both formulations. We first compare the two solution methods with an SW law, emphasizing the stability conditions required to achieve a good resolution of the cohesive zone and to avoid artificial complexity in the solutions. Our modelling results show that the two methods provide very similar time histories of dynamic source parameters. We point out that, if a careful control of resolution and stability is performed, the two methods yield identical solutions. We have also compared the rupture evolution resulting from an SW and a rate- and state-dependent friction law. This comparison shows that despite the different constitutive formulations, a similar behaviour is simulated during the rupture propagation and arrest. We also observe a crack tip bifurcation and a jump in rupture velocity (approaching the P-wave speed) with the Dieterich-Ruina (DR) law. The rupture arrest at a barrier (high strength zone) and the barrier-healing mechanism are also reproduced by this law. However, this constitutive formulation allows the simulation of a more general and complex variety of rupture behaviours. By assuming different heterogeneous distributions of the initial constitutive parameters, we are able to model a barrier-healing as well as a self-healing process. This result suggests that if the heterogeneity of the constitutive parameters is taken into account, the different healing mechanisms can be simulated. We also study the nucleation phase duration Tn, defined as the time necessary for the crack to reach the half-length lc. We compare the Tn values resulting from distinct simulations calculated using different constitutive laws and different sets of constitutive parameters. Our results confirm that the DR law provides a different description of the nucleation process than the SW law adopted in this study. We emphasize that the DR law yields a complete description of the rupture process, which includes the most prominent features of SW.
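
    The constant-weakening-rate SW law used here has a simple closed form: the frictional traction falls linearly from the static level to the dynamic level over the critical slip-weakening distance. A sketch with hypothetical stress levels:

    ```python
    import numpy as np

    def slip_weakening_traction(slip, tau_s, tau_d, d_c):
        """Linear slip-weakening friction law: traction drops linearly
        from the static level tau_s to the dynamic level tau_d as slip
        grows from 0 to the critical slip-weakening distance d_c, i.e.
        a constant weakening rate (tau_s - tau_d) / d_c."""
        slip = np.asarray(slip, dtype=float)
        return np.where(slip < d_c,
                        tau_s - (tau_s - tau_d) * slip / d_c,
                        tau_d)

    # Hypothetical values: 80 MPa static, 60 MPa dynamic, d_c = 0.4 m.
    print(slip_weakening_traction([0.0, 0.2, 0.4, 1.0],
                                  tau_s=80e6, tau_d=60e6, d_c=0.4))
    ```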

  1. Self-Developed Testing System for Determining the Temperature Behavior of Concrete.

    PubMed

    Zhu, He; Li, Qingbin; Hu, Yu

    2017-04-16

    Cracking due to temperature and restraint in mass concrete is an important issue. A temperature stress testing machine (TSTM) is an effective test method for studying the mechanism of temperature cracking. A synchronous closed-loop federated control TSTM system has been developed by adopting the design concepts of closed-loop federated control, a detachable mold design, a direct deformation-measurement method, and a temperature deformation compensation method. The results show that the self-developed system has the comprehensive ability to simulate different restraint degrees and multiple temperature and humidity modes, and to control multiple TSTMs in a closed loop during one test period. Additionally, the direct deformation-measurement method can obtain more accurate deformation and restraint-degree results with little local damage. The external temperature deformation affecting the concrete specimen can be eliminated by adopting the temperature deformation compensation method with different considerations of the steel materials. The concrete quality of different TSTMs can be guaranteed by vibrating the specimens synchronously on the vibrating stand. The detachable mold design and assembly method greatly reduce the difficulties of eccentric force and deformation.

  2. Self-Developed Testing System for Determining the Temperature Behavior of Concrete

    PubMed Central

    Zhu, He; Li, Qingbin; Hu, Yu

    2017-01-01

    Cracking due to temperature and restraint in mass concrete is an important issue. A temperature stress testing machine (TSTM) is an effective test method for studying the mechanism of temperature cracking. A synchronous closed-loop federated control TSTM system has been developed by adopting the design concepts of closed-loop federated control, a detachable mold design, a direct deformation-measurement method, and a temperature deformation compensation method. The results show that the self-developed system has the comprehensive ability to simulate different restraint degrees and multiple temperature and humidity modes, and to control multiple TSTMs in a closed loop during one test period. Additionally, the direct deformation-measurement method can obtain more accurate deformation and restraint-degree results with little local damage. The external temperature deformation affecting the concrete specimen can be eliminated by adopting the temperature deformation compensation method with different considerations of the steel materials. The concrete quality of different TSTMs can be guaranteed by vibrating the specimens synchronously on the vibrating stand. The detachable mold design and assembly method greatly reduce the difficulties of eccentric force and deformation. PMID:28772778

  3. Development of a new quantitative gas permeability method for dental implant-abutment connection tightness assessment

    PubMed Central

    2011-01-01

    Background: Most dental implant systems are presently made of two pieces: the implant itself and the abutment. The connection tightness between those two pieces is a key point to prevent bacterial proliferation, tissue inflammation and bone loss. The leak has been previously estimated by microbial, color tracer and endotoxin percolation. Methods: A new nitrogen flow technique was developed for implant-abutment connection leakage measurement, adapted from a recent, sensitive, reproducible and quantitative method used to assess endodontic sealing. Results: The results show very significant differences between various sealing and screwing conditions. The remaining flow was lower after key screwing compared to hand screwing (p = 0.03) and remained different from the negative test (p = 0.0004). The method reproducibility was very good, with a coefficient of variation of 1.29%. Conclusions: The presented new gas flow method therefore appears to be a simple and robust way to compare different implant systems. It allows successive measures without disconnecting the abutment from the implant and should in particular be used to assess the behavior of the connection before and after mechanical stress. PMID:21492459

  4. Methods comparison for microsatellite marker development: Different isolation methods, different yield efficiency

    NASA Astrophysics Data System (ADS)

    Zhan, Aibin; Bao, Zhenmin; Hu, Xiaoli; Lu, Wei; Hu, Jingjie

    2009-06-01

    Microsatellite markers have become one of the most important molecular tools used in various kinds of research. A large number of microsatellite markers are required for whole-genome surveys in the fields of molecular ecology, quantitative genetics and genomics. Therefore, it is necessary to select versatile, low-cost, efficient and time- and labor-saving methods to develop a large panel of microsatellite markers. In this study, we used the Zhikong scallop (Chlamys farreri) as the target species to compare the efficiency of five methods derived from three strategies for microsatellite marker development. The results showed that the strategy of constructing a small-insert genomic DNA library resulted in poor efficiency, while the microsatellite-enriched strategy greatly improved the isolation efficiency. Although the public-database mining strategy is time- and cost-saving, it is difficult to obtain a large number of microsatellite markers this way, mainly because of the limited sequence data of non-model species deposited in public databases. Based on the results of this study, we recommend two methods, the microsatellite-enriched library construction method and the FIASCO-colony hybridization method, for large-scale microsatellite marker development. Both methods are derived from the microsatellite-enriched strategy. The experimental results obtained for the Zhikong scallop also provide a reference for microsatellite marker development in other species with large genomes.

  5. Assessment of the effect of population and diary sampling methods on estimation of school-age children exposure to fine particles.

    PubMed

    Che, W W; Frey, H Christopher; Lau, Alexis K H

    2014-12-01

    Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling, diversity and autocorrelation, and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5) are quantified via case studies for children in Wake County, NC for July 2002. The estimated mean daily average exposure is 12.9 μg/m³ for simulated children using the stratified population sampling method, and 12.2 μg/m³ using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposures exceeding a benchmark of concern of 25 μg/m³ due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative. In case studies evaluated here, the MCC method led to 10% higher estimation of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern. © 2014 Society for Risk Analysis.

  6. Wind Tunnel Force Balance Calibration Study - Interim Results

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.

    2012-01-01

    Wind tunnel force balance calibration is performed using a variety of different methods and does not have a directly traceable standard such as those used for most calibration practices (weights and voltmeters). These different calibration methods and practices include, but are not limited to, the loading schedule, the load application hardware, manual and automatic systems, and re-leveling and non-re-leveling. A study of the balance calibration techniques used by NASA was undertaken to develop metrics for reviewing and comparing results using sample calibrations. The study also includes balances of different designs, single- and multi-piece. The calibration systems include the manual and automatic systems provided by NASA and its vendors. The results to date will be presented along with the techniques used for comparing them. In addition, future planned calibrations and investigations based on these results will be described.

  7. Value of the polymerase chain reaction method for detecting tuberculosis in the bronchial tissue involved by anthracosis.

    PubMed

    Mirsadraee, Majid; Shafahie, Ahmad; Reza Khakzad, Mohammad; Sankian, Mojtaba

    2014-04-01

    Anthracofibrosis is the black discoloration of the bronchial mucosa with deformity and obstruction. An association of this disease with tuberculosis (TB) has previously been established. The objective of this study was to determine the additional benefit of assessing TB by the polymerase chain reaction (PCR) method. Bronchoscopy was performed on 103 subjects (54 anthracofibrosis and 49 control subjects) who required bronchoscopy for their pulmonary problems. According to the bronchoscopic findings, participants were classified into anthracofibrosis and nonanthracotic groups. They were examined for TB with traditional methods, such as direct smear (Ziehl-Neelsen staining), Löwenstein-Jensen culture, and histopathology, and with the new method, PCR for the Mycobacterium tuberculosis genome (IS6110). Age, sex, smoking, and clinical findings were not significantly different between the TB and non-TB groups. Acid-fast bacilli could be detected by direct smear in 12 (25%) of the anthracofibrosis subjects, and adding the results of culture and histopathology indicated TB in 27 (31%) of the cases. Mycobacterium tuberculosis was diagnosed by PCR in 18 (33%) patients, but the difference was not significant. Detection of acid-fast bacilli in the nonanthracotic control subjects was significantly lower (3, 6%), but PCR (20, 40%) and the accumulated results from all traditional methods (22, 44%) showed a nonsignificant difference. The PCR method showed results comparable to the traditional methods, including the combination of smear, culture, and histopathology.

  8. A comparison of methods to assess the antimicrobial activity of nanoparticle combinations on bacterial cells.

    PubMed

    Bankier, Claire; Cheong, Yuen; Mahalingam, Suntharavathanan; Edirisinghe, Mohan; Ren, Guogang; Cloutman-Green, Elaine; Ciric, Lena

    2018-01-01

    Bacterial cell quantification after exposure to antimicrobial compounds varies widely throughout industry and healthcare. Numerous methods are employed to quantify these antimicrobial effects. With increasing demand for new preventative methods for disease control, we aimed to compare and assess common analytical methods used to determine antimicrobial effects of novel nanoparticle combinations on two different pathogens. Plate counts of total viable cells, flow cytometry (LIVE/DEAD BacLight viability assay) and qPCR (viability qPCR) were used to assess the antimicrobial activity of engineered nanoparticle combinations (NPCs) on Gram-positive (Staphylococcus aureus) and Gram-negative (Pseudomonas aeruginosa) bacteria at different concentrations (0.05, 0.10 and 0.25 w/v%). Results were analysed using linear models to assess the effectiveness of different treatments. Strong antimicrobial effects of the three NPCs (AMNP0-2) on both pathogens could be quantified using the plate count method and flow cytometry. The plate count method showed a high log reduction (>8-log) for bacteria exposed to high NPC concentrations. We found similar antimicrobial results using the flow cytometry live/dead assay. Viability qPCR analysis of antimicrobial activity could not be quantified due to interference of NPCs with qPCR amplification. Flow cytometry was determined to be the best method to measure antimicrobial activity of the novel NPCs due to high-throughput, rapid and quantifiable results.
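
    The log reduction quoted for the plate count method is computed from control and treated colony counts as log10(N_control / N_treated), with counts below the detection limit clipped to it. A minimal sketch with hypothetical counts:

    ```python
    import numpy as np

    def log_reduction(cfu_control, cfu_treated, detection_limit=1.0):
        """Log10 reduction from plate counts (CFU/mL); counts below the
        detection limit are clipped to it, so the result is a lower bound."""
        treated = max(cfu_treated, detection_limit)
        return np.log10(cfu_control / treated)

    # Example: 3e8 CFU/mL untreated vs no detectable colonies after
    # exposure to the highest NPC concentration (hypothetical numbers).
    print(f"{log_reduction(3e8, 0.0):.1f}-log reduction (>8-log)")
    ```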

  9. A review of the application of propensity score methods yielded increasing use, advantages in specific settings, but not substantially different estimates compared with conventional multivariable methods

    PubMed Central

    Stürmer, Til; Joshi, Manisha; Glynn, Robert J.; Avorn, Jerry; Rothman, Kenneth J.; Schneeweiss, Sebastian

    2006-01-01

    Objective: Propensity score analyses attempt to control for confounding in non-experimental studies by adjusting for the likelihood that a given patient is exposed. Such analyses have been proposed to address confounding by indication, but there is little empirical evidence that they achieve better control than conventional multivariate outcome modeling. Study design and methods: Using PubMed and Science Citation Index, we assessed the use of propensity scores over time and critically evaluated studies published through 2003. Results: Use of propensity scores increased from a total of 8 papers before 1998 to 71 in 2003. Most of the 177 published studies abstracted assessed medications (N=60) or surgical interventions (N=51), mainly in cardiology and cardiac surgery (N=90). Whether PS methods or conventional outcome models were used to control for confounding had little effect on results in those studies in which such comparison was possible. Only 9 out of 69 studies (13%) had an effect estimate that differed by more than 20% from that obtained with a conventional outcome model in all PS analyses presented. Conclusions: Publication of results based on propensity score methods has increased dramatically, but there is little evidence that these methods yield substantially different estimates compared with conventional multivariable methods. PMID:16632131

  10. Some remarks on using circulation classifications to evaluate circulation model and atmospheric reanalysis data

    NASA Astrophysics Data System (ADS)

    Stryhal, Jan; Huth, Radan

    2017-04-01

    Automated classifications of atmospheric circulation patterns represent a tool widely used for studying the circulation both in the real atmosphere, represented by atmospheric reanalyses, and in circulation model outputs. It is well known that the results of studies utilizing one of these methods are influenced by several subjective choices, of which one of the most crucial is the selection of the method itself. The authors of the present study used eight methods from the COST733 classification software (Grosswettertypes, two variants of Jenkinson-Collison, Lund, T-mode PCA with oblique rotation of principal components, k-medoids, k-means with differing starting partitions, and SANDRA) to assess the winter 1961-2000 daily sea level pressure patterns in five reanalysis datasets (ERA-40, NCEP-1, JRA-55, 20CRv2, and ERA-20C), as well as in the historical runs and 21st-century projections of an ensemble of CMIP5 GCMs. The classification methods were quite consistent in displaying the strongest biases in GCM simulations. However, the results also showed that multiple classifications are required to quantify the biases in certain types of circulation (e.g., zonal circulation or blocking-like patterns). There was no sign that any method has a tendency to over- or underestimate the biases in circulation type frequency. The bias found by a particular method for a particular domain clearly reflects the ability of the algorithm to detect groups of similar patterns within the data space, and whether these groups do or do not differ from one dataset to another is to a large extent coincidental. There were, nevertheless, systematic differences between groups of methods that use some form of correlation to classify the patterns into circulation types (CTs) and those that use the Euclidean distance. The comparison of reanalyses, which was conducted over eight European domains, showed that there is even a weak negative correlation between the average differences of CT frequency found by cluster analysis methods on one hand, and the remaining methods on the other. This suggests that groups of different methods capture different kinds of errors and that averaging the results obtained by an ensemble of methods very likely leads to an underestimation of the errors actually present in the data.

  11. Investigating the fate of activated sludge extracellular proteins in sludge digestion using sodium dodecyl sulfate polyacrylamide gel electrophoresis.

    PubMed

    Park, Chul; Helm, Richard F; Novak, John T

    2008-12-01

    The fate of activated sludge extracellular proteins in sludge digestion was investigated using three different cation-associated extraction methods and sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE). Extraction methods used were the cation exchange resin (CER) method for extracting calcium (Ca2+) and magnesium (Mg2+), sulfide extraction for removing iron, and base treatment (pH 10.5) for dissolving aluminum. Extracellular polymeric substances extracted were then subjected to SDS-PAGE, and the resultant protein profiles were examined before and after sludge digestion. The SDS-PAGE results showed that three methods led to different SDS-PAGE profiles for both undigested and digested sludges. The results further revealed that CER-extracted proteins remained mainly undegraded in anaerobic digestion, but were degraded in aerobic digestion. While the fate of sulfide- and base-extracted proteins was not clear for aerobic digestion, their changes in anaerobic digestion were elucidated. Most sulfide-extracted proteins were removed by anaerobic digestion, while the increase in protein band intensity and diversity was observed for base-extracted proteins. These results suggest that activated sludge flocs contain different fractions of proteins that are distinguishable by their association with certain cations and that each fraction undergoes different fates in anaerobic and aerobic digestion. The proteins that were resistant to degradation and generated during anaerobic digestion were identified by liquid chromatography tandem mass spectrometry. Protein identification results and their putative roles in activated sludge and anaerobic digestion are discussed in this study.

  12. Elemental Analysis of Beryllium Samples Using a Microzond-EGP-10 Unit

    NASA Astrophysics Data System (ADS)

    Buzoverya, M. E.; Karpov, I. A.; Gorodnov, A. A.; Shishpor, I. V.; Kireycheva, V. I.

    2017-12-01

    Results concerning the structural and elemental analysis of beryllium samples obtained via different technologies, studied using a Microzond-EGP-10 unit with the PIXE and RBS methods, are presented. The overall chemical composition and the nature of inclusions were determined. The mapping method made it possible to reveal the structural features of the beryllium samples: to distinguish grains of the main substance differing in size and chemical composition, to visualize the interfaces between regions of different composition, and to describe the distribution of impurities in the samples.

  13. A new mathematical approach for the estimation of the AUC and its variability under different experimental designs in preclinical studies.

    PubMed

    Navarro-Fontestad, Carmen; González-Álvarez, Isabel; Fernández-Teruel, Carlos; Bermejo, Marival; Casabó, Vicente Germán

    2012-01-01

    The aim of the present work was to develop a new mathematical method for estimating the area under the curve (AUC) and its variability that could be applied in different preclinical experimental designs and be amenable to implementation in standard calculation worksheets. In order to assess the usefulness of the new approach, different experimental scenarios were studied and the results were compared with those obtained with commonly used software: WinNonlin® and Phoenix WinNonlin®. The results do not show statistical differences between the AUC values obtained by the two procedures, but the new method appears to be a better estimator of the AUC standard error, measured as the coverage of the 95% confidence interval. In this way, the proposed method proves to be as useful as the WinNonlin® software where the latter is applicable. Copyright © 2011 John Wiley & Sons, Ltd.
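
    The AUC-plus-variability calculation for a destructive (one-sample-per-subject-per-time) preclinical design can be carried out in a worksheet-friendly way with a Bailer-type estimator. The sketch below is that standard estimator, not the paper's new method, and the data are invented.

    ```python
    import numpy as np

    def auc_bailer(times, conc):
        """Trapezoidal AUC and its standard error for a destructive-
        sampling design (one list of concentrations per time point),
        following a Bailer-type calculation:
        AUC = sum_i w_i * mean(C_i),  Var(AUC) = sum_i w_i^2 * var(C_i)/n_i.
        """
        t = np.asarray(times, dtype=float)
        # Trapezoidal weights: w_0 = (t1 - t0)/2, w_K = (tK - tK-1)/2,
        # and w_i = (t_{i+1} - t_{i-1})/2 in between.
        w = np.empty_like(t)
        w[0] = (t[1] - t[0]) / 2
        w[-1] = (t[-1] - t[-2]) / 2
        w[1:-1] = (t[2:] - t[:-2]) / 2
        means = np.array([np.mean(c) for c in conc])
        var_of_means = np.array([np.var(c, ddof=1) / len(c) for c in conc])
        return np.sum(w * means), np.sqrt(np.sum(w**2 * var_of_means))

    times = [0.5, 1, 2, 4, 8]                      # h
    conc = [[12, 14, 11], [20, 18, 22], [15, 17, 14], [8, 9, 7], [3, 4, 2]]
    print(auc_bailer(times, conc))                 # (AUC, standard error)
    ```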

  14. Detection of Temperature Difference in Neuronal Cells.

    PubMed

    Tanimoto, Ryuichi; Hiraiwa, Takumi; Nakai, Yuichiro; Shindo, Yutaka; Oka, Kotaro; Hiroi, Noriko; Funahashi, Akira

    2016-03-01

    For a better understanding of the mechanisms behind cellular functions, quantification of the heterogeneity in an organism or cells is essential. Recently, the importance of quantifying temperature has been highlighted, as it correlates with biochemical reaction rates. Several methods for detecting intracellular temperature have recently been established. Here we develop a novel method for sensing temperature in living cells based on fluorescence imaging of quantum dots. We apply the method to quantify the temperature difference in a human-derived neuronal cell line, SH-SY5Y. Our results show that temperatures in the cell body and neurites differ, suggesting that inhomogeneous heat production and dissipation occur within a cell. We estimate that the heterogeneous heat dissipation results from the characteristic shape of neuronal cells, which consist of several compartments formed with different surface-to-volume ratios. The inhomogeneous heat production is attributable to the localization of specific organelles as the heat source.

  15. Convergence and divergence across construction methods for human brain white matter networks: an assessment based on individual differences.

    PubMed

    Zhong, Suyu; He, Yong; Gong, Gaolang

    2015-05-01

    Using diffusion MRI, a number of studies have investigated the properties of whole-brain white matter (WM) networks with differing network construction methods (node/edge definition). However, how the construction methods affect individual differences of WM networks and, particularly, if distinct methods can provide convergent or divergent patterns of individual differences remain largely unknown. Here, we applied 10 frequently used methods to construct whole-brain WM networks in a healthy young adult population (57 subjects), which involves two node definitions (low-resolution and high-resolution) and five edge definitions (binary, FA weighted, fiber-density weighted, length-corrected fiber-density weighted, and connectivity-probability weighted). For these WM networks, individual differences were systematically analyzed in three network aspects: (1) a spatial pattern of WM connections, (2) a spatial pattern of nodal efficiency, and (3) network global and local efficiencies. Intriguingly, we found that some of the network construction methods converged in terms of individual difference patterns, but diverged with other methods. Furthermore, the convergence/divergence between methods differed among network properties that were adopted to assess individual differences. Particularly, high-resolution WM networks with differing edge definitions showed convergent individual differences in the spatial pattern of both WM connections and nodal efficiency. For the network global and local efficiencies, low-resolution and high-resolution WM networks for most edge definitions consistently exhibited a highly convergent pattern in individual differences. Finally, the test-retest analysis revealed a decent temporal reproducibility for the patterns of between-method convergence/divergence. Together, the results of the present study demonstrated a measure-dependent effect of network construction methods on the individual difference of WM network properties. © 2015 Wiley Periodicals, Inc.
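
    Network global and local efficiency, two of the properties compared across construction methods, can be computed with networkx once a binary adjacency matrix is in hand. In the sketch below, a thresholded random symmetric matrix merely stands in for one subject's binary WM network; the node count and threshold are placeholders.

    ```python
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical binary WM network: threshold a random symmetric
    # "connectivity" matrix to mimic one subject's binary edge definition.
    n_nodes = 90                                  # e.g. a low-resolution atlas
    w = rng.random((n_nodes, n_nodes))
    adj = ((w + w.T) / 2) > 0.85                  # binarise
    np.fill_diagonal(adj, False)                  # no self-connections

    G = nx.from_numpy_array(adj.astype(int))
    print("global efficiency:", nx.global_efficiency(G))
    print("local efficiency: ", nx.local_efficiency(G))
    ```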

  16. Optimization of enzymes-microwave-ultrasound assisted extraction of Lentinus edodes polysaccharides and determination of its antioxidant activity.

    PubMed

    Yin, Chaomin; Fan, Xiuzhi; Fan, Zhe; Shi, Defang; Gao, Hong

    2018-05-01

    Enzymes-microwave-ultrasound assisted extraction (EMUE) was used to extract Lentinus edodes polysaccharides (LEPs). The enzymatic temperature, enzymatic pH, microwave power and microwave time were optimized by response surface methodology. The yields, properties and antioxidant activities of LEPs from EMUE and other extraction methods, including hot-water extraction, enzymes-assisted extraction, microwave-assisted extraction and ultrasound-assisted extraction, were evaluated. The results showed that the highest LEPs yield of 9.38% was achieved with an enzymatic temperature of 48°C, enzymatic pH of 5.0, microwave power of 440 W and microwave time of 10 min, which correlated well with the predicted value of 9.79%. Additionally, LEPs from the different extraction methods possessed the typical absorption peaks of polysaccharides, which means the different extraction methods had no significant effect on the type of glycosidic bonds or the sugar ring of the LEPs. However, SEM images of LEPs from the different extraction methods were significantly different. Moreover, all the LEPs showed antioxidant activity, but LEPs from EMUE showed the highest reducing power compared to the other LEPs. The results indicate that LEPs from EMUE can be used as a natural antioxidant component in the pharmaceutical and functional food industries. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Using artificial intelligence strategies for process-related automated inspection in the production environment

    NASA Astrophysics Data System (ADS)

    Anding, K.; Kuritcyn, P.; Garten, D.

    2016-11-01

    In this paper, a new method for the automatic visual inspection of metallic surfaces is proposed, using Convolutional Neural Networks (CNN). Different combinations of network parameters were developed and tested. The obtained CNN results were analysed and compared with the results of our previous investigations, which used color and texture features as input parameters for a Support Vector Machine. Advantages and disadvantages of the different classification methods are explained.

  18. A Comparison of a Maximum Exertion Method and a Model-Based, Sub-Maximum Exertion Method for Normalizing Trunk EMG

    PubMed Central

    Cholewicki, Jacek; van Dieën, Jaap; Lee, Angela S.; Reeves, N. Peter

    2011-01-01

    The problem with normalizing EMG data from patients with painful symptoms (e.g. low back pain) is that such patients may be unwilling or unable to perform maximum exertions. Furthermore, the normalization to a reference signal, obtained from a maximal or sub-maximal task, tends to mask differences that might exist as a result of pathology. Therefore, we presented a novel method (GAIN method) for normalizing trunk EMG data that overcomes both problems. The GAIN method does not require maximal exertions (MVC) and tends to preserve distinct features in the muscle recruitment patterns for various tasks. Ten healthy subjects performed various isometric trunk exertions, while EMG data from 10 muscles were recorded and later normalized using the GAIN and MVC methods. The MVC method resulted in smaller variation between subjects when tasks were executed at the three relative force levels (10%, 20%, and 30% MVC), while the GAIN method resulted in smaller variation between subjects when the tasks were executed at the three absolute force levels (50 N, 100 N, and 145 N). This outcome implies that the MVC method provides a relative measure of muscle effort, while the GAIN-normalized EMG data gives an estimate of the absolute muscle force. Therefore, the GAIN-normalized EMG data tends to preserve the EMG differences between subjects in the way they recruit their muscles to execute various tasks, while the MVC-normalized data will tend to suppress such differences. The appropriate choice of the EMG normalization method will depend on the specific question that an experimenter is attempting to answer. PMID:21665489

  19. Three-dimensional numerical modeling of full-space transient electromagnetic responses of water in goaf

    NASA Astrophysics Data System (ADS)

    Chang, Jiang-Hao; Yu, Jing-Cun; Liu, Zhi-Xin

    2016-09-01

    The full-space transient electromagnetic responses of water-filled goaves in coal mines were numerically modeled. Traditional numerical modeling methods cannot be used to simulate the underground full-space transient electromagnetic field. We used multiple transmitting loops instead of the traditional single transmitting loop to load the transmitting loop into Cartesian grids, and we improved the method for calculating the z-component of the magnetic field based on the characteristics of the full space. We then established a full-space 3D geoelectrical model using geological data for coal mines. The transient electromagnetic responses of water-filled goaves of variable shape at different locations were simulated using the finite-difference time-domain (FDTD) method, and the apparent resistivity results were evaluated. The numerical modeling results suggest that the resistivity differences between the coal seam and its roof and floor greatly affect the distribution of apparent resistivity, resulting in nearly circular contours centered on the roadway head. The actual distribution of apparent resistivity for the different geoelectrical models of goaf water was consistent with the models. However, when the goaf water was located on one side, a false low-resistivity anomaly appeared on the other side owing to the full-space effect, although the response was much weaker. The modeling results were subsequently confirmed by drilling, suggesting that the proposed method is effective.

  20. Determination of fossil carbon content in Swedish waste fuel by four different methods.

    PubMed

    Jones, Frida C; Blomqvist, Evalena W; Bisaillon, Mattias; Lindberg, Daniel K; Hupa, Mikko

    2013-10-01

    This study aimed to determine the content of fossil carbon in waste combusted in Sweden by using four different methods at seven geographically spread combustion plants. In total, the measurement campaign included 42 solid samples, 21 flue gas samples, 3 sorting analyses and 2 investigations using the balance method. The fossil carbon content in the solid samples and in the flue gas samples was determined using ¹⁴C analysis. From the analyses it was concluded that about a third of the carbon in mixed Swedish waste (municipal solid waste and industrial waste collected at Swedish industry sites) is fossil. The two other methods (the balance method and calculations from sorting analyses), based on assumptions and calculations, gave similar results in the plants in which they were used. Furthermore, the results indicate that the difference between samples containing as much as 80% industrial waste and samples consisting of solely municipal solid waste was not as large as expected. Besides investigating the fossil content of the waste, the project was also established to investigate the usability of various methods. However, it is difficult to directly compare the different methods used in this project because besides the estimation of emitted fossil carbon the methods provide other information, which is valuable to the plant owner. Therefore, the choice of method can also be controlled by factors other than direct determination of the fossil fuel emissions when considering implementation in the combustion plants.
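
    The ¹⁴C-based determination rests on a simple mass balance: fossil carbon contains no ¹⁴C, so the fossil fraction follows from the sample's ¹⁴C content relative to a purely biogenic reference. A sketch follows; the 100 pMC reference value is a simplifying assumption, since real campaigns correct it for bomb-carbon enrichment.

    ```python
    def fossil_carbon_fraction(pmc_sample, pmc_biogenic_ref=100.0):
        """Fossil fraction of the carbon in a fuel from its 14C content.

        Fossil carbon is 14C-free, so the fossil fraction is
        1 - (sample 14C content) / (14C content of a purely biogenic
        reference), both in percent modern carbon (pMC)."""
        return 1.0 - pmc_sample / pmc_biogenic_ref

    # Example: a waste sample measuring 67 pMC -> about one third fossil.
    print(f"{fossil_carbon_fraction(67.0):.2f}")
    ```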

  1. Determination of serum albumin, analytical challenges: a French multicenter study.

    PubMed

    Rossary, Adrien; Blondé-Cynober, Françoise; Bastard, Jean-Philippe; Beauvieux, Marie-Christine; Beyne, Pascale; Drai, Jocelyne; Lombard, Christine; Anglard, Ingrid; Aussel, Christian; Claeyssens, Sophie; Vasson, Marie-Paule

    2017-06-01

    Among the biological markers of morbidity and mortality, albumin holds a key place in the range of criteria used by the French High Authority for Health (HAS) for the assessment of malnutrition and for the coding of the information system medicalization program (PMSI). Although the principles of the quantification methods have not changed in recent years, the dispersion of external quality assessment (EEQ) data shows that standardization using the certified reference material (CRM) 470 is not optimal. The aim of this multicenter study involving 7 sites, conducted by a working group of the French Society of Clinical Biology (SFBC), was to assess whether albumin values depend on the analytical system used. Albumin from plasma (n=30) and serum (n=8) pools was quantified by 5 different methods [bromocresol green (VBC) and bromocresol purple (PBC) colorimetry, immunoturbidimetry (IT), immunonephelometry (IN) and capillary electrophoresis (CE)] on 12 analyzers. The Bland and Altman test was used to evaluate the differences between the results obtained by the different methods. A difference as large as 13 g/L was observed for the same sample between methods (p <0.001) in the concentration range of 30 to 35 g/L. VBC overestimated albumin across the range of values tested compared with PBC (p <0.05). The PBC method gave results similar to IN for values lower than 40 g/L. For the IT methods, one of the technique/analyzer tandems underestimated albumin values, inducing a difference in performance between the immunoprecipitation methods (IT vs IN, p <0.05). Albumin results therefore depend on the technique/analyzer tandem used; this variability is usually not taken into account by the clinician. Thus, clinicians and biologists must be aware of, and check according to the method used, the albumin thresholds identified as risk factors for complications related to malnutrition and used for PMSI coding.

  2. Weighted small subdomain filtering technology

    NASA Astrophysics Data System (ADS)

    Tai, Zhenhua; Zhang, Fengxu; Zhang, Fengqin; Zhang, Xingzhou; Hao, Mengcheng

    2017-09-01

    A high-resolution method to delineate the horizontal edges of gravity sources is presented, obtained by improving the three-directional small subdomain filtering (TDSSF). The proposed method is the weighted small subdomain filtering (WSSF). The WSSF uses a numerical difference instead of the phase conversion in the TDSSF to reduce the computational complexity. To make the WSSF less sensitive to noise, the numerical difference is combined with an averaging algorithm. Unlike the TDSSF, the WSSF uses a weighted sum to integrate the numerical difference results along four directions into one contour map, making its interpretation more convenient and accurate. The locations of tightened gradient belts in the WSSF result are used to define the edges of sources. The proposed method is tested on synthetic data. The test results show that the WSSF delineates the horizontal edges of sources more clearly and correctly, even when the sources interfere with one another and the data are corrupted with random noise. Finally, the WSSF and two other known methods are each applied to a real data set. The edges detected by the WSSF are sharper and more accurate.
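
    A rough numpy rendering of the WSSF idea described above - directional numerical differences, local averaging against noise, and a weighted sum into one contourable map - is sketched below. The direction stencils, weights, and smoothing window are guesses for illustration, not the paper's actual operators.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def wssf_like_edges(grid, weights=(1.0, 1.0, 1.0, 1.0), smooth=3):
        """Directional numerical differences of a gridded gravity field
        (0, 45, 90, 135 degrees), each smoothed by a local average to
        suppress noise, then combined by a weighted sum; ridges of the
        resulting map outline the source edges."""
        d0 = np.abs(np.gradient(grid, axis=1))                     # E-W
        d90 = np.abs(np.gradient(grid, axis=0))                    # N-S
        d45 = np.abs(grid - np.roll(np.roll(grid, 1, 0), 1, 1))    # diagonal
        d135 = np.abs(grid - np.roll(np.roll(grid, 1, 0), -1, 1))  # anti-diag
        parts = [uniform_filter(d, smooth) for d in (d0, d45, d90, d135)]
        return sum(w * p for w, p in zip(weights, parts))

    # Synthetic anomaly: a buried block-like source plus random noise.
    y, x = np.mgrid[0:128, 0:128]
    field = np.exp(-(((x - 64) / 20) ** 2 + ((y - 64) / 12) ** 2))
    field += 0.01 * np.random.default_rng(1).normal(size=field.shape)
    edges = wssf_like_edges(field)   # ridges of this map outline the source
    ```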

  3. Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel; Wang, Z. J.

    2004-01-01

    A new, high-order, conservative, and efficient method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. A discussion on the Discontinuous Spectral Difference (SD) Method, locations of the unknowns and flux points and numerical results are also presented.

  4. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firths approach under different sample size

    NASA Astrophysics Data System (ADS)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of the binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, the MLE method has a limitation if the binary data contain separation. Separation is the condition where one or several independent variables exactly separate the categories of the binary response. It causes the MLE estimators to fail to converge, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims: first, to identify the chance of separation occurring in the binary probit regression model for the MLE method and for Firth's approach; second, to compare the performance of the binary probit regression estimators obtained by the MLE method and by Firth's approach using the RMSE criterion. Both are investigated by simulation under different sample sizes. The results showed that the chance of separation occurring with the MLE method for small sample sizes is higher than with Firth's approach. For larger sample sizes, the probability decreases and is relatively similar between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLE estimators, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
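
    Separation and Firth's remedy are easy to reproduce numerically: with perfectly separated data the probit likelihood has no finite maximizer, while adding Firth's Jeffreys-prior penalty (half the log-determinant of the Fisher information) keeps the estimate finite. The sketch below implements both objectives directly with scipy; it is a generic illustration, not the simulation design of the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(42)

    # Data with complete separation: x > 0 perfectly predicts y = 1.
    n = 30
    x = rng.normal(size=n)
    y = (x > 0).astype(float)
    X = np.column_stack([np.ones(n), x])

    def neg_penalized_loglik(beta, penalized):
        eta = X @ beta
        p = np.clip(norm.cdf(eta), 1e-12, 1 - 1e-12)
        ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
        if penalized:
            # Firth/Jeffreys penalty: + 0.5 * log|I(beta)|, with the probit
            # Fisher information I = X' W X, W = phi(eta)^2 / (p * (1 - p)).
            w = norm.pdf(eta) ** 2 / (p * (1 - p))
            ll += 0.5 * np.linalg.slogdet(X.T @ (w[:, None] * X))[1]
        return -ll

    mle = minimize(neg_penalized_loglik, np.zeros(2), args=(False,), method="BFGS")
    firth = minimize(neg_penalized_loglik, np.zeros(2), args=(True,), method="BFGS")
    print("MLE slope (drifts to a large value):", mle.x[1])
    print("Firth-type slope (stays finite):    ", firth.x[1])
    ```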

  5. Assessment of forward head posture in females: observational and photogrammetry methods.

    PubMed

    Salahzadeh, Zahra; Maroufi, Nader; Ahmadi, Amir; Behtash, Hamid; Razmjoo, Arash; Gohari, Mahmoud; Parnianpour, Mohamad

    2014-01-01

    There are different methods to assess forward head posture (FHP), but the accuracy and discrimination ability of these methods are not clear. Here, we compare three postural angles for FHP assessment and study the discrimination accuracy of three photogrammetric methods in differentiating groups categorized by the observational method. Seventy-eight healthy female participants (23 ± 2.63 years) were classified into three groups - moderate-severe FHP, slight FHP and no FHP - based on observational postural assessment rules. Three photogrammetric measures - the craniovertebral angle, head tilt angle and head position angle - were applied to measure FHP objectively. A one-way ANOVA test showed a significant difference in the craniovertebral angle among the three groups (P < 0.05, F = 83.07). There was no significant difference in the head tilt angle or head position angle among the three groups. According to the Linear Discriminant Analysis (LDA) results, the canonical discriminant function (Wilks' Lambda) was 0.311 for the craniovertebral angle, with 79.5% of cross-validated grouped cases correctly classified. Our results showed that the craniovertebral angle method may discriminate females with moderate-severe FHP from those without FHP more accurately than the head position angle and head tilt angle. The photogrammetric method had excellent inter- and intra-rater reliability for assessing head and cervical posture.
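
    The craniovertebral angle itself is straightforward to compute from two digitized landmarks (the C7 spinous process and the tragus) on a lateral photograph. A sketch using a common definition of the angle follows; the study's exact photogrammetric protocol may differ.

    ```python
    import numpy as np

    def craniovertebral_angle(c7, tragus):
        """Craniovertebral angle (degrees): angle between the horizontal
        line through the C7 spinous-process marker and the line from C7
        to the tragus of the ear, on a lateral photograph with x
        increasing forward and y increasing upward."""
        dx, dy = tragus[0] - c7[0], tragus[1] - c7[1]
        return np.degrees(np.arctan2(dy, dx))

    # Example coordinates: a smaller angle indicates more forward head posture.
    print(f"{craniovertebral_angle(c7=(100, 200), tragus=(180, 260)):.1f} deg")
    ```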

  6. A statistical assessment of differences and equivalences between genetically modified and reference plant varieties

    PubMed Central

    2011-01-01

    Background: Safety assessment of genetically modified organisms is currently often performed by comparative evaluation. However, natural variation of plant characteristics between commercial varieties is usually not considered explicitly in the statistical computations underlying the assessment. Results: Statistical methods are described for the assessment of the difference between a genetically modified (GM) plant variety and a conventional non-GM counterpart, and for the assessment of the equivalence between the GM variety and a group of reference plant varieties which have a history of safe use. It is proposed to present the results of both difference and equivalence testing for all relevant plant characteristics simultaneously in one or a few graphs, as an aid for further interpretation in safety assessment. A procedure is suggested to derive equivalence limits from the observed results for the reference plant varieties using a specific implementation of the linear mixed model. Three different equivalence tests are defined to classify any result in one of four equivalence classes. The performance of the proposed methods is investigated by a simulation study, and the methods are illustrated on compositional data from a field study on maize grain. Conclusions: A clear distinction of practical relevance is shown between difference and equivalence testing. The proposed tests are shown to have appropriate performance characteristics by simulation, and the proposed simultaneous graphical representation of results was found to be helpful for the interpretation of results from a practical field trial data set. PMID:21324199
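
    The proposed assessment pairs a standard difference test with an equivalence test against limits derived from the reference varieties. The sketch below shows the two tests for a single characteristic using a TOST-style (two one-sided tests) procedure with a fixed, user-supplied equivalence margin; the paper's mixed-model derivation of that margin is not reproduced, and the data are invented.

    ```python
    import numpy as np
    from scipy import stats

    def difference_and_equivalence(gm, ref, eq_margin):
        """Two-sample t-test for difference (GM vs conventional
        counterpart) plus a TOST-style equivalence test of the GM mean
        against reference-derived limits. eq_margin is the half-width
        of the equivalence interval around the reference mean."""
        t, p_diff = stats.ttest_ind(gm, ref)
        lo, hi = np.mean(ref) - eq_margin, np.mean(ref) + eq_margin
        # Two one-sided tests: mean(gm) > lo and mean(gm) < hi.
        n, m, s = len(gm), np.mean(gm), stats.sem(gm)
        p_lo = stats.t.sf((m - lo) / s, df=n - 1)   # H0: mean <= lo
        p_hi = stats.t.cdf((m - hi) / s, df=n - 1)  # H0: mean >= hi
        return p_diff, max(p_lo, p_hi)   # equivalence shown if 2nd < alpha

    gm = np.array([9.8, 10.1, 10.3, 9.9, 10.2])
    ref = np.array([10.0, 10.4, 9.7, 10.1, 10.2])
    print(difference_and_equivalence(gm, ref, eq_margin=0.8))
    ```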

  7. Methodology for Benefit Analysis of CAD/CAM (Computer-Aided Design/Computer-Aided Manufacturing) in USN Shipyards.

    DTIC Science & Technology

    1984-03-01

    The abstract of this report is largely illegible in the scanned source. Recoverable fragments indicate that a difference calculation would result in erroneously lower productivity ratios, that only two topics are not adequately addressed by the method, and that the benefit-analysis methods surveyed include Chasen's method (as applied by Long Beach M.S.), Shah & Yan's method, and CARDOS productivity measurement.

  8. [Full Sibling Identification by IBS Scoring Method and Establishment of the Query Table of Its Critical Value].

    PubMed

    Li, R; Li, C T; Zhao, S M; Li, H X; Li, L; Wu, R G; Zhang, C C; Sun, H Y

    2017-04-01

    To establish a query table of IBS critical values and identification powers for detection systems with different numbers of STR loci under different false-judgment standards. Samples from 267 pairs of full siblings and 360 pairs of unrelated individuals were collected, and 19 autosomal STR loci were genotyped with the Goldeneye™ 20A system. Full siblings were determined using the IBS scoring method according to the 'Regulation for biological full sibling testing'. The critical values and identification powers for detection systems with different numbers of STR loci under different false-judgment standards were calculated by theoretical methods. According to the formal IBS scoring criteria, the identification power for full siblings and unrelated individuals was 0.7640 and the rate of false judgment was 0. The results of the theoretical calculation were consistent with those of the sample observation. A query table of IBS critical values for full-sibling identification with detection systems of different numbers of STR loci was successfully established. The IBS scoring method defined by the regulation has high detection efficiency and a low false-judgment rate, providing a relatively conservative result. The query table provides important reference data for judging the results of full-sibling testing and has considerable practical value. Copyright© by the Editorial Department of Journal of Forensic Medicine
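
    The IBS score underlying the method is simply the number of alleles shared identical-by-state at each locus, summed over all typed loci and compared with the critical value for the panel size. A minimal sketch (locus names and genotypes are illustrative):

    ```python
    def ibs_score(profile_a, profile_b):
        """Total identity-by-state (IBS) score over shared STR loci.

        Each profile maps locus name -> (allele1, allele2). Per locus
        the score is 2 if both alleles match, 1 if exactly one allele
        can be paired, and 0 otherwise."""
        total = 0
        for locus in profile_a.keys() & profile_b.keys():
            a, b = sorted(profile_a[locus]), sorted(profile_b[locus])
            if a == b:
                total += 2
            elif set(a) & set(b):
                total += 1
        return total

    p1 = {"D3S1358": (15, 16), "vWA": (17, 18), "FGA": (21, 24)}
    p2 = {"D3S1358": (15, 16), "vWA": (16, 18), "FGA": (22, 25)}
    print(ibs_score(p1, p2))   # 2 + 1 + 0 = 3
    ```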

  9. Flow Cytometry of Human Primary Epidermal and Follicular Keratinocytes

    PubMed Central

    Gragnani, Alfredo; Ipolito, Michelle Zampieri; Sobral, Christiane S; Brunialti, Milena Karina Coló; Salomão, Reinaldo; Ferreira, Lydia Masako

    2008-01-01

    Objective: The aim of this study was to characterize, by flow cytometry, cultured human primary keratinocytes isolated from the epidermis and from hair follicles by different methods. Methods: Human keratinocytes derived from discarded fragments of total skin and scalp hair follicles from patients who underwent plastic surgery in the Plastic Surgery Division at UNIFESP were used. The epidermal keratinocytes were isolated by 3 different methods: the standard method, exposure to trypsin for 30 minutes; the second, treatment with dispase for 18 hours and with trypsin for 10 minutes; and the third, treatment with dispase for 18 hours and with trypsin for 30 minutes. Follicular keratinocytes were isolated using the standard method. Results: On comparing the group treated with dispase for 18 hours and with trypsin for 10 minutes with the group treated with dispase for 18 hours and with trypsin for 30 minutes, the first group presented the largest number of viable cells and the smallest number of cells in late apoptosis and necrosis, with statistical significance, and no difference in apoptosis. When we compared the group treated with dispase for 18 hours and with trypsin for 10 minutes with the group treated with trypsin alone, the first group presented the largest number of viable cells and the smallest number of cells in apoptosis, with statistical significance, and no difference in late apoptosis and necrosis. When we compared the results of the group treated with dispase for 18 hours and with trypsin for 10 minutes with the results for follicular isolation, there was a statistical difference in apoptosis and viable cells. Conclusion: The isolation method of treatment with dispase for 18 hours and with trypsin for 10 minutes produced the largest number of viable cells and the smallest number of cells in apoptosis/necrosis. PMID:18350110

  10. Simulation of 2D rarefied gas flows based on the numerical solution of the Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Poleshkin, Sergey O.; Malkov, Ewgenij A.; Kudryavtsev, Alexey N.; Shershnev, Anton A.; Bondar, Yevgeniy A.; Kohanchik, A. A.

    2017-10-01

    There are various methods for calculating rarefied gas flows, in particular statistical methods and deterministic methods based on finite-difference solutions of the nonlinear Boltzmann kinetic equation and on solutions of model kinetic equations. There is no universal method; each has its disadvantages in terms of efficiency or accuracy. The choice of method depends on the problem to be solved and on the parameters of the calculated flows. Qualitative theoretical arguments help to determine, for each method, the range of parameters over which problems are solved effectively; however, it is advisable to compare calculations of classical problems performed by different methods and with different parameters to obtain quantitative confirmation of this reasoning. The paper provides the results of calculations performed by the authors with the Direct Simulation Monte Carlo method and with finite-difference methods for solving the Boltzmann equation and model kinetic equations. Based on this comparison, conclusions are drawn on selecting a particular method for flow simulations in various ranges of flow parameters.

  11. Validation for Vegetation Green-up Date Extracted from GIMMS NDVI and NDVI3g Using Variety of Methods

    NASA Astrophysics Data System (ADS)

    Chang, Q.; Jiao, W.

    2017-12-01

    Phenology is a sensitive and critical feature of vegetation change that has been regarded as a good indicator in climate change studies. So far, a variety of remote sensing data sources and methods for extracting phenology from satellite datasets have been developed to study the spatial-temporal dynamics of vegetation phenology. However, the differences between vegetation phenology results caused by the various satellite datasets and phenology extraction methods are not clear, and the reliability of the different phenology results extracted from remote sensing datasets has not been verified and compared using ground observation data. Based on the three most popular remote sensing phenology extraction methods, this research calculated the start of the growing season (SOS) for each pixel in the Northern Hemisphere for two long time series satellite datasets: GIMMS NDVIg (SOSg) and GIMMS NDVI3g (SOS3g). The three methods used in this research are the maximum increase method, the dynamic threshold method, and the midpoint method. This study then used SOS calculated from NEE datasets (SOS_NEE), monitored at 48 eddy flux tower sites from the global flux network, to validate the reliability of the six phenology results calculated from remote sensing datasets. Results showed that neither SOSg nor SOS3g extracted by the maximum increase method was correlated with the ground-observed phenology metrics. SOSg and SOS3g extracted by the dynamic threshold method and the midpoint method were both correlated significantly with SOS_NEE. Compared with SOSg extracted by the dynamic threshold method, SOSg extracted by the midpoint method had a stronger correlation with SOS_NEE; the same held for SOS3g. Additionally, SOSg showed a stronger correlation with SOS_NEE than SOS3g extracted by the same method. SOS extracted by the midpoint method from the GIMMS NDVIg dataset appeared to be the most reliable result when validated against SOS_NEE. These results can be used as a reference for data and method selection in future phenology studies.
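
    Of the three extraction methods, the midpoint method is the simplest to illustrate. The sketch below is a minimal implementation under stated assumptions (a smoothed, daily-interpolated annual NDVI curve; threshold 0.5 for the midpoint, with other thresholds giving a dynamic-threshold variant); it is not the processing chain used in the study.

```python
import numpy as np

def sos_midpoint(ndvi, threshold=0.5):
    """Start of season via the midpoint method: the first day the normalized
    NDVI ratio rises through `threshold` before the seasonal maximum.
    `ndvi` is assumed to be a smoothed, daily-interpolated annual curve.
    threshold=0.5 is the classic midpoint; other values give the
    dynamic-threshold variant."""
    ratio = (ndvi - ndvi.min()) / (ndvi.max() - ndvi.min())
    peak = int(np.argmax(ratio))
    for day in range(1, peak + 1):
        if ratio[day - 1] < threshold <= ratio[day]:
            return day + 1  # 1-based day of year
    return None

# Synthetic annual NDVI curve, for illustration only.
doy = np.arange(365)
ndvi = 0.2 + 0.5 * np.exp(-((doy - 200) / 60.0) ** 2)
print("SOS (day of year):", sos_midpoint(ndvi))
```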

  12. SU-E-I-08: Investigation of Deconvolution Methods for Blocker-Based CBCT Scatter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, C; Jin, M; Ouyang, L

    2015-06-15

    Purpose: To investigate whether deconvolution methods can improve the scatter estimation under different blurring and noise conditions for blocker-based scatter correction methods for cone-beam X-ray computed tomography (CBCT). Methods: An “ideal” projection image with scatter was first simulated for blocker-based CBCT data acquisition by assuming no blurring effect and no noise. The ideal image was then convolved with long-tail point spread functions (PSF) with different widths to mimic the blurring effect from the finite focal spot and detector response. Different levels of noise were also added. Three deconvolution methods: 1) inverse filtering; 2) Wiener; and 3) Richardson-Lucy, were used to recover the scatter signal in the blocked region. The root mean square error (RMSE) of estimated scatter serves as a quantitative measure for the performance of different methods under different blurring and noise conditions. Results: Due to the blurring effect, the scatter signal in the blocked region is contaminated by the primary signal in the unblocked region. The direct use of the signal in the blocked region to estimate scatter (“direct method”) leads to large RMSE values, which increase with the increased width of PSF and increased noise. The inverse filtering is very sensitive to noise and practically useless. The Wiener and Richardson-Lucy deconvolution methods significantly improve scatter estimation compared to the direct method. For a typical medium PSF and medium noise condition, both methods (∼20 RMSE) can achieve 4-fold improvement over the direct method (∼80 RMSE). The Wiener method deals better with large noise and Richardson-Lucy works better on wide PSF. Conclusion: We investigated several deconvolution methods to recover the scatter signal in the blocked region for blocker-based scatter correction for CBCT. Our simulation results demonstrate that Wiener and Richardson-Lucy deconvolution can significantly improve the scatter estimation compared to the direct method.
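
    The two deconvolution methods that performed well can be sketched compactly in one dimension. The following is an illustrative toy example, not the paper's simulation: the PSF shape, noise level, and regularization constant are assumptions. It contrasts frequency-domain Wiener deconvolution with Richardson-Lucy iterations and reports RMSE against the known truth, in the spirit of the comparison above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "scatter" signal blurred by a long-tail PSF, plus noise.
n = 256
truth = np.exp(-((np.arange(n) - 128) / 40.0) ** 2)
x = np.arange(n) - n // 2
psf = 1.0 / (1.0 + (x / 4.0) ** 2)        # long-tail (Lorentzian-like) PSF
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))     # transfer function, center at index 0
blurred = np.real(np.fft.ifft(np.fft.fft(truth) * H))
observed = blurred + rng.normal(0, 0.01, n)

# Wiener deconvolution: regularized inverse filter. K plays the role of the
# noise-to-signal power ratio; its value here is an assumption.
K = 1e-3
G = np.conj(H) / (np.abs(H) ** 2 + K)
wiener_est = np.real(np.fft.ifft(np.fft.fft(observed) * G))

# Richardson-Lucy: multiplicative updates assuming Poisson-like noise.
est = np.full(n, observed.clip(min=1e-6).mean())
for _ in range(50):
    conv = np.real(np.fft.ifft(np.fft.fft(est) * H)).clip(min=1e-12)
    ratio = observed.clip(min=0) / conv
    est *= np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(H)))

print("RMSE direct :", np.sqrt(np.mean((observed - truth) ** 2)))
print("RMSE Wiener :", np.sqrt(np.mean((wiener_est - truth) ** 2)))
print("RMSE R-L    :", np.sqrt(np.mean((est - truth) ** 2)))
```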

  13. The influence of different black carbon and sulfate mixing methods on their optical and radiative properties

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Zhou, Chen; Wang, Zhili; Zhao, Shuyun; Li, Jiangnan

    2015-08-01

    Three different internal mixing methods (Core-Shell, Maxwell-Garnett, and Bruggeman) and one external mixing method are used to study the impact of mixing methods of black carbon (BC) with sulfate aerosol on their optical properties, radiative flux, and heating rate. The optical properties of a mixture of BC and sulfate aerosol particles are considered for three typical bands. The results show that mixing methods, the volume ratio of BC to sulfate, and relative humidity have a strong influence on the optical properties of mixed aerosols. Compared to internal mixing, external mixing underestimates the particle mass absorption coefficient by 20-70% and the particle mass scattering coefficient by up to 50%, whereas it overestimates the particle single scattering albedo by 20-50% in most cases. However, the asymmetry parameter is strongly sensitive to the equivalent particle radius, but is only weakly sensitive to the different mixing methods. Of the internal methods, there is less than 2% difference in all optical properties between the Maxwell-Garnett and Bruggeman methods in all bands; however, the differences between the Core-Shell and Maxwell-Garnett/Bruggeman methods are usually larger than 15% in the ultraviolet and visible bands. A sensitivity test is conducted with the Beijing Climate Center Radiation transfer model (BCC-RAD) using a simulated BC concentration that is typical of east-central China and a sulfate volume ratio of 75%. The results show that the internal mixing methods could reduce the radiative flux more effectively because they produce a higher absorption. The annual mean instantaneous radiative forcing due to BC-sulfate aerosol is about -3.18 W/m2 for the external method and -6.91 W/m2 for the internal methods at the surface, and -3.03/-1.56/-1.85 W/m2 for the external/Core-Shell/(Maxwell-Garnett/Bruggeman) methods, respectively, at the tropopause.
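
    The Maxwell-Garnett rule itself is a one-line formula. The sketch below computes an effective permittivity for a BC volume fraction of 25% in sulfate (matching the 75% sulfate volume ratio quoted above); the refractive indices are illustrative values, not necessarily those used in the paper, and the resulting effective index would normally be fed into a Mie code to obtain the mass absorption and scattering coefficients.

```python
import numpy as np

def maxwell_garnett(eps_incl, eps_host, f):
    """Effective permittivity of inclusions (volume fraction f) in a host
    medium, from the Maxwell-Garnett mixing rule."""
    beta = (eps_incl - eps_host) / (eps_incl + 2.0 * eps_host)
    return eps_host * (1.0 + 2.0 * f * beta) / (1.0 - f * beta)

# Illustrative refractive indices (not the paper's values) for BC and sulfate
# at a visible wavelength; permittivity = m**2 for non-magnetic media.
m_bc, m_sulf = 1.95 + 0.79j, 1.53 + 1e-7j
f_bc = 0.25  # BC volume fraction (sulfate volume ratio 75%, as in the abstract)
eps_eff = maxwell_garnett(m_bc**2, m_sulf**2, f_bc)
m_eff = np.sqrt(eps_eff)
print("effective refractive index:", m_eff)
```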

  14. On the effectiveness of recession analysis methods for capturing the characteristic storage-discharge relation: An intercomparison study

    NASA Astrophysics Data System (ADS)

    Chen, X.; Kumar, M.; Basso, S.; Marani, M.

    2017-12-01

    Storage-discharge (S-Q) relations are widely used to derive watershed properties and predict streamflow responses. These relations are often obtained using different recession analysis methods, which vary in their recession period identification criteria and in the Q vs. -dQ/dt fitting scheme. Although previous studies have indicated that different recession analysis methods can result in significantly different S-Q relations and subsequently derived hydrological variables, this observation has often been overlooked and S-Q relations have been used in 'as is' form. This study evaluated the effectiveness of four recession analysis methods in obtaining the characteristic S-Q relation and reconstructing the streamflow. Results indicate that while some methods generally performed better than others, none of them consistently outperformed the others. Even the best-performing method could not yield accurate reconstructed streamflow time series or PDFs in some watersheds, implying either that the derived S-Q relations might not be reliable or that S-Q relations cannot be used for hydrological simulations. Notably, the accuracy of the methods is influenced by the extent of scatter in the ln(-dQ/dt) vs. ln(Q) plot. In addition, the derived S-Q relation was very sensitive to the criteria used for identifying recession periods. This result raises a warning sign against indiscriminate application of recession analysis methods and derived S-Q relations for watershed characterization or hydrologic simulations. A thorough evaluation of the representativeness of the derived S-Q relation should be performed before it is used for hydrologic analysis.
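
    A minimal version of such a recession analysis is sketched below: recession periods are identified as runs of declining flow of a minimum length (exactly the kind of identification criterion the study shows matters), and a power law -dQ/dt = aQ^b is fitted in log-log space. The synthetic discharge series and the minimum-length choice are illustrative assumptions.

```python
import numpy as np

def recession_fit(q, min_len=5):
    """Fit -dQ/dt = a * Q**b over recession periods, identified here as
    runs of >= min_len consecutive declining-flow steps. This identification
    criterion is one of the method choices the study shows can change the
    resulting S-Q relation."""
    dq = np.diff(q)
    lnQ, lnR = [], []
    i = 0
    while i < len(dq):
        j = i
        while j < len(dq) and dq[j] < 0:
            j += 1
        if j - i >= min_len:                  # a qualifying recession period
            seg = q[i:j + 1]
            rate = -np.diff(seg)
            lnQ.extend(np.log(seg[:-1]))
            lnR.extend(np.log(rate))
        i = j + 1
    b, ln_a = np.polyfit(lnQ, lnR, 1)
    return np.exp(ln_a), b

# Synthetic discharge with two exponential recessions, for illustration only.
t = np.arange(30)
q = np.concatenate([10 * np.exp(-0.1 * t), 8 * np.exp(-0.1 * t)])
a, b = recession_fit(q)
print(f"-dQ/dt ~= {a:.3f} * Q^{b:.2f}")
```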

  15. A comparison of four porewater sampling methods for metal mixtures and dissolved organic carbon and the implications for sediment toxicity evaluations

    USGS Publications Warehouse

    Cleveland, Danielle; Brumbaugh, William G.; MacDonald, Donald D.

    2017-01-01

    Evaluations of sediment quality conditions are commonly conducted using whole-sediment chemistry analyses but can be enhanced by evaluating multiple lines of evidence, including measures of the bioavailable forms of contaminants. In particular, porewater chemistry data provide information that is directly relevant for interpreting sediment toxicity data. Various methods for sampling porewater for trace metals and dissolved organic carbon (DOC), which is an important moderator of metal bioavailability, have been employed. The present study compares the peeper, push point, centrifugation, and diffusive gradients in thin films (DGT) methods for the quantification of 6 metals and DOC. The methods were evaluated at low and high concentrations of metals in 3 sediments having different concentrations of total organic carbon and acid volatile sulfide and different particle-size distributions. At low metal concentrations, centrifugation and push point sampling resulted in up to 100 times higher concentrations of metals and DOC in porewater compared with peepers and DGTs. At elevated metal levels, the measured concentrations were in better agreement among the 4 sampling techniques. The results indicate that there can be marked differences among operationally different porewater sampling methods, and it is unclear if there is a definitive best method for sampling metals and DOC in porewater.

  16. A Novel Method to Identify Differential Pathways in Hippocampus Alzheimer's Disease.

    PubMed

    Liu, Chun-Han; Liu, Lian

    2017-05-08

    BACKGROUND Alzheimer's disease (AD) is the most common type of dementia. The objective of this paper is to propose a novel method to identify differential pathways in hippocampus AD. MATERIAL AND METHODS We proposed a combined method by merging existing methods. First, pathways were identified by four known methods (DAVID, the neaGUI package, the pathway-based co-expression method, and the pathway network approach), and differential pathways were evaluated by setting weight thresholds. Subsequently, we combined all pathways by a rank-based algorithm and called the method the combined method. Finally, common differential pathways across two or more of the five methods were selected. RESULTS Pathways obtained from different methods were also different. The combined method obtained 1639 pathways and 596 differential pathways, which included all pathways gained from the four existing methods; hence, the novel method solved the problem of inconsistent results. In addition, a total of 13 common pathways were identified, such as metabolism, immune system, and cell cycle. CONCLUSIONS We have proposed a novel method combining four existing methods based on a rank product algorithm, and identified 13 significant differential pathways with it. These differential pathways might provide insight into the treatment and diagnosis of hippocampus AD.
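
    The rank-product combination step can be sketched directly. Below is a minimal, hypothetical illustration — the pathway names and per-method ranks are invented — in which each method contributes a ranking and pathways are re-ordered by the geometric mean of their ranks; the worst-rank-plus-one penalty for pathways a method misses is an assumption, since the paper does not state its handling of missing entries.

```python
import numpy as np

def rank_product(rank_lists):
    """Combine several per-method pathway rankings by the rank product
    (geometric mean of ranks); a lower combined score means stronger
    evidence across methods."""
    pathways = set().union(*rank_lists)
    scores = {}
    for p in pathways:
        # Pathways missed by a method get that list's worst rank + 1.
        ranks = [rl.get(p, len(rl) + 1) for rl in rank_lists]
        scores[p] = float(np.prod(ranks)) ** (1.0 / len(ranks))
    return sorted(scores.items(), key=lambda kv: kv[1])

# Hypothetical ranks from three methods (1 = most significant).
m1 = {"cell cycle": 1, "immune system": 2, "metabolism": 3}
m2 = {"metabolism": 1, "cell cycle": 2, "immune system": 4}
m3 = {"immune system": 1, "metabolism": 2, "cell cycle": 5}
for pathway, score in rank_product([m1, m2, m3]):
    print(f"{pathway:15s} combined score {score:.2f}")
```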

  17. [Analyzing and modeling methods of near infrared spectroscopy for in-situ prediction of oil yield from oil shale].

    PubMed

    Liu, Jie; Zhang, Fu-Dong; Teng, Fei; Li, Jun; Wang, Zhi-Hong

    2014-10-01

    In order to detect the oil yield of oil shale in situ, based on portable near infrared spectroscopy analytical technology, the modeling and analysis methods for in-situ detection were researched with 66 rock core samples from well drilling No. 2 of the Fuyu oil shale base in Jilin. With the developed portable spectrometer, spectra in 3 data formats (reflectance, absorbance and K-M function) were acquired. With 4 different modeling data optimization methods: principal component analysis-Mahalanobis distance (PCA-MD) for eliminating abnormal samples, uninformative variable elimination (UVE) for wavelength selection, and their combinations PCA-MD + UVE and UVE + PCA-MD; 2 modeling methods: partial least squares (PLS) and back propagation artificial neural network (BPANN); and the same data pre-processing, modeling and analysis experiments were performed to determine the optimum analysis model and method. The results show that the data format, the modeling data optimization method and the modeling method all affect the analysis precision of the model. Whether or not an optimization method is used, reflectance or K-M function is the proper spectrum format of the modeling database for both modeling methods. Using the two modeling methods and the four data optimization methods, the model precisions for the same modeling database differ. For the PLS modeling method, the PCA-MD and UVE + PCA-MD data optimization methods can improve the modeling precision of a database using the K-M function spectrum data format. For the BPANN modeling method, the UVE, UVE + PCA-MD and PCA-MD + UVE data optimization methods can improve the modeling precision of a database using any of the 3 spectrum data formats. Except when using the reflectance spectra with the PCA-MD data optimization method, the modeling precision of the BPANN method is better than that of the PLS method. Modeling with reflectance spectra, the UVE optimization method and the BPANN modeling method gives the highest analysis precision: the correlation coefficient (Rp) is 0.92 and the standard error of prediction (SEP) is 0.69%.

  18. Probabilistic drug connectivity mapping

    PubMed Central

    2014-01-01

    Background The aim of connectivity mapping is to match drugs using drug-treatment gene expression profiles from multiple cell lines. This can be viewed as an information retrieval task, with the goal of finding the most relevant profiles for a given query drug. We infer the relevance for retrieval by data-driven probabilistic modeling of the drug responses, resulting in probabilistic connectivity mapping, and further consider the available cell lines as different data sources. We use a special type of probabilistic model to separate what is shared and specific between the sources, in contrast to earlier connectivity mapping methods that have intentionally aggregated all available data, neglecting information about the differences between the cell lines. Results We show that the probabilistic multi-source connectivity mapping method is superior to alternatives in finding functionally and chemically similar drugs from the Connectivity Map data set. We also demonstrate that an extension of the method is capable of retrieving combinations of drugs that match different relevant parts of the query drug response profile. Conclusions The probabilistic modeling-based connectivity mapping method provides a promising alternative to earlier methods. Principled integration of data from different cell lines helps to identify relevant responses for specific drug repositioning applications. PMID:24742351

  19. Probe-specific mixed-model approach to detect copy number differences using multiplex ligation-dependent probe amplification (MLPA)

    PubMed Central

    González, Juan R; Carrasco, Josep L; Armengol, Lluís; Villatoro, Sergi; Jover, Lluís; Yasui, Yutaka; Estivill, Xavier

    2008-01-01

    Background The MLPA method is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a method for the normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions are variable in copy number in individuals suffering from disorders such as Prader-Willi, DiGeorge or autism, where it showed the best performance. Conclusion Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific for each individual, incorporating experimental variability, resulting in improved sensitivity and specificity, as the examples with real data have revealed. PMID:18522760

  20. Are bioequivalence studies of levothyroxine sodium formulations in euthyroid volunteers reliable?

    PubMed

    Blakesley, Vicky; Awni, Walid; Locke, Charles; Ludden, Thomas; Granneman, G Richard; Braverman, Lewis E

    2004-03-01

    Levothyroxine (LT4) has a narrow therapeutic index. Consequently, precise standards for assessing the bioequivalence of different LT4 products are vital. We examined the methodology that the Food and Drug Administration (FDA) recommends for comparing the bioavailability of LT4 products, as well as three modifications to correct for endogenous thyroxine (T4) levels, to determine if the methodology could distinguish LT4 products that differ by 12.5%, 25%, or 33%. With no baseline correction for the endogenous T4 pool, differences in administered LT4 doses of 25%-33% could not be detected (450 microg and 400 microg doses versus the 600 microg dose, respectively). The three mathematical correction methods could distinguish the doses that differed by 25% and 33%. None of the correction methods could distinguish dosage strengths that differed by 12.5% (450 microg versus 400 microg). Dose differences within this range are known to result in clinically relevant differences in safety and effectiveness. Methods of analysis of bioequivalence data that do not consider endogenous T4 concentrations confound accurate quantitation and interpretation of LT4 bioavailability. As a result, products inappropriately deemed bioequivalent may put patients at risk for iatrogenic hyperthyroidism or hypothyroidism. More precise methods for defining bioequivalence are required in order to ensure that LT4 products accepted as bioequivalent will perform equivalently in patients without the need for further monitoring and retitration of their dose.
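
    One simple form of such a baseline correction can be sketched as follows: subtract the mean pre-dose (endogenous) concentration from the post-dose profile before computing the AUC. This is an illustrative correction only — the three mathematical corrections examined in the study are not specified here, and all concentrations are hypothetical.

```python
import numpy as np

def baseline_corrected_auc(times, conc, predose_conc):
    """Trapezoidal AUC after subtracting the mean pre-dose (endogenous)
    concentration -- one simple correction for the endogenous T4 pool;
    the corrections examined in the study may differ in detail."""
    baseline = np.mean(predose_conc)
    corrected = np.clip(np.asarray(conc, dtype=float) - baseline, 0, None)
    return np.trapz(corrected, times)

# Hypothetical profile: endogenous T4 ~ 80 nmol/L, sampled 0-48 h post-dose.
t = np.array([0, 1, 2, 4, 8, 12, 24, 48], dtype=float)
c = np.array([80, 150, 175, 160, 135, 120, 100, 88], dtype=float)
print("uncorrected AUC:", np.trapz(c, t))
print("corrected AUC  :", baseline_corrected_auc(t, c, predose_conc=[78, 80, 82]))
```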

  1. A Study of the Application of the Lognormal and Gamma Distributions to Corrective Maintenance Repair Time Data.

    DTIC Science & Technology

    1982-10-01

    For the lognormal methods the test methods sometimes give different results. The K-S test and the chi-square... significant difference among the three test methods. A previous study has been done using 24 data sets of electronic systems and equipment, using only the W... are suitable descriptors for corrective maintenance repair times, and to estimate the difference caused in assuming an exponential distribution for...

  2. A finite-difference time-domain electromagnetic solver in a generalized coordinate system

    NASA Astrophysics Data System (ADS)

    Hochberg, Timothy Allen

    A new, finite-difference, time-domain method for the simulation of full-wave electromagnetic wave propagation in complex structures is developed. This method is simple and flexible; it allows for the simulation of transient wave propagation in a large class of practical structures. Boundary conditions are implemented for perfect and imperfect electrically conducting boundaries, perfect magnetically conducting boundaries, and absorbing boundaries. The method is validated with the aid of several different types of test cases. Two types of coaxial cables with helical breaks are simulated and the results are discussed.

  3. Some results on numerical methods for hyperbolic conservation laws

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang Huanan.

    1989-01-01

    This dissertation contains some results on the numerical solutions of hyperbolic conservation laws. (1) The author introduced an artificial compression method as a correction to the basic ENO schemes. The method successfully prevents contact discontinuities from being smeared. This is achieved by increasing the slopes of the ENO reconstructions in such a way that the essentially non-oscillatory property of the schemes is kept. He analyzes the non-oscillatory property of the new artificial compression method by applying it to the UNO scheme, which is a second order accurate ENO scheme, and proves that the resulting scheme is indeed non-oscillatory. Extensive 1-D numerical results and some preliminary 2-D ones are provided to show the strong performance of the method. (2) He combines the ENO schemes and the centered difference schemes into self-adjusting hybrid schemes which will be called the localized ENO schemes. At or near the jumps, he uses the ENO schemes with the field by field decompositions; otherwise he simply uses the centered difference schemes without the field by field decompositions. The method involves a new interpolation analysis. In the numerical experiments on several standard test problems, the quality of the numerical results of this method is close to that of the pure ENO results. The localized ENO schemes can be equipped with the above artificial compression method. In this way, he dramatically improves the resolution of the contact discontinuities at very little additional cost. (3) He introduces a space-time mesh refinement method for time dependent problems.

  4. Evaluation of internal noise methods for Hotelling observers

    NASA Astrophysics Data System (ADS)

    Zhang, Yani; Pham, Binh T.; Eckstein, Miguel P.

    2005-04-01

    Including internal noise in computer model observers to degrade model observer performance to human levels is a common method to allow for quantitative comparisons of human and model performance. In this paper, we studied two different types of methods for injecting internal noise into Hotelling model observers. The first method adds internal noise to the output of the individual channels: a) independent non-uniform channel noise, b) independent uniform channel noise. The second method adds internal noise to the decision variable arising from the combination of channel responses: a) internal noise standard deviation proportional to the decision variable's standard deviation due to the external noise, b) internal noise standard deviation proportional to the decision variable's variance caused by the external noise. We tested the square window Hotelling observer (HO), channelized Hotelling observer (CHO), and Laguerre-Gauss Hotelling observer (LGHO). The studied task was detection of a filling defect of varying size/shape in one of four simulated arterial segment locations with real x-ray angiography backgrounds. Results show that the internal noise method that leads to the best prediction of human performance differs across the studied model observers. The CHO model best predicts human observer performance with the channel internal noise. The HO and LGHO best predict human observer performance with the decision variable internal noise. These results might help explain why previous studies have found different results on the ability of each Hotelling model to predict human performance. Finally, the present results might guide researchers in choosing a method to include internal noise in their Hotelling models.
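
    The decision-variable variant is straightforward to sketch. The toy example below adds zero-mean Gaussian internal noise whose standard deviation is proportional to the external-noise standard deviation of the decision variable (the first of the two decision-variable schemes above); the proportionality constant k, the 4-alternative task, and the signal strength are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def add_decision_variable_noise(dv, k, rng=rng):
    """Inject internal noise at the decision-variable stage: zero-mean
    Gaussian with standard deviation proportional (factor k, normally tuned
    to human data) to the external-noise std of the decision variable."""
    sigma_ext = dv.std(axis=0, keepdims=True)
    return dv + rng.normal(0.0, k * sigma_ext, dv.shape)

# Hypothetical 4-alternative forced-choice trials: rows = trials, columns =
# template responses at the 4 candidate locations; signal at column 0.
trials, signal_mean = 2000, 1.0
dv = rng.normal(0.0, 1.0, (trials, 4))
dv[:, 0] += signal_mean
noisy = add_decision_variable_noise(dv, k=0.8)
pc_ideal = np.mean(np.argmax(dv, axis=1) == 0)
pc_noisy = np.mean(np.argmax(noisy, axis=1) == 0)
print(f"proportion correct: {pc_ideal:.3f} -> {pc_noisy:.3f} with internal noise")
```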

  5. Comparison of formant detection methods used in speech processing applications

    NASA Astrophysics Data System (ADS)

    Belean, Bogdan

    2013-11-01

    The paper describes time-frequency representations of the speech signal together with the significance of formants in speech processing applications. Speech formants can be used in emotion recognition, sex discrimination, or diagnosing different neurological diseases. Taking into account the various applications of formant detection in the speech signal, two methods for detecting formants are presented. First, the poles resulting from a complex analysis of LPC coefficients are used for formant detection. The second approach uses the Kalman filter for formant prediction along the speech signal. Results are presented for both approaches on real-life speech spectrograms. A comparison regarding the features of the proposed methods is also performed, in order to establish which method is more suitable for different speech processing applications.

  6. Assessment of formulas for calculating critical concentration by the agar diffusion method.

    PubMed Central

    Drugeon, H B; Juvin, M E; Caillon, J; Courtieu, A L

    1987-01-01

    The critical concentration of antibiotic was calculated by using the agar diffusion method with disks containing different antibiotic loads. It is currently possible to use different calculation formulas (based on Fick's law) devised by Cooper and Woodman (the best known) and by Vesterdal. The results obtained with the formulas were compared with the MIC results (obtained by the agar dilution method). A total of 91 strains and two cephalosporins (cefotaxime and ceftriaxone) were studied. The formula of Cooper and Woodman led to critical concentrations that were higher than the MIC, whereas concentrations obtained with the Vesterdal formula were closer to the MIC. The critical concentration was independent of method parameters (dilution, for example). PMID:3619419

  7. Impacts of building geometry modeling methods on the simulation results of urban building energy models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yixing; Hong, Tianzhen

    Urban-scale building energy modeling (UBEM)—using building modeling to understand how a group of buildings will perform together—is attracting increasing attention in the energy modeling field. Unlike modeling a single building, which will use detailed information, UBEM generally uses existing building stock data consisting of high-level building information. This study evaluated the impacts of three zoning methods and the use of floor multipliers on the simulated energy use of 940 office and retail buildings in three climate zones using City Building Energy Saver. The first zoning method, OneZone, creates one thermal zone per floor using the target building's footprint. The second zoning method, AutoZone, splits the building's footprint into perimeter and core zones. A novel, pixel-based automatic zoning algorithm is developed for the AutoZone method. The third zoning method, Prototype, uses the U.S. Department of Energy's reference building prototype shapes. Results show that the simulated source energy use of buildings with the floor multiplier is marginally higher, by up to 2.6%, than that of models with each floor modeled explicitly, which take two to three times longer to run. Compared with the AutoZone method, the OneZone method results in decreased thermal loads and smaller equipment capacities: 15.2% smaller fan capacity, 11.1% smaller cooling capacity, 11.0% smaller heating capacity, 16.9% less heating load, and 7.5% less cooling load. Source energy use differences range from -7.6% to 5.1%. When comparing the Prototype method with the AutoZone method, source energy use differences range from -12.1% to 19.0%, and larger ranges of differences are found for the thermal loads and equipment capacities. This study demonstrated that zoning methods have a significant impact on the simulated energy use of UBEM. One recommendation resulting from this study is to use the AutoZone method with the floor multiplier to obtain accurate results while balancing simulation run time for UBEM.

  9. On solving wave equations on fixed bounded intervals involving Robin boundary conditions with time-dependent coefficients

    NASA Astrophysics Data System (ADS)

    van Horssen, Wim T.; Wang, Yandong; Cao, Guohua

    2018-06-01

    In this paper, it is shown how characteristic coordinates, or equivalently the well-known formula of d'Alembert, can be used to solve initial-boundary value problems for wave equations on fixed, bounded intervals involving Robin-type boundary conditions with time-dependent coefficients. A Robin boundary condition is a condition that specifies a linear combination of the dependent variable and its first-order space derivative on a boundary of the interval. Analytical methods, such as the method of separation of variables (SOV) or the Laplace transform method, are not applicable to these types of problems. The analytical results obtained by applying the proposed method are in complete agreement with those obtained by using the numerical finite difference method. For problems with time-independent coefficients in the Robin boundary condition(s), the results of the proposed method also agree completely with those obtained, for instance, by the method of separation of variables or by the finite difference method.
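
    The finite-difference comparison solver mentioned above can be sketched in a few lines. The code below is a minimal explicit (leapfrog) scheme for u_tt = c^2 u_xx on [0, 1] with a time-dependent Robin condition u_x(0, t) = alpha(t) u(0, t) imposed through a ghost point, and a homogeneous Dirichlet condition at x = 1; the choice of alpha(t), the initial data, and the first-order start-up are illustrative assumptions, not the paper's test case.

```python
import numpy as np

# Explicit finite-difference solver for u_tt = c^2 u_xx on [0, 1] with a
# time-dependent Robin condition u_x(0, t) = alpha(t) * u(0, t) and u(1, t) = 0.
c, N, T = 1.0, 200, 2.0
dx = 1.0 / N
dt = 0.9 * dx / c                        # CFL-stable time step
r2 = (c * dt / dx) ** 2
x = np.linspace(0.0, 1.0, N + 1)
alpha = lambda t: 1.0 + 0.5 * np.sin(2 * np.pi * t)  # illustrative coefficient

u_prev = np.exp(-200 * (x - 0.5) ** 2)   # initial displacement
u = u_prev.copy()                        # zero initial velocity (1st-order start)
t = 0.0
while t < T:
    u_next = np.empty_like(u)
    u_next[1:-1] = 2 * u[1:-1] - u_prev[1:-1] + r2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    # Robin boundary via ghost point: u[-1] = u[1] - 2*dx*alpha(t)*u[0]
    ghost = u[1] - 2 * dx * alpha(t) * u[0]
    u_next[0] = 2 * u[0] - u_prev[0] + r2 * (u[1] - 2 * u[0] + ghost)
    u_next[-1] = 0.0                     # Dirichlet condition at x = 1
    u_prev, u = u, u_next
    t += dt

print("max |u| at t =", round(t, 3), ":", np.abs(u).max())
```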

  10. Quantitative analysis of γ-oryzanol content in cold pressed rice bran oil by TLC-image analysis method

    PubMed Central

    Sakunpak, Apirak; Suksaeree, Jirapornchai; Monton, Chaowalit; Pathompak, Pathamaporn; Kraisintu, Krisana

    2014-01-01

    Objective To develop and validate an image analysis method for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. Methods TLC-densitometric and TLC-image analysis methods were developed, validated, and used for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. The results obtained by these two different quantification methods were compared by paired t-test. Results Both assays provided good linearity, accuracy, reproducibility and selectivity for determination of γ-oryzanol. Conclusions The TLC-densitometric and TLC-image analysis methods provided a similar reproducibility, accuracy and selectivity for the quantitative determination of γ-oryzanol in cold pressed rice bran oil. A statistical comparison of the quantitative determinations of γ-oryzanol in samples did not show any statistically significant difference between TLC-densitometric and TLC-image analysis methods. As both methods were found to be equal, they therefore can be used for the determination of γ-oryzanol in cold pressed rice bran oil. PMID:25182282

  11. Modified slanted-edge method for camera modulation transfer function measurement using nonuniform fast Fourier transform technique

    NASA Astrophysics Data System (ADS)

    Duan, Yaxuan; Xu, Songbo; Yuan, Suochao; Chen, Yongquan; Li, Hongguang; Da, Zhengshang; Gao, Limin

    2018-01-01

    The ISO 12233 slanted-edge method suffers errors in camera modulation transfer function (MTF) measurement when the fast Fourier transform (FFT) is used, because tilt-angle errors in the knife edge result in nonuniform sampling of the edge spread function (ESF). To resolve this problem, a modified slanted-edge method using the nonuniform fast Fourier transform (NUFFT) for camera MTF measurement is proposed. Theoretical simulations for images with noise at different nonuniform sampling rates of the ESF are performed using the proposed modified slanted-edge method. It is shown that the proposed method successfully eliminates the error due to the nonuniform sampling of the ESF. An experimental setup for camera MTF measurement is established to verify the accuracy of the proposed method. The experimental results show that under different nonuniform sampling rates of the ESF, the proposed modified slanted-edge method has improved accuracy for camera MTF measurement compared to the ISO 12233 slanted-edge method.
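
    The processing chain can be illustrated with a simplified stand-in: instead of a NUFFT, the nonuniform ESF samples below are resampled to a uniform grid by linear interpolation before differentiation and FFT. This is a sketch of the general slanted-edge pipeline under that substitution, with a synthetic edge and hypothetical noise level, not the proposed algorithm itself.

```python
import numpy as np

def mtf_from_esf(pos, esf, n_uniform=512):
    """MTF from a nonuniformly sampled edge spread function. The nonuniform
    samples are resampled to a uniform grid by linear interpolation -- a
    simpler stand-in for the NUFFT of the paper -- then differentiated to
    the LSF, windowed, and Fourier transformed."""
    order = np.argsort(pos)
    pos, esf = np.asarray(pos)[order], np.asarray(esf)[order]
    grid = np.linspace(pos[0], pos[-1], n_uniform)
    esf_u = np.interp(grid, pos, esf)
    lsf = np.gradient(esf_u, grid)
    lsf *= np.hanning(n_uniform)          # suppress noise at the tails
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(n_uniform, d=grid[1] - grid[0])
    return freqs, mtf / mtf[0]

# Synthetic slanted-edge ESF with jittered (nonuniform) sample positions.
rng = np.random.default_rng(2)
pos = np.sort(rng.uniform(-8, 8, 400))             # pixels across the edge
esf = 0.5 * (1 + np.tanh(pos / 1.2)) + rng.normal(0, 0.005, pos.size)
freqs, mtf = mtf_from_esf(pos, esf)
print("MTF50 ~", freqs[np.argmax(mtf < 0.5)], "cycles/pixel")
```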

  12. [Simultaneous quantitative analysis of five alkaloids in Sophora flavescens by multi-components assay by single marker].

    PubMed

    Chen, Jing; Wang, Shu-Mei; Meng, Jiang; Sun, Fei; Liang, Sheng-Wang

    2013-05-01

    To establish a new method for quality evaluation and to validate its feasibility by simultaneous quantitative assay of five alkaloids in Sophora flavescens. The new quality evaluation method, quantitative analysis of multi-components by single marker (QAMS), was established and validated with S. flavescens. Five main alkaloids, oxymatrine, sophocarpine, matrine, oxysophocarpine and sophoridine, were selected as analytes to evaluate the quality of the rhizome of S. flavescens, and the relative correction factors showed good repeatability. Their contents in 21 batches of samples, collected from different areas, were determined by both the external standard method and QAMS. The method was evaluated by comparison of the quantitative results between the external standard method and QAMS. No significant differences were found in the quantitative results of the five alkaloids in the 21 batches of S. flavescens determined by the external standard method and QAMS. It is feasible and suitable to evaluate the quality of the rhizome of S. flavescens by QAMS.
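
    The arithmetic behind QAMS is compact enough to sketch. In the toy example below, a relative correction factor f = (A_m/C_m)/(A_k/C_k) is computed once for each analyte against the marker from a mixed standard, and sample contents then follow from peak areas and the marker's own calibration; all areas and concentrations are hypothetical.

```python
# Minimal QAMS sketch: quantify several analytes from one marker standard.
# Peak areas and concentrations below are hypothetical illustrations.

# 1) From a mixed standard of known concentrations, compute each analyte's
#    relative correction factor f = (A_m / C_m) / (A_k / C_k) vs the marker.
std_conc = {"matrine": 50.0, "oxymatrine": 50.0, "sophoridine": 50.0}  # ug/mL
std_area = {"matrine": 1200.0, "oxymatrine": 1500.0, "sophoridine": 900.0}
marker = "matrine"
f = {k: (std_area[marker] / std_conc[marker]) / (std_area[k] / std_conc[k])
     for k in std_conc}

# 2) In a sample, only the marker is quantified against its own external
#    standard; the other analytes follow from their peak areas and factors.
sample_area = {"matrine": 980.0, "oxymatrine": 1320.0, "sophoridine": 640.0}
c_marker = sample_area[marker] * std_conc[marker] / std_area[marker]
for k, area in sample_area.items():
    conc = f[k] * area * c_marker / sample_area[marker]
    print(f"{k:12s} {conc:6.1f} ug/mL")
```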

  13. Study of multi-dimensional radiative energy transfer in molecular gases

    NASA Technical Reports Server (NTRS)

    Liu, Jiwen; Tiwari, S. N.

    1993-01-01

    The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. Validation of the Monte Carlo formulations has been conducted by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.

  14. Numerical solution of sixth-order boundary-value problems using Legendre wavelet collocation method

    NASA Astrophysics Data System (ADS)

    Sohaib, Muhammad; Haq, Sirajul; Mukhtar, Safyan; Khan, Imad

    2018-03-01

    An efficient method is proposed to approximate sixth-order boundary-value problems. The proposed method is based on Legendre wavelets, in which Legendre polynomials are used. The mechanism of the method is to use collocation points that convert the differential equation into a system of algebraic equations. Two test problems are discussed for validation. The results obtained from the proposed method are quite accurate and close both to the exact solutions and to the results of other methods. The proposed method is computationally more effective and leads to more accurate results than other methods from the literature.

  15. Measuring geographic access to health care: raster and network-based methods

    PubMed Central

    2012-01-01

    Background Inequalities in geographic access to health care result from the configuration of facilities, population distribution, and the transportation infrastructure. In recent accessibility studies, the traditional distance measure (Euclidean) has been replaced with more plausible measures such as travel distance or time. Both network and raster-based methods are often utilized for estimating travel time in a Geographic Information System. Therefore, exploring the differences in the underlying data models and associated methods and their impact on geographic accessibility estimates is warranted. Methods We examine the assumptions present in population-based travel time models. Conceptual and practical differences between raster and network data models are reviewed, along with methodological implications for service area estimates. Our case study investigates Limited Access Areas defined by Michigan’s Certificate of Need (CON) Program. Geographic accessibility is calculated by identifying the number of people residing more than 30 minutes from an acute care hospital. Both network and raster-based methods are implemented and their results are compared. We also examine sensitivity to changes in travel speed settings and population assignment. Results In both methods, the areas identified as having limited accessibility were similar in their location, configuration, and shape. However, the number of people identified as having limited accessibility varied substantially between methods. Over all permutations, the raster-based method identified more area and people with limited accessibility. The raster-based method was more sensitive to travel speed settings, while the network-based method was more sensitive to the specific population assignment method employed in Michigan. Conclusions Differences between the underlying data models help to explain the variation in results between raster and network-based methods. Considering that the choice of data model/method may substantially alter the outcomes of a geographic accessibility analysis, we advise researchers to use caution in model selection. For policy, we recommend that Michigan adopt the network-based method or reevaluate the travel speed assignment rule in the raster-based method. Additionally, we recommend that the state revisit the population assignment method. PMID:22587023

  16. Comparison of three explicit multigrid methods for the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.; Turkel, Eli; Schaffer, Steve

    1987-01-01

    Three explicit multigrid methods, Ni's method, Jameson's finite-volume method, and a finite-difference method based on Brandt's work, are described and compared for two model problems. All three methods use an explicit multistage Runge-Kutta scheme on the fine grid, and this scheme is also described. Convergence histories for inviscid flow over a bump in a channel for the fine-grid scheme alone show that convergence rate is proportional to Courant number and that implicit residual smoothing can significantly accelerate the scheme. Ni's method was slightly slower than the implicitly-smoothed scheme alone. Brandt's and Jameson's methods are shown to be equivalent in form but differ in their node versus cell-centered implementations. They are about 8.5 times faster than Ni's method in terms of CPU time. Results for an oblique shock/boundary layer interaction problem verify the accuracy of the finite-difference code. All methods slowed considerably on the stretched viscous grid but Brandt's method was still 2.1 times faster than Ni's method.

  17. Cluster detection methods applied to the Upper Cape Cod cancer data.

    PubMed

    Ozonoff, Al; Webster, Thomas; Vieira, Veronica; Weinberg, Janice; Ozonoff, David; Aschengrau, Ann

    2005-09-15

    A variety of statistical methods have been suggested to assess the degree and/or the location of spatial clustering of disease cases. However, there is relatively little in the literature devoted to comparison and critique of different methods. Most of the available comparative studies rely on simulated data rather than real data sets. We have chosen three methods currently used for examining spatial disease patterns: the M-statistic of Bonetti and Pagano; the Generalized Additive Model (GAM) method as applied by Webster; and Kulldorff's spatial scan statistic. We apply these statistics to analyze breast cancer data from the Upper Cape Cancer Incidence Study using three different latency assumptions. The three different latency assumptions produced three different spatial patterns of cases and controls. For the 20-year latency assumption, all three methods generally concur. However, for the 15-year latency and no-latency assumptions, the methods produce different results when testing for global clustering. Comparative analyses of real data sets by different statistical methods provide insight into directions for further research. We suggest a research program designed around examining real data sets to guide focused investigation of relevant features using simulated data, for the purpose of understanding how to interpret statistical methods applied to epidemiological data with a spatial component.

  18. Image and Imaging an Emergency Department: Expense and Benefit of Different Quality Assessment Methods

    PubMed Central

    Pfortmueller, Carmen Andrea; Keller, Michael; Mueller, Urs; Zimmermann, Heinz; Exadaktylos, Aristomenis Konstantinos

    2013-01-01

    Introduction. In this era of high-tech medicine, it is becoming increasingly important to assess patient satisfaction. There are several methods to do so, but these differ greatly in terms of cost, time, labour, and external validity. The aim of this study is to describe and compare the structure and implementation of different methods to assess the satisfaction of patients in an emergency department. Methods. The structure and implementation of the different methods to assess patient satisfaction were evaluated on the basis of a 90-minute standardised interview. Results. We identified a total of six different methods in six different hospitals. The average number of patients assessed was 5012, with a range from 230 (M5) to 20 000 patients (M2). In four methods (M1, M3, M5, and M6), the questionnaire was composed by a specialised external institute. In two methods (M2, M4), the questionnaire was created by the hospital itself. The median response rate was 58.4% (range 9–97.8%). With a reminder, the response rate increased by 60% (M3). Conclusion. The ideal method to assess patient satisfaction in the emergency department setting is a patient-based, in-department assessment of patient satisfaction, planned and guided by expert personnel. PMID:23984073

  19. Comparison of Video Head Impulse Test (vHIT) Gains Between Two Commercially Available Devices and by Different Gain Analytical Methods.

    PubMed

    Lee, Sang Hun; Yoo, Myung Hoon; Park, Jun Woo; Kang, Byung Chul; Yang, Chan Joo; Kang, Woo Suk; Ahn, Joong Ho; Chung, Jong Woo; Park, Hong Ju

    2018-06-01

    To evaluate whether video head impulse test (vHIT) gains are dependent on the measuring device and method of analysis. Prospective study. vHIT was performed in 25 healthy subjects using two devices simultaneously. vHIT gains were compared between these instruments and using five different methods of comparing position and velocity gains during head movement intervals. The two devices produced different vHIT gain results with the same method of analysis. There were also significant differences in the vHIT gains measured using different analytical methods. The gain analytic method that compares the areas under the velocity curve (AUC) of the head and eye movements during head movements showed lower vHIT gains than a method that compared the peak velocities of the head and eye movements. The former method produced the vHIT gain with the smallest standard deviation among the five procedures tested in this study. vHIT gains differ in normal subjects depending on the device and method of analysis used, suggesting that it is advisable for each device to have its own normal values. Gain calculations that compare the AUC of the head and eye movements during the head movements show the smallest variance.
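
    The two gain definitions at issue — peak-velocity ratio versus ratio of areas under the velocity curves — are easy to state in code. The sketch below computes both for a synthetic head impulse and a slightly attenuated eye response; the trace shapes and sampling are illustrative assumptions.

```python
import numpy as np

def vhit_gains(t, head_vel, eye_vel):
    """Two common vHIT gain definitions over the head-impulse interval:
    ratio of peak velocities, and ratio of areas under the velocity curves
    (AUC); the abstract reports the AUC method as having the smallest
    standard deviation."""
    peak_gain = np.max(np.abs(eye_vel)) / np.max(np.abs(head_vel))
    auc_gain = np.trapz(np.abs(eye_vel), t) / np.trapz(np.abs(head_vel), t)
    return peak_gain, auc_gain

# Synthetic impulse: eye response slightly attenuated and delayed.
t = np.linspace(0, 0.15, 300)                       # 150 ms impulse
head = 250 * np.exp(-((t - 0.075) / 0.025) ** 2)    # head velocity, deg/s
eye = 0.95 * 250 * np.exp(-((t - 0.080) / 0.025) ** 2)
pg, ag = vhit_gains(t, head, eye)
print(f"peak-velocity gain {pg:.3f}, AUC gain {ag:.3f}")
```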

  20. A new evaluation method research for fusion quality of infrared and visible images

    NASA Astrophysics Data System (ADS)

    Ge, Xingguo; Ji, Yiguo; Tao, Zhongxiang; Tian, Chunyan; Ning, Chengda

    2017-03-01

    In order to objectively evaluate the fusion effect of infrared and visible images, a fusion evaluation method based on energy-weighted average structural similarity and an edge information retention value is proposed to address drawbacks of existing evaluation methods. The evaluation index of this method is given, and evaluation experiments are performed on infrared and visible image fusion results under different algorithms and environments on the basis of this index. The experimental results show that the objective evaluation index obtained from this method is consistent with the subjective evaluation results, which shows that the method is a practical and effective way to evaluate fused image quality.

  1. Content based Image Retrieval based on Different Global and Local Color Histogram Methods: A Survey

    NASA Astrophysics Data System (ADS)

    Suhasini, Pallikonda Sarah; Sri Rama Krishna, K.; Murali Krishna, I. V.

    2017-02-01

    Different global and local color histogram methods for content based image retrieval (CBIR) are investigated in this paper. The color histogram is a widely used descriptor for CBIR. The conventional method of extracting a color histogram is global: it misses spatial content, is less invariant to deformation and viewpoint changes, and results in a very large three-dimensional histogram corresponding to the color space used. To address these deficiencies, different global and local histogram methods have been proposed in recent research. Recent papers describe different ways of extracting local histograms to obtain spatial correspondence, invariant color histograms to add deformation and viewpoint invariance, and fuzzy linking methods to reduce the size of the histogram. The color space and the distance metric used are vital in obtaining a color histogram. In this paper, the performance of CBIR based on different global and local color histograms in three color spaces, namely RGB, HSV, and L*a*b*, and with three distance measures, Euclidean, quadratic, and histogram intersection, is surveyed to choose an appropriate method for future research.
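
    The difference between global and local descriptors is easy to demonstrate. In the sketch below (a single-channel stand-in for one HSV component, with synthetic data), a rotated image has an identical global histogram but clearly different block-wise local histograms under the histogram-intersection similarity, illustrating the spatial content that the global descriptor misses.

```python
import numpy as np

def hist_intersection(h1, h2):
    """Histogram intersection similarity on L1-normalized histograms
    (1 = identical, 0 = disjoint)."""
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return np.minimum(h1, h2).sum()

def local_histograms(img, bins=8, grid=4):
    """Block-wise local histograms: split the image into grid x grid blocks
    so the descriptor keeps coarse spatial layout that a single global
    histogram discards."""
    h, w = img.shape[:2]
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = img[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            hist, _ = np.histogram(block, bins=bins, range=(0, 256))
            feats.append(hist.astype(float))
    return np.concatenate(feats)

# Synthetic grayscale "images" standing in for one channel of HSV.
rng = np.random.default_rng(3)
img_a = rng.integers(0, 256, (64, 64))
img_b = np.rot90(img_a)                  # same global content, rearranged
ga, _ = np.histogram(img_a, bins=8, range=(0, 256))
gb, _ = np.histogram(img_b, bins=8, range=(0, 256))
print("global sim:", hist_intersection(ga.astype(float), gb.astype(float)))
print("local  sim:", hist_intersection(local_histograms(img_a),
                                       local_histograms(img_b)))
```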

  2. Titanium Hydroxide - a Volatile Species at High Temperature

    NASA Technical Reports Server (NTRS)

    Nguyen, QuynhGiao N.

    2010-01-01

    An alternative method of low-temperature plasma functionalization of carbon nanotubes provides for the simultaneous attachment of molecular groups of multiple (typically two or three) different species or different mixtures of species to carbon nanotubes at different locations within the same apparatus. This method is based on similar principles, and involves the use of mostly the same basic apparatus, as those of the methods described in "Low-Temperature Plasma Functionalization of Carbon Nanotubes" (ARC-14661-1), NASA Tech Briefs, Vol. 28, No. 5 (May 2004), page 45. The figure schematically depicts the basic apparatus used in the aforementioned method, with emphasis on features that distinguish the present alternative method from the other. In this method, one exploits the fact that the composition of the deposition plasma changes as the plasma flows from its source in the precursor chamber toward the nanotubes in the target chamber. As a result, carbon nanotubes mounted in the target chamber at different flow distances (d1, d2, d3 . . .) from the precursor chamber become functionalized with different species or different mixtures of species.

  3. A CRITICAL ASSESSMENT OF BIODOSIMETRY METHODS FOR LARGE-SCALE INCIDENTS

    PubMed Central

    Swartz, Harold M.; Flood, Ann Barry; Gougelet, Robert M.; Rea, Michael E.; Nicolalde, Roberto J.; Williams, Benjamin B.

    2014-01-01

    Recognition is growing regarding the possibility that terrorism or large-scale accidents could result in potential radiation exposure of hundreds of thousands of people and that the present guidelines for evaluation after such an event are seriously deficient. Therefore, there is a great and urgent need for after-the-fact biodosimetric methods to estimate radiation dose. To accomplish this goal, the dose estimates must be at the individual level, timely, accurate, and plausibly obtained in large-scale disasters. This paper evaluates current biodosimetry methods, focusing on their strengths and weaknesses in estimating human radiation exposure in large-scale disasters at three stages. First, the authors evaluate biodosimetry's ability to determine which individuals did not receive a significant exposure so they can be removed from the acute response system. Second, biodosimetry's capacity to classify those initially assessed as needing further evaluation into treatment-level categories is assessed. Third, biodosimetry's ability to guide treatment, both short- and long-term, is reviewed. The authors compare biodosimetric methods that are based on physical vs. biological parameters and evaluate the features of current dosimeters (capacity, speed and ease of getting information, and accuracy) to determine which are most useful in meeting patients' needs at each of the different stages. Results indicate that the biodosimetry methods differ in their applicability to the three different stages, and that combining physical and biological techniques may sometimes be most effective. In conclusion, biodosimetry techniques have different properties, and knowledge of these properties for meeting the needs of the different stages will result in their most effective use in a nuclear disaster mass-casualty event. PMID:20065671

  4. A robust algorithm for optimisation and customisation of fractal dimensions of time series modified by nonlinearly scaling their time derivatives: mathematical theory and practical applications.

    PubMed

    Fuss, Franz Konstantin

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.

  6. CompareSVM: supervised, Support Vector Machine (SVM) inference of gene regulatory networks.

    PubMed

    Gillani, Zeeshan; Akash, Muhammad Sajid Hamid; Rahaman, M D Matiur; Chen, Ming

    2014-11-30

    Prediction of gene regulatory networks (GRN) from expression data is a challenging task. Many methods, ranging from supervised to unsupervised, have been developed to address this challenge; the most promising are based on support vector machines (SVM). There is a need for a comprehensive analysis of the prediction accuracy of supervised SVM methods using different kernels under different biological experimental conditions and network sizes. We developed a tool (CompareSVM) based on SVM to compare different kernel methods for inference of GRN. Using CompareSVM, we investigated and evaluated different SVM kernel methods in detail on simulated microarray datasets of different sizes. The results obtained from CompareSVM showed that the accuracy of an inference method depends upon the nature of the experimental condition and the size of the network. For small networks (<200 nodes) and on average (over all network sizes), the SVM Gaussian kernel outperformed all the other inference methods on knockout, knockdown, and multifactorial datasets. For networks with a large number of nodes (~500), the choice of inference method depends upon the nature of the experimental condition. CompareSVM is available at http://bis.zju.edu.cn/CompareSVM/ .
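
    A minimal sketch of the kind of kernel comparison CompareSVM automates, using scikit-learn and a synthetic feature matrix in place of real GRN expression data; the tool itself, its datasets, and its evaluation protocol are not reproduced here.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Hypothetical stand-in for a GRN feature matrix: rows are candidate
    # regulator-target pairs, columns are expression-derived features.
    X, y = make_classification(n_samples=400, n_features=20, random_state=0)

    # Same data, different kernels: compare cross-validated accuracy.
    for kernel in ("linear", "poly", "rbf", "sigmoid"):
        scores = cross_val_score(SVC(kernel=kernel), X, y, cv=5)
        print(f"{kernel:8s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
    ```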

  7. Investigation of earthquake factor for optimum tuned mass dampers

    NASA Astrophysics Data System (ADS)

    Nigdeli, Sinan Melih; Bekdaş, Gebrail

    2012-09-01

    In this study the optimum parameters of tuned mass dampers (TMD) are investigated under earthquake excitations. An optimization strategy was carried out using the Harmony Search (HS) algorithm. HS is a metaheuristic method inspired by the nature of musical performance. In addition, the results of the optimization objective were compared with those of other documented methods, and the inferior results were eliminated; in that way, the best optimum results were obtained. During the optimization, the optimum TMD parameters were searched for single degree of freedom (SDOF) structure models with different periods. The optimization was done for different earthquakes separately and the results were compared.
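
    A minimal Harmony Search sketch for intuition, with a toy two-parameter objective standing in for the paper's SDOF structural model; the hmcr/par/bw values below are illustrative defaults, not the study's settings.

    ```python
    import random

    def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, iters=2000, bw=0.05):
        """Minimal Harmony Search: minimise f over box bounds."""
        # initial harmony memory of random solutions
        hm = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
        costs = [f(h) for h in hm]
        for _ in range(iters):
            new = []
            for j, (lo, hi) in enumerate(bounds):
                if random.random() < hmcr:             # memory consideration
                    v = random.choice(hm)[j]
                    if random.random() < par:          # pitch adjustment
                        v += random.uniform(-bw, bw) * (hi - lo)
                else:                                  # random selection
                    v = random.uniform(lo, hi)
                new.append(min(max(v, lo), hi))
            c = f(new)
            worst = max(range(hms), key=costs.__getitem__)
            if c < costs[worst]:                       # replace worst harmony
                hm[worst], costs[worst] = new, c
        best = min(range(hms), key=costs.__getitem__)
        return hm[best], costs[best]

    # Hypothetical TMD tuning: frequency ratio and damping ratio (toy objective).
    cost = lambda p: (p[0] - 0.95) ** 2 + (p[1] - 0.12) ** 2
    print(harmony_search(cost, [(0.5, 1.5), (0.0, 0.5)]))
    ```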

  8. Numerical simulation and experimental research on wake field of ships under off-design conditions

    NASA Astrophysics Data System (ADS)

    Guo, Chun-yu; Wu, Tie-cheng; Zhang, Qi; Gong, Jie

    2016-10-01

    Different operating conditions (e.g. design and off-design) may lead to significant differences in the hydrodynamic performance of a ship, especially in the total resistance and wake field. This work investigated the hydrodynamic performance of the well-known KRISO 3600 TEU Container Ship (KCS) under three different operating conditions by means of Particle Image Velocimetry (PIV) and Computational Fluid Dynamics (CFD). The comparison shows that using PIV to measure a ship's nominal wake field is a valuable method with the advantages of being contactless and highly accurate. Acceptable agreement between the results obtained by the two different methods was achieved. Results indicate that the total resistances of the KCS model under the two off-design conditions are 23.88% and 13.92% larger, respectively, than under the design condition.

  9. Pen rearing and imprinting of fall Chinook salmon

    USGS Publications Warehouse

    Beeman, J.W.; Novotny, J.F.

    1994-01-01

    Results of rearing upriver bright fall Chinook salmon juveniles in net pens and a barrier net enclosure in two backwater areas and a pond along the Columbia River were compared with traditional hatchery methods. Growth, smoltification, and general condition of pen-reared fish receiving supplemental feeding were better than those of fish reared using traditional methods. Juvenile fish receiving no supplemental feeding were generally in poor condition, resulting in a net loss of production. Rearing costs using pens were generally lower than in the hatchery. However, low adult returns resulted in a greater cost per adult recovery than for fish reared and released using traditional methods. Much of the difference in recovery rates may have been due to differences in rearing locations, as study sites were as much as 128 mi upstream from the hatcheries and study fish may have incurred higher mortality associated with downstream migration than control fish. Rearing fish using these methods could nevertheless be a cost-effective means of enhancing salmon production in the Columbia River Basin.

  10. Surveying immigrants without sampling frames - evaluating the success of alternative field methods.

    PubMed

    Reichel, David; Morales, Laura

    2017-01-01

    This paper evaluates the sampling methods of an international survey, the Immigrant Citizens Survey, which aimed at surveying immigrants from outside the European Union (EU) in 15 cities in seven EU countries. In five countries, no sampling frame was available for the target population. Consequently, alternative ways to obtain a representative sample had to be found. In three countries 'location sampling' was employed, while in two countries traditional methods were used with adaptations to reach the target population. The paper assesses the main methodological challenges of carrying out a survey among a group of immigrants for whom no sampling frame exists. The samples of the survey in these five countries are compared to results of official statistics in order to assess the accuracy of the samples obtained through the different sampling methods. The results show that alternative sampling methods can provide meaningful results in terms of core demographic characteristics, although some estimates differ to some extent from the census results.

  11. Boar taint detection: A comparison of three sensory protocols.

    PubMed

    Trautmann, Johanna; Meier-Dinkel, Lisa; Gertheiss, Jan; Mörlein, Daniel

    2016-01-01

    While recent studies assign an important role to human sensory methods for daily routine control of so-called boar taint, the evaluation of different heating methods is still incomplete. This study investigated three common heating methods (microwave (MW), hot-water (HW), hot-iron (HI)) for boar fat evaluation. The comparison was carried out on 72 samples with a 10-person sensory panel. The heating method significantly affected the probability of a deviant rating. Compared to an assumed 'gold standard' (chemical analysis), the performance was best for HI when both sensitivity and specificity were considered. The results show the superiority of the panel result over individual assessors. However, the consistency of the individual sensory ratings was not significantly different between MW, HW, and HI. The three protocols showed only fair to moderate agreement. Concluding from the present results, the hot-iron method appears to be advantageous for boar taint evaluation as compared to microwave and hot-water. Copyright © 2015. Published by Elsevier Ltd.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    I. W. Ginsberg

    Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method, and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
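
    A hedged sketch of the two fingerprinting ingredients named above, on a synthetic signature: first-derivative Gaussian convolution (the current method) next to a discrete wavelet decomposition (the proposed alternative). The exact fingerprint construction in the report is not reproduced.

    ```python
    import numpy as np
    import pywt
    from scipy.ndimage import gaussian_filter1d

    # Hypothetical hyperspectral signature (1-D reflectance curve).
    x = np.linspace(0, 1, 512)
    sig = np.exp(-((x - 0.3) / 0.05) ** 2) + 0.5 * np.exp(-((x - 0.7) / 0.1) ** 2)

    # Current method: convolution with first-derivative Gaussian filters
    # at several scales (order=1 gives the first-derivative kernel).
    gauss_fp = [gaussian_filter1d(sig, sigma=s, order=1) for s in (2, 4, 8, 16)]

    # Wavelet-based alternative: detail coefficients of a multiresolution
    # decomposition play the role of the per-scale fingerprint responses.
    coeffs = pywt.wavedec(sig, "db2", level=4)
    wave_fp = coeffs[1:]          # detail coefficients, coarse to fine

    print([g.shape for g in gauss_fp], [c.shape for c in wave_fp])
    ```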

  13. Magnetic Moment Quantifications of Small Spherical Objects in MRI

    PubMed Central

    Cheng, Yu-Chung N.; Hsieh, Ching-Yi; Tackett, Ronald; Kokeny, Paul; Regmi, Rajesh Kumar; Lawes, Gavin

    2014-01-01

    Purpose The purpose of this work is to develop a method for accurately quantifying effective magnetic moments of spherical-like small objects from magnetic resonance imaging (MRI). Our approach is intended to use a standard 3D gradient echo sequence with a single echo time to measure the effective magnetic moment of a given object of interest. Methods Our method sums over complex MR signals around the object and equates those sums to equations derived from magnetostatic theory. With those equations, our method is able to determine the center of the object with subpixel precision. By rewriting those equations, the effective magnetic moment of the object becomes the only unknown to be solved. Each quantified effective magnetic moment has an uncertainty that is derived from the error propagation method. If the volume of the object can be measured from spin echo images, the susceptibility difference between the object and its surroundings can be further quantified from the effective magnetic moment. Numerical simulations, phantom studies of a variety of glass beads with different MR imaging parameters on a 1.5 T machine, and measurements from a SQUID (superconducting quantum interference device) based magnetometer were conducted to test the robustness of our method. Results Quantified effective magnetic moments and susceptibility differences from different imaging parameters and methods all agree with each other within two standard deviations of the estimated uncertainties. Conclusion An MRI method is developed to accurately quantify the effective magnetic moment of a given small object of interest. Most results are accurate within 10% of true values and roughly half of the total results are accurate within 5% of true values using very reasonable imaging parameters. Our method is minimally affected by partial volume, dephasing, and phase aliasing effects. Our next goal is to apply this method to in vivo studies. PMID:25490517

  14. Comparative analysis of different joining techniques to improve the passive fit of cobalt-chromium superstructures.

    PubMed

    Barbi, Francisco C L; Camarini, Edevaldo T; Silva, Rafael S; Endo, Eliana H; Pereira, Jefferson R

    2012-12-01

    The influence of different joining techniques on passive fit at the interface structure/abutment of cobalt-chromium (Co-Cr) superstructures has not yet been clearly established. The purpose of this study was to compare 3 different techniques of joining Co-Cr superstructures by measuring the resulting marginal misfit in a simulated prosthetic assembly. A specially designed metal model was used for casting, sectioning, joining, and measuring marginal misfit. Forty-five cast bar-type superstructures were fabricated in a Co-Cr alloy and randomly assigned by drawing lots to 3 groups (n=15) according to the joining method used: conventional gas-torch brazing (G-TB), laser welding (LW), and tungsten inert gas welding (TIG). Joined specimens were assembled onto abutment analogs in the metal model with the 1-screw method. The resulting marginal misfit was measured with scanning electron microscopy (SEM) at 3 different points: distal (D), central (C), and mesial (M) along the buccal aspect of both abutments: A (tightened) and B (without screw). The Levene test was used to evaluate variance homogeneity and then the Welch ANOVA for heteroscedastic data (α=.05). Significant differences were found on abutment A between groups G-TB and LW (P=.013) measured mesially and between groups G-TB and TIG (P=.037) measured centrally. On abutment B, significant differences were found between groups G-TB and LW (P<.001) and groups LW and TIG (P<.001) measured mesially; groups G-TB and TIG (P=.007) measured distally; and groups G-TB and TIG (P=.001) and LW and TIG (P=.007) measured centrally. The method used for joining Co-Cr prosthetic structures had an influence on the level of resulting passive fit. Structures joined by the tungsten inert gas method produced better mean results than did the brazing or laser method. Copyright © 2012 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.

  15. Measuring What People Value: A Comparison of “Attitude” and “Preference” Surveys

    PubMed Central

    Phillips, Kathryn A; Johnson, F Reed; Maddala, Tara

    2002-01-01

    Objective To compare and contrast methods and findings from two approaches to valuation used in the same survey: measurement of “attitudes” using simple rankings and ratings versus measurement of “preferences” using conjoint analysis. Conjoint analysis, a stated preference method, involves comparing scenarios composed of attribute descriptions by ranking, rating, or choosing scenarios. We explore possible explanations for our findings using focus groups conducted after the quantitative survey. Methods A self-administered survey, measuring attitudes and preferences for HIV tests, was conducted at HIV testing sites in San Francisco in 1999–2000 (n = 365, response rate=96 percent). Attitudes were measured and analyzed using standard approaches. Conjoint analysis scenarios were developed using a fractional factorial design and results analyzed using random effects probit models. We examined how the results using the two approaches were both similar and different. Results We found that “attitudes” and “preferences” were generally consistent, but there were some important differences. Although rankings based on the attitude and conjoint analysis surveys were similar, closer examination revealed important differences in how respondents valued price and attributes with “halo” effects, variation in how attribute levels were valued, and apparent differences in decision-making processes. Conclusions To our knowledge, this is the first study to compare attitude surveys and conjoint analysis surveys and to explore the meaning of the results using post-hoc focus groups. Although the overall findings for attitudes and preferences were similar, the two approaches resulted in some different conclusions. Health researchers should consider the advantages and limitations of both methods when determining how to measure what people value. PMID:12546291

  16. User-customized brain computer interfaces using Bayesian optimization

    NASA Astrophysics Data System (ADS)

    Bashashati, Hossein; Ward, Rabab K.; Bashashati, Ali

    2016-04-01

    Objective. The brain characteristics of different people are not the same. Brain computer interfaces (BCIs) should thus be customized for each individual person. In motor-imagery based synchronous BCIs, a number of parameters (referred to as hyper-parameters) including the EEG frequency bands, the channels and the time intervals from which the features are extracted should be pre-determined based on each subject’s brain characteristics. Approach. To determine the hyper-parameter values, previous work has relied on manual or semi-automatic methods that are not applicable to high-dimensional search spaces. In this paper, we propose a fully automatic, scalable and computationally inexpensive algorithm that uses Bayesian optimization to tune these hyper-parameters. We then build different classifiers trained on the sets of hyper-parameter values proposed by the Bayesian optimization. A final classifier aggregates the results of the different classifiers. Main Results. We have applied our method to 21 subjects from three BCI competition datasets. We have conducted rigorous statistical tests, and have shown the positive impact of hyper-parameter optimization in improving the accuracy of BCIs. Furthermore, we have compared our results to those reported in the literature. Significance. Unlike the best reported results in the literature, which are based on more sophisticated feature extraction and classification methods, and rely on prestudies to determine the hyper-parameter values, our method has the advantage of being fully automated, uses less sophisticated feature extraction and classification methods, and yields similar or superior results compared to the best performing designs in the literature.
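
    A minimal sketch of Bayesian optimization over BCI-style hyper-parameters using scikit-optimize's gp_minimize; the objective below is a toy surrogate for the real cross-validated classification error, and the search-space bounds are illustrative assumptions.

    ```python
    from skopt import gp_minimize  # scikit-optimize

    # Toy stand-in for a BCI cross-validation error: the real objective
    # would train a classifier on features extracted with the candidate
    # hyper-parameters and return (1 - accuracy).
    def cv_error(params):
        f_low, f_high, t_start, t_len = params
        return ((f_low - 8) ** 2 / 100 + (f_high - 30) ** 2 / 400
                + (t_start - 0.5) ** 2 + (t_len - 2.0) ** 2 / 4)

    space = [
        (4.0, 14.0),   # lower edge of EEG frequency band (Hz)
        (16.0, 40.0),  # upper edge of EEG frequency band (Hz)
        (0.0, 2.0),    # feature-extraction window start (s)
        (0.5, 3.0),    # window length (s)
    ]

    res = gp_minimize(cv_error, space, n_calls=30, random_state=0)
    print(res.x, res.fun)   # best hyper-parameters and best surrogate error
    ```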

  17. Comparison of Modal Analysis Methods Applied to a Vibro-Acoustic Test Article

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn; Pappa, Richard; Buehrle, Ralph; Grosveld, Ferdinand

    2001-01-01

    Modal testing of a vibro-acoustic test article referred to as the Aluminum Testbed Cylinder (ATC) has provided frequency response data for the development of validated numerical models of complex structures for interior noise prediction and control. The ATC is an all aluminum, ring and stringer stiffened cylinder, 12 feet in length and 4 feet in diameter. The cylinder was designed to represent typical aircraft construction. Modal tests were conducted for several different configurations of the cylinder assembly under ambient and pressurized conditions. The purpose of this paper is to present results from dynamic testing of different ATC configurations using two modal analysis software methods: Eigensystem Realization Algorithm (ERA) and MTS IDEAS Polyreference method. The paper compares results from the two analysis methods as well as the results from various test configurations. The effects of pressurization on the modal characteristics are discussed.

  18. Standardization of Tc-99 by two methods and participation at the CCRI(II)-K2. Tc-99 comparison.

    PubMed

    Sahagia, M; Antohe, A; Ioan, R; Luca, A; Ivan, C

    2014-05-01

    The work accomplished through participation in the 2012 key comparison of Tc-99 is presented. The solution was standardized for the first time in IFIN-HH by two methods: LSC-TDCR and 4π(PC)β-γ efficiency tracer. The methods are described and the results are compared. For the LSC-TDCR method, the program TDCR07c, written and provided by P. Cassette, was used for processing the measurement data. The results are 2.1% higher than when applying the TDCR06b program; the higher value, calculated with the software TDCR07c, was used for reporting the final result in the comparison. The tracer used for the 4π(PC)β-γ efficiency tracer method was a standard ⁶⁰Co solution. The sources were prepared from a mixture of the ⁶⁰Co + ⁹⁹Tc solutions, and a general extrapolation curve of the type N_β(⁹⁹Tc)/M(⁹⁹Tc) = f[1 − ε(⁶⁰Co)] was drawn. This value was not used for the final result of the comparison. The difference between the values of activity concentration obtained by the two methods was within the limit of the combined standard uncertainty of the difference of these two results. © 2013 Published by Elsevier Ltd.

  19. Effect of Chemistry Triangle Oriented Learning Media on Cooperative, Individual and Conventional Method on Chemistry Learning Result

    NASA Astrophysics Data System (ADS)

    Latisma D, L.; Kurniawan, W.; Seprima, S.; Nirbayani, E. S.; Ellizar, E.; Hardeli, H.

    2018-04-01

    The purpose of this study was to determine which methods work well with Chemistry Triangle-oriented learning media. This quasi-experimental research involved first-grade senior high school students in six schools: two SMAN in Solok city, two in Pasaman, and two SMKN in Pariaman. The sampling technique was Cluster Random Sampling. Data were collected by test and analyzed by one-way ANOVA and the Kruskal-Wallis test. The results showed that the high school students in Solok taught by the cooperative method learned better than students taught by the conventional and individual methods, both for students with high initial ability and for those with low ability. Research in the SMK showed that overall student learning outcomes under the conventional method were better than those under the cooperative and individual methods; outcomes for students with high initial ability taught by the individual method were better than those taught by the cooperative method, and for students with low initial ability there was no difference in learning outcomes among the cooperative, individual and conventional methods. Learning in the high schools in Pasaman showed no significant difference in learning outcomes among the three methods.
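
    For readers unfamiliar with the two tests named above, a minimal example of running one-way ANOVA and Kruskal-Wallis on three hypothetical score groups with SciPy (the numbers are invented, not the study's data):

    ```python
    from scipy import stats

    # Hypothetical test scores for the three teaching methods.
    cooperative  = [78, 82, 75, 88, 91, 69, 84]
    individual   = [71, 64, 80, 73, 77, 68, 70]
    conventional = [65, 72, 60, 70, 66, 74, 63]

    f, p_anova = stats.f_oneway(cooperative, individual, conventional)
    h, p_kw = stats.kruskal(cooperative, individual, conventional)
    print(f"ANOVA p = {p_anova:.4f}, Kruskal-Wallis p = {p_kw:.4f}")
    ```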

  20. The comparison of antimicrobial packaging properties with different applications incorporation method of active material

    NASA Astrophysics Data System (ADS)

    Anwar, R. W.; Sugiarto; Warsiki, E.

    2018-03-01

    Contamination of products after processing, during storage, distribution and marketing, is one of the main causes of food safety issues. Handling of food products after processing can be addressed during the packaging process. Antimicrobial (AM) active packaging is a packaging development concept that utilizes the interaction between the product and the packaging environment to delay bacterial spoilage by killing bacteria or reducing bacterial growth. The active system is formed by incorporating an antimicrobial agent into a packaging matrix that functions as a carrier. Several incorporation methods have been developed in this packaging concept, namely direct mixing, polishing, and encapsulation. The aims of this research were to examine the differences in AM packaging performance, including stability and effectiveness of function, produced by the three different methods. The stability of the packaging function was analyzed by examining the diffusivity of the active ingredient into the matrix using SEM. The effectiveness was analyzed by the packaging's ability to prevent microbial growth. The results showed that different incorporation methods resulted in different characteristics of the AM packaging.

  1. Chemometrics-assisted spectrophotometric green method for correcting interferences in biowaiver studies: Application to assay and dissolution profiling study of donepezil hydrochloride tablets

    NASA Astrophysics Data System (ADS)

    Korany, Mohamed A.; Mahgoub, Hoda; Haggag, Rim S.; Ragab, Marwa A. A.; Elmallah, Osama A.

    2018-06-01

    A green, simple and cost effective chemometric UV-Vis spectrophotometric method has been developed and validated for correcting interferences that arise during conducting biowaiver studies. Chemometric manipulation has been done for enhancing the results of direct absorbance, resulting from very low concentrations (high incidence of background noise interference) of earlier points in the dissolution timing in case of dissolution profile using first and second derivative (D1 & D2) methods and their corresponding Fourier function convoluted methods (D1/FF & D2/FF). The method applied for the biowaiver study of Donepezil Hydrochloride (DH) as a representative model was done by comparing two different dosage forms containing 5 mg DH per tablet as an application of a developed chemometric method for correcting interferences as well as for the assay and dissolution testing in its tablet dosage form. The results showed that the first derivative technique can be used for enhancement of the data in case of the low concentration range of DH (1–8 μg mL⁻¹) in the three different pH dissolution media which were used to estimate the low drug concentrations dissolved at the early points in the biowaiver study. Furthermore, the results showed similarity in phosphate buffer pH 6.8 and dissimilarity in the other 2 pH media. The method was validated according to ICH guidelines and USP monograph for both the assay (HCl of pH 1.2) and the dissolution study in 3 pH media (HCl of pH 1.2, acetate buffer of pH 4.5 and phosphate buffer of pH 6.8). Finally, the assessment of the method greenness was done using two different assessment techniques: National Environmental Method Index label and Eco scale methods. Both techniques ascertained the greenness of the proposed method.
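
    A hedged sketch of derivative enhancement of a weak, noisy absorbance band. Savitzky-Golay differentiation stands in for the paper's derivative-plus-Fourier-convolution step, and the spectrum is synthetic:

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    # Hypothetical UV absorbance spectrum of a low DH concentration with
    # additive background noise, on a 200-400 nm wavelength grid.
    wl = np.linspace(200, 400, 401)
    spectrum = 0.05 * np.exp(-((wl - 270) / 12) ** 2)          # weak band
    spectrum += 0.01 * np.random.default_rng(1).standard_normal(wl.size)

    # First (D1) and second (D2) derivatives; the smoothing window acts
    # as the noise filter that the convolution step provides in the paper.
    d1 = savgol_filter(spectrum, window_length=21, polyorder=3, deriv=1)
    d2 = savgol_filter(spectrum, window_length=21, polyorder=3, deriv=2)
    print(d1[np.argmax(np.abs(d1))], d2[np.argmax(np.abs(d2))])
    ```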

  2. Chemometrics-assisted spectrophotometric green method for correcting interferences in biowaiver studies: Application to assay and dissolution profiling study of donepezil hydrochloride tablets.

    PubMed

    Korany, Mohamed A; Mahgoub, Hoda; Haggag, Rim S; Ragab, Marwa A A; Elmallah, Osama A

    2018-06-15

    A green, simple and cost effective chemometric UV-Vis spectrophotometric method has been developed and validated for correcting interferences that arise during conducting biowaiver studies. Chemometric manipulation has been done for enhancing the results of direct absorbance, resulting from very low concentrations (high incidence of background noise interference) of earlier points in the dissolution timing in case of dissolution profile using first and second derivative (D1 & D2) methods and their corresponding Fourier function convoluted methods (D1/FF & D2/FF). The method applied for biowaiver study of Donepezil Hydrochloride (DH) as a representative model was done by comparing two different dosage forms containing 5 mg DH per tablet as an application of a developed chemometric method for correcting interferences as well as for the assay and dissolution testing in its tablet dosage form. The results showed that first derivative technique can be used for enhancement of the data in case of low concentration range of DH (1–8 μg mL⁻¹) in the three different pH dissolution media which were used to estimate the low drug concentrations dissolved at the early points in the biowaiver study. Furthermore, the results showed similarity in phosphate buffer pH 6.8 and dissimilarity in the other 2 pH media. The method was validated according to ICH guidelines and USP monograph for both assays (HCl of pH 1.2) and dissolution study in 3 pH media (HCl of pH 1.2, acetate buffer of pH 4.5 and phosphate buffer of pH 6.8). Finally, the assessment of the method greenness was done using two different assessment techniques: National Environmental Method Index label and Eco scale methods. Both techniques ascertained the greenness of the proposed method. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Laboratory based instruction in Pakistan: Comparative evaluation of three laboratory instruction methods in biological science at higher secondary school level

    NASA Astrophysics Data System (ADS)

    Cheema, Tabinda Shahid

    This study of laboratory based instruction at higher secondary school level was an attempt to gain some insight into the effectiveness of three laboratory instruction methods: the cooperative group instruction method, the individualised instruction method, and the lecture demonstration method, on biology achievement and retention. A randomised-subjects, pre-test post-test comparative methods design was applied. Three groups of students from a year 11 class in Pakistan conducted experiments using the different laboratory instruction methods. Pre-tests, achievement tests after the experiments, and retention tests one month later were administered. Results showed no significant difference between the groups on total achievement and retention, nor was there any significant difference on knowledge and comprehension test scores or skills performance. Future research investigating a similar problem is suggested.

  4. Discrimination of Medicine Radix Astragali from Different Geographic Origins Using Multiple Spectroscopies Combined with Data Fusion Methods

    NASA Astrophysics Data System (ADS)

    Wang, Hai-Yan; Song, Chao; Sha, Min; Liu, Jun; Li, Li-Ping; Zhang, Zheng-Yong

    2018-05-01

    Raman spectra and ultraviolet-visible absorption spectra of four different geographic origins of Radix Astragali were collected. These data were analyzed using kernel principal component analysis combined with sparse representation classification. The results showed that the recognition rate reached 70.44% using Raman spectra for data input and 90.34% using ultraviolet-visible absorption spectra for data input. A new fusion method based on Raman combined with ultraviolet-visible data was investigated and the recognition rate was increased to 96.43%. The experimental results suggested that the proposed data fusion method effectively improved the utilization rate of the original data.
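
    A minimal sketch of the fusion pipeline's shape, with random arrays standing in for the Raman and UV-Vis spectra and logistic regression standing in for sparse representation classification:

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n = 120
    raman = rng.standard_normal((n, 300))    # hypothetical Raman spectra
    uvvis = rng.standard_normal((n, 200))    # hypothetical UV-Vis spectra
    origin = rng.integers(0, 4, n)           # four geographic origins

    # Low-level data fusion: concatenate the two spectral blocks, then
    # extract non-linear features with kernel PCA and classify.
    fused = np.hstack([raman, uvvis])
    clf = make_pipeline(KernelPCA(n_components=20, kernel="rbf"),
                        LogisticRegression(max_iter=1000))
    print(cross_val_score(clf, fused, origin, cv=5).mean())
    ```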

  5. Single-Cell RNA-Sequencing: Assessment of Differential Expression Analysis Methods.

    PubMed

    Dal Molin, Alessandra; Baruzzo, Giacomo; Di Camillo, Barbara

    2017-01-01

    The sequencing of the transcriptomes of single cells, or single-cell RNA-sequencing, has now become the dominant technology for the identification of novel cell types and for the study of stochastic gene expression. In recent years, various tools for analyzing single-cell RNA-sequencing data have been proposed, many of them with the purpose of performing differential expression analysis. In this work, we compare four different tools for single-cell RNA-sequencing differential expression, together with two popular methods originally developed for the analysis of bulk RNA-sequencing data but largely applied to single-cell data. We discuss results obtained on two real and one synthetic dataset, along with considerations about the perspectives of single-cell differential expression analysis. In particular, we explore the methods' performance in four different scenarios, mimicking different unimodal or bimodal distributions of the data, as characteristic of single-cell transcriptomics. We observed marked differences between the selected methods in terms of precision and recall, the number of detected differentially expressed genes, and the overall performance. Globally, the results obtained in our study suggest that it is difficult to identify a best performing tool and that efforts are needed to improve the methodologies for single-cell RNA-sequencing data analysis and to gain better accuracy of results.

  6. [Direct and indirect ion selective electrodes methods: the differences specified through a case of Waldenström's macroglobulinemia].

    PubMed

    Zelmat, Mohamed Sofiane

    2015-01-01

    Direct and indirect ion selective electrodes (ISEs) are two methods commonly used in biochemistry laboratories to measure electrolytes such as sodium. In clinical practice, it is the sodium concentration in plasma water, measured by direct ISE, that is important to consider, as it is responsible for water movements between the liquid compartments. Knowing the difference between the two methods is important because there are situations leading to conflicting results between direct and indirect ISE, especially for sodium, and inappropriate therapeutic decisions could be taken if the clinician is not aware of this difference. An increase or decrease in plasma water volume distorts the results of indirect ISE because this method, after a dilution step, does not take into account the patient's real percentage of plasma water when determining the concentrations (leading, for sodium, to pseudohyponatremia, pseudonormonatremia or pseudohypernatremia). In direct ISE, the sample is not diluted and the results are correct even if the volume of plasma water is modified. This article specifies the differences between the two techniques through a case of Waldenström's macroglobulinemia and proposes a course of action for both the biologist and the clinician.
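
    A hedged arithmetic illustration of the dilution assumption: indirect ISE methods typically assume a plasma water fraction near 93%, so a lower true fraction (as with severe paraproteinaemia) biases the reported sodium downward. The numbers are illustrative, not from the case report.

    ```python
    # Indirect ISE dilutes the sample and effectively assumes a normal
    # plasma water fraction (~93%); direct ISE measures in plasma water.
    NORMAL_WATER_FRACTION = 0.93

    def indirect_from_direct(na_direct, water_fraction):
        # Approximate reported indirect-ISE sodium for a given true
        # (direct-ISE) value and the patient's actual plasma water fraction.
        return na_direct * water_fraction / NORMAL_WATER_FRACTION

    na_direct = 140.0               # mmol/L, physiologically normal
    for wf in (0.93, 0.85, 0.80):   # falling plasma water fraction
        print(wf, round(indirect_from_direct(na_direct, wf), 1))
    # 0.93 -> 140.0 ; 0.85 -> 127.9 ; 0.80 -> 120.4 (pseudohyponatremia)
    ```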

  7. Reinforce: An Ensemble Approach for Inferring PPI Network from AP-MS Data.

    PubMed

    Tian, Bo; Duan, Qiong; Zhao, Can; Teng, Ben; He, Zengyou

    2017-05-17

    Affinity Purification-Mass Spectrometry (AP-MS) is one of the most important technologies for constructing protein-protein interaction (PPI) networks. In this paper, we propose an ensemble method, Reinforce, for inferring a PPI network from an AP-MS data set. Reinforce is based on rank aggregation and false discovery rate control. Under the null hypothesis that the interaction scores from different scoring methods are randomly generated, Reinforce follows three steps to integrate multiple ranking results from different algorithms or different data sets. The experimental results show that Reinforce obtains more stable and accurate inference results than existing algorithms. The source code of Reinforce and the data sets used in the experiments are available at: https://sourceforge.net/projects/reinforce/.
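
    A minimal rank-aggregation sketch using a Borda count, purely to illustrate the idea of integrating multiple ranking results; Reinforce's actual aggregation and FDR-control steps are not reproduced.

    ```python
    from collections import defaultdict

    def borda_aggregate(rankings):
        """Aggregate several ranked lists of candidate interactions by
        Borda count (an illustrative stand-in for a rank aggregation step)."""
        scores = defaultdict(float)
        for ranking in rankings:
            n = len(ranking)
            for pos, item in enumerate(ranking):
                scores[item] += n - pos          # higher rank, more points
        return sorted(scores, key=scores.get, reverse=True)

    # Hypothetical outputs of three AP-MS scoring methods, best first.
    r1 = ["A-B", "A-C", "B-D", "C-D"]
    r2 = ["A-C", "A-B", "C-D", "B-D"]
    r3 = ["A-B", "B-D", "A-C", "C-D"]
    print(borda_aggregate([r1, r2, r3]))       # consensus ranking
    ```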

  8. A Different Approach to Preparing Novakian Concept Maps: The Indexing Method

    ERIC Educational Resources Information Center

    Turan Oluk, Nurcan; Ekmekci, Güler

    2016-01-01

    People who claim that applying Novakian concept maps in Turkish is problematic base their arguments largely upon the structural differences between the English and Turkish languages. This study aims to introduce the indexing method to eliminate problems encountered in Turkish applications of Novakian maps and to share the preliminary results of…

  9. The Effect of Three Methods of Supporting the Double Bass on Muscle Tension.

    ERIC Educational Resources Information Center

    Dennis, Allan

    1984-01-01

    Using different methods of holding the double bass, college students performed Beethoven's Symphony No. 9. Audio recordings of performance were rated. Muscle tension readings from the left arm, right arm, upper back, and lower back were taken, using electromyography. Results suggest nonsignificant differences in both performance quality and muscle…

  10. An Analysis of the Optimal Multiobjective Inventory Clustering Decision with Small Quantity and Great Variety Inventory by Applying a DPSO

    PubMed Central

    Li, Meng-Hua

    2014-01-01

    When an enterprise has thousands of item varieties in its inventory, a single management method is not a feasible approach. A better way to manage this problem is to categorise inventory items into several clusters according to inventory decisions and to use different management methods for different clusters. The present study applies DPSO (dynamic particle swarm optimisation) to the problem of clustering inventory items. Without requiring prior inventory knowledge, inventory items are automatically clustered into a near-optimal number of clusters. The obtained clustering results should satisfy the inventory objective equation, which consists of different objectives such as total cost, backorder rate, demand relevance, and inventory turnover rate. This study integrates the above four objectives into a multiobjective equation and inputs the actual inventory items of the enterprise into DPSO. In comparison with other clustering methods, the proposed method can consider different objectives and obtain an overall better solution, with better convergence results and inventory decisions. PMID:25197713
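
    A compact particle swarm sketch for clustering with a single stand-in objective (total distance of items to their nearest centroid); the paper's dynamic PSO variant and its four-part weighted objective are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    items = rng.random((200, 3))   # hypothetical per-item inventory features
    k, swarm, iters = 3, 15, 200   # clusters, particles, iterations

    def cost(centroids):
        # Stand-in single objective; the paper combines cost, backorder
        # rate, demand relevance and turnover into one weighted equation.
        d = np.linalg.norm(items[:, None, :] - centroids[None, :, :], axis=2)
        return d.min(axis=1).sum()

    pos = rng.random((swarm, k, 3))          # each particle = one centroid set
    vel = np.zeros_like(pos)
    pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 1)
        c = np.array([cost(p) for p in pos])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
        gbest = pbest[pbest_cost.argmin()].copy()

    print(cost(gbest))   # best centroid set found by the swarm
    ```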

  11. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

    PubMed Central

    Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly

    2016-01-01

    This paper discusses numerical analysis methods for different geometrical features that have limited interval values at typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties and is calculated in the same way as the previous method. Generally, a small number of arithmetic operations, resulting in a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward in improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was achieved by both methods.
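
    For orientation, a plain explicit 1-D FDTD update loop (the baseline scheme that the implicit ADI/LOD variants improve upon); the grid size, field scaling and source below are illustrative choices.

    ```python
    import numpy as np

    # Explicit 1-D Yee FDTD in normalised units; the ADI/LOD variants in
    # the paper replace these explicit updates with implicit sub-steps
    # that remain stable for larger time steps.
    nz, steps = 400, 600
    ez, hy = np.zeros(nz), np.zeros(nz - 1)
    c = 0.5    # Courant number (must be <= 1 for the explicit scheme)

    for t in range(steps):
        hy += c * np.diff(ez)                 # update H from curl of E
        ez[1:-1] += c * np.diff(hy)           # update E from curl of H
        ez[nz // 4] += np.exp(-((t - 60) / 15) ** 2)   # soft Gaussian source

    print(float(np.abs(ez).max()))
    ```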

  12. International comparison of observation-specific spatial buffers: maximizing the ability to estimate physical activity.

    PubMed

    Frank, Lawrence D; Fox, Eric H; Ulmer, Jared M; Chapman, James E; Kershaw, Suzanne E; Sallis, James F; Conway, Terry L; Cerin, Ester; Cain, Kelli L; Adams, Marc A; Smith, Graham R; Hinckson, Erica; Mavoa, Suzanne; Christiansen, Lars B; Hino, Adriano Akira F; Lopes, Adalberto A S; Schipperijn, Jasper

    2017-01-23

    Advancements in geographic information systems over the past two decades have increased the specificity by which an individual's neighborhood environment may be spatially defined for physical activity and health research. This study investigated how different types of street network buffering methods compared in measuring a set of commonly used built environment measures (BEMs) and tested their performance on associations with physical activity outcomes. An internationally-developed set of objective BEMs using three different spatial buffering techniques were used to evaluate the relative differences in resulting explanatory power on self-reported physical activity outcomes. BEMs were developed in five countries using 'sausage,' 'detailed-trimmed,' and 'detailed,' network buffers at a distance of 1 km around participant household addresses (n = 5883). BEM values were significantly different (p < 0.05) for 96% of sausage versus detailed-trimmed buffer comparisons and 89% of sausage versus detailed network buffer comparisons. Results showed that BEM coefficients in physical activity models did not differ significantly across buffering methods, and in most cases BEM associations with physical activity outcomes had the same level of statistical significance across buffer types. However, BEM coefficients differed in significance for 9% of the sausage versus detailed models, which may warrant further investigation. Results of this study inform the selection of spatial buffering methods to estimate physical activity outcomes using an internationally consistent set of BEMs. Using three different network-based buffering methods, the findings indicate significant variation among BEM values, however associations with physical activity outcomes were similar across each buffering technique. The study advances knowledge by presenting consistently assessed relationships between three different network buffer types and utilitarian travel, sedentary behavior, and leisure-oriented physical activity outcomes.

  13. The Demirjian versus the Willems method for dental age estimation in different populations: A meta-analysis of published studies

    PubMed Central

    2017-01-01

    Background The accuracy of radiographic methods for dental age estimation is important for biological growth research and forensic applications. Accuracy of the two most commonly used systems (Demirjian and Willems) has been evaluated with conflicting results. This study investigates the accuracies of these methods for dental age estimation in different populations. Methods A search of PubMed, Scopus, Ovid, Database of Open Access Journals and Google Scholar was undertaken. Eligible studies published before December 28, 2016 were reviewed and analyzed. Meta-analysis was performed on 28 published articles using the Demirjian and/or Willems methods to estimate chronological age in 14,109 children (6,581 males, 7,528 females) age 3–18 years in studies using Demirjian’s method and 10,832 children (5,176 males, 5,656 females) age 4–18 years in studies using Willems’ method. The weighted mean difference at 95% confidence interval was used to assess accuracies of the two methods in predicting the chronological age. Results The Demirjian method significantly overestimated chronological age (p<0.05) in males age 3–15 and females age 4–16 when studies were pooled by age cohorts and sex. The majority of studies using Willems’ method did not report significant overestimation of ages in either sex. Overall, Demirjian’s method significantly overestimated chronological age compared to the Willems method (p<0.05). The weighted mean difference for the Demirjian method was 0.62 for males and 0.72 for females, while that of the Willems method was 0.26 for males and 0.29 for females. Conclusion The Willems method provides more accurate estimation of chronological age in different populations, while Demirjian’s method has a broad application in terms of determining maturity scores. However, accuracy of Demirjian age estimations is confounded by population variation when converting maturity scores to dental ages. For highest accuracy of age estimation, population-specific standards, rather than a universal standard or methods developed on other populations, need to be employed. PMID:29117240
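
    A minimal sketch of the fixed-effect inverse-variance pooling behind a weighted mean difference, with invented per-study values (not the meta-analysis data):

    ```python
    import numpy as np

    # Fixed-effect inverse-variance pooling of per-study mean differences
    # (dental age minus chronological age); values are hypothetical.
    md = np.array([0.55, 0.70, 0.48, 0.81])    # study mean differences (yr)
    se = np.array([0.10, 0.15, 0.08, 0.20])    # their standard errors

    w = 1.0 / se ** 2                          # inverse-variance weights
    wmd = np.sum(w * md) / np.sum(w)           # weighted mean difference
    se_wmd = np.sqrt(1.0 / np.sum(w))
    print(f"WMD = {wmd:.2f} "
          f"(95% CI {wmd - 1.96*se_wmd:.2f} to {wmd + 1.96*se_wmd:.2f})")
    ```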

  14. Multiple-algorithm parallel fusion of infrared polarization and intensity images based on algorithmic complementarity and synergy

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng

    2018-01-01

    Diverse image fusion methods perform differently, and each has advantages and disadvantages compared with the others. One notion is that the advantages of different fusion methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index vector-based feature similarity is proposed to define the degree of complementarity and synergy. This proposed index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the different degrees of the various features and the infrared intensity images are used as the initial weights for the nonnegative matrix factorization (NMF). This avoids the randomness of the NMF initialization parameters. Finally, the fused images of the different algorithms are integrated using the NMF because of its excellent data fusing performance on independent features. Experimental results demonstrate that the visual effect and objective evaluation indices of the fused images obtained using the proposed method are better than those obtained using traditional methods. The proposed method retains all the advantages that the individual fusion algorithms have.
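
    A minimal sketch of seeding NMF with caller-supplied initial factors via scikit-learn's init='custom', which is the mechanism that removes the initialisation randomness the paper mentions; the seeds here are random placeholders rather than the paper's feature-degree weights.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    X = rng.random((64, 64))        # hypothetical non-negative image block

    # In the paper, weights derived from difference-features and the
    # infrared intensity image would seed W and H; random non-negative
    # seeds stand in here.
    k = 8
    W0 = rng.random((64, k))
    H0 = rng.random((k, 64))

    # init='custom' lets the caller fix the NMF starting point instead of
    # relying on a random default initialisation.
    model = NMF(n_components=k, init="custom", max_iter=500)
    W = model.fit_transform(X, W=W0, H=H0)
    print(model.reconstruction_err_)
    ```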

  15. A comparison of different interpolation methods for wind data in Central Asia

    NASA Astrophysics Data System (ADS)

    Reinhardt, Katja; Samimi, Cyrus

    2017-04-01

    For the assessment of global climate change and its consequences, the results of computer-based climate models are of central importance. The quality of these results and the validity of the derived forecasts are strongly determined by the quality of the underlying climate data. However, in many parts of the world high resolution data are not available. This is particularly true for many regions in Central Asia, where the density of climatological stations often has to be described as sparse. Given this insufficient data base, the use of statistical methods to improve the resolution of existing climate data is of crucial importance. Only this can provide a substantial data base for a well-founded analysis of past climate changes as well as for a reliable forecast of future climate developments for the particular region. The study presented here compares different interpolation methods for the wind components u and v for a region in Central Asia with a pronounced topography. The aim of the study is to find out whether there is an optimal interpolation method that can equally be applied to all pressure levels, or whether different interpolation methods have to be applied for each pressure level. The European reanalysis data ERA-Interim for the years 1989-2015 are used as input data for the pressure levels of 850 hPa, 500 hPa and 200 hPa. In order to improve the input data, two different interpolation approaches were applied: on the one hand pure interpolation methods, such as inverse distance weighting and ordinary kriging, and on the other hand machine learning algorithms, generalized additive models and regression kriging, which consider additional influencing factors, e.g. geopotential and topography. As a result it can be concluded that regression kriging provides the best results for all pressure levels, followed by support vector machine, neural networks and ordinary kriging. Inverse distance weighting showed the worst results.
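
    A minimal inverse distance weighting sketch, the simplest of the compared methods, on invented station coordinates and wind values (the kriging and machine learning variants need more than a few lines):

    ```python
    import numpy as np

    def idw(xy_known, values, xy_query, power=2.0):
        """Inverse distance weighting: one of the pure interpolation
        methods compared in the study (kriging variants omitted here)."""
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)              # avoid division by zero
        w = 1.0 / d ** power
        return (w @ values) / w.sum(axis=1)

    rng = np.random.default_rng(0)
    stations = rng.random((30, 2)) * 100      # hypothetical coordinates (km)
    u_wind = rng.standard_normal(30) * 5      # u component at the stations
    grid = np.column_stack([g.ravel() for g in np.meshgrid(
        np.linspace(0, 100, 25), np.linspace(0, 100, 25))])
    print(idw(stations, u_wind, grid).shape)  # (625,) interpolated values
    ```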

  16. Estimation of effective brain connectivity with dual Kalman filter and EEG source localization methods.

    PubMed

    Rajabioun, Mehdi; Nasrabadi, Ali Motie; Shamsollahi, Mohammad Bagher

    2017-09-01

    Effective connectivity is one of the most important considerations in brain functional mapping via EEG. It demonstrates the effects of a particular active brain region on others. In this paper, a new method is proposed based on the dual Kalman filter. In this method, a brain activity localization method (standardized low resolution brain electromagnetic tomography) is first applied to the EEG signal to extract the active regions, and an appropriate temporal model (a multivariate autoregressive model) is fitted to the extracted active brain sources to evaluate the activity and time dependence between sources. Then, a dual Kalman filter is used to estimate the model parameters, i.e., the effective connectivity between active regions. The advantage of this method is that the activity of different brain parts is estimated simultaneously with the calculation of the effective connectivity between active regions. By combining the dual Kalman filter with brain source localization methods, the source activity is updated over time in addition to the connectivity estimation. The performance of the proposed method was evaluated first by applying it to simulated EEG signals with simulated interacting connectivity between active parts. Noisy simulated signals with different signal-to-noise ratios were used to evaluate the method's sensitivity to noise and to compare its performance with other methods. The method was then applied to real signals, and the estimation error over a sweeping window was calculated. Comparing the results under the different conditions (simulated and real signals), the proposed method gives acceptable results with the least mean square error under noisy and real conditions.

  17. Qualitative Maintenance Experience Handbook

    DTIC Science & Technology

    1975-10-20

    differences in type and location of actuators results. DESIRABLE FEATURES: 1. The simpler assist methods are easier to get to usually and are smaller... the wheels differ somewhat in method of removal, there exist no particular features that would qualify as "undesirable." 3. The AV-8 requires special... different airplanes, this survey identifies desirable and undesirable features evident in the various installations of the same component. In essence

  18. Urban local climate zone mapping and apply in urban environment study

    NASA Astrophysics Data System (ADS)

    He, Shan; Zhang, Yunwei; Zhang, Jili

    2018-02-01

    The local climate zone (LCZ) scheme is considered a powerful tool for urban climate mapping. However, for cities in different countries and regions, the LCZ division methods and results differ, so targeted research is needed. In the current work, an LCZ mapping method is proposed that is convenient in operation and oriented toward city planning. In this proposed method, the local climate zone types were first adjusted to the characteristics of Chinese cities, which typically have taller buildings and higher densities. Then the classification method proposed by WUDAPT, based on remote sensing data, was applied to the city of Xi'an as an example of LCZ mapping. Combined with the city road network, the zoning results were expressed in a form suited to city planning, in which land parcels are usually recognized as the basic unit. The proposed method was validated against actual land use and construction data surveyed in Xi'an, with results indicating the feasibility of the proposed method for urban LCZ mapping in China.

  19. High order multi-grid methods to solve the Poisson equation

    NASA Technical Reports Server (NTRS)

    Schaffer, S.

    1981-01-01

    High order multigrid methods based on finite difference discretization of the model problem are examined. A fixed high order FMG-FAS multigrid algorithm is described, the high order methods themselves are presented, and results are given for four problems using each method with the same underlying fixed FMG-FAS algorithm.

  20. Comparative results of using different methods for discovery of microorganisms in very ancient layers of the Central Antartic Glacier above the Lake Vostok

    NASA Astrophysics Data System (ADS)

    Abyzov, S.; Hoover, R.; Imura, S.; Mitskevich, I.; Naganuma, T.; Poglazova, M.; Ivanov, M.

    The ice sheet of Central Antarctica is considered by the worldwide scientific community as a model for developing different methods for the search for life outside the Earth. This problem became especially significant in connection with the discovery of the subglacial lake in the vicinity of the Russian Antarctic station Vostok. This lake, later named "Lake Vostok", is considered by many scientists as an analog of the ice-covered seas of Jupiter's satellite Europa. In the opinion of many researchers, there is a strong possibility that relict forms of microorganisms, well preserved since the Ice Age, are present in this lake. Investigations throughout the thickness of the ice sheet above Lake Vostok show the presence of microorganisms belonging to well-known taxonomic groups, even in the very ancient horizons close to the floor of the glacier. Different methods were used to search for the microorganisms, which were only rarely found in the deep ancient layers of the ice sheet. The method of aseptic sampling from the ice cores and the results of controlling sterile conditions at all stages of these investigations are described in detail in previous reports. Initial investigations using the usual methods of plating samples onto different nutrient media recovered only the small fraction of the microorganisms able to grow on the media used; the possibility of isolating the obtained organisms for further investigation using modern methods, including DNA analysis, appears to be the principal advantage of this approach. In further investigations of the very ancient layers of the ice sheet by radioisotopic, luminescence and scanning electron microscopy methods of different modifications, the quantity of microorganisms distributed across different horizons, as well as the morphological diversity of the obtained cells, was determined. The experience of many years of investigation of the microflora in the very ancient strata of the Antarctic ice cover close to the bedrock testifies to the effectiveness of combining different methods in the search for signs of life in ancient icy formations, which may evidently preserve and transport life in the Universe.

  1. Self-Tuning Threshold Method for Real-Time Gait Phase Detection Based on Ground Contact Forces Using FSRs.

    PubMed

    Tang, Jing; Zheng, Jianbin; Wang, Yang; Yu, Lie; Zhan, Enqi; Song, Qiuzhi

    2018-02-06

    This paper presents a novel methodology for detecting the gait phase of human walking on level ground. The previous threshold method (TM) sets a threshold to divide the ground contact forces (GCFs) into on-ground and off-ground states. However, previous methods for gait phase detection show no adaptability to different people and different walking speeds. Therefore, this paper presents a self-tuning triple threshold algorithm (STTTA) that calculates adjustable thresholds to adapt to human walking. Two force sensitive resistors (FSRs) were placed on the ball and heel to measure GCFs. Three thresholds (i.e., high-threshold, middle-threshold, and low-threshold) were used to search out the maximum and minimum GCFs for the self-adjustment of the thresholds. The high-threshold was the main threshold used to divide the GCFs into on-ground and off-ground statuses. Then, the gait phases were obtained through the gait phase detection algorithm (GPDA), which provides the rules that determine the calculations for STTTA. Finally, the reliability of STTTA was determined by comparing its results with those of the Mariani method (referenced as the timing analysis module, TAM) and the Lopez-Meyer method. Experimental results show that the proposed method can detect gait phases in real time and achieves high reliability compared with the previous methods in the literature. In addition, the proposed method exhibits strong adaptability to different wearers walking at different walking speeds.
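
    A hedged single-threshold sketch of the self-tuning idea: the threshold is re-derived from the running GCF extremes so it adapts to the wearer and speed. The published STTTA uses three thresholds plus the GPDA rule set, which is not reproduced here.

    ```python
    # Minimal self-adjusting on/off-ground threshold for one FSR channel.
    def detect_contact(gcf_stream, alpha=0.5):
        gmin, gmax = float("inf"), float("-inf")
        states = []
        for g in gcf_stream:
            gmin, gmax = min(gmin, g), max(gmax, g)
            # threshold re-tuned from the running extremes; the first few
            # samples are unreliable until both extremes have been seen
            threshold = gmin + alpha * (gmax - gmin)
            states.append(g > threshold)       # True = on-ground
        return states

    gcf = [0.1, 0.2, 3.5, 4.0, 3.8, 0.3, 0.1, 3.9, 4.2, 0.2]
    print(detect_contact(gcf))
    ```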

  2. Source separation of municipal solid waste: The effects of different separation methods and citizens' inclination-case study of Changsha, China.

    PubMed

    Chen, Haibin; Yang, Yan; Jiang, Wei; Song, Mengjie; Wang, Ying; Xiang, Tiantian

    2017-02-01

    A case study on the source separation of municipal solid waste (MSW) was performed in Changsha, the capital city of Hunan Province, China. The objective of this study is to analyze the effects of different separation methods and to compare their effects with citizens' attitudes and inclination. An effect evaluation method based on accuracy rate and miscellany rate was proposed to study the performance of the different separation methods. A large-scale questionnaire survey was conducted to determine citizens' attitudes and inclination toward source separation. The survey results show that the vast majority of respondents consciously hold positive attitudes toward participation in source separation. Moreover, the respondents ignore the operability of separation methods and would rather choose a complex separation method involving four or more subclassed categories. Regarding the effects of the separation methods, the site experiment demonstrates that the relatively simple separation method involving two categories (food waste and other waste) achieves the best effect, with the highest accuracy rate (83.1%) and the lowest miscellany rate (16.9%) among the proposed experimental alternatives. The outcome reflects the inconsistency between people's environmental awareness and behavior; such inconsistency and conflict may be attributed to a lack of environmental knowledge. Environmental education is assumed to be a fundamental solution for improving the effect of source separation of MSW in Changsha. Important management tips on source separation, including reform of the current pay-as-you-throw (PAYT) system, are presented in this work.
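
    A hedged illustration of the two metrics, under the plausible reading that they partition the contents of a bin; the paper's exact definitions may differ.

    ```python
    # Accuracy rate = correctly sorted waste in the target bin;
    # miscellany rate = foreign waste mixed into it (assumed reading).
    def bin_metrics(correct_kg, foreign_kg):
        total = correct_kg + foreign_kg
        return correct_kg / total, foreign_kg / total

    print(bin_metrics(correct_kg=83.1, foreign_kg=16.9))  # (0.831, 0.169)
    ```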

  3. Comparison of heuristic and cognitive walkthrough usability evaluation methods for evaluating health information systems.

    PubMed

    Khajouei, Reza; Zahiri Esfahani, Misagh; Jahani, Yunes

    2017-04-01

    There are several user-based and expert-based usability evaluation methods that may perform differently according to the context in which they are used. The objective of this study was to compare 2 expert-based methods, heuristic evaluation (HE) and cognitive walkthrough (CW), for evaluating the usability of health care information systems. Five evaluators independently evaluated a medical office management system using HE and CW. We compared the 2 methods in terms of the number of identified usability problems, their severity, and the coverage of each method. In total, 156 problems were identified using the 2 methods. HE identified a significantly higher number of problems related to the "satisfaction" attribute (P = .002). The number of problems identified using CW concerning the "learnability" attribute was significantly higher than the number identified using HE (P = .005). There was no significant difference across usability attributes in the numbers of problems identified by HE (P = .232), whereas the CW results showed a significant difference across attributes (P < .0001). The average severity of problems identified using CW was significantly higher than that of HE (P < .0001). This study showed that HE and CW do not differ significantly in terms of the number of usability problems identified, but they differ based on the severity of problems and the coverage of some usability attributes. The results suggest that CW would be the preferred method for evaluating systems intended for novice users and HE for users who have experience with similar systems. However, more studies are needed to support this finding. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  4. Adaptive methods, rolling contact, and nonclassical friction laws

    NASA Technical Reports Server (NTRS)

    Oden, J. T.

    1989-01-01

    Results and methods in three different areas of contemporary research are outlined. These include adaptive methods, the rolling contact problem for finite deformation of a hyperelastic or viscoelastic cylinder, and non-classical friction laws for modeling dynamic friction phenomena.

  5. Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies

    PubMed Central

    Gülhan, Orekıcı Temel

    2016-01-01

    Background/Aim. It is necessary to decide whether newly improved diagnostic methods are better than the standard or reference test. To decide whether a new diagnostic test is better than the gold standard test/imperfect standard test, the differences of the estimated sensitivities/specificities are calculated from sample data. However, to generalize these values to the population, they should be reported with confidence intervals. The aim of this study is to evaluate, in a clinical application, the confidence interval methods developed for the differences between two dependent sensitivity/specificity values. Materials and Methods. Confidence interval methods such as Asymptotic Intervals, Conditional Intervals, Unconditional Intervals, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As a clinical application, data from the diagnostic study by Dickel et al. (2010) were taken as a sample. Results. The results of the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are presented in a table. Conclusion. When choosing among the confidence interval methods, researchers have to consider whether the comparison involves a single proportion or differences between dependent binary proportions, the correlation coefficient between the rates in the two dependent proportions, and the sample sizes. PMID:27478491

  6. Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies.

    PubMed

    Erdoğan, Semra; Gülhan, Orekıcı Temel

    2016-01-01

    Background/Aim. It is necessary to decide whether newly improved diagnostic methods are better than the standard or reference test. To decide whether a new diagnostic test is better than the gold standard test/imperfect standard test, the differences of the estimated sensitivities/specificities are calculated from sample data. However, to generalize these values to the population, they should be reported with confidence intervals. The aim of this study is to evaluate, in a clinical application, the confidence interval methods developed for the differences between two dependent sensitivity/specificity values. Materials and Methods. Confidence interval methods such as Asymptotic Intervals, Conditional Intervals, Unconditional Intervals, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As a clinical application, data from the diagnostic study by Dickel et al. (2010) were taken as a sample. Results. The results of the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are presented in a table. Conclusion. When choosing among the confidence interval methods, researchers have to consider whether the comparison involves a single proportion or differences between dependent binary proportions, the correlation coefficient between the rates in the two dependent proportions, and the sample sizes.
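
    As a hedged illustration of the simplest of the listed approaches, an asymptotic (Wald) interval for the difference of two dependent proportions (e.g., sensitivities of two tests applied to the same patients) can be computed from the discordant pair counts. This is a generic textbook construction, not necessarily the exact variant evaluated in the study.

        import math

        def wald_ci_paired_diff(b, c, n, z=1.96):
            # b: pairs positive only on test 1; c: pairs positive only on
            # test 2; n: total number of pairs. The estimated difference of
            # the two dependent proportions is (b - c) / n.
            d = (b - c) / n
            se = math.sqrt((b + c - (b - c) ** 2 / n) / n ** 2)
            return d - z * se, d + z * se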

  7. Integrating SANS and fluid-invasion methods to characterize pore structure of typical American shale oil reservoirs.

    PubMed

    Zhao, Jianhua; Jin, Zhijun; Hu, Qinhong; Jin, Zhenkui; Barber, Troy J; Zhang, Yuxiang; Bleuel, Markus

    2017-11-13

    An integration of small-angle neutron scattering (SANS), low-pressure N2 physisorption (LPNP), and mercury injection capillary pressure (MICP) methods was employed to study the pore structure of four oil shale samples from the leading Niobrara, Wolfcamp, Bakken, and Utica Formations in the USA. Porosity values obtained from SANS are higher than those from the two fluid-invasion methods, owing to the ability of neutrons to probe pore spaces inaccessible to N2 and mercury. However, the SANS and LPNP methods exhibit a similar pore-size distribution, and both methods (which measure total pore volume) yield porosity and pore-size distributions different from those obtained with the MICP method (which quantifies pore throats). Multi-scale (five pore-diameter intervals) porosity inaccessible to N2 was determined using SANS and LPNP data. Overall, a large fraction of inaccessible porosity occurs at pore diameters <10 nm, which we attribute to the low connectivity of organic matter-hosted and clay-associated pores in these shales. While each method probes a unique aspect of the complex pore structure of shale, the discrepancies between pore-structure results from different methods are explained with respect to their differences in measurable pore-diameter ranges, pore space, pore type, sample size and associated pore connectivity, as well as theoretical basis and interpretation.

  8. Level set method for image segmentation based on moment competition

    NASA Astrophysics Data System (ADS)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces moment competition and weakly supervised information into the energy functional construction. Different from region-based level set methods, which use force competition, moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. The intensity differences between the three points and the unlabeled pixels are then used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour toward the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods with respect to initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method in segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  9. Comparison of fractionation methods for nitrogen and starch in maize and grass silages.

    PubMed

    Ali, M; de Jonge, L H; Cone, J W; van Duinkerken, G; Blok, M C; Bruinenberg, M H; Hendriks, W H

    2016-06-01

    In the in situ nylon bag technique, many feed evaluation systems use a washing machine method (WMM) to determine the washout (W) fraction and to wash the rumen-incubated nylon bags. As this method has some disadvantages, an alternative modified method (MM) was recently introduced. The aim of this study was to determine and compare the W and non-washout (D+U) fractions of nitrogen (N) and/or starch of maize and grass silages using the WMM and the MM. Ninety-nine maize silage and 99 grass silage samples were selected with a broad range in chemical composition. The results showed a large range in the W, soluble (S) and D+U fractions of N of maize and grass silages, and in the W, insoluble washout (W-S) and D+U fractions of starch of maize silages, determined by both methods, due to variation in their chemical composition. The values for the N fractions of maize and grass silages obtained with the two methods were found to differ (p < 0.001). Large differences (p < 0.001) were found in the D+U fraction of starch of maize silages, which might be due to different methodological approaches, such as different rinsing procedures (washing vs. shaking), duration of rinsing (40 min vs. 60 min) and different solvents (water vs. buffer solution). The large differences (p < 0.001) in the W-S and D+U fractions of starch determined with the two methods can lead to different predicted values for the effective rumen starch degradability. In conclusion, the MM, with one recommended shaking procedure performed under identical and controlled experimental conditions, can give more reliable results than the WMM, which uses different washing programs and procedures. Journal of Animal Physiology and Animal Nutrition © 2015 Blackwell Verlag GmbH.

  10. Review of disability weight studies: comparison of methodological choices and values

    PubMed Central

    2014-01-01

    Introduction The disability-adjusted life year (DALY) is widely used to assess the burden of different health problems and risk factors. The disability weight, a value anchored between 0 (perfect health) and 1 (equivalent to death), is necessary to estimate the disability component (years lived with disability, YLDs) of the DALY. After publication of the ground-breaking Global Burden of Disease (GBD) 1996, alternative sets of disability weights have been developed over the past 16 years, each using different approaches with regard to the panel, health state description, and valuation methods. The objective of this study was to review all studies that developed disability weights and to critically assess the methodological design choices (health state and time description, panel composition, and valuation method). Furthermore, disability weights of eight specific conditions were compared. Methods Disability weight studies (1990-2012) in international peer-reviewed journals and grey literature were identified, with the main inclusion criterion being that the study assessed DALY disability weights for several conditions or a specific group of illnesses. Studies were collated by design, methods, and evaluation of results. Results Twenty-two studies met the inclusion criteria of our review. There is considerable variation in the methods used to derive disability weights, although most studies used a disease-specific description of the health state, a panel that consisted of medical experts, and a non-preference-based valuation method to assess the values for the majority of the disability weights. Comparisons of disability weights across 15 specific disease and injury groups showed that the subdivision of a disease into separate health states (stages) differed markedly across studies. Additionally, weights for similar health states differed, particularly in the case of mild diseases, for which the disability weight differed by a factor of two or more. Conclusions In terms of comparability of the resulting YLDs, the global use of the same set of disability weights has advantages, though practical constraints and intercultural differences should be taken into account in such a set. PMID:26019690
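
    For orientation, the role of the disability weight in the YLD and DALY calculations can be stated in a few lines. This is the basic (undiscounted, non-age-weighted) formulation; some GBD revisions modify it.

        def yld(cases, disability_weight, duration_years):
            # Years lived with disability for one health state:
            # YLD = cases x disability weight (0..1) x average duration.
            return cases * disability_weight * duration_years

        def daly(yll, total_yld):
            # DALY = years of life lost (YLL) + years lived with disability.
            return yll + total_yld

        # e.g. 1,000 cases, weight 0.2, 5 years -> 1,000 YLDs; a weight that
        # differs by a factor of two shifts the YLD estimate by the same factor,
        # which is why the weight discrepancies noted above matter.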

  11. Human body mass estimation: a comparison of "morphometric" and "mechanical" methods.

    PubMed

    Auerbach, Benjamin M; Ruff, Christopher B

    2004-12-01

    In the past, body mass was reconstructed from hominin skeletal remains using both "mechanical" methods which rely on the support of body mass by weight-bearing skeletal elements, and "morphometric" methods which reconstruct body mass through direct assessment of body size and shape. A previous comparison of two such techniques, using femoral head breadth (mechanical) and stature and bi-iliac breadth (morphometric), indicated a good general correspondence between them (Ruff et al. [1997] Nature 387:173-176). However, the two techniques were never systematically compared across a large group of modern humans of diverse body form. This study incorporates skeletal measures taken from 1,173 Holocene adult individuals, representing diverse geographic origins, body sizes, and body shapes. Femoral head breadth, bi-iliac breadth (after pelvic rearticulation), and long bone lengths were measured on each individual. Statures were estimated from long bone lengths using appropriate reference samples. Body masses were calculated using three available femoral head breadth (FH) formulae and the stature/bi-iliac breadth (STBIB) formula, and compared. All methods yielded similar results. Correlations between FH estimates and STBIB estimates are 0.74-0.81. Slight differences in results between the three FH estimates can be attributed to sampling differences in the original reference samples, and in particular, the body-size ranges included in those samples. There is no evidence for systematic differences in results due to differences in body proportions. Since the STBIB method was validated on other samples, and the FH methods produced similar estimates, this argues that either may be applied to skeletal remains with some confidence. 2004 Wiley-Liss, Inc.

  12. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    PubMed

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN. For simplicity of comparison, age and gender were used to adjust population heterogeneity in this study. In simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests (p < 0.05). In a real dataset, SAN had the lowest SDM and Kolmogorov-Smirnov values for blood urea nitrogen, hematocrit, hemoglobin, and serum potassium, and the lowest SDM for serum creatinine (p < 0.05). Subgroup-adjusted normalization performed better than normalization using other methods. The SAN method is applicable in a DRN environment and should facilitate analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
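
    The abstract describes SAN only at a high level. One plausible reading, stated here as an assumption rather than the published algorithm, is to z-score each laboratory value within its clinico-epidemiologic subgroup and rescale to reference statistics for that subgroup:

        import pandas as pd

        def subgroup_adjusted_normalize(df, value_col, group_cols, ref_stats):
            # Hypothetical sketch: standardize within each subgroup (e.g.,
            # age band x gender), then map onto a reference mean/SD per
            # subgroup so that means and SDs agree across sites under the
            # population structure-adjusted conditions described above.
            # ref_stats: dict mapping subgroup key -> (mean, sd).
            out = df.copy()
            for key, sub in df.groupby(group_cols):
                z = (sub[value_col] - sub[value_col].mean()) / sub[value_col].std()
                mu, sigma = ref_stats[key]
                out.loc[sub.index, value_col] = mu + sigma * z
            return out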

  13. Assessment of Exposure of Elementary Schools to Traffic Pollution by GIS Methods.

    PubMed

    Štych, Přemysl; Šrámková, Denisa; Braniš, Martin

    2016-06-01

    The susceptibility of children to polluted air has been pointed out several times in the past. Generally, children suffer from higher exposure to air pollutants than adults because of their higher physical activity, higher metabolic rate and the resultant increase in minute ventilation. The aim of this study was to examine the exposure characteristics of public elementary schools in Prague (the capital of the Czech Republic). The exposure was examined by two different methods: by the proximity of selected schools to major urban roads and by their location within the modeled urban PM10 concentration fields. We determined average daily traffic counts for all roads within 300 m of 251 elementary schools using the national road network database and a geographic information system (GIS), and calculated the proximity of the schools to the roads by means of GIS tools. In the second method, we overlaid the GIS layer of the predicted annual urban PM10 concentration field with that of geocoded school addresses. The results showed that 208 Prague schools (almost 80%) are situated in close proximity (<300 m) to roads exhibiting high traffic loads. Both methods showed good agreement in the proportion of highly exposed schools at risk; however, we found significant differences in the locations of schools at risk determined by the two methods. We argue that results of similar proximity studies should be treated with caution before they are used in a risk-based decision-making process, since different methods may provide different outcomes. Copyright © by the National Institute of Public Health, Prague 2015.

  14. Gravimetric method for in vitro calibration of skin hydration measurements.

    PubMed

    Martinsen, Ørjan G; Grimnes, Sverre; Nilsen, Jon K; Tronstad, Christian; Jang, Wooyoung; Kim, Hongsig; Shin, Kunsoo; Naderi, Majid; Thielmann, Frank

    2008-02-01

    A novel method for in vitro calibration of skin hydration measurements is presented. The method combines gravimetric and electrical measurements and reveals an exponential dependency of the measured electrical susceptance on the absolute water content of the epidermal stratum corneum. The results also show that absorption of water into the stratum corneum exhibits three different phases with significant differences in absorption time constant. These phases probably correspond to bound, loosely bound, and bulk water.

  15. Evaluating firms' R&D performance using best worst method.

    PubMed

    Salimi, Negin; Rezaei, Jafar

    2018-02-01

    Since research and development (R&D) is the most critical determinant of the productivity, growth and competitive advantage of firms, measuring R&D performance has become a core concern of R&D managers, and an extensive body of literature has examined and identified different R&D measurements and determinants of R&D performance. However, measuring R&D performance while assigning the same level of importance to different R&D measures, which is the common approach in existing studies, can oversimplify the R&D measuring process, which may result in misinterpretation of the performance and consequently in fallacious R&D strategies. The aim of this study is to measure R&D performance taking into account the different levels of importance of R&D measures, using a multi-criteria decision-making method called the Best Worst Method (BWM) to identify the weights (importance) of R&D measures and to measure the R&D performance of 50 high-tech SMEs in the Netherlands, using data gathered in a survey among SMEs and from R&D experts. The results show how assigning different weights to different R&D measures (in contrast to a simple mean) results in a different ranking of the firms and allows R&D managers to formulate more effective strategies to improve their firm's R&D performance by applying knowledge regarding the importance of different R&D measures. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
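
    For readers unfamiliar with BWM, its linear variant can be written as a small linear program: given best-to-others comparisons a_B and others-to-worst comparisons a_W, find weights minimizing the largest violation xi of w_best = a_B[j]*w[j] and w[j] = a_W[j]*w_worst. The sketch below follows that standard formulation from the BWM literature and is not taken from the paper itself.

        import numpy as np
        from scipy.optimize import linprog

        def bwm_weights(a_b, a_w, best, worst):
            # Linear BWM: minimize xi subject to |w_best - a_b[j]*w[j]| <= xi
            # and |w[j] - a_w[j]*w_worst| <= xi, with weights summing to one.
            n = len(a_b)
            A_ub, b_ub = [], []
            for j in range(n):
                for sign in (1.0, -1.0):
                    row = np.zeros(n + 1)
                    row[best] += sign
                    row[j] -= sign * a_b[j]
                    row[n] = -1.0            # -xi
                    A_ub.append(row); b_ub.append(0.0)
                    row2 = np.zeros(n + 1)
                    row2[j] += sign
                    row2[worst] -= sign * a_w[j]
                    row2[n] = -1.0
                    A_ub.append(row2); b_ub.append(0.0)
            c = np.append(np.zeros(n), 1.0)             # minimize xi
            A_eq = [np.append(np.ones(n), 0.0)]         # sum of weights = 1
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                          bounds=[(0, None)] * (n + 1))
            return res.x[:n], res.x[n]

        # Example: four measures, best index 1 (a_b[1] == 1), worst index 3.
        # w, xi = bwm_weights([2, 1, 4, 8], [4, 8, 2, 1], best=1, worst=3)

    The resulting weights would then multiply the firms' scores on each R&D measure to produce the weighted performance ranking.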

  16. Solution of transonic flows by an integro-differential equation method

    NASA Technical Reports Server (NTRS)

    Ogana, W.

    1978-01-01

    Solutions of steady transonic flow past a two-dimensional airfoil are obtained from a singular integro-differential equation which involves a tangential derivative of the perturbation velocity potential. Subcritical flows are solved by taking central differences everywhere. For supercritical flows with shocks, central differences are taken in subsonic flow regions and backward differences in supersonic flow regions. The method is applied to a nonlifting parabolic-arc airfoil and to a lifting NACA 0012 airfoil. Results compare favorably with those of finite-difference schemes.
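
    The type-dependent differencing described above (central in subsonic regions, backward in supersonic regions) can be illustrated for a one-dimensional potential array; the function below is a generic sketch, not the paper's scheme.

        import numpy as np

        def mixed_difference(phi, dx, supersonic_mask):
            # Central differences where the flow is subsonic, backward
            # (upwind) differences where it is supersonic, mirroring the
            # treatment of the tangential derivative in the text.
            dphi = np.empty_like(phi)
            for i in range(1, len(phi) - 1):
                if supersonic_mask[i]:
                    dphi[i] = (phi[i] - phi[i - 1]) / dx             # backward
                else:
                    dphi[i] = (phi[i + 1] - phi[i - 1]) / (2 * dx)   # central
            dphi[0] = (phi[1] - phi[0]) / dx       # one-sided at boundaries
            dphi[-1] = (phi[-1] - phi[-2]) / dx
            return dphi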

  17. Comparison of three methods for gastrointestinal nematode diagnosis determination in grazing dairy cattle in relation to milk production.

    PubMed

    Mejía, M E; Perri, A F; Licoff, N; Miglierina, M M; Cseh, S; Ornstein, A M; Becu-Villalobos, D; Lacau-Mengido, I M

    2011-12-29

    The development of resistance to anthelmintic drugs has motivated the search for diagnostic methods to identify animals for targeted selective treatments. We compared three methods for the diagnosis of nematode infection in relation to milk production in a fully grazing dairy herd of 150 cows in the humid Pampa (Argentina). Feces, blood and milk were sampled during the first postpartum month for EPG, pepsinogen and anti-Ostertagia antibody determination, respectively. Based on the results obtained, two groups of cows, with high and low parasite burden, were formed for each method, and milk production was then compared between groups. When cows were separated by the EPG method (EPG=0 (N=106) vs. EPG>0 (N=44)), a difference of nearly 800 l of milk per cow per lactation was found (P<0.05). On the other hand, milk production between groups separated by pepsinogen (mUtyr ≤ 1000 vs. mUtyr > 1000) or by anti-Ostertagia (ODR ≤ 0.5 vs. ODR > 0.5) results did not differ. Interestingly, the proportion of cows in each group differed between methods (P<0.0001), and the anti-Ostertagia method yielded significantly more cows in the high-index group than the EPG or pepsinogen methods. No correlations were found between the parasite indexes determined by the different methods. The high parasite burden estimates may be ascribed to the production system, fully grazing all year round, and to the sampling time, at the beginning of lactation with cows in negative energy balance and depressed immunity. The fact that the cows were born and reared outside, on pasture with continuous exposure to nematode larvae, may also account for the results obtained. In conclusion, EPG counting during the first postpartum month may be a useful tool for diagnosing production impairment induced by high nematode burden in adult grazing dairy cows. Anthelmintic treatment of only the EPG-positive recently calved cows would improve milk production while reducing selective pressure on the nematode population for the development of resistance. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Considerations on methodological challenges for water footprint calculations.

    PubMed

    Thaler, S; Zessner, M; De Lis, F Bertran; Kreuzinger, N; Fehringer, R

    2012-01-01

    We have investigated how different approaches to water footprint (WF) calculations lead to different results, taking sugar beet production and sugar refining as examples. To a large extent, the results obtained from any WF calculation reflect the method used and the assumptions made. Real irrigation data for 59 European sugar beet growing areas showed inadequate estimation of irrigation water when a widely used simple approach was applied: the method resulted in an overestimation of blue water and an underestimation of green water usage. Depending on the chosen (available) water quality standard, the final grey WF can differ by a factor of 10 or more. We conclude that further development and standardisation of the WF is needed to reach comparable and reliable results. A special focus should be on standardisation of the grey WF methodology based on receiving water quality standards.
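
    The sensitivity to the water quality standard follows directly from the standard grey WF definition, in which the chosen maximum acceptable concentration sits in the denominator:

        def grey_water_footprint(load, c_max, c_nat):
            # Grey WF = pollutant load / (quality standard - natural
            # background concentration); load in kg, concentrations in
            # kg/m^3, result in m^3 of dilution water. Shrinking the margin
            # (c_max - c_nat) inflates the footprint proportionally, which is
            # how the choice of standard can shift results by a factor of 10.
            return load / (c_max - c_nat)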

  19. [Comparative study of the population structure and population assignment of sockeye salmon Oncorhynchus nerka from West Kamchatka based on RAPD-PCR and microsatellite polymorphism].

    PubMed

    Zelenina, D A; Khrustaleva, A M; Volkov, A A

    2006-05-01

    Using two types of molecular markers, a comparative analysis of the population structure of sockeye salmon from West Kamchatka as well as population assignment of each individual fish were carried out. The values of a RAPD-PCR-based population assignment test (94-100%) were somewhat higher than those based on microsatellite data (74-84%). However, these results seem quite satisfactory because of high polymorphism of the microsatellite loci examined. The UPGMA dendrograms of genetic similarity of three largest spawning populations, constructed using each of the methods, were highly reliable, which was demonstrated by high bootstrap indices (100% in the case of RAPD-PCR; 84 and 100%, in the case of microsatellite analysis), though the resultant trees differed from one another. The different topology of the trees, in our view, is explained by the fact that the employed methods explored different parts of the genome; hence, the obtained results, albeit valid, may not correlate. Thus, to enhance reliability of the results, several methods of analysis should be used concurrently.

  20. Statistical inference methods for sparse biological time series data.

    PubMed

    Ndukum, Juliet; Fonseca, Luís L; Santos, Helena; Voit, Eberhard O; Datta, Susmita

    2011-04-25

    Comparing metabolic profiles under different biological perturbations has become a powerful approach to investigating the functioning of cells. The profiles can be taken as single snapshots of a system, but more information is gained if they are measured longitudinally over time. The results are short time series consisting of relatively sparse data that cannot be analyzed effectively with standard time series techniques, such as autocorrelation and frequency domain methods. In this work, we study longitudinal time series profiles of glucose consumption in the yeast Saccharomyces cerevisiae under different temperatures and preconditioning regimens, which we obtained with methods of in vivo nuclear magnetic resonance (NMR) spectroscopy. For the statistical analysis we first fit several nonlinear mixed effect regression models to the longitudinal profiles and then used an ANOVA likelihood ratio method in order to test for significant differences between the profiles. The proposed methods are capable of distinguishing metabolic time trends resulting from different treatments and associate significance levels to these differences. Among several nonlinear mixed-effects regression models tested, a three-parameter logistic function represents the data with highest accuracy. ANOVA and likelihood ratio tests suggest that there are significant differences between the glucose consumption rate profiles for cells that had been--or had not been--preconditioned by heat during growth. Furthermore, pair-wise t-tests reveal significant differences in the longitudinal profiles for glucose consumption rates between optimal conditions and heat stress, optimal and recovery conditions, and heat stress and recovery conditions (p-values <0.0001). We have developed a nonlinear mixed effects model that is appropriate for the analysis of sparse metabolic and physiological time profiles. The model permits sound statistical inference procedures, based on ANOVA likelihood ratio tests, for testing the significance of differences between short time course data under different biological perturbations.
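
    A minimal sketch of the modelling step, assuming the three-parameter logistic form and i.i.d. Gaussian errors (the paper's full mixed-effects structure is not reproduced here):

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import chi2

        def logistic3(t, a, k, t0):
            # Three-parameter logistic profile, the best-fitting family here.
            return a / (1.0 + np.exp(-k * (t - t0)))

        def lr_test(rss_null, rss_alt, n, extra_params):
            # Likelihood-ratio test for nested least-squares fits under the
            # Gaussian-error assumption: LR = n * ln(RSS_null / RSS_alt).
            return chi2.sf(n * np.log(rss_null / rss_alt), extra_params)

        # Fit a sparse, short profile (synthetic stand-in for an NMR series).
        t = np.linspace(0, 10, 8)
        y = logistic3(t, 5.0, 1.2, 4.0) + 0.1 * np.random.randn(t.size)
        params, _ = curve_fit(logistic3, t, y, p0=[y.max(), 1.0, t.mean()])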

  1. Comparison of updated Lagrangian FEM with arbitrary Lagrangian Eulerian method for 3D thermo-mechanical extrusion of a tube profile

    NASA Astrophysics Data System (ADS)

    Kronsteiner, J.; Horwatitsch, D.; Zeman, K.

    2017-10-01

    Thermo-mechanical numerical modelling and simulation of extrusion processes face several serious challenges. Large plastic deformations, in combination with strong coupling of thermal and mechanical effects, lead to high numerical demands for the solution as well as for the handling of mesh distortions. The two numerical methods presented in this paper also reflect two different ways of dealing with mesh distortions. Lagrangian Finite Element Methods (FEM) tackle distorted elements by building a new mesh (re-meshing), whereas Arbitrary Lagrangian Eulerian (ALE) methods use an "advection" step to remap the solution from the distorted to the undistorted mesh. Another difference between conventional Lagrangian and ALE methods is the separate treatment of material and mesh in ALE, which allows the definition of individual velocity fields. In theory, an ALE formulation contains both the Eulerian and the Lagrangian descriptions of the material as subsets. The investigations presented in this paper dealt with the direct extrusion of a tube profile using EN-AW 6082 aluminum alloy and a comparison of experimental with Lagrangian and ALE results. The numerical simulations cover billet upsetting and continue until one-third of the billet length has been extruded. A good qualitative correlation between experimental and numerical results was found; however, major differences between the Lagrangian and ALE methods concerning thermo-mechanical coupling lead to deviations in the thermal results.

  2. Comparative analysis of different survey methods for monitoring fish assemblages in coastal habitats.

    PubMed

    Baker, Duncan G L; Eddy, Tyler D; McIver, Reba; Schmidt, Allison L; Thériault, Marie-Hélène; Boudreau, Monica; Courtenay, Simon C; Lotze, Heike K

    2016-01-01

    Coastal ecosystems are among the most productive yet increasingly threatened marine ecosystems worldwide. Vegetated habitats in particular, such as eelgrass (Zostera marina) beds, play important roles in providing key spawning, nursery and foraging habitats for a wide range of fauna. To properly assess changes in coastal ecosystems and manage these critical habitats, it is essential to develop sound monitoring programs for foundation species and associated assemblages. Several survey methods exist, so understanding how different methods perform is important for survey selection. We compared two common methods for surveying macrofaunal assemblages: beach seine netting and underwater visual census (UVC). We also tested whether assemblages in shallow nearshore habitats commonly sampled by beach seines are similar to those of nearby eelgrass beds often sampled by UVC. Across five estuaries along the Southern Gulf of St. Lawrence, Canada, our results suggest that the two survey methods yield comparable results for species richness, diversity and evenness, yet beach seines yield significantly higher abundance and a different species composition. However, sampling nearshore assemblages does not represent those in eelgrass beds, despite considerable overlap and close proximity. These results have important implications for how and where macrofaunal assemblages are monitored in coastal ecosystems. Ideally, multiple survey methods and locations should be combined to complement each other in assessing the entire assemblage and the full range of changes in coastal ecosystems, thereby better informing coastal zone management.

  3. Assessing response of sediment load variation to climate change and human activities with six different approaches.

    PubMed

    Zhao, Guangju; Mu, Xingmin; Jiao, Juying; Gao, Peng; Sun, Wenyi; Li, Erhui; Wei, Yanhong; Huang, Jiacong

    2018-05-23

    Understanding the relative contributions of climate change and human activities to variations in sediment load is of great importance for regional soil and river basin management. Numerous studies have investigated the spatial-temporal variation of sediment load within the Loess Plateau; however, contradictory findings exist among the methods used. This study systematically reviewed six quantitative methods: simple linear regression, double mass curve, sediment identity factor analysis, the dam-sedimentation-based method, the Sediment Delivery Distributed (SEDD) model, and the Soil and Water Assessment Tool (SWAT) model. The calculation procedures and merits of each method are systematically explained. A case study in the Huangfuchuan watershed on the northern Loess Plateau was undertaken. The results showed that sediment load was reduced by 70.5% during the changing period from 1990 to 2012 compared with the baseline period from 1955 to 1989. Human activities accounted for an average of 93.6 ± 4.1% of the total decline in sediment load, whereas climate change contributed 6.4 ± 4.1%. Five methods produced similar estimates, but the linear regression yielded relatively different results. The results of this study provide a good reference for assessing the effects of climate change and human activities on sediment load variation using different methods. Copyright © 2018. Published by Elsevier B.V.
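
    Most of the listed approaches share one attribution step: extend a baseline-period relationship into the changing period and split the observed change accordingly. The following is a generic sketch of that step, not an implementation of any one of the six methods.

        def attribute_change(obs_change, climate_predicted_change):
            # obs_change: observed change in sediment load between the
            # changing and baseline periods; climate_predicted_change: the
            # change a baseline-calibrated climate relationship would predict.
            # The residual, unexplained by climate, is ascribed to humans.
            human = obs_change - climate_predicted_change
            return {"climate_%": 100 * climate_predicted_change / obs_change,
                    "human_%": 100 * human / obs_change}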

  4. The Teacher as Researcher: An Experimental Approach toward Teaching in the College Classroom and Beyond.

    ERIC Educational Resources Information Center

    DeMarie, Darlene

    Noting that there are far too many variables ever to have the same teaching results with different people in different classes in different historical times and places, this paper describes methods for helping educational psychology students to learn to assess systematically the results of teaching. First, making one's thinking overt helps…

  5. Comparing deflection measurements of a magnetically steerable catheter using optical imaging and MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lillaney, Prasheel, E-mail: Prasheel.Lillaney@ucsf.edu; Caton, Curtis; Martin, Alastair J.

    2014-02-15

    Purpose: Magnetic resonance imaging (MRI) is an emerging modality for interventional radiology, giving clinicians another tool for minimally invasive image-guided interventional procedures. Difficulties associated with endovascular catheter navigation using MRI guidance led to the development of a magnetically steerable catheter. The focus of this study was to mechanically characterize deflections of two different prototypes of the magnetically steerable catheter in vitro to better understand their efficacy. Methods: A mathematical model for deflection of the magnetically steerable catheter is formulated based on the principle that at equilibrium the mechanical and magnetic torques are equal to each other. Furthermore, two different image-based methods for empirically measuring the catheter deflection angle are presented. The first, referred to as the absolute tip method, measures the angle of the line that is tangential to the catheter tip. The second, referred to as the base to tip method, is an approximation that is used when it is not possible to measure the angle of the tangent line. Optical images of the catheter deflection are analyzed using the absolute tip method to quantitatively validate the predicted deflections from the mathematical model. Optical images of the catheter deflection are also analyzed using the base to tip method to quantitatively determine the differences between the absolute tip and base to tip methods. Finally, the optical images are compared to MR images using the base to tip method to determine the accuracy of measuring the catheter deflection using MR. Results: The optical catheter deflection angles measured for both catheter prototypes using the absolute tip method fit very well to the mathematical model (R² = 0.91 and 0.86 for each prototype, respectively). It was found that the angles measured using the base to tip method were consistently smaller than those measured using the absolute tip method. The deflection angles measured using optical data did not demonstrate a significant difference from the angles measured using MR image data when compared using the base to tip method. Conclusions: This study validates the theoretical description of the magnetically steerable catheter, while also giving insight into different methods and modalities for measuring the deflection angles of the prototype catheters. These results can be used to mechanically model future iterations of the design. Quantifying the difference between the different methods for measuring catheter deflection will be important when making deflection measurements in future studies. Finally, MR images can be used to reliably measure deflection angles since there is no significant difference between the MR and optical measurements.
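
    The torque-balance principle can be made concrete with a simple model, assuming a linear elastic restoring torque; the published model may use a different elastic term, so this is only a sketch.

        import math
        from scipy.optimize import brentq

        def deflection_angle(m, B, k, phi):
            # Equilibrium of the catheter tip, assuming restoring torque
            # k*theta and magnetic torque m*B*sin(phi - theta), where phi is
            # the field angle (valid for 0 < phi < pi). At theta = 0 the
            # magnetic torque dominates and at theta = phi the elastic torque
            # does, so the root is bracketed between them.
            f = lambda theta: k * theta - m * B * math.sin(phi - theta)
            return brentq(f, 0.0, phi)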

  6. Assessment of Intrathecal Free Light Chain Synthesis: Comparison of Different Quantitative Methods with the Detection of Oligoclonal Free Light Chains by Isoelectric Focusing and Affinity-Mediated Immunoblotting

    PubMed Central

    Kušnierová, Pavlína; Švagera, Zdeněk; Všianský, František; Byrtusová, Monika; Hradílek, Pavel; Kurková, Barbora; Zapletalová, Olga; Bartoš, Vladimír

    2016-01-01

    Objectives We aimed to compare various methods for free light chain (fLC) quantitation in cerebrospinal fluid (CSF) and serum and to determine whether quantitative CSF measurements could reliably predict intrathecal fLC synthesis. In addition, we wished to determine the relationship between free kappa and free lambda light chain concentrations in CSF and serum in various disease groups. Methods We analysed 166 paired CSF and serum samples by at least one of the following methods: turbidimetry (Freelite™, SPAPLUS), nephelometry (N Latex FLC™, BN ProSpec), and two different (commercially available and in-house developed) sandwich ELISAs. The results were compared with oligoclonal fLC detected by affinity-mediated immunoblotting after isoelectric focusing. Results Although the correlations between quantitative methods were good, both proportional and systematic differences were discerned. However, no major differences were observed in the prediction of positive oligoclonal fLC test. Surprisingly, CSF free kappa/free lambda light chain ratios were lower than those in serum in about 75% of samples with negative oligoclonal fLC test. In about a half of patients with multiple sclerosis and clinically isolated syndrome, profoundly increased free kappa/free lambda light chain ratios were found in the CSF. Conclusions Our results show that using appropriate method-specific cut-offs, different methods of CSF fLC quantitation can be used for the prediction of intrathecal fLC synthesis. The reason for unexpectedly low free kappa/free lambda light chain ratios in normal CSFs remains to be elucidated. Whereas CSF free kappa light chain concentration is increased in most patients with multiple sclerosis and clinically isolated syndrome, CSF free lambda light chain values show large interindividual variability in these patients and should be investigated further for possible immunopathological and prognostic significance. PMID:27846293

  7. The feasibility of using explicit method for linear correction of the particle size variation using NIR Spectroscopy combined with PLS2 regression method

    NASA Astrophysics Data System (ADS)

    Yulia, M.; Suhandy, D.

    2018-03-01

    NIR spectra obtained from a spectral data acquisition system contain both chemical information on the samples and physical information, such as particle size and bulk density. Several methods have been established for developing calibration models that can compensate for variations in the physical properties of samples. One common approach is to include the physical variation in the calibration model either explicitly or implicitly. The objective of this study was to evaluate the feasibility of using the explicit method to compensate for the influence of different particle sizes of coffee powder on NIR calibration model performance. A total of 220 coffee powder samples with two different types of coffee (civet and non-civet) and two different particle sizes (212 and 500 µm) were prepared. Spectral data were acquired using an NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement. A discrimination method based on PLS-DA was conducted and the influence of different particle sizes on the performance of PLS-DA was investigated. In the explicit method, the particle size is added directly as a predicted variable, resulting in an X block containing only the NIR spectra and a Y block containing the particle size and the type of coffee. The explicit inclusion of the particle size in the calibration model is expected to improve the accuracy of coffee type determination. The results show that with the explicit method the quality of the developed calibration model for coffee type determination is slightly superior, with a coefficient of determination (R²) of 0.99 and a root mean square error of cross-validation (RMSECV) of 0.041. The performance of the PLS2 calibration model for coffee type determination with particle size compensation was quite good, predicting the type of coffee at both particle sizes with relatively high predictive R² values. The prediction also resulted in low bias and RMSEP values.
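
    In scikit-learn terms, the explicit strategy amounts to fitting a PLS2 model with a two-column Y block; the file names below are placeholders for the coffee dataset described above, not files from the study.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # Sketch of the explicit strategy: Y holds both the coffee type and
        # the particle size, so the PLS2 model must account for particle-size
        # variation while discriminating the type.
        X = np.load("nir_spectra.npy")            # hypothetical spectra, n x p
        coffee_type = np.load("coffee_type.npy")  # 0 = non-civet, 1 = civet
        particle_um = np.load("particle_um.npy")  # 212 or 500

        Y = np.column_stack([coffee_type, particle_um])
        pls2 = PLSRegression(n_components=10).fit(X, Y)
        type_pred = (pls2.predict(X)[:, 0] > 0.5).astype(int)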

  8. Pressure-Sensitive Paint: Effect of Substrate

    PubMed Central

    Quinn, Mark Kenneth; Yang, Leichao; Kontis, Konstantinos

    2011-01-01

    There are numerous ways in which pressure-sensitive paint can be applied to a surface, and the choice of substrate and application method can greatly affect the results obtained. The current study examines different methods of applying pressure-sensitive paint to a surface. One polymer-based and two porous substrates (anodized aluminum and thin-layer chromatography plates) are investigated and compared for luminescent output, pressure sensitivity, temperature sensitivity and photodegradation. Two luminophores [tris-Bathophenanthroline Ruthenium(II) Perchlorate and Platinum-tetrakis (pentafluorophenyl) Porphyrin] are also compared in all three substrates. The results show the applicability of the different substrates and luminophores to different testing environments. PMID:22247685

  9. DNA extraction from coral reef sediment bacteria for the polymerase chain reaction.

    PubMed

    Guthrie, J N; Moriarty, D J; Blackall, L L

    2000-12-15

    A rapid and effective method for the direct extraction of high molecular weight amplifiable DNA from two coral reef sediments was developed. DNA was amplified by the polymerase chain reaction (PCR) using 16S rDNA specific primers. The amplicons were digested with HaeIII, HinP1I and MspI and separated using polyacrylamide gel electrophoresis and silver staining. The resulting amplified ribosomal DNA restriction analysis (ARDRA) patterns were used as a fingerprint to discern differences between the coral reef sediment samples. Results indicated that ARDRA is an effective method for determining differences within the bacterial community amongst different environmental samples.

  10. Two pass method and radiation interchange processing when applied to thermal-structural analysis of large space truss structures

    NASA Technical Reports Server (NTRS)

    Warren, Andrew H.; Arelt, Joseph E.; Lalicata, Anthony L.; Rogers, Karen M.

    1993-01-01

    A method for efficient and automated thermal-structural processing of very large space structures is presented. The method interfaces the finite element and finite difference techniques. It also yields a pronounced reduction in the amount of computation, computer resources and manpower required for the task, while assuring the desired accuracy of the results.

  11. Comparing 3D foot scanning with conventional measurement methods.

    PubMed

    Lee, Yu-Chi; Lin, Gloria; Wang, Mao-Jiun J

    2014-01-01

    Foot dimension information on different user groups is important for footwear design and clinical applications. Foot dimension data collected using different measurement methods present accuracy problems. This study compared the precision and accuracy of the 3D foot scanning method with conventional foot dimension measurement methods, including the digital caliper, ink footprint and digital footprint. Six commonly used foot dimensions, i.e. foot length, ball of foot length, outside ball of foot length, foot breadth diagonal, foot breadth horizontal and heel breadth, were measured from 130 males and females using the four foot measurement methods. Two-way ANOVA was performed to evaluate the effects of sex and method on the measured foot dimensions. In addition, mean absolute difference values and intra-class correlation coefficients (ICCs) were used for precision and accuracy evaluation. The results were also compared with the ISO 20685 criteria. The participants' sex and the measurement method were found to exert significant effects on the six measured foot dimensions (p < 0.05). The precision of the 3D scanning measurement method, with mean absolute difference values between 0.73 and 1.50 mm, showed the best performance among the four measurement methods. The 3D scanning measurements also showed better accuracy than the other methods (mean absolute difference 0.6 to 4.3 mm), except for measuring outside ball of foot length and foot breadth horizontal. The ICCs for all six foot dimension measurements among the four measurement methods were within the 0.61 to 0.98 range. Overall, the 3D foot scanner is recommended for collecting foot anthropometric data because it has relatively higher precision, accuracy and robustness. This finding suggests that when comparing foot anthropometric data among different references, it is important to consider the differences caused by the different measurement methods.

  12. On the Equivalence of FCS and FRAP: Simultaneous Lipid Membrane Measurements.

    PubMed

    Macháň, Radek; Foo, Yong Hwee; Wohland, Thorsten

    2016-07-12

    Fluorescence correlation spectroscopy (FCS) and fluorescence recovery after photobleaching (FRAP) are widely used methods to determine diffusion coefficients. However, they often do not yield the same results. With the advent of camera-based imaging FCS, which measures the diffusion coefficient in each pixel of an image, and proper bleaching corrections, it is now possible to measure the diffusion coefficient by FRAP and FCS in the exact same images. We thus performed simultaneous FCS and FRAP measurements on supported lipid bilayers and live cell membranes to test how far the two methods differ in their results and whether the methodological differences, in particular the high bleach intensity in FRAP, the bleach corrections, and the fitting procedures in the two methods explain observed differences. Overall, we find that the FRAP bleach intensity does not measurably influence the diffusion in the samples, but that bleach correction and fitting introduce large uncertainties in FRAP. We confirm our results by simulations. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  13. Psychological traits underlying different killing methods among Malaysian male murderers.

    PubMed

    Kamaluddin, Mohammad Rahim; Shariff, Nadiah Syariani; Nurfarliza, Siti; Othman, Azizah; Ismail, Khaidzir H; Mat Saat, Geshina Ayu

    2014-04-01

    Murder is the most notorious crime, violating religious, social and cultural norms. Examining the types and number of different killing methods used is pivotal in a murder case. However, the psychological traits underlying specific and multiple killing methods are still understudied. The present study attempts to fill this gap by identifying the psychological traits underlying different killing methods among Malaysian murderers. The study adopted an observational cross-sectional methodology using a guided self-administered questionnaire for data collection. The sampling frame consisted of 71 Malaysian male murderers from 11 Malaysian prisons, selected using a purposive sampling method. The participants were asked to provide the types and number of different killing methods used to kill their respective victims. An independent-sample t-test was performed to establish the mean score differences in psychological traits between murderers who used single and multiple types of killing methods. Kruskal-Wallis tests were carried out to ascertain psychological trait differences between specific types of killing methods. The results suggest that specific psychological traits underlie the type and number of different killing methods used during murder. The majority (88.7%) of murderers used a single method of killing. Multiple methods of killing were evident in 'premeditated' murder compared to 'passion' murder, and revenge was a common motive. Examples of multiple methods are combinations of stabbing and strangulation or slashing and physical force. An exception was premeditated murder committed by shooting, which was usually a single method, attributed to the high lethality of firearms. Shooting was also notable when the motive was financial gain or related to drug dealing. Murderers who used multiple killing methods were more aggressive and sadistic than those who used a single killing method. Those who used multiple methods or slashing also displayed a higher level of minimisation traits. Despite its limitations, this study sheds some light on the psychological traits underlying different killing methods, which is useful in the field of criminology.

  14. Solving the MHD equations by the space time conservation element and solution element method

    NASA Astrophysics Data System (ADS)

    Zhang, Moujin; John Yu, S.-T.; Henry Lin, S.-C.; Chang, Sin-Chung; Blankson, Isaiah

    2006-05-01

    We apply the space-time conservation element and solution element (CESE) method to solve the ideal MHD equations with special emphasis on satisfying the divergence-free constraint of the magnetic field, i.e., ∇ · B = 0. In the setting of the CESE method, four approaches are employed: (i) the original CESE method without any additional treatment, (ii) a simple corrector procedure to update the spatial derivatives of the magnetic field B after each time marching step to enforce ∇ · B = 0 at all mesh nodes, (iii) a constrained-transport method using a special staggered mesh to calculate the magnetic field B, and (iv) the projection method, which solves a Poisson equation after each time marching step. To demonstrate the capabilities of these methods, two benchmark MHD flows are calculated: (i) a rotated one-dimensional MHD shock tube problem and (ii) an MHD vortex problem. The results show no differences between the different approaches, and all results compare favorably with previously reported data.
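
    Approach (iv) is the classical projection (divergence-cleaning) step; on a periodic grid it can be sketched spectrally as below. This is a generic illustration, not the CESE implementation.

        import numpy as np

        def project_divergence_free(bx, by, dx):
            # Solve the Poisson equation laplacian(phi) = div(B) in Fourier
            # space, then subtract grad(phi) so that div(B) = 0 exactly on
            # the periodic grid. bx, by: float arrays of equal shape.
            kx = 2j * np.pi * np.fft.fftfreq(bx.shape[0], d=dx)
            ky = 2j * np.pi * np.fft.fftfreq(bx.shape[1], d=dx)
            KX, KY = np.meshgrid(kx, ky, indexing="ij")
            div_hat = KX * np.fft.fft2(bx) + KY * np.fft.fft2(by)
            k2 = KX ** 2 + KY ** 2
            k2[0, 0] = 1.0                 # avoid dividing the mean mode by 0
            phi_hat = div_hat / k2
            bx -= np.real(np.fft.ifft2(KX * phi_hat))
            by -= np.real(np.fft.ifft2(KY * phi_hat))
            return bx, by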

  15. Homogenization versus homogenization-free method to measure muscle glycogen fractions.

    PubMed

    Mojibi, N; Rasouli, M

    2016-12-01

    Glycogen is extracted from animal tissues with or without homogenization using cold perchloric acid. Three methods were compared for the determination of glycogen in rat muscle at different physiological states. Two groups of five rats were kept at rest or subjected to 45 minutes of muscular activity. The glycogen fractions were extracted and measured using the three methods. The data from the homogenization method show that total glycogen decreased following 45 min of physical activity and that the change occurred entirely in acid-soluble glycogen (ASG), while acid-insoluble glycogen (AIG) did not change significantly. Similar results were obtained using the "total-glycogen-fractionation" method. The findings of the "homogenization-free" method indicate that AIG was the main portion of muscle glycogen and that the majority of changes occurred in the AIG fraction. The results of the "homogenization" method are identical to those of "total glycogen fractionation" but differ from the "homogenization-free" protocol. The ASG fraction is the major portion of muscle glycogen and is the more metabolically active form.

  16. The effect of interview method on self-reported sexual behavior and perceptions of community norms in Botswana.

    PubMed

    Anglewicz, Philip; Gourvenec, Diana; Halldorsdottir, Iris; O'Kane, Cate; Koketso, Obakeng; Gorgens, Marelize; Kasper, Toby

    2013-02-01

    Since self-reports of sensitive behaviors play an important role in HIV/AIDS research, the accuracy of these measures has often been examined. In this paper we (1) examine the effect of three survey interview methods on self-reported sexual behavior and perceptions of community sexual norms in Botswana, and (2) introduce an interview method to research on self-reported sexual behavior in sub-Saharan Africa. Comparing across these three survey methods (face-to-face, ballot box, and randomized response), we find that ballot box and randomized response surveys both provide higher reports of sensitive behaviors; the results for randomized response are particularly strong. Within these overall patterns, however, there is variation by question type; additionally the effect of interview method differs by sex. We also examine interviewer effects to gain insight into the effectiveness of these interview methods, and our results suggest that caution be used when interpreting the differences between survey methods.
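
    Randomized response designs recover population proportions from deliberately noisy answers. Under Warner's classic design (assumed here for illustration; the Botswana survey may have used a different randomizing device), the estimator is:

        def warner_estimate(yes_fraction, p):
            # Warner's randomized response: each respondent answers about the
            # sensitive trait with probability p and about its complement with
            # probability 1 - p, so the true prevalence is estimated as
            # pi_hat = (lambda_hat - (1 - p)) / (2p - 1), with p != 0.5.
            return (yes_fraction - (1.0 - p)) / (2.0 * p - 1.0)

        # e.g. with p = 0.7 and 46% "yes" answers: (0.46 - 0.3) / 0.4 = 0.40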

  17. Assessing the accuracy of cranial and pelvic ageing methods on human skeletal remains from a modern Greek assemblage.

    PubMed

    Xanthopoulou, Panagiota; Valakos, Efstratios; Youlatos, Dionisios; Nikita, Efthymia

    2018-05-01

    The present study tests the accuracy of commonly adopted ageing methods based on the morphology of the pubic symphysis, auricular surface and cranial sutures. These methods are examined both in their traditional form and in the context of transition analysis using the ADBOU software, in a modern Greek documented collection consisting of 140 individuals who lived mainly in the second half of the twentieth century and come from cemeteries in the area of Athens. The auricular surface overall produced the most accurate age estimates in our material, with different methods based on this anatomical area showing varying degrees of success for different age groups. The pubic symphysis produced accurate results primarily for young adults, and the same applied to cranial sutures, but the latter appeared completely inappropriate for older individuals. The use of transition analysis through the ADBOU software provided less accurate results than the corresponding traditional ageing methods in our sample. Our results are in agreement with those obtained from validation studies based on material from across the world, but certain differences identified with other studies on Greek material highlight the importance of taking into account intra- and inter-population variability in age estimation. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Establishing a method of short-term rainfall forecasting based on GNSS-derived PWV and its application.

    PubMed

    Yao, Yibin; Shan, Lulu; Zhao, Qingzhi

    2017-09-29

    Global Navigation Satellite System (GNSS) observations can retrieve precipitable water vapor (PWV) with high precision and high temporal resolution, and GNSS-derived PWV reflects water vapor variation during strong convective weather. Studying the relationship between time-varying PWV and rainfall shows that PWV increases sharply before rain. A short-term rainfall forecasting method based on GNSS-derived PWV is therefore proposed. The method is validated using hourly GNSS-PWV data from the Zhejiang Continuously Operating Reference Station (CORS) network for the period 1 September 2014 to 31 August 2015, together with the corresponding hourly rainfall records. The results show that the correct forecast rate reaches about 80%, while the false alarm rate is about 66%. Compared with previous studies, the correct rate is improved by about 7%, and the false alarm rate is comparable. The method is also applied to three other actual rainfall events of different regions, durations, and types. The results show that the method has good applicability and accuracy, can be used for rainfall forecasting and, in future work, could be assimilated with traditional weather forecasting techniques to improve forecast accuracy.
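
    The paper does not spell out how its scores are defined, but correct (hit) rates and false alarm rates of this kind are conventionally computed from a 2x2 contingency table of forecasts against observations. A minimal sketch using the standard definitions (an assumption, not the authors' code):

        def verification_scores(forecast, observed):
            # forecast / observed: parallel sequences of booleans
            # (rain forecast for the interval / rain actually occurred).
            pairs = list(zip(forecast, observed))
            hits = sum(1 for f, o in pairs if f and o)
            misses = sum(1 for f, o in pairs if not f and o)
            false_alarms = sum(1 for f, o in pairs if f and not o)
            correct_rate = hits / (hits + misses)                     # events detected
            false_alarm_rate = false_alarms / (hits + false_alarms)  # alarms that were wrong
            return correct_rate, false_alarm_rate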

  19. Comparison of Arterial Spin-labeling Perfusion Images at Different Spatial Normalization Methods Based on Voxel-based Statistical Analysis.

    PubMed

    Tani, Kazuki; Mio, Motohira; Toyofuku, Tatsuo; Kato, Shinichi; Masumoto, Tomoya; Ijichi, Tetsuya; Matsushima, Masatoshi; Morimoto, Shoichi; Hirata, Takumi

    2017-01-01

    Spatial normalization is an important image pre-processing step in statistical parametric mapping (SPM) analysis. The purpose of this study was to identify the optimal spatial normalization method for improving diagnostic accuracy in SPM analysis of arterial spin-labeling (ASL) perfusion images. We evaluated the SPM results of five spatial normalization methods by comparing patients with Alzheimer's disease or normal-pressure hydrocephalus complicated by dementia against cognitively healthy subjects. The methods were: 3DT1-conventional, based on spatial normalization of anatomical images; 3DT1-DARTEL, based on spatial normalization of anatomical images with DARTEL; the 3DT1-conventional template and 3DT1-DARTEL template, created by averaging cognitively healthy subjects spatially normalized with the above methods; and an ASL-DARTEL template, created by averaging cognitively healthy subjects spatially normalized with DARTEL using ASL images only. The ASL-DARTEL template was smaller than the other two templates, and the SPM results obtained with it were inaccurate. There were no significant differences between the 3DT1-conventional and 3DT1-DARTEL template methods. In contrast, the 3DT1-DARTEL method showed higher detection sensitivity and more precise anatomical localization. Our SPM results suggest that spatial normalization should be performed with DARTEL using anatomical images.

  20. Comparison of indoor air sampling and dust collection methods for fungal exposure assessment using quantitative PCR

    EPA Science Inventory

    Evaluating fungal contamination indoors is complicated because of the many different sampling methods utilized. In this study, fungal contamination was evaluated using five sampling methods and four matrices for results. The five sampling methods were a 48 hour indoor air sample ...

  1. Assessing Equating Results on Different Equating Criteria

    ERIC Educational Resources Information Center

    Tong, Ye; Kolen, Michael

    2005-01-01

    The performance of three equating methods--the presmoothed equipercentile method, the item response theory (IRT) true score method, and the IRT observed score method--was examined based on three equating criteria: the same distributions property, the first-order equity property, and the second-order equity property. The magnitude of the…

  2. [Spectral scatter correction of coal samples based on quasi-linear local weighted method].

    PubMed

    Lei, Meng; Li, Ming; Ma, Xiao-Ping; Miao, Yan-Zi; Wang, Jian-Sheng

    2014-07-01

    The present paper puts forward a new spectral correction method based on quasi-linear expressions and a local weighted function. The first stage of the method is to select among three quasi-linear expressions (quadratic, cubic and growth curve) to replace the original linear expression in the MSC method. The local weighted function is then constructed from one of four kernel functions: the Gaussian, Epanechnikov, Biweight and Triweight kernels. After adding this function to the basic estimation equation, the dependency between the original and ideal spectra is described more accurately and in finer detail at each wavelength point. Furthermore, two analytical models were established, based respectively on PLS and on a PCA-BP neural network, to assess the accuracy of the corrected spectra. Finally, the optimal correction mode was determined from the analytical results for the different combinations of quasi-linear expression and local weighted function. Spectra of the same coal sample have different noise ratios when the sample is prepared at different particle sizes; to validate the effectiveness of the method, the experiment analyzed the correction results of three spectral data sets with particle sizes of 0.2, 1 and 3 mm. The results show that the proposed method can eliminate the scattering influence and also enhance the information in the spectral peaks. The method significantly enhances the correlation between corrected spectra and coal qualities, and substantially improves the accuracy and stability of the analytical model.
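
    For reference, the four kernels named above have standard textbook forms; a short sketch (the normalization used in the paper is not given, so these are the conventional definitions, with u the scaled distance between wavelength points):

        import numpy as np

        def gaussian(u):
            return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

        def epanechnikov(u):
            return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

        def biweight(u):
            return np.where(np.abs(u) <= 1, (15.0 / 16.0) * (1 - u**2)**2, 0.0)

        def triweight(u):
            return np.where(np.abs(u) <= 1, (35.0 / 32.0) * (1 - u**2)**3, 0.0)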

  3. Measuring signal-to-noise ratio in partially parallel imaging MRI

    PubMed Central

    Goerner, Frank L.; Clarke, Geoffrey D.

    2011-01-01

    Purpose: To assess five different methods of signal-to-noise ratio (SNR) measurement for partially parallel imaging (PPI) acquisitions. Methods: Measurements were performed on a spherical phantom and three volunteers using a multichannel head coil on a clinical 3T MRI system to produce echo planar, fast spin echo, gradient echo, and balanced steady state free precession image acquisitions. Two different PPI acquisitions, the generalized autocalibrating partially parallel acquisition algorithm and modified sensitivity encoding, with acceleration factors (R) of 2–4, were evaluated and compared to nonaccelerated acquisitions. Five standard SNR measurement techniques were investigated, and Bland–Altman analysis was used to determine agreement between the various SNR methods. The g-factor values estimated for each combination of SNR calculation method and PPI reconstruction method were also assessed for the effects on SNR of reconstruction method, phase encoding direction, and R-value. Results: Only two SNR measurement methods produced g-factors in agreement with theoretical expectations (g ≥ 1). Bland–Altman tests demonstrated that these two methods also gave the most similar results relative to the other three measurements. R-value was the only factor of the three considered that showed a significant influence on SNR changes. Conclusions: Non-signal methods used in SNR evaluation do not produce results consistent with expectations in the investigated PPI protocols. Two of the methods studied provided the most accurate and useful results; of these, the image subtraction method is recommended for SNR calculations when evaluating PPI protocols, due to its relative accuracy and ease of implementation. PMID:21978049
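
    As an illustration of the recommended approach, one common formulation of the two-acquisition subtraction method (a sketch of the general technique, not the study's exact protocol) is:

        import numpy as np

        def snr_subtraction(img1, img2, roi):
            # img1, img2: two identically acquired images; roi: boolean mask
            # over a uniform signal region. Signal is the ROI mean of the
            # averaged image; noise is the ROI std of the difference image
            # divided by sqrt(2), since subtraction doubles the noise variance.
            signal = 0.5 * (img1[roi] + img2[roi]).mean()
            noise = (img1[roi] - img2[roi]).std(ddof=1) / np.sqrt(2.0)
            return signal / noise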

  4. Qualitative approaches to use of the RE-AIM framework: rationale and methods.

    PubMed

    Holtrop, Jodi Summers; Rabin, Borsika A; Glasgow, Russell E

    2018-03-13

    There have been over 430 publications using the RE-AIM model for planning and evaluation of health programs and policies, as well as numerous applications of the model in grant proposals and national programs. Full use of the model includes qualitative methods to understand why and how results were obtained on different RE-AIM dimensions; however, recent reviews have revealed that qualitative methods have been used infrequently. Having quantitative and qualitative methods and results iteratively inform each other should enhance understanding and lessons learned. Because there have been few published examples of qualitative approaches and methods using RE-AIM for planning or assessment, and no guidance on how qualitative approaches can inform these processes, we provide guidance on qualitative methods to address the RE-AIM model and its various dimensions. The intended audience is researchers interested in applying RE-AIM or similar implementation models, but the methods discussed should also be relevant to those in community or clinical settings. We present directions for, examples of, and guidance on how qualitative methods can be used to address each of the five RE-AIM dimensions. Formative qualitative methods can be helpful in planning interventions and designing for dissemination. Summative qualitative methods are useful when used in an iterative, mixed methods approach for understanding how and why different patterns of results occur. In summary, qualitative and mixed methods approaches to RE-AIM help understand complex situations and results, why and how outcomes were obtained, and contextual factors not easily assessed using quantitative measures.

  5. Amperometric Enzyme Sensor to Check the Total Antioxidant Capacity of Several Mixed Berries. Comparison with Two Other Spectrophotometric and Fluorimetric Methods

    PubMed Central

    Tomassetti, Mauro; Serone, Maruschka; Angeloni, Riccardo; Campanella, Luigi; Mazzone, Elisa

    2015-01-01

    The aim of this research was to test the correctness of response of a superoxide dismutase amperometric biosensor used to measure and rank the total antioxidant capacity of several systematically analysed mixed berries. Several methods for determining antioxidant capacity are described in the literature, each culminating in the construction of an antioxidant capacity scale and each using its own unit of measurement. We therefore endeavoured to correlate and compare the results obtained using the present amperometric biosensor method with those of two other methods for determining total antioxidant capacity, selected from among those most frequently cited in the literature. The purpose was to establish a methodological approach consisting of the simultaneous application of different methods, which could be used to obtain an accurate estimate of the total antioxidant capacity of different mixed berries and of the food products containing them. Testing was therefore extended to cover jams, yoghurts and juices containing mixed berries. PMID:25654720

  6. Development and Validation of New Discriminative Dissolution Method for Carvedilol Tablets

    PubMed Central

    Raju, V.; Murthy, K. V. R.

    2011-01-01

    The objective of the present study was to develop and validate a discriminative dissolution method for the evaluation of carvedilol tablets. Different conditions, such as the type and volume of dissolution medium and the paddle rotation speed, were evaluated. The best in vitro dissolution profile was obtained using Apparatus II (paddle) at 50 rpm with 900 ml of pH 6.8 phosphate buffer as the dissolution medium. Drug release was quantified by a high-performance liquid chromatographic method. The dissolution method was validated according to current ICH and FDA guidelines; parameters such as specificity, accuracy, precision and stability were evaluated, and the results obtained were within the acceptable ranges. The dissolution profiles of three different products were compared using ANOVA-based, model-dependent and model-independent methods; the results showed significant differences between the products. The dissolution test developed and validated was adequate given its high discriminative capacity in differentiating the release characteristics of the products tested, and could be applied to the development and quality control of carvedilol tablets. PMID:22923865
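
    The abstract does not name the specific model-independent metric, but the similarity factor f2 recommended in FDA dissolution guidance is the most widely used; a minimal sketch of that calculation (an assumption about the metric, offered for illustration):

        import math

        def f2_similarity(reference, test):
            # reference, test: percent dissolved at matched time points.
            n = len(reference)
            msd = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
            return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))

        # Profiles with f2 >= 50 are conventionally judged similar; a
        # discriminative method should drive f2 below 50 for products with
        # genuinely different release characteristics.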

  7. Infrared small target detection technology based on OpenCV

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Huang, Zhijian

    2013-05-01

    Accurate and fast detection of dim infrared (IR) targets is of great importance for infrared precision guidance, early warning, video surveillance, etc. In this paper, the basic principles and implementation flow charts of a series of target detection algorithms are described: the traditional two-frame difference method, an improved three-frame difference method, a fused background-estimation and frame-difference method, and background construction using a neighborhood-mean method. Building on this work, an infrared target detection software platform developed with OpenCV and MFC is introduced, integrating three tracking algorithms; its framework and functions are described. Finally, experiments are performed on real-life IR images; the complete processing chains and results are analyzed, and the detection algorithms are evaluated both subjectively and objectively. The results show that the proposed method has satisfactory detection effectiveness and robustness, together with an efficiency high enough for real-time detection.
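
    A minimal sketch of the two baseline algorithms named above, using OpenCV's Python bindings (the threshold value is an illustrative assumption):

        import cv2

        def two_frame_difference(prev_gray, curr_gray, thresh=25):
            # Absolute difference of consecutive grayscale frames, then a
            # binary threshold to keep candidate moving-target pixels.
            diff = cv2.absdiff(prev_gray, curr_gray)
            _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
            return mask

        def three_frame_difference(f0, f1, f2, thresh=25):
            # Improved variant: intersect the differences on either side of
            # the middle frame to suppress the "ghost" left at the target's
            # previous position.
            m1 = two_frame_difference(f0, f1, thresh)
            m2 = two_frame_difference(f1, f2, thresh)
            return cv2.bitwise_and(m1, m2)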

  9. Comparative Study on Two Different Methods for Determination of Hydraulic Conductivity of HeLa Cells During Freezing.

    PubMed

    Li, Lei; Gao, Cai; Zhao, Gang; Shu, Zhiquan; Cao, Yunxia; Gao, Dayong

    2016-12-01

    The measurement of the hydraulic conductivity of the cell membrane is very important for optimizing cryopreservation and cryosurgery protocols. Two different methods use differential scanning calorimetry (DSC) to measure the freezing response of cells and tissues. Devireddy et al. presented the slow-fast-slow (SFS) cooling method, in which the difference in heat release during freezing between osmotically active and inactive cells is used to obtain the membrane hydraulic conductivity and activation energy. Luo et al. simplified the procedure with the single-slow (SS) cooling protocol, which requires only one cooling process, although samples at different cytocrits are needed to determine the membrane transport properties. To the best of our knowledge, the two methods have not previously been compared in terms of experimental procedure and required conditions; this study makes that comparison in detail. Both methods were used to obtain the reference hydraulic conductivity (L_pg) and activation energy (E_Lp) of HeLa cells by fitting the model to DSC data. The SFS method yielded L_pg = 0.10 μm/(min·atm) and E_Lp = 22.9 kcal/mol, whereas the SS cooling method gave L_pg = 0.10 μm/(min·atm) and E_Lp = 23.6 kcal/mol, indicating that the water transport parameters measured by the two methods are comparable. In other words, the two parameters can be obtained by comparing the heat release between two slow cooling runs of the same sample (the SFS method), whereas the SS method requires analyzing the heat release of samples at different cytocrits and therefore more experimental time.
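
    For context, L_pg and E_Lp parameterize the temperature dependence of the membrane hydraulic conductivity; in cryobiological water transport models this is conventionally written in Arrhenius form (the abstract does not print the equation, so the standard form is reproduced here, with T_ref the reference temperature at which L_pg is defined):

        L_p(T) = L_{pg} \exp\left[ -\frac{E_{Lp}}{R} \left( \frac{1}{T} - \frac{1}{T_{\mathrm{ref}}} \right) \right]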

  10. Qualitative risk assessment during polymer mortar test specimens preparation - methods comparison

    NASA Astrophysics Data System (ADS)

    Silva, F.; Sousa, S. P. B.; Arezes, P.; Swuste, P.; Ribeiro, M. C. S.; Baptista, J. S.

    2015-05-01

    Modification of polymer binders with inorganic nanomaterials (NM) could be a potential and efficient way to control the matrix flammability of polymer concrete (PC) materials without sacrificing other important properties. Occupational exposure can occur all along the life cycle of a NM and of “nanoproducts”, from research through scale-up, product development, manufacturing, and end of life. The main objective of the present study is to analyse and compare different qualitative risk assessment methods during the production of polymer mortars (PM) with NM. The laboratory-scale production process was divided into three main phases (pre-production, production and post-production), which allowed the assessment methods to be tested in different situations. The risk assessment of the PM manufacturing process used the following qualitative analyses: the French Agency for Food, Environmental and Occupational Health & Safety method (ANSES); Control Banding Nanotool (CB Nanotool); the Ecole Polytechnique Fédérale de Lausanne method (EPFL); the Guidance for working safely with nanomaterials and nanoproducts (GWSNN); the Istituto Superiore per la Prevenzione e la Sicurezza del Lavoro method (ISPESL); the Precautionary Matrix for Synthetic Nanomaterials (PMSN); and Stoffenmanager Nano. The different methods were found to produce different final results. In phases 1 and 3 the risk tended to be classified as medium-high, while for phase 2 the most common result was medium. The use of qualitative methods needs to be improved by defining narrower criteria for method selection in each assessed situation, bearing in mind that uncertainty is also a relevant factor when dealing with risks in the nanotechnology field.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Q; Han, H; Xing, L

    Purpose: Dictionary-learning-based methods have attracted increasing attention in low-dose CT because of their superior performance in suppressing noise while preserving structural details. Considering that structures and noise vary from region to region within an imaging object, we propose a region-specific dictionary learning method to improve low-dose CT reconstruction. Methods: A set of normal-dose images was used for dictionary learning. These images were segmented so that training patch sets corresponding to different regions could be extracted, and region-specific dictionaries were learned from these training sets. For the low-dose CT reconstruction, a conventional reconstruction, such as filtered back-projection (FBP), was performed first, followed by segmentation of the image into regions. Sparsity constraints for each region, based on its dictionary, were used as regularization terms, with the regularization parameters selected adaptively for each region. A low-dose human thorax dataset was used to evaluate the proposed method, with a single-dictionary method used for comparison. Results: Since the lung region is very different from the rest of the thorax, two dictionaries, corresponding to the lung region and to the remaining thorax, were learned to better express the structural details and avoid artifacts. With only one dictionary, artifacts appeared in the body region, caused by spot atoms corresponding to structures in the lung region, and some structures in the lung region could not be recovered well. The quantitative indices of the proposed method were also slightly improved compared with the single-dictionary method. Conclusion: Region-specific dictionaries adapt better to the characteristics of each region, which is desirable for enhancing the performance of dictionary-learning-based methods.
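
    A minimal sketch of the training stage under the stated design (one dictionary per segmented region), using scikit-learn rather than the authors' implementation; the region names, patch size, and sparsity level are illustrative assumptions, and the reconstruction step with region-adaptive regularization is omitted:

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.feature_extraction.image import extract_patches_2d

        def learn_region_dictionaries(train_img, region_masks, patch=8, atoms=256):
            # region_masks: dict of boolean masks, e.g. {"lung": ..., "body": ...}.
            dictionaries = {}
            for name, mask in region_masks.items():
                # Sample patches, keeping those lying mostly inside the region
                # (same random_state so patch locations match the mask patches).
                pats = extract_patches_2d(train_img, (patch, patch),
                                          max_patches=20000, random_state=0)
                ins = extract_patches_2d(mask.astype(float), (patch, patch),
                                         max_patches=20000, random_state=0)
                keep = ins.mean(axis=(1, 2)) > 0.9
                X = pats[keep].reshape(-1, patch * patch)
                X = X - X.mean(axis=1, keepdims=True)  # remove patch DC offset
                dictionaries[name] = MiniBatchDictionaryLearning(
                    n_components=atoms, transform_algorithm='omp',
                    transform_n_nonzero_coefs=5).fit(X)
            return dictionaries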

  12. Numerical comparisons of ground motion predictions with kinematic rupture modeling

    NASA Astrophysics Data System (ADS)

    Yuan, Y. O.; Zurek, B.; Liu, F.; deMartin, B.; Lacasse, M. D.

    2017-12-01

    Recent advances in large-scale wave simulators allow for the computation of seismograms at unprecedented levels of detail and for areas sufficiently large to be relevant to small regional studies. In some instances, detailed information of the mechanical properties of the subsurface has been obtained from seismic exploration surveys, well data, and core analysis. Using kinematic rupture modeling, this information can be used with a wave propagation simulator to predict the ground motion that would result from an assumed fault rupture. The purpose of this work is to explore the limits of wave propagation simulators for modeling ground motion in different settings, and in particular, to explore the numerical accuracy of different methods in the presence of features that are challenging to simulate such as topography, low-velocity surface layers, and shallow sources. In the main part of this work, we use a variety of synthetic three-dimensional models and compare the relative costs and benefits of different numerical discretization methods in computing the seismograms of realistic-size models. The finite-difference method, the discontinuous-Galerkin method, and the spectral-element method are compared for a range of synthetic models having different levels of complexity such as topography, large subsurface features, low-velocity surface layers, and the location and characteristics of fault ruptures represented as an array of seismic sources. While some previous studies have already demonstrated that unstructured-mesh methods can sometimes tackle complex problems (Moczo et al.), we investigate the trade-off between unstructured-mesh methods and regular-grid methods for a broad range of models and source configurations. Finally, for comparison, our direct simulation results are briefly contrasted with those predicted by a few phenomenological ground-motion prediction equations, and a workflow for accurately predicting ground motion is proposed.
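
    As a toy illustration of the regular-grid end of this spectrum, a second-order finite-difference scheme for the 1D acoustic wave equation (a deliberately minimal sketch; the simulators compared in the study are three-dimensional and far more elaborate):

        import numpy as np

        def fd_wave_1d(nx=400, nt=1000, dx=10.0, dt=1e-3, c=3000.0, src=200, rec=250):
            # u_tt = c^2 u_xx, explicit second order in space and time,
            # with a Gaussian source pulse injected at grid index src.
            assert c * dt / dx < 1.0, "CFL stability condition violated"
            u_prev, u = np.zeros(nx), np.zeros(nx)
            seis = np.zeros(nt)
            for it in range(nt):
                lap = np.zeros(nx)
                lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
                u_next = 2.0 * u - u_prev + (c * dt) ** 2 * lap
                t = it * dt
                u_next[src] += dt**2 * np.exp(-(((t - 0.05) / 0.01) ** 2))
                u_prev, u = u, u_next
                seis[it] = u[rec]  # seismogram recorded at a receiver point
            return seis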

  13. The effect of different propolis harvest methods on its lead contents determined by ET AAS and UV-visS.

    PubMed

    Sales, A; Alvarez, A; Areal, M Rodriguez; Maldonado, L; Marchisio, P; Rodríguez, M; Bedascarrasbure, E

    2006-10-11

    Argentinean propolis is exported to different countries, especially Japan, and the market demands propolis quality control according to international standards. The analytical determination of metals such as lead in food is very important because of their high toxicity even at low concentrations and their harmful effects on health. Flavonoids, the main bioactive compounds of propolis, tend to chelate metals such as lead, which thus becomes one of the main contaminants of propolis. The lead found in propolis may come from the atmosphere or may be incorporated during harvest, extraction and processing. The aim of this work was to evaluate lead levels in Argentinean propolis, determined by electrothermal atomic absorption spectrometry (ET AAS) and UV-vis spectrophotometry (UV-visS), as well as the effect of harvest methods on those levels. A randomized trial with three different collection treatments was carried out to evaluate the effect of harvest method; the procedures were separating wedges (traditional), mosquito-netting plastic meshes and stamped-out plastic meshes. Analysis of variance for multiple comparisons (ANOVA) showed significant differences between the scraped and mesh methods (stamped-out and mosquito-netting meshes). The results of the present test suggest that mesh methods are more advisable than scraping for obtaining innocuous and safe propolis with lower lead content. A statistical comparison of lead determination by the ET AAS and UV-visS methods demonstrated no significant difference between the results of the two analytical techniques.

  14. The bias, accuracy and precision of faecal egg count reduction test results in cattle using McMaster, Cornell-Wisconsin and FLOTAC egg counting methods.

    PubMed

    Levecke, B; Rinaldi, L; Charlier, J; Maurelli, M P; Bosco, A; Vercruysse, J; Cringoli, G

    2012-08-13

    The faecal egg count reduction test (FECRT) is the recommended method to monitor anthelmintic drug efficacy in cattle. There is large variation in the faecal egg count (FEC) methods applied to determine the FECRT, and it remains unclear whether FEC methods with equal analytic sensitivity but different methodologies give equal FECRT results. We therefore compared the bias, accuracy and precision of FECRT results for the Cornell-Wisconsin (analytic sensitivity = 1 egg per gram of faeces (EPG)), FLOTAC (analytic sensitivity = 1 EPG) and McMaster methods (analytic sensitivity = 10 EPG) across four levels of egg excretion (1-49 EPG; 50-149 EPG; 150-299 EPG; 300-600 EPG). Finally, we assessed the sensitivity of the FEC methods for detecting a truly reduced efficacy, using two different criteria to define reduced efficacy based on the FECR: those described in the WAAVP guidelines (FECRT <95% and lower limit of the 95% CI <90%) (Coles et al., 1992) and those proposed by El-Abdellati et al. (2010) (upper limit of the 95% CI <95%). There was no significant difference in the bias and accuracy of FECRT results across the three methods. FLOTAC provided the most precise FECRT results; Cornell-Wisconsin and McMaster gave similarly imprecise results. FECRT results were significantly underestimated when baseline FEC were low and drugs were more efficacious. For all FEC methods, the precision and accuracy of the FECRT improved as egg excretion increased; this effect was greatest for McMaster and least for Cornell-Wisconsin. The sensitivity of the three methods for detecting a truly reduced efficacy was high (>90%), yet the sensitivity of McMaster and Cornell-Wisconsin may drop when drugs show only sub-optimal efficacy. Overall, the study indicates that the precision of the FECRT is affected by the FEC methodology, and that the level of egg excretion should be considered in the final interpretation of the FECRT. However, more comprehensive studies are required to provide further insight into the complex interplay of factors inherent to study design (sample size and FEC method) and host-parasite interactions (level of egg excretion and aggregation across the host population). Copyright © 2012 Elsevier B.V. All rights reserved.
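
    A minimal sketch of the group-mean FECRT calculation against the WAAVP criterion cited above (the reduction formula is the commonly applied unpaired group-mean form; the bootstrap confidence interval is one simple way to obtain the CI and is an assumption, not the paper's statistical method):

        import numpy as np

        def fecrt(pre, post):
            # Percent reduction in group-mean faecal egg counts.
            return 100.0 * (1.0 - np.mean(post) / np.mean(pre))

        def fecrt_ci(pre, post, n_boot=2000, seed=0):
            # Percentile bootstrap for the 95% CI of the reduction.
            rng = np.random.default_rng(seed)
            pre, post = np.asarray(pre, float), np.asarray(post, float)
            reps = [100.0 * (1.0 - rng.choice(post, post.size).mean()
                             / rng.choice(pre, pre.size).mean())
                    for _ in range(n_boot)]
            return np.percentile(reps, [2.5, 97.5])

        # WAAVP flag: reduced efficacy if fecrt(...) < 95 and the lower
        # 95% CI limit from fecrt_ci(...) is below 90.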

  15. Variational finite-difference methods in linear and nonlinear problems of the deformation of metallic and composite shells (review)

    NASA Astrophysics Data System (ADS)

    Maksimyuk, V. A.; Storozhuk, E. A.; Chernyshenko, I. S.

    2012-11-01

    Variational finite-difference methods for solving linear and nonlinear problems for thin and nonthin shells (plates) made of homogeneous isotropic (metallic) and orthotropic (composite) materials are analyzed, and their classification principles and structure are discussed. Scalar and vector variational finite-difference methods that implement the Kirchhoff-Love hypotheses analytically or algorithmically using Lagrange multipliers are outlined. The Timoshenko hypotheses are implemented in the traditional way, i.e., analytically. The stress-strain state of metallic and composite shells of complex geometry is analyzed numerically. The numerical results are presented in the form of graphs and tables and used to assess the efficiency of the variational finite-difference methods in solving linear and nonlinear problems of the statics of shells (plates).

  16. Comparison of three methods for evaluation of work postures in a truck assembly plant.

    PubMed

    Zare, Mohsen; Biau, Sophie; Brunet, Rene; Roquelaure, Yves

    2017-11-01

    This study compared the results of three risk assessment tools (a self-reported questionnaire, an observational tool, and a direct measurement method) for the upper limbs and back in a truck assembly plant at two cycle times (11 and 8 min). The weighted kappa factor showed fair agreement between the observational tool and the direct measurement method for the arm (0.39) and back (0.47). The weighted kappa for these methods was poor for the neck (0) and wrist (0), but the observed proportional agreement (P_o) was 0.78 for the neck and 0.83 for the wrist. The weighted kappa between the questionnaire and direct measurement showed poor or slight agreement (0) for the different body segments at both cycle times. The results revealed moderate agreement between the observational tool and the direct measurement method, and poor agreement between the self-reported questionnaire and direct measurement. Practitioner Summary: This study measures risk exposure with different common ergonomic methods in the field. The results help to develop valid measurements and improve exposure evaluation. Ergonomists and practitioners should therefore apply these methods with caution, or at least with an awareness of their limitations and sources of error.
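
    The kappa/P_o contrast reported above (near-zero kappa despite high raw agreement) is typical when one exposure category dominates. A minimal sketch of both statistics (linear weights are an assumption; the paper does not state its weighting scheme):

        from sklearn.metrics import cohen_kappa_score

        def agreement(ratings_a, ratings_b):
            # Weighted kappa corrects observed agreement for chance;
            # P_o is the raw proportion of identical ratings, which can
            # stay high while kappa drops to ~0 under skewed marginals.
            kappa = cohen_kappa_score(ratings_a, ratings_b, weights='linear')
            p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / len(ratings_a)
            return kappa, p_o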

  17. Comparison of Manual Refraction Versus Autorefraction in 60 Diabetic Retinopathy Patients.

    PubMed

    Shirzadi, Keyvan; Shahraki, Kourosh; Yahaghi, Emad; Makateb, Ali; Khosravifard, Keivan

    2016-07-27

    The purpose of the study was to compare manual refraction with autorefraction in diabetic retinopathy patients. The study was conducted at the Be'sat Army Hospital from 2013 to 2015. Differences between the two common refractometry methods (manual refractometry and autorefractometry) in the diagnosis and follow-up of retinopathy in patients with diabetes were investigated. Our results showed a significant difference between manual and autorefractometry in patients' visual acuity scores. By contrast, the spherical equivalent scores from the two methods did not show a statistically significant difference. Thus, although manual refraction is comparable with autorefraction for evaluating spherical equivalent scores in diabetic patients with retinopathy, the visual acuity results from the two methods are not comparable.
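
    For reference, the spherical equivalent compared above is computed from the sphere and cylinder components of a refraction by the standard formula; a one-line sketch:

        def spherical_equivalent(sphere_d, cylinder_d):
            # SE = sphere + cylinder / 2, all values in dioptres.
            return sphere_d + cylinder_d / 2.0

        # Example: -2.00 D sphere with -1.00 D cylinder gives SE = -2.50 D.
        print(spherical_equivalent(-2.00, -1.00))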

  18. Quality control in urinalysis.

    PubMed

    Takubo, T; Tatsumi, N

    1999-01-01

    Quality control (QC) has been introduced in laboratories, and QC surveys in urinalysis have been performed by the College of American Pathologists, the Japanese Association of Medical Technologists, the Osaka Medical Association and by manufacturers. A QC survey of urinalysis on synthetic urine using a reagent strip and instrument made by the same manufacturer, and using an automated urine cell analyser, gave satisfactory agreement among laboratories. A QC survey using reagent strips and instruments from various manufacturers showed differences in the determined values among manufacturers, and between manual and automated methods, because the reagent strips and instruments have different characteristics. A QC photo survey based on microscopic photos of urine sediment constituents indicated differences in the identification of cells among laboratories. These results show the need to standardize the reagent strip method, manual and automated methods, and synthetic urine.

  19. LARGE RIVER ASSESSMENT METHODS FOR BENTHIC MACROINVERTEBRATES AND FISH

    EPA Science Inventory

    Multiple projects are currently underway to increase our understanding of the varying results of different sampling methods and designs used for the biological assessment and monitoring of large (boatable) rivers. Studies include methods used to assess fish, benthic macroinverte...

  20. Testing actinide fission yield treatment in CINDER90 for use in MCNP6 burnup calculations

    DOE PAGES

    Fensin, Michael Lorne; Umbel, Marissa

    2015-09-18

    Most of the development of the MCNPX/6 burnup capability focused on features applied to the Boltzmann transport calculation or used to prepare coefficients for CINDER90, with little change to CINDER90 itself or its data. Though a scheme exists for best solving the coupled Boltzmann and Bateman equations, the most significant approximation is that the employed nuclear data are correct and complete. The CINDER90 library file contains 60 different actinide fission yield sets covering 36 fissionable actinides (thermal, fast, high-energy and spontaneous fission). Fission reaction data exist for more than 60 actinides, so fission yield data must be approximated for actinides that do not possess explicit fission yield information, and several types of approximation are used to estimate these yields. The objective of this study is to test whether certain fission yield approximations have any impact on the predictability of major actinides and fission products; we further assess which other fission products, available in MCNP6 Tier 3, show the largest differences in predicted production. Because the CINDER90 library file is in ASCII format and therefore easily amendable, we give reasons for choosing, and compare actinide and major fission product predictions on the H. B. Robinson benchmark for, three separate fission yield selection methods: (1) the current CINDER90 library file method (Base); (2) the element method (Element); and (3) the isobar method (Isobar). Results show that the three methods give similar predictions of the major actinides, Tc-99 and Cs-137; however, certain fission products showed significantly different production depending on the method chosen.
