Sample records for difference mgfd method

  1. The Relation of Finite Element and Finite Difference Methods

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1976-01-01

    Finite element and finite difference methods are examined in order to bring out their relationship. It is shown that both methods use two types of discrete representations of continuous functions. They differ in that finite difference methods emphasize the discretization of independent variable, while finite element methods emphasize the discretization of dependent variable (referred to as functional approximations). An important point is that finite element methods use global piecewise functional approximations, while finite difference methods normally use local functional approximations. A general conclusion is that finite element methods are best designed to handle complex boundaries, while finite difference methods are superior for complex equations. It is also shown that finite volume difference methods possess many of the advantages attributed to finite element methods.

  2. Evaluation of finite difference and FFT-based solutions of the transport of intensity equation.

    PubMed

    Zhang, Hongbo; Zhou, Wen-Jing; Liu, Ying; Leber, Donald; Banerjee, Partha; Basunia, Mahmudunnabi; Poon, Ting-Chung

    2018-01-01

    A finite difference method is proposed for solving the transport of intensity equation. Simulation results show that although slower than fast Fourier transform (FFT)-based methods, finite difference methods are able to reconstruct the phase with better accuracy due to relaxed assumptions for solving the transport of intensity equation relative to FFT methods. Finite difference methods are also more flexible than FFT methods in dealing with different boundary conditions.
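
    Illustrative sketch (not from the record): the FFT route to the transport of intensity equation assumes a roughly uniform in-focus intensity and periodic boundaries, which is exactly the kind of assumption the finite difference solver relaxes. A minimal Python version, with hypothetical defocused images I_minus and I_plus:

      import numpy as np

      def tie_phase_fft(I_minus, I_plus, dz, wavelength, pixel, I0=1.0, eps=1e-9):
          """Phase from two defocused intensities via an FFT-based inverse Laplacian.
          Assumes uniform in-focus intensity I0 and periodic boundaries."""
          k = 2.0 * np.pi / wavelength
          dIdz = (I_plus - I_minus) / (2.0 * dz)            # axial intensity derivative
          ny, nx = dIdz.shape
          FX, FY = np.meshgrid(np.fft.fftfreq(nx, d=pixel), np.fft.fftfreq(ny, d=pixel))
          q2 = (2.0 * np.pi) ** 2 * (FX ** 2 + FY ** 2)     # |q|^2 for the Laplacian
          rhs = -k * dIdz / I0                              # uniform-intensity TIE: Laplacian(phi) = rhs
          phi_hat = np.fft.fft2(rhs) / -(q2 + eps)          # inverse Laplacian in Fourier space
          phi_hat[0, 0] = 0.0                               # the constant phase offset is undetermined
          return np.real(np.fft.ifft2(phi_hat))

    A finite difference solver would instead discretize the full divergence form of the equation on the grid and solve a sparse linear system, trading speed for the relaxed assumptions and flexible boundary handling described above.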

  3. Improved methods of vibration analysis of pretwisted, airfoil blades

    NASA Technical Reports Server (NTRS)

    Subrahmanyam, K. B.; Kaza, K. R. V.

    1984-01-01

    Vibration analysis of pretwisted blades of asymmetric airfoil cross section is performed by using two mixed variational approaches. Numerical results obtained from these two methods are compared to those obtained from an improved finite difference method and also to those given by the ordinary finite difference method. The relative merits, convergence properties and accuracies of all four methods are studied and discussed. The effects of asymmetry and pretwist on natural frequencies and mode shapes are investigated. The improved finite difference method is shown to be far superior to the conventional finite difference method in several respects. Close lower bound solutions are provided by the improved finite difference method for untwisted blades with a relatively coarse mesh while the mixed methods have not indicated any specific bound.

  4. A new method of time difference measurement: The time difference method by dual phase coincidence points detection

    NASA Technical Reports Server (NTRS)

    Zhou, Wei

    1993-01-01

    In the highly accurate measurement of periodic signals, the greatest common factor frequency and its characteristics play a special role. A method of time difference measurement, the time difference method by dual 'phase coincidence points' detection, is described. This method utilizes the characteristics of the greatest common factor frequency to measure the time or phase difference between periodic signals, and it is suitable for a very wide frequency range. Measurement precision and potential accuracy of several picoseconds were demonstrated with this new method. The instrument based on this method is very simple, and the demands on the common oscillator are low. This method and instrument can be used widely.

  5. Mixed Methods, Triangulation, and Causal Explanation

    ERIC Educational Resources Information Center

    Howe, Kenneth R.

    2012-01-01

    This article distinguishes a disjunctive conception of mixed methods/triangulation, which brings different methods to bear on different questions, from a conjunctive conception, which brings different methods to bear on the same question. It then examines a more inclusive, holistic conception of mixed methods/triangulation that accommodates…

  6. Examining mixing methods in an evaluation of a smoking cessation program.

    PubMed

    Betzner, Anne; Lawrenz, Frances P; Thao, Mao

    2016-02-01

    Three different methods were used in an evaluation of a smoking cessation study: surveys, focus groups, and phenomenological interviews. The results of each method were analyzed separately and then combined using both a pragmatic and dialectic stance to examine the effects of different approaches to mixing methods. Results show that the further apart the methods are philosophically, the more diverse the findings. Comparisons of decision maker opinions and costs of the different methods are provided along with recommendations for evaluators' uses of different methods. Copyright © 2015. Published by Elsevier Ltd.

  7. Task exposures in an office environment: a comparison of methods.

    PubMed

    Van Eerd, Dwayne; Hogg-Johnson, Sheilah; Mazumder, Anjali; Cole, Donald; Wells, Richard; Moore, Anne

    2009-10-01

    Task-related factors such as frequency and duration are associated with musculoskeletal disorders in office settings. The primary objective was to compare various task recording methods as measures of exposure in an office workplace. A total of 41 workers from different jobs were recruited from a large urban newspaper (71% female, mean age 41 years SD 9.6). Questionnaires, task diaries, direct observation and video methods were used to record tasks. A common set of task codes was used across methods. Different estimates of task duration, number of tasks and task transitions arose from the different methods. Self-report methods did not consistently result in longer task duration estimates. Methodological issues could explain some of the differences in estimates seen between the methods. It was concluded that different task recording methods result in different estimates of exposure, likely due to different exposure constructs. This work addresses issues of exposure measurement in office environments. It is of relevance to ergonomists/researchers interested in how to best assess the risk of injury among office workers. The paper discusses the trade-offs between precision, accuracy and burden in the collection of computer task-based exposure measures and the different underlying constructs captured by each method.

  8. Finite difference and Runge-Kutta methods for solving vibration problems

    NASA Astrophysics Data System (ADS)

    Lintang Renganis Radityani, Scolastika; Mungkasi, Sudi

    2017-11-01

    The vibration of a storey building can be modelled into a system of second order ordinary differential equations. If the number of floors of a building is large, then the result is a large scale system of second order ordinary differential equations. The large scale system is difficult to solve, and if it can be solved, the solution may not be accurate. Therefore, in this paper, we seek for accurate methods for solving vibration problems. We compare the performance of numerical finite difference and Runge-Kutta methods for solving large scale systems of second order ordinary differential equations. The finite difference methods include the forward and central differences. The Runge-Kutta methods include the Euler and Heun methods. Our research results show that the central finite difference and the Heun methods produce more accurate solutions than the forward finite difference and the Euler methods do.
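
    Illustrative sketch (not from the record): for a single-storey analogue m x'' + c x' + k x = 0 with invented constants, the central difference and Heun updates compared in the paper look as follows in Python.

      import numpy as np

      m, c, k = 1.0, 0.1, 40.0                 # hypothetical mass, damping, stiffness
      dt, steps = 0.001, 5000
      x0, v0 = 0.01, 0.0

      def acc(x, v):
          return -(c * v + k * x) / m

      # Central finite difference: x_{n+1} = 2 x_n - x_{n-1} + dt^2 * a_n
      x_cd = np.zeros(steps)
      x_cd[0] = x0
      x_cd[1] = x0 + dt * v0 + 0.5 * dt**2 * acc(x0, v0)    # Taylor start-up step
      for n in range(1, steps - 1):
          v_est = (x_cd[n] - x_cd[n - 1]) / dt              # backward-difference velocity
          x_cd[n + 1] = 2 * x_cd[n] - x_cd[n - 1] + dt**2 * acc(x_cd[n], v_est)

      # Heun's method on the first-order system y' = f(y), y = (x, v)
      def f(y):
          return np.array([y[1], acc(y[0], y[1])])

      y, x_heun = np.array([x0, v0]), [x0]
      for _ in range(steps - 1):
          y_pred = y + dt * f(y)                            # Euler predictor
          y = y + 0.5 * dt * (f(y) + f(y_pred))             # trapezoidal corrector
          x_heun.append(y[0])

      print(x_cd[-1], x_heun[-1])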

  9. Unmanned Tactical Autonomous Control and Collaboration Threat and Vulnerability Assessment

    DTIC Science & Technology

    2015-06-01

    they are evaluated separately from each other [19]. One major difference between this classification method and the FIPS 199 is that no...both technical and nontechnical methods” [31]. These different methods will enable the UTACC system to effectively mitigate vulnerabilities that...team. Marines on the battlefield communicate with each other in many different and unique methods. UTACC must be adaptable to these different methods

  10. Convergence and divergence across construction methods for human brain white matter networks: an assessment based on individual differences.

    PubMed

    Zhong, Suyu; He, Yong; Gong, Gaolang

    2015-05-01

    Using diffusion MRI, a number of studies have investigated the properties of whole-brain white matter (WM) networks with differing network construction methods (node/edge definition). However, how the construction methods affect individual differences of WM networks and, particularly, if distinct methods can provide convergent or divergent patterns of individual differences remain largely unknown. Here, we applied 10 frequently used methods to construct whole-brain WM networks in a healthy young adult population (57 subjects), which involves two node definitions (low-resolution and high-resolution) and five edge definitions (binary, FA weighted, fiber-density weighted, length-corrected fiber-density weighted, and connectivity-probability weighted). For these WM networks, individual differences were systematically analyzed in three network aspects: (1) a spatial pattern of WM connections, (2) a spatial pattern of nodal efficiency, and (3) network global and local efficiencies. Intriguingly, we found that some of the network construction methods converged in terms of individual difference patterns, but diverged with other methods. Furthermore, the convergence/divergence between methods differed among network properties that were adopted to assess individual differences. Particularly, high-resolution WM networks with differing edge definitions showed convergent individual differences in the spatial pattern of both WM connections and nodal efficiency. For the network global and local efficiencies, low-resolution and high-resolution WM networks for most edge definitions consistently exhibited a highly convergent pattern in individual differences. Finally, the test-retest analysis revealed a decent temporal reproducibility for the patterns of between-method convergence/divergence. Together, the results of the present study demonstrated a measure-dependent effect of network construction methods on the individual difference of WM network properties. © 2015 Wiley Periodicals, Inc.
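
    Illustrative sketch (not the study's data or pipeline): the global and local efficiency measures used to compare construction methods can be computed with standard graph tools once a connectivity matrix has been thresholded into a network.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)
      n_nodes = 90                                   # e.g. a low-resolution parcellation
      w = rng.random((n_nodes, n_nodes))
      w = np.triu(w, 1) + np.triu(w, 1).T            # symmetric "connectivity" matrix
      w[w < 0.9] = 0.0                               # keep only strong edges (binary edge definition)

      G = nx.from_numpy_array((w > 0).astype(int))
      print("global efficiency:", nx.global_efficiency(G))
      print("local efficiency:", nx.local_efficiency(G))

      # Individual differences would then be assessed by correlating such per-subject
      # values (or per-node efficiencies) across the different construction methods.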

  11. A simulation-based evaluation of methods for inferring linear barriers to gene flow

    Treesearch

    Christopher Blair; Dana E. Weigel; Matthew Balazik; Annika T. H. Keeley; Faith M. Walker; Erin Landguth; Sam Cushman; Melanie Murphy; Lisette Waits; Niko Balkenhol

    2012-01-01

    Different analytical techniques used on the same data set may lead to different conclusions about the existence and strength of genetic structure. Therefore, reliable interpretation of the results from different methods depends on the efficacy and reliability of different statistical methods. In this paper, we evaluated the performance of multiple analytical methods to...

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rowe, M.D.; Pierce, B.L.

    This report presents results of tests of different final site selection methods used for siting large-scale facilities such as nuclear power plants. Test data are adapted from a nuclear power plant siting study conducted on Long Island, New York. The purpose of the tests is to determine whether or not different final site selection methods produce different results, and to obtain some understanding of the nature of any differences found. Decision rules and weighting methods are included. Decision rules tested are Weighting Summation, Power Law, Decision Analysis, Goal Programming, and Goal Attainment; weighting methods tested are Categorization, Ranking, Rating, Ratio Estimation, Metfessel Allocation, Indifferent Tradeoff, Decision Analysis lottery, and Global Evaluation. Results show that different methods can, indeed, produce different results, but that the probability that they will do so is controlled by the structure of differences among the sites being evaluated. Differences in weights and suitability scores attributable to methods have reduced significance if the alternatives include one or two sites that are superior to all others in many attributes. The more tradeoffs there are among good and bad levels of different attributes at different sites, the more important are the specifics of methods to the final decision. 5 refs., 14 figs., 19 tabs.
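
    Illustrative sketch (invented sites and weights, not the Long Island data): the simplest decision rule tested, Weighting Summation, reduces to a weighted sum of normalized suitability scores, while a Power Law rule combines them multiplicatively.

      import numpy as np

      # rows: candidate sites, columns: attributes (e.g. population, ecology, cost)
      scores = np.array([[0.9, 0.4, 0.7],
                         [0.6, 0.8, 0.5],
                         [0.7, 0.7, 0.9]])
      weights = np.array([0.5, 0.3, 0.2])            # from ranking, rating, Metfessel allocation, etc.
      weights = weights / weights.sum()

      summation = scores @ weights                   # Weighting Summation decision rule
      power_law = np.prod(scores ** weights, axis=1) # multiplicative (Power Law) variant
      print("best by summation:", np.argmax(summation), "best by power law:", np.argmax(power_law))

    When one site dominates on most attributes the two rules agree; the more the good and bad attribute levels are traded off across sites, the more the choice of rule matters, which is the report's central finding.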

  13. Testing and Validation of Computational Methods for Mass Spectrometry.

    PubMed

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets ( http://compms.org/RefData ) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  14. Estimating the mediating effect of different biomarkers on the relation of alcohol consumption with the risk of type 2 diabetes.

    PubMed

    Beulens, Joline W J; van der Schouw, Yvonne T; Moons, Karel G M; Boshuizen, Hendriek C; van der A, Daphne L; Groenwold, Rolf H H

    2013-04-01

    Moderate alcohol consumption is associated with a reduced type 2 diabetes risk, but the biomarkers that explain this relation are unknown. The most commonly used method to estimate the proportion explained by a biomarker is the difference method. However, influence of alcohol-biomarker interaction on its results is unclear. G-estimation method is proposed to accurately assess proportion explained, but how this method compares with the difference method is unknown. In a case-cohort study of 2498 controls and 919 incident diabetes cases, we estimated the proportion explained by different biomarkers on the relation between alcohol consumption and diabetes using the difference method and sequential G-estimation method. Using the difference method, high-density lipoprotein cholesterol explained the relation between alcohol and diabetes by 78% (95% confidence interval [CI], 41-243), whereas high-sensitivity C-reactive protein (-7.5%; -36.4 to 1.8) or blood pressure (-6.9; -26.3 to -0.6) did not explain the relation. Interaction between alcohol and liver enzymes led to bias in proportion explained with different outcomes for different levels of liver enzymes. G-estimation method showed comparable results, but proportions explained were lower. The relation between alcohol consumption and diabetes may be largely explained by increased high-density lipoprotein cholesterol but not by other biomarkers. Ignoring exposure-mediator interactions may result in bias. The difference and G-estimation methods provide similar results. Copyright © 2013 Elsevier Inc. All rights reserved.
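
    Illustrative sketch only: the difference method compares the exposure coefficient before and after adjusting for the candidate mediator. The sketch below uses plain logistic regression with invented column names, not the authors' weighted case-cohort analysis.

      import pandas as pd
      import statsmodels.formula.api as smf

      # df is assumed to hold: diabetes (0/1), alcohol (drinks/day), hdl, age, sex
      df = pd.read_csv("cohort.csv")                 # hypothetical file

      total = smf.logit("diabetes ~ alcohol + age + sex", data=df).fit(disp=0)
      direct = smf.logit("diabetes ~ alcohol + hdl + age + sex", data=df).fit(disp=0)

      b_total = total.params["alcohol"]
      b_direct = direct.params["alcohol"]
      prop_explained = (b_total - b_direct) / b_total    # difference method
      print(f"proportion explained by HDL: {prop_explained:.1%}")

      # With exposure-mediator interaction this simple contrast is biased, which is
      # the issue the paper addresses with sequential G-estimation.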

  15. Psychological traits underlying different killing methods among Malaysian male murderers.

    PubMed

    Kamaluddin, Mohammad Rahim; Shariff, Nadiah Syariani; Nurfarliza, Siti; Othman, Azizah; Ismail, Khaidzir H; Mat Saat, Geshina Ayu

    2014-04-01

    Murder is the most notorious crime that violates religious, social and cultural norms. Examining the types and number of different killing methods used is pivotal in a murder case. However, the psychological traits underlying specific and multiple killing methods are still understudied. The present study attempts to fill this gap in knowledge by identifying the underlying psychological traits of different killing methods among Malaysian murderers. The study adopted an observational cross-sectional methodology using a guided self-administered questionnaire for data collection. The sampling frame consisted of 71 Malaysian male murderers from 11 Malaysian prisons who were selected using a purposive sampling method. The participants were also asked to provide the types and number of different killing methods used to kill their respective victims. An independent sample t-test was performed to establish the mean score difference of psychological traits between the murderers who used single and multiple types of killing methods. Kruskal-Wallis tests were carried out to ascertain the psychological trait differences between specific types of killing methods. The results suggest that specific psychological traits underlie the type and number of different killing methods used during murder. The majority (88.7%) of murderers used a single method of killing. Multiple methods of killing were evident in 'premeditated' murder compared to 'passion' murder, and revenge was a common motive. Examples of multiple methods are combinations of stabbing and strangulation or slashing and physical force. An exception was premeditated murder committed with shooting, which was usually a single method, attributed to the high lethality of firearms. Shooting was also notable when the motive was financial gain or related to drug dealing. Murderers who used multiple killing methods were more aggressive and sadistic than those who used a single killing method. Those who used multiple methods or slashing also displayed a higher level of minimisation traits. Despite its limitations, this study sheds some light on the underlying psychological traits of different killing methods, which is useful in the field of criminology.

  16. Mobile micro-colorimeter and micro-spectrometer sensor modules as enablers for the replacement of subjective inspections by objective measurements for optically clear colored liquids in-field

    NASA Astrophysics Data System (ADS)

    Dittrich, Paul-Gerald; Grunert, Fred; Ehehalt, Jörg; Hofmann, Dietrich

    2015-03-01

    The aim of the paper is to show that the colorimetric characterization of optically clear colored liquids can be performed with different measurement methods and their application-specific multichannel spectral sensors. The possible measurement methods are differentiated by the applied types of multichannel spectral sensors and therefore by their spectral resolution, measurement speed, measurement accuracy and measurement costs. The paper describes how different types of multichannel spectral sensors are calibrated with different types of calibration methods and how the measurement values can be used for further colorimetric calculations. The different measurement methods and the different application-specific calibration methods are explained methodically and theoretically. The paper shows that, and how, different multichannel spectral sensor modules with different calibration methods can be applied with smartpads for the calculation of measurement results both in the laboratory and in the field. A practical example given is the application of different multichannel spectral sensors for the colorimetric characterization of petroleum oils and fuels and their colorimetric characterization by the Saybolt color scale.

  17. [Preliminary study on correlation between diversity of soluble proteins and producing area of Cordyceps sinensis].

    PubMed

    Ren, Yan; Qiu, Yi; Wan, De-Guang; Lu, Xian-Ming; Guo, Jin-Lin

    2013-05-01

    To analyze the content and types of soluble proteins in Cordyceps sinensis from different producing areas and processed with different methods, using the Bradford method and 2-DE technology, in order to discover significant differences in soluble proteins in C. sinensis processed with different methods and from different producing areas. The preliminary study indicated that the content and diversity of soluble proteins were related to producing areas and processing methods to some extent.

  18. [Measuring the blood pressure in both arms is of little use; longitudinal study into blood pressure differences between both arms and its reproducibility in patients with diabetes mellitus type 2].

    PubMed

    Kleefstra, N; Houweling, S T; Meyboom-de Jong, B; Bilo, H J G

    2007-07-07

    To determine the prevalence of inter-arm blood pressure differences > 10 mmHg in patients with diabetes mellitus type 2 (DM2) and to determine whether these differences are consistent over time. Descriptive. In an evaluation study of 169 DM2 patients from 5 general practices in 2003 and 2004, different methods of oscillatory measurement were used to investigate inter-arm blood pressure differences > 10 mmHg systolic or diastolic. These methods were: one measurement in each arm non-simultaneously (method A), one measurement simultaneously (B) and the mean of two simultaneous measurements (C). With method A an inter-arm blood pressure difference was found in 33% of patients. This percentage diminished to 9 with method C. In 44% (n = 7) of the patients in whom method C detected a relevant blood pressure difference, this difference was not found with method A. In 79% of patients the inter-arm blood pressure difference was not reproduced after one year. In daily practice, one non-simultaneous blood pressure measurement in each arm (method A) was of little value for identification of patients with inter-arm blood pressure differences. The reproducibility was poor one year later. Bilateral blood pressure measurement is therefore of little value.

  19. Spectral difference Lanczos method for efficient time propagation in quantum control theory

    NASA Astrophysics Data System (ADS)

    Farnum, John D.; Mazziotti, David A.

    2004-04-01

    Spectral difference methods represent the real-space Hamiltonian of a quantum system as a banded matrix which possesses the accuracy of the discrete variable representation (DVR) and the efficiency of finite differences. When applied to time-dependent quantum mechanics, spectral differences enhance the efficiency of propagation methods for evolving the Schrödinger equation. We develop a spectral difference Lanczos method which is computationally more economical than the sinc-DVR Lanczos method, the split-operator technique, and even the fast-Fourier-Transform Lanczos method. Application of fast propagation is made to quantum control theory where chirped laser pulses are designed to dissociate both diatomic and polyatomic molecules. The specificity of the chirped laser fields is also tested as a possible method for molecular identification and discrimination.
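
    Illustrative sketch (generic, not the authors' spectral-difference operator): a short-iterative-Lanczos step applied to a banded, finite-difference-like Hamiltonian conveys the propagation idea; the grid, potential and Krylov dimension are arbitrary choices, and no breakdown checks are included.

      import numpy as np
      from scipy.sparse import diags

      # Banded Hamiltonian: central-difference kinetic term + harmonic potential (atomic units)
      n, dx = 512, 0.1
      x = (np.arange(n) - n / 2) * dx
      H = diags([np.full(n - 1, -0.5 / dx**2),
                 1.0 / dx**2 + 0.5 * x**2,
                 np.full(n - 1, -0.5 / dx**2)], offsets=[-1, 0, 1])

      def lanczos_step(H, psi, dt, m=15):
          """Advance psi by exp(-i H dt) psi using an m-dimensional Krylov subspace."""
          norm0 = np.linalg.norm(psi)
          V = np.zeros((m, psi.size), dtype=complex)
          alpha, beta = np.zeros(m), np.zeros(m - 1)
          V[0] = psi / norm0
          w = H @ V[0]
          alpha[0] = np.vdot(V[0], w).real
          w = w - alpha[0] * V[0]
          for j in range(1, m):
              beta[j - 1] = np.linalg.norm(w)
              V[j] = w / beta[j - 1]
              w = H @ V[j]
              alpha[j] = np.vdot(V[j], w).real
              w = w - alpha[j] * V[j] - beta[j - 1] * V[j - 1]
          T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)   # tridiagonal projection
          evals, evecs = np.linalg.eigh(T)
          small = evecs @ (np.exp(-1j * evals * dt) * evecs[0, :])    # exp(-i T dt) applied to e1
          return norm0 * (V.T @ small)

      psi = np.exp(-(x - 1.0) ** 2).astype(complex)                   # displaced Gaussian wave packet
      psi /= np.linalg.norm(psi)
      for _ in range(100):
          psi = lanczos_step(H, psi, dt=0.05)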

  20. Different methods to analyze stepped wedge trial designs revealed different aspects of intervention effects.

    PubMed

    Twisk, J W R; Hoogendijk, E O; Zwijsen, S A; de Boer, M R

    2016-04-01

    Within epidemiology, a stepped wedge trial design (i.e., a one-way crossover trial in which several arms start the intervention at different time points) is increasingly popular as an alternative to a classical cluster randomized controlled trial. Despite this increasing popularity, there is a huge variation in the methods used to analyze data from a stepped wedge trial design. Four linear mixed models were used to analyze data from a stepped wedge trial design on two example data sets. The four methods were chosen because they have been (frequently) used in practice. Method 1 compares all the intervention measurements with the control measurements. Method 2 treats the intervention variable as a time-independent categorical variable comparing the different arms with each other. In method 3, the intervention variable is a time-dependent categorical variable comparing groups with different number of intervention measurements, whereas in method 4, the changes in the outcome variable between subsequent measurements are analyzed. Regarding the results in the first example data set, methods 1 and 3 showed a strong positive intervention effect, which disappeared after adjusting for time. Method 2 showed an inverse intervention effect, whereas method 4 did not show a significant effect at all. In the second example data set, the results were the opposite. Both methods 2 and 4 showed significant intervention effects, whereas the other two methods did not. For method 4, the intervention effect attenuated after adjustment for time. Different methods to analyze data from a stepped wedge trial design reveal different aspects of a possible intervention effect. The choice of a method partly depends on the type of the intervention and the possible time-dependent effect of the intervention. Furthermore, it is advised to combine the results of the different methods to obtain an interpretable overall result. Copyright © 2016 Elsevier Inc. All rights reserved.
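
    Illustrative sketch (invented column names and a schematic data file): method 1, with and without adjustment for time, written as a linear mixed model with a random intercept per cluster.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Long-format data: outcome, intervention (0/1), period, cluster
      df = pd.read_csv("stepped_wedge.csv")          # hypothetical file

      # Method 1: all intervention vs. all control measurements
      m1 = smf.mixedlm("outcome ~ intervention", df, groups=df["cluster"]).fit()

      # The same comparison adjusted for (categorical) time, which removed the
      # apparent effect in the paper's first example data set
      m1_time = smf.mixedlm("outcome ~ intervention + C(period)", df,
                            groups=df["cluster"]).fit()

      print(m1.params["intervention"], m1_time.params["intervention"])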

  1. [Influence of different processing methods and mature stages on 3,29-dibenzoyl rarounitriol of Trichosanthes kirilowii seeds].

    PubMed

    Liu, Jin-Na; Xie, Xiao-Liang; Yang, Tai-Xin; Zhang, Cun-Li; Jia, Dong-Sheng; Liu, Ming; Wen, Chun-Xiu

    2014-04-01

    To study the effects of different mature stages and processing methods on the quality of Trichosanthes kirilowii seeds. The content of 3,29-dibenzoyl rarounitriol in Trichosanthes kirilowii seeds was determined by HPLC. Samples at different mature stages (immature, near mature and fully mature) and processed by different methods were studied. Fully mature Trichosanthes kirilowii seeds were better than immature ones, and the best processing method was drying at 60 °C, with which the content of 3,29-dibenzoyl rarounitriol reached 131.63 μg/mL. Different processing methods and different mature stages had a significant influence on the quality of Trichosanthes kirilowii seeds.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr

    Previous studies have proposed several methods for integrating characterized environmental impacts as a single index in life cycle assessment. Each of them, however, may lead to different results. This study presents internal and external normalization methods, weighting factors proposed by panel methods, and a monetary valuation based on an endpoint life cycle impact assessment method as the integration methods. Furthermore, this study investigates the differences among the integration methods and identifies the causes of the differences through a case study in which five elementary school buildings were used. As a result, when using internal normalization with weighting factors, the weighting factors had a significant influence on the total environmental impacts whereas the normalization had little influence on the total environmental impacts. When using external normalization with weighting factors, the normalization had more significant influence on the total environmental impacts than the weighting factors. Due to such differences, the ranking of the five buildings varied depending on the integration methods. The ranking calculated by the monetary valuation method was significantly different from that calculated by the normalization and weighting process. The results aid decision makers in understanding the differences among these integration methods, and, finally, help them select the method most appropriate for the goal at hand.
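
    Illustrative sketch (toy impact values and weights, not the study's buildings): the internal versus external normalization contrast comes down to what the characterized impacts are divided by before weighting.

      import numpy as np

      # rows: 5 buildings, columns: characterized impacts (e.g. GWP, AP, EP)
      impacts = np.array([[3.2e5, 140.0, 55.0],
                          [2.9e5, 180.0, 40.0],
                          [3.6e5, 120.0, 60.0],
                          [3.0e5, 150.0, 48.0],
                          [3.4e5, 130.0, 52.0]])
      weights = np.array([0.5, 0.3, 0.2])
      external_ref = np.array([5.0e5, 200.0, 80.0])  # e.g. reference totals for the region

      internal = impacts / impacts.sum(axis=0)       # normalize within the alternatives
      external = impacts / external_ref              # normalize against an outside reference

      print("ranking (internal):", np.argsort(internal @ weights))
      print("ranking (external):", np.argsort(external @ weights))   # rankings can differ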

  3. Comparison of preprocessing methods and storage times for touch DNA samples

    PubMed Central

    Dong, Hui; Wang, Jing; Zhang, Tao; Ge, Jian-ye; Dong, Ying-qiang; Sun, Qi-fan; Liu, Chao; Li, Cai-xia

    2017-01-01

    Aim: To select appropriate preprocessing methods for different substrates by comparing the effects of four different preprocessing methods on touch DNA samples and to determine the effect of various storage times on the results of touch DNA sample analysis. Method: Hand touch DNA samples were used to investigate the detection and inspection results of DNA on different substrates. Four preprocessing methods, including the direct cutting method, stubbing procedure, double swab technique, and vacuum cleaner method, were used in this study. DNA was extracted from mock samples with the four different preprocessing methods. The best preprocessing protocol determined from the study was further used to compare performance after various storage times. DNA extracted from all samples was quantified and amplified using standard procedures. Results: The amounts of DNA and the numbers of alleles detected on the porous substrates were greater than those on the non-porous substrates. The performances of the four preprocessing methods varied with different substrates. The direct cutting method displayed advantages for porous substrates, and the vacuum cleaner method was advantageous for non-porous substrates. No significant degradation trend was observed as the storage times increased. Conclusion: Different substrates require the use of different preprocessing methods in order to obtain the highest DNA amount and allele number from touch DNA samples. This study provides a theoretical basis for explorations of touch DNA samples and may be used as a reference when dealing with touch DNA samples in case work. PMID:28252870

  4. Digital photography and transparency-based methods for measuring wound surface area.

    PubMed

    Bhedi, Amul; Saxena, Atul K; Gadani, Ravi; Patel, Ritesh

    2013-04-01

    To compare and determine a credible method of measurement of wound surface area by linear, transparency, and photographic methods for monitoring progress of wound healing accurately and ascertaining whether these methods are significantly different. From April 2005 to December 2006, 40 patients (30 men, 5 women, 5 children) admitted to the surgical ward of Shree Sayaji General Hospital, Baroda, had clean as well as infected wounds following trauma, debridement, pressure sore, venous ulcer, and incision and drainage. Wound surface areas were measured by these three methods (linear, transparency, and photographic methods) simultaneously on alternate days. The linear method was statistically significantly different from the transparency and photographic methods (P value <0.05), but there was no significant difference between the transparency and photographic methods (P value >0.05). Photographic and transparency methods provided measurements of wound surface area with equivalent results, and there was no statistically significant difference between these two methods.

  5. K-nearest neighbors based methods for identification of different gear crack levels under different motor speeds and loads: Revisited

    NASA Astrophysics Data System (ADS)

    Wang, Dong

    2016-03-01

    Gears are the most commonly used components in mechanical transmission systems. Their failures may cause transmission system breakdown and result in economic loss. Identification of different gear crack levels is important to prevent any unexpected gear failure because gear cracks lead to gear tooth breakage. Signal processing based methods mainly require expertise to explain gear fault signatures, which is usually not easily achieved by ordinary users. In order to automatically identify different gear crack levels, intelligent gear crack identification methods should be developed. Previous case studies experimentally proved that K-nearest neighbors based methods exhibit high prediction accuracies for identification of 3 different gear crack levels under different motor speeds and loads. In this short communication, to further enhance the prediction accuracies of existing K-nearest neighbors based methods and extend identification of 3 different gear crack levels to identification of 5 different gear crack levels, redundant statistical features are constructed by using the Daubechies 44 (db44) binary wavelet packet transform at different wavelet decomposition levels, prior to the use of a K-nearest neighbors method. The dimensionality of the redundant statistical features is 620, which provides richer gear fault signatures. Since many of these statistical features are redundant and highly correlated with each other, dimensionality reduction of the redundant statistical features is conducted to obtain new significant statistical features. Finally, the K-nearest neighbors method is used to identify 5 different gear crack levels under different motor speeds and loads. A case study including 3 experiments is investigated to demonstrate that the developed method provides higher prediction accuracies than the existing K-nearest neighbors based methods for recognizing different gear crack levels under different motor speeds and loads. Based on the new significant statistical features, some other popular statistical models, including linear discriminant analysis, quadratic discriminant analysis, classification and regression trees and the naive Bayes classifier, are compared with the developed method. The results show that the developed method has the highest prediction accuracies among these statistical models. Additionally, selection of the number of new significant features and parameter selection for K-nearest neighbors are thoroughly investigated.
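
    Illustrative sketch of the feature pipeline (simulated signals, not the experimental data): wavelet-packet statistics feed a dimensionality reduction step and a K-nearest neighbors classifier. PyWavelets' built-in Daubechies family stops short of the db44 filter used in the paper, so db4 stands in here, and PCA stands in for the paper's feature reduction step.

      import numpy as np
      import pywt
      from sklearn.decomposition import PCA
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      def wp_features(signal, wavelet="db4", level=3):
          """Simple statistics of each wavelet-packet node at the given level."""
          wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
          feats = []
          for node in wp.get_level(level, order="natural"):
              c = node.data
              feats += [np.sum(c**2), np.std(c), np.mean(np.abs(c))]
          return np.array(feats)

      rng = np.random.default_rng(1)
      X = np.array([wp_features(rng.standard_normal(2048) * (1 + 0.2 * lab))
                    for lab in range(5) for _ in range(40)])
      y = np.repeat(np.arange(5), 40)                # 5 simulated crack levels

      clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                          KNeighborsClassifier(n_neighbors=5))
      clf.fit(X, y)
      print("training accuracy:", clf.score(X, y))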

  6. More than Method?: A Discussion of Paradigm Differences within Mixed Methods Research

    ERIC Educational Resources Information Center

    Harrits, Gitte Sommer

    2011-01-01

    This article challenges the idea that mixed methods research (MMR) constitutes a coherent research paradigm and explores how different research paradigms exist within MMR. Tracing paradigmatic differences at the level of methods, ontology, and epistemology, two MMR strategies are discussed: nested analysis, recently presented by the American…

  7. Comparing 3D foot scanning with conventional measurement methods.

    PubMed

    Lee, Yu-Chi; Lin, Gloria; Wang, Mao-Jiun J

    2014-01-01

    Foot dimension information on different user groups is important for footwear design and clinical applications. Foot dimension data collected using different measurement methods presents accuracy problems. This study compared the precision and accuracy of the 3D foot scanning method with conventional foot dimension measurement methods including the digital caliper, ink footprint and digital footprint. Six commonly used foot dimensions, i.e. foot length, ball of foot length, outside ball of foot length, foot breadth diagonal, foot breadth horizontal and heel breadth were measured from 130 males and females using four foot measurement methods. Two-way ANOVA was performed to evaluate the sex and method effect on the measured foot dimensions. In addition, the mean absolute difference values and intra-class correlation coefficients (ICCs) were used for precision and accuracy evaluation. The results were also compared with the ISO 20685 criteria. The participant's sex and the measurement method were found (p < 0.05) to exert significant effects on the measured six foot dimensions. The precision of the 3D scanning measurement method with mean absolute difference values between 0.73 to 1.50 mm showed the best performance among the four measurement methods. The 3D scanning measurements showed better measurement accuracy performance than the other methods (mean absolute difference was 0.6 to 4.3 mm), except for measuring outside ball of foot length and foot breadth horizontal. The ICCs for all six foot dimension measurements among the four measurement methods were within the 0.61 to 0.98 range. Overall, the 3D foot scanner is recommended for collecting foot anthropometric data because it has relatively higher precision, accuracy and robustness. This finding suggests that when comparing foot anthropometric data among different references, it is important to consider the differences caused by the different measurement methods.

  8. Validated univariate and multivariate spectrophotometric methods for the determination of pharmaceuticals mixture in complex wastewater

    NASA Astrophysics Data System (ADS)

    Riad, Safaa M.; Salem, Hesham; Elbalkiny, Heba T.; Khattab, Fatma I.

    2015-04-01

    Five, accurate, precise, and sensitive univariate and multivariate spectrophotometric methods were developed for the simultaneous determination of a ternary mixture containing Trimethoprim (TMP), Sulphamethoxazole (SMZ) and Oxytetracycline (OTC) in waste water samples collected from different sites, either production wastewater or livestock wastewater, after their solid phase extraction using OASIS HLB cartridges. In the univariate methods OTC was determined at its λmax 355.7 nm (0D), while TMP and SMZ were determined by three different univariate methods. Method (A) is based on the successive spectrophotometric resolution technique (SSRT). The technique starts with the ratio subtraction method followed by the ratio difference method for determination of TMP and SMZ. Method (B) is the successive derivative ratio technique (SDR). Method (C) is mean centering of the ratio spectra (MCR). The developed multivariate methods are principal component regression (PCR) and partial least squares (PLS). The specificity of the developed methods is investigated by analyzing laboratory prepared mixtures containing different ratios of the three drugs. The obtained results are statistically compared with those obtained by the official methods, showing no significant difference with respect to accuracy and precision at p = 0.05.

  9. Validated univariate and multivariate spectrophotometric methods for the determination of pharmaceuticals mixture in complex wastewater.

    PubMed

    Riad, Safaa M; Salem, Hesham; Elbalkiny, Heba T; Khattab, Fatma I

    2015-04-05

    Five, accurate, precise, and sensitive univariate and multivariate spectrophotometric methods were developed for the simultaneous determination of a ternary mixture containing Trimethoprim (TMP), Sulphamethoxazole (SMZ) and Oxytetracycline (OTC) in waste water samples collected from different sites, either production wastewater or livestock wastewater, after their solid phase extraction using OASIS HLB cartridges. In the univariate methods OTC was determined at its λmax 355.7 nm (0D), while TMP and SMZ were determined by three different univariate methods. Method (A) is based on the successive spectrophotometric resolution technique (SSRT). The technique starts with the ratio subtraction method followed by the ratio difference method for determination of TMP and SMZ. Method (B) is the successive derivative ratio technique (SDR). Method (C) is mean centering of the ratio spectra (MCR). The developed multivariate methods are principal component regression (PCR) and partial least squares (PLS). The specificity of the developed methods is investigated by analyzing laboratory prepared mixtures containing different ratios of the three drugs. The obtained results are statistically compared with those obtained by the official methods, showing no significant difference with respect to accuracy and precision at p=0.05. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Finite difference methods for transient signal propagation in stratified dispersive media

    NASA Technical Reports Server (NTRS)

    Lam, D. H.

    1975-01-01

    Explicit difference equations are presented for the solution of a signal of arbitrary waveform propagating in an ohmic dielectric, a cold plasma, a Debye model dielectric, and a Lorentz model dielectric. These difference equations are derived from the governing time-dependent integro-differential equations for the electric fields by a finite difference method. A special difference equation is derived for the grid point at the boundary of two different media. Employing this difference equation, transient signal propagation in an inhomogeneous media can be solved provided that the medium is approximated in a step-wise fashion. The solutions are generated simply by marching on in time. It is concluded that while the classical transform methods will remain useful in certain cases, with the development of the finite difference methods described, an extensive class of problems of transient signal propagating in stratified dispersive media can be effectively solved by numerical methods.
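
    Illustrative sketch (arbitrary material constants): for the simplest case listed, the ohmic dielectric, the explicit difference equations take the familiar leapfrog form shown below in 1D. The dispersive Debye and Lorentz models treated in the report additionally need storage for the convolution (polarization) terms.

      import numpy as np

      c0, eps0, mu0 = 3.0e8, 8.854e-12, 4.0e-7 * np.pi
      nz, nt, dz = 800, 2000, 1.0e-3
      dt = 0.5 * dz / c0                             # CFL-stable time step

      eps = np.full(nz, eps0);  eps[400:] *= 4.0     # lossy dielectric in the right half
      sigma = np.zeros(nz);     sigma[400:] = 5.0e-3 # ohmic conductivity

      ca = (1 - sigma * dt / (2 * eps)) / (1 + sigma * dt / (2 * eps))
      cb = (dt / (eps * dz)) / (1 + sigma * dt / (2 * eps))

      E, H = np.zeros(nz), np.zeros(nz)
      for n in range(nt):
          H[:-1] += dt / (mu0 * dz) * (E[1:] - E[:-1])                    # magnetic update
          E[1:-1] = ca[1:-1] * E[1:-1] + cb[1:-1] * (H[1:-1] - H[:-2])    # lossy electric update
          E[50] += np.exp(-((n - 80) / 20.0) ** 2)                        # soft Gaussian source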

  11. Confident difference criterion: a new Bayesian differentially expressed gene selection algorithm with applications.

    PubMed

    Yu, Fang; Chen, Ming-Hui; Kuo, Lynn; Talbott, Heather; Davis, John S

    2015-08-07

    Recently, Bayesian methods have become more popular for analyzing high dimensional gene expression data as they allow us to borrow information across different genes and provide powerful estimators for evaluating gene expression levels. It is crucial to develop a simple but efficient gene selection algorithm for detecting differentially expressed (DE) genes based on the Bayesian estimators. In this paper, by extending the two-criterion idea of Chen et al. (Chen M-H, Ibrahim JG, Chi Y-Y. A new class of mixture models for differential gene expression in DNA microarray data. J Stat Plan Inference. 2008;138:387-404), we propose two new gene selection algorithms for general Bayesian models and name these new methods the confident difference criterion methods. One is based on the standardized differences between two mean expression values among genes; the other adds the differences between two variances to it. The proposed confident difference criterion methods first evaluate the posterior probability of a gene having different gene expressions between competitive samples and then declare a gene to be DE if the posterior probability is large. The theoretical connection between the proposed first method based on the means and the Bayes factor approach proposed by Yu et al. (Yu F, Chen M-H, Kuo L. Detecting differentially expressed genes using calibrated Bayes factors. Statistica Sinica. 2008;18:783-802) is established under the normal-normal model with equal variances between two samples. The empirical performance of the proposed methods is examined and compared to that of several existing methods via several simulations. The results from these simulation studies show that the proposed confident difference criterion methods outperform the existing methods when comparing gene expressions across different conditions for both microarray studies and sequence-based high-throughput studies. A real dataset is used to further demonstrate the proposed methodology. In the real data application, the confident difference criterion methods successfully identified more clinically important DE genes than the other methods. The confident difference criterion method proposed in this paper provides a new efficient approach for both microarray studies and sequence-based high-throughput studies to identify differentially expressed genes.

  12. [Mahalanobis distance based hyperspectral characteristic discrimination of leaves of different desert tree species].

    PubMed

    Lin, Hai-jun; Zhang, Hui-fang; Gao, Ya-qi; Li, Xia; Yang, Fan; Zhou, Yan-fei

    2014-12-01

    The hyperspectral reflectance of Populus euphratica, Tamarix hispida, Haloxylon ammodendron and Calligonum mongolicum in the lower reaches of the Tarim River and the Turpan Desert Botanical Garden was measured using the HR-768 field-portable spectroradiometer. Continuum removal, first derivative reflectance and second derivative reflectance were used to process the original spectral data of the four tree species. The Mahalanobis distance method was used to select the bands with significant differences in the original and transformed spectral data for identifying the different tree species, and progressive discriminant analyses were used to test the selected bands. The results showed that the Mahalanobis distance method was effective for feature band extraction. The bands identifying different tree species were mostly near-infrared bands. The recognition accuracies of the four methods were 85%, 93.8%, 92.4% and 95.5%, respectively, showing that spectral transformation could improve recognition accuracy, although the accuracy differed among research objects and spectral transformation methods. The research provides evidence for desert tree species classification, biodiversity monitoring and large-scale remote sensing analysis of desert areas.
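
    Illustrative sketch (simulated spectra for two classes, not the measured reflectance): per-band Mahalanobis distance between species reduces to a standardized mean difference with a pooled variance, and ranking the bands by it picks out the discriminative (here, near-infrared-like) wavelengths.

      import numpy as np

      rng = np.random.default_rng(2)
      n_bands = 768
      a = rng.normal(0.30, 0.02, size=(50, n_bands))     # species A reflectance samples
      b = rng.normal(0.30, 0.02, size=(50, n_bands))     # species B
      b[:, 500:600] += 0.05                              # built-in difference in some bands

      def band_mahalanobis(a, b):
          pooled_var = (a.var(axis=0, ddof=1) + b.var(axis=0, ddof=1)) / 2.0
          return np.abs(a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(pooled_var)

      d = band_mahalanobis(a, b)
      print("20 most discriminative bands:", sorted(np.argsort(-d)[:20]))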

  13. Simultaneous determination of binary mixture of amlodipine besylate and atenolol based on dual wavelengths

    NASA Astrophysics Data System (ADS)

    Lamie, Nesrine T.

    2015-10-01

    Four, accurate, precise, and sensitive spectrophotometric methods are developed for simultaneous determination of a binary mixture of amlodipine besylate (AM) and atenolol (AT). AM is determined at its λmax 360 nm (0D), while atenolol can be determined by four different methods. Method (A) is absorption factor (AF). Method (B) is the new ratio difference method (RD) which measures the difference in amplitudes between 210 and 226 nm. Method (C) is novel constant center spectrophotometric method (CC). Method (D) is mean centering of the ratio spectra (MCR) at 284 nm. The methods are tested by analyzing synthetic mixtures of the cited drugs and they are applied to their commercial pharmaceutical preparation. The validity of results is assessed by applying standard addition technique. The results obtained are found to agree statistically with those obtained by official methods, showing no significant difference with respect to accuracy and precision.

  14. Financial time series analysis based on information categorization method

    NASA Astrophysics Data System (ADS)

    Tian, Qiang; Shang, Pengjian; Feng, Guochen

    2014-12-01

    The paper mainly applies the information categorization method to analyze the financial time series. The method is used to examine the similarity of different sequences by calculating the distances between them. We apply this method to quantify the similarity of different stock markets. And we report the results of similarity in US and Chinese stock markets in periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after global financial crisis) by using this method. The results show the difference of similarity between different stock markets in different time periods and the similarity of the two stock markets become larger after these two crises. Also we acquire the results of similarity of 10 stock indices in three areas; it means the method can distinguish different areas' markets from the phylogenetic trees. The results show that we can get satisfactory information from financial markets by this method. The information categorization method can not only be used in physiologic time series, but also in financial time series.

  15. Elevation data fitting and precision analysis of Google Earth in road survey

    NASA Astrophysics Data System (ADS)

    Wei, Haibin; Luan, Xiaohan; Li, Hanchao; Jia, Jiangkun; Chen, Zhao; Han, Leilei

    2018-05-01

    Objective: In order to improve efficiency of road survey and save manpower and material resources, this paper intends to apply Google Earth to the feasibility study stage of road survey and design. Limited by the problem that Google Earth elevation data lacks precision, this paper is focused on finding several different fitting or difference methods to improve the data precision, in order to make every effort to meet the accuracy requirements of road survey and design specifications. Method: On the basis of elevation difference of limited public points, any elevation difference of the other points can be fitted or interpolated. Thus, the precise elevation can be obtained by subtracting elevation difference from the Google Earth data. Quadratic polynomial surface fitting method, cubic polynomial surface fitting method, V4 interpolation method in MATLAB and neural network method are used in this paper to process elevation data of Google Earth. And internal conformity, external conformity and cross correlation coefficient are used as evaluation indexes to evaluate the data processing effect. Results: There is no fitting difference at the fitting point while using V4 interpolation method. Its external conformity is the largest and the effect of accuracy improvement is the worst, so V4 interpolation method is ruled out. The internal and external conformity of the cubic polynomial surface fitting method both are better than those of the quadratic polynomial surface fitting method. The neural network method has a similar fitting effect with the cubic polynomial surface fitting method, but its fitting effect is better in the case of a higher elevation difference. Because the neural network method is an unmanageable fitting model, the cubic polynomial surface fitting method should be mainly used and the neural network method can be used as the auxiliary method in the case of higher elevation difference. Conclusions: Cubic polynomial surface fitting method can obviously improve data precision of Google Earth. The error of data in hilly terrain areas meets the requirement of specifications after precision improvement and it can be used in feasibility study stage of road survey and design.
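
    Illustrative sketch (synthetic control points, not Google Earth data): the recommended cubic polynomial surface fit is an ordinary least-squares problem in the monomials of the plane coordinates, and the fitted surface is then subtracted from the Google Earth elevations.

      import numpy as np

      def poly_terms(x, y, degree=3):
          """All monomials x^i * y^j with i + j <= degree (cubic surface by default)."""
          return np.column_stack([x**i * y**j
                                  for i in range(degree + 1)
                                  for j in range(degree + 1 - i)])

      rng = np.random.default_rng(3)
      x, y = rng.uniform(0, 1000, 200), rng.uniform(0, 1000, 200)   # control point coordinates
      dz = 2.0 + 0.004 * x - 0.003 * y + rng.normal(0, 0.3, 200)    # elevation difference (GE - survey)

      A = poly_terms(x, y)
      coef, *_ = np.linalg.lstsq(A, dz, rcond=None)                 # fit the correction surface

      # Correct a Google Earth elevation at a new point by subtracting the fitted difference
      correction = poly_terms(np.array([420.0]), np.array([250.0])) @ coef
      print("fitted elevation difference:", correction[0])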

  16. Development of a Thermal Desorption Gas Chromatography-Mass Spectrometry Analysis Method for Airborne Dichlorodiphenyltrichloroethane

    DTIC Science & Technology

    2013-05-28

    span of 1-250 ng DDT. Furthermore, laboratory and field experiments utilizing this method confirmed that significant DDT concentration differences ... different between the two sample introduction methods when comparing the same DDT mass, which may be due to differences in the precision of split...degradation of DDT was significantly different between the liquid and TD methods (t-test; p < 0.001). For TD analyses the relative percent

  17. a Method of Time-Series Change Detection Using Full Polsar Images from Different Sensors

    NASA Astrophysics Data System (ADS)

    Liu, W.; Yang, J.; Zhao, J.; Shi, H.; Yang, L.

    2018-04-01

    Most of the existing change detection methods using full polarimetric synthetic aperture radar (PolSAR) are limited to detecting change between two points in time. In this paper, a novel method is proposed to detect change based on time-series data from different sensors. Firstly, the overall difference image of the time-series PolSAR data is calculated by an omnibus statistical test. Secondly, difference images between any two images at different times are acquired by the Rj statistical test. A generalized Gaussian mixture model (GGMM) is then used to obtain the time-series change detection maps in the last step of the proposed method. To verify the effectiveness of the proposed method, we carried out a change detection experiment using time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, in China. Results show that the proposed method can detect time-series change from different sensors.

  18. Stability and retention of micronutrients in fortified rice prepared using different cooking methods.

    PubMed

    Wieringa, Frank T; Laillou, Arnaud; Guyondet, Christophe; Jallier, Vincent; Moench-Pfanner, Regina; Berger, Jacques

    2014-09-01

    Fortified rice holds great potential for bringing essential micronutrients to a large part of the world population. However, it is unknown whether differences in cooking methods or in production of rice premix affect the final amount of micronutrient consumed. This paper presents a study that quantified the losses of five different micronutrients (vitamin A, iron, zinc, folic acid, and vitamin B12) in fortified rice that was produced using three different techniques (hot extrusion, cold extrusion, and coating) during cooking and five different cooking methods (absorption method with or without soaking, washing before cooking, cooking in excess water, and frying rice before cooking). Fortified rice premix from six different producers (two for each technique) was mixed with normal rice in a 1:100 ratio. Each sample was prepared in triplicate, using the five different cooking methods, and retention of iron, zinc, vitamin A, vitamin B12, and folic acid was determined. It was found that the overall retention of iron, zinc, vitamin B12, and folic acid was between 75% and 100% and was unaffected by cooking method, while the retention of vitamin A was significantly affected by cooking method, with retention ranging from 0% (excess water) to 80% (soaking), depending on the cooking method and producer of the rice premix. No systematic differences between the different production methods were observed. We conclude that different cooking methods of rice as used in different regions of the world do not lead to a major loss of most micronutrients, with the exception of vitamin A. The factors involved in protecting vitamin A against losses during cooking need to be identified. All production techniques of rice premix yielded similar results, showing that coating is not inferior to extrusion techniques. Standard overages (50%) for vitamin B12 and folic acid are too high. © 2014 New York Academy of Sciences.

  19. Differences in the Stimulus Accommodative Convergence/Accommodation Ratio using Various Techniques and Accommodative Stimuli.

    PubMed

    Satou, Tsukasa; Ito, Misae; Shinomiya, Yuma; Takahashi, Yoshiaki; Hara, Naoto; Niida, Takahiro

    2018-04-04

    To investigate differences in the stimulus accommodative convergence/accommodation (AC/A) ratio using various techniques and accommodative stimuli, and to describe a method for determining the stimulus AC/A ratio. A total of 81 subjects with a mean age of 21 years (range, 20-23 years) were enrolled. The relationship between ocular deviation and accommodation was assessed using two methods. Ocular deviation was measured by varying the accommodative requirement using spherical plus/minus lenses to create an accommodative stimulus of 10.00 diopters (D) (in 1.00 D steps). Ocular deviation was assessed using the alternate prism cover test in method 1 at distance (5 m) and near (1/3 m), and the major amblyoscope in method 2. The stimulus AC/A ratios obtained using methods 1 and 2 were calculated and defined as the stimulus AC/A ratios with low and high accommodation, respectively, using the following analysis method. The former was calculated as the difference between the convergence response to an accommodative stimulus of 3 D and 0 D, divided by 3. The latter was calculated as the difference between the convergence response to a maximum (max) accommodative stimulus with distinct vision of the subject and an accommodative stimulus of max minus 3.00 D, divided by 3. The median stimulus AC/A ratio with low accommodation (1.0 Δ/D for method 1 at distance, 2.0 Δ/D for method 1 at near, and 2.7 Δ/D for method 2) differed significantly among the measurement methods (P < 0.01). Differences in the median stimulus AC/A ratio with high accommodation (4.0 Δ/D for method 1 at distance, 3.7 Δ/D for method 1 at near, and 4.7 Δ/D for method 2) between method 1 at distance and method 2 were statistically significant (P < 0.05), while method 1 at near was not significantly different compared with other methods. Differences in the stimulus AC/A ratio value were significant according to measurement technique and accommodative stimuli. However, differences caused by measurement technique may be reduced by using a high accommodative stimulus during measurements.
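
    Illustrative sketch (hypothetical deviation readings in prism diopters): both stimulus AC/A definitions used here reduce to a difference quotient over a 3 D span of accommodative stimulus.

      def stimulus_ac_a(dev_hi, dev_lo, stim_hi, stim_lo):
          """Change in convergence (prism diopters) per diopter of accommodative stimulus."""
          return (dev_hi - dev_lo) / (stim_hi - stim_lo)

      # Low-accommodation ratio: response at 3 D vs. 0 D of stimulus
      low = stimulus_ac_a(dev_hi=-2.0, dev_lo=-8.0, stim_hi=3.0, stim_lo=0.0)      # 2.0 prism D / D

      # High-accommodation ratio: maximum stimulus with clear vision vs. (max - 3) D
      max_stim = 9.0
      high = stimulus_ac_a(dev_hi=10.0, dev_lo=-2.0,
                           stim_hi=max_stim, stim_lo=max_stim - 3.0)               # 4.0 prism D / D

      print(low, high)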

  20. [Comparison of different methods in dealing with HIV viral load data with diversified missing value mechanism on HIV positive MSM].

    PubMed

    Jiang, Z; Dou, Z; Song, W L; Xu, J; Wu, Z Y

    2017-11-10

    Objective: To compare the results of different methods for handling HIV viral load (VL) data with different missing value mechanisms. Methods: We used SPSS 17.0 to simulate complete and missing data with different missing value mechanisms from HIV viral load data collected from MSM in 16 cities in China in 2013. Maximum likelihood using the expectation-maximization algorithm (EM), a regression method, mean imputation, a deletion method, and Markov chain Monte Carlo (MCMC) were used to supplement the missing data, and the results of the different methods were compared with respect to distribution characteristics, accuracy and precision. Results: The HIV VL data could not be transformed into a normal distribution. All the methods performed well for data that were missing completely at random (MCAR). For the other types of missing data, the regression and MCMC methods best preserved the main characteristics of the original data. The means of the imputed datasets from the different methods were all close to the original one. The EM, regression, mean imputation, and deletion methods under-estimated VL, while MCMC overestimated it. Conclusion: MCMC can be used as the main imputation method for missing HIV viral load data, and the imputed data can be used as a reference for mean HIV VL estimation in the investigated population.
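
    Illustrative sketch (invented columns and simulated data; scikit-learn stands in for the SPSS procedures named in the abstract): mean imputation, regression-style iterative imputation, and a multiple-imputation-like strategy applied to a viral load column with missing values.

      import numpy as np
      import pandas as pd
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import SimpleImputer, IterativeImputer

      rng = np.random.default_rng(4)
      log_vl = rng.normal(3.5, 1.0, 500)                 # log10 viral load
      cd4 = 800 - 80 * log_vl + rng.normal(0, 50, 500)   # a correlated covariate
      df = pd.DataFrame({"log_vl": log_vl, "cd4": cd4})
      df.loc[rng.random(500) < 0.2, "log_vl"] = np.nan   # ~20% missing (MCAR here)

      mean_imp = SimpleImputer(strategy="mean").fit_transform(df)[:, 0]
      regr_imp = IterativeImputer(random_state=0).fit_transform(df)[:, 0]
      # Averaging several runs with sample_posterior=True approximates a
      # multiple-imputation (MCMC-like) strategy
      mi_imp = np.mean([IterativeImputer(sample_posterior=True, random_state=s).fit_transform(df)[:, 0]
                        for s in range(5)], axis=0)

      print(np.nanmean(df["log_vl"]), mean_imp.mean(), regr_imp.mean(), mi_imp.mean())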

  1. [Do different interpretative methods used for evaluation of checkerboard synergy test affect the results?].

    PubMed

    Ozseven, Ayşe Gül; Sesli Çetin, Emel; Ozseven, Levent

    2012-07-01

    In recent years, owing to the presence of multi-drug resistant nosocomial bacteria, combination therapies are applied more frequently. Thus, there is an increasing need to investigate the in vitro activity of drug combinations against multi-drug resistant bacteria. Checkerboard synergy testing is among the most widely used standard techniques to determine the activity of antibiotic combinations. It is based on microdilution susceptibility testing of antibiotic combinations. Although this test has a standardised procedure, there are many different methods for interpreting the results. In many previous studies carried out with multi-drug resistant bacteria, different rates of synergy have been reported with various antibiotic combinations using the checkerboard technique. These differences might be attributed to the different features of the strains. However, different synergy rates detected by the checkerboard method have also been reported in other studies using the same drug combinations and the same types of bacteria. It was thought that these differences in synergy rates might be due to the different methods of interpretation of the synergy test results. In recent years, multi-drug resistant Acinetobacter baumannii has been the most commonly encountered nosocomial pathogen, especially in intensive-care units. For this reason, multi-drug resistant A.baumannii has been the subject of a considerable amount of research on antimicrobial combinations. In the present study, the in vitro activities of combinations frequently preferred in A.baumannii infections, namely imipenem plus ampicillin/sulbactam and meropenem plus ampicillin/sulbactam, were tested by the checkerboard synergy method against 34 multi-drug resistant A.baumannii isolates. Minimum inhibitory concentration (MIC) values for imipenem, meropenem and ampicillin/sulbactam were determined by the broth microdilution method. Subsequently, the activity of the two combinations was tested over the dilution range from 4 x MIC to 0.03 x MIC in 96-well checkerboard plates. The results were obtained separately using the four different interpretation methods frequently preferred by researchers. Thus, it was aimed to detect to what extent the rates of synergistic, indifferent and antagonistic interactions were affected by the different interpretation methods. The differences between the interpretation methods were tested by chi-square analysis for each combination used. Statistically significant differences were detected between the four different interpretation methods for the determination of synergistic and indifferent interactions (p< 0.0001). The highest rates of synergy were observed with both combinations by the method that used the lowest fractional inhibitory concentration index of all the non-turbid wells along the turbidity/non-turbidity interface. There was no statistically significant difference between the four methods for the detection of antagonism (p> 0.05). In conclusion, although there is a standard procedure for checkerboard synergy testing, it fails to yield standard results owing to the different methods of interpretation of the results. Thus, there is a need to standardise the interpretation method for checkerboard synergy testing. To determine the most appropriate method of interpretation, further studies investigating the clinical benefits of synergistic combinations, and additionally comparing the consistency of the results with those of other standard combination tests such as time-kill studies, are required.
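
    For readers unfamiliar with the index mentioned above, the fractional inhibitory concentration index (FICI) of a checkerboard well is conventionally computed as sketched below; the cut-offs shown (synergy at FICI <= 0.5, antagonism at FICI > 4) are only one commonly cited interpretation scheme, which is precisely the kind of choice the study shows can change the results.

        def fic_index(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
            """Fractional inhibitory concentration index for one checkerboard well."""
            return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

        def interpret(fici):
            """One commonly used interpretation scheme (others exist, as the study shows)."""
            if fici <= 0.5:
                return "synergy"
            if fici > 4.0:
                return "antagonism"
            return "indifference"

        # Example: each drug inhibits growth in combination at a quarter of its individual MIC
        fici = fic_index(mic_a_combo=0.25, mic_a_alone=1.0, mic_b_combo=2.0, mic_b_alone=8.0)
        print(fici, interpret(fici))   # 0.5 synergy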

  2. Relation between financial market structure and the real economy: comparison between clustering methods.

    PubMed

    Musmeci, Nicoló; Aste, Tomaso; Di Matteo, T

    2015-01-01

    We quantify the amount of information filtered by different hierarchical clustering methods on correlations between stock returns comparing the clustering structure with the underlying industrial activity classification. We apply, for the first time to financial data, a novel hierarchical clustering approach, the Directed Bubble Hierarchical Tree and we compare it with other methods including the Linkage and k-medoids. By taking the industrial sector classification of stocks as a benchmark partition, we evaluate how the different methods retrieve this classification. The results show that the Directed Bubble Hierarchical Tree can outperform other methods, being able to retrieve more information with fewer clusters. Moreover, we show that the economic information is hidden at different levels of the hierarchical structures depending on the clustering method. The dynamical analysis on a rolling window also reveals that the different methods show different degrees of sensitivity to events affecting financial markets, like crises. These results can be of interest for all the applications of clustering methods to portfolio optimization and risk hedging [corrected].
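
    As a generic illustration of this kind of benchmark comparison (not the Directed Bubble Hierarchical Tree itself), the sketch below clusters synthetic return correlations with average-linkage hierarchical clustering and scores the partition against a hypothetical sector labelling using the adjusted Rand index.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform
        from sklearn.metrics import adjusted_rand_score

        rng = np.random.default_rng(1)
        sectors = np.repeat([0, 1, 2], 10)                  # hypothetical benchmark partition
        factors = rng.normal(size=(250, 3))                 # one latent factor per sector
        returns = factors[:, sectors] + 0.8 * rng.normal(size=(250, 30))

        corr = np.corrcoef(returns, rowvar=False)
        dist = np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None))  # correlation-based distance
        np.fill_diagonal(dist, 0.0)

        Z = linkage(squareform(dist, checks=False), method="average")
        labels = fcluster(Z, t=3, criterion="maxclust")
        print("agreement with sectors:", adjusted_rand_score(sectors, labels))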

  3. Relation between Financial Market Structure and the Real Economy: Comparison between Clustering Methods

    PubMed Central

    Musmeci, Nicoló; Aste, Tomaso; Di Matteo, T.

    2015-01-01

    We quantify the amount of information filtered by different hierarchical clustering methods on correlations between stock returns comparing the clustering structure with the underlying industrial activity classification. We apply, for the first time to financial data, a novel hierarchical clustering approach, the Directed Bubble Hierarchical Tree and we compare it with other methods including the Linkage and k-medoids. By taking the industrial sector classification of stocks as a benchmark partition, we evaluate how the different methods retrieve this classification. The results show that the Directed Bubble Hierarchical Tree can outperform other methods, being able to retrieve more information with fewer clusters. Moreover, we show that the economic information is hidden at different levels of the hierarchical structures depending on the clustering method. The dynamical analysis on a rolling window also reveals that the different methods show different degrees of sensitivity to events affecting financial markets, like crises. These results can be of interest for all the applications of clustering methods to portfolio optimization and risk hedging. PMID:25786703

  4. Development of a Double Glass Mounting Method Using Formaldehyde Alcohol Azocarmine Lactophenol (FAAL) and its Evaluation for Permanent Mounting of Small Nematodes

    PubMed Central

    ZAHABIUN, Farzaneh; SADJJADI, Seyed Mahmoud; ESFANDIARI, Farideh

    2015-01-01

    Background: Permanent slide preparation of nematodes, especially small ones, is time consuming and difficult, and the specimens develop scarious margins. To address this problem, a modified double glass mounting method was developed and compared with the classic method. Methods: A total of 209 nematode samples of human and animal origin were fixed and stained with Formaldehyde Alcohol Azocarmine Lactophenol (FAAL), followed by double glass mounting or the classic dehydration method using Canada balsam as the mounting medium. The slides were evaluated at different dates and times over more than four years, and photographs were taken at different magnifications during the evaluation period. Results: The double glass mounting method remained stable during this time and was comparable with the classic method. There were no changes in the morphologic structures of the nematodes mounted with the double glass method, which showed well-defined and clear differentiation between the different organs. Conclusion: This method is cost effective and fast for mounting small nematodes compared to the classic method. PMID:26811729

  5. Evaluation of a visual layering methodology for colour coding control room displays.

    PubMed

    Van Laar, Darren; Deshe, Ofer

    2002-07-01

    Eighteen people participated in an experiment in which they were asked to search for targets on control-room-like displays that had been produced using three different coding methods. The monochrome coding method displayed the information in black and white only; the maximally discriminable method contained colours chosen for their high perceptual discriminability; and the visual layers method contained colours developed from psychological and cartographic principles, which grouped information into a perceptual hierarchy. The visual layers method produced significantly faster search times than the other two coding methods, which did not differ significantly from each other. Search time also differed significantly with presentation order and for the method x order interaction. There was no significant difference between the methods in the number of errors made. Participants clearly preferred the visual layers coding method. Proposals are made for the design of experiments to further test and develop the visual layers colour coding methodology.

  6. Comparison of Molecular Typing Methods Useful for Detecting Clusters of Campylobacter jejuni and C. coli Isolates through Routine Surveillance

    PubMed Central

    Taboada, Eduardo; Grant, Christopher C. R.; Blakeston, Connie; Pollari, Frank; Marshall, Barbara; Rahn, Kris; MacKinnon, Joanne; Daignault, Danielle; Pillai, Dylan; Ng, Lai-King

    2012-01-01

    Campylobacter spp. may be responsible for unreported outbreaks of food-borne disease. The detection of these outbreaks is made more difficult by the fact that appropriate methods for detecting clusters of Campylobacter have not been well defined. We have compared the characteristics of five molecular typing methods on Campylobacter jejuni and C. coli isolates obtained from human and nonhuman sources during sentinel site surveillance during a 3-year period. Comparative genomic fingerprinting (CGF) appears to be one of the optimal methods for the detection of clusters of cases, and it could be supplemented by the sequencing of the flaA gene short variable region (flaA SVR sequence typing), with or without subsequent multilocus sequence typing (MLST). Different methods may be optimal for uncovering different aspects of source attribution. Finally, the use of several different molecular typing or analysis methods for comparing individuals within a population reveals much more about that population than a single method. Similarly, comparing several different typing methods reveals a great deal about differences in how the methods group individuals within the population. PMID:22162562

  7. Advances in NMR Spectroscopy for Lipid Oxidation Assessment

    USDA-ARS?s Scientific Manuscript database

    Although there are many analytical methods developed for the assessment of lipid oxidation, different analytical methods often give different, sometimes even contradictory, results. The reason for this inconsistency is that although there are many different kinds of oxidation products, most methods ...

  8. Are LOD and LOQ Reliable Parameters for Sensitivity Evaluation of Spectroscopic Methods?

    PubMed

    Ershadi, Saba; Shayanfar, Ali

    2018-03-22

    The limit of detection (LOD) and the limit of quantification (LOQ) are common parameters to assess the sensitivity of analytical methods. In this study, the LOD and LOQ of previously reported terbium sensitized analysis methods were calculated by different methods, and the results were compared with sensitivity parameters [lower limit of quantification (LLOQ)] of U.S. Food and Drug Administration guidelines. The details of the calibration curve and standard deviation of blank samples of three different terbium-sensitized luminescence methods for the quantification of mycophenolic acid, enrofloxacin, and silibinin were used for the calculation of LOD and LOQ. A comparison of LOD and LOQ values calculated by various methods and LLOQ shows a considerable difference. The significant difference of the calculated LOD and LOQ with various methods and LLOQ should be considered in the sensitivity evaluation of spectroscopic methods.
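
    One widely used convention for these parameters (the ICH calibration-curve approach, shown here only as a generic sketch rather than the exact procedures compared in the paper) is LOD = 3.3 sigma / S and LOQ = 10 sigma / S, where sigma is the standard deviation of blank responses (or of the regression residuals) and S is the slope of the calibration curve.

        import numpy as np

        def lod_loq(conc, signal, blank_signals):
            """LOD and LOQ from a linear calibration curve and replicate blanks
            (one common convention: LOD = 3.3*sigma/slope, LOQ = 10*sigma/slope)."""
            slope, _intercept = np.polyfit(conc, signal, 1)
            sigma = np.std(blank_signals, ddof=1)
            return 3.3 * sigma / slope, 10.0 * sigma / slope

        conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])           # hypothetical standards
        signal = np.array([10.2, 19.8, 50.5, 99.7, 201.0])     # hypothetical responses
        blanks = np.array([0.4, 0.6, 0.5, 0.7, 0.5, 0.6])      # replicate blank readings
        print(lod_loq(conc, signal, blanks))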

  9. Influence of the antagonist material on the wear of different composites using two different wear simulation methods.

    PubMed

    Heintze, S D; Zellweger, G; Cavalleri, A; Ferracane, J

    2006-02-01

    The aim of the study was to evaluate two ceramic materials as possible substitutes for enamel using two wear simulation methods, and to compare both methods with regard to the wear results for different materials. Flat specimens (OHSU n=6, Ivoclar n=8) of one compomer and three composite materials (Dyract AP, Tetric Ceram, Z250, experimental composite) were fabricated and subjected to wear using two different wear testing methods and two pressable ceramic materials as stylus (Empress, experimental ceramic). For the OHSU method, enamel styli of the same dimensions as the ceramic stylus were fabricated additionally. Both wear testing methods differ with regard to loading force, lateral movement of stylus, stylus dimension, number of cycles, thermocycling and abrasive medium. In the OHSU method, the wear facets (mean vertical loss) were measured using a contact profilometer, while in the Ivoclar method (maximal vertical loss) a laser scanner was used for this purpose. Additionally, the vertical loss of the ceramic stylus was quantified for the Ivoclar method. The results obtained from each method were compared by ANOVA and Tukey's test (p<0.05). To compare both wear methods, the log-transformed data were used to establish relative ranks between material/stylus combinations and assessed by applying the Pearson correlation coefficient. The experimental ceramic material generated significantly less wear in Tetric Ceram and Z250 specimens compared to the Empress stylus in the Ivoclar method, whereas with the OHSU method, no difference between the two ceramic antagonists was found with regard to abrasion or attrition. The wear generated by the enamel stylus was not statistically different from that generated by the other two ceramic materials in the OHSU method. With the Ivoclar method, wear of the ceramic stylus was only statistically different when in contact with Tetric Ceram. There was a close correlation between the attrition wear of the OHSU and the wear of the Ivoclar method (Pearson coefficient 0.83, p=0.01). Pressable ceramic materials can be used as a substitute for enamel in wear testing machines. However, material ranking may be affected by the type of ceramic material chosen. The attrition wear of the OHSU method was comparable with the wear generated with the Ivoclar method.

  10. Analysis of vestibular schwannoma size in multiple dimensions: a comparative cohort study of different measurement techniques.

    PubMed

    Varughese, J K; Wentzel-Larsen, T; Vassbotn, F; Moen, G; Lund-Johansen, M

    2010-04-01

    In this volumetric study of the vestibular schwannoma, we evaluated the accuracy and reliability of several approximation methods that are in use, and determined the minimum volume difference that needs to be measured for it to be attributable to an actual difference rather than a retest error. We also found empirical proportionality coefficients for the different methods. Design/Setting and Participants: A methodological study investigating three different VS measurement methods compared with a reference method based on serial slice volume estimates. The approximation methods were based on: (i) a single diameter, (ii) three orthogonal diameters, or (iii) the maximal slice area. Altogether 252 T1-weighted MRI images with gadolinium contrast, from 139 VS patients, were examined. The retest errors, in terms of relative percentages, were determined by undertaking repeated measurements on 63 scans for each method. Intraclass correlation coefficients were used to assess the agreement between each of the approximation methods and the reference method. The tendency for approximation methods to systematically overestimate/underestimate different-sized tumours was also assessed with the help of Bland-Altman plots. The most commonly used approximation method, the maximum diameter, was the least reliable measurement method and has inherent weaknesses that need to be considered. These include greater retest errors than area-based measurements (25% and 15%, respectively) and the fact that it was the only approximation method that could not easily be converted into volumetric units. Area-based measurements can furthermore be more reliable for smaller volume differences than diameter-based measurements. All our findings suggest that the maximum diameter should not be used as an approximation method. We propose the use of measurement modalities that take into account growth in multiple dimensions instead.
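
    For orientation, the usual ways such approximations are converted to volumetric units are sketched below; this is a generic illustration, and the coefficient in the area-based formula is a placeholder rather than one of the coefficients fitted in the study.

        import math

        def volume_from_diameter(d):
            """Sphere approximation from one maximal diameter."""
            return math.pi / 6.0 * d ** 3

        def volume_from_three_diameters(a, b, c):
            """Ellipsoid approximation from three orthogonal diameters."""
            return math.pi / 6.0 * a * b * c

        def volume_from_max_area(area, k=1.0):
            """Area-based approximation V ~ k * A**1.5; k is an empirical coefficient
            of the kind the study fits (the value here is purely a placeholder)."""
            return k * area ** 1.5

        print(volume_from_diameter(20.0))                      # mm^3, hypothetical 20 mm tumour
        print(volume_from_three_diameters(20.0, 15.0, 12.0))   # mm^3, hypothetical diameters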

  11. Social network extraction based on Web: 1. Related superficial methods

    NASA Astrophysics Data System (ADS)

    Khairuddin Matyuso Nasution, Mahyuddin

    2018-01-01

    The nature of a problem often shapes the methods used to resolve the issues related to it. The same holds for methods of extracting social networks from the Web, which involve the structured data types in different ways. This paper presents several methods of social network extraction from the same source, the Web: the basic superficial method, the underlying superficial method, the description superficial method, and the related superficial methods. We derive complexity inequalities between the methods and, correspondingly, between their computations. In this case, we find that the same tools yield different results, ranging from the more complex to the simpler: extraction of a social network based on co-occurrence is more complex than extraction based on occurrences alone.

  12. A different approach to estimate nonlinear regression model using numerical methods

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper concerns computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the steepest descent or steepest ascent algorithm, the method of scoring, and the method of quadratic hill-climbing), based on numerical analysis to estimate the parameters of a nonlinear regression model in a different way. Principles of matrix calculus are used to discuss the gradient algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; this article, however, treats the gradient algorithm methods analytically in a different way. The paper also describes a new iterative technique, a Gauss-Newton method, which differs from the iterative technique proposed by Gordon K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
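
    Since the abstract centres on the Gauss-Newton and gradient-type iterations, a textbook Gauss-Newton sketch may help fix ideas; it is not the modified technique proposed in the paper, and the exponential-decay model and data are hypothetical.

        import numpy as np

        def gauss_newton(f, jac, x, y, beta0, n_iter=50, tol=1e-10):
            """Minimal Gauss-Newton iteration for nonlinear least squares.

            f(x, beta)   -> model predictions
            jac(x, beta) -> Jacobian of f with respect to beta (n_samples x n_params)
            """
            beta = np.asarray(beta0, dtype=float)
            for _ in range(n_iter):
                r = y - f(x, beta)                              # residuals
                J = jac(x, beta)                                # Jacobian
                step, *_ = np.linalg.lstsq(J, r, rcond=None)    # solves J'J step = J'r
                beta = beta + step
                if np.linalg.norm(step) < tol:
                    break
            return beta

        # Hypothetical exponential-decay model y = a * exp(-b * x)
        f = lambda x, b: b[0] * np.exp(-b[1] * x)
        jac = lambda x, b: np.column_stack([np.exp(-b[1] * x),
                                            -b[0] * x * np.exp(-b[1] * x)])
        x = np.linspace(0, 4, 40)
        rng = np.random.default_rng(0)
        y = f(x, [2.5, 1.3]) + 0.01 * rng.standard_normal(x.size)
        print(gauss_newton(f, jac, x, y, beta0=[1.0, 1.0]))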

  13. Interaction of depth probes and style of depiction

    PubMed Central

    van Doorn, Andrea J.; Koenderink, Jan J.; Leyssen, Mieke H. R.; Wagemans, Johan

    2012-01-01

    We study the effect of stylistic differences on the nature of pictorial spaces as they appear to an observer when looking into a picture. Four pictures chosen from diverse styles of depiction were studied by 2 different methods. Each method addresses pictorial depth but draws on a different bouquet of depth cues. We find that the depth structures are very similar for 8 observers, apart from an idiosyncratic depth scaling (up to a factor of 3). The differences between observers generalize over (very different) pictures and (very different) methods. They are apparently characteristic of the person. The differences between depths as sampled by the 2 methods depend upon the style of the picture. This is the case for all observers except one. PMID:23145306

  14. Analytical investigation of different mathematical approaches utilizing manipulation of ratio spectra

    NASA Astrophysics Data System (ADS)

    Osman, Essam Eldin A.

    2018-01-01

    This work represents a comparative study of different approaches of manipulating ratio spectra, applied on a binary mixture of ciprofloxacin HCl and dexamethasone sodium phosphate co-formulated as ear drops. The proposed new spectrophotometric methods are: ratio difference spectrophotometric method (RDSM), amplitude center method (ACM), first derivative of the ratio spectra (1DD) and mean centering of ratio spectra (MCR). The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitations and sensitivity. The obtained results were statistically compared with those obtained from the reported HPLC method, showing no significant difference with respect to accuracy and precision.
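
    All of the compared approaches start from the ratio spectrum, i.e. the mixture spectrum divided by the unit-concentration spectrum of one component. The sketch below illustrates the first derivative of the ratio spectra (1DD) step on synthetic Gaussian bands; the wavelengths and concentrations are hypothetical, and this is not the validated procedure of the paper.

        import numpy as np

        wl = np.linspace(240.0, 360.0, 241)                      # wavelength grid, nm (hypothetical)
        band = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)   # synthetic Gaussian absorption band

        spec_a = band(280.0, 15.0)            # unit-concentration spectrum of component A
        spec_b = band(320.0, 25.0)            # unit-concentration spectrum of component B
        c_a, c_b = 0.8, 1.2                   # hypothetical concentrations in the mixture
        mixture = c_a * spec_a + c_b * spec_b

        ratio = mixture / spec_b              # ratio spectrum: c_a*spec_a/spec_b + c_b (constant)
        deriv = np.gradient(ratio, wl)        # 1DD: the derivative removes the constant c_b term
        i = np.argmin(np.abs(wl - 290.0))     # read the derivative amplitude at one wavelength
        print(deriv[i], c_a * np.gradient(spec_a / spec_b, wl)[i])  # equal up to grid error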

  15. Cluster detection methods applied to the Upper Cape Cod cancer data.

    PubMed

    Ozonoff, Al; Webster, Thomas; Vieira, Veronica; Weinberg, Janice; Ozonoff, David; Aschengrau, Ann

    2005-09-15

    A variety of statistical methods have been suggested to assess the degree and/or the location of spatial clustering of disease cases. However, there is relatively little in the literature devoted to comparison and critique of different methods. Most of the available comparative studies rely on simulated data rather than real data sets. We have chosen three methods currently used for examining spatial disease patterns: the M-statistic of Bonetti and Pagano; the Generalized Additive Model (GAM) method as applied by Webster; and Kulldorff's spatial scan statistic. We apply these statistics to analyze breast cancer data from the Upper Cape Cancer Incidence Study using three different latency assumptions. The three different latency assumptions produced three different spatial patterns of cases and controls. For 20 year latency, all three methods generally concur. However, for 15 year latency and no latency assumptions, the methods produce different results when testing for global clustering. The comparative analyses of real data sets by different statistical methods provides insight into directions for further research. We suggest a research program designed around examining real data sets to guide focused investigation of relevant features using simulated data, for the purpose of understanding how to interpret statistical methods applied to epidemiological data with a spatial component.

  16. Image and Imaging an Emergency Department: Expense and Benefit of Different Quality Assessment Methods

    PubMed Central

    Pfortmueller, Carmen Andrea; Keller, Michael; Mueller, Urs; Zimmermann, Heinz; Exadaktylos, Aristomenis Konstantinos

    2013-01-01

    Introduction. In this era of high-tech medicine, it is becoming increasingly important to assess patient satisfaction. There are several methods to do so, but these differ greatly in terms of cost, time, labour, and external validity. The aim of this study is to describe and compare the structure and implementation of different methods to assess the satisfaction of patients in an emergency department. Methods. The structure and implementation of the different methods to assess patient satisfaction were evaluated on the basis of a 90-minute standardised interview. Results. We identified a total of six different methods in six different hospitals. The average number of patients assessed was 5012, with a range from 230 (M5) to 20 000 patients (M2). In four methods (M1, M3, M5, and M6), the questionnaire was composed by a specialised external institute. In two methods, the questionnaire was created by the hospital itself (M2, M4). The median response rate was 58.4% (range 9–97.8%). With a reminder, the response rate increased by 60% (M3). Conclusion. The ideal method to assess patient satisfaction in the emergency department setting is to use a patient-based, in-emergency department-based assessment of patient satisfaction, planned and guided by expert personnel. PMID:23984073

  17. Comparison of Video Head Impulse Test (vHIT) Gains Between Two Commercially Available Devices and by Different Gain Analytical Methods.

    PubMed

    Lee, Sang Hun; Yoo, Myung Hoon; Park, Jun Woo; Kang, Byung Chul; Yang, Chan Joo; Kang, Woo Suk; Ahn, Joong Ho; Chung, Jong Woo; Park, Hong Ju

    2018-06-01

    To evaluate whether video head impulse test (vHIT) gains are dependent on the measuring device and method of analysis. Prospective study. vHIT was performed in 25 healthy subjects using two devices simultaneously. vHIT gains were compared between the two instruments and across five different methods of comparing position and velocity gains during head movement intervals. The two devices produced different vHIT gain results with the same method of analysis. There were also significant differences in the vHIT gains measured using different analytical methods. The gain analytic method that compares the areas under the velocity curve (AUC) of the head and eye movements during head movements showed lower vHIT gains than a method that compared the peak velocities of the head and eye movements. The former method produced the vHIT gain with the smallest standard deviation among the five procedures tested in this study. vHIT gains differ in normal subjects depending on the device and method of analysis used, suggesting that it is advisable for each device to have its own normal values. Gain calculations that compare the AUC of the head and eye movements during the head movements show the smallest variance.
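
    To make the two gain definitions concrete, a minimal sketch follows; these are assumed formulations for illustration only, since the commercial devices' exact algorithms are not reproduced here, and the head impulse traces are synthetic.

        import numpy as np

        def vhit_gains(t, head_vel, eye_vel):
            """Two illustrative vHIT gain definitions.

            auc_gain : area under the eye velocity curve divided by the area under
                       the head velocity curve over the head-movement window.
            peak_gain: peak eye velocity divided by peak head velocity.
            """
            auc_gain = np.trapz(np.abs(eye_vel), t) / np.trapz(np.abs(head_vel), t)
            peak_gain = np.max(np.abs(eye_vel)) / np.max(np.abs(head_vel))
            return auc_gain, peak_gain

        t = np.linspace(0.0, 0.15, 151)                          # 150 ms head impulse (hypothetical)
        head = 250.0 * np.exp(-((t - 0.075) / 0.030) ** 2)       # deg/s
        eye = 0.9 * 250.0 * np.exp(-((t - 0.080) / 0.035) ** 2)  # deg/s
        print(vhit_gains(t, head, eye))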

  18. Development of a Double Glass Mounting Method Using Formaldehyde Alcohol Azocarmine Lactophenol (FAAL) and its Evaluation for Permanent Mounting of Small Nematodes.

    PubMed

    Zahabiun, Farzaneh; Sadjjadi, Seyed Mahmoud; Esfandiari, Farideh

    2015-01-01

    Permanent slide preparation of nematodes, especially small ones, is time consuming and difficult, and the specimens develop scarious margins. To address this problem, a modified double glass mounting method was developed and compared with the classic method. A total of 209 nematode samples of human and animal origin were fixed and stained with Formaldehyde Alcohol Azocarmine Lactophenol (FAAL), followed by double glass mounting or the classic dehydration method using Canada balsam as the mounting medium. The slides were evaluated at different dates and times over more than four years, and photographs were taken at different magnifications during the evaluation period. The double glass mounting method remained stable during this time and was comparable with the classic method. There were no changes in the morphologic structures of the nematodes mounted with the double glass method, which showed well-defined and clear differentiation between the different organs. This method is cost effective and fast for mounting small nematodes compared to the classic method.

  19. Comparison of methods used to estimate conventional undiscovered petroleum resources: World examples

    USGS Publications Warehouse

    Ahlbrandt, T.S.; Klett, T.R.

    2005-01-01

    Various methods for assessing undiscovered oil, natural gas, and natural gas liquid resources were compared in support of the USGS World Petroleum Assessment 2000. Discovery process, linear fractal, parabolic fractal, engineering estimates, PETRIMES, Delphi, and the USGS 2000 methods were compared. Three comparisons of these methods were made in: (1) the Neuquen Basin province, Argentina (different assessors, same input data); (2) provinces in North Africa, Oman, and Yemen (same assessors, different methods); and (3) the Arabian Peninsula, Arabian (Persian) Gulf, and North Sea (different assessors, different methods). A fourth comparison (same assessors, same assessment methods but different geologic models), between results from structural and stratigraphic assessment units in the North Sea used only the USGS 2000 method, and hence compared the type of assessment unit rather than the method. In comparing methods, differences arise from inherent differences in assumptions regarding: (1) the underlying distribution of the parent field population (all fields, discovered and undiscovered), (2) the population of fields being estimated; that is, the entire parent distribution or the undiscovered resource distribution, (3) inclusion or exclusion of large outlier fields; (4) inclusion or exclusion of field (reserve) growth, (5) deterministic or probabilistic models, (6) data requirements, and (7) scale and time frame of the assessment. Discovery process, Delphi subjective consensus, and the USGS 2000 method yield comparable results because similar procedures are employed. In mature areas such as the Neuquen Basin province in Argentina, the linear and parabolic fractal and engineering methods were conservative compared to the other five methods and relative to new reserve additions there since 1995. The PETRIMES method gave the most optimistic estimates in the Neuquen Basin. In less mature areas, the linear fractal method yielded larger estimates relative to other methods. A geologically based model, such as one using the total petroleum system approach, is preferred in that it combines the elements of petroleum source, reservoir, trap and seal with the tectono-stratigraphic history of basin evolution with petroleum resource potential. Care must be taken to demonstrate that homogeneous populations in terms of geology, geologic risk, exploration, and discovery processes are used in the assessment process. The USGS 2000 method (7th Approximation Model, EMC computational program) is robust; that is, it can be used in both mature and immature areas, and provides comparable results when using different geologic models (e.g. stratigraphic or structural) with differing amounts of subdivisions, assessment units, within the total petroleum system. © 2005 International Association for Mathematical Geology.

  20. Numerical solution of nonlinear partial differential equations of mixed type. [finite difference approximation

    NASA Technical Reports Server (NTRS)

    Jameson, A.

    1976-01-01

    A review is presented of some recently developed numerical methods for the solution of nonlinear equations of mixed type. The methods considered use finite difference approximations to the differential equation. Central difference formulas are employed in the subsonic zone and upwind difference formulas are used in the supersonic zone. The relaxation method for the small disturbance equation is discussed and a description is given of difference schemes for the potential flow equation in quasi-linear form. Attention is also given to difference schemes for the potential flow equation in conservation form, the analysis of relaxation schemes by the time dependent analogy, the accelerated iterative method, and three-dimensional calculations.
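
    For readers less familiar with the terminology, the sketch below contrasts the central and upwind first-difference operators that such type-dependent schemes switch between; it is a generic one-dimensional illustration, not the relaxation scheme reviewed in the paper.

        import numpy as np

        def first_derivative(u, dx, upwind=False):
            """Illustrative 1-D difference operators.

            Central difference: (u[i+1] - u[i-1]) / (2*dx)  -- used in subsonic zones
            Upwind  difference: (u[i]   - u[i-1]) / dx      -- used in supersonic zones
            """
            du = np.empty_like(u, dtype=float)
            if upwind:
                du[1:] = (u[1:] - u[:-1]) / dx
                du[0] = du[1]                       # one-sided fill at the inflow boundary
            else:
                du[1:-1] = (u[2:] - u[:-2]) / (2.0 * dx)
                du[0], du[-1] = du[1], du[-2]       # simple boundary fill for illustration
            return du

        x = np.linspace(0.0, 1.0, 101)
        u = np.sin(2.0 * np.pi * x)
        print(first_derivative(u, x[1] - x[0])[50], first_derivative(u, x[1] - x[0], upwind=True)[50])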

  1. How to Quantify Penile Corpus Cavernosum Structures with Histomorphometry: Comparison of Two Methods

    PubMed Central

    Felix-Patrício, Bruno; De Souza, Diogo Benchimol; Gregório, Bianca Martins; Costa, Waldemar Silva; Sampaio, Francisco José

    2015-01-01

    The use of morphometrical tools in biomedical research permits the accurate comparison of specimens subjected to different conditions, and the surface density of structures is commonly used for this purpose. The traditional point-counting method is reliable but time-consuming, with computer-aided methods being proposed as an alternative. The aim of this study was to compare the surface density data of penile corpus cavernosum trabecular smooth muscle in different groups of rats, measured by two observers using the point-counting or color-based segmentation method. Ten normotensive and 10 hypertensive male rats were used in this study. Rat penises were processed to obtain smooth muscle immunostained histological slices and photomicrographs captured for analysis. The smooth muscle surface density was measured in both groups by two different observers by the point-counting method and by the color-based segmentation method. Hypertensive rats showed an increase in smooth muscle surface density by the two methods, and no difference was found between the results of the two observers. However, surface density values were higher by the point-counting method. The use of either method did not influence the final interpretation of the results, and both proved to have adequate reproducibility. However, as differences were found between the two methods, results obtained by either method should not be compared. PMID:26413547

  2. How to Quantify Penile Corpus Cavernosum Structures with Histomorphometry: Comparison of Two Methods.

    PubMed

    Felix-Patrício, Bruno; De Souza, Diogo Benchimol; Gregório, Bianca Martins; Costa, Waldemar Silva; Sampaio, Francisco José

    2015-01-01

    The use of morphometrical tools in biomedical research permits the accurate comparison of specimens subjected to different conditions, and the surface density of structures is commonly used for this purpose. The traditional point-counting method is reliable but time-consuming, with computer-aided methods being proposed as an alternative. The aim of this study was to compare the surface density data of penile corpus cavernosum trabecular smooth muscle in different groups of rats, measured by two observers using the point-counting or color-based segmentation method. Ten normotensive and 10 hypertensive male rats were used in this study. Rat penises were processed to obtain smooth muscle immunostained histological slices and photomicrographs captured for analysis. The smooth muscle surface density was measured in both groups by two different observers by the point-counting method and by the color-based segmentation method. Hypertensive rats showed an increase in smooth muscle surface density by the two methods, and no difference was found between the results of the two observers. However, surface density values were higher by the point-counting method. The use of either method did not influence the final interpretation of the results, and both proved to have adequate reproducibility. However, as differences were found between the two methods, results obtained by either method should not be compared.

  3. Effects of observers using different methods upon the total population estimates of two resident island birds

    Treesearch

    Sheila Conant; Mark S. Collins; C. John Ralph

    1981-01-01

    During a 5-week study of the Nihoa Millerbird and Nihoa Finch, we censused birds using these techniques: two line transect methods, a variable-distance circular plot method, and spot-mapping of territories (millerbirds only). Densities derived from these methods varied greatly. Due to differences in behavior, it appeared that the two species reacted differently to the...

  4. Methods for comparing 3D surface attributes

    NASA Astrophysics Data System (ADS)

    Pang, Alex; Freeman, Adam

    1996-03-01

    A common task in data analysis is to compare two or more sets of data, statistics, presentations, etc. A predominant method in use is side-by-side visual comparison of images. While straightforward, it burdens the user with the task of discerning the differences between the two images. The user is further taxed when the images are of 3D scenes. This paper presents several methods for analyzing the extent, magnitude, and manner in which surfaces in 3D differ in their attributes. The surface geometry is assumed to be identical and only the surface attributes (color, texture, etc.) are variable. As a case in point, we examine the differences obtained when a 3D scene is rendered progressively using radiosity with different form factor calculation methods. The comparison methods include extensions of simple methods such as mapping difference information to color or transparency, and more recent methods including the use of surface texture, perturbation, and adaptive placement of error glyphs.

  5. Latency as a region contrast: Measuring ERP latency differences with Dynamic Time Warping.

    PubMed

    Zoumpoulaki, A; Alsufyani, A; Filetti, M; Brammer, M; Bowman, H

    2015-12-01

    Methods for measuring onset latency contrasts are evaluated against a new method utilizing the dynamic time warping (DTW) algorithm. This new method allows latency to be measured across a region instead of single point. We use computer simulations to compare the methods' power and Type I error rates under different scenarios. We perform per-participant analysis for different signal-to-noise ratios and two sizes of window (broad vs. narrow). In addition, the methods are tested in combination with single-participant and jackknife average waveforms for different effect sizes, at the group level. DTW performs better than the other methods, being less sensitive to noise as well as to placement and width of the window selected. © 2015 Society for Psychophysiological Research.
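
    A minimal dynamic time warping recursion is sketched below for orientation; the ERP latency method evaluated in the paper derives a latency contrast from the warping path over a region of the waveform, which goes beyond this basic distance computation.

        import numpy as np

        def dtw_distance(a, b):
            """Minimal dynamic time warping distance between two 1-D signals."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    # accumulate the cheapest alignment ending at (i, j)
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        t = np.linspace(0.0, 1.0, 200)
        early = np.exp(-((t - 0.40) / 0.05) ** 2)   # hypothetical ERP component
        late = np.exp(-((t - 0.45) / 0.05) ** 2)    # same shape, shifted in latency
        print(dtw_distance(early, late))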

  6. Comparison of the calculation QRS angle for bundle branch block detection

    NASA Astrophysics Data System (ADS)

    Goeirmanto, L.; Mengko, R.; Rajab, T. L.

    2016-04-01

    The QRS angle represents the condition of blood circulation in the heart. Normally, the QRS angle lies between -30 and 90 degrees. Left axis deviation (LAD) and right axis deviation (RAD) are abnormal conditions that can lead to bundle branch block. The QRS angle is calculated using the common method employed by physicians and compared with mathematical methods using amplitude differences and area differences. We analyzed standard 12-lead electrocardiogram data from the MIT-BIH PhysioBank database. All methods using lead I and lead aVF produce similar QRS angles and the correct QRS axis quadrant. The QRS angle from the mathematical method using area differences is close to that of the common method used by physicians. The mathematical method using area differences can be used as a trigger for detecting heart conditions.
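
    The mathematical methods referred to above reduce to taking the net QRS deflection (by amplitude difference or by signed area) in lead I and lead aVF and converting the pair into an angle; a minimal sketch with hypothetical values follows.

        import math

        def qrs_axis(lead_i_net, avf_net):
            """Estimate the QRS axis (degrees) from net QRS deflections in lead I and aVF.

            The 'net deflection' can be taken as the amplitude difference (R minus S) or
            as the signed area of the QRS complex in each lead, as the abstract compares.
            """
            return math.degrees(math.atan2(avf_net, lead_i_net))

        # Example: positive net deflection in both leads -> normal axis between 0 and 90 degrees
        print(qrs_axis(lead_i_net=0.8, avf_net=0.5))   # roughly 32 degrees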

  7. Using mark-recapture distance sampling methods on line transect surveys

    USGS Publications Warehouse

    Burt, Louise M.; Borchers, David L.; Jenkins, Kurt J.; Marques, Tigao A

    2014-01-01

    Synthesis and applications. Mark–recapture DS is a widely used method for estimating animal density and abundance when detection of animals at distance zero is not certain. Two observer configurations and three statistical models are described, and it is important to choose the most appropriate model for the observer configuration and target species in question. By way of making the methods more accessible to practicing ecologists, we describe the key ideas underlying MRDS methods, the sometimes subtle differences between them, and we illustrate these by applying different kinds of MRDS method to surveys of two different target species using different survey configurations.

  8. Finite elements and finite differences for transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Hafez, M. M.; Murman, E. M.; Wellford, L. C.

    1978-01-01

    The paper reviews the chief finite difference and finite element techniques used for numerical solution of nonlinear mixed elliptic-hyperbolic equations governing transonic flow. The forms of the governing equations for unsteady two-dimensional transonic flow considered are the Euler equation, the full potential equation in both conservative and nonconservative form, the transonic small-disturbance equation in both conservative and nonconservative form, and the hodograph equations for the small-disturbance case and the full-potential case. Finite difference methods considered include time-dependent methods, relaxation methods, semidirect methods, and hybrid methods. Finite element methods include finite element Lax-Wendroff schemes, implicit Galerkin method, mixed variational principles, dual iterative procedures, optimal control methods and least squares.

  9. Rupture Dynamics Simulation for Non-Planar fault by a Curved Grid Finite Difference Method

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Zhu, G.; Chen, X.

    2011-12-01

    We implement a non-staggered finite difference method with split nodes to solve the dynamic rupture problem for a non-planar fault. The split-node method has been widely used for dynamic simulation because it represents the fault plane more precisely than other approaches, such as the thick-fault or stress-glut methods. The finite difference method is also a popular numerical method for solving kinematic and dynamic problems in seismology. However, previous work has focused mostly on the staggered-grid method because of its simplicity and computational efficiency. Compared with the non-staggered finite difference method, the staggered-grid method has disadvantages in some respects, for example in describing boundary conditions, especially irregular boundaries or non-planar faults. Zhang and Chen (2006) proposed a MacCormack high-order non-staggered finite difference method based on curved grids to solve the irregular boundary problem precisely. Based on this non-staggered grid method, we successfully simulate the spontaneous rupture problem. The fault plane is a kind of boundary condition, which can of course be irregular, so we are confident that we can simulate the rupture process for any kind of bending fault plane. We first verify that the method is valid in Cartesian coordinates; for bending faults, curvilinear grids are used.

  10. On the wavelet optimized finite difference method

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1994-01-01

    When one considers the effect in the physical space, Daubechies-based wavelet methods are equivalent to finite difference methods with grid refinement in regions of the domain where small scale structure exists. Adding a wavelet basis function at a given scale and location where one has a correspondingly large wavelet coefficient is, essentially, equivalent to adding a grid point, or two, at the same location and at a grid density which corresponds to the wavelet scale. This paper introduces a wavelet optimized finite difference method which is equivalent to a wavelet method in its multiresolution approach but which does not suffer from difficulties with nonlinear terms and boundary conditions, since all calculations are done in the physical space. With this method one can obtain an arbitrarily good approximation to a conservative difference method for solving nonlinear conservation laws.

  11. Clustering Scientific Publications Based on Citation Relations: A Systematic Comparison of Different Methods.

    PubMed

    Šubelj, Lovro; van Eck, Nees Jan; Waltman, Ludo

    2016-01-01

    Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between different properties that may be considered desirable for a good clustering of publications. Overall, map equation methods appear to perform best in our analysis, suggesting that these methods deserve more attention from the bibliometric community.

  12. Clustering Scientific Publications Based on Citation Relations: A Systematic Comparison of Different Methods

    PubMed Central

    Šubelj, Lovro; van Eck, Nees Jan; Waltman, Ludo

    2016-01-01

    Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between different properties that may be considered desirable for a good clustering of publications. Overall, map equation methods appear to perform best in our analysis, suggesting that these methods deserve more attention from the bibliometric community. PMID:27124610

  13. A robust algorithm for optimisation and customisation of fractal dimensions of time series modified by nonlinearly scaling their time derivatives: mathematical theory and practical applications.

    PubMed

    Fuss, Franz Konstantin

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.

  14. A Robust Algorithm for Optimisation and Customisation of Fractal Dimensions of Time Series Modified by Nonlinearly Scaling Their Time Derivatives: Mathematical Theory and Practical Applications

    PubMed Central

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals. PMID:24151522

  15. CompareSVM: supervised, Support Vector Machine (SVM) inference of gene regularity networks.

    PubMed

    Gillani, Zeeshan; Akash, Muhammad Sajid Hamid; Rahaman, M D Matiur; Chen, Ming

    2014-11-30

    Prediction of gene regulatory networks (GRN) from expression data is a challenging task. Many methods have been developed to address this challenge, ranging from supervised to unsupervised methods. The most promising methods are based on the support vector machine (SVM). There is a need for a comprehensive analysis of the prediction accuracy of the supervised SVM method using different kernels under different biological experimental conditions and network sizes. We developed a tool (CompareSVM), based on SVM, to compare different kernel methods for the inference of GRNs. Using CompareSVM, we investigated and evaluated different SVM kernel methods in detail on simulated microarray datasets of different sizes. The results obtained from CompareSVM showed that the accuracy of an inference method depends on the nature of the experimental condition and the size of the network. For networks with fewer than 200 nodes, and on average over all network sizes, the SVM Gaussian kernel outperformed all the other inference methods on knockout, knockdown, and multifactorial datasets. For networks with a large number of nodes (~500), the choice of inference method depends on the nature of the experimental condition. CompareSVM is available at http://bis.zju.edu.cn/CompareSVM/ .
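
    A kernel comparison of this general kind can be sketched with a generic SVM toolkit, as below; this is a simplified illustration on synthetic data, not the CompareSVM tool or its GRN-specific feature construction.

        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        # Hypothetical stand-in for a "regulator vs non-regulator" edge classification task
        X, y = make_classification(n_samples=400, n_features=20, random_state=0)

        for kernel in ("linear", "poly", "rbf", "sigmoid"):      # rbf is the Gaussian kernel
            scores = cross_val_score(SVC(kernel=kernel), X, y, cv=5)
            print(f"{kernel:8s} accuracy = {scores.mean():.3f}")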

  16. [Comparison of Different Methods of Area Measurement in Irregular Scar].

    PubMed

    Ran, D; Li, W J; Sun, Q G; Li, J Q; Xia, Q

    2016-10-01

    To establish a measurement standard for irregular scar area by comparing the advantages and disadvantages of different measurement methods applied to the same irregular scars. Irregular scar areas were digitally scanned and measured by the coordinate reading method, the AutoCAD pixel method, the Photoshop lasso pixel method, the Photoshop magic wand filled-pixel method and Foxit PDF reading software, and aspects of these methods such as measurement time, repeatability, and whether the results could be recorded and traced were compared and analyzed. There was no significant difference in the scar areas obtained by the measurement methods above. However, there were statistical differences in measurement time and in repeatability by one or multiple performers, and only the Foxit PDF reading software allowed results to be traced back. The methods above can all be used for measuring scar area, but each has its advantages and disadvantages. It is necessary to develop new measurement software for forensic identification. Copyright© by the Editorial Department of Journal of Forensic Medicine

  17. Dangerous gas detection based on infrared video

    NASA Astrophysics Data System (ADS)

    Ding, Kang; Hong, Hanyu; Huang, Likun

    2018-03-01

    Gas leak infrared imaging detection technology has the significant advantages of high efficiency and remote imaging detection. To enhance the detail perception of observers and, equivalently, to improve the detection limit, we propose a new gas leak infrared image detection method that combines a background difference method with a multi-frame interval difference method. Compared to traditional frame difference methods, the proposed multi-frame interval difference method can extract a more complete target image. By fusing the background difference image and the multi-frame interval difference image, we can accumulate information on the infrared target image of the gas leak from several aspects. The experiments demonstrate that the completeness of the gas leakage trace information is enhanced significantly and that real-time detection can be achieved.
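
    A schematic version of combining a background difference with a multi-frame interval difference is sketched below; the thresholds and the fusion rule are illustrative assumptions, not the authors' algorithm.

        import numpy as np

        def leak_mask(frames, background, interval=3, thresh=10.0):
            """Fuse background subtraction with a multi-frame interval difference.

            frames    : array of shape (n_frames, H, W), infrared intensity images
            background: reference frame of shape (H, W) without the gas plume
            """
            current = frames[-1].astype(float)
            bg_diff = np.abs(current - background.astype(float)) > thresh
            interval_diff = np.abs(current - frames[-1 - interval].astype(float)) > thresh
            # Keep pixels flagged by either cue to accumulate the plume information
            return bg_diff | interval_diff

        # Hypothetical 5-frame clip of 64x64 infrared images
        rng = np.random.default_rng(0)
        clip = rng.normal(100.0, 2.0, size=(5, 64, 64))
        clip[-1, 20:30, 20:30] += 25.0                    # simulated leak plume in the last frame
        print(leak_mask(clip, background=clip[0]).sum())  # number of detected plume pixels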

  18. An empirical investigation of methods for nonsymmetric linear systems

    NASA Technical Reports Server (NTRS)

    Sherman, A. H.

    1981-01-01

    The present investigation is concerned with a comparison of methods for solving linear algebraic systems which arise from finite difference discretizations of the elliptic convection-diffusion equation in a planar region Omega with Dirichlet boundary conditions. Such linear systems are typically of the form Ax = b where A is an N x N sparse nonsymmetric matrix. In a discussion of discretizations, it is assumed that a regular rectilinear mesh of width h has been imposed on Omega. The discretizations considered include central differences, upstream differences, and modified upstream differences. Six methods for solving Ax = b are considered. Three variants of Gaussian elimination have been chosen as representatives of state-of-the-art software for direct methods under different assumptions about pivoting. Three iterative methods are also included.

  19. The Influence of Different Processing Methods on Component Content of Sophora japonica

    NASA Astrophysics Data System (ADS)

    Ji, Y. B.; Zhu, H. J.; Xin, G. S.; Wei, C.

    2017-12-01

    The purpose of this experiment is to understand the effect of different processing methods on the content of active ingredients in Sophora japonica, and to determine the content of rutin and quercetin in Sophora japonica under different processing methods by UV spectrophotometry, so as to compare the effect of the different processing methods on the active ingredient content. The experiments show the following order of rutin content: Fried Sophora japonica > Vinegar sunburn Sophora > Health products Sophora japonica > Charred sophora flower, with no obvious difference between Vinegar sunburn Sophora and Fried Sophora japonica; and the following order of quercetin content: Charred sophora flower > Fried Sophora japonica > Vinegar sunburn Sophora > Health products Sophora japonica. This demonstrates that there are some differences in the content of active ingredients in Sophora japonica under different processing methods. The rutin content increased with increasing processing temperature but decreased beyond a certain temperature, while the quercetin content increased gradually with time.

  20. Review of Hull Structural Monitoring Systems for Navy Ships

    DTIC Science & Technology

    2013-05-01

    generally based on the same basic form of S-N curve, different correction methods are used by the various classification societies. ii. Methods for...Likewise there are a number of different methods employed for temperature compensation and these vary depending on the type of gauge, although typically...Analysis, Inc.[30] Figure 8. Examples of different methods of temperature compensation of fibre-optic strain sensors. It is noted in NATO

  1. Comparison of four different methods for detection of biofilm formation by uropathogens.

    PubMed

    Panda, Pragyan Swagatika; Chaudhary, Uma; Dube, Surya K

    2016-01-01

    Urinary tract infection (UTI) is one of the most common infectious diseases encountered in clinical practice. Emerging resistance of uropathogens to antimicrobial agents due to biofilm formation is a matter of concern when treating symptomatic UTI. However, studies comparing different methods for the detection of biofilm formation by uropathogens are scarce. The aim was to compare four different methods for the detection of biofilm formation by uropathogens in a prospective observational study conducted in a tertiary care hospital. A total of 300 isolates from urinary samples were analyzed for biofilm formation by four methods, that is, the tissue culture plate (TCP) method, the tube method (TM), the Congo Red Agar (CRA) method and the modified CRA (MCRA) method. The chi-square test was applied when two or more sets of variables were compared, and P < 0.05 was considered statistically significant. Considering TCP to be the gold standard method for our study, we calculated the other statistical parameters against it. The rate of biofilm detection was 45.6% by TCP, 39.3% by TM, and 11% each by CRA and MCRA. The difference between TCP and CRA/MCRA was significant, but not that between TCP and TM. There was no difference in the rate of biofilm detection between CRA and MCRA for other isolates, but MCRA was superior to CRA for the detection of staphylococcal biofilm formation. The TCP method is the ideal method for the detection of bacterial biofilm formation by uropathogens, and MCRA is superior to CRA only for the detection of staphylococcal biofilm formation.

  2. Comparing four non-invasive methods to determine the ventilatory anaerobic threshold during cardiopulmonary exercise testing in children with congenital heart or lung disease.

    PubMed

    Visschers, Naomi C A; Hulzebos, Erik H; van Brussel, Marco; Takken, Tim

    2015-11-01

    The ventilatory anaerobic threshold (VAT) is an important measure for assessing aerobic fitness in patients with cardiopulmonary disease. Several methods exist to determine the VAT; however, there is no consensus on which of these methods is the most accurate. The aim was to compare four different non-invasive methods for the determination of the VAT via respiratory gas exchange analysis during a cardiopulmonary exercise test (CPET); a secondary objective was to determine the interobserver reliability of the VAT. CPET data of 30 children diagnosed with either cystic fibrosis (CF; N = 15) or a surgically corrected dextro-transposition of the great arteries (asoTGA; N = 15) were included. No significant differences were found between conditions or among testers. The RER = 1 method differed the most from the other methods, showing significantly higher results for all six variables. The PET-O2 method differed significantly from the V-slope method on five of six exercise variables and from the VentEq method on four of six. The V-slope and the VentEq methods differed significantly on one of six exercise variables. Ten of thirteen ICCs that were >0.80 had a 95% CI > 0.70. The RER = 1 method and the V-slope method had the highest number of significant ICCs and 95% CIs. The V-slope method, the ventilatory equivalent method and the PET-O2 method are comparable and reliable methods to determine the VAT during CPET in children with CF or asoTGA. © 2014 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.

  3. Multiple-algorithm parallel fusion of infrared polarization and intensity images based on algorithmic complementarity and synergy

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng

    2018-01-01

    Diverse image fusion methods perform differently; each method has advantages and disadvantages compared with the others. One notion is that the advantages of different image fusion methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index vector-based feature similarity is proposed to define the degree of complementarity and synergy. This index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the different degrees of the various features and the infrared intensity images are used as the initial weights for nonnegative matrix factorization (NMF), which avoids the randomness of the NMF initialization parameters. Finally, the fused images of the different algorithms are integrated using NMF because of its excellent data-fusing performance on independent features. Experimental results demonstrate that the visual effect and objective evaluation indices of the fused images obtained using the proposed method are better than those obtained using traditional methods, and the proposed method retains the advantages of the individual fusion algorithms.
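
    A minimal Python sketch of the NMF fusion idea mentioned above, reduced to a rank-1 factorization of two flattened source images; it is not the authors' parallel multi-algorithm pipeline, the feature-based initialization is omitted, and the synthetic source images are hypothetical.

        import numpy as np
        from sklearn.decomposition import NMF

        def nmf_fuse(img_a, img_b):
            """Fuse two same-sized, non-negative grayscale images via a rank-1 NMF."""
            v = np.column_stack([img_a.ravel(), img_b.ravel()]).astype(float)
            model = NMF(n_components=1, init="nndsvda", max_iter=500)
            w = model.fit_transform(v)            # (n_pixels, 1): shared basis image
            fused = w[:, 0].reshape(img_a.shape)
            return fused / fused.max()            # rescale to [0, 1] for display

        rng = np.random.default_rng(0)            # hypothetical source "images"
        print(nmf_fuse(rng.random((64, 64)), rng.random((64, 64))).shape)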

  4. Radiometer calibration methods and resulting irradiance differences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Accurate solar radiation measured by radiometers depends on instrument performance specifications, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methodologies and resulting differences provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these methods calibrate radiometers indoors and some outdoors. To establish or understand the differences in calibration methodologies, we processed and analyzed field-measured data from radiometers deployed for 10 months at NREL's Solar Radiation Research Laboratory. These different methods of calibration resulted in a difference of +/-1% to +/-2% in solar irradiance measurements. Analyzing these differences will ultimately assist in determining the uncertainties of the field radiometer data and will help develop a consensus on a standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainties will help the accurate prediction of the output of planned solar conversion projects and improve the bankability of financing solar projects.

  5. Detection of oral HPV infection - Comparison of two different specimen collection methods and two HPV detection methods.

    PubMed

    de Souza, Marjorie M A; Hartel, Gunter; Whiteman, David C; Antonsson, Annika

    2018-04-01

    Very little is known about the natural history of oral HPV infection. Several different methods exist to collect oral specimens and detect HPV, but their respective performance characteristics are unknown. We compared two different methods for oral specimen collection (oral saline rinse and commercial saliva kit) from 96 individuals and then analyzed the samples for HPV by two different PCR detection methods (single GP5+/6+ PCR and nested MY09/11 and GP5+/6+ PCR). For the oral rinse samples, the oral HPV prevalence was 10.4% (GP+ PCR; 10% repeatability) vs 11.5% (nested PCR method; 100% repeatability). For the commercial saliva kit samples, the prevalences were 3.1% vs 16.7% with the GP+ PCR vs the nested PCR method (repeatability 100% for both detection methods). Overall the agreement was fair or poor between samples and methods (kappa 0.06-0.36). Standardizing methods of oral sample collection and HPV detection would ensure comparability between future oral HPV studies. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Simulation of 2D rarefied gas flows based on the numerical solution of the Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Poleshkin, Sergey O.; Malkov, Ewgenij A.; Kudryavtsev, Alexey N.; Shershnev, Anton A.; Bondar, Yevgeniy A.; Kohanchik, A. A.

    2017-10-01

    There are various methods for calculating rarefied gas flows, in particular, statistical methods and deterministic methods based on the finite-difference solutions of the Boltzmann nonlinear kinetic equation and on the solutions of model kinetic equations. There is no universal method; each has its disadvantages in terms of efficiency or accuracy. The choice of the method depends on the problem to be solved and on parameters of calculated flows. Qualitative theoretical arguments help to determine the range of parameters of effectively solved problems for each method; however, it is advisable to perform comparative tests of calculations of the classical problems performed by different methods and with different parameters to have quantitative confirmation of this reasoning. The paper provides the results of the calculations performed by the authors with the help of the Direct Simulation Monte Carlo method and finite-difference methods of solving the Boltzmann equation and model kinetic equations. Based on this comparison, conclusions are made on selecting a particular method for flow simulations in various ranges of flow parameters.

  7. The Comparison of Matching Methods Using Different Measures of Balance: Benefits and Risks Exemplified within a Study to Evaluate the Effects of German Disease Management Programs on Long-Term Outcomes of Patients with Type 2 Diabetes.

    PubMed

    Fullerton, Birgit; Pöhlmann, Boris; Krohn, Robert; Adams, John L; Gerlach, Ferdinand M; Erler, Antje

    2016-10-01

    To present a case study on how to compare various matching methods applying different measures of balance and to point out some pitfalls involved in relying on such measures. Administrative claims data from a German statutory health insurance fund covering the years 2004-2008. We applied three different covariance balance diagnostics to a choice of 12 different matching methods used to evaluate the effectiveness of the German disease management program for type 2 diabetes (DMPDM2). We further compared the effect estimates resulting from applying these different matching techniques in the evaluation of the DMPDM2. The choice of balance measure leads to different results on the performance of the applied matching methods. Exact matching methods performed well across all measures of balance, but resulted in the exclusion of many observations, leading to a change of the baseline characteristics of the study sample and also the effect estimate of the DMPDM2. All PS-based methods showed similar effect estimates. Applying a higher matching ratio and using a larger variable set generally resulted in better balance. Using a generalized boosted instead of a logistic regression model showed slightly better performance for balance diagnostics taking into account imbalances at higher moments. Best practice should include the application of several matching methods and thorough balance diagnostics. Applying matching techniques can provide a useful preprocessing step to reveal areas of the data that lack common support. The use of different balance diagnostics can be helpful for the interpretation of different effect estimates found with different matching methods. © Health Research and Educational Trust.
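
    As an illustration of one widely used balance diagnostic (not necessarily one of the three diagnostics applied in the study), the standardized mean difference between matched groups can be computed as sketched below; the variable name and values are hypothetical.

        import numpy as np

        def standardized_mean_difference(x_treated, x_control):
            """SMD = difference in group means divided by the pooled standard deviation."""
            x_t = np.asarray(x_treated, dtype=float)
            x_c = np.asarray(x_control, dtype=float)
            pooled_sd = np.sqrt((x_t.var(ddof=1) + x_c.var(ddof=1)) / 2.0)
            return (x_t.mean() - x_c.mean()) / pooled_sd

        age_dmp = [64, 58, 71, 66, 60]    # hypothetical DMP enrollees
        age_ctrl = [62, 70, 55, 68, 59]   # hypothetical matched controls
        print(round(standardized_mean_difference(age_dmp, age_ctrl), 3))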

  8. Evaluation method on steering for the shape-shifting robot in different configurations

    NASA Astrophysics Data System (ADS)

    Chang, Jian; Li, Bin; Wang, Chong; Zheng, Huaibing; Li, Zhiqiang

    2016-01-01

    The existing evaluation method for steering is qualitative, which makes the results inaccurate and fuzzy and reduces the efficiency of process execution. A quantitative method for the shape-shifting robot in different configurations is therefore proposed. Compared with the traditional evaluation method, the most important aspects influencing the steering ability of the robot in different configurations are investigated in detail, including energy, angular velocity, time and space. To improve the robustness of the system, both ideal and slippage conditions are considered in the mathematical model. In contrast to traditional weight-determination methods, the weights for the robot steering criteria are obtained by combining subjective and objective weighting: the subjective weighting, based on a five-grade scale, reflects the preferences of the experts, while the objective weighting uses information entropy to determine the factors. From sensors fixed on the robot, the contact force between the track grousers and the ground and the intrinsic motion characteristics of the robot are obtained, and experiments with the robot in different common configurations are carried out to validate the proposed algorithm. The method proposed in this article resolves the fuzziness and inaccuracy of the existing evaluation method, so operators can choose the most suitable configuration of the robot to fulfil different tasks more quickly and simply.
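
    A minimal Python sketch of the information-entropy weighting step described above; the criteria matrix (rows = candidate configurations, columns = energy, angular velocity, time, space) is purely illustrative, and all entries are assumed positive.

        import numpy as np

        def entropy_weights(criteria):
            """Objective criterion weights from the information-entropy method (entries > 0)."""
            x = np.asarray(criteria, dtype=float)
            p = x / x.sum(axis=0)                        # column-wise proportions
            k = 1.0 / np.log(x.shape[0])
            entropy = -k * np.sum(p * np.log(p), axis=0)
            diversity = 1.0 - entropy
            return diversity / diversity.sum()

        # rows = configurations, columns = energy, angular velocity, time, space (hypothetical)
        criteria = np.array([[0.8, 1.2, 10.0, 2.0],
                             [0.6, 1.0, 12.0, 3.0],
                             [0.9, 0.8, 11.0, 2.5]])
        print(entropy_weights(criteria).round(3))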

  9. Digital image color analysis compared to direct dental CIE colorimeter assessment under different ambient conditions.

    PubMed

    Knösel, Michael; Attin, Rengin; Jung, Klaus; Brunner, Edgar; Kubein-Meesenburg, Dietmar; Attin, Thomas

    2009-04-01

    To evaluate the concordance and repeatability of two in vivo methods for dental color assessment and to clarify the influence of different ambient light conditions and subject's head position on the assessed color variables. Color assessments were performed by two examiners on 16 arbitrarily selected subjects under two different, standardized conditions of illumination and at two different standardized head angulations. CIE (L*a*b*) data for upper and lower central incisors were recorded in two different ways: (1) by an intra-oral contact dental colorimeter and (2) by processing digital images for performing color calculation using Adobe Photoshop software. The influence of the different ambient conditions on both methods, as well as the concordance of measurements was analyzed statistically using several mixed linear models. Ambient light as a single factor had no significant influence on maxillary L*, a* and b* values, but it did have an effect on mandible assessments. Head angulation variation resulted in significant L* value differences using the photo method. The operator had a significant influence on values a* and b* for the photo method and on a* values for the colorimeter method. In fully lit ambient condition, the operator had a significant influence on the segregated L*, a*, and b* values. With dimmed lights, head angulation became significant, but not the operator. Evaluation of segregated L* values was error prone in both methods. Comparing both methods, deltaE values did not exceed 2.85 units, indicating that color differences between methods and recorded under varying ambient conditions were well below the sensitivity of the naked eye.

  10. Comparative study on the selectivity of various spectrophotometric techniques for the determination of binary mixture of fenbendazole and rafoxanide.

    PubMed

    Saad, Ahmed S; Attia, Ali K; Alaraki, Manal S; Elzanfaly, Eman S

    2015-11-05

    Five different spectrophotometric methods were applied for simultaneous determination of fenbendazole and rafoxanide in their binary mixture; namely first derivative, derivative ratio, ratio difference, dual wavelength and H-point standard addition spectrophotometric methods. Different factors affecting each of the applied spectrophotometric methods were studied and the selectivity of the applied methods was compared. The applied methods were validated as per the ICH guidelines and good accuracy; specificity and precision were proven within the concentration range of 5-50 μg/mL for both drugs. Statistical analysis using one-way ANOVA proved no significant differences among the proposed methods for the determination of the two drugs. The proposed methods successfully determined both drugs in laboratory prepared and commercially available binary mixtures, and were found applicable for the routine analysis in quality control laboratories. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. The Effects of Different Standard Setting Methods and the Composition of Borderline Groups: A Study within a Law Curriculum

    ERIC Educational Resources Information Center

    Dochy, Filip; Kyndt, Eva; Baeten, Marlies; Pottier, Sofie; Veestraeten, Marlies; Leuven, K. U.

    2009-01-01

    The aim of this study was to examine the effect of different standard setting methods on the size and composition of the borderline group, on the discrimination between different types of students and on the types of students passing with one method but failing with another. A total of 107 university students were classified into 4 different types…

  12. Prototype Procedures to Describe Army Jobs

    DTIC Science & Technology

    2010-07-01

    ratings for the same MOS. Consistent with a multi-trait multi- method framework, high profile similarities (or low mean differences ) among different ...rater types for the same MOS would indicate convergent validity. That is, different methods (i.e., rater types) yield converging results for the same... different methods of data collection depends upon the type of data collected. For example, it could be that data on work-oriented descriptors are most

  13. Hospitalization costs of severe bacterial pneumonia in children: comparative analysis considering different costing methods

    PubMed Central

    Nunes, Sheila Elke Araujo; Minamisava, Ruth; Vieira, Maria Aparecida da Silva; Itria, Alexander; Pessoa, Vicente Porfirio; de Andrade, Ana Lúcia Sampaio Sgambatti; Toscano, Cristiana Maria

    2017-01-01

    ABSTRACT Objective To determine and compare hospitalization costs of bacterial community-acquired pneumonia cases via different costing methods under the Brazilian Public Unified Health System perspective. Methods Cost-of-illness study based on primary data collected from a sample of 59 children aged between 28 days and 35 months and hospitalized due to bacterial pneumonia. Direct medical and non-medical costs were considered and three costing methods employed: micro-costing based on medical record review, micro-costing based on therapeutic guidelines and gross-costing based on the Brazilian Public Unified Health System reimbursement rates. Costs estimates obtained via different methods were compared using the Friedman test. Results Cost estimates of inpatient cases of severe pneumonia amounted to R$ 780,70/$Int. 858.7 (medical record review), R$ 641,90/$Int. 706.90 (therapeutic guidelines) and R$ 594,80/$Int. 654.28 (Brazilian Public Unified Health System reimbursement rates). Costs estimated via micro-costing (medical record review or therapeutic guidelines) did not differ significantly (p=0.405), while estimates based on reimbursement rates were significantly lower compared to estimates based on therapeutic guidelines (p<0.001) or record review (p=0.006). Conclusion Brazilian Public Unified Health System costs estimated via different costing methods differ significantly, with gross-costing yielding lower cost estimates. Given costs estimated by different micro-costing methods are similar and costing methods based on therapeutic guidelines are easier to apply and less expensive, this method may be a valuable alternative for estimation of hospitalization costs of bacterial community-acquired pneumonia in children. PMID:28767921
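
    The Friedman test used to compare the paired cost estimates can be reproduced with SciPy as sketched below; the per-patient cost values are hypothetical and are not the study data.

        from scipy.stats import friedmanchisquare

        # hypothetical per-patient cost estimates (R$) under the three costing methods
        record_review = [820, 760, 900, 650, 710, 880]
        guidelines    = [640, 610, 700, 560, 630, 720]
        reimbursement = [600, 580, 650, 540, 590, 680]

        statistic, p_value = friedmanchisquare(record_review, guidelines, reimbursement)
        print(round(statistic, 2), round(p_value, 4))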

  14. Estimating Soil Organic Carbon Stocks and Spatial Patterns with Statistical and GIS-Based Methods

    PubMed Central

    Zhi, Junjun; Jing, Changwei; Lin, Shengpan; Zhang, Cao; Liu, Qiankun; DeGloria, Stephen D.; Wu, Jiaping

    2014-01-01

    Accurately quantifying soil organic carbon (SOC) is considered fundamental to studying soil quality, modeling the global carbon cycle, and assessing global climate change. This study evaluated the uncertainties caused by up-scaling of soil properties from the county scale to the provincial scale and from lower-level classification of Soil Species to Soil Group, using four methods: the mean, median, Soil Profile Statistics (SPS), and pedological professional knowledge based (PKB) methods. For the SPS method, SOC stock is calculated at the county scale by multiplying the mean SOC density value of each soil type in a county by its corresponding area. For the mean or median method, SOC density value of each soil type is calculated using provincial arithmetic mean or median. For the PKB method, SOC density value of each soil type is calculated at the county scale considering soil parent materials and spatial locations of all soil profiles. A newly constructed 1∶50,000 soil survey geographic database of Zhejiang Province, China, was used for evaluation. Results indicated that with soil classification levels up-scaling from Soil Species to Soil Group, the variation of estimated SOC stocks among different soil classification levels was obviously lower than that among different methods. The difference in the estimated SOC stocks among the four methods was lowest at the Soil Species level. The differences in SOC stocks among the mean, median, and PKB methods for different Soil Groups resulted from the differences in the procedure of aggregating soil profile properties to represent the attributes of one soil type. Compared with the other three estimation methods (i.e., the SPS, mean and median methods), the PKB method holds significant promise for characterizing spatial differences in SOC distribution because spatial locations of all soil profiles are considered during the aggregation procedure. PMID:24840890
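
    A minimal sketch of the SPS-style aggregation described above, multiplying the mean SOC density of each soil type by its mapped area and summing to a county-level stock; all numbers are hypothetical.

        import numpy as np

        # mean SOC density of each soil type (kg C per m^2) and its mapped area (m^2);
        # all values are hypothetical
        soc_density_kg_m2 = np.array([8.2, 5.6, 11.3])
        area_m2 = np.array([2.0e7, 3.5e7, 1.2e7])

        county_stock_tg = np.sum(soc_density_kg_m2 * area_m2) / 1e9   # kg C -> Tg C
        print(round(county_stock_tg, 3), "Tg C")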

  15. Child Mortality Estimation 2013: An Overview of Updates in Estimation Methods by the United Nations Inter-Agency Group for Child Mortality Estimation

    PubMed Central

    Alkema, Leontine; New, Jin Rou; Pedersen, Jon; You, Danzhen

    2014-01-01

    Background In September 2013, the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME) published an update of the estimates of the under-five mortality rate (U5MR) and under-five deaths for all countries. Compared to the UN IGME estimates published in 2012, updated data inputs and a new method for estimating the U5MR were used. Methods We summarize the new U5MR estimation method, which is a Bayesian B-spline Bias-reduction model, and highlight differences with the previously used method. Differences in UN IGME U5MR estimates as published in 2012 and those published in 2013 are presented and decomposed into differences due to the updated database and differences due to the new estimation method to explain and motivate changes in estimates. Findings Compared to the previously used method, the new UN IGME estimation method is based on a different trend fitting method that can track (recent) changes in U5MR more closely. The new method provides U5MR estimates that account for data quality issues. Resulting differences in U5MR point estimates between the UN IGME 2012 and 2013 publications are small for the majority of countries but greater than 10 deaths per 1,000 live births for 33 countries in 2011 and 19 countries in 1990. These differences can be explained by the updated database used, the curve fitting method as well as accounting for data quality issues. Changes in the number of deaths were less than 10% on the global level and for the majority of MDG regions. Conclusions The 2013 UN IGME estimates provide the most recent assessment of levels and trends in U5MR based on all available data and an improved estimation method that allows for closer-to-real-time monitoring of changes in the U5MR and takes account of data quality issues. PMID:25013954

  16. THREE-POINT BACKWARD FINITE DIFFERENCE METHOD FOR SOLVING A SYSTEM OF MIXED HYPERBOLIC-PARABOLIC PARTIAL DIFFERENTIAL EQUATIONS. (R825549C019)

    EPA Science Inventory

    A three-point backward finite-difference method has been derived for a system of mixed hyperbolic-parabolic (convection-diffusion) partial differential equations (mixed PDEs). The method resorts to the three-point backward differenci...
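
    For reference, the second-order, three-point backward difference stencil underlying such a scheme can be sketched as follows; this shows only the stencil itself applied to a smooth test function, not the mixed hyperbolic-parabolic PDE solver of the abstract.

        import numpy as np

        def backward_diff_3pt(f, h):
            """Second-order three-point backward difference for f'(x_i):
            (3*f_i - 4*f_{i-1} + f_{i-2}) / (2*h), defined for i >= 2."""
            f = np.asarray(f, dtype=float)
            d = np.full_like(f, np.nan)
            d[2:] = (3.0 * f[2:] - 4.0 * f[1:-1] + f[:-2]) / (2.0 * h)
            return d

        x = np.linspace(0.0, 1.0, 11)
        approx = backward_diff_3pt(np.exp(x), x[1] - x[0])
        print(np.nanmax(np.abs(approx - np.exp(x))))   # small O(h^2) error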

  17. Comparison of different methods to quantify fat classes in bakery products.

    PubMed

    Shin, Jae-Min; Hwang, Young-Ok; Tu, Ock-Ju; Jo, Han-Bin; Kim, Jung-Hun; Chae, Young-Zoo; Rhu, Kyung-Hun; Park, Seung-Kook

    2013-01-15

    The definition of fat differs in different countries; thus whether fat is listed on food labels depends on the country. Some countries list crude fat content in the 'Fat' section on the food label, whereas other countries list total fat. In this study, three methods were used for determining fat classes and content in bakery products: the Folch method, the automated Soxhlet method, and the AOAC 996.06 method. The results using these methods were compared. Fat (crude) extracted by the Folch and Soxhlet methods was gravimetrically determined and assessed by fat class using capillary gas chromatography (GC). In most samples, fat (total) content determined by the AOAC 996.06 method was lower than the fat (crude) content determined by the Folch or automated Soxhlet methods. Furthermore, monounsaturated fat or saturated fat content determined by the AOAC 996.06 method was lowest. Almost no difference was observed between fat (crude) content determined by the Folch method and that determined by the automated Soxhlet method for nearly all samples. In three samples (wheat biscuits, butter cookies-1, and chocolate chip cookies), monounsaturated fat, saturated fat, and trans fat content obtained by the automated Soxhlet method was higher than that obtained by the Folch method. The polyunsaturated fat content obtained by the automated Soxhlet method was not higher than that obtained by the Folch method in any sample. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Content based Image Retrieval based on Different Global and Local Color Histogram Methods: A Survey

    NASA Astrophysics Data System (ADS)

    Suhasini, Pallikonda Sarah; Sri Rama Krishna, K.; Murali Krishna, I. V.

    2017-02-01

    Different global and local color histogram methods for content based image retrieval (CBIR) are investigated in this paper. The color histogram is a widely used descriptor for CBIR. The conventional method of extracting a color histogram is global, which misses spatial content, is less invariant to deformation and viewpoint changes, and results in a very large three-dimensional histogram corresponding to the color space used. To address these deficiencies, different global and local histogram methods have been proposed in recent research: local histograms to provide spatial correspondence, invariant color histograms to add deformation and viewpoint invariance, and fuzzy linking methods to reduce the size of the histogram. The color space and the distance metric used are vital in obtaining the color histogram. In this paper, the performance of CBIR based on different global and local color histograms is surveyed in three color spaces, namely RGB, HSV and L*a*b*, and with three distance measures, Euclidean, quadratic and histogram intersection, in order to choose an appropriate method for future research.
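
    A minimal sketch of the histogram-intersection similarity mentioned above, applied to two hypothetical grayscale histograms; a real CBIR system would build color histograms in the chosen color space.

        import numpy as np

        def histogram_intersection(h1, h2):
            """Similarity in [0, 1] between two L1-normalized histograms."""
            h1 = np.asarray(h1, dtype=float) / np.sum(h1)
            h2 = np.asarray(h2, dtype=float) / np.sum(h2)
            return float(np.minimum(h1, h2).sum())

        rng = np.random.default_rng(1)
        img_a = rng.integers(0, 256, size=(32, 32))   # hypothetical grayscale "images"
        img_b = rng.integers(0, 256, size=(32, 32))
        ha, _ = np.histogram(img_a, bins=16, range=(0, 256))
        hb, _ = np.histogram(img_b, bins=16, range=(0, 256))
        print(round(histogram_intersection(ha, hb), 3))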

  19. Titanium Hydroxide - a Volatile Species at High Temperature

    NASA Technical Reports Server (NTRS)

    Nguyen, QuynhGiao N.

    2010-01-01

    An alternative method of low-temperature plasma functionalization of carbon nanotubes provides for the simultaneous attachment of molecular groups of multiple (typically two or three) different species or different mixtures of species to carbon nanotubes at different locations within the same apparatus. This method is based on similar principles, and involves the use of mostly the same basic apparatus, as those of the methods described in "Low-Temperature Plasma Functionalization of Carbon Nanotubes" (ARC-14661-1), NASA Tech Briefs, Vol. 28, No. 5 (May 2004), page 45. The figure schematically depicts the basic apparatus used in the aforementioned method, with emphasis on features that distinguish the present alternative method from the other. In this method, one exploits the fact that the composition of the deposition plasma changes as the plasma flows from its source in the precursor chamber toward the nanotubes in the target chamber. As a result, carbon nanotubes mounted in the target chamber at different flow distances (d1, d2, d3 . . .) from the precursor chamber become functionalized with different species or different mixtures of species.

  20. Effect of Different In Vitro Aging Methods on Color Stability of a Dental Resin-Based Composite Using CIELAB and CIEDE2000 Color-Difference Formulas.

    PubMed

    de Oliveira, Dayane Carvalho Ramos Salles; Ayres, Ana Paula Almeida; Rocha, Mateus Garcia; Giannini, Marcelo; Puppin Rontani, Regina Maria; Ferracane, Jack L; Sinhoreti, Mario Alexandre Coelho

    2015-01-01

    To evaluate the effect of different in vitro aging methods on color change (CC) of an experimental dental resin-based composite using the CIELAB (ΔEab) and CIEDE2000 (ΔE00) color-difference formulas. The CC was evaluated with a spectrophotometer (CM700d, Konica Minolta, Tokyo, Japan) according to the CIE chromatic space. Disk-shaped specimens (Φ = 5 × 1 mm thick) (N = 10) were submitted to different in vitro aging methods: 30 days of water aging (WA); 120 hours of ultraviolet light aging (UVA); or 300 hours of an accelerated artificial aging (AAA) method with cycles of 4 hours of UV-B light exposure and 4 hours of moisture condensation to induce CC. The temperature was standardized at 37°C for all aging methods. CC was evaluated with the ΔEab and ΔE00 formulas. Differences in individual Lab coordinates were also calculated. Data for the individual color parameters were submitted to one-way analysis of variance and Tukey's test for multiple comparisons (α = 0.05). All in vitro aging methods tested induced CC, in the following order: WA: ΔEab = 0.83 (0.1); ΔE00 = 1.15 (0.1) < AAA: ΔEab = 5.64 (0.2); ΔE00 = 5.01 (0.1) < UVA: ΔEab = 6.74 (0.2); ΔE00 = 6.03 (0.4). No changes in L* or a* coordinates were ≥1; the methods with UV aging showed a yellowing effect due to a large positive change in b*. All in vitro aging methods tested induced a CC, but to different extents. Changes in color followed similar trends, but with different absolute values when calculated with the CIELAB and the CIEDE2000 formulas. Establishing the efficacy of different artificial aging methods and the differences between color changes calculated using the CIELAB and CIEDE2000 formulas is important to standardize color stability evaluations and facilitate the comparison of outcomes from different studies in the literature. © 2015 Wiley Periodicals, Inc.
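
    For reference, the simpler of the two formulas, CIELAB ΔEab (CIE76), is the Euclidean distance in L*a*b* space, as sketched below with hypothetical coordinates; CIEDE2000 adds lightness, chroma and hue weighting functions and is considerably longer.

        import math

        def delta_e_ab(lab1, lab2):
            """CIELAB (CIE76) color difference: Euclidean distance in L*a*b* space."""
            return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

        before = (78.0, 1.5, 18.0)   # hypothetical L*, a*, b* before aging
        after = (77.6, 1.4, 24.5)    # hypothetical values after UV aging (b* shift = yellowing)
        print(round(delta_e_ab(before, after), 2))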

  1. The Adams formulas for numerical integration of differential equations from 1st to 20th order

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, J. C.

    1976-01-01

    The Adams Bashforth predictor coefficients and the Adams Moulton corrector coefficients for the integration of differential equations are presented for methods of 1st to 20th order. The order of the method as presented refers to the highest order difference formula used in Newton's backward difference interpolation formula, on which the Adams method is based. The Adams method is a polynomial approximation method derived from Newton's backward difference interpolation formula. The Newton formula is derived and expanded to 20th order. The Adams predictor and corrector formulas are derived and expressed in terms of differences of the derivatives, as well as in terms of the derivatives themselves. All coefficients are given to 18 significant digits. For the difference formula only, the ratio coefficients are given to 10th order.
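
    A minimal sketch of a low-order Adams predictor-corrector pair in PECE mode (second-order Adams-Bashforth predictor, Adams-Moulton trapezoidal corrector), applied to a simple test equation; the high-order coefficients tabulated in the report are not reproduced here.

        import math

        def ab2_am2(f, y0, t0, t1, n):
            """Second-order Adams-Bashforth predictor with an Adams-Moulton
            (trapezoidal) corrector in PECE mode; the first step uses Euler."""
            h = (t1 - t0) / n
            t, y = t0, y0
            f_prev = f(t, y)
            y += h * f_prev                  # Euler start-up step
            t += h
            for _ in range(n - 1):
                f_curr = f(t, y)
                y_pred = y + h / 2.0 * (3.0 * f_curr - f_prev)       # predictor (AB2)
                y = y + h / 2.0 * (f_curr + f(t + h, y_pred))        # corrector (AM2)
                f_prev = f_curr
                t += h
            return y

        # dy/dt = -y, y(0) = 1  ->  exact y(1) = exp(-1)
        print(ab2_am2(lambda t, y: -y, 1.0, 0.0, 1.0, 100), math.exp(-1.0))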

  2. Contaminants | Hydrogen and Fuel Cells | NREL

    Science.gov Websites

    System contaminants web page (navigation residue removed): materials from different manufacturers are evaluated using several screening methods, and a flowchart graphic shows the experimental methods used in the system-contaminants characterization; linked sections include Overview, Materials, Methods, Data Tool, Partners, and Publications.

  3. A FINITE-DIFFERENCE, DISCRETE-WAVENUMBER METHOD FOR CALCULATING RADAR TRACES

    EPA Science Inventory

    A hybrid of the finite-difference method and the discrete-wavenumber method is developed to calculate radar traces. The method is based on a three-dimensional model defined in the Cartesian coordinate system; the electromagnetic properties of the model are symmetric with respect ...

  4. Space methods in oceanology

    NASA Technical Reports Server (NTRS)

    Bolshakov, A. A.

    1985-01-01

    The study of Earth from space with specialized satellites, and from manned orbiting stations, has become an important part of space programs. Among the broad complex of methods used for probing Earth from space are different methods for the study of ocean dynamics. The different methods of ocean observation are described.

  5. Accuracy of tree diameter estimation from terrestrial laser scanning by circle-fitting methods

    NASA Astrophysics Data System (ADS)

    Koreň, Milan; Mokroš, Martin; Bucha, Tomáš

    2017-12-01

    This study compares the accuracies of diameter at breast height (DBH) estimations by three initial (minimum bounding box, centroid, and maximum distance) and two refining (Monte Carlo and optimal circle) circle-fitting methods. The circle-fitting algorithms were evaluated in multi-scan mode and a simulated single-scan mode on 157 European beech trees (Fagus sylvatica L.). DBH measured by a calliper was used as reference data. Most of the studied circle-fitting algorithms significantly underestimated the mean DBH in both scanning modes. Only the Monte Carlo method in the single-scan mode significantly overestimated the mean DBH. The centroid method proved to be the least suitable and showed significantly different results from the other circle-fitting methods in both scanning modes. In multi-scan mode, the accuracy of the minimum bounding box method was not significantly different from the accuracies of the refining methods. The accuracy of the maximum distance method was significantly different from the accuracies of the refining methods in both scanning modes. The accuracy of the Monte Carlo method was significantly different from the accuracy of the optimal circle method in only single-scan mode. The optimal circle method proved to be the most accurate circle-fitting method for DBH estimation from point clouds in both scanning modes.
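
    As a generic illustration of algebraic circle fitting to stem cross-section points (not necessarily identical to any of the five methods compared in the study), a least-squares Kasa fit can be sketched as follows, using synthetic half-circle points that mimic single-scan coverage.

        import numpy as np

        def fit_circle_kasa(x, y):
            """Algebraic least-squares (Kasa) circle fit; returns (xc, yc, radius).
            Solves x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense."""
            x = np.asarray(x, dtype=float)
            y = np.asarray(y, dtype=float)
            a = np.column_stack([x, y, np.ones_like(x)])
            b = -(x ** 2 + y ** 2)
            d, e, f = np.linalg.lstsq(a, b, rcond=None)[0]
            xc, yc = -d / 2.0, -e / 2.0
            radius = np.sqrt(xc ** 2 + yc ** 2 - f)
            return xc, yc, radius

        # synthetic half-circle of stem points (single-scan-like coverage), radius = 0.20 m
        rng = np.random.default_rng(0)
        theta = np.linspace(0.2, np.pi - 0.2, 60)
        xs = 1.5 + 0.20 * np.cos(theta) + rng.normal(0.0, 0.003, theta.size)
        ys = 3.0 + 0.20 * np.sin(theta) + rng.normal(0.0, 0.003, theta.size)
        xc, yc, r = fit_circle_kasa(xs, ys)
        print(round(2.0 * r * 100.0, 1), "cm estimated DBH")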

  6. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    NASA Astrophysics Data System (ADS)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation are sometimes not stable when forward modeling of seismic wave uses large time steps for long times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling by applying the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method is called the symplectic Fourier finite-difference (symplectic FFD) method, and offers high computational accuracy and improves the computational stability. Using acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling of strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method overcomes the residual qSV wave of seismic modeling in anisotropic media and maintains the stability of the wavefield propagation for large time steps.
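
    A minimal 1-D illustration of symplectic time stepping for the acoustic wave equation, using a leapfrog (Stormer-Verlet) integrator with an ordinary second-order central difference in space rather than the paper's Fourier finite-difference operator; the grid and source parameters are arbitrary.

        import numpy as np

        nx, dx, c, dt, nt = 200, 10.0, 2000.0, 0.002, 200    # CFL number c*dt/dx = 0.4
        x = np.arange(nx) * dx
        u = np.exp(-0.001 * (x - x.mean()) ** 2)             # initial pressure pulse
        v = np.zeros(nx)                                     # du/dt

        def laplacian(f):
            lap = np.zeros_like(f)
            lap[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx ** 2
            return lap

        for _ in range(nt):                                  # symplectic kick-drift-kick
            v += 0.5 * dt * c ** 2 * laplacian(u)
            u += dt * v
            v += 0.5 * dt * c ** 2 * laplacian(u)

        print(float(np.max(np.abs(u))))                      # pulse splits; amplitude ~0.5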

  7. Numerical solution of the Saint-Venant equations by an efficient hybrid finite-volume/finite-difference method

    NASA Astrophysics Data System (ADS)

    Lai, Wencong; Khan, Abdul A.

    2018-04-01

    A computationally efficient hybrid finite-volume/finite-difference method is proposed for the numerical solution of Saint-Venant equations in one-dimensional open channel flows. The method adopts a mass-conservative finite volume discretization for the continuity equation and a semi-implicit finite difference discretization for the dynamic-wave momentum equation. The spatial discretization of the convective flux term in the momentum equation employs an upwind scheme and the water-surface gradient term is discretized using three different schemes. The performance of the numerical method is investigated in terms of efficiency and accuracy using various examples, including steady flow over a bump, dam-break flow over wet and dry downstream channels, wetting and drying in a parabolic bowl, and dam-break floods in laboratory physical models. Numerical solutions from the hybrid method are compared with solutions from a finite volume method along with analytic solutions or experimental measurements. Comparisons demonstrate that the hybrid method is efficient, accurate, and robust in modeling various flow scenarios, including subcritical, supercritical, and transcritical flows. In this method, the QUICK scheme for the surface slope discretization is more accurate and less diffusive than the center difference and the weighted average schemes.

  8. Age adjustment in ecological studies: using a study on arsenic ingestion and bladder cancer as an example.

    PubMed

    Guo, How-Ran

    2011-10-20

    Despite its limitations, ecological study design is widely applied in epidemiology. In most cases, adjustment for age is necessary, but different methods may lead to different conclusions. To compare three methods of age adjustment, a study on the associations between arsenic in drinking water and incidence of bladder cancer in 243 townships in Taiwan was used as an example. A total of 3068 cases of bladder cancer, including 2276 men and 792 women, were identified during a ten-year study period in the study townships. Three methods were applied to analyze the same data set on the ten-year study period. The first (Direct Method) applied direct standardization to obtain standardized incidence rate and then used it as the dependent variable in the regression analysis. The second (Indirect Method) applied indirect standardization to obtain standardized incidence ratio and then used it as the dependent variable in the regression analysis instead. The third (Variable Method) used proportions of residents in different age groups as a part of the independent variables in the multiple regression models. All three methods showed a statistically significant positive association between arsenic exposure above 0.64 mg/L and incidence of bladder cancer in men and women, but different results were observed for the other exposure categories. In addition, the risk estimates obtained by different methods for the same exposure category were all different. Using an empirical example, the current study confirmed the argument made by other researchers previously that whereas the three different methods of age adjustment may lead to different conclusions, only the third approach can obtain unbiased estimates of the risks. The third method can also generate estimates of the risk associated with each age group, but the other two are unable to evaluate the effects of age directly.
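
    A minimal sketch of the direct standardization step underlying the first approach (the Direct Method); the age-specific case counts, person-years and standard-population weights below are hypothetical.

        import numpy as np

        cases = np.array([5, 12, 40, 80])                     # cases per age group
        person_years = np.array([50000, 40000, 30000, 20000])
        standard_pop_weights = np.array([0.40, 0.30, 0.20, 0.10])

        age_specific_rates = cases / person_years
        standardized_rate = np.sum(age_specific_rates * standard_pop_weights)
        print(round(standardized_rate * 1e5, 1), "per 100,000 person-years")

    The directly standardized rate computed this way for each township would then serve as the dependent variable in the regression analysis.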

  9. Effect of DNA Extraction Methods and Sampling Techniques on the Apparent Structure of Cow and Sheep Rumen Microbial Communities

    PubMed Central

    Henderson, Gemma; Cox, Faith; Kittelmann, Sandra; Miri, Vahideh Heidarian; Zethof, Michael; Noel, Samantha J.; Waghorn, Garry C.; Janssen, Peter H.

    2013-01-01

    Molecular microbial ecology techniques are widely used to study the composition of the rumen microbiota and to increase understanding of the roles they play. Therefore, sampling and DNA extraction methods that result in adequate yields of microbial DNA that also accurately represents the microbial community are crucial. Fifteen different methods were used to extract DNA from cow and sheep rumen samples. The DNA yield and quality, and its suitability for downstream PCR amplifications varied considerably, depending on the DNA extraction method used. DNA extracts from nine extraction methods that passed these first quality criteria were evaluated further by quantitative PCR enumeration of microbial marker loci. Absolute microbial numbers, determined on the same rumen samples, differed by more than 100-fold, depending on the DNA extraction method used. The apparent compositions of the archaeal, bacterial, ciliate protozoal, and fungal communities in identical rumen samples were assessed using 454 Titanium pyrosequencing. Significant differences in microbial community composition were observed between extraction methods, for example in the relative abundances of members of the phyla Bacteroidetes and Firmicutes. Microbial communities in parallel samples collected from cows by oral stomach-tubing or through a rumen fistula, and in liquid and solid rumen digesta fractions, were compared using one of the DNA extraction methods. Community representations were generally similar, regardless of the rumen sampling technique used, but significant differences in the abundances of some microbial taxa such as the Clostridiales and the Methanobrevibacter ruminantium clade were observed. The apparent microbial community composition differed between rumen sample fractions, and Prevotellaceae were most abundant in the liquid fraction. DNA extraction methods that involved phenol-chloroform extraction and mechanical lysis steps tended to be more comparable. However, comparison of data from studies in which different sampling techniques, different rumen sample fractions or different DNA extraction methods were used should be avoided. PMID:24040342

  10. Effect of DNA extraction methods and sampling techniques on the apparent structure of cow and sheep rumen microbial communities.

    PubMed

    Henderson, Gemma; Cox, Faith; Kittelmann, Sandra; Miri, Vahideh Heidarian; Zethof, Michael; Noel, Samantha J; Waghorn, Garry C; Janssen, Peter H

    2013-01-01

    Molecular microbial ecology techniques are widely used to study the composition of the rumen microbiota and to increase understanding of the roles they play. Therefore, sampling and DNA extraction methods that result in adequate yields of microbial DNA that also accurately represents the microbial community are crucial. Fifteen different methods were used to extract DNA from cow and sheep rumen samples. The DNA yield and quality, and its suitability for downstream PCR amplifications varied considerably, depending on the DNA extraction method used. DNA extracts from nine extraction methods that passed these first quality criteria were evaluated further by quantitative PCR enumeration of microbial marker loci. Absolute microbial numbers, determined on the same rumen samples, differed by more than 100-fold, depending on the DNA extraction method used. The apparent compositions of the archaeal, bacterial, ciliate protozoal, and fungal communities in identical rumen samples were assessed using 454 Titanium pyrosequencing. Significant differences in microbial community composition were observed between extraction methods, for example in the relative abundances of members of the phyla Bacteroidetes and Firmicutes. Microbial communities in parallel samples collected from cows by oral stomach-tubing or through a rumen fistula, and in liquid and solid rumen digesta fractions, were compared using one of the DNA extraction methods. Community representations were generally similar, regardless of the rumen sampling technique used, but significant differences in the abundances of some microbial taxa such as the Clostridiales and the Methanobrevibacter ruminantium clade were observed. The apparent microbial community composition differed between rumen sample fractions, and Prevotellaceae were most abundant in the liquid fraction. DNA extraction methods that involved phenol-chloroform extraction and mechanical lysis steps tended to be more comparable. However, comparison of data from studies in which different sampling techniques, different rumen sample fractions or different DNA extraction methods were used should be avoided.

  11. The Role of Psychological and Physiological Factors in Decision Making under Risk and in a Dilemma

    PubMed Central

    Fooken, Jonas; Schaffner, Markus

    2016-01-01

    Different methods to elicit risk attitudes of individuals often provide differing results despite a common theory. Reasons for such inconsistencies may be the different influence of underlying factors in risk-taking decisions. In order to evaluate this conjecture, a better understanding of underlying factors across methods and decision contexts is desirable. In this paper we study the difference in result of two different risk elicitation methods by linking estimates of risk attitudes to gender, age, and personality traits, which have been shown to be related. We also investigate the role of these factors during decision-making in a dilemma situation. For these two decision contexts we also investigate the decision-maker's physiological state during the decision, measured by heart rate variability (HRV), which we use as an indicator of emotional involvement. We found that the two elicitation methods provide different individual risk attitude measures which is partly reflected in a different gender effect between the methods. Personality traits explain only relatively little in terms of driving risk attitudes and the difference between methods. We also found that risk taking and the physiological state are related for one of the methods, suggesting that more emotionally involved individuals are more risk averse in the experiment. Finally, we found evidence that personality traits are connected to whether individuals made a decision in the dilemma situation, but risk attitudes and the physiological state were not indicative for the ability to decide in this decision context. PMID:26834591

  12. Temperature Profiles of Different Cooling Methods in Porcine Pancreas Procurement

    PubMed Central

    Weegman, Brad P.; Suszynski, Thomas M.; Scott, William E.; Ferrer, Joana; Avgoustiniatos, Efstathios S.; Anazawa, Takayuki; O’Brien, Timothy D.; Rizzari, Michael D.; Karatzas, Theodore; Jie, Tun; Sutherland, David ER.; Hering, Bernhard J.; Papas, Klearchos K.

    2014-01-01

    Background Porcine islet xenotransplantation is a promising alternative to human islet allotransplantation. Porcine pancreas cooling needs to be optimized to reduce the warm ischemia time (WIT) following donation after cardiac death, which is associated with poorer islet isolation outcomes. Methods This study examines the effect of 4 different cooling Methods on core porcine pancreas temperature (n=24) and histopathology (n=16). All Methods involved surface cooling with crushed ice and chilled irrigation. Method A, which is the standard for porcine pancreas procurement, used only surface cooling. Method B involved an intravascular flush with cold solution through the pancreas arterial system. Method C involved an intraductal infusion with cold solution through the major pancreatic duct, and Method D combined all 3 cooling Methods. Results Surface cooling alone (Method A) gradually decreased core pancreas temperature to < 10 °C after 30 minutes. Using an intravascular flush (Method B) improved cooling during the entire duration of procurement, but incorporating an intraductal infusion (Method C) rapidly reduced core temperature 15–20 °C within the first 2 minutes of cooling. Combining all methods (Method D) was the most effective at rapidly reducing temperature and providing sustained cooling throughout the duration of procurement, although the recorded WIT was not different between Methods (p=0.36). Histological scores were different between the cooling Methods (p=0.02) and the worst with Method A. There were differences in histological scores between Methods A and C (p=0.02) and Methods A and D (p=0.02), but not between Methods C and D (p=0.95), which may highlight the importance of early cooling using an intraductal infusion. Conclusions In conclusion, surface cooling alone cannot rapidly cool large (porcine or human) pancreata. Additional cooling with an intravascular flush and intraductal infusion results in improved core porcine pancreas temperature profiles during procurement and histopathology scores. These data may also have implications on human pancreas procurement since use of an intraductal infusion is not common practice. PMID:25040217

  13. Efficiency trade-offs of steady-state methods using FEM and FDM. [iterative solutions for nonlinear flow equations

    NASA Technical Reports Server (NTRS)

    Gartling, D. K.; Roache, P. J.

    1978-01-01

    The efficiency characteristics of finite element and finite difference approximations for the steady-state solution of the Navier-Stokes equations are examined. The finite element method discussed is a standard Galerkin formulation of the incompressible, steady-state Navier-Stokes equations. The finite difference formulation uses simple centered differences that are O(delta x-squared). Operation counts indicate that a rapidly converging Newton-Raphson-Kantorovitch iteration scheme is generally preferable over a Picard method. A split NOS Picard iterative algorithm for the finite difference method was most efficient.
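
    The efficiency gap between the two iteration schemes can be illustrated on a scalar toy problem; this is only an analogy, since the paper's operation counts concern the full discretized Navier-Stokes system.

        import math

        def picard(u=0.0, tol=1e-10, max_iter=200):
            """Fixed-point (Picard) iteration for u = cos(u)."""
            for i in range(1, max_iter + 1):
                u_new = math.cos(u)
                if abs(u_new - u) < tol:
                    return u_new, i
                u = u_new
            return u, max_iter

        def newton(u=0.0, tol=1e-10, max_iter=200):
            """Newton-Raphson iteration for f(u) = u - cos(u) = 0."""
            for i in range(1, max_iter + 1):
                u_new = u - (u - math.cos(u)) / (1.0 + math.sin(u))
                if abs(u_new - u) < tol:
                    return u_new, i
                u = u_new
            return u, max_iter

        print(picard())   # linear convergence: dozens of iterations
        print(newton())   # quadratic convergence: a handful of iterations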

  14. A real-time TV logo tracking method using template matching

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Sang, Xinzhu; Yan, Binbin; Leng, Junmin

    2012-11-01

    A fast and accurate TV logo detection method is presented based on real-time image filtering, noise elimination, and recognition of image features including edge and gray-level information. It is important to accurately extract the optical template from the sample video stream using the time-averaging method; different templates are then used to match different logos in separate video streams with different resolutions, based on the topological features of the logos. Twelve video streams with different logos are used to verify the proposed method, and the experimental results demonstrate that the achieved accuracy can be up to 99%.
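
    A minimal OpenCV sketch of template matching for logo localization in a single frame, using normalized cross-correlation as one plausible matching criterion; the frame and logo are synthetic, and the paper's filtering and time-averaging steps are omitted.

        import cv2
        import numpy as np

        rng = np.random.default_rng(0)
        frame = rng.integers(0, 30, size=(240, 320), dtype=np.uint8)   # noisy synthetic frame
        logo = np.full((24, 40), 200, dtype=np.uint8)                  # synthetic "logo" patch
        cv2.putText(logo, "TV", (2, 18), cv2.FONT_HERSHEY_SIMPLEX, 0.6, 0, 2)
        frame[10:34, 270:310] = logo                                   # place logo top-right

        # normalized cross-correlation template matching
        result = cv2.matchTemplate(frame, logo, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        print(max_loc, round(max_val, 3))                              # expected: (270, 10), ~1.0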

  15. Differences between Cheddar cheese manufactured by the milled-curd and stirred-curd methods using different commercial starters.

    PubMed

    Shakeel-ur-Rehman; Drake, M A; Farkye, N Y

    2008-01-01

    Traditionally, Cheddar cheese is made by the milled-curd method. However, because of the mechanization of cheese making and time constraints, the stirred-curd method is more commonly used by many large-scale commercial manufacturers. This study was undertaken to evaluate quality differences during ripening (at 2 and 8 degrees C) of Cheddar cheese made by the milled-curd and stirred-curd methods, using 4 different commercial starters. Twenty-four vats (4 starters x 2 methods x 3 replicates) were made, with approximately 625 kg of pasteurized (72 degrees C x 16 s) whole milk in each vat. Fat, protein, and salt contents of the cheeses were not affected by the starter. Starter cell densities in cheese were not affected by the method of manufacture. Nonstarter lactic acid bacteria counts at 90, 180, and 270 d were influenced by the manufacturing method, with a higher trend in milled-curd cheeses. Proteolysis in cheese (percentage of water-soluble N) was influenced by the starter and manufacturing method (270 d). Sensory analysis by a trained descriptive panel (n = 8) revealed differences in cooked, whey, sulfur, brothy, milk fat, umami, and bitter attributes caused by the starter, whereas only brothy flavor was influenced by storage temperature. The method of manufacture influenced diacetyl, sour, and salty flavors.

  16. Quantifying distinct associations on different temporal scales: comparison of DCCA and Pearson methods

    NASA Astrophysics Data System (ADS)

    Piao, Lin; Fu, Zuntao

    2016-11-01

    Cross-correlation between pairs of variables has a multi-time-scale character, and it can be totally different on different time scales (changing from positive correlation to negative), e.g., the associations between mean air temperature and relative humidity over regions to the east of the Taihang mountain in China. Therefore, how to correctly unveil these correlations on different time scales is of great importance, since we do not know in advance whether the correlation varies with scale. Here, we compare two methods, i.e., Detrended Cross-Correlation Analysis (DCCA for short) and Pearson correlation, in quantifying scale-dependent correlations directly from raw observed records and from artificially generated sequences with known cross-correlation features. The studies show that 1) DCCA-related methods can indeed quantify scale-dependent correlations, but the Pearson method cannot; 2) the correlation features from DCCA-related methods are robust to contaminating noise, whereas the results from the Pearson method are sensitive to noise; 3) the scale-dependent correlation results from DCCA-related methods are robust to the amplitude ratio between slow and fast components, while the Pearson method may be sensitive to this ratio. All these features indicate that DCCA-related methods have advantages in correctly quantifying scale-dependent correlations, which result from different physical processes.
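
    A minimal sketch of the DCCA cross-correlation coefficient at a single scale n (box size), following the usual recipe of profiling, box-wise linear detrending and averaging of the detrended covariances; the test series are synthetic.

        import numpy as np

        def dcca_coefficient(x, y, box_size):
            """rho_DCCA(n): detrended cross-correlation coefficient at scale n = box_size."""
            x = np.asarray(x, dtype=float)
            y = np.asarray(y, dtype=float)
            px = np.cumsum(x - x.mean())                  # integrated profiles
            py = np.cumsum(y - y.mean())
            t = np.arange(box_size)
            f_xy, f_xx, f_yy = [], [], []
            for b in range(len(px) // box_size):
                sx = px[b * box_size:(b + 1) * box_size]
                sy = py[b * box_size:(b + 1) * box_size]
                rx = sx - np.polyval(np.polyfit(t, sx, 1), t)   # local linear detrending
                ry = sy - np.polyval(np.polyfit(t, sy, 1), t)
                f_xy.append(np.mean(rx * ry))
                f_xx.append(np.mean(rx * rx))
                f_yy.append(np.mean(ry * ry))
            return np.mean(f_xy) / np.sqrt(np.mean(f_xx) * np.mean(f_yy))

        rng = np.random.default_rng(0)
        slow = np.cumsum(rng.normal(size=1000))           # shared slow component
        x = slow + rng.normal(size=1000)
        y = -slow + rng.normal(size=1000)                 # anti-correlated on large scales
        print(round(dcca_coefficient(x, y, box_size=50), 2))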

  17. Quantifying distinct associations on different temporal scales: comparison of DCCA and Pearson methods.

    PubMed

    Piao, Lin; Fu, Zuntao

    2016-11-09

    Cross-correlation between pairs of variables has a multi-time-scale character, and it can be totally different on different time scales (changing from positive correlation to negative), e.g., the associations between mean air temperature and relative humidity over regions to the east of the Taihang mountain in China. Therefore, how to correctly unveil these correlations on different time scales is of great importance, since we do not know in advance whether the correlation varies with scale. Here, we compare two methods, i.e., Detrended Cross-Correlation Analysis (DCCA for short) and Pearson correlation, in quantifying scale-dependent correlations directly from raw observed records and from artificially generated sequences with known cross-correlation features. The studies show that 1) DCCA-related methods can indeed quantify scale-dependent correlations, but the Pearson method cannot; 2) the correlation features from DCCA-related methods are robust to contaminating noise, whereas the results from the Pearson method are sensitive to noise; 3) the scale-dependent correlation results from DCCA-related methods are robust to the amplitude ratio between slow and fast components, while the Pearson method may be sensitive to this ratio. All these features indicate that DCCA-related methods have advantages in correctly quantifying scale-dependent correlations, which result from different physical processes.

  18. Comparison of Chemical Constituents in Scrophulariae Radix Processed by Different Methods based on UFLC-MS Combined with Multivariate Statistical Analysis.

    PubMed

    Wang, Shengnan; Hua, Yujiao; Zou, Lisi; Liu, Xunhong; Yan, Ying; Zhao, Hui; Luo, Yiyuan; Liu, Juanxiu

    2018-02-01

    Scrophulariae Radix is one of the most popular traditional Chinese medicines (TCMs). Primary processing of Scrophulariae Radix is an important step that is closely related to the quality of products of this TCM. The aim of this study is to explore the influence of different processing methods on the chemical constituents in Scrophulariae Radix. The differences in chemical constituents of Scrophulariae Radix processed by different methods were analyzed by ultra-fast liquid chromatography-triple quadrupole-time of flight mass spectrometry coupled with principal component analysis and orthogonal partial least squares discriminant analysis. Furthermore, the contents of 12 index differential constituents in Scrophulariae Radix processed by different methods were simultaneously determined by ultra-fast liquid chromatography coupled with triple quadrupole-linear ion trap mass spectrometry. Gray relational analysis was performed to evaluate the differently processed samples according to the contents of the 12 constituents. All of the results demonstrated that the quality of Scrophulariae Radix processed by the "sweating" method was better. This study provides basic information for revealing how the chemical constituents of Scrophulariae Radix change under different processing methods and facilitates selection of a suitable processing method for this TCM. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. A FINITE-DIFFERENCE, DISCRETE-WAVENUMBER METHOD FOR CALCULATING RADAR TRACES

    EPA Science Inventory

    A hybrid of the finite-difference method and the discrete-wavenumber method is developed to calculate radar traces. The method is based on a three-dimensional model defined in the Cartesian coordinate system; the electromagnetic properties of the model are symmetric with respect...

  20. Palmprint Recognition Across Different Devices.

    PubMed

    Jia, Wei; Hu, Rong-Xiang; Gui, Jie; Zhao, Yang; Ren, Xiao-Ming

    2012-01-01

    In this paper, the problem of Palmprint Recognition Across Different Devices (PRADD) is investigated, which has not been well studied so far. Since there is no publicly available PRADD image database, we created a non-contact PRADD image database containing 12,000 grayscale images captured from 100 subjects using three devices, i.e., one digital camera and two smart-phones. Due to the non-contact image acquisition used, rotation and scale changes between different images captured from the same palm are inevitable. We propose a robust method to calculate the palm width, which can be effectively used for scale normalization of palmprints. On this PRADD image database, we evaluate the recognition performance of three different methods, i.e., a subspace learning method, a correlation method, and an orientation coding based method, respectively. Experimental results show that orientation coding based methods achieved promising recognition performance for PRADD.

  1. Palmprint Recognition across Different Devices

    PubMed Central

    Jia, Wei; Hu, Rong-Xiang; Gui, Jie; Zhao, Yang; Ren, Xiao-Ming

    2012-01-01

    In this paper, the problem of Palmprint Recognition Across Different Devices (PRADD) is investigated, which has not been well studied so far. Since there is no publicly available PRADD image database, we created a non-contact PRADD image database containing 12,000 grayscale images captured from 100 subjects using three devices, i.e., one digital camera and two smart-phones. Due to the non-contact image acquisition used, rotation and scale changes between different images captured from the same palm are inevitable. We propose a robust method to calculate the palm width, which can be effectively used for scale normalization of palmprints. On this PRADD image database, we evaluate the recognition performance of three different methods, i.e., a subspace learning method, a correlation method, and an orientation coding based method. Experimental results show that the orientation coding based method achieved promising recognition performance for PRADD. PMID:22969380

  2. A highly accurate finite-difference method with minimum dispersion error for solving the Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Wu, Zedong; Alkhalifah, Tariq

    2018-07-01

    Numerical simulation of the acoustic wave equation in either isotropic or anisotropic media is crucial to seismic modeling, imaging and inversion. In fact, it represents the core computational cost of these highly advanced seismic processing methods. However, the conventional finite-difference method suffers from severe numerical dispersion errors and S-wave artifacts when solving the acoustic wave equation for anisotropic media. We propose a method to obtain the finite-difference coefficients by comparing the scheme's numerical dispersion with the exact dispersion relation. We find the optimal finite-difference coefficients that share the dispersion characteristics of the exact equation with minimal dispersion error. The method is extended to solve the acoustic wave equation in transversely isotropic (TI) media without S-wave artifacts. Numerical examples show that the method is highly accurate and efficient.
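    The general idea of dispersion-matched coefficients can be sketched as follows (an illustration under stated assumptions, not the authors' procedure): fit the symbol of a symmetric second-derivative stencil to the exact symbol −k² over a chosen wavenumber band by least squares. The stencil half-width, grid spacing and fitting band below are arbitrary choices.

```python
import numpy as np

def dispersion_optimized_coeffs(M=4, h=1.0, kmax_frac=0.85, npts=400):
    """Least-squares fit of a symmetric 2nd-derivative stencil to -k^2.

    Sketch of the general idea only: choose c_0..c_M so that the finite-
    difference symbol (c_0 + 2*sum_m c_m*cos(m*k*h))/h^2 matches the exact
    symbol -k^2 over wavenumbers up to a chosen fraction of Nyquist.
    """
    k = np.linspace(1e-3, kmax_frac * np.pi / h, npts)
    A = np.ones((npts, M + 1))
    for m in range(1, M + 1):
        A[:, m] = 2.0 * np.cos(m * k * h)
    b = -(k ** 2) * h ** 2
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c

c = dispersion_optimized_coeffs()
# compare with the Taylor (8th-order) coefficients [-205/72, 8/5, -1/5, 8/315, -1/560]
print(np.round(c, 4))
```

    The fitted coefficients deviate from the Taylor values because they trade formal order of accuracy for a flatter dispersion error across the whole fitting band.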

  3. Microbial composition during Chinese soy sauce koji-making based on culture dependent and independent methods.

    PubMed

    Yan, Yin-zhuo; Qian, Yu-lin; Ji, Feng-di; Chen, Jing-yu; Han, Bei-zhong

    2013-05-01

    Koji-making is a key process for the production of high-quality soy sauce. The microbial composition during koji-making was investigated by culture-dependent and culture-independent methods to determine the predominant bacterial and fungal populations. The culture-dependent methods used were direct culture with colony morphology observation, and PCR amplification of 16S/26S rDNA fragments followed by sequencing analysis. The culture-independent method was based on the analysis of 16S/26S rDNA clone libraries. There were differences between the results obtained by the different methods; however, sufficient overlap existed to identify potentially significant microbial groups. Sixteen and 20 different bacterial species were identified using the culture-dependent and culture-independent methods, respectively, and 7 species were identified by both. The most predominant bacterial genera were Weissella and Staphylococcus. Six different fungal species were identified by each of the culture-dependent and culture-independent methods, but only 3 species were identified by both. The most predominant fungi were Aspergillus and Candida species. This work illustrates the importance of a comprehensive polyphasic approach in the analysis of microbial composition during soy sauce koji-making, knowledge of which will enable further optimization of the microbial composition and quality control of koji to upgrade the traditional Chinese soy sauce product. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Sawing SHOLO logs: three methods

    Treesearch

    Ronald E. Coleman; Hugh W. Reynolds

    1973-01-01

    Three different methods of sawing the SHOLO log were compared on a board-foot yield basis. Using sawmill simulation, all three methods of sawing were performed on the same sample of logs, eliminating differences due to sampling. A statistical test was made to determine whether or not there were any real differences between the board-foot yields. Two of the sawing...

  5. Methods of Suicide by Age: Sex and Race Differences among the Young and Old.

    ERIC Educational Resources Information Center

    McIntosh, John L.; Santos, John F.

    1986-01-01

    Annual official statistics for specific methods of suicide (firearms, hanging, poisons) by age for different sex and racial groups (Whites, Blacks, non-Whites excluding Black) were examined from 1960 to 1978. Comparisons among the age-sex-race groups, along with trends over time and differences in the methods employed, were noted. (Author/ABL)

  6. Examining the Value Master's and PhD Students Place on Various Instructional Methods in Educational Leadership Preparation

    ERIC Educational Resources Information Center

    Gordon, Stephen P.; Oliver, John

    2015-01-01

    The purpose of this study was to determine the value that graduate students place on different types of instructional methods used by professors in educational leadership preparation programs, and to determine if master's and doctoral students place different values on different instructional methods. The participants included 87 graduate…

  7. Diabat Interpolation for Polymorph Free-Energy Differences.

    PubMed

    Kamat, Kartik; Peters, Baron

    2017-02-02

    Existing methods to compute free-energy differences between polymorphs use harmonic approximations, advanced non-Boltzmann bias sampling techniques, and/or multistage free-energy perturbations. This work demonstrates how Bennett's diabat interpolation method ( J. Comput. Phys. 1976, 22, 245 ) can be combined with energy gaps from lattice-switch Monte Carlo techniques ( Phys. Rev. E 2000, 61, 906 ) to swiftly estimate polymorph free-energy differences. The new method requires only two unbiased molecular dynamics simulations, one for each polymorph. To illustrate the new method, we compute the free-energy difference between face-centered cubic and body-centered cubic polymorphs for a Gaussian core solid. We discuss the justification for parabolic models of the free-energy diabats and similarities to methods that have been used in studies of electron transfer.
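    Under the parabolic, equal-curvature diabat assumption mentioned above, the free-energy difference reduces to the average of the mean energy gaps sampled in the two polymorphs. The sketch below applies that linear-response formula to synthetic gap samples; the Gaussian parameters are invented stand-ins for gaps that would come from the two unbiased MD runs, and the full Bennett diabat interpolation is not reproduced.

```python
import numpy as np

# Synthetic lattice-switch energy gaps dU = U_bcc - U_fcc sampled in each polymorph
# (values invented; in practice they come from the two unbiased MD simulations).
rng = np.random.default_rng(2)
dU_in_fcc = rng.normal(loc=2.0, scale=1.0, size=5000)    # gap sampled while in fcc
dU_in_bcc = rng.normal(loc=-1.0, scale=1.0, size=5000)   # gap sampled while in bcc

# Parabolic (equal-curvature) diabats => linear-response estimate of the
# free-energy difference from the two mean gaps alone.
dF_lr = 0.5 * (dU_in_fcc.mean() + dU_in_bcc.mean())
print("linear-response dF estimate:", round(dF_lr, 3))
```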

  8. Methods for Synthesizing Findings on Moderation Effects Across Multiple Randomized Trials

    PubMed Central

    Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana

    2011-01-01

    This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design. PMID:21360061

  9. Methods for synthesizing findings on moderation effects across multiple randomized trials.

    PubMed

    Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana

    2013-04-01

    This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design.

  10. Distortion analysis of subband adaptive filtering methods for FMRI active noise control systems.

    PubMed

    Milani, Ali A; Panahi, Issa M; Briggs, Richard

    2007-01-01

    Delayless subband filtering structure, as a high performance frequency domain filtering technique, is used for canceling broadband fMRI noise (8 kHz bandwidth). In this method, adaptive filtering is done in subbands and the coefficients of the main canceling filter are computed by stacking the subband weights together. There are two types of stacking methods called FFT and FFT-2. In this paper, we analyze the distortion introduced by these two stacking methods. The effect of the stacking distortion on the performance of different adaptive filters in FXLMS algorithm with non-minimum phase secondary path is explored. The investigation is done for different adaptive algorithms (nLMS, APA and RLS), different weight stacking methods, and different number of subbands.
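    The adaptive core of such a system is, in its simplest form, a normalized-LMS update in each subband. The sketch below shows only a plain fullband nLMS identification loop and omits the delayless weight stacking (FFT/FFT-2) and the FXLMS secondary-path filtering that the paper analyzes; the toy signal, path and step size are invented.

```python
import numpy as np

def nlms(x, d, n_taps=32, mu=0.5, eps=1e-8):
    """Plain normalized-LMS adaptive filter (system-identification form).

    Minimal sketch of the per-subband update idea only; it omits the
    delayless weight stacking and the secondary-path filtering of FXLMS.
    """
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]        # x[n], x[n-1], ... (newest first)
        e[n] = d[n] - w @ u                      # a-priori error
        w += mu * e[n] * u / (u @ u + eps)       # normalized gradient step
    return w, e

# Identify a toy "noise path" from broadband noise
rng = np.random.default_rng(3)
x = rng.standard_normal(20000)
path = np.array([0.6, -0.3, 0.1])
d = np.convolve(x, path)[:len(x)]
w, e = nlms(x, d)
print(np.round(w[:3], 3))   # should approach the toy path coefficients
```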

  11. Child mortality estimation 2013: an overview of updates in estimation methods by the United Nations Inter-agency Group for Child Mortality Estimation.

    PubMed

    Alkema, Leontine; New, Jin Rou; Pedersen, Jon; You, Danzhen

    2014-01-01

    In September 2013, the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME) published an update of the estimates of the under-five mortality rate (U5MR) and under-five deaths for all countries. Compared to the UN IGME estimates published in 2012, updated data inputs and a new method for estimating the U5MR were used. We summarize the new U5MR estimation method, which is a Bayesian B-spline Bias-reduction model, and highlight differences with the previously used method. Differences in UN IGME U5MR estimates as published in 2012 and those published in 2013 are presented and decomposed into differences due to the updated database and differences due to the new estimation method to explain and motivate changes in estimates. Compared to the previously used method, the new UN IGME estimation method is based on a different trend fitting method that can track (recent) changes in U5MR more closely. The new method provides U5MR estimates that account for data quality issues. Resulting differences in U5MR point estimates between the UN IGME 2012 and 2013 publications are small for the majority of countries but greater than 10 deaths per 1,000 live births for 33 countries in 2011 and 19 countries in 1990. These differences can be explained by the updated database used, the curve fitting method as well as accounting for data quality issues. Changes in the number of deaths were less than 10% on the global level and for the majority of MDG regions. The 2013 UN IGME estimates provide the most recent assessment of levels and trends in U5MR based on all available data and an improved estimation method that allows for closer-to-real-time monitoring of changes in the U5MR and takes account of data quality issues.

  12. MPN estimation of qPCR target sequence recoveries from whole cell calibrator samples.

    PubMed

    Sivaganesan, Mano; Siefring, Shawn; Varma, Manju; Haugland, Richard A

    2011-12-01

    DNA extracts from enumerated target organism cells (calibrator samples) have been used for estimating Enterococcus cell equivalent densities in surface waters by a comparative cycle threshold (Ct) qPCR analysis method. To compare surface water Enterococcus density estimates from different studies by this approach, either a consistent source of calibrator cells must be used or the estimates must account for any differences in target sequence recoveries from different sources of calibrator cells. In this report we describe two methods for estimating target sequence recoveries from whole cell calibrator samples based on qPCR analyses of their serially diluted DNA extracts and most probable number (MPN) calculation. The first method employed a traditional MPN calculation approach. The second method employed a Bayesian hierarchical statistical modeling approach and a Monte Carlo Markov Chain (MCMC) simulation method to account for the uncertainty in these estimates associated with different individual samples of the cell preparations, different dilutions of the DNA extracts and different qPCR analytical runs. The two methods were applied to estimate mean target sequence recoveries per cell from two different lots of a commercially available source of enumerated Enterococcus cell preparations. The mean target sequence recovery estimates (and standard errors) per cell from Lot A and B cell preparations by the Bayesian method were 22.73 (3.4) and 11.76 (2.4), respectively, when the data were adjusted for potential false positive results. Means were similar for the traditional MPN approach which cannot comparably assess uncertainty in the estimates. Cell numbers and estimates of recoverable target sequences in calibrator samples prepared from the two cell sources were also used to estimate cell equivalent and target sequence quantities recovered from surface water samples in a comparative Ct method. Our results illustrate the utility of the Bayesian method in accounting for uncertainty, the high degree of precision attainable by the MPN approach and the need to account for the differences in target sequence recoveries from different calibrator sample cell sources when they are used in the comparative Ct method. Published by Elsevier B.V.
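    The traditional MPN calculation referred to above can be sketched as a maximum-likelihood fit of a Poisson dilution model to presence/absence counts. The dilution volumes, replicate numbers and positive counts below are invented, and the paper's Bayesian hierarchical/MCMC treatment of uncertainty is not reproduced.

```python
import numpy as np

def mpn_estimate(volumes, n_tubes, n_positive, grid=np.logspace(-3, 4, 20000)):
    """Traditional most-probable-number estimate via a likelihood grid search.

    volumes     : sample volume (or relative dilution) analysed per replicate
    n_tubes     : replicates at each dilution
    n_positive  : positives observed at each dilution
    Assumes Poisson-distributed targets: P(positive) = 1 - exp(-lambda * v).
    """
    v = np.asarray(volumes, float)
    n = np.asarray(n_tubes, float)
    p = np.asarray(n_positive, float)
    lam = grid[:, None]
    loglik = (p * np.log1p(-np.exp(-lam * v)) - (n - p) * lam * v).sum(axis=1)
    return grid[np.argmax(loglik)]

# Invented 3-dilution, 5-replicate example (relative volumes 1, 0.1, 0.01)
print(round(mpn_estimate([1.0, 0.1, 0.01], [5, 5, 5], [5, 3, 1]), 2))
```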

  13. Safety training for working youth: Methods used versus methods wanted.

    PubMed

    Zierold, Kristina M

    2016-04-07

    Safety training is promoted as a tool to prevent workplace injury; however, little is known about the safety training experiences young workers get on the job. Furthermore, nothing is known about which methods they think would be most helpful for learning safe work practices. The objective was to compare the safety training methods teens receive on the job with the methods teens think would be best for learning workplace safety, focusing on age differences. A cross-sectional survey was administered to students in two large high schools in spring 2011. Seventy percent of working youth received safety training. The top training methods that youth reported getting at work were safety videos (42%), safety lectures (25%), and safety posters/signs (22%). In comparison to the safety training methods used, the top methods youth wanted included videos (54%), hands-on training (47%), and on-the-job demonstrations (34%). This study demonstrated that the training methods youth wanted differed by age, with older youth wanting more independent methods of training and younger teens wanting more involvement. The results indicate that youth want methods of safety training that are different from what they are getting on the job. The differences in methods wanted by age may aid in developing training programs appropriate for the developmental level of working youth.

  14. High-Order Entropy Stable Finite Difference Schemes for Nonlinear Conservation Laws: Finite Domains

    NASA Technical Reports Server (NTRS)

    Fisher, Travis C.; Carpenter, Mark H.

    2013-01-01

    Developing stable and robust high-order finite difference schemes requires mathematical formalism and appropriate methods of analysis. In this work, nonlinear entropy stability is used to derive provably stable high-order finite difference methods with formal boundary closures for conservation laws. Particular emphasis is placed on the entropy stability of the compressible Navier-Stokes equations. A newly derived entropy stable weighted essentially non-oscillatory finite difference method is used to simulate problems with shocks and a conservative, entropy stable, narrow-stencil finite difference approach is used to approximate viscous terms.

  15. Drying method has no substantial effect on δ(15)N or δ(13)C values of muscle tissue from teleost fishes.

    PubMed

    Bessey, Cindy; Vanderklift, Mathew A

    2014-02-15

    Stable isotope analysis (SIA) is a powerful tool in many fields of research that enables quantitative comparisons among studies, if similar methods have been used. The goal of this study was to determine if three different drying methods commonly used to prepare samples for SIA yielded different δ(15)N and δ(13)C values. Muscle subsamples from 10 individuals each of three teleost species were dried using three methods: (i) oven, (ii) food dehydrator, and (iii) freeze-dryer. All subsamples were analysed for δ(15)N and δ(13)C values, and nitrogen and carbon content, using a continuous flow system consisting of a Delta V Plus mass spectrometer and a Flush 1112 elemental analyser via a Conflo IV universal interface. The δ(13)C values were normalized to constant lipid content using the equations proposed by McConnaughey and McRoy. Although statistically significant, the differences in δ(15)N values between the drying methods were small (mean differences ≤0.21‰). The differences in δ(13)C values between the drying methods were not statistically significant, and normalising the δ(13)C values to constant lipid content reduced the mean differences for all treatments to ≤0.65‰. A statistically significant difference of ~2% in C content existed between tissues dried in a food dehydrator and those dried in a freeze-dryer for two fish species. There was no significant effect of fish size on the differences between methods. No substantial effect of drying method was found on the δ(15)N or δ(13)C values of teleost muscle tissue. Copyright © 2013 John Wiley & Sons, Ltd.

  16. Two Project Methods: Preliminary Observations on the Similarities and Differences between William Heard Kilpatrick's Project Method and John Dewey's Problem-Solving Method

    ERIC Educational Resources Information Center

    Sutinen, Ari

    2013-01-01

    The project method became a famous teaching method when William Heard Kilpatrick published his article "Project Method" in 1918. The key idea in Kilpatrick's project method is to try to explain how pupils learn things when they work in projects toward different common objects. The same idea of pupils learning by work or action in an…

  17. The Comparative Method of Language Acquisition Research: A Mayan Case Study

    ERIC Educational Resources Information Center

    Pye, Clifton; Pfeiler, Barbara

    2014-01-01

    This article demonstrates how the Comparative Method can be applied to cross-linguistic research on language acquisition. The Comparative Method provides a systematic procedure for organizing and interpreting acquisition data from different languages. The Comparative Method controls for cross-linguistic differences at all levels of the grammar and…

  18. Automatic frequency and phase alignment of in vivo J-difference-edited MR spectra by frequency domain correlation.

    PubMed

    Wiegers, Evita C; Philips, Bart W J; Heerschap, Arend; van der Graaf, Marinette

    2017-12-01

    J-difference editing is often used to select resonances of compounds with coupled spins in 1H-MR spectra. Accurate phase and frequency alignment prior to subtracting J-difference-edited MR spectra is important to avoid artefactual contributions to the edited resonance. In vivo J-difference-edited MR spectra were aligned by maximizing the normalized scalar product between two spectra (i.e., the correlation over a spectral region). The performance of our correlation method was compared with alignment by spectral registration and with alignment of the highest point in the two spectra. The correlation method was tested at different SNR levels and for a broad range of phase and frequency shifts. In vivo application of the proposed correlation method showed reduced subtraction errors and increased fit reliability in difference spectra as compared with conventional peak alignment. The correlation method and the spectral registration method generally performed equally well; however, better alignment using the correlation method was obtained for spectra with a low SNR (down to ~2) and for relatively large frequency shifts. Our correlation method for simultaneous phase and frequency alignment is able to correct both small and large phase and frequency drifts and also performs well at low SNR levels.
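    A minimal sketch of the underlying idea, not the authors' implementation: brute-force search for the frequency and zero-order phase offsets that maximize the normalized scalar product between a reference spectrum and the spectrum to be aligned. The synthetic "spectra", axis and search grids below are invented.

```python
import numpy as np

def align_spectrum(ref, spec, freq_shifts, phase_shifts, axis_hz):
    """Align `spec` to `ref` by maximizing their normalized scalar product.

    Sketch only: brute-force search over candidate frequency offsets (Hz)
    and zero-order phase offsets (rad); real pipelines refine this further.
    """
    best = (-np.inf, 0.0, 0.0)
    for f0 in freq_shifts:
        # resample the spectrum shifted by f0 onto the common axis
        shifted = np.interp(axis_hz, axis_hz + f0, spec.real) \
                  + 1j * np.interp(axis_hz, axis_hz + f0, spec.imag)
        for p0 in phase_shifts:
            cand = shifted * np.exp(1j * p0)
            score = np.real(np.vdot(ref, cand)) / (np.linalg.norm(ref) * np.linalg.norm(cand))
            if score > best[0]:
                best = (score, f0, p0)
    return best   # (correlation, frequency offset, phase offset)

# Toy example: a Gaussian "peak" that is frequency-shifted and dephased
axis = np.linspace(-50, 50, 2001)
ref = np.exp(-((axis - 3.0) ** 2) / 4.0).astype(complex)
spec = np.exp(-((axis - 5.5) ** 2) / 4.0) * np.exp(1j * 0.4)
print(align_spectrum(ref, spec, np.linspace(-5, 5, 101), np.linspace(-1, 1, 41), axis))
```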

  19. SOME NEW FINITE DIFFERENCE METHODS FOR HELMHOLTZ EQUATIONS ON IRREGULAR DOMAINS OR WITH INTERFACES

    PubMed Central

    Wan, Xiaohai; Li, Zhilin

    2012-01-01

    Solving a Helmholtz equation Δu + λu = f efficiently is a challenge for many applications. For example, the core part of many efficient solvers for the incompressible Navier-Stokes equations is to solve one or several Helmholtz equations. In this paper, two new finite difference methods are proposed for solving Helmholtz equations on irregular domains, or with interfaces. For Helmholtz equations on irregular domains, the accuracy of the numerical solution obtained using the existing augmented immersed interface method (AIIM) may deteriorate when the magnitude of λ is large. In our new method, we use a level set function to extend the source term and the PDE to a larger domain before we apply the AIIM. For Helmholtz equations with interfaces, a new maximum principle preserving finite difference method is developed. The new method still uses the standard five-point stencil with modifications of the finite difference scheme at irregular grid points. The resulting coefficient matrix of the linear system of finite difference equations satisfies the sign property of the discrete maximum principle and can be solved efficiently using a multigrid solver. The finite difference method is also extended to handle temporal discretized equations where the solution coefficient λ is inversely proportional to the mesh size. PMID:22701346

  20. SOME NEW FINITE DIFFERENCE METHODS FOR HELMHOLTZ EQUATIONS ON IRREGULAR DOMAINS OR WITH INTERFACES.

    PubMed

    Wan, Xiaohai; Li, Zhilin

    2012-06-01

    Solving a Helmholtz equation Δu + λu = f efficiently is a challenge for many applications. For example, the core part of many efficient solvers for the incompressible Navier-Stokes equations is to solve one or several Helmholtz equations. In this paper, two new finite difference methods are proposed for solving Helmholtz equations on irregular domains, or with interfaces. For Helmholtz equations on irregular domains, the accuracy of the numerical solution obtained using the existing augmented immersed interface method (AIIM) may deteriorate when the magnitude of λ is large. In our new method, we use a level set function to extend the source term and the PDE to a larger domain before we apply the AIIM. For Helmholtz equations with interfaces, a new maximum principle preserving finite difference method is developed. The new method still uses the standard five-point stencil with modifications of the finite difference scheme at irregular grid points. The resulting coefficient matrix of the linear system of finite difference equations satisfies the sign property of the discrete maximum principle and can be solved efficiently using a multigrid solver. The finite difference method is also extended to handle temporal discretized equations where the solution coefficient λ is inversely proportional to the mesh size.
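    For orientation, the regular-grid baseline that both records build on can be sketched as a standard five-point finite-difference Helmholtz solve with homogeneous Dirichlet boundaries; the level-set extension, interface corrections and maximum-principle-preserving modifications described above are not reproduced, and the grid size and λ below are arbitrary.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def helmholtz_dirichlet(n, lam, f):
    """Five-point finite-difference solve of u_xx + u_yy + lam*u = f on the
    unit square with homogeneous Dirichlet boundaries (regular grid only)."""
    h = 1.0 / (n + 1)
    I = sp.identity(n, format="csr")
    T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr")
    A = (sp.kron(I, T) + sp.kron(T, I)) / h**2 + lam * sp.identity(n * n, format="csr")
    return spla.spsolve(A.tocsc(), f.ravel()).reshape(n, n)

# Manufactured solution u = sin(pi x) sin(pi y), so f = (lam - 2*pi^2) * u
n, lam = 63, 10.0
x = np.linspace(0, 1, n + 2)[1:-1]
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = (lam - 2 * np.pi**2) * u_exact
u = helmholtz_dirichlet(n, lam, f)
print("max error:", float(np.abs(u - u_exact).max()))   # O(h^2) truncation error
```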

  1. Comparison of three explicit multigrid methods for the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.; Turkel, Eli; Schaffer, Steve

    1987-01-01

    Three explicit multigrid methods, Ni's method, Jameson's finite-volume method, and a finite-difference method based on Brandt's work, are described and compared for two model problems. All three methods use an explicit multistage Runge-Kutta scheme on the fine grid, and this scheme is also described. Convergence histories for inviscid flow over a bump in a channel for the fine-grid scheme alone show that convergence rate is proportional to Courant number and that implicit residual smoothing can significantly accelerate the scheme. Ni's method was slightly slower than the implicitly-smoothed scheme alone. Brandt's and Jameson's methods are shown to be equivalent in form but differ in their node versus cell-centered implementations. They are about 8.5 times faster than Ni's method in terms of CPU time. Results for an oblique shock/boundary layer interaction problem verify the accuracy of the finite-difference code. All methods slowed considerably on the stretched viscous grid but Brandt's method was still 2.1 times faster than Ni's method.

  2. Chosen interval methods for solving linear interval systems with special type of matrix

    NASA Astrophysics Data System (ADS)

    Szyszka, Barbara

    2013-10-01

    The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix. This kind of matrix, a band matrix with a parameter, is obtained from a finite difference problem. Such linear systems occur when solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) by the central-difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; therefore, the presented linear interval systems contain elements that determine the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they introduce no method error. All calculations were performed in floating-point interval arithmetic.

  3. Taste intensities of ten vegetables commonly consumed in the Netherlands.

    PubMed

    van Stokkom, V L; Teo, P S; Mars, M; de Graaf, C; van Kooten, O; Stieger, M

    2016-09-01

    Bitterness has been suggested to be the main reason for the limited palatability of several vegetables. Vegetable acceptance has been associated with preparation method. However, the taste intensity of a variety of vegetables prepared by different methods has not been studied yet. The objective of this study is to assess the intensity of the five basic tastes and fattiness of ten vegetables commonly consumed in the Netherlands prepared by different methods using the modified Spectrum method. Intensities of sweetness, sourness, bitterness, umami, saltiness and fattiness were assessed for ten vegetables (cauliflower, broccoli, leek, carrot, onion, red bell pepper, French beans, tomato, cucumber and iceberg lettuce) by a panel (n=9) trained in a modified Spectrum method. Each vegetable was assessed prepared by different methods (raw, cooked, mashed and as a cold pressed juice). Spectrum based reference solutions were available with fixed reference points at 13.3mm (R1), 33.3mm (R2) and 66.7mm (R3) for each taste modality on a 100mm line scale. For saltiness, R1 and R3 differed (16.7mm and 56.7mm). Mean intensities of all taste modalities and fattiness for all vegetables were mostly below R1 (13.3mm). Significant differences (p<0.05) within vegetables between preparation methods were found. Sweetness was the most intensive taste, followed by sourness, bitterness, fattiness, umami and saltiness. In conclusion, all ten vegetables prepared by different methods showed low mean intensities of all taste modalities and fattiness. Preparation method affected taste and fattiness intensity and the effect differed by vegetable type. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Comparison of four different reduction methods for anterior dislocation of the shoulder.

    PubMed

    Guler, Olcay; Ekinci, Safak; Akyildiz, Faruk; Tirmik, Uzeyir; Cakmak, Selami; Ugras, Akin; Piskin, Ahmet; Mahirogullari, Mahir

    2015-05-28

    Shoulder dislocations account for almost 50% of all major joint dislocations and are mainly anterior. The aim was a comparative retrospective study of different reduction maneuvers, performed without anesthesia, for reducing the dislocated shoulder. Patients were treated with different reduction maneuvers, including various forms of traction and external rotation, in the emergency departments of four training hospitals between 2009 and 2012. Each of the four hospitals had a different treatment protocol for reduction, applying one of four maneuvers: the Spaso, Chair, Kocher, and Matsen methods. Thirty-nine patients were treated by the Spaso method, 47 by the Chair reduction method, 40 by the Kocher method, and 27 patients by Matsen's traction-countertraction method. All patients' demographic data were recorded. Dislocation number, reduction time, time interval between dislocation and reduction, and associated complications in the pre- and post-reduction period were recorded prospectively. No anesthetic method was used for the reduction. All of the methods used included traction and some external rotation. The Chair method had the shortest reduction time. All surgeons involved in the study agreed that the Kocher and Matsen methods needed more force for the reduction; patients could contract their muscles because of the pain in these two methods. The Spaso method includes flexion of the shoulder and blocks muscle contraction somewhat. The Chair method was found to be the easiest because the patients could not contract their muscles while sitting on a chair with the affected arm at their side. We suggest that the Chair method is an effective and fast reduction maneuver that may be an alternative for the treatment of anterior shoulder dislocations. Further prospective studies with larger sample sizes are needed to compare the safety of the different reduction techniques.

  5. Comparative study between recent methods manipulating ratio spectra and classical methods based on two-wavelength selection for the determination of binary mixture of antazoline hydrochloride and tetryzoline hydrochloride

    NASA Astrophysics Data System (ADS)

    Abdel-Halim, Lamia M.; Abd-El Rahman, Mohamed K.; Ramadan, Nesrin K.; EL Sanabary, Hoda F. A.; Salem, Maissa Y.

    2016-04-01

    A comparative study was carried out between two classical spectrophotometric methods (the dual wavelength method and Vierordt's method) and two recent methods manipulating ratio spectra (the ratio difference method and the first derivative of ratio spectra method) for the simultaneous determination of Antazoline hydrochloride (AN) and Tetryzoline hydrochloride (TZ) in their combined pharmaceutical formulation and in the presence of benzalkonium chloride as a preservative, without preliminary separation. The dual wavelength method depends on choosing two wavelengths for each drug such that the difference in absorbance at those two wavelengths is zero for the other drug. Vierordt's method is based upon measuring the absorbance and the absorptivity values of the two drugs at their λmax (248.0 and 219.0 nm for AN and TZ, respectively), followed by substitution into the corresponding Vierordt's equations. The recent methods manipulating ratio spectra depend on either measuring the difference in amplitudes of the ratio spectra between 255.5 and 269.5 nm for AN and between 220.0 and 273.0 nm for TZ, in the case of the ratio difference method, or computing the first derivative of the ratio spectra for each drug and then measuring the peak amplitude at 250.0 nm for AN and at 224.0 nm for TZ, in the case of first derivative of ratio spectrophotometry. The specificity of the developed methods was investigated by analyzing different laboratory-prepared mixtures of the two drugs. All methods were applied successfully to the determination of the selected drugs in their combined dosage form, proving that the classical spectrophotometric methods can still be used successfully in the analysis of a binary mixture with minimal data manipulation, whereas the recent methods require relatively more steps. Furthermore, validation of the proposed methods was performed according to ICH guidelines; accuracy, precision and repeatability were found to be within the acceptable limits. Statistical studies showed that the methods can be competitively applied in quality control laboratories.
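    The dual wavelength principle lends itself to a tiny numerical sketch: with synthetic Gaussian absorption bands standing in for the two drugs (band positions, widths and concentrations invented, not the paper's 248.0/219.0 nm system), choosing two wavelengths at which the interferent absorbs equally makes the absorbance difference proportional to the analyte concentration alone.

```python
import numpy as np

# Synthetic, invented spectra: Gaussian bands standing in for the two drugs.
wl = np.linspace(200, 320, 1201)                         # wavelength grid, nm
eps_AN = np.exp(-((wl - 250.0) ** 2) / (2 * 12.0 ** 2))  # analyte, per unit concentration
eps_TZ = np.exp(-((wl - 270.0) ** 2) / (2 * 18.0 ** 2))  # interferent, per unit concentration

# Dual-wavelength principle: pick two wavelengths where the interferent
# absorbs equally, so its contribution cancels in the absorbance difference.
w1, w2 = 252.0, 288.0
i1, i2 = np.argmin(np.abs(wl - w1)), np.argmin(np.abs(wl - w2))
assert abs(eps_TZ[i1] - eps_TZ[i2]) < 1e-3               # chosen so this (nearly) holds

c_AN, c_TZ = 0.8, 1.5                                    # arbitrary concentrations
mixture = c_AN * eps_AN + c_TZ * eps_TZ                  # Beer's-law additivity
dA = mixture[i1] - mixture[i2]                           # interferent cancels here
print(round(dA / (eps_AN[i1] - eps_AN[i2]), 3))          # recovers c_AN = 0.8
```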

  6. Comparison of lipid and calorie loss from donor human milk among 3 methods of simulated gavage feeding: one-hour, 2-hour, and intermittent gravity feedings.

    PubMed

    Brooks, Christine; Vickers, Amy Manning; Aryal, Subhash

    2013-04-01

    The objective of this study was to compare the differences in lipid loss from 24 samples of banked donor human milk (DHM) among 3 feeding methods: DHM given by syringe pump over 1 hour, 2 hours, and by bolus/gravity gavage. Comparative, descriptive. There were no human subjects. Twenty-four samples of 8 oz of DHM were divided into four 60-mL aliquots. Timed feedings were given by Medfusion 2001 syringe pumps with syringes connected to narrow-lumened extension sets designed for enteral feedings and connected to standard silastic enteral feeding tubes. Gravity feedings were given using the identical syringes connected to the same silastic feeding tubes. All aliquots were analyzed with the York Dairy Analyzer. Univariate repeated-measures analyses of variance were used for the omnibus testing for overall differences between the feeding methods. Lipid content expressed as grams per deciliter at the end of each feeding method was compared with the prefed control samples using the Dunnett's test. The Tukey correction was used for other pairwise multiple comparisons. The univariate repeated-measures analysis of variance conducted to test for overall differences between feeding methods showed a significant difference between the methods (F = 58.57, df = 3, 69, P < .0001). Post hoc analysis using the Dunnett's approach revealed that there was a significant difference in fat content between the control sample and the 1-hour and 2-hours feeding methods (P < .0001), but we did not find any significant difference in fat content between the control and the gravity feeding methods (P = .3296). Pairwise comparison using the Tukey correction revealed a significant difference between both gravity and 1-hour feeding methods (P < .0001), and gravity and 2-hour feeding method (P < .0001). There was no significant difference in lipid content between the 1-hour and 2-hour feeding methods (P = .2729). Unlike gravity feedings, the timed feedings resulted in a statistically significant loss of fat as compared with their controls. These findings should raise questions about how those infants in the neonatal intensive care unit are routinely gavage fed.

  7. Evaluating the Effects of Differences in Group Abilities on the Tucker and the Levine Observed-Score Methods for Common-Item Nonequivalent Groups Equating. ACT Research Report Series 2010-1

    ERIC Educational Resources Information Center

    Chen, Hanwei; Cui, Zhongmin; Zhu, Rongchun; Gao, Xiaohong

    2010-01-01

    The most critical feature of a common-item nonequivalent groups equating design is that the average score difference between the new and old groups can be accurately decomposed into a group ability difference and a form difficulty difference. Two widely used observed-score linear equating methods, the Tucker and the Levine observed-score methods,…

  8. Microphone Array

    NASA Astrophysics Data System (ADS)

    Bader, Rolf

    This chapter deals with microphone arrays. It is arranged according to the different methods available, proceeding through the different problems and the corresponding mathematical methods. After discussing general properties of different array types, such as plane arrays, spherical arrays, or scanning arrays, it proceeds to the signal processing tools most used in speech processing. In the third section, backpropagating methods based on the Helmholtz-Kirchhoff integral are discussed, which yield spatial radiation patterns of vibrating bodies or air.

  9. Kinds of access: different methods for report reveal different kinds of metacognitive access

    PubMed Central

    Overgaard, Morten; Sandberg, Kristian

    2012-01-01

    In experimental investigations of consciousness, participants are asked to reflect upon their own experiences by issuing reports about them in different ways. For this reason, a participant needs some access to the content of her own conscious experience in order to report. In such experiments, the reports typically consist of some variety of ratings of confidence or direct descriptions of one's own experiences. Whereas different methods of reporting are typically used interchangeably, recent experiments indicate that different results are obtained with different kinds of reporting. We argue that there is not only a theoretical, but also an empirical difference between different methods of reporting. We hypothesize that differences in the sensitivity of different scales may reveal that different types of access are used to issue direct reports about experiences and metacognitive reports about the classification process. PMID:22492747

  10. Impact of enumeration method on diversity of Escherichia coli genotypes isolated from surface water.

    PubMed

    Martin, E C; Gentry, T J

    2016-11-01

    There are numerous regulatory-approved Escherichia coli enumeration methods, but it is not known whether differences in media composition and incubation conditions impact the diversity of E. coli populations detected by these methods. A study was conducted to determine if three standard water quality assessments, Colilert ® , USEPA Method 1603, (modified mTEC) and USEPA Method 1604 (MI), detect different populations of E. coli. Samples were collected from six watersheds and analysed using the three enumeration approaches followed by E. coli isolation and genotyping. Results indicated that the three methods generally produced similar enumeration data across the sites, although there were some differences on a site-by-site basis. The Colilert ® method consistently generated the least diverse collection of E. coli genotypes as compared to modified mTEC and MI, with those two methods being roughly equal to each other. Although the three media assessed in this study were designed to enumerate E. coli, the differences in the media composition, incubation temperature, and growth platform appear to have a strong selective influence on the populations of E. coli isolated. This study suggests that standardized methods of enumeration and isolation may be warranted if researchers intend to obtain individual E. coli isolates for further characterization. This study characterized the impact of three USEPA-approved Escherichia coli enumeration methods on observed E. coli population diversity in surface water samples. Results indicated that these methods produced similar E. coli enumeration data but were more variable in the diversity of E. coli genotypes observed. Although the three methods enumerate the same species, differences in media composition, growth platform, and incubation temperature likely contribute to the selection of different cultivable populations of E. coli, and thus caution should be used when implementing these methods interchangeably for downstream applications which require cultivated isolates. © 2016 The Society for Applied Microbiology.

  11. A seismic analysis for masonry constructions: The different schematization methods of masonry walls

    NASA Astrophysics Data System (ADS)

    Olivito, Renato. S.; Codispoti, Rosamaria; Scuro, Carmelo

    2017-11-01

    The seismic behaviour of masonry structures is usually analyzed with structural calculation software based on the equivalent frame method or the macro-element method. In these approaches, the masonry walls are divided into vertical elements (the masonry walls proper) and horizontal elements (the so-called spandrel elements), interconnected by rigid nodes. The aim of this work is to make a critical comparison between different schematization methods for masonry walls, underlining the structural importance of the spandrel elements. In order to implement the methods, two different structural calculation programs were used and an existing masonry building was examined.

  12. Mass, height of burst, and source–receiver distance constraints on the acoustic coda phase delay method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, Sarah; Bowman, Daniel; Rodgers, Arthur

    Here, this research uses the acoustic coda phase delay method to estimate relative changes in air temperature between explosions with varying event masses and heights of burst. It also places a bound on source–receiver distance for the method. Previous studies used events with different shapes, height of bursts, and masses and recorded the acoustic codas at source–receiver distances less than 1 km. This research further explores the method using explosions that differ in mass (by up to an order of magnitude) and are placed at varying heights. Source–receiver distances also cover an area out to 7 km. Relative air temperature change estimates are compared to complementary meteorological observations. Results show that two explosions that differ by an order of magnitude cannot be used with this method because their propagation times in the near field and their fundamental frequencies are different. These differences are expressed as inaccuracies in the relative air temperature change estimates. An order of magnitude difference in mass is also shown to bias estimates higher. Small differences in height of burst do not affect the accuracy of the method. Finally, an upper bound of 1 km on source–receiver distance is provided based on the standard deviation characteristics of the estimates.

  13. Mass, height of burst, and source–receiver distance constraints on the acoustic coda phase delay method

    DOE PAGES

    Albert, Sarah; Bowman, Daniel; Rodgers, Arthur; ...

    2018-04-23

    Here, this research uses the acoustic coda phase delay method to estimate relative changes in air temperature between explosions with varying event masses and heights of burst. It also places a bound on source–receiver distance for the method. Previous studies used events with different shapes, height of bursts, and masses and recorded the acoustic codas at source–receiver distances less than 1 km. This research further explores the method using explosions that differ in mass (by up to an order of magnitude) and are placed at varying heights. Source–receiver distances also cover an area out to 7 km. Relative air temperature change estimates are compared to complementary meteorological observations. Results show that two explosions that differ by an order of magnitude cannot be used with this method because their propagation times in the near field and their fundamental frequencies are different. These differences are expressed as inaccuracies in the relative air temperature change estimates. An order of magnitude difference in mass is also shown to bias estimates higher. Small differences in height of burst do not affect the accuracy of the method. Finally, an upper bound of 1 km on source–receiver distance is provided based on the standard deviation characteristics of the estimates.

  14. An Unsupervised Change Detection Method Using Time-Series of PolSAR Images from Radarsat-2 and GaoFen-3.

    PubMed

    Liu, Wensong; Yang, Jie; Zhao, Jinqi; Shi, Hongtao; Yang, Le

    2018-02-12

    Traditional unsupervised change detection methods based on the pixel level can only detect changes between two different times acquired with the same sensor, and the results are easily affected by speckle noise. In this paper, a novel method is proposed to detect change based on time-series data from different sensors. Firstly, the overall difference image of the time-series PolSAR data is calculated by the omnibus test statistic, and difference images between any two images at different times are acquired by the R_j test statistic. Secondly, the difference images are segmented with a Generalized Statistical Region Merging (GSRM) algorithm, which can suppress the effect of speckle noise. A Generalized Gaussian Mixture Model (GGMM) is then used to obtain the time-series change detection maps in the final step of the proposed method. To verify the effectiveness of the proposed method, we carried out a change detection experiment using time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, China. The results show that the proposed method can not only detect time-series changes from different sensors, but can also better suppress the influence of speckle noise and improve the overall accuracy and Kappa coefficient.

  15. Wavefront reconstruction for multi-lateral shearing interferometry using difference Zernike polynomials fitting

    NASA Astrophysics Data System (ADS)

    Liu, Ke; Wang, Jiannian; Wang, Hai; Li, Yanqiu

    2018-07-01

    For multi-lateral shearing interferometers (multi-LSIs), the measurement accuracy can be enhanced by estimating the wavefront under test from the multidirectional phase information encoded in the shearing interferogram. Usually, multi-LSIs reconstruct the test wavefront from the phase derivatives in multiple directions using the discrete Fourier transform (DFT) method, which is only suitable for small shear ratios and is relatively sensitive to noise. To improve the accuracy of multi-LSIs, wavefront reconstruction from the multidirectional phase differences using the difference Zernike polynomials fitting (DZPF) method is proposed in this paper. For the DZPF method applied in the quadriwave LSI, difference Zernike polynomials in only two orthogonal shear directions are required to represent the phase differences in multiple shear directions. In this way, the test wavefront can be reconstructed from the phase differences in multiple shear directions using a noise-variance-weighted least-squares method with almost no extra computational burden, compared with the usual recovery from the phase differences in two orthogonal directions. Numerical simulation results show that the DZPF method maintains high reconstruction accuracy over a wider range of shear ratios and has much better anti-noise performance than the DFT method. A null test experiment with the quadriwave LSI has been conducted, and the experimental results show that the measurement accuracy of the quadriwave LSI can be improved from 0.0054 λ rms to 0.0029 λ rms (λ = 632.8 nm) by substituting the proposed DZPF method for the DFT method in the wavefront reconstruction process.
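    The fitting step can be illustrated with a simplified sketch: express the sheared phase differences in two orthogonal directions as differences of basis polynomials and solve for the modal coefficients by least squares. The basis below is a small hand-written stand-in for Zernike polynomials, the shear and grid are arbitrary, and the noise-variance weighting of the paper is omitted.

```python
import numpy as np

# A small polynomial basis standing in for low-order Zernike terms
# (tilt x, tilt y, defocus-like, astigmatism-like); choices are illustrative only.
basis = [
    lambda x, y: x,
    lambda x, y: y,
    lambda x, y: 2 * (x**2 + y**2) - 1,
    lambda x, y: x**2 - y**2,
    lambda x, y: 2 * x * y,
]

def fit_from_shears(dx_map, dy_map, x, y, s):
    """Difference-polynomial fitting: recover modal coefficients of a wavefront
    from its sheared differences in two orthogonal directions (least squares)."""
    cols_x = [b(x + s, y) - b(x, y) for b in basis]
    cols_y = [b(x, y + s) - b(x, y) for b in basis]
    A = np.vstack([np.column_stack([c.ravel() for c in cols_x]),
                   np.column_stack([c.ravel() for c in cols_y])])
    rhs = np.concatenate([dx_map.ravel(), dy_map.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return coeffs

# Simulate shear differences of a known wavefront and recover its coefficients
n, s = 64, 0.05
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
true = np.array([0.3, -0.1, 0.8, 0.2, -0.4])
W = lambda X, Y: sum(c * b(X, Y) for c, b in zip(true, basis))
dx_map = W(X + s, Y) - W(X, Y)
dy_map = W(X, Y + s) - W(X, Y)
print(np.round(fit_from_shears(dx_map, dy_map, X, Y, s), 3))   # ~ the true coefficients
```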

  16. Formulation and application of optimal homotopy asymptotic method to coupled differential-difference equations.

    PubMed

    Ullah, Hakeem; Islam, Saeed; Khan, Ilyas; Shafie, Sharidan; Fiza, Mehreen

    2015-01-01

    In this paper we applied a new analytic approximation technique, the Optimal Homotopy Asymptotic Method (OHAM), to the treatment of coupled differential-difference equations (DDEs). To assess the efficiency and reliability of the method, we consider the Relativistic Toda coupled nonlinear differential-difference equation. The method provides a convenient way to control the convergence of the approximate solutions when compared with other solution methods found in the literature. The obtained solutions show that OHAM is effective, simpler, easier to apply and explicit.

  17. Formulation and Application of Optimal Homotopy Asymptotic Method to Coupled Differential-Difference Equations

    PubMed Central

    Ullah, Hakeem; Islam, Saeed; Khan, Ilyas; Shafie, Sharidan; Fiza, Mehreen

    2015-01-01

    In this paper we applied a new analytic approximation technique, the Optimal Homotopy Asymptotic Method (OHAM), to the treatment of coupled differential-difference equations (DDEs). To assess the efficiency and reliability of the method, we consider the Relativistic Toda coupled nonlinear differential-difference equation. The method provides a convenient way to control the convergence of the approximate solutions when compared with other solution methods found in the literature. The obtained solutions show that OHAM is effective, simpler, easier to apply and explicit. PMID:25874457

  18. Cryopreservation of in vitro grown shoot tips of Diospyros kaki thunb. using different methods.

    PubMed

    Niu, Y L; Luo, Z R; Zhang, Y F; Zhang, Q L

    2012-01-01

    The objective of this study was to compare the potential of different cryopreservation strategies for in vitro shoot tips of Diospyros kaki Thunb. The treatments consisted of three different cryopreservation methods: vitrification, droplet-vitrification and modified droplet-vitrification. The following variables were assessed: cold acclimation, sucrose concentration in the preculture medium and PVS2 treatment time. A higher average survival level was obtained using the modified droplet-vitrification method compared to the other two methods.

  19. Nicotine Metabolite Ratio (3-hydroxycotinine/cotinine) in Plasma and Urine by Different Analytical Methods and Laboratories: Implications for Clinical Implementation

    PubMed Central

    Tanner, Julie-Anne; Novalen, Maria; Jatlow, Peter; Huestis, Marilyn A.; Murphy, Sharon E.; Kaprio, Jaakko; Kankaanpää, Aino; Galanti, Laurence; Stefan, Cristiana; George, Tony P.; Benowitz, Neal L.; Lerman, Caryn; Tyndale, Rachel F.

    2015-01-01

    Background The highly genetically variable enzyme CYP2A6 metabolizes nicotine to cotinine (COT) and COT to trans-3′-hydroxycotinine (3HC). The nicotine metabolite ratio (NMR, 3HC/COT) is commonly used as a biomarker of CYP2A6 enzymatic activity, rate of nicotine metabolism, and total nicotine clearance; NMR is associated with numerous smoking phenotypes, including smoking cessation. Our objective was to investigate the impact of different measurement methods, at different sites, on plasma and urinary NMR measures from ad libitum smokers. Methods Plasma (n=35) and urine (n=35) samples were sent to eight different laboratories, which employed similar and different methods of COT and 3HC measurements to derive the NMR. We used Bland-Altman analysis to assess agreement, and Pearson correlations to evaluate associations, between NMR measured by different methods. Results Measures of plasma NMR were in strong agreement between methods according to Bland-Altman analysis (ratios 0.82–1.16) and were highly correlated (all Pearson r>0.96, P<0.0001). Measures of urinary NMR were in relatively weaker agreement (ratios 0.62–1.71) and less strongly correlated (Pearson r values of 0.66–0.98, P<0.0001) between different methods. Plasma and urinary COT and 3HC concentrations, while weaker than NMR, also showed good agreement in plasma, which was better than in urine, as was observed for NMR. Conclusions Plasma is a very reliable biological source for the determination of NMR, robust to differences in these analytical protocols or assessment site. Impact Together this indicates a reduced need for differential interpretation of plasma NMR results based on the approach used, allowing for direct comparison of different studies. PMID:26014804
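    The agreement analysis described above can be sketched in a few lines: Bland-Altman bias and 95% limits of agreement plus a Pearson correlation for paired NMR values. The sketch uses simple paired differences rather than the ratio-based agreement the paper reports, and the paired values below are invented.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Invented paired NMR values from two hypothetical assays of the same samples
rng = np.random.default_rng(4)
nmr_lab1 = rng.lognormal(mean=-1.0, sigma=0.5, size=35)
nmr_lab2 = nmr_lab1 * rng.normal(1.02, 0.05, size=35)    # small proportional offset

print("bias, lower LoA, upper LoA:", np.round(bland_altman(nmr_lab1, nmr_lab2), 4))
print("Pearson r:", round(np.corrcoef(nmr_lab1, nmr_lab2)[0, 1], 3))
```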

  20. Methods for specifying the target difference in a randomised controlled trial: the Difference ELicitation in TriAls (DELTA) systematic review.

    PubMed

    Hislop, Jenni; Adewuyi, Temitope E; Vale, Luke D; Harrild, Kirsten; Fraser, Cynthia; Gurung, Tara; Altman, Douglas G; Briggs, Andrew H; Fayers, Peter; Ramsay, Craig R; Norrie, John D; Harvey, Ian M; Buckley, Brian; Cook, Jonathan A

    2014-05-01

    Randomised controlled trials (RCTs) are widely accepted as the preferred study design for evaluating healthcare interventions. When the sample size is determined, a (target) difference is typically specified that the RCT is designed to detect. This provides reassurance that the study will be informative, i.e., should such a difference exist, it is likely to be detected with the required statistical precision. The aim of this review was to identify potential methods for specifying the target difference in an RCT sample size calculation. A comprehensive systematic review of medical and non-medical literature was carried out for methods that could be used to specify the target difference for an RCT sample size calculation. The databases searched were MEDLINE, MEDLINE In-Process, EMBASE, the Cochrane Central Register of Controlled Trials, the Cochrane Methodology Register, PsycINFO, Science Citation Index, EconLit, the Education Resources Information Center (ERIC), and Scopus (for in-press publications); the search period was from 1966 or the earliest date covered, to between November 2010 and January 2011. Additionally, textbooks addressing the methodology of clinical trials and International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) tripartite guidelines for clinical trials were also consulted. A narrative synthesis of methods was produced. Studies that described a method that could be used for specifying an important and/or realistic difference were included. The search identified 11,485 potentially relevant articles from the databases searched. Of these, 1,434 were selected for full-text assessment, and a further nine were identified from other sources. Fifteen clinical trial textbooks and the ICH tripartite guidelines were also reviewed. In total, 777 studies were included, and within them, seven methods were identified-anchor, distribution, health economic, opinion-seeking, pilot study, review of the evidence base, and standardised effect size. A variety of methods are available that researchers can use for specifying the target difference in an RCT sample size calculation. Appropriate methods may vary depending on the aim (e.g., specifying an important difference versus a realistic difference), context (e.g., research question and availability of data), and underlying framework adopted (e.g., Bayesian versus conventional statistical approach). Guidance on the use of each method is given. No single method provides a perfect solution for all contexts.

  1. Comparing Indirect Effects in Different Groups in Single-Group and Multi-Group Structural Equation Models

    PubMed Central

    Ryu, Ehri; Cheong, Jeewon

    2017-01-01

    In this article, we evaluated the performance of statistical methods in single-group and multi-group analysis approaches for testing group difference in indirect effects and for testing simple indirect effects in each group. We also investigated whether the performance of the methods in the single-group approach was affected when the assumption of equal variance was not satisfied. The assumption was critical for the performance of the two methods in the single-group analysis: the method using a product term for testing the group difference in a single path coefficient, and the Wald test for testing the group difference in the indirect effect. Bootstrap confidence intervals in the single-group approach and all methods in the multi-group approach were not affected by the violation of the assumption. We compared the performance of the methods and provided recommendations. PMID:28553248
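    One of the single-group tests discussed above, the Wald test for a group difference in the indirect effect, can be sketched with delta-method standard errors for each group's a·b product; the path estimates and standard errors below are invented, and independent groups are assumed.

```python
import numpy as np
from scipy import stats

def indirect_se(a, se_a, b, se_b):
    """First-order (Sobel/delta-method) standard error of the indirect effect a*b."""
    return np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Invented path estimates (a: X->M, b: M->Y) and standard errors for two groups
a1, se_a1, b1, se_b1 = 0.40, 0.08, 0.35, 0.07
a2, se_a2, b2, se_b2 = 0.25, 0.09, 0.20, 0.08

ind1, ind2 = a1 * b1, a2 * b2
se1 = indirect_se(a1, se_a1, b1, se_b1)
se2 = indirect_se(a2, se_a2, b2, se_b2)

# Wald z for the group difference in indirect effects (independent groups assumed)
z = (ind1 - ind2) / np.sqrt(se1**2 + se2**2)
p = 2 * stats.norm.sf(abs(z))
print(round(ind1 - ind2, 3), round(z, 2), round(p, 3))
```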

  2. Estimation of the impacts of different homogenization approaches on the variability of temperature series in Catalonia (North Eastern-Spain), Andorra and South Eastern - France. An experiment under the umbrella of the HOME-COST action.

    NASA Astrophysics Data System (ADS)

    Aguilar, E.; Prohom, M.; Mestre, O.; Esteban, P.; Kuglitsch, F. G.; Gruber, C.; Herrero, M.

    2008-12-01

    The almost unanimously accepted fact of climate change has led many scientists to investigate the seasonal and interannual variability and change in instrumental climatic records. Unfortunately, these records are nearly always affected by homogeneity problems caused by changes in the station or its environment. The European Cooperation in the Field of Scientific and Technical Research (COST) is sponsoring the action COST-ES0601: Advances in homogenisation methods of climate series: an integrated approach (HOME), which aims, amongst others, to investigate the impacts of different homogenisation approaches on the observed data series. In this work, we apply different detection/correction methods (SNHT, RhTest, Caussinus-Mestre, Vincent Interpolation Method, HOM Method) to annual, seasonal, monthly and daily data of a multi-country quality-controlled dataset (17 stations in Catalonia (NE Spain), 3 stations in Andorra and 11 stations in SE France). The different outputs are analysed and the differences in the final series are studied. After this experiment, we can state that, although all the applied methods improve the homogeneity of the original series, the conclusions extracted from the analysis of the homogenised annual, seasonal and monthly data and of the extreme indices derived from daily data show important differences. As an example, some methods (SNHT) tend to detect fewer breakpoints than others (Caussinus-Mestre). Even if metadata or a pre-identified list of breakpoints is available, the correction factors calculated by the different approaches differ on the annual, seasonal, monthly and daily scales. In the latter case, some methods such as HOM, based on modelling a candidate series against a reference series, present a richer solution than others based on the mere interpolation of monthly factors (Vincent Method), although the former are not always applicable due to a lack of good reference stations. In order to identify the best performing method (or suite of methods), the COST-HOME action is conducting intensive testing of the different homogenisation methods on simulated, surrogate and real series. At the end of the action (2011), we expect to present a significant contribution to a better evaluation of seasonal and interannual variability and change.

  3. A Preliminary Study of the Effectiveness of Different Recitation Teaching Methods

    NASA Astrophysics Data System (ADS)

    Endorf, Robert J.; Koenig, Kathleen M.; Braun, Gregory A.

    2006-02-01

    We present preliminary results from a comparative study of student understanding for students who attended recitation classes which used different teaching methods. Student volunteers from our introductory calculus-based physics course attended a special recitation class that was taught using one of four different teaching methods. A total of 272 students were divided into approximately equal groups for each method. Students in each class were taught the same topic, "Changes in energy and momentum," from Tutorials in Introductory Physics. The different teaching methods varied in the amount of student and teacher engagement. Student understanding was evaluated through pretests and posttests given at the recitation class. Our results demonstrate the importance of the instructor's role in teaching recitation classes. The most effective teaching method was for students working in cooperative learning groups with the instructors questioning the groups using Socratic dialogue. These results provide guidance and evidence for the teaching methods which should be emphasized in training future teachers and faculty members.

  4. Comparison study of two procedures for the determination of emamectin benzoate in medicated fish feed.

    PubMed

    Farer, Leslie J; Hayes, John M

    2005-01-01

    A new method has been developed for the determination of emamectin benzoate in fish feed. The method uses a wet extraction, cleanup by solid-phase extraction, and quantitation and separation by liquid chromatography (LC). In this paper, we compare the performance of this method with that of a previously reported LC assay for the determination of emamectin benzoate in fish feed. Although similar to the previous method, the new procedure uses a different sample pretreatment, wet extraction, and quantitation method. The performance of the new method was compared with that of the previously reported method by analyses of 22 medicated feed samples from various commercial sources. A comparison of the results presented here reveals slightly lower assay values obtained with the new method. Although a paired sample t-test indicates the difference in results is significant, this difference is within the method precision of either procedure.

  5. An improved algorithm of mask image dodging for aerial image

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Zou, Songbai; Zuo, Zhiqi

    2011-12-01

    The technology of Mask image dodging based on the Fourier transform is a good algorithm for removing uneven luminance within a single image. At present, the difference method and the ratio method are the methods in common use, but both have their own defects. For example, the difference method can keep the brightness of the whole image uniform, but it is deficient in local contrast; meanwhile, the ratio method works better for local contrast, but it sometimes makes the dark areas of the original image too bright. In order to remove the defects of the two methods effectively, this paper, based on a study of both methods, proposes a balanced solution. Experiments show that the scheme not only combines the advantages of the difference method and the ratio method, but also avoids the deficiencies of the two algorithms.
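
    A minimal sketch of the balance being described, using a Gaussian blur as a stand-in for the Fourier-based Mask background estimate; the function name and parameter values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mask_dodge(image, sigma=50.0, alpha=0.5):
    """Even out illumination by blending the difference and ratio corrections.

    image : 2D float array in [0, 1]
    sigma : scale of the low-pass background estimate (stand-in for an FFT low-pass)
    alpha : 1.0 -> pure difference method, 0.0 -> pure ratio method
    """
    background = gaussian_filter(image, sigma)            # uneven-luminance estimate
    target = background.mean()                            # desired uniform brightness

    diff_corrected = image - background + target          # difference method
    ratio_corrected = image * (target / np.maximum(background, 1e-6))  # ratio method

    # Balanced solution: weighted combination of the two corrections.
    return np.clip(alpha * diff_corrected + (1 - alpha) * ratio_corrected, 0.0, 1.0)
```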

  6. Body Segment Differences in Surface Area, Skin Temperature and 3D Displacement and the Estimation of Heat Balance during Locomotion in Hominins

    PubMed Central

    Cross, Alan; Collard, Mark; Nelson, Andrew

    2008-01-01

    The conventional method of estimating heat balance during locomotion in humans and other hominins treats the body as an undifferentiated mass. This is problematic because the segments of the body differ with respect to several variables that can affect thermoregulation. Here, we report a study that investigated the impact on heat balance during locomotion of inter-segment differences in three of these variables: surface area, skin temperature and rate of movement. The approach adopted in the study was to generate heat balance estimates with the conventional method and then compare them with heat balance estimates generated with a method that takes into account inter-segment differences in surface area, skin temperature and rate of movement. We reasoned that, if the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement affect heat balance during locomotion is correct, the estimates yielded by the two methods should be statistically significantly different. Anthropometric data were collected on seven adult male volunteers. The volunteers then walked on a treadmill at 1.2 m/s while 3D motion capture cameras recorded their movements. Next, the conventional and segmented methods were used to estimate the volunteers' heat balance while walking in four ambient temperatures. Lastly, the estimates produced with the two methods were compared with the paired t-test. The estimates of heat balance during locomotion yielded by the two methods are significantly different. Those yielded by the segmented method are significantly lower than those produced by the conventional method. Accordingly, the study supports the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement impact heat balance during locomotion. This has important implications not only for current understanding of heat balance during locomotion in hominins but also for how future research on this topic should be approached. PMID:18560580

  7. Body segment differences in surface area, skin temperature and 3D displacement and the estimation of heat balance during locomotion in hominins.

    PubMed

    Cross, Alan; Collard, Mark; Nelson, Andrew

    2008-06-18

    The conventional method of estimating heat balance during locomotion in humans and other hominins treats the body as an undifferentiated mass. This is problematic because the segments of the body differ with respect to several variables that can affect thermoregulation. Here, we report a study that investigated the impact on heat balance during locomotion of inter-segment differences in three of these variables: surface area, skin temperature and rate of movement. The approach adopted in the study was to generate heat balance estimates with the conventional method and then compare them with heat balance estimates generated with a method that takes into account inter-segment differences in surface area, skin temperature and rate of movement. We reasoned that, if the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement affect heat balance during locomotion is correct, the estimates yielded by the two methods should be statistically significantly different. Anthropometric data were collected on seven adult male volunteers. The volunteers then walked on a treadmill at 1.2 m/s while 3D motion capture cameras recorded their movements. Next, the conventional and segmented methods were used to estimate the volunteers' heat balance while walking in four ambient temperatures. Lastly, the estimates produced with the two methods were compared with the paired t-test. The estimates of heat balance during locomotion yielded by the two methods are significantly different. Those yielded by the segmented method are significantly lower than those produced by the conventional method. Accordingly, the study supports the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement impact heat balance during locomotion. This has important implications not only for current understanding of heat balance during locomotion in hominins but also for how future research on this topic should be approached.

  8. Effect of different drying methods on the composition of steviol glycosides in Stevia rebaudiana Bertoni leaves

    NASA Astrophysics Data System (ADS)

    Aranda-González, Irma; Betancur-Ancona, David; Chel-Guerrero, Luis; Moguel-Ordóñez, Yolanda

    2017-01-01

    Drying techniques can modify the composition of certain plant compounds. Therefore, the aim of the study was to assess the effect of different drying methods on steviol glycosides in Stevia rebaudiana Bertoni leaves. Four different drying methods were applied to Stevia rebaudiana Bertoni leaves, which were then subjected to aqueous extraction. Radiation or convection drying was performed in stoves at 60°C, whereas shade or sun drying methods were applied at 29.7°C and 70% relative humidity. Stevioside, rebaudioside A, rebaudioside B, rebaudioside C, rebaudioside D, dulcoside A, and steviolbioside were quantified by a validated HPLC method. Among steviol glycosides, the content (g per 100 g dry basis) of stevioside, rebaudioside A, rebaudioside B, and rebaudioside C varied according to the drying method. The total glycoside content was higher in sun-dried samples, with no significant differences compared to shade or convection drying, whereas radiation drying adversely affected the content of rebaudioside A and rebaudioside C (p < 0.01) and was therefore a method lowering total glycoside content. The effect of the different drying methods was also reflected in the proportion of the sweetener profile. Convection drying could be suitable for modern food processing industries, while shade or sun drying may be a low-cost alternative for farmers.

  9. Machine learning of swimming data via wisdom of crowd and regression analysis.

    PubMed

    Xie, Jiang; Xu, Junfu; Nie, Celine; Nie, Qing

    2017-04-01

    Every performance, in an officially sanctioned meet, by a registered USA swimmer is recorded into an online database with times dating back to 1980. For the first time, statistical analysis and machine learning methods are systematically applied to 4,022,631 swim records. In this study, we investigate performance features for all strokes as a function of age and gender. The variances in performance of males and females for different ages and strokes were studied, and the correlations of performances for different ages were estimated using the Pearson correlation. Regression analysis shows the performance trends for both males and females at different ages and suggests critical ages for peak training. Moreover, we assess twelve popular machine learning methods to predict or classify swimmer performance. Each method exhibited different strengths or weaknesses in different cases, indicating that no one method could predict well for all strokes. To address this problem, we propose a new method that combines multiple inference methods to derive a Wisdom of Crowd Classifier (WoCC). Our simulation experiments demonstrate that the WoCC is a consistent method with better overall prediction accuracy. Our study reveals several new age-dependent trends in swimming and provides an accurate method for classifying and predicting swimming times.
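
    A minimal sketch of the combine-several-methods idea behind a crowd classifier, here a plain soft-voting ensemble rather than the authors' exact WoCC; the synthetic data stands in for swim-record features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Illustrative stand-in for swim-record features (age, gender, past times, ...).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

members = [
    ("logreg", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
    ("knn", KNeighborsClassifier()),
]

# "Crowd" classifier: soft vote over the individual methods' predicted probabilities.
crowd = VotingClassifier(estimators=members, voting="soft")

for name, model in members + [("crowd", crowd)]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:7s} accuracy: {score:.3f}")
```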

  10. Factors affecting the relationship between quantitative polymerase chain reaction (qPCR) and culture-based enumeration of Enterococcus in environmental waters.

    PubMed

    Raith, M R; Ebentier, D L; Cao, Y; Griffith, J F; Weisberg, S B

    2014-03-01

    To determine the extent to which discrepancies between qPCR and culture-based results in beach water quality monitoring can be attributed to: (i) within-method variability, (ii) between-method difference within each method class (qPCR or culture) and (iii) between-class difference. We analysed 306 samples using two culture-based (EPA1600 and Enterolert) and two qPCR (Taqman and Scorpion) methods, each in duplicate. Both qPCR methods correlated with EPA1600, but regression analyses indicated approximately 0.8 log10 unit overestimation by qPCR compared to culture methods. Differences between methods within a class were less than half of this and were minimal for between-replicate within a method. Using the 104 Enterococcus per 100 ml management decision threshold, Taqman qPCR indicated the same decisions as EPA1600 for 87% of the samples, but indicated beach posting for unhealthful water when EPA1600 did not for 12% of the samples. After accounting for within-method and within-class variability, 8% of the samples exhibited true between-class discrepancy where both qPCR methods indicated beach posting while both culture methods did not. Measurement target difference (DNA vs growth) accounted for the majority of the qPCR-vs-culture discrepancy, but its influence on monitoring application is outweighed by frequent incorrect posting with culture methods due to incubation time delay. This is the first study to quantify the frequency with which culture-vs-qPCR discrepancies can be attributed to target difference vs method variability. © 2013 The Society for Applied Microbiology.

  11. Evaluating fMRI methods for assessing hemispheric language dominance in healthy subjects.

    PubMed

    Baciu, Monica; Juphard, Alexandra; Cousin, Emilie; Bas, Jean François Le

    2005-08-01

    We evaluated two methods for quantifying hemispheric language dominance in healthy subjects, using a rhyme detection task (deciding whether a pair of words rhyme) and a word fluency task (generating words starting with a given letter). One of the methods, called the "flip method" (FM), was based on the direct statistical comparison between the hemispheres' activity. The second, the classical lateralization indices method (LIM), was based on calculating lateralization indices from the number of activated pixels within each hemisphere. The main difference between the methods is the statistical assessment of the inter-hemispheric difference: while FM shows whether the difference between the hemispheres' activity is statistically significant, LIM shows only whether there is a difference between hemispheres. The robustness of LIM and FM was assessed by calculating correlation coefficients between the LIs obtained with each of these methods and manual lateralization indices (MLI) obtained with the Edinburgh inventory. Our results showed significant correlations between the LIs provided by each method and the MLI, suggesting that both methods are robust for quantifying hemispheric dominance for language in healthy subjects. In the present study we also evaluated the effect of spatial normalization, smoothing and "clustering" (NSC) on the intra-hemispheric location of activated regions and the inter-hemispheric asymmetry of the activation. Our results showed that NSC did not affect the hemispheric specialization but increased the value of the inter-hemispheric difference.
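
    A minimal sketch of the classical lateralization-index computation (LIM) described above, assuming the counts of activated pixels per hemisphere are already available; the counts below are illustrative:

```python
def lateralization_index(n_left, n_right):
    """LI = (L - R) / (L + R); positive values indicate left-hemisphere dominance."""
    return (n_left - n_right) / float(n_left + n_right)

# Illustrative counts of supra-threshold pixels in each hemisphere.
print(lateralization_index(n_left=820, n_right=310))   # ~0.45 -> left-dominant
```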

  12. Comparative analysis for strength serum sodium and potassium in three different methods: Flame photometry, ion-selective electrode (ISE) and colorimetric enzymatic.

    PubMed

    Garcia, Rafaela Alvim; Vanelli, Chislene Pereira; Pereira Junior, Olavo Dos Santos; Corrêa, José Otávio do Amaral

    2018-06-19

    Hydroelectrolytic disorders are common in clinical situations and may be harmful to the patient, especially those involving plasma sodium and potassium measurements. Among the possible methods for these measurements are flame photometry, the ion-selective electrode (ISE) and the colorimetric enzymatic method. The present study aims to evaluate the impact of the use of these different methods in the determination of plasma sodium and potassium. We analyzed 175 samples with the three different methods cited, from patients attending the laboratory of the University Hospital of the Federal University of Juiz de Fora. The values obtained were statistically treated using SPSS 19.0 software. The averages obtained for sodium and potassium measurements by flame photometry were similar (P > .05) to the averages obtained for the two electrolytes by ISE. The averages obtained by the colorimetric enzymatic method showed a statistical difference in relation to ISE, for both sodium and potassium. In the correlation analysis, both flame photometry and the colorimetric enzymatic method showed a strong correlation with the ISE method for both measurements. For the first time in the same work, sodium and potassium were analyzed by three different methods, and the results allowed us to conclude that the methods showed a positive and strong correlation and can be applied in the clinical routine. © 2018 Wiley Periodicals, Inc.

  13. 49 CFR 105.35 - Serving documents in PHMSA proceedings.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... following methods, except where a different method of service is specifically required: (1) Registered or... document by one of the following methods, except where a different method of service is specifically... at http://www.regulations.gov. [67 FR 42951, June 25, 2002, as amended at 72 FR 55682, Oct. 1, 2007] ...

  14. Choosing Learning Methods Suitable for Teaching and Learning in Computer Science

    ERIC Educational Resources Information Center

    Taylor, Estelle; Breed, Marnus; Hauman, Ilette; Homann, Armando

    2013-01-01

    Our aim is to determine which teaching methods students in Computer Science and Information Systems prefer. There are in total 5 different paradigms (behaviorism, cognitivism, constructivism, design-based and humanism) with 32 models between them. Each model is unique and states different learning methods. Recommendations are made on methods that…

  15. Novel ratio difference at coabsorptive point spectrophotometric method for determination of components with wide variation in their absorptivities.

    PubMed

    Saad, Ahmed S; Abo-Talib, Nisreen F; El-Ghobashy, Mohamed R

    2016-01-05

    Different methods have been introduced to enhance the selectivity of UV-spectrophotometry, thus enabling accurate determination of co-formulated components; however, mixtures whose components exhibit wide variation in absorptivities have been an obstacle to the application of UV-spectrophotometry. The developed ratio difference at coabsorptive point method (RDC) represents a simple, effective solution to the mentioned problem, where the additive property of light absorbance enabled the two components to be treated as multiples of the lower-absorptivity component at a certain wavelength (coabsorptive point), at which their total concentration in such multiples could be determined, whereas the other component was selectively determined by applying the ratio difference method in a single step. A mixture of perindopril arginine (PA) and amlodipine besylate (AM) exemplifies that problem, where the low absorptivity of PA relative to AM hinders selective spectrophotometric determination of PA. The developed method successfully determined both components in the overlapped region of their spectra with accuracies of 99.39±1.60 and 100.51±1.21 for PA and AM, respectively. The method was validated as per the USP guidelines and showed no significant difference upon statistical comparison with a reported chromatographic method. Copyright © 2015 Elsevier B.V. All rights reserved.
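
    A minimal numeric sketch of the ratio-difference principle the method builds on: dividing the mixture spectrum by a normalized spectrum of one component turns that component's contribution into a constant, so the difference of ratio amplitudes at two wavelengths cancels it; the Gaussian bands, wavelengths, and concentrations are illustrative, not the PA/AM spectra:

```python
import numpy as np

wl = np.linspace(200, 400, 401)                      # wavelength grid (nm)
band = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)

eps_x = 1.0 * band(240, 15)      # absorptivity spectrum of analyte X (arbitrary units)
eps_y = 3.0 * band(260, 20)      # absorptivity spectrum of interferent Y

c_x, c_y = 0.8, 0.5              # "unknown" concentrations in the mixture
mixture = c_x * eps_x + c_y * eps_y

divisor = eps_y / eps_y.max()    # normalized spectrum of Y used as divisor
ratio = mixture / divisor        # = c_x * eps_x/divisor + constant term from Y

i1, i2 = np.argmin(abs(wl - 230)), np.argmin(abs(wl - 250))
delta_ratio = ratio[i1] - ratio[i2]                  # constant Y term cancels

# Calibration with pure-X standards gives the slope that converts delta_ratio to c_x.
slope = (eps_x / divisor)[i1] - (eps_x / divisor)[i2]
print(f"estimated c_x = {delta_ratio / slope:.3f} (true {c_x})")
```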

  16. Detection of admittivity anomaly on high-contrast heterogeneous backgrounds using frequency difference EIT.

    PubMed

    Jang, J; Seo, J K

    2015-06-01

    This paper describes a multiple background subtraction method in frequency difference electrical impedance tomography (fdEIT) to detect an admittivity anomaly within a high-contrast background conductivity distribution. The proposed method expands the use of the conventional weighted frequency difference EIT method, whose use has so far been limited to detecting admittivity anomalies in a roughly homogeneous background. The proposed method can be viewed as multiple weighted difference imaging in fdEIT. Although the spatial resolution of the output images in fdEIT is very low due to the inherent ill-posedness, numerical simulations and phantom experiments demonstrate the feasibility of the proposed method for detecting anomalies. It has potential application in stroke detection in a head model, which is highly heterogeneous due to the skull.
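
    A minimal sketch of the weighted frequency-difference step that the proposed method generalizes: projecting the high-frequency boundary voltage onto the low-frequency one before subtracting removes a frequency-scaled background; the synthetic voltage vectors are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic boundary-voltage vectors at two frequencies (208 = typical 16-electrode set).
background = rng.normal(size=208)        # frequency-independent background shape
anomaly = 0.05 * rng.normal(size=208)    # small admittivity-anomaly contribution
v_low = background
v_high = 1.3 * background + anomaly      # background scales with frequency

# Weighted difference: alpha removes the scaled background component.
alpha = np.dot(v_high, v_low) / np.dot(v_low, v_low)
weighted_diff = v_high - alpha * v_low   # input to the fdEIT image reconstruction

simple_diff = v_high - v_low             # plain difference keeps a background residue
print(np.linalg.norm(weighted_diff), np.linalg.norm(simple_diff))
```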

  17. The research progress of perforating gun inner wall blind hole machining method

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Shen, Hongbing

    2018-04-01

    Blind-hole machining has long been a technical problem of concern in the oil, electronics, aviation and other fields. This paper introduces different methods for blind-hole machining, focusing on methods for machining blind holes in the inner wall of perforating guns. The advantages and disadvantages of the different methods are also discussed, and the development trend of blind-hole machining is outlined.

  18. A comparison of five partial volume correction methods for Tau and Amyloid PET imaging with [18F]THK5351 and [11C]PIB.

    PubMed

    Shidahara, Miho; Thomas, Benjamin A; Okamura, Nobuyuki; Ibaraki, Masanobu; Matsubara, Keisuke; Oyama, Senri; Ishikawa, Yoichi; Watanuki, Shoichi; Iwata, Ren; Furumoto, Shozo; Tashiro, Manabu; Yanai, Kazuhiko; Gonda, Kohsuke; Watabe, Hiroshi

    2017-08-01

    Many algorithms have been proposed to suppress the partial volume effect (PVE) in brain PET; however, each methodology has different properties due to its assumptions and algorithms. The aim of this study was to investigate the differences among partial volume correction (PVC) methods for tau and amyloid PET studies. We investigated two of the most commonly used PVC methods, Müller-Gärtner (MG) and the geometric transfer matrix (GTM), as well as three other methods, for clinical tau and amyloid PET imaging. PET studies of one healthy control (HC) and one Alzheimer's disease (AD) patient were performed with both [18F]THK5351 and [11C]PIB using an Eminence STARGATE scanner (Shimadzu Inc., Kyoto, Japan). All PET images were corrected for PVE by the MG, GTM, Labbé (LABBE), regional voxel-based (RBV), and iterative Yang (IY) methods, with segmented or parcellated anatomical information processed by FreeSurfer and derived from individual MR images. The PVC results of the five algorithms were compared with the uncorrected data. In regions of high uptake of [18F]THK5351 and [11C]PIB, different PVCs yielded different SUVRs. The degree of difference between PVE-uncorrected and -corrected data depends not only on the PVC algorithm but also on the type of tracer and the subject's condition. The presented PVC methods are straightforward to implement, but the corrected images require careful interpretation as different methods result in different levels of recovery.
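
    A minimal sketch of the GTM correction named above: observed ROI means are modeled as a spill-over matrix times the true ROI means, and the correction solves that linear system; the matrix and uptake values are illustrative:

```python
import numpy as np

# W[i, j]: fraction of ROI j's signal observed in ROI i after PSF blurring
# (in practice obtained by smoothing each ROI mask with the scanner PSF).
W = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.75, 0.15],
              [0.05, 0.20, 0.75]])

true_means = np.array([2.5, 1.0, 0.4])      # illustrative tracer uptake per ROI
observed_means = W @ true_means             # what the PVE-degraded image would yield

# GTM correction: solve the linear system for the PVE-corrected ROI means.
corrected = np.linalg.solve(W, observed_means)
print(corrected)                            # recovers [2.5, 1.0, 0.4]
```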

  19. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys

    PubMed Central

    Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis. PMID:26125967

  20. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    PubMed

    Hund, Lauren; Bedrick, Edward J; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.

  1. Comparing deflection measurements of a magnetically steerable catheter using optical imaging and MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lillaney, Prasheel, E-mail: Prasheel.Lillaney@ucsf.edu; Caton, Curtis; Martin, Alastair J.

    2014-02-15

    Purpose: Magnetic resonance imaging (MRI) is an emerging modality for interventional radiology, giving clinicians another tool for minimally invasive image-guided interventional procedures. Difficulties associated with endovascular catheter navigation using MRI guidance led to the development of a magnetically steerable catheter. The focus of this study was to mechanically characterize deflections of two different prototypes of the magnetically steerable catheter in vitro to better understand their efficacy. Methods: A mathematical model for deflection of the magnetically steerable catheter is formulated based on the principle that at equilibrium the mechanical and magnetic torques are equal to each other. Furthermore, two different image-based methods for empirically measuring the catheter deflection angle are presented. The first, referred to as the absolute tip method, measures the angle of the line that is tangential to the catheter tip. The second, referred to as the base to tip method, is an approximation used when it is not possible to measure the angle of the tangent line. Optical images of the catheter deflection are analyzed using the absolute tip method to quantitatively validate the predicted deflections from the mathematical model. Optical images of the catheter deflection are also analyzed using the base to tip method to quantitatively determine the differences between the absolute tip and base to tip methods. Finally, the optical images are compared to MR images using the base to tip method to determine the accuracy of measuring the catheter deflection using MR. Results: The optical catheter deflection angles measured for both catheter prototypes using the absolute tip method fit the mathematical model very well (R^2 = 0.91 and 0.86 for each prototype, respectively). It was found that the angles measured using the base to tip method were consistently smaller than those measured using the absolute tip method. The deflection angles measured using optical data did not demonstrate a significant difference from the angles measured using MR image data when compared using the base to tip method. Conclusions: This study validates the theoretical description of the magnetically steerable catheter, while also giving insight into different methods and modalities for measuring the deflection angles of the prototype catheters. These results can be used to mechanically model future iterations of the design. Quantifying the difference between the different methods for measuring catheter deflection will be important when making deflection measurements in future studies. Finally, MR images can be used to reliably measure deflection angles since there is no significant difference between the MR and optical measurements.
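
    A minimal sketch of the equal-torque principle stated in the Methods: the magnetic torque m*B*sin(theta_B - theta) is balanced against a linear elastic restoring torque k*theta; the magnet moment, field, and stiffness values are illustrative, not the prototype parameters:

```python
import numpy as np
from scipy.optimize import brentq

def deflection_angle(m, B, theta_B, k):
    """Solve m*B*sin(theta_B - theta) = k*theta for the equilibrium deflection angle."""
    torque_balance = lambda theta: m * B * np.sin(theta_B - theta) - k * theta
    return brentq(torque_balance, 0.0, theta_B)   # sign change guaranteed on [0, theta_B]

# Illustrative values: moment (A*m^2), field (T), field angle (rad), stiffness (N*m/rad).
theta = deflection_angle(m=5e-3, B=1.5, theta_B=np.radians(60), k=5e-3)
print(f"predicted deflection: {np.degrees(theta):.1f} degrees")
```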

  2. Comparison of six methods for isolating mycobacteria from swine lymph nodes.

    PubMed

    Thoen, C O; Richards, W D; Jarnagin, J L

    1974-03-01

    Six laboratory methods were compared for isolating acid-fast bacteria. Tuberculous lymph nodes from each of 48 swine as identified by federal meat inspectors were processed by each of the methods. Treated tissue suspensions were inoculated onto each of eight media which were observed at 7-day intervals for 9 weeks. There were no statistically significant differences between the number of Mycobacterium avium complex bacteria isolated by each of the six methods. Rapid tissue preparation methods involving treatment with 2% sodium hydroxide or treatment with 0.2% zephiran required only one-third to one-fourth the processing time as a standard method. There were small differences in the amount of contamination among the six methods, but no detectable differences in the time of first appearance of M. avium complex colonies.

  3. Methods for Specifying the Target Difference in a Randomised Controlled Trial: The Difference ELicitation in TriAls (DELTA) Systematic Review

    PubMed Central

    Hislop, Jenni; Adewuyi, Temitope E.; Vale, Luke D.; Harrild, Kirsten; Fraser, Cynthia; Gurung, Tara; Altman, Douglas G.; Briggs, Andrew H.; Fayers, Peter; Ramsay, Craig R.; Norrie, John D.; Harvey, Ian M.; Buckley, Brian; Cook, Jonathan A.

    2014-01-01

    Background Randomised controlled trials (RCTs) are widely accepted as the preferred study design for evaluating healthcare interventions. When the sample size is determined, a (target) difference is typically specified that the RCT is designed to detect. This provides reassurance that the study will be informative, i.e., should such a difference exist, it is likely to be detected with the required statistical precision. The aim of this review was to identify potential methods for specifying the target difference in an RCT sample size calculation. Methods and Findings A comprehensive systematic review of medical and non-medical literature was carried out for methods that could be used to specify the target difference for an RCT sample size calculation. The databases searched were MEDLINE, MEDLINE In-Process, EMBASE, the Cochrane Central Register of Controlled Trials, the Cochrane Methodology Register, PsycINFO, Science Citation Index, EconLit, the Education Resources Information Center (ERIC), and Scopus (for in-press publications); the search period was from 1966 or the earliest date covered, to between November 2010 and January 2011. Additionally, textbooks addressing the methodology of clinical trials and International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) tripartite guidelines for clinical trials were also consulted. A narrative synthesis of methods was produced. Studies that described a method that could be used for specifying an important and/or realistic difference were included. The search identified 11,485 potentially relevant articles from the databases searched. Of these, 1,434 were selected for full-text assessment, and a further nine were identified from other sources. Fifteen clinical trial textbooks and the ICH tripartite guidelines were also reviewed. In total, 777 studies were included, and within them, seven methods were identified—anchor, distribution, health economic, opinion-seeking, pilot study, review of the evidence base, and standardised effect size. Conclusions A variety of methods are available that researchers can use for specifying the target difference in an RCT sample size calculation. Appropriate methods may vary depending on the aim (e.g., specifying an important difference versus a realistic difference), context (e.g., research question and availability of data), and underlying framework adopted (e.g., Bayesian versus conventional statistical approach). Guidance on the use of each method is given. No single method provides a perfect solution for all contexts. Please see later in the article for the Editors' Summary PMID:24824338

  4. Comparison of different wind data interpolation methods for a region with complex terrain in Central Asia

    NASA Astrophysics Data System (ADS)

    Reinhardt, Katja; Samimi, Cyrus

    2018-01-01

    While climatological data of high spatial resolution are largely available in most developed countries, the network of climatological stations in many other regions of the world still contains large gaps. Especially for those regions, interpolation methods are important tools to fill these gaps and to improve the data base indispensable for climatological research. Over the last years, new hybrid methods of machine learning and geostatistics have been developed which provide innovative prospects in spatial predictive modelling. This study focuses on evaluating the performance of 12 different interpolation methods for the wind components u and v in a mountainous region of Central Asia. Thereby, a special focus is on applying new hybrid methods to the spatial interpolation of wind data. This study is the first to evaluate and compare the performance of several of these hybrid methods. The overall aim of this study is to determine whether an optimal interpolation method exists which can equally be applied to all pressure levels, or whether different interpolation methods have to be used for the different pressure levels. Deterministic (inverse distance weighting) and geostatistical interpolation methods (ordinary kriging) were explored, which take into account only the initial values of u and v. In addition, more complex methods (generalized additive model, support vector machine and neural networks, as single methods and as hybrid methods, as well as regression-kriging) that consider additional variables were applied. The analysis of the error indices revealed that regression-kriging provided the most accurate interpolation results for both wind components and all pressure heights. At 200 and 500 hPa, regression-kriging is followed by the different kinds of neural networks and support vector machines, and for 850 hPa it is followed by the different types of support vector machine and ordinary kriging. Overall, explanatory variables improve the interpolation results.

  5. New Laboratory Methods for Characterizing the Immersion Factors for Irradiance

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Zibordi, Giuseppe; D'Alimonte, Davide; van der Linde, Dirk; Brown, James W.

    2003-01-01

    The experimental determination of the immersion factor, I_f(λ), of irradiance collectors is a requirement of any in-water radiometer. The eighth SeaWiFS Intercalibration Round-Robin Experiment (SIRREX-8) showed different implementations, at different laboratories, of the same I_f(λ) measurement protocol. The different implementations make use of different setups, volumes, and water types. Consequently, they exhibit different accuracies and require different execution times for characterizing an irradiance sensor. In view of standardizing the characterization of I_f(λ) values for in-water radiometers, together with an increase in the accuracy of methods and a decrease in the execution time, alternative methods are presented, and assessed versus the traditional method. The proposed new laboratory methods include: a) the continuous method, in which optical measurements taken with discrete water depths are substituted by continuous profiles created by removing the water from the water vessel at a constant flow rate (which significantly reduces the time required for the characterization of a single radiometer); and b) the Compact Portable Advanced Characterization Tank (ComPACT) method, in which the commonly used large tanks are replaced by a small water vessel, thereby allowing the determination of I_f(λ) values with a small water volume, and more importantly, permitting I_f(λ) characterizations with pure water. Intercomparisons between the continuous and the traditional method showed results within the variance of I_f(λ) determinations. The use of the continuous method, however, showed a much shorter realization time. Intercomparisons between the ComPACT and the traditional method showed generally higher I_f(λ) values for the former. This is in agreement with the generalized expectations of a reduction in scattering effects, because of the use of pure water with the ComPACT method versus the use of tap water with the traditional method.

  6. Star sub-pixel centroid calculation based on multi-step minimum energy difference method

    NASA Astrophysics Data System (ADS)

    Wang, Duo; Han, YanLi; Sun, Tengfei

    2013-09-01

    The star centroid plays a vital role in celestial navigation. Star images acquired during the daytime have a low SNR due to the strong sky background, and the star targets are nearly submerged in the background, which makes centroid localization difficult. Traditional methods, such as the moment method and the weighted centroid method, are simple but have large errors, especially under low-SNR conditions; the Gaussian fitting method has high positioning accuracy but is computationally complex. Based on an analysis of the energy distribution in the star image, a location method for star target centroids based on a multi-step minimum energy difference is proposed. The method uses linear superposition to narrow down the centroid area and, within this narrowed area, interpolates a certain number of pixels to subdivide them. It then exploits the symmetry of the stellar energy distribution to locate the centroid: each pixel is tentatively assumed to be the centroid, and the difference between the sums of energy on the two sides of that pixel in each symmetric direction (here the transverse and longitudinal directions) is computed over an equal step length (chosen according to the conditions; this paper uses a step length of 9). The position at which the minimum difference appears in a given direction gives the centroid coordinate in that direction. Validation on simulated star images and comparison with several traditional methods show that the positioning accuracy of the method reaches 0.001 pixel and that it performs well for centroid calculation under low-SNR conditions. The method was also applied to a star map acquired at a fixed observation site during the daytime in the near-infrared band; comparing the results with the known positions of the star shows that the multi-step minimum energy difference method achieves a better effect.
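
    A minimal sketch of the symmetric energy-difference test at the core of the method: for each candidate centre, compare the summed energy on the two sides in the transverse and longitudinal directions and keep the candidate with the smallest imbalance (a coarse version without the sub-pixel interpolation step; the synthetic star and window size are illustrative):

```python
import numpy as np

# Synthetic star: Gaussian spot on a noisy background.
yy, xx = np.mgrid[0:64, 0:64]
true_y, true_x = 31.0, 33.0
image = np.exp(-((xx - true_x) ** 2 + (yy - true_y) ** 2) / 8.0)
image += 0.05 * np.random.default_rng(0).normal(size=image.shape)

def energy_difference(img, cy, cx, half=9):
    """Sum of left/right and top/bottom energy imbalances around (cy, cx)."""
    left = img[cy - half:cy + half + 1, cx - half:cx].sum()
    right = img[cy - half:cy + half + 1, cx + 1:cx + half + 1].sum()
    top = img[cy - half:cy, cx - half:cx + half + 1].sum()
    bottom = img[cy + 1:cy + half + 1, cx - half:cx + half + 1].sum()
    return abs(left - right) + abs(top - bottom)

# Search a small candidate area (in practice narrowed first, then refined by interpolation).
candidates = [(cy, cx) for cy in range(25, 40) for cx in range(25, 40)]
best = min(candidates, key=lambda c: energy_difference(image, *c))
print("estimated centroid:", best, "true:", (true_y, true_x))
```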

  7. Proposed Modifications to Engineering Design Guidelines Related to Resistivity Measurements and Spacecraft Charging

    NASA Technical Reports Server (NTRS)

    Dennison, J. R.; Swaminathan, Prasanna; Jost, Randy; Brunson, Jerilyn; Green, Nelson; Frederickson, A. Robb

    2005-01-01

    A key parameter in modeling differential spacecraft charging is the resistivity of insulating materials. This determines how charge will accumulate and redistribute across the spacecraft, as well as the time scale for charge transport and dissipation. Existing spacecraft charging guidelines recommend use of tests and imported resistivity data from handbooks that are based principally upon ASTM methods that are more applicable to classical ground conditions and designed more for problems associated with power loss through the dielectric than for how long charge can be stored on an insulator. These data have been found to underestimate charging effects by one to four orders of magnitude for spacecraft charging applications. A review is presented of methods to measure the resistivity of highly insulating materials, including the electrometer-resistance method, the electrometer-constant voltage method, the voltage rate-of-change method and the charge storage method. This is based on joint experimental studies conducted at NASA Jet Propulsion Laboratory and Utah State University to investigate the charge storage method and its relation to spacecraft charging. The different methods are found to be appropriate for different resistivity ranges and for different charging circumstances. A simple physics-based model of these methods allows separation of the polarization current and dark current components from long duration measurements of resistivity over day- to month-long time scales. Model parameters are directly related to the magnitude of charge transfer and storage and the rate of charge transport. The model largely explains the observed differences in resistivity found using the different methods and provides a framework for recommendations for the appropriate test method for spacecraft materials with different resistivities and applications. The proposed changes to the existing engineering guidelines are intended to provide design engineers more appropriate methods for consideration and measurements of resistivity for many typical spacecraft charging scenarios.

  8. Comparison of Chemical Extraction Methods for Determination of Soil Potassium in Different Soil Types

    NASA Astrophysics Data System (ADS)

    Zebec, V.; Rastija, D.; Lončarić, Z.; Bensa, A.; Popović, B.; Ivezić, V.

    2017-12-01

    Determining the potassium supply of soil plays an important role in intensive crop production, since it is the basis for balancing nutrients and issuing fertilizer recommendations for achieving high and stable yields within economic feasibility. The aim of this study was to compare different extraction methods for soil potassium from the arable horizon of different soil types with the ammonium lactate method (KAL), which is frequently used as an analytical method for determining the accessibility of nutrients and is a common method for issuing fertilizer recommendations in many European countries. In addition to the ammonium lactate method (KAL, pH 3.75), potassium was extracted with ammonium acetate (KAA, pH 7), ammonium acetate ethylenediaminetetraacetic acid (KAAEDTA, pH 4.6), Bray (KBRAY, pH 2.6) and barium chloride (KBaCl2, pH 8.1). The analyzed soils were extremely heterogeneous, with a wide range of determined values. Soil pH (pHH2O) ranged from 4.77 to 8.75, organic matter content from 1.87 to 4.94% and clay content from 8.03 to 37.07%. In relation to the KAL method as the standard method, the KBaCl2 method extracts on average 12.9% more soil potassium, while KAA extracts on average 5.3%, KAAEDTA 10.3%, and KBRAY 27.5% less potassium than the standard method. The comparison of the analyzed potassium extraction methods is highly precise; the most reliable comparison was between the KAL and KAAEDTA methods, followed by KAA, KBaCl2 and KBRAY. The extremely significant statistical correlation between the different extraction methods for determining soil potassium indicates that any of the methods can be used to accurately predict the concentration of potassium in the soil, and that the research carried out can be used to create prediction models for potassium concentration based on the different extraction methods.

  9. Improved Estimation of Cardiac Function Parameters Using a Combination of Independent Automated Segmentation Results in Cardiovascular Magnetic Resonance Imaging.

    PubMed

    Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe; Frouin, Frederique; Garreau, Mireille

    2015-01-01

    This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert.
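
    A minimal sketch of the STAPLE idea used above for binary masks: alternate between estimating a consensus probability map and each segmentation's sensitivity/specificity via expectation-maximization (a simplified flat-array version with a fixed prior; the input segmentations are illustrative):

```python
import numpy as np

def simple_staple(segmentations, n_iter=30):
    """Binary STAPLE-style EM over stacked segmentations of shape (raters, voxels)."""
    D = np.asarray(segmentations, dtype=float)
    W = D.mean(axis=0)                      # initial consensus: the voxel-wise mean
    p = np.full(D.shape[0], 0.9)            # per-rater sensitivities
    q = np.full(D.shape[0], 0.9)            # per-rater specificities
    prior = W.mean()                        # kept fixed here for simplicity
    for _ in range(n_iter):
        # E-step: probability that each voxel is truly foreground.
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 1, 1 - q[:, None], q[:, None]), axis=0)
        W = a / (a + b + 1e-12)
        # M-step: update each rater's performance parameters.
        p = (D * W).sum(axis=1) / (W.sum() + 1e-12)
        q = ((1 - D) * (1 - W)).sum(axis=1) / ((1 - W).sum() + 1e-12)
    return W, p, q

# Three automated segmentations of the same 12 voxels (illustrative).
segs = [[0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0],
        [0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0],
        [0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0]]
consensus, sens, spec = simple_staple(segs)
print((consensus > 0.5).astype(int))        # combined segmentation
```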

  10. Improved Estimation of Cardiac Function Parameters Using a Combination of Independent Automated Segmentation Results in Cardiovascular Magnetic Resonance Imaging

    PubMed Central

    Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe

    2015-01-01

    This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert. PMID:26287691

  11. Comparison of methods of DNA extraction for real-time PCR in a model of pleural tuberculosis.

    PubMed

    Santos, Ana; Cremades, Rosa; Rodríguez, Juan Carlos; García-Pachón, Eduardo; Ruiz, Montserrat; Royo, Gloria

    2010-01-01

    Molecular methods have been reported to have different sensitivities in the diagnosis of pleural tuberculosis and this may in part be caused by the use of different methods of DNA extraction. Our study compares nine DNA extraction systems in an experimental model of pleural tuberculosis. An inoculum of Mycobacterium tuberculosis was added to 23 pleural liquid samples with different characteristics. DNA was subsequently extracted using nine different methods (seven manual and two automatic) for analysis with real-time PCR. Only two methods were able to detect the presence of M. tuberculosis DNA in all the samples: extraction using columns (Qiagen) and automated extraction with the TNAI system (Roche). The automatic method is more expensive, but requires less time. Almost all the false negatives were because of the difficulty involved in extracting M. tuberculosis DNA, as in general, all the methods studied are capable of eliminating inhibitory substances that block the amplification reaction. The method of M. tuberculosis DNA extraction used affects the results of the diagnosis of pleural tuberculosis by molecular methods. DNA extraction systems that have been shown to be effective in pleural liquid should be used.

  12. Evaluation of multivariate calibration models with different pre-processing and processing algorithms for a novel resolution and quantitation of spectrally overlapped quaternary mixture in syrup

    NASA Astrophysics Data System (ADS)

    Moustafa, Azza A.; Hegazy, Maha A.; Mohamed, Dalia; Ali, Omnia

    2016-02-01

    A novel approach for the resolution and quantitation of severely overlapped quaternary mixture of carbinoxamine maleate (CAR), pholcodine (PHL), ephedrine hydrochloride (EPH) and sunset yellow (SUN) in syrup was demonstrated utilizing different spectrophotometric assisted multivariate calibration methods. The applied methods have used different processing and pre-processing algorithms. The proposed methods were partial least squares (PLS), concentration residuals augmented classical least squares (CRACLS), and a novel method; continuous wavelet transforms coupled with partial least squares (CWT-PLS). These methods were applied to a training set in the concentration ranges of 40-100 μg/mL, 40-160 μg/mL, 100-500 μg/mL and 8-24 μg/mL for the four components, respectively. The utilized methods have not required any preliminary separation step or chemical pretreatment. The validity of the methods was evaluated by an external validation set. The selectivity of the developed methods was demonstrated by analyzing the drugs in their combined pharmaceutical formulation without any interference from additives. The obtained results were statistically compared with the official and reported methods where no significant difference was observed regarding both accuracy and precision.
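
    A minimal sketch of the PLS calibration step shared by the methods above: fit a multivariate model from training spectra to known concentrations and predict an external validation set; the simulated Gaussian bands and concentration ranges are only loosely modelled on the cited ranges:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
wl = np.linspace(200, 400, 201)
# Four overlapping Gaussian "component" spectra (stand-ins for CAR, PHL, EPH, SUN).
bands = np.stack([np.exp(-0.5 * ((wl - c) / 18) ** 2) for c in (240, 260, 285, 310)])

def simulate(n):
    conc = rng.uniform([40, 40, 100, 8], [100, 160, 500, 24], size=(n, 4))
    spectra = conc @ bands / 100 + 0.002 * rng.normal(size=(n, len(wl)))
    return spectra, conc

X_train, y_train = simulate(25)          # training set
X_val, y_val = simulate(10)              # external validation set

pls = PLSRegression(n_components=4).fit(X_train, y_train)
print("MAE per component:", mean_absolute_error(y_val, pls.predict(X_val),
                                                multioutput="raw_values"))
```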

  13. Sector Identification in a Set of Stock Return Time Series Traded at the London Stock Exchange

    NASA Astrophysics Data System (ADS)

    Coronnello, C.; Tumminello, M.; Lillo, F.; Micciche, S.; Mantegna, R. N.

    2005-09-01

    We compare some methods recently used in the literature to detect the existence of a certain degree of common behavior of stock returns belonging to the same economic sector. Specifically, we discuss methods based on random matrix theory and hierarchical clustering techniques. We apply these methods to a portfolio of stocks traded at the London Stock Exchange. The investigated time series are recorded both at a daily time horizon and at a 5-minute time horizon. The correlation coefficient matrix is very different at different time horizons confirming that more structured correlation coefficient matrices are observed for long time horizons. All the considered methods are able to detect economic information and the presence of clusters characterized by the economic sector of stocks. However, different methods present a different degree of sensitivity with respect to different sectors. Our comparative analysis suggests that the application of just a single method could not be able to extract all the economic information present in the correlation coefficient matrix of a stock portfolio.
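
    A minimal sketch of the hierarchical-clustering branch of the comparison: convert a correlation matrix of returns into a distance matrix and build the linkage; two synthetic sectors stand in for the London Stock Exchange data:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
n_days, n_stocks = 500, 10

# Two synthetic sectors: each stock loads on a sector factor plus idiosyncratic noise.
sector = np.repeat([0, 1], n_stocks // 2)
factors = rng.normal(size=(n_days, 2))
returns = factors[:, sector] + rng.normal(scale=1.5, size=(n_days, n_stocks))

corr = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(2.0 * (1.0 - corr))            # standard correlation-based distance
np.fill_diagonal(dist, 0.0)

link = linkage(squareform(dist, checks=False), method="average")
print(fcluster(link, t=2, criterion="maxclust"))   # should recover the two sectors
```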

  14. Evaluation of methods for the extraction of DNA from drinking water distribution system biofilms.

    PubMed

    Hwang, Chiachi; Ling, Fangqiong; Andersen, Gary L; LeChevallier, Mark W; Liu, Wen-Tso

    2012-01-01

    While drinking water biofilms have been characterized in various drinking water distribution systems (DWDS), little is known about the impact of different DNA extraction methods on the subsequent analysis of microbial communities in drinking water biofilms. Since different DNA extraction methods have been shown to affect the outcome of microbial community analysis in other environments, it is necessary to select a DNA extraction method prior to the application of molecular tools to characterize the complex microbial ecology of the DWDS. This study compared the quantity and quality of DNA yields from selected DWDS bacteria with different cell wall properties using five widely used DNA extraction methods. These were further selected and evaluated for their efficiency and reproducibility of DNA extraction from DWDS samples. Terminal restriction fragment length analysis and the 454 pyrosequencing technique were used to interpret the differences in microbial community structure and composition, respectively, from extracted DNA. Such assessments serve as a concrete step towards the determination of an optimal DNA extraction method for drinking water biofilms, which can then provide a reliable comparison of the meta-analysis results obtained in different laboratories.

  15. A total variation diminishing finite difference algorithm for sonic boom propagation models

    NASA Technical Reports Server (NTRS)

    Sparrow, Victor W.

    1993-01-01

    It is difficult to accurately model the rise phases of sonic boom waveforms with traditional finite difference algorithms because of finite difference phase dispersion. This paper introduces the concept of a total variation diminishing (TVD) finite difference method as a tool for accurately modeling the rise phases of sonic booms. A standard second order finite difference algorithm and its TVD modified counterpart are both applied to the one-way propagation of a square pulse. The TVD method clearly outperforms the non-TVD method, showing great potential as a new computational tool in the analysis of sonic boom propagation.
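
    A minimal sketch of the contrast drawn above for linear advection of a square pulse: a first-order upwind flux plus a minmod-limited correction gives a TVD update that keeps the steep rise free of the oscillations an unlimited second-order scheme produces; the grid size and CFL number are illustrative:

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: returns 0 at extrema, the smaller slope elsewhere."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(abs(a), abs(b)), 0.0)

nx, cfl, steps = 200, 0.5, 160
u = np.where((np.arange(nx) > 40) & (np.arange(nx) < 80), 1.0, 0.0)  # square pulse

for _ in range(steps):
    # Limited slopes and second-order reconstruction at each cell's right face.
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    u_face = u + 0.5 * (1.0 - cfl) * slope
    flux = cfl * u_face                            # advection speed folded into the CFL number
    u = u - (flux - np.roll(flux, 1))              # conservative update, periodic boundaries

print("total variation:", np.abs(u - np.roll(u, 1)).sum())   # stays near 2 (no new extrema)
```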

  16. Study of EEG during Sternberg Tasks with Different Direction of Arrangement for Letters

    NASA Astrophysics Data System (ADS)

    Kamihoriuchi, Kenji; Nuruki, Atsuo; Matae, Tadashi; Kurono, Asutsugu; Yunokuchi, Kazutomo

    In a previous study, we recorded the electroencephalogram (EEG) of patients with dementia and of healthy subjects during a Sternberg task. However, only one presentation method for the Sternberg task was considered in that study. Therefore, in this study we examined whether the EEG differed between two presentation methods, one with the letters arranged horizontally and one with the letters arranged vertically. We recorded the EEG of six healthy subjects during Sternberg tasks using the two presentation methods. The EEG topography did not differ between the two presentation methods in any subject. In all subjects, the correct-response rate was higher when the letters were arranged vertically.

  17. Evaluation of Alternative Difference-in-Differences Methods

    ERIC Educational Resources Information Center

    Yu, Bing

    2013-01-01

    Difference-in-differences (DID) strategies are particularly useful for evaluating policy effects in natural experiments in which, for example, a policy affects some schools and students but not others. However, the standard DID method may produce biased estimation of the policy effect if the confounding effect of concurrent events varies by…
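
    A minimal sketch of the standard DID estimator that the abstract contrasts with alternative approaches: the policy effect is the change in the treated group minus the change in the control group; the outcome values are illustrative:

```python
# Mean outcomes (e.g., test scores) before/after a policy, for treated vs control schools.
treated_pre, treated_post = 62.0, 71.0
control_pre, control_post = 60.0, 64.0

# Standard DID: difference of the pre/post changes across groups.
did = (treated_post - treated_pre) - (control_post - control_pre)
print(f"difference-in-differences estimate of the policy effect: {did:.1f}")
```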

  18. Gestalt Therapy: Its Inheritance from Gestalt Psychology.

    ERIC Educational Resources Information Center

    Yontef, Gary M.

    When adequately elaborated, the basic method of Gestalt therapy can be traced to the phenomenological field theory of Gestalt psychology. Gestalt therapy differs from Gestalt psychology not because of a difference in philosophy or method, but because of different contexts; the clinical context has different demands than those of basic research.…

  19. State of the art of immunoassay methods for B-type natriuretic peptides: An update.

    PubMed

    Clerico, Aldo; Franzini, Maria; Masotti, Silvia; Prontera, Concetta; Passino, Claudio

    2015-01-01

    The aim of this review article is to give an update on the state of the art of the immunoassay methods for the measurement of B-type natriuretic peptide (BNP) and its related peptides. Using chromatographic procedures, several studies reported an increasing number of circulating peptides related to BNP in human plasma of patients with heart failure. These peptides may have reduced or even no biological activity. Furthermore, other studies have suggested that, using immunoassays that are considered specific for BNP, the precursor of the peptide hormone, proBNP, constitutes a major portion of the peptide measured in plasma of patients with heart failure. Because BNP immunoassay methods show large (up to 50%) systematic differences in values, the use of identical decision values for all immunoassay methods, as suggested by the most recent international guidelines, seems unreasonable. Since proBNP significantly cross-reacts with all commercial immunoassay methods considered specific for BNP, manufacturers should test and clearly declare the degree of cross-reactivity of glycosylated and non-glycosylated proBNP in their BNP immunoassay methods. Clinicians should take into account that there are large systematic differences between methods when they compare results from different laboratories that use different BNP immunoassays. On the other hand, clinical laboratories should take part in external quality assessment (EQA) programs to evaluate the bias of their method in comparison to other BNP methods. Finally, the authors believe that the development of more specific methods for the active peptide, BNP1-32, should reduce the systematic differences between methods and result in better harmonization of results.

  20. Age and Gender Differences in the Use of Various Poisoning Methods for Deliberate Parasuicide Cases Admitted to Loghman Hospital in Tehran (2000-2004)

    ERIC Educational Resources Information Center

    Ghazinour, Mehdi; Emami, Habib; Richter, Jorg; Abdollahi, Mohammad; Pazhumand, Abdolkarim

    2009-01-01

    Different methods of poisoning used by individuals with the diagnosis of parasuicide admitted to the Loghman Hospital, Tehran, from 2000 to 2004 were investigated, with particular focus on gender and age differences. Drugs, pesticides, and other agricultural chemicals (women: 12.7%, men: 9%) were the most commonly used methods. In males, the…

  1. What Are Reasons for the Large Gender Differences in the Lethality of Suicidal Acts? An Epidemiological Analysis in Four European Countries

    PubMed Central

    Heinrichs, Katherina; Székely, András; Tóth, Mónika Ditta; Coyne, James; Quintão, Sónia; Arensman, Ella; Coffey, Claire; Maxwell, Margaret; Värnik, Airi; van Audenhove, Chantal; McDaid, David; Sarchiapone, Marco; Schmidtke, Armin; Genz, Axel; Gusmão, Ricardo; Hegerl, Ulrich

    2015-01-01

    Background In Europe, men have lower rates of attempted suicide compared to women and at the same time a higher rate of completed suicides, indicating major gender differences in lethality of suicidal behaviour. The aim of this study was to analyse the extent to which these gender differences in lethality can be explained by factors such as choice of more lethal methods or lethality differences within the same suicide method or age. In addition, we explored gender differences in the intentionality of suicide attempts. Methods and Findings Methods. Design: Epidemiological study using a combination of self-report and official data. Setting: Mental health care services in four European countries: Germany, Hungary, Ireland, and Portugal. Data basis: Completed suicides derived from official statistics for each country (767 acts, 74.4% male) and assessed suicide attempts excluding habitual intentional self-harm (8,175 acts, 43.2% male). Main Outcome Measures and Data Analysis. We collected data on suicidal acts in eight regions of four European countries participating in the EU-funded “OSPI-Europe”-project (www.ospi-europe.com). We calculated method-specific lethality using the number of completed suicides per method * 100 / (number of completed suicides per method + number of attempted suicides per method). We tested gender differences in the distribution of suicidal acts for significance by using the χ2-test for two-by-two tables. We assessed the effect sizes with phi coefficients (φ). We identified predictors of lethality with a binary logistic regression analysis. Poisson regression analysis examined the contribution of choice of methods and method-specific lethality to gender differences in the lethality of suicidal acts. Findings Main Results Suicidal acts (fatal and non-fatal) were 3.4 times more lethal in men than in women (lethality 13.91% (regarding 4106 suicidal acts) versus 4.05% (regarding 4836 suicidal acts)), the difference being significant for the methods hanging, jumping, moving objects, sharp objects and poisoning by substances other than drugs. Median age at time of suicidal behaviour (35–44 years) did not differ between males and females. The overall gender difference in lethality of suicidal behaviour was explained by males choosing more lethal suicide methods (odds ratio (OR) = 2.03; 95% CI = 1.65 to 2.50; p < 0.000001) and additionally, but to a lesser degree, by a higher lethality of suicidal acts for males even within the same method (OR = 1.64; 95% CI = 1.32 to 2.02; p = 0.000005). Results of a regression analysis revealed neither age nor country differences were significant predictors for gender differences in the lethality of suicidal acts. The proportion of serious suicide attempts among all non-fatal suicidal acts with known intentionality (NFSAi) was significantly higher in men (57.1%; 1,207 of 2,115 NFSAi) than in women (48.6%; 1,508 of 3,100 NFSAi) (χ2 = 35.74; p < 0.000001). Main limitations of the study Due to restrictive data security regulations to ensure anonymity in Ireland, specific ages could not be provided because of the relatively low absolute numbers of suicide in the Irish intervention and control region. Therefore, analyses of the interaction between gender and age could only be conducted for three of the four countries. Attempted suicides were assessed for patients presenting to emergency departments or treated in hospitals. An unknown rate of attempted suicides remained undetected. 
This may have caused an overestimation of the lethality of certain methods. Moreover, the detection of attempted suicides and the registration of completed suicides might have differed across the four countries. Some suicides might be hidden and misclassified as undetermined deaths. Conclusions Men more often used highly lethal methods in suicidal behaviour, but there was also a higher method-specific lethality which together explained the large gender differences in the lethality of suicidal acts. Gender differences in the lethality of suicidal acts were fairly consistent across all four European countries examined. Males and females did not differ in age at time of suicidal behaviour. Suicide attempts by males were rated as being more serious independent of the method used, with the exceptions of attempted hanging, suggesting gender differences in intentionality associated with suicidal behaviour. These findings contribute to understanding of the spectrum of reasons for gender differences in the lethality of suicidal behaviour and should inform the development of gender specific strategies for suicide prevention. PMID:26147965
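
    The lethality definition quoted above, and the kind of two-by-two chi-square comparison it feeds into, can be reproduced in a few lines; the counts below are invented placeholders (the real figures are in the paper) and the grouping into two methods is purely illustrative.

      import numpy as np
      from scipy.stats import chi2_contingency

      def lethality(completed, attempted):
          """Method-specific lethality as defined in the abstract:
          completed * 100 / (completed + attempted)."""
          return 100.0 * completed / (completed + attempted)

      # illustrative (not the study's) counts of (completed, attempted) acts per method
      men = {"hanging": (120, 80), "poisoning": (30, 900)}
      women = {"hanging": (25, 60), "poisoning": (20, 1800)}

      for method in men:
          print(method,
                "lethality men %.1f%%" % lethality(*men[method]),
                "women %.1f%%" % lethality(*women[method]))

      # two-by-two test of completed vs. attempted acts by gender, pooled over methods
      table = np.array([[sum(c for c, a in men.values()), sum(a for c, a in men.values())],
                        [sum(c for c, a in women.values()), sum(a for c, a in women.values())]])
      chi2, p, dof, _ = chi2_contingency(table, correction=False)
      print("chi2 = %.2f, p = %.3g" % (chi2, p))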

  2. The method of space-time and conservation element and solution element: A new approach for solving the Navier-Stokes and Euler equations

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung

    1995-01-01

    A new numerical framework for solving conservation laws is being developed. This new framework differs substantially in both concept and methodology from the well-established methods, i.e., finite difference, finite volume, finite element, and spectral methods. It is conceptually simple and designed to overcome several key limitations of the above traditional methods. A two-level scheme for solving the convection-diffusion equation is constructed and used to illuminate the major differences between the present method and those previously mentioned. This explicit scheme, referred to as the a-mu scheme, has two independent marching variables.

  3. Ensemble Methods for MiRNA Target Prediction from Expression Data.

    PubMed

    Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong

    2015-01-01

    microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved to outperform each of their individual component methods in theory. In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and the ensemble methods perform better than the individual methods across different datasets. The ensemble method, Pearson+IDA+Lasso, which combines methods in different approaches, including a correlation method, a causal inference method, and a regression method, is the best performed ensemble method in this study. Further analysis of the results of this ensemble method shows that the ensemble method can obtain more targets which could not be found by any of the single methods, and the discovered targets are more statistically significant and functionally enriched. The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials.
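
    The paper's ensemble combines Pearson correlation, IDA and Lasso; the sketch below shows only the general flavour of such an ensemble, rank-aggregating two simple scorers (absolute Pearson correlation and absolute Lasso coefficients) for one miRNA against a set of candidate mRNAs, with randomly generated expression data standing in for a real dataset.

      import numpy as np
      from scipy.stats import pearsonr
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      n_samples, n_mrna = 60, 50
      mirna = rng.normal(size=n_samples)              # expression of one miRNA
      mrna = rng.normal(size=(n_samples, n_mrna))     # candidate target mRNAs
      mrna[:, 0] -= 0.8 * mirna                       # plant one true (repressed) target

      # score 1: absolute Pearson correlation of each mRNA with the miRNA
      pearson_score = np.array([abs(pearsonr(mirna, mrna[:, j])[0]) for j in range(n_mrna)])

      # score 2: absolute Lasso coefficients (sparse multivariate association)
      lasso_score = np.abs(Lasso(alpha=0.05).fit(mrna, mirna).coef_)

      def ranks(score):
          # rank 0 = best (highest score)
          order = np.argsort(-score)
          r = np.empty_like(order)
          r[order] = np.arange(len(score))
          return r

      # Borda-style aggregation: average the two rank lists, then take the best candidates
      ensemble_rank = (ranks(pearson_score) + ranks(lasso_score)) / 2.0
      top = np.argsort(ensemble_rank)[:5]
      print("top candidate targets (gene indices):", top)   # gene 0 should rank highly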

  4. Ensemble Methods for MiRNA Target Prediction from Expression Data

    PubMed Central

    Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong

    2015-01-01

    Background microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved to outperform each of their individual component methods in theory. Results In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and the ensemble methods perform better than the individual methods across different datasets. The ensemble method, Pearson+IDA+Lasso, which combines methods in different approaches, including a correlation method, a causal inference method, and a regression method, is the best performed ensemble method in this study. Further analysis of the results of this ensemble method shows that the ensemble method can obtain more targets which could not be found by any of the single methods, and the discovered targets are more statistically significant and functionally enriched. The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials. PMID:26114448

  5. Detecting and Analyzing Corrosion Spots on the Hull of Large Marine Vessels Using Colored 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Aijazi, A. K.; Malaterre, L.; Tazir, M. L.; Trassoudaine, L.; Checchin, P.

    2016-06-01

    This work presents a new method that automatically detects and analyzes surface defects such as corrosion spots of different shapes and sizes, on large ship hulls. In the proposed method several scans from different positions and viewing angles around the ship are registered together to form a complete 3D point cloud. The R, G, B values associated with each scan, obtained with the help of an integrated camera are converted into HSV space to separate out the illumination invariant color component from the intensity. Using this color component, different surface defects such as corrosion spots of different shapes and sizes are automatically detected, within a selected zone, using two different methods depending upon the level of corrosion/defects. The first method relies on a histogram based distribution whereas the second on adaptive thresholds. The detected corrosion spots are then analyzed and quantified to help better plan and estimate the cost of repair and maintenance. Results are evaluated on real data using different standard evaluation metrics to demonstrate the efficacy as well as the technical strength of the proposed method.
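
    A minimal sketch of the colour-space step: per-point R, G, B values (random stand-ins here for the registered point cloud's colours) are converted to HSV with the Python standard library so that a fixed hue/saturation window can flag rust-like points; the window limits are arbitrary and the paper's histogram and adaptive-threshold stages are not reproduced.

      import colorsys
      import numpy as np

      rng = np.random.default_rng(1)
      # stand-in for a registered point cloud: x, y, z plus 8-bit R, G, B per point
      xyz = rng.uniform(0, 10, size=(1000, 3))
      rgb = rng.integers(0, 256, size=(1000, 3))

      def is_rust_like(r, g, b, hue_max=0.10, sat_min=0.45):
          """Illumination-tolerant test in HSV space: reddish-brown hue with enough
          saturation; the value (brightness) channel is deliberately ignored."""
          h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
          return h <= hue_max and s >= sat_min

      mask = np.array([is_rust_like(*p) for p in rgb])
      spots = xyz[mask]
      print("flagged %d of %d points as corrosion candidates" % (mask.sum(), len(xyz)))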

  6. Experiments to Evaluate and Implement Passive Tracer Gas Methods to Measure Ventilation Rates in Homes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lunden, Melissa; Faulkner, David; Heredia, Elizabeth

    2012-10-01

    This report documents experiments performed in three homes to assess the methodology used to determine air exchange rates using passive tracer techniques. The experiments used four different tracer gases emitted simultaneously but implemented with different spatial coverage in the home. Two different tracer gas sampling methods were used. The results characterize the factors of the execution and analysis of the passive tracer technique that affect the uncertainty in the calculated air exchange rates. These factors include uncertainties in tracer gas emission rates, differences in measured concentrations for different tracer gases, temporal and spatial variability of the concentrations, the comparison between different gas sampling methods, and the effect of different ventilation conditions.
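
    Under the usual steady-state, well-mixed assumption, the passive tracer technique reduces to a simple mass balance: air exchange rate = emission rate / (volume x concentration). The sketch below applies that balance and combines the stated relative uncertainties in quadrature; all numbers are made up for illustration and none come from the report.

      def air_exchange_rate(emission_m3_per_h, volume_m3, concentration_ppm):
          """Steady-state, well-mixed mass balance for a passive tracer:
          AER [1/h] = E / (V * C), with C expressed as a volume fraction."""
          c = concentration_ppm * 1e-6
          return emission_m3_per_h / (volume_m3 * c)

      def aer_uncertainty(aer, rel_err_emission, rel_err_concentration):
          # relative errors combined in quadrature (volume uncertainty ignored here)
          return aer * (rel_err_emission ** 2 + rel_err_concentration ** 2) ** 0.5

      # illustrative numbers only: 2 cm^3/h source, 350 m^3 house, 0.012 ppm measured
      aer = air_exchange_rate(emission_m3_per_h=2e-6, volume_m3=350.0, concentration_ppm=0.012)
      err = aer_uncertainty(aer, rel_err_emission=0.05, rel_err_concentration=0.15)
      print("air exchange rate = %.2f +/- %.2f per hour" % (aer, err))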

  7. Accessibility to primary health care in Belgium: an evaluation of policies awarding financial assistance in shortage areas

    PubMed Central

    2013-01-01

    Background In many countries, financial assistance is awarded to physicians who settle in an area that is designated as a shortage area to prevent unequal accessibility to primary health care. Today, however, policy makers use fairly simple methods to define health care accessibility, with physician-to-population ratios (PPRs) within predefined administrative boundaries being overwhelmingly favoured. Our purpose is to verify whether these simple methods are accurate enough for adequately designating medical shortage areas and explore how these perform relative to more advanced GIS-based methods. Methods Using a geographical information system (GIS), we conduct a nation-wide study of accessibility to primary care physicians in Belgium using four different methods: PPR, distance to closest physician, cumulative opportunity, and floating catchment area (FCA) methods. Results The official method used by policy makers in Belgium (calculating PPR per physician zone) offers only a crude representation of health care accessibility, especially because large contiguous areas (physician zones) are considered. We found substantial differences in the number and spatial distribution of medical shortage areas when applying different methods. Conclusions The assessment of spatial health care accessibility and concomitant policy initiatives are affected by and dependent on the methodology used. The major disadvantage of PPR methods is its aggregated approach, masking subtle local variations. Some simple GIS methods overcome this issue, but have limitations in terms of conceptualisation of physician interaction and distance decay. Conceptually, the enhanced 2-step floating catchment area (E2SFCA) method, an advanced FCA method, was found to be most appropriate for supporting areal health care policies, since this method is able to calculate accessibility at a small scale (e.g. census tracts), takes interaction between physicians into account, and considers distance decay. While at present in health care research methodological differences and modifiable areal unit problems have remained largely overlooked, this manuscript shows that these aspects have a significant influence on the insights obtained. Hence, it is important for policy makers to ascertain to what extent their policy evaluations hold under different scales of analysis and when different methods are used. PMID:23964751
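
    To make the floating catchment idea concrete, here is a compact sketch of the basic two-step floating catchment area (2SFCA) calculation, a simpler relative of the E2SFCA favoured in the paper: step one computes a supply-to-demand ratio within each physician's catchment, step two sums those ratios over the catchments reachable from each population location. The distance matrix, counts and catchment radius are toy inputs.

      import numpy as np

      def two_step_fca(distance_km, physicians, population, catchment_km=15.0):
          """Basic 2SFCA accessibility score per population location.
          distance_km[i, j] = distance from population location i to physician site j."""
          w = (distance_km <= catchment_km).astype(float)   # 1 if site j is reachable from location i
          # step 1: supply-to-demand ratio at each physician site
          demand = w.T @ population
          ratio = np.divide(physicians, demand,
                            out=np.zeros_like(physicians, dtype=float), where=demand > 0)
          # step 2: sum the ratios of all sites reachable from each population location
          return w @ ratio

      # toy example: 4 census tracts, 2 physician sites
      distance = np.array([[3.0, 20.0],
                           [8.0, 12.0],
                           [25.0, 5.0],
                           [30.0, 40.0]])
      access = two_step_fca(distance,
                            physicians=np.array([2.0, 1.0]),
                            population=np.array([1000.0, 1500.0, 800.0, 1200.0]))
      print(access)   # the last tract falls outside every catchment and scores zero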

  8. Band selection method based on spectrum difference in targets of interest in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaohan; Yang, Guang; Yang, Yongbo; Huang, Junhua

    2016-10-01

    While hyperspectral data provide rich spectral information, they contain many bands with high correlation coefficients, resulting in considerable data redundancy. A reasonable band selection is therefore important for subsequent processing: bands with a large amount of information and low correlation should be selected. On this basis, and according to the needs of target detection applications, the spectral characteristics of the objects of interest are taken into consideration in this paper, and a new method based on spectrum difference is proposed. Firstly, according to the spectrum differences of the targets of interest, a difference matrix is constructed that represents the spectral reflectance differences between targets in each band; bands exceeding a threshold are retained, constituting a subset of candidate bands. Secondly, the correlation coefficients between bands are calculated to form a correlation matrix, and the bands are divided into several groups according to the size of the correlation coefficients. Finally, the normalized variance of each band is used to represent its information content, and the bands are sorted by this value. Given the required number of bands, the optimum band combination is obtained by these three steps. This method retains the greatest degree of difference between the targets of interest and is easy to automate. False color image synthesis experiments are carried out using the bands selected by this method and by three other methods to show the performance of the proposed method.
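
    A schematic version of the three steps (difference thresholding, correlation grouping, normalized-variance ranking) is sketched below on synthetic data; the thresholds and the simple greedy grouping are illustrative choices, not the paper's exact procedure.

      import numpy as np

      rng = np.random.default_rng(2)
      n_bands, n_targets = 40, 3
      spectra = rng.random((n_targets, n_bands))              # mean reflectance of each target per band
      cube = rng.random((500, n_bands)) + 0.1 * spectra[0]    # toy "image": pixels x bands

      # step 1: keep bands where the targets of interest differ enough from each other
      diff = np.zeros(n_bands)
      for a in range(n_targets):
          for b in range(a + 1, n_targets):
              diff = np.maximum(diff, np.abs(spectra[a] - spectra[b]))
      candidates = np.where(diff > 0.3)[0]

      # step 2: greedily drop candidates that correlate strongly with an already-kept band
      corr = np.abs(np.corrcoef(cube.T))
      kept = []
      for band in candidates:
          if all(corr[band, k] < 0.9 for k in kept):
              kept.append(band)

      # step 3: rank the survivors by normalized variance and keep the requested number
      norm_var = cube.var(axis=0) / cube.mean(axis=0)
      kept.sort(key=lambda b: norm_var[b], reverse=True)
      print("selected bands:", kept[:5])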

  9. A spring system method for a mesh generation problem

    NASA Astrophysics Data System (ADS)

    Romanov, A.

    2018-04-01

    A new direct method for 2D mesh generation in a simply-connected domain using a spring system is presented. The method can be used together with other methods to modify a mesh for growing solid problems. Advantages and disadvantages of the method are discussed, different types of boundary conditions are explored, and modelling results for different target domains are given. Some applications to composite materials are also studied.

  10. Comparison of Three Different Methods for Pile Integrity Testing on a Cylindrical Homogeneous Polyamide Specimen

    NASA Astrophysics Data System (ADS)

    Lugovtsova, Y. D.; Soldatov, A. I.

    2016-01-01

    Three different methods for pile integrity testing are compared on a cylindrical homogeneous polyamide specimen: low strain pile integrity testing, multichannel pile integrity testing, and testing with a shaker system. Since low strain pile integrity testing is a well-established and standardized method, its results are used as a reference for the other two methods.

  11. Accurate Simulation and Detection of Coevolution Signals in Multiple Sequence Alignments

    PubMed Central

    Ackerman, Sharon H.; Tillier, Elisabeth R.; Gatti, Domenico L.

    2012-01-01

    Background While the conserved positions of a multiple sequence alignment (MSA) are clearly of interest, non-conserved positions can also be important because, for example, destabilizing effects at one position can be compensated by stabilizing effects at another position. Different methods have been developed to recognize the evolutionary relationship between amino acid sites, and to disentangle functional/structural dependencies from historical/phylogenetic ones. Methodology/Principal Findings We have used two complementary approaches to test the efficacy of these methods. In the first approach, we have used a new program, MSAvolve, for the in silico evolution of MSAs, which records a detailed history of all covarying positions, and builds a global coevolution matrix as the accumulated sum of individual matrices for the positions forced to co-vary, the recombinant coevolution, and the stochastic coevolution. We have simulated over 1600 MSAs for 8 protein families, which reflect sequences of different sizes and proteins with widely different functions. The calculated coevolution matrices were compared with the coevolution matrices obtained for the same evolved MSAs with different coevolution detection methods. In a second approach we have evaluated the capacity of the different methods to predict close contacts in the representative X-ray structures of an additional 150 protein families using only experimental MSAs. Conclusions/Significance Methods based on the identification of global correlations between pairs were found to be generally superior to methods based only on local correlations in their capacity to identify coevolving residues using either simulated or experimental MSAs. However, the significant variability in the performance of different methods with different proteins suggests that the simulation of MSAs that replicate the statistical properties of the experimental MSA can be a valuable tool to identify the coevolution detection method that is most effective in each case. PMID:23091608

  12. Further Insight and Additional Inference Methods for Polynomial Regression Applied to the Analysis of Congruence

    ERIC Educational Resources Information Center

    Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti

    2010-01-01

    In their seminal paper, Edwards and Parry (1993) presented the polynomial regression as a better alternative to applying difference score in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…

  13. Colorimetric characterization of digital cameras with unrestricted capture settings applicable for different illumination circumstances

    NASA Astrophysics Data System (ADS)

    Fang, Jingyu; Xu, Haisong; Wang, Zhehong; Wu, Xiaomin

    2016-05-01

    With colorimetric characterization, digital cameras can be used as image-based tristimulus colorimeters for color communication. In order to overcome the restriction to fixed capture settings adopted in conventional colorimetric characterization procedures, a novel method is proposed that takes the capture settings into account. The method for calculating colorimetric values of a measured image contains five main steps. These include a conversion from the RGB values to equivalent values under the training settings, using factors derived from an imaging system model, so as to bridge different capture settings, and scaling factors applied in the preparation steps of the transformation mapping to avoid errors caused by the nonlinearity of the polynomial mapping over different ranges of illumination levels. The experimental results indicate that the prediction error of the proposed method, measured by the CIELAB color difference formula, is less than 2 CIELAB units under different illumination levels and different correlated color temperatures. This prediction accuracy for different capture settings is at the same level as that of the conventional method under a particular lighting condition.
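
    The central mapping step in this kind of characterization, a polynomial regression from camera RGB to tristimulus values, can be written compactly; the sketch below fits a second-order polynomial transform to a set of training patches by least squares and applies it to new RGB values. The training data are random placeholders, and the exposure-scaling step described in the abstract is not reproduced.

      import numpy as np

      def poly_terms(rgb):
          """Second-order polynomial expansion of linear RGB values."""
          r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
          return np.column_stack([np.ones_like(r), r, g, b,
                                  r * g, r * b, g * b, r * r, g * g, b * b])

      rng = np.random.default_rng(5)
      train_rgb = rng.random((24, 3))                 # e.g. a 24-patch chart, linear RGB
      true_M = rng.random((10, 3))                    # unknown device characteristic (toy)
      train_xyz = poly_terms(train_rgb) @ true_M      # stand-in for measured XYZ of the patches

      # least-squares fit of the polynomial colour transform RGB -> XYZ
      M, *_ = np.linalg.lstsq(poly_terms(train_rgb), train_xyz, rcond=None)

      new_rgb = rng.random((5, 3))
      predicted_xyz = poly_terms(new_rgb) @ M
      print(predicted_xyz.round(3))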

  14. Newton's method applied to finite-difference approximations for the steady-state compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bailey, Harry E.; Beam, Richard M.

    1991-01-01

    Finite-difference approximations for steady-state compressible Navier-Stokes equations, whose two spatial dimensions are written in generalized curvilinear coordinates and strong conservation-law form, are presently solved by means of Newton's method in order to obtain a lifting-airfoil flow field under subsonic and transonic conditions. In addition to ascertaining the computational requirements of an initial guess ensuring convergence and the degree of computational efficiency obtainable via the approximate Newton method's freezing of the Jacobian matrices, attention is given to the need for auxiliary methods assessing the temporal stability of steady-state solutions. It is demonstrated that nonunique solutions of the finite-difference equations are obtainable by Newton's method in conjunction with a continuation method.
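
    The structure of such a Newton iteration, including the approximate variant with a frozen Jacobian, can be shown on a far smaller finite-difference problem; the sketch below solves u'' = exp(u) with homogeneous Dirichlet boundaries and is meant only to illustrate the residual/Jacobian mechanics, not the paper's Navier-Stokes solver.

      import numpy as np

      def residual(u, h):
          """Finite-difference residual of u'' - exp(u) = 0 on interior nodes,
          with u = 0 at both boundaries."""
          full = np.concatenate(([0.0], u, [0.0]))
          return (full[:-2] - 2.0 * full[1:-1] + full[2:]) / h**2 - np.exp(u)

      def jacobian(u, h):
          # tridiagonal Jacobian of the residual
          n = len(u)
          J = np.zeros((n, n))
          np.fill_diagonal(J, -2.0 / h**2 - np.exp(u))
          idx = np.arange(n - 1)
          J[idx, idx + 1] = 1.0 / h**2
          J[idx + 1, idx] = 1.0 / h**2
          return J

      def newton(n=50, freeze_jacobian=False, tol=1e-10, max_iter=30):
          h = 1.0 / (n + 1)
          u = np.zeros(n)
          J = jacobian(u, h)
          for it in range(max_iter):
              r = residual(u, h)
              if np.linalg.norm(r) < tol:
                  return u, it
              if not freeze_jacobian:
                  J = jacobian(u, h)      # exact Newton; keeping J frozen mimics the approximate variant
              u = u - np.linalg.solve(J, r)
          return u, max_iter

      for freeze in (False, True):
          _, iters = newton(freeze_jacobian=freeze)
          print("frozen Jacobian" if freeze else "exact Jacobian", "converged in", iters, "iterations")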

  15. Computationally efficient finite-difference modal method for the solution of Maxwell's equations.

    PubMed

    Semenikhin, Igor; Zanuccoli, Mauro

    2013-12-01

    In this work, a new implementation of the finite-difference (FD) modal method (FDMM) based on an iterative approach to calculate the eigenvalues and corresponding eigenfunctions of the Helmholtz equation is presented. Two relevant enhancements that significantly increase the speed and accuracy of the method are introduced. First of all, the solution of the complete eigenvalue problem is avoided in favor of finding only the meaningful part of eigenmodes by using iterative methods. Second, a multigrid algorithm and Richardson extrapolation are implemented. Simultaneous use of these techniques leads to an enhancement in terms of accuracy, which allows a simple method such as the FDMM with a typical three-point difference scheme to be significantly competitive with an analytical modal method.

  16. A New Moving Object Detection Method Based on Frame-difference and Background Subtraction

    NASA Astrophysics Data System (ADS)

    Guo, Jiajia; Wang, Junping; Bai, Ruixue; Zhang, Yao; Li, Yong

    2017-09-01

    Although many methods of moving object detection have been proposed, moving object extraction is still at the core of video surveillance. However, in the complex scenes of the real world, false detections, missed detections and deficiencies resulting from cavities inside the object body still occur. In order to solve the problem of incomplete detection of moving objects, a new moving object detection method combining an improved frame-difference and Gaussian mixture background subtraction is proposed in this paper. To make the detection more complete and accurate, image repair and morphological processing techniques, which are spatial compensations, are applied in the proposed method. Experimental results show that our method can effectively eliminate ghosts and noise and fill the cavities of the moving object. Compared with four other moving object detection methods, namely GMM, VIBE, frame-difference and a method from the literature, the proposed method improves the efficiency and accuracy of detection.
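
    A minimal OpenCV sketch of this kind of fusion (frame differencing combined with a Gaussian mixture background model, followed by morphological closing) is given below; the video path is a placeholder, the thresholds and kernel size are arbitrary, and the paper's image-repair step is not reproduced.

      import cv2

      cap = cv2.VideoCapture("surveillance.avi")          # placeholder path
      backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
      kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

      ok, prev = cap.read()
      if not ok:
          raise SystemExit("could not open the example video")
      prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

          # frame-difference mask
          _, diff_mask = cv2.threshold(cv2.absdiff(gray, prev_gray), 25, 255, cv2.THRESH_BINARY)
          # Gaussian mixture background subtraction mask
          bg_mask = backsub.apply(frame)

          # fuse the two masks, then close holes so the object silhouette is more complete
          fused = cv2.bitwise_or(diff_mask, bg_mask)
          fused = cv2.morphologyEx(fused, cv2.MORPH_CLOSE, kernel)

          cv2.imshow("moving objects", fused)
          prev_gray = gray
          if cv2.waitKey(30) == 27:                       # Esc to quit
              break

      cap.release()
      cv2.destroyAllWindows()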

  17. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

    PubMed Central

    Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly

    2016-01-01

    This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which should be calculated as in the previous method. Generally, a small number of arithmetic processes, which result in a shorter simulation time, are desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.

  18. Numerical simulation using vorticity-vector potential formulation

    NASA Technical Reports Server (NTRS)

    Tokunaga, Hiroshi

    1993-01-01

    An accurate and efficient computational method is needed for three-dimensional incompressible viscous flows in engineering applications. When solving turbulent shear flows directly or using a subgrid scale model, it is indispensable to resolve the small scale fluid motions as well as the large scale motions. From this point of view, the pseudo-spectral method has so far been used as the computational method. However, the finite difference and finite element methods are widely applied for computing flows of practical importance, since these methods are easily applied to flows with complex geometric configurations. Several problems nevertheless arise in applying the finite difference method to direct and large eddy simulations. Accuracy is one of the most important; this point was already addressed by the present author in direct simulations of the instability of plane Poiseuille flow and of the transition to turbulence. In order to obtain high efficiency, a multi-grid Poisson solver is combined with the higher-order accurate finite difference method. The formulation is also one of the most important problems in applying the finite difference method to incompressible turbulent flows. The three-dimensional Navier-Stokes equations have so far been solved in the primitive variables formulation. One of the major difficulties of this approach is the rigorous satisfaction of the equation of continuity. In general, a staggered grid is used to satisfy the solenoidal condition for the velocity field at the wall boundary. In the vorticity-vector potential formulation, however, the velocity field satisfies the equation of continuity automatically. From this point of view, the vorticity-vector potential method was extended to a generalized coordinate system. In the present article, we adopt the vorticity-vector potential formulation, the generalized coordinate system, and a 4th-order accurate difference method as the computational method. We present the computational method and apply it to computations of flows in a square cavity at large Reynolds number in order to investigate its effectiveness.

  19. Evaluation of two-phase flow solvers using Level Set and Volume of Fluid methods

    NASA Astrophysics Data System (ADS)

    Bilger, C.; Aboukhedr, M.; Vogiatzaki, K.; Cant, R. S.

    2017-09-01

    Two principal methods have been used to simulate the evolution of two-phase immiscible flows of liquid and gas separated by an interface. These are the Level-Set (LS) method and the Volume of Fluid (VoF) method. Both methods attempt to represent the very sharp interface between the phases and to deal with the large jumps in physical properties associated with it. Both methods have their own strengths and weaknesses. For example, the VoF method is known to be prone to excessive numerical diffusion, while the basic LS method has some difficulty in conserving mass. Major progress has been made in remedying these deficiencies, and both methods have now reached a high level of physical accuracy. Nevertheless, there remains an issue, in that each of these methods has been developed by different research groups, using different codes and most importantly the implementations have been fine tuned to tackle different applications. Thus, it remains unclear what are the remaining advantages and drawbacks of each method relative to the other, and what might be the optimal way to unify them. In this paper, we address this gap by performing a direct comparison of two current state-of-the-art variations of these methods (LS: RCLSFoam and VoF: interPore) and implemented in the same code (OpenFoam). We subject both methods to a pair of benchmark test cases while using the same numerical meshes to examine a) the accuracy of curvature representation, b) the effect of tuning parameters, c) the ability to minimise spurious velocities and d) the ability to tackle fluids with very different densities. For each method, one of the test cases is chosen to be fairly benign while the other test case is expected to present a greater challenge. The results indicate that both methods can be made to work well on both test cases, while displaying different sensitivity to the relevant parameters.

  20. Method for Non-Invasive Determination of Chemical Properties of Aqueous Solutions

    NASA Technical Reports Server (NTRS)

    Jones, Alan (Inventor); Thomas, Nathan A. (Inventor); Todd, Paul W. (Inventor)

    2016-01-01

    A method for non-invasively determining a chemical property of an aqueous solution is provided. The method provides the steps of providing a colored solute having a light absorbance spectrum and transmitting light through the colored solute at two different wavelengths. The method further provides the steps of measuring light absorbance of the colored solute at the two different transmitted light wavelengths, and comparing the light absorbance of the colored solute at the two different wavelengths to determine a chemical property of an aqueous solution.
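
    One common way to realise such a two-wavelength comparison is ratiometric: calibrate the absorbance ratio of the coloured solute (for example a pH indicator) against the property of interest, then invert the calibration for an unknown sample. The sketch below does exactly that with fabricated calibration points; it illustrates the principle rather than the patented procedure.

      import numpy as np

      # fabricated calibration: pH of buffer standards vs. absorbance at two wavelengths
      ph_std = np.array([5.0, 6.0, 7.0, 8.0, 9.0])
      a_570 = np.array([0.12, 0.25, 0.48, 0.71, 0.84])   # indicator absorbance at 570 nm
      a_430 = np.array([0.80, 0.66, 0.45, 0.24, 0.13])   # indicator absorbance at 430 nm
      ratio_std = a_570 / a_430                          # monotonically increasing with pH

      def ph_from_absorbance(a570, a430):
          """Estimate the pH of an unknown sample by interpolating its absorbance
          ratio on the calibration curve."""
          return np.interp(a570 / a430, ratio_std, ph_std)

      print("estimated pH:", round(float(ph_from_absorbance(0.55, 0.40)), 2))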

  1. Temperature profiles of different cooling methods in porcine pancreas procurement.

    PubMed

    Weegman, Bradley P; Suszynski, Thomas M; Scott, William E; Ferrer Fábrega, Joana; Avgoustiniatos, Efstathios S; Anazawa, Takayuki; O'Brien, Timothy D; Rizzari, Michael D; Karatzas, Theodore; Jie, Tun; Sutherland, David E R; Hering, Bernhard J; Papas, Klearchos K

    2014-01-01

    Porcine islet xenotransplantation is a promising alternative to human islet allotransplantation. Porcine pancreas cooling needs to be optimized to reduce the warm ischemia time (WIT) following donation after cardiac death, which is associated with poorer islet isolation outcomes. This study examines the effect of four different cooling Methods on core porcine pancreas temperature (n = 24) and histopathology (n = 16). All Methods involved surface cooling with crushed ice and chilled irrigation. Method A, which is the standard for porcine pancreas procurement, used only surface cooling. Method B involved an intravascular flush with cold solution through the pancreas arterial system. Method C involved an intraductal infusion with cold solution through the major pancreatic duct, and Method D combined all three cooling Methods. Surface cooling alone (Method A) gradually decreased core pancreas temperature to <10 °C after 30 min. Using an intravascular flush (Method B) improved cooling during the entire duration of procurement, but incorporating an intraductal infusion (Method C) rapidly reduced core temperature 15-20 °C within the first 2 min of cooling. Combining all methods (Method D) was the most effective at rapidly reducing temperature and providing sustained cooling throughout the duration of procurement, although the recorded WIT was not different between Methods (P = 0.36). Histological scores were different between the cooling Methods (P = 0.02) and the worst with Method A. There were differences in histological scores between Methods A and C (P = 0.02) and Methods A and D (P = 0.02), but not between Methods C and D (P = 0.95), which may highlight the importance of early cooling using an intraductal infusion. In conclusion, surface cooling alone cannot rapidly cool large (porcine or human) pancreata. Additional cooling with an intravascular flush and intraductal infusion results in improved core porcine pancreas temperature profiles during procurement and histopathology scores. These data may also have implications on human pancreas procurement as use of an intraductal infusion is not common practice. © 2014 John Wiley & Sons A/S Published by John Wiley & Sons Ltd.

  2. System and method employing a minimum distance and a load feature database to identify electric load types of different electric loads

    DOEpatents

    Lu, Bin; Yang, Yi; Sharma, Santosh K; Zambare, Prachi; Madane, Mayura A

    2014-12-23

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a load feature database of a plurality of different electric load types, each of the different electric load types including a first load feature vector having at least four different load features; sensing a voltage signal and a current signal for each of the different electric loads; determining a second load feature vector comprising at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the different electric loads; and identifying by a processor one of the different electric load types by determining a minimum distance of the second load feature vector to the first load feature vector of the different electric load types of the load feature database.
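
    The identification step described here is essentially a nearest-neighbour match in feature space. The sketch below builds a toy load-feature database with four features per load type (real power, apparent power, current crest factor and a simple THD estimate, which are plausible but assumed choices) and identifies an unknown load by minimum Euclidean distance; the numbers are illustrative, not the patent's.

      import numpy as np

      # toy feature database: [real power W, apparent power VA, current crest factor, THD]
      database = {
          "incandescent lamp": np.array([60.0, 60.0, 1.41, 0.02]),
          "LED driver":        np.array([12.0, 18.0, 2.90, 0.60]),
          "induction motor":   np.array([400.0, 520.0, 1.45, 0.05]),
          "microwave oven":    np.array([1100.0, 1250.0, 1.60, 0.25]),
      }

      def extract_features(voltage, current, fs=5000.0, mains_hz=50.0):
          """Second load-feature vector computed from sampled voltage/current waveforms."""
          real_power = np.mean(voltage * current)
          apparent = np.sqrt(np.mean(voltage**2)) * np.sqrt(np.mean(current**2))
          crest = np.max(np.abs(current)) / np.sqrt(np.mean(current**2))
          spectrum = np.abs(np.fft.rfft(current))
          k = int(round(mains_hz * len(current) / fs))        # FFT bin of the fundamental
          thd = np.sqrt(np.sum(spectrum[2*k::k][:5]**2)) / spectrum[k]
          return np.array([real_power, apparent, crest, thd])

      def identify(features, db):
          """Minimum-distance match of the measured feature vector against the database."""
          return min(db, key=lambda name: np.linalg.norm(features - db[name]))

      t = np.arange(0, 0.2, 1 / 5000.0)
      v = 325.0 * np.sin(2 * np.pi * 50.0 * t)
      i = 0.37 * np.sin(2 * np.pi * 50.0 * t)                 # ~60 W resistive load
      print(identify(extract_features(v, i), database))       # -> incandescent lamp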

  3. A simple and rapid method for preparing the whole section of starchy seed to investigate the morphology and distribution of starch in different regions of seed.

    PubMed

    Zhao, Lingxiao; Pan, Ting; Guo, Dongwei; Wei, Cunxu

    2018-01-01

    Storage starch in starchy seeds influences seed weight and texture, and determines their applications in the food and nonfood industries. Starch granules from different plant sources have significantly different shapes and sizes, and differences exist even between regions of the same tissue. It is therefore very important to investigate the morphology and distribution of starch in situ across the whole seed. However, a simple and rapid method for preparing whole sections of starchy seed, suitable for investigating the morphology and distribution of starch in large numbers of samples, has been lacking. In this study, a simple and rapid method was established to prepare whole sections of starchy seed, especially of floury seed. Whole seeds of translucent and chalky rice, vitreous and floury maize, and normal barley and wheat were sectioned successfully using the newly established method. The iodine-stained sections clearly exhibited the shapes and sizes of starch granules in different regions of the seed. Starch granules with different morphologies and iodine-staining colors were regionally distributed in the seeds of high-amylose rice and maize. Sections of lotus and kidney bean seeds also demonstrated the feasibility of the method for starchy non-cereal seeds. The simple and rapid method was proven effective for preparing whole sections of starchy seeds, which can be used to investigate the morphology and distribution of starch granules in different regions of the whole seed. The method is especially suitable for investigating starch morphology in large numbers of samples in a short time.

  4. Energy stable and high-order-accurate finite difference methods on staggered grids

    NASA Astrophysics Data System (ADS)

    O'Reilly, Ossian; Lundquist, Tomas; Dunham, Eric M.; Nordström, Jan

    2017-10-01

    For wave propagation over distances of many wavelengths, high-order finite difference methods on staggered grids are widely used due to their excellent dispersion properties. However, the enforcement of boundary conditions in a stable manner and treatment of interface problems with discontinuous coefficients usually pose many challenges. In this work, we construct a provably stable and high-order-accurate finite difference method on staggered grids that can be applied to a broad class of boundary and interface problems. The staggered grid difference operators are in summation-by-parts form and when combined with a weak enforcement of the boundary conditions, lead to an energy stable method on multiblock grids. The general applicability of the method is demonstrated by simulating an explosive acoustic source, generating waves reflecting against a free surface and material discontinuity.
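
    A very small example of the staggered-grid idea: the 1D acoustic system below stores pressure at cell centres and velocity at cell faces and advances them with second-order centred differences. Only the plain interior scheme and a naive rigid-wall closure are shown; the summation-by-parts boundary treatment that is the subject of the paper is not reproduced.

      import numpy as np

      # 1D acoustic system: p_t = -K v_x, v_t = -(1/rho) p_x
      n, L = 400, 1.0
      dx = L / n
      rho, K = 1.0, 1.0
      c = np.sqrt(K / rho)
      dt = 0.5 * dx / c                                   # CFL-limited time step

      x_p = (np.arange(n) + 0.5) * dx                     # pressure at cell centres
      p = np.exp(-((x_p - 0.5 * L) / 0.05) ** 2)          # Gaussian pulse
      v = np.zeros(n + 1)                                 # velocity at cell faces

      for _ in range(400):
          # velocity update from the pressure gradient (staggered in space and time)
          v[1:-1] -= dt / rho * (p[1:] - p[:-1]) / dx
          # rigid walls: v stays zero at both ends (simple closure, not the paper's SBP one)
          p -= dt * K * (v[1:] - v[:-1]) / dx

      print("max |p| after 400 steps:", float(np.abs(p).max()))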

  5. Shear Strength of Remoulding Clay Samples Using Different Methods of Moulding

    NASA Astrophysics Data System (ADS)

    Norhaliza, W.; Ismail, B.; Azhar, A. T. S.; Nurul, N. J.

    2016-07-01

    The shear strength of clay soil is required to determine soil stability. Clay is a soil with complex natural formations, and it is very difficult to obtain undisturbed samples at a site. The aim of this paper was to determine the unconfined shear strength of remoulded clay prepared by different moulding methods: proctor compaction, a hand operated soil compacter and a miniature mould. All samples were remoulded at the same optimum moisture content (OMC) of 18% and density of 1880 kg/m3. The unconfined shear strength of the remoulded clay was 289.56 kPa at 4.8% strain for the proctor compaction method, 261.66 kPa at 4.4% strain for the hand operated method and 247.52 kPa at 3.9% strain for the miniature mould method. Relative to the proctor compaction method, the reduction in unconfined shear strength was 9.66% for the hand operated method and 14.52% for the miniature mould method. Because there was no significant difference in the reduction of unconfined shear strength between the three methods, it can be concluded that remoulding clay by the hand operated method and the miniature mould method is acceptable, and these methods are suggested for preparing remoulded clay samples in future research. For comparison, the hand operated method was the more suitable way to form remoulded clay samples for unconfined shear strength determination in terms of ease, time saving and lower effort.

  6. Assessing the stock market volatility for different sectors in Malaysia by using standard deviation and EWMA methods

    NASA Astrophysics Data System (ADS)

    Saad, Shakila; Ahmad, Noryati; Jaffar, Maheran Mohd

    2017-11-01

    The concept of volatility, especially in the stock market, has attracted considerable attention from people working in the financial and economic sectors. Applications of volatility in financial economics include option pricing, the valuation of financial derivatives and the hedging of investment risk. There are various ways to measure volatility; in this study, two methods are used: the simple standard deviation and the Exponentially Weighted Moving Average (EWMA). The focus of this study is to measure the volatility of three different business sectors in Malaysia, namely the primary, secondary and tertiary sectors, using both methods. The daily and annual volatilities of the different business sectors are calculated from stock prices for the period 1 January 2014 to December 2014. The results show that different patterns of closing stock prices and returns give different volatility values when calculated with the simple method and the EWMA method.
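
    The two estimators compared in the study are easy to state: a sample standard deviation of returns, and the RiskMetrics-style EWMA recursion sigma2[t] = lambda * sigma2[t-1] + (1 - lambda) * r[t-1]^2. The sketch below computes both for a synthetic daily return series; lambda = 0.94 and 252 trading days are conventional choices, not values taken from the study.

      import numpy as np

      def ewma_volatility(returns, lam=0.94):
          """RiskMetrics-style EWMA volatility:
          sigma2[t] = lam * sigma2[t-1] + (1 - lam) * r[t-1]**2."""
          sigma2 = np.empty_like(returns)
          sigma2[0] = returns[:20].var()                  # seed with an initial sample variance
          for t in range(1, len(returns)):
              sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * returns[t - 1] ** 2
          return np.sqrt(sigma2)

      rng = np.random.default_rng(3)
      prices = 100 * np.exp(np.cumsum(rng.normal(0.0, 0.012, size=250)))   # synthetic closing prices
      returns = np.diff(np.log(prices))

      daily_sd = returns.std(ddof=1)                      # simple standard deviation method
      daily_ewma = ewma_volatility(returns)[-1]           # latest EWMA estimate
      print("daily vol  - std: %.4f  ewma: %.4f" % (daily_sd, daily_ewma))
      print("annualised - std: %.4f  ewma: %.4f" % (daily_sd * np.sqrt(252), daily_ewma * np.sqrt(252)))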

  7. [An attempt for standardization of serum CA19-9 levels, in order to dissolve the gap between three different methods].

    PubMed

    Hayashi, Kuniki; Hoshino, Tadashi; Yanai, Mitsuru; Tsuchiya, Tatsuyuki; Kumasaka, Kazunari; Kawano, Kinya

    2004-06-01

    It is well known that serious method-related differences exist in serum CA19-9 results, and the need for standardization has been pointed out. In this study, differences in serum tumor marker CA19-9 levels obtained by various immunoassay kits (CLEIA, FEIA, LPIA and RIA) were evaluated in sixty-seven clinical samples and five calibrators, and the possibility of reducing the inter-method differences was examined not only for the clinical samples but also for the calibrators. We defined an assumed standard material based on one of the calibrators and recalculated the serum CA19-9 levels for the three different measurement methods using this assumed standard material, bringing the CA19-9 values from the different methods closer together. It is suggested that recalculation against an assumed standard material would be able to correct between-method and between-laboratory discrepancies, in particular systematic errors.

  8. Standardised Benchmarking in the Quest for Orthologs

    PubMed Central

    Altenhoff, Adrian M.; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A.; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P.; Schreiber, Fabian; Sousa da Silva, Alan; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Juhl Jensen, Lars; Martin, Maria J.; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E.; Thomas, Paul D.; Sonnhammer, Erik; Dessimoz, Christophe

    2016-01-01

    The identification of evolutionarily related genes across different species—orthologs in particular—forms the backbone of many comparative, evolutionary, and functional genomic analyses. Achieving high accuracy in orthology inference is thus essential. Yet the true evolutionary history of genes, required to ascertain orthology, is generally unknown. Furthermore, orthologs are used for very different applications across different phyla, with different requirements in terms of the precision-recall trade-off. As a result, assessing the performance of orthology inference methods remains difficult for both users and method developers. Here, we present a community effort to establish standards in orthology benchmarking and facilitate orthology benchmarking through an automated web-based service (http://orthology.benchmarkservice.org). Using this new service, we characterise the performance of 15 well-established orthology inference methods and resources on a battery of 20 different benchmarks. Standardised benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimal requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  9. A probabilistic method for testing and estimating selection differences between populations

    PubMed Central

    He, Yungang; Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Xu, Shuhua; Jin, Li

    2015-01-01

    Human populations around the world encounter various environmental challenges and, consequently, develop genetic adaptations to different selection forces. Identifying the differences in natural selection between populations is critical for understanding the roles of specific genetic variants in evolutionary adaptation. Although numerous methods have been developed to detect genetic loci under recent directional selection, a probabilistic solution for testing and quantifying selection differences between populations is lacking. Here we report the development of a probabilistic method for testing and estimating selection differences between populations. By use of a probabilistic model of genetic drift and selection, we showed that logarithm odds ratios of allele frequencies provide estimates of the differences in selection coefficients between populations. The estimates approximate a normal distribution, and variance can be estimated using genome-wide variants. This allows us to quantify differences in selection coefficients and to determine the confidence intervals of the estimate. Our work also revealed the link between genetic association testing and hypothesis testing of selection differences. It therefore supplies a solution for hypothesis testing of selection differences. This method was applied to a genome-wide data analysis of Han and Tibetan populations. The results confirmed that both the EPAS1 and EGLN1 genes are under statistically different selection in Han and Tibetan populations. We further estimated differences in the selection coefficients for genetic variants involved in melanin formation and determined their confidence intervals between continental population groups. Application of the method to empirical data demonstrated the outstanding capability of this novel approach for testing and quantifying differences in natural selection. PMID:26463656
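
    The key quantity is the log odds ratio of allele frequencies between the two populations, standardised against its genome-wide spread. The sketch below computes that statistic and a two-sided p-value for each variant from simulated allele frequencies; it follows the spirit of the description above rather than the authors' exact drift model.

      import numpy as np
      from scipy.stats import norm

      def selection_difference_scan(p_pop1, p_pop2):
          """Per-variant log odds ratio of allele frequencies between two populations,
          standardised by the genome-wide spread (a stand-in for genetic drift),
          with a two-sided normal p-value."""
          eps = 1e-6
          p1 = np.clip(p_pop1, eps, 1 - eps)
          p2 = np.clip(p_pop2, eps, 1 - eps)
          log_or = np.log(p1 / (1 - p1)) - np.log(p2 / (1 - p2))
          z = (log_or - log_or.mean()) / log_or.std(ddof=1)
          pval = 2 * norm.sf(np.abs(z))
          return log_or, z, pval

      rng = np.random.default_rng(4)
      freq1 = rng.uniform(0.05, 0.95, size=10000)
      freq2 = np.clip(freq1 + rng.normal(0.0, 0.03, size=10000), 0.0, 1.0)
      freq2[0] = min(freq1[0] + 0.35, 0.99)               # plant one strongly differentiated variant

      log_or, z, pval = selection_difference_scan(freq1, freq2)
      print("planted variant: log OR = %.2f, p = %.2e" % (log_or[0], pval[0]))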

  10. 78 FR 9 - Airworthiness Directives; The Boeing Company Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-02

    ... AD adds repetitive inspections for cracking using different inspection methods and inspecting... cracking using different inspection methods and would inspect additional areas, and corrective actions if... an acceptable method for accomplishing the inspections in areas covered by non-terminating repairs as...

  11. Educating Instructional Designers: Different Methods for Different Outcomes.

    ERIC Educational Resources Information Center

    Rowland, Gordon; And Others

    1994-01-01

    Suggests new methods of teaching instructional design based on literature reviews of other design fields including engineering, architecture, interior design, media design, and medicine. Methods discussed include public presentations, visiting experts, competitions, artifacts, case studies, design studios, and internships and apprenticeships.…

  12. New non-invasive safe, quick, economical method of detecting various cancers was found using QRS complex or rising part of T-wave of recorded ECGs. Cancers can be screened along with their biochemical parameters & therapeutic effects of any cancer treatments can be evaluated using recorded ECGs of the same individual.

    PubMed

    Omura, Yoshiaki; Lu, Dominic; O'Young, Brian; Jones, Marilyn; Nihrane, Abdallah; Duvvi, Harsha; Shimotsuura, Yasuhiro; Ohki, Motomu

    2015-01-01

    There are many methods of detecting cancer, including the detection of cancer markers by blood tests (which is invasive, time consuming and relatively expensive) and the detection of cancers by imaging methods such as X-ray, CT scan, MRI and PET scan (which are non-invasive and quick but very expensive). Our research was performed to develop a new non-invasive, safe, quick and economical method of detecting cancers. The first author had previously developed clinically important non-invasive methods, including an early stage of the present method, using his technique for localizing accurate organ representation areas of the face, eyebrows, upper lip, lower lip, the surface and dorsal part of the tongue, and the backs and palm sides of the hands. This accurate localization of the organ representation areas of the different parts of the body was performed using the electromagnetic field resonance phenomenon between two identical molecules or tissues, based on our non-invasive method patented in the US in 1993. Since the year 2000, we have developed non-invasive diagnostic methods that can be applied quickly with this patented simple technique, without expensive or bulky instruments, at any office or in the field where no electricity or instrumentation is available. Examples of such non-invasive, quick approaches to the diagnosis and treatment of cancers include: 1) scanning different parts of the body with a soft red laser beam; 2) analysis of the speaking voice; 3) visible and invisible characteristic abnormalities on the organ representation areas of different parts of the body; and 4) mouth, hand, and foot writings of both the right and left sides of the body. As a consequence of our latest research, we were able to develop a simple method of detecting cancer from existing recorded electrocardiograms. In this article, we describe the method and the results of clinical applications to many different cancers of different organs, including the lung, esophagus, breast, stomach, colon, uterus, ovary and prostate gland, as well as common bone marrow related malignancies such as Hodgkin's lymphoma, non-Hodgkin's lymphoma, multiple myeloma and leukemia.

  13. Laboratory based instruction in Pakistan: Comparative evaluation of three laboratory instruction methods in biological science at higher secondary school level

    NASA Astrophysics Data System (ADS)

    Cheema, Tabinda Shahid

    This study of laboratory based instruction at higher secondary school level was an attempt to gain some insight into the effectiveness of three laboratory instruction methods: cooperative group instruction method, individualised instruction method and lecture demonstration method on biology achievement and retention. A Randomised subjects, Pre-test Post-test Comparative Methods Design was applied. Three groups of students from a year 11 class in Pakistan conducted experiments using the different laboratory instruction methods. Pre-tests, achievement tests after the experiments and retention tests one month later were administered. Results showed no significant difference between the groups on total achievement and retention, nor was there any significant difference on knowledge and comprehension test scores or skills performance. Future research investigating a similar problem is suggested.

  14. Chemometrics Methods for Specificity, Authenticity and Traceability Analysis of Olive Oils: Principles, Classifications and Applications

    PubMed Central

    Messai, Habib; Farman, Muhammad; Sarraj-Laabidi, Abir; Hammami-Semmar, Asma; Semmar, Nabil

    2016-01-01

    Background. Olive oils (OOs) show high chemical variability due to several factors of genetic, environmental and anthropic types. Genetic and environmental factors are responsible for natural compositions and polymorphic diversification resulting in different varietal patterns and phenotypes. Anthropic factors, however, are at the origin of different blends’ preparation leading to normative, labelled or adulterated commercial products. Control of complex OO samples requires their (i) characterization by specific markers; (ii) authentication by fingerprint patterns; and (iii) monitoring by traceability analysis. Methods. These quality control and management aims require the use of several multivariate statistical tools: specificity highlighting requires ordination methods; authentication checking calls for classification and pattern recognition methods; traceability analysis implies the use of network-based approaches able to separate or extract mixed information and memorized signals from complex matrices. Results. This chapter presents a review of different chemometrics methods applied for the control of OO variability from metabolic and physical-chemical measured characteristics. The different chemometrics methods are illustrated by different study cases on monovarietal and blended OO originated from different countries. Conclusion. Chemometrics tools offer multiple ways for quantitative evaluations and qualitative control of complex chemical variability of OO in relation to several intrinsic and extrinsic factors. PMID:28231172

  15. Research on the Calculation Method of Optical Path Difference of the Shanghai Tian Ma Telescope

    NASA Astrophysics Data System (ADS)

    Dong, J.; Fu, L.; Jiang, Y. B.; Liu, Q. H.; Gou, W.; Yan, F.

    2016-03-01

    Based on the Shanghai Tian Ma Telescope (TM), an optical path difference calculation method for the shaped Cassegrain antenna is presented in the paper. Firstly, the mathematical model of the TM optics is established based on the antenna reciprocity theorem. Secondly, the TM sub-reflector and main reflector are fitted with Non-Uniform Rational B-Splines (NURBS). Finally, the method of optical path difference calculation is implemented, and the extended application of the Ruze optical path difference formulas to the TM is investigated. The method can be used to calculate the optical path difference distributions across the aperture field of the TM due to misalignments such as axial and lateral displacements of the feed and sub-reflector, or tilt of the sub-reflector. When the misalignment is small, the extended Ruze optical path difference formulas can be used to calculate the optical path difference quickly. The paper supports the real-time measurement and adjustment of the TM structure. The approach is general and can serve as a reference for the optical path difference calculation of other radio telescopes with shaped surfaces.

  16. Assessment of four methods to estimate surface UV radiation using satellite data, by comparison with ground measurements from four stations in Europe

    NASA Astrophysics Data System (ADS)

    Arola, Antti; Kalliskota, S.; den Outer, P. N.; Edvardsen, K.; Hansen, G.; Koskela, T.; Martin, T. J.; Matthijsen, J.; Meerkoetter, R.; Peeters, P.; Seckmeyer, G.; Simon, P. C.; Slaper, H.; Taalas, P.; Verdebout, J.

    2002-08-01

    Four different satellite-UV mapping methods are assessed by comparing them against ground-based measurements. The study includes most of the variability found in geographical, meteorological and atmospheric conditions. Three of the methods did not show any significant systematic bias, except during snow cover. The mean difference (bias) in daily doses for the Rijksinstituut voor Volksgezondheid en Milieu (RIVM) and Joint Research Centre (JRC) methods was found to be less than 10% with a RMS difference of the order of 30%. The Deutsches Zentrum für Luft- und Raumfahrt (DLR) method was assessed for a few selected months, and the accuracy was similar to the RIVM and JRC methods. It was additionally used to demonstrate how spatial averaging of high-resolution cloud data improves the estimation of UV daily doses. For the Institut d'Aéronomie Spatiale de Belgique (IASB) method the differences were somewhat higher, because of their original cloud algorithm. The mean difference in daily doses for IASB was about 30% or more, depending on the station, while the RMS difference was about 60%. The cloud algorithm of IASB has been replaced recently, and as a result the accuracy of the IASB method has improved. Evidence is found that further research and development should focus on the improvement of the cloud parameterization. Estimation of daily exposures is likely to be improved if additional time-resolved cloudiness information is available for the satellite-based methods. It is also demonstrated that further development work should be carried out on the treatment of albedo of snow-covered surfaces.
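
    The comparison statistics reported here (bias and RMS difference between satellite-derived and ground-measured daily doses) can be sketched as below; the dose values are illustrative placeholders, not data from the study.

```python
# Hedged sketch of the comparison statistics: mean relative difference (bias)
# and RMS relative difference between satellite-derived and ground-measured
# daily UV doses.  The arrays are made-up example values.
import numpy as np

ground = np.array([2.1, 3.4, 1.8, 4.0, 2.9])      # ground-based daily doses (kJ/m^2)
satellite = np.array([2.3, 3.1, 1.9, 4.4, 2.6])   # satellite-derived estimates

rel_diff = (satellite - ground) / ground * 100.0   # per-day difference in percent

bias = rel_diff.mean()                 # systematic over/underestimation
rms = np.sqrt((rel_diff ** 2).mean())  # overall scatter

print(f"bias = {bias:+.1f} %, RMS difference = {rms:.1f} %")
```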

  17. A comparison of five approaches to measurement of anatomic knee alignment from radiographs.

    PubMed

    McDaniel, G; Mitchell, K L; Charles, C; Kraus, V B

    2010-02-01

    The recent recognition of the correlation of the hip-knee-ankle angle (HKA) with femur-tibia angle (FTA) on a standard knee radiograph has led to the increasing inclusion of FTA assessments in OA studies due to its clinical relevance, cost effectiveness and minimal radiation exposure. Our goal was to investigate the performance metrics of currently used methods of FTA measurement to determine whether a specific protocol could be recommended based on these results. Inter- and intra-rater reliability of FTA measurements were determined by intraclass correlation coefficient (ICC) of two independent analysts. Minimal detectable differences were determined and the correlation of FTA and HKA was analyzed by linear regression. Differences among methods of measuring HKA were assessed by ANOVA. All five methods of FTA measurement demonstrated high precision by inter- and intra-rater reproducibility (ICCs ≥ 0.93). All five methods displayed good accuracy, but after correction for the offset of FTA from HKA, the femoral notch landmark method was the least accurate. However, the methods differed according to their minimal detectable differences; the FTA methods utilizing the center of the base of the tibial spines or the center of the tibial plateau as knee center landmarks yielded the smallest minimal detectable differences (1.25 degrees and 1.72 degrees, respectively). All methods of FTA were highly reproducible, but varied in their accuracy and sensitivity to detect meaningful differences. Based on these parameters we recommend standardizing measurement angles with vertices at the base of the tibial spines or the center of the tibia and comparing single-point and two-point methods in larger studies. Copyright 2009 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
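
    A minimal detectable difference of the kind reported above can be derived from reliability results such as ICC values; the sketch below uses the common MDC95 = 1.96 * sqrt(2) * SEM convention, which is an assumption and may differ from the paper's exact computation, and the numbers are illustrative.

```python
# Sketch of deriving a minimal detectable difference from an ICC.
# The MDC95 formula and the example numbers are assumptions.
import math

def minimal_detectable_difference(sd_between_subjects, icc, z=1.96):
    """Standard error of measurement, then 95% minimal detectable change."""
    sem = sd_between_subjects * math.sqrt(1.0 - icc)
    return z * math.sqrt(2.0) * sem

# Illustrative values: 2.5 degrees spread in FTA across knees, ICC = 0.93.
print(f"MDD ~ {minimal_detectable_difference(2.5, 0.93):.2f} degrees")
```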

  18. Stochastic rainfall synthesis for urban applications using different regionalization methods

    NASA Astrophysics Data System (ADS)

    Callau Poduje, A. C.; Leimbach, S.; Haberlandt, U.

    2017-12-01

    The proper design and efficient operation of urban drainage systems require long and continuous rainfall series in a high temporal resolution. Unfortunately, these time series are usually available in a few locations and it is therefore suitable to develop a stochastic precipitation model to generate rainfall in locations without observations. The model presented is based on an alternating renewal process and involves an external and an internal structure. The members of these structures are described by probability distributions which are site specific. Different regionalization methods based on site descriptors are presented which are used for estimating the distributions for locations without observations. Regional frequency analysis, multiple linear regressions and a vine-copula method are applied for this purpose. An area located in the north-west of Germany is used to compare the different methods and involves a total of 81 stations with 5 min rainfall records. The site descriptors include information available for the whole region: position, topography and hydrometeorologic characteristics which are estimated from long term observations. The methods are compared directly by cross validation of different rainfall statistics. Given that the model is stochastic, the evaluation is performed based on ensembles of many long synthetic time series which are compared with observed ones. The performance is also evaluated indirectly by setting up a fictional urban hydrological system to test the capability of the different methods regarding flooding and overflow characteristics. The results show a good representation of the seasonal variability and good performance in reproducing the sample statistics of the rainfall characteristics. The copula-based method proved to be the most robust of the three methods. Advantages and disadvantages of the different methods are presented and discussed.
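
    The alternating renewal idea behind such a rainfall generator can be sketched as below; the exponential and gamma distributions and all parameter values are placeholders, not the site-specific fitted distributions of the paper.

```python
# Hedged sketch of an alternating renewal rainfall generator: dry and wet
# spell durations are drawn alternately, and each wet spell receives a depth.
# Distribution choices and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def synthetic_events(n_events, mean_dry_h=30.0, mean_wet_h=4.0, mean_depth_mm=3.0):
    events = []
    for _ in range(n_events):
        dry = rng.exponential(mean_dry_h)                        # dry spell (h)
        wet = rng.exponential(mean_wet_h)                        # wet spell (h)
        depth = rng.gamma(shape=1.2, scale=mean_depth_mm / 1.2)  # event depth (mm)
        events.append((dry, wet, depth))
    return events

for dry, wet, depth in synthetic_events(3):
    print(f"dry {dry:5.1f} h -> wet {wet:4.1f} h, {depth:4.1f} mm")
```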

  19. Evaluation of Four Methods for Predicting Carbon Stocks of Korean Pine Plantations in Heilongjiang Province, China

    PubMed Central

    Gao, Huilin; Dong, Lihu; Li, Fengri; Zhang, Lianjun

    2015-01-01

    A total of 89 trees of Korean pine (Pinus koraiensis) were destructively sampled from the plantations in Heilongjiang Province, P.R. China. The sample trees were measured and calculated for the biomass and carbon stocks of tree components (i.e., stem, branch, foliage and root). Both compatible biomass and carbon stock models were developed with the total biomass and total carbon stocks as the constraints, respectively. Four methods were used to evaluate the carbon stocks of tree components. The first method predicted carbon stocks directly by the compatible carbon stock models (Method 1). The other three methods indirectly predicted the carbon stocks in two steps: (1) estimating the biomass by the compatible biomass models, and (2) multiplying the estimated biomass by three different carbon conversion factors (i.e., carbon conversion factor 0.5 (Method 2), average carbon concentration of the sample trees (Method 3), and average carbon concentration of each tree component (Method 4)). The prediction errors of estimating the carbon stocks were compared and tested for the differences between the four methods. The results showed that the compatible biomass and carbon models with tree diameter (D) as the sole independent variable performed well, so that Method 1 was the best method for predicting the carbon stocks of tree components and of the total. There were significant differences among the four methods for the carbon stock of the stem. Method 2 produced the largest error, especially for the stem and the total. Method 3 and Method 4 were slightly worse than Method 1, but the differences were not statistically significant. In practice, the indirect method using the mean carbon concentration of individual trees was sufficient to obtain accurate carbon stock estimation if carbon stock models are not available. PMID:26659257
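
    The four estimation routes compared in the paper can be laid out schematically as below; the diameter-based allometric form, all coefficients and the carbon concentrations are hypothetical placeholders, not the fitted values for Korean pine.

```python
# Hedged sketch of the four carbon-stock estimation routes.  The equations
# and numbers below are illustrative assumptions only.
def biomass_kg(d_cm, a=0.08, b=2.4):
    """Hypothetical diameter-based allometric biomass model."""
    return a * d_cm ** b

def carbon_stock(d_cm, method, cf_component=0.48, cf_tree=0.47):
    if method == 1:                       # direct carbon-stock model
        return 0.038 * d_cm ** 2.4        # (illustrative D-based equation)
    if method == 2:                       # biomass x generic factor 0.5
        return biomass_kg(d_cm) * 0.5
    if method == 3:                       # biomass x mean tree carbon concentration
        return biomass_kg(d_cm) * cf_tree
    if method == 4:                       # biomass x component-specific concentration
        return biomass_kg(d_cm) * cf_component
    raise ValueError("method must be 1-4")

for m in (1, 2, 3, 4):
    print(f"Method {m}: {carbon_stock(30.0, m):.1f} kg C for a 30 cm tree")
```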

  20. Comparative study between recent methods manipulating ratio spectra and classical methods based on two-wavelength selection for the determination of binary mixture of antazoline hydrochloride and tetryzoline hydrochloride.

    PubMed

    Abdel-Halim, Lamia M; Abd-El Rahman, Mohamed K; Ramadan, Nesrin K; El Sanabary, Hoda F A; Salem, Maissa Y

    2016-04-15

    A comparative study was developed between two classical spectrophotometric methods (dual wavelength method and Vierordt's method) and two recent methods manipulating ratio spectra (ratio difference method and first derivative of ratio spectra method) for simultaneous determination of Antazoline hydrochloride (AN) and Tetryzoline hydrochloride (TZ) in their combined pharmaceutical formulation and in the presence of benzalkonium chloride as a preservative without preliminary separation. The dual wavelength method depends on choosing two wavelengths for each drug such that the difference in absorbance at those two wavelengths is zero for the other drug. Vierordt's method is based upon measuring the absorbance and the absorptivity values of the two drugs at their λ(max) (248.0 and 219.0 nm for AN and TZ, respectively), followed by substitution in the corresponding Vierordt's equation. The recent methods manipulating ratio spectra depend either on measuring the difference in amplitudes of the ratio spectra between 255.5 and 269.5 nm for AN and between 220.0 and 273.0 nm for TZ (ratio difference method), or on computing the first derivative of the ratio spectra for each drug and then measuring the peak amplitude at 250.0 nm for AN and at 224.0 nm for TZ (first derivative of ratio spectrophotometry). The specificity of the developed methods was investigated by analyzing different laboratory prepared mixtures of the two drugs. All methods were applied successfully for the determination of the selected drugs in their combined dosage form, showing that the classical spectrophotometric methods can still be used successfully for the analysis of a binary mixture with minimal data manipulation, whereas the recent methods require relatively more steps. Furthermore, validation of the proposed methods was performed according to ICH guidelines; accuracy, precision and repeatability were found to be within the acceptable limits. Statistical studies showed that the methods can be competitively applied in quality control laboratories. Copyright © 2016 Elsevier B.V. All rights reserved.
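
    The two core manipulations (dual-wavelength subtraction and ratio difference) can be sketched on simulated spectra as below; the Gaussian band shapes, concentrations and most wavelengths are placeholders and do not reproduce the real drug spectra.

```python
# Hedged sketch of the dual-wavelength and ratio-difference ideas on
# simulated spectra.  Band shapes and mixing fractions are assumptions.
import numpy as np

wl = np.linspace(200, 300, 501)
gauss = lambda centre, width: np.exp(-0.5 * ((wl - centre) / width) ** 2)
spec_an = 0.9 * gauss(248, 12)           # "AN"-like component spectrum
spec_tz = 0.7 * gauss(219, 10)           # "TZ"-like component spectrum
mixture = 0.6 * spec_an + 0.4 * spec_tz  # mixture with known contributions

def dual_wavelength(mix, interferent, w1, w2):
    # w1 and w2 are chosen so the interferent absorbs equally at both;
    # the difference then cancels the interferent and tracks the analyte.
    i1, i2 = np.searchsorted(wl, [w1, w2])
    return mix[i1] - mix[i2], interferent[i1] - interferent[i2]

def ratio_difference(mix, divisor, w1, w2):
    ratio = mix / divisor                # ratio spectrum w.r.t. the divisor drug
    i1, i2 = np.searchsorted(wl, [w1, w2])
    return ratio[i1] - ratio[i2]

# 230 and 208 nm are equidistant from the simulated TZ band centre (219 nm),
# so the TZ contribution cancels (second returned value is ~0).
print(dual_wavelength(mixture, spec_tz, 230.0, 208.0))
print(ratio_difference(mixture, spec_tz, 255.5, 269.5))
```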

  1. Vulnerability to cavitation in Olea europaea current-year shoots: further evidence of an open-vessel artifact associated with centrifuge and air-injection techniques.

    PubMed

    Torres-Ruiz, José M; Cochard, Hervé; Mayr, Stefan; Beikircher, Barbara; Diaz-Espejo, Antonio; Rodriguez-Dominguez, Celia M; Badel, Eric; Fernández, José Enrique

    2014-11-01

    Different methods have been devised to analyze the vulnerability of plants to cavitation. Although a good agreement between them is usually found, some discrepancies have been reported when measuring samples from long-vesseled species. The aim of this study was to evaluate possible artifacts derived from different methods and sample sizes. Current-year shoot segments of mature olive trees (Olea europaea), a long-vesseled species, were used to generate vulnerability curves (VCs) by bench dehydration, pressure collar and both static- and flow-centrifuge methods. For the latter, two different rotors were used to test possible effects of the rotor design on the curves. In addition, high-resolution computed tomography (HRCT) images were used to evaluate the functional status of xylem at different water potentials. Measurements of native embolism were used to validate the methods used. The pressure collar and the two centrifugal methods showed greater vulnerability to cavitation than the dehydration method. The shift in vulnerability thresholds in centrifuge methods was more pronounced in shorter samples, supporting the open-vessel artifact hypothesis as a higher proportion of vessels were open in short samples. The two different rotor designs used for the flow-centrifuge method revealed similar vulnerability to cavitation. Only the bench dehydration or HRCT methods produced VCs that agreed with native levels of embolism and water potential values measured in the field. © 2014 Scandinavian Plant Physiology Society.

  2. A factorial design experiment as a pilot study for noninvasive genetic sampling.

    PubMed

    Renan, Sharon; Speyer, Edith; Shahar, Naama; Gueta, Tomer; Templeton, Alan R; Bar-David, Shirli

    2012-11-01

    Noninvasive genetic sampling has increasingly been used in ecological and conservation studies during the last decade. A major part of the noninvasive genetic literature is dedicated to the search for optimal protocols, by comparing different methods of collection, preservation and extraction of DNA from noninvasive materials. However, the lack of quantitative comparisons among these studies and the possibility that different methods are optimal for different systems make it difficult to decide which protocol to use. Moreover, most studies that have compared different methods focused on a single factor - collection, preservation or extraction - while there could be interactions between these factors. We designed a factorial experiment, as a pilot study, aimed at exploring the effect of several collection, preservation and extraction methods, and the interactions between them, on the quality and amplification success of DNA obtained from Asiatic wild ass (Equus hemionus) faeces in Israel. The amplification success rates of one mitochondrial DNA and four microsatellite markers differed substantially as a function of collection, preservation and extraction methods and their interactions. The most efficient combination for our system integrated the use of swabs as a collection method with preservation at -20 °C and with the Qiagen DNA Stool Kit with modifications as the DNA extraction method. The significant interaction found between the collection, preservation and extraction methods reinforces the importance of conducting a factorial design experiment, rather than examining each factor separately, as a pilot study before initiating a full-scale noninvasive research project. © 2012 Blackwell Publishing Ltd.
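
    Laying out such a full factorial design is a simple enumeration of factor-level combinations, as in the sketch below; the factor levels shown are illustrative stand-ins rather than the exact protocols tested in the study.

```python
# Hedged sketch of enumerating a full factorial design: every combination of
# collection, preservation and extraction method is one experimental cell,
# so interactions between factors can be examined rather than testing each
# factor in isolation.  Level names are illustrative assumptions.
from itertools import product

collection = ["swab", "surface scrape", "whole pellet"]
preservation = ["-20C", "ethanol", "silica"]
extraction = ["stool kit", "tissue kit"]

design = list(product(collection, preservation, extraction))
print(f"{len(design)} factorial cells, e.g.:")
for cell in design[:4]:
    print("  ", cell)
```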

  3. Statistical evaluation of fatty acid profile and cholesterol content in fish (common carp) lipids obtained by different sample preparation procedures.

    PubMed

    Spiric, Aurelija; Trbovic, Dejana; Vranic, Danijela; Djinovic, Jasna; Petronijevic, Radivoj; Matekalo-Sverak, Vesna

    2010-07-05

    Studies performed on lipid extraction from animal and fish tissues do not provide information on its influence on the fatty acid composition of the extracted lipids or on cholesterol content. Data presented in this paper indicate the impact of extraction procedures on the fatty acid profile of fish lipids extracted by the modified Soxhlet and ASE (accelerated solvent extraction) procedures. Cholesterol was also determined by the direct saponification method. Student's paired t-test, used for comparison of the total fat content in the carp population obtained by the two extraction methods, shows that the differences between the total fat contents determined by the ASE and modified Soxhlet methods are not statistically significant. Values obtained by three different methods (direct saponification, ASE and modified Soxhlet method), used for determination of cholesterol content in carp, were compared by one-way analysis of variance (ANOVA). The results show that the modified Soxhlet method gives values which differ significantly from those obtained by the direct saponification and ASE methods, whereas the results obtained by direct saponification and ASE do not differ significantly from each other. The highest quantities of cholesterol (37.65 to 65.44 mg/100 g) in the analyzed fish muscle were obtained by applying the direct saponification method, as the less destructive one, followed by ASE (34.16 to 52.60 mg/100 g) and the modified Soxhlet extraction method (10.73 to 30.83 mg/100 g). The modified Soxhlet method for extraction of fish lipids gives higher values for n-6 fatty acids than the ASE method (t(paired)=3.22, t(c)=2.36), while there is no statistically significant difference in the n-3 content levels between the methods (t(paired)=1.31). The UNSFA/SFA ratio obtained using the modified Soxhlet method is also higher than the ratio obtained using the ASE method (t(paired)=4.88, t(c)=2.36). Results of Principal Component Analysis (PCA) showed that the highest positive impact on the second principal component (PC2) is made by C18:3 n-3 and C20:3 n-6, which are present in higher amounts in the samples treated by the modified Soxhlet extraction, while C22:5 n-3, C20:3 n-3, C22:1 and C20:4, C16 and C18 negatively influence the score values of the PC2, showing significantly increased levels in the samples treated by the ASE method. Hotelling's paired T-square test, used on the first three principal components for confirmation of differences in individual fatty acid content obtained by the ASE and Soxhlet methods in carp muscle, showed a statistically significant difference between these two data sets (T(2)=161.308, p<0.001). Copyright 2010 Elsevier B.V. All rights reserved.

  4. High order spectral difference lattice Boltzmann method for incompressible hydrodynamics

    NASA Astrophysics Data System (ADS)

    Li, Weidong

    2017-09-01

    This work presents a lattice Boltzmann equation (LBE) based high order spectral difference method for incompressible flows. In the present method, the spectral difference (SD) method is adopted to discretize the convection and collision terms of the LBE to obtain high order (≥3) accuracy. Because the SD scheme represents the solution as cell-local polynomials and the solution polynomials have a good tensor-product property, the present spectral difference lattice Boltzmann method (SD-LBM) can be implemented on arbitrary unstructured quadrilateral meshes for effective and efficient treatment of complex geometries. Because only first-order PDEs are involved in the LBE, no special techniques, such as the hybridizable discontinuous Galerkin (HDG) or local discontinuous Galerkin (LDG) methods, are needed to discretize a diffusion term, which simplifies the algorithm and implementation of the high order spectral difference method for simulating viscous flows. The proposed SD-LBM is validated with four incompressible flow benchmarks in two dimensions: (a) the Poiseuille flow driven by a constant body force; (b) the lid-driven cavity flow without singularity at the two top corners (Burggraf flow); (c) the unsteady Taylor-Green vortex flow; and (d) the Blasius boundary-layer flow past a flat plate. Computational results are compared with analytical solutions of these cases and convergence studies of these cases are also given. The designed accuracy of the proposed SD-LBM is clearly verified.

  5. The legal aspects of parental rights in assisted reproductive technology.

    PubMed

    Ciccarelli, John K; Ciccarelli, Janice C

    2005-03-01

    This paper provides an overview of the different legal approaches that are used in various jurisdictions to determine parental rights and obligations of the parties involved in third party assisted reproduction. Additionally, the paper explores the differing legal models that are used depending on the method of surrogacy being utilized. The data demonstrates that a given method of surrogacy may well result in different procedures and outcomes regarding parental rights in different jurisdictions. This suggests the need for a uniform method to resolve parental rights where assisted reproductive technology is involved.

  6. 20170824 - Enhancing the Application of Alternative Methods Through Global Cooperation (WC10)

    EPA Science Inventory

    Progress towards the development and translation of alternative testing methods to safety-related decision making is a common goal that crosses organizational, stakeholder, and international boundaries. The challenge is that different organizations have different missions, differ...

  7. Influence of the Extractive Method on the Recovery of Phenolic Compounds in Different Parts of Hymenaea martiana Hayne

    PubMed Central

    Oliveira, Fernanda Granja da Silva; de Lima-Saraiva, Sarah Raquel Gomes; Oliveira, Ana Paula; Rabêlo, Suzana Vieira; Rolim, Larissa Araújo; Almeida, Jackson Roberto Guedes da Silva

    2016-01-01

    Background: Popularly known as “jatobá,” Hymenaea martiana Hayne is a medicinal plant widely used in the Brazilian Northeast for the treatment of various diseases. Objective: The aim of this study was to evaluate the influence of different extractive methods on the production of phenolic compounds from different parts of H. martiana. Materials and Methods: The leaves, bark, fruits, and seeds were dried, pulverized, and submitted to maceration, ultrasound, and percolation extractive methods, which were evaluated for yield, visual aspects, qualitative phytochemical screening, phenolic compound content, and total flavonoids. Results: The highest yields were obtained from the maceration of the leaves, which may be related to the contact time between the plant drug and the solvent. The visual aspects of the extracts presented some differences between the extractive methods. The phytochemical screening showed data consistent with other studies of the genus. Both the plant part and the different extractive methods significantly influenced the levels of phenolic compounds, and the highest content was found in the maceration of the barks, even more than the content found previously. No differences between the levels of total flavonoids were significant. The highest concentration of total flavonoids was found in the ultrasound extraction of the barks, followed by maceration of the same drug. According to the results, the barks of H. martiana presented the highest total flavonoid contents. Conclusion: The results demonstrate that both the plant part and the different extractive methods significantly influenced various parameters obtained in the various extracts, demonstrating the importance of systematic comparative studies for the development of pharmaceuticals and cosmetics. SUMMARY The phytochemical screening showed data consistent with other studies of the genus Hymenaea. Both the plant part and the different extractive methods significantly influenced various parameters obtained in the various extracts, including the levels of phenolic compounds. The barks of H. martiana presented the highest total phenolic and flavonoid contents. PMID:27695267

  8. [A clinical study on different decompression methods in cervical spondylosis].

    PubMed

    Ma, Xun; Zhao, Xiao-fei; Zhao, Yi-bo

    2009-04-15

    To analyze the different decompression methods used to treat cervical spondylosis based on imageological evaluation. Two hundred and sixty-three consecutive patients with cervical spondylosis between Nov. 2004 and Oct. 2007 were involved in this study. Patients were assigned to different operation groups based on the preoperative imageological evaluation, including anterior or posterior decompression methods. The anterior methods consisted of discectomy of one to three segments with autogenous iliac graft, titanium mesh or cage fusion and titanium plate fixation; subtotal vertebrectomy of one to two segments with autogenous iliac graft or titanium mesh fusion and titanium plate fixation; or discectomy plus subtotal vertebrectomy. The posterior method was expansive single open-door laminoplasty, among other operation types. All the patients were divided into different groups by the preoperative imageological evaluation, age, sex and course of disease. We then collected each group's preoperative and postoperative JOA scores and mean improvement rate to evaluate the postoperative effect of the different decompression methods. Two hundred and thirty-five patients were followed up for a mean period of 18 months (range, 4 to 36 months). JOA scores of all patients were improved to different degrees after the operations. Both anterior and posterior decompression methods achieved high mean improvement rates. There were no significant differences in mean improvement rates between the anterior groups, nor between males and females (P > 0.05). The effect decreased as age increased or as the course of disease lengthened; statistically significant differences existed among the different age groups and between the course-of-disease groups (P < 0.05). Anterior and posterior decompression methods can both achieve a good effect. The key points are to choose the surgical indication correctly, decompress thoroughly, and make the fusion reliable and the fixation firm. The method should be chosen according to the patient's imageological evaluation. The anterior operation types included discectomy of one to three segments, subtotal vertebrectomy of one to two segments and discectomy plus subtotal vertebrectomy.

  9. [The Misgav-Ladach method for cesarean section compared to the Pfannenstiel technique].

    PubMed

    Studziński, Zbigniew

    2002-08-01

    The aim of the study was to evaluate the outcome of two different methods of cesarean section, and to determine whether the Misgav-Ladach caesarean technique can offer benefits compared with the conventional Pfannenstiel caesarean section technique. This study describes operative details and the postoperative course of 110 patients who underwent caesarean section from May 2000 to December 2000 in the Department of Gynecology and Obstetrics of the Regional Hospital in Slupsk, Poland. One group (50 women) was operated on with the Misgav-Ladach method for caesarean section and the other group (60 women) with the Pfannenstiel method. Operating time was significantly different between the two methods, with an average of 20.2 minutes with the Misgav-Ladach method and 47.3 minutes with the Pfannenstiel method (p < 0.001). Time to delivery of the child averaged 1.1 minutes with the Misgav-Ladach method and 3.8 minutes with the Pfannenstiel method (p < 0.001). The amount of blood loss differed significantly, with 336 ml and 483 ml, respectively (p < 0.001). No significant difference was found in Apgar scores. No difference was found in overall postoperative complications, wound infection, febrile illness, febrile morbidity or wound dehiscence with the new technique. Significantly less suture material was used during Misgav-Ladach caesarean section compared to the Pfannenstiel technique (p < 0.001). The Misgav-Ladach method of caesarean section has advantages over the Pfannenstiel technique in being significantly quicker to perform, with reduced amounts of bleeding and suture material. The women were satisfied with the appearance of their scars. In this study no negative effects of the new operation technique were discovered.

  10. Nonlinear mixed effects dose response modeling in high throughput drug screens: application to melanoma cell line analysis.

    PubMed

    Ding, Kuan-Fu; Petricoin, Emanuel F; Finlay, Darren; Yin, Hongwei; Hendricks, William P D; Sereduk, Chris; Kiefer, Jeffrey; Sekulic, Aleksandar; LoRusso, Patricia M; Vuori, Kristiina; Trent, Jeffrey M; Schork, Nicholas J

    2018-01-12

    Cancer cell lines are often used in high throughput drug screens (HTS) to explore the relationship between cell line characteristics and responsiveness to different therapies. Many current analysis methods infer relationships by focusing on one aspect of cell line drug-specific dose-response curves (DRCs), the concentration causing 50% inhibition of a phenotypic endpoint (IC50). Such methods may overlook DRC features and do not simultaneously leverage information about drug response patterns across cell lines, potentially increasing false positive and negative rates in drug response associations. We consider the application of two methods, each rooted in nonlinear mixed effects (NLME) models, that test the relationships between estimated cell line DRCs and factors that might mitigate response. Both methods leverage estimation and testing techniques that consider the simultaneous analysis of different cell lines to draw inferences about any one cell line. One of the methods is designed to provide an omnibus test of the differences between cell line DRCs that is not focused on any one aspect of the DRC (such as the IC50 value). We simulated different settings and compared the different methods on the simulated data. We also compared the proposed methods against traditional IC50-based methods using 40 melanoma cell lines whose transcriptomes, proteomes, and, importantly, BRAF and related mutation profiles were available. Ultimately, we find that the NLME-based methods are more robust, powerful and, for the omnibus test, more flexible, than traditional methods. Their application to the melanoma cell lines reveals insights into factors that may be clinically useful.
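
    For reference, the traditional per-cell-line IC50 analysis that the NLME approach is compared against can be sketched as below; the paper's hierarchical NLME model itself is not reproduced, and the four-parameter logistic form, parameter values and data are assumptions.

```python
# Hedged sketch of a traditional single-curve analysis: fit a four-parameter
# logistic dose-response curve and read off the IC50.  Data are simulated.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: viability falls from `top` to `bottom`."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

rng = np.random.default_rng(1)
conc = np.logspace(-3, 2, 8)                       # drug concentrations (uM)
viability = four_pl(conc, 0.05, 1.0, 0.5, 1.2) + rng.normal(0, 0.02, conc.size)

params, _ = curve_fit(four_pl, conc, viability,
                      p0=[0.05, 1.0, 1.0, 1.0], bounds=(0, np.inf))
print(f"estimated IC50 = {params[2]:.2f} uM (true simulated value 0.50)")
```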

  11. Effectiveness of different tutorial recitation teaching methods and its implications for TA training

    NASA Astrophysics Data System (ADS)

    Koenig, Kathleen M.; Endorf, Robert J.; Braun, Gregory A.

    2007-06-01

    We present results from a comparative study of student understanding for students who attended recitation classes that used different teaching methods. Student volunteers from our introductory calculus-based physics course attended a special recitation class that was taught using one of four different teaching methods. A total of 272 students were divided into approximately equal groups for each method. Students in each class were taught the same topic, “Changes in Energy and Momentum,” from Tutorials in Introductory Physics. The different teaching methods varied in the amount of student and teacher engagement. Student understanding was evaluated through pre- and post-tests. Our results demonstrate the importance of the instructor’s role in teaching recitation classes. The most effective teaching method was for students working in cooperative learning groups with the instructors questioning the groups using Socratic dialogue. In addition, we investigated student preferences for modes of instruction through an open-ended survey. Our results provide guidance and evidence for the teaching methods that should be emphasized in training course instructors.

  12. A comparative study of different aspects of manipulating ratio spectra applied for ternary mixtures: Derivative spectrophotometry versus wavelet transform

    NASA Astrophysics Data System (ADS)

    Salem, Hesham; Lotfy, Hayam M.; Hassan, Nagiba Y.; El-Zeiny, Mohamed B.; Saleh, Sarah S.

    2015-01-01

    This work represents a comparative study of different aspects of manipulating ratio spectra, which are: double divisor ratio spectra derivative (DR-DD), area under curve of derivative ratio (DR-AUC) and its novel approach, namely area under the curve correction method (AUCCM) applied for overlapped spectra; successive derivative of ratio spectra (SDR) and continuous wavelet transform (CWT) methods. The proposed methods represent different aspects of manipulating ratio spectra of the ternary mixture of Ofloxacin (OFX), Prednisolone acetate (PA) and Tetryzoline HCl (TZH) combined in eye drops in the presence of benzalkonium chloride as a preservative. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitation and sensitivity. The obtained results were statistically compared with those obtained from the reported HPLC method, showing no significant difference with respect to accuracy and precision.
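
    Two of the ratio-spectra manipulations surveyed here (the ratio spectrum itself and its first derivative) can be sketched on simulated spectra as below; the band shapes, wavelengths and mixing fractions are placeholders, and the wavelet-transform variant is not shown.

```python
# Hedged sketch: divide a mixture spectrum by the spectrum of one component
# (the divisor), then differentiate the ratio spectrum.  All spectra and
# wavelengths are illustrative assumptions.
import numpy as np

wl = np.linspace(210, 320, 551)
band = lambda centre, width: np.exp(-0.5 * ((wl - centre) / width) ** 2)
comp_a, comp_b = 0.8 * band(240, 12), 0.6 * band(270, 15)
mixture = 0.5 * comp_a + 0.3 * comp_b

# +1e-6 guards against division by near-zero values outside the divisor band.
ratio = mixture / (comp_b + 1e-6)          # ratio spectrum w.r.t. component B
deriv = np.gradient(ratio, wl)             # first derivative of the ratio spectrum

i1, i2, i3 = np.searchsorted(wl, [245.0, 260.0, 250.0])
print(f"ratio amplitude difference (245-260 nm): {ratio[i1] - ratio[i2]:+.3f}")
print(f"first derivative of the ratio at 250 nm: {deriv[i3]:+.4f}")
```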

  13. A comparative study of different aspects of manipulating ratio spectra applied for ternary mixtures: derivative spectrophotometry versus wavelet transform.

    PubMed

    Salem, Hesham; Lotfy, Hayam M; Hassan, Nagiba Y; El-Zeiny, Mohamed B; Saleh, Sarah S

    2015-01-25

    This work represents a comparative study of different aspects of manipulating ratio spectra, which are: double divisor ratio spectra derivative (DR-DD), area under curve of derivative ratio (DR-AUC) and its novel approach, namely area under the curve correction method (AUCCM) applied for overlapped spectra; successive derivative of ratio spectra (SDR) and continuous wavelet transform (CWT) methods. The proposed methods represent different aspects of manipulating ratio spectra of the ternary mixture of Ofloxacin (OFX), Prednisolone acetate (PA) and Tetryzoline HCl (TZH) combined in eye drops in the presence of benzalkonium chloride as a preservative. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitation and sensitivity. The obtained results were statistically compared with those obtained from the reported HPLC method, showing no significant difference with respect to accuracy and precision. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. State Recognition of High Voltage Isolation Switch Based on Background Difference and Iterative Search

    NASA Astrophysics Data System (ADS)

    Xu, Jiayuan; Yu, Chengtao; Bo, Bin; Xue, Yu; Xu, Changfu; Chaminda, P. R. Dushantha; Hu, Chengbo; Peng, Kai

    2018-03-01

    The automatic recognition of the high voltage isolation switch by remote video monitoring is an effective means to ensure the safety of personnel and equipment. The existing methods mainly include two ways: improving monitoring accuracy and adopting target detection technology through equipment transformation. Such methods are often applied to specific scenarios, with limited application scope and high cost. To solve this problem, a high voltage isolation switch state recognition method based on background difference and iterative search is proposed in this paper. The initial position of the switch is detected in real time through the background difference method. When the switch starts to open or close, the target tracking algorithm is used to track the motion trajectory of the switch. The opening and closing state of the switch is determined according to the angle variation between the switch tracking point and the center line. The effectiveness of the method is verified by experiments on video frames of different switching states. Compared with the traditional methods, this method is more robust and effective.
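
    The two core steps (background difference to segment the moving arm, then an angle check against a reference line) can be sketched on synthetic images as below; the thresholds, geometry and the simple line fit are assumptions, and the paper's tracking and iterative-search stages are not shown.

```python
# Hedged sketch: background difference plus an angle test on synthetic images.
import cv2
import numpy as np

h, w = 240, 320
background = np.zeros((h, w), np.uint8)                 # empty scene
frame = background.copy()
cv2.line(frame, (160, 200), (230, 80), 255, 5)          # synthetic "switch arm"

diff = cv2.absdiff(frame, background)                   # background difference
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

pts = cv2.findNonZero(mask).reshape(-1, 2).astype(np.float32)
vx, vy, _, _ = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()

# Angle between the fitted arm direction and a vertical reference line.
angle_deg = np.degrees(np.arccos(abs(vy)))              # 0 deg = fully vertical
state = "closed" if angle_deg < 15 else "open"          # illustrative threshold
print(f"arm angle vs. vertical: {angle_deg:.1f} deg -> {state}")
```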

  15. A time-spectral approach to numerical weather prediction

    NASA Astrophysics Data System (ADS)

    Scheffel, Jan; Lindvall, Kristoffer; Yik, Hiu Fai

    2018-05-01

    Finite difference methods are traditionally used for modelling the time domain in numerical weather prediction (NWP). Time-spectral solution is an attractive alternative for reasons of accuracy and efficiency and because time step limitations associated with causal CFL-like criteria, typical for explicit finite difference methods, are avoided. In this work, the Lorenz 1984 chaotic equations are solved using the time-spectral algorithm GWRM (Generalized Weighted Residual Method). Comparisons of accuracy and efficiency are carried out for both explicit and implicit time-stepping algorithms. It is found that the efficiency of the GWRM compares well with these methods, in particular at high accuracy. For perturbative scenarios, the GWRM was found to be as much as four times faster than the finite difference methods. A primary reason is that the GWRM time intervals typically are two orders of magnitude larger than those of the finite difference methods. The GWRM has the additional advantage to produce analytical solutions in the form of Chebyshev series expansions. The results are encouraging for pursuing further studies, including spatial dependence, of the relevance of time-spectral methods for NWP modelling.
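
    For orientation, the explicit finite-difference baseline that the time-spectral GWRM is compared against can be sketched as below; the GWRM itself (a Chebyshev-based expansion in time) is not reproduced, and the Lorenz (1984) parameter values used here are the commonly quoted ones and are an assumption.

```python
# Hedged sketch: forward-Euler (explicit, step-size-limited) integration of
# the Lorenz 1984 system, i.e. the kind of finite-difference time stepping
# the GWRM is benchmarked against.  Parameters are assumed standard values.
import numpy as np

def lorenz84_rhs(state, a=0.25, b=4.0, F=8.0, G=1.0):
    x, y, z = state
    return np.array([-y * y - z * z - a * x + a * F,
                     x * y - b * x * z - y + G,
                     b * x * y + x * z - z])

def integrate(state0, dt=0.001, n_steps=20000):
    state = np.array(state0, dtype=float)
    for _ in range(n_steps):                 # explicit time stepping
        state = state + dt * lorenz84_rhs(state)
    return state

print(integrate([1.0, 1.0, 0.0]))            # state after 20 time units
```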

  16. Reviewing RAWP. Variations in admission rates: implications for equitable allocation of resources.

    PubMed Central

    Bevan, G; Ingram, R

    1987-01-01

    The review of the Resource Allocation Working Party (RAWP) formula by the National Health Service Management Board has considered the method used to account for cross boundary flows between health authorities. There is no consensus on how this should be done subregionally, as it raises the unresolved problem of the best method of estimating the size of catchment populations. Different methods produce different population sizes when the admission rates of individuals living in different districts vary. The National Health Service/Department of Health and Social Security acute services working group on performance indicators recently considered the assumptions made by different methods in terms of admission thresholds set by hospital clinicians. More complicated methods of assessing catchment areas seem to offer little advantage over the simplest method, but none of the methods answer the underlying questions of what truly determines admission rates and whether higher admission rates are better than lower ones. Empirical research into variations in admission rates and their relation to outcomes is important for determining the fair allocation of resources in future. PMID:3120865

  17. Interpretation of biological and mechanical variations between the Lowry versus Bradford method for protein quantification.

    PubMed

    Lu, Tzong-Shi; Yiao, Szu-Yu; Lim, Kenneth; Jensen, Roderick V; Hsiao, Li-Li

    2010-07-01

    The identification of differences in protein expression resulting from methodical variations is an essential component to the interpretation of true, biologically significant results. We used the Lowry and Bradford methods, the two most commonly used methods for protein quantification, to assess whether differential protein expressions are a result of true biological or methodical variations. MATERIAL & METHODS: Differential protein expression patterns were assessed by western blot following protein quantification by the Lowry and Bradford methods. We have observed significant variations in protein concentrations following assessment with the Lowry versus Bradford methods, using identical samples. Greater variations in protein concentration readings were observed over time and in samples with higher concentrations, with the Bradford method. Identical samples quantified using both methods yielded significantly different expression patterns on Western blot. We show for the first time that methodical variations observed in these protein assay techniques can potentially translate into differential protein expression patterns that can be falsely taken to be biologically significant. Our study therefore highlights the pivotal need to carefully consider methodical approaches to protein quantification in techniques that report quantitative differences.
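
    Both assays ultimately quantify protein by reading unknowns off a standard curve; a minimal sketch of that shared calculation is given below, with illustrative standards and absorbances that are assumptions rather than data from the study.

```python
# Hedged sketch of the shared quantification step: fit a linear standard
# curve (absorbance vs. known BSA concentration) and interpolate unknowns.
import numpy as np

bsa_ug_ml = np.array([0, 125, 250, 500, 1000, 1500])         # standards
absorbance = np.array([0.02, 0.11, 0.21, 0.40, 0.78, 1.15])  # e.g. A595 or A750

slope, intercept = np.polyfit(bsa_ug_ml, absorbance, 1)

def concentration(a_sample):
    """Invert the calibration line to get ug/mL of protein."""
    return (a_sample - intercept) / slope

for a in (0.35, 0.62):
    print(f"A = {a:.2f} -> {concentration(a):.0f} ug/mL protein")
```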

  18. Estimation of CO2 emissions from waste incinerators: Comparison of three methods.

    PubMed

    Lee, Hyeyoung; Yi, Seung-Muk; Holsen, Thomas M; Seo, Yong-Seok; Choi, Eunhwa

    2018-03-01

    Climate-relevant CO2 emissions from waste incineration were compared using three methods: making use of CO2 concentration data, converting O2 concentration and waste characteristic data, and using a mass balance method following Intergovernmental Panel on Climate Change (IPCC) guidelines. For the first two methods, CO2 and O2 concentrations were measured continuously from 24 to 86 days. The O2 conversion method in comparison to the direct CO2 measurement method had a 4.8% mean difference in daily CO2 emissions for four incinerators where analyzed waste composition data were available. However, the IPCC method had a higher difference of 13% relative to the direct CO2 measurement method. For three incinerators using designed values for waste composition, the O2 conversion and IPCC methods in comparison to the direct CO2 measurement method had mean differences of 7.5% and 89%, respectively. Therefore, the use of O2 concentration data measured for monitoring air pollutant emissions is an effective method for estimating CO2 emissions resulting from waste incineration. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. The choice of statistical methods for comparisons of dosimetric data in radiotherapy.

    PubMed

    Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques

    2014-09-18

    Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the safety and the clinical outcome of treatments. These changes could raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and tests whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method calculates the dose without tissue density correction using the Pencil Beam Convolution (PBC) algorithm, whereas the newer methods calculate the dose with tissue density correction in 1D and 3D using the Modified Batho (MB) method and the Equivalent Tissue-Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively, then non-parametric statistical tests were performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's rank and Kendall's rank tests. Friedman's test showed a significant effect of the calculation method on the delivered dose for lung cancer patients (p < 0.001). The density correction methods yielded lower doses compared to PBC, on average by (-5 ± 4.4 SD) for MB and (-4.7 ± 5 SD) for ETAR. Post-hoc Wilcoxon signed-rank tests of paired comparisons indicated that the delivered dose was significantly reduced using density-corrected methods as compared to the reference method. Spearman's and Kendall's rank tests indicated a positive correlation between the doses calculated with the different methods. This paper illustrates and justifies the use of statistical tests and graphical representations for dosimetric comparisons in radiotherapy. The statistical analysis shows the significance of dose differences resulting from two or more techniques in radiotherapy.
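
    The statistical workflow described above can be sketched with standard library routines as below; the simulated per-field doses (labelled PBC, MB, ETAR for readability) are placeholders, not the study's data.

```python
# Hedged sketch of the test sequence: normality and variance checks, Friedman
# omnibus test, paired Wilcoxon post-hoc test and rank correlations, applied
# to simulated monitor-unit values for 62 fields.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pbc = rng.normal(200, 20, 62)             # reference method (simulated)
mb = pbc - rng.normal(5.0, 4.4, 62)       # density-corrected, 1D (simulated)
etar = pbc - rng.normal(4.7, 5.0, 62)     # density-corrected, 3D (simulated)

print("Shapiro-Wilk (PBC) p =", stats.shapiro(pbc).pvalue)
print("Levene p            =", stats.levene(pbc, mb, etar).pvalue)
print("Friedman p          =", stats.friedmanchisquare(pbc, mb, etar).pvalue)
print("Wilcoxon PBC vs MB p =", stats.wilcoxon(pbc, mb).pvalue)
print("Spearman rho PBC/MB  =", stats.spearmanr(pbc, mb)[0])
print("Kendall tau PBC/MB   =", stats.kendalltau(pbc, mb)[0])
```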

  20. Accessibility to primary health care in Belgium: an evaluation of policies awarding financial assistance in shortage areas.

    PubMed

    Dewulf, Bart; Neutens, Tijs; De Weerdt, Yves; Van de Weghe, Nico

    2013-08-22

    In many countries, financial assistance is awarded to physicians who settle in an area that is designated as a shortage area to prevent unequal accessibility to primary health care. Today, however, policy makers use fairly simple methods to define health care accessibility, with physician-to-population ratios (PPRs) within predefined administrative boundaries being overwhelmingly favoured. Our purpose is to verify whether these simple methods are accurate enough for adequately designating medical shortage areas and explore how these perform relative to more advanced GIS-based methods. Using a geographical information system (GIS), we conduct a nation-wide study of accessibility to primary care physicians in Belgium using four different methods: PPR, distance to closest physician, cumulative opportunity, and floating catchment area (FCA) methods. The official method used by policy makers in Belgium (calculating PPR per physician zone) offers only a crude representation of health care accessibility, especially because large contiguous areas (physician zones) are considered. We found substantial differences in the number and spatial distribution of medical shortage areas when applying different methods. The assessment of spatial health care accessibility and concomitant policy initiatives are affected by and dependent on the methodology used. The major disadvantage of PPR methods is its aggregated approach, masking subtle local variations. Some simple GIS methods overcome this issue, but have limitations in terms of conceptualisation of physician interaction and distance decay. Conceptually, the enhanced 2-step floating catchment area (E2SFCA) method, an advanced FCA method, was found to be most appropriate for supporting areal health care policies, since this method is able to calculate accessibility at a small scale (e.g., census tracts), takes interaction between physicians into account, and considers distance decay. While at present in health care research methodological differences and modifiable areal unit problems have remained largely overlooked, this manuscript shows that these aspects have a significant influence on the insights obtained. Hence, it is important for policy makers to ascertain to what extent their policy evaluations hold under different scales of analysis and when different methods are used.
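
    The floating catchment idea favoured above can be sketched in its basic two-step form as below; the enhanced distance-decay weights of E2SFCA are omitted, and the coordinates, population counts and catchment size are illustrative assumptions.

```python
# Hedged sketch of a basic two-step floating catchment area (2SFCA) calculation:
# step 1 computes a physician-to-population ratio within each physician's
# catchment; step 2 sums those ratios over the physicians reachable from each
# population location.  All inputs are made-up examples.
import numpy as np

pop_xy = np.array([[0.0, 0.0], [5.0, 0.0], [10.0, 0.0]])   # population centroids (km)
pop_size = np.array([8000, 12000, 5000])
doc_xy = np.array([[1.0, 0.0], [9.0, 0.0]])                 # physician locations (km)
catchment_km = 6.0

dist = np.linalg.norm(pop_xy[:, None, :] - doc_xy[None, :, :], axis=2)
within = dist <= catchment_km                                # reachable pairs

# Step 1: ratio R_j = 1 physician / population inside each physician's catchment.
ratio = 1.0 / (within * pop_size[:, None]).sum(axis=0)

# Step 2: accessibility A_i = sum of R_j over physicians reachable from i.
access = (within * ratio[None, :]).sum(axis=1)
print("accessibility (physicians per person):", np.round(access, 6))
```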

  1. Comparison of the applicability of Demirjian and Willems methods for dental age estimation in children from the Thrace region, Turkey.

    PubMed

    Ozveren, N; Serindere, G

    2018-04-01

    Dental age (DA) estimation is frequently used in the fields of orthodontics, paediatric dentistry and forensic science. DA estimation methods use radiology, and are reliable and non-destructive according to the literature. The Demirjian method is currently the most frequently used method, but recently, the Willems method was reported to have given results that were more accurate for some regions. The aim of this study was to detect and compare the accuracy of DA estimation methods for children and adolescents from the Thrace region, Turkey. The mean difference between the chronological age (CA) and the DA was selected as the primary outcome measure, and the difference range according to sex and age group was selected as the secondary outcome. Panoramic radiographs (n=766) from a Thrace region population (380 males and 386 females) ranging in age from 6 to 14.99 years old were evaluated. DA was calculated using both the Demirjian and the Willems methods. The mean CA of the subjects was 11.39±2.34 years (males=11.08±2.42 years and females=11.70±2.23 years). The mean difference values between the CA and the DA (CA-DA) using the Demirjian method and the Willems method were -0.87 and -0.17 for females, respectively, and -1.04 and -0.40 for males, respectively. For the different age groups, the differences between the CA and the DA calculated using the Demirjian method (CA-DA) ranged from -0.53 to -1.46 years for males and from -0.19 to -1.20 years for females, while the mean differences between the CA and the DA calculated by the Willems method (CA-DA) ranged from -0.19 to -0.50 years for males and from 0.20 to -0.49 years for females. The results suggest that the Willems method produced more accurate results for almost all age groups of both sexes, and it is better suited for children from the Thrace region of Turkey, than the Demirjian method. Copyright © 2018 Elsevier B.V. All rights reserved.
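
    The accuracy measure used above (the mean signed CA-DA difference per method) is a simple calculation, sketched below on simulated ages; the offsets assigned to each method are placeholders, not the study's results.

```python
# Hedged sketch of the mean CA-DA comparison: a negative mean difference
# means the method tends to overestimate age.  Ages are simulated.
import numpy as np

rng = np.random.default_rng(3)
ca = rng.uniform(6, 15, 200)                        # chronological ages (years)
da_method_1 = ca + rng.normal(0.9, 0.8, ca.size)    # overestimating method (simulated)
da_method_2 = ca + rng.normal(0.2, 0.8, ca.size)    # closer-to-CA method (simulated)

for name, da in [("method 1", da_method_1), ("method 2", da_method_2)]:
    diff = ca - da
    print(f"{name}: mean CA-DA = {diff.mean():+.2f} y (SD {diff.std(ddof=1):.2f})")
```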

  2. [Analysis of dynamic changes of flavonoids and alkaloids during different drying process of Morus alba leaves].

    PubMed

    Bai, Yong-liang; Duan, Jin-ao; Su, Shu-lan; Qian, Ye-fei; Qian, Da-wei; Ouyang, Zhen

    2014-07-01

    To determine the dynamic changes of flavonoids and alkaloids in Morus alba leaves by analyzing the influence of different drying methods and drying degrees, in order to provide evidence for the quality evaluation of Morus alba leaves. Different drying methods, programmed temperature methods and constant temperature methods were adopted to dry Morus alba leaf samples. Contents of flavonoids and alkaloids were analyzed by HPLC-PDA and LC-TQ/MS, respectively. The flavonoid content was strongly influenced by the drying method. The ranking of methods for preserving flavonoids was freezing-dried > shade-dried > dried > sun-dried > microwave-dried > infrared-dried; for alkaloids it was freezing-dried > shade-dried > dried > sun-dried > infrared-dried > microwave-dried. The 55-65 degrees C group was the lowest in both flavonoids and DNJ, while the 85-95 degrees C group was the best for DNJ. For fagomine, the 45 degrees C group showed the lowest concentration while the 95-105 degrees C group showed the highest. Samples with different moisture contents differed in flavonoid and alkaloid contents: samples with 10% moisture contained the highest flavonoid levels, while those with 30%-50% moisture contained the lowest. Contents of DNJ and fagomine rose as moisture decreased. In addition, the 55-65 degrees C group was better than the 95-105 degrees C group in alkaloid content. The results provide optimal drying methods and conditions for Morus alba leaves, and a foundation for uncovering the biochemical transformations of Morus alba leaves.

  3. A collaborative design method to support integrated care. An ICT development method containing continuous user validation improves the entire care process and the individual work situation

    PubMed Central

    Scandurra, Isabella; Hägglund, Maria

    2009-01-01

    Introduction Integrated care involves different professionals, belonging to different care provider organizations and requires immediate and ubiquitous access to patient-oriented information, supporting an integrated view on the care process [1]. Purpose To present a method for development of usable and work process-oriented information and communication technology (ICT) systems for integrated care. Theory and method Based on Human-computer Interaction Science and in particular Participatory Design [2], we present a new collaborative design method in the context of health information systems (HIS) development [3]. This method implies a thorough analysis of the entire interdisciplinary cooperative work and a transformation of the results into technical specifications, via user validated scenarios, prototypes and use cases, ultimately leading to the development of appropriate ICT for the variety of occurring work situations for different user groups, or professions, in integrated care. Results and conclusions Application of the method in homecare of the elderly resulted in an HIS that was well adapted to the intended user groups. Conducted in multi-disciplinary seminars, the method captured and validated user needs and system requirements for different professionals, work situations, and environments not only for current work; it also aimed to improve collaboration in future (ICT supported) work processes. A holistic view of the entire care process was obtained and supported through different views of the HIS for different user groups, resulting in improved work in the entire care process as well as for each collaborating profession [4].

  4. Analytical approaches to the determination of phosphorus partitioning patterns in sediments.

    PubMed

    Pardo, P; Rauret, G; López-Sánchez, J F

    2003-04-01

    Three methods for phosphorus fractionation in sediments based on chemical extractions have been applied to fourteen aquatic sediment samples of different origin and characteristics. Two of the methods used different approaches to obtain the inorganic fractions. The Hieltjes and Lijklema procedure (HL) uses strong acids or bases, whereas the Golterman procedure (G) uses chelating reagents. The third one, the Standards, Measurements and Testing (SMT) protocol, was proposed in the frame of the SMT Programme (European Commission) which aimed to provide harmonisation and the validation of such methodologies. This harmonised procedure was also used for the certification of the extractable phosphorus contents in a sediment certified reference material (CRM BCR 684). Principal component analysis (PCA) was used to group sediments according to their composition and the three extraction methods were applied to the samples including CRM BCR 684. The data obtained show that there is some correlation between the results from the three methods when considering the organic and the residual fractions together. The SMT and the HL methods are the most comparable, whereas the G method, using a different type of reagent, yields different distribution patterns depending on sample composition. In relation to the inorganic phosphorus, the three methods give similar information, although the distribution between non-apatite and apatite fractions can be different.

  5. Reprogramming Methods Do Not Affect Gene Expression Profile of Human Induced Pluripotent Stem Cells.

    PubMed

    Trevisan, Marta; Desole, Giovanna; Costanzi, Giulia; Lavezzo, Enrico; Palù, Giorgio; Barzon, Luisa

    2017-01-20

    Induced pluripotent stem cells (iPSCs) are pluripotent cells derived from adult somatic cells. After the pioneering work by Yamanaka, who first generated iPSCs by retroviral transduction of four reprogramming factors, several alternative methods to obtain iPSCs have been developed in order to increase the yield and safety of the process. However, the question remains open on whether the different reprogramming methods can influence the pluripotency features of the derived lines. In this study, three different strategies, based on retroviral vectors, episomal vectors, and Sendai virus vectors, were applied to derive iPSCs from human fibroblasts. The reprogramming efficiency of the methods based on episomal and Sendai virus vectors was higher than that of the retroviral vector-based approach. All human iPSC clones derived with the different methods showed the typical features of pluripotent stem cells, including the expression of alkaline phosphatase and stemness marker genes, and could give rise to the three germ layer derivatives upon embryoid bodies assay. Microarray analysis confirmed the presence of typical stem cell gene expression profiles in all iPSC clones and did not identify any significant difference among reprogramming methods. In conclusion, the use of different reprogramming methods is equivalent and does not affect the gene expression profile of the derived human iPSCs.

  6. Forest regulation methods and silvicultural systems: what are they?

    Treesearch

    Ivan L. Sander; Burnell C. Fischer

    1989-01-01

    "Forest regulation methods" and "silvicultural systems" are important forest resource management concepts but there is much confusion about them. They often mean different things to different individuals. Confusion exists in part because "forest regulation methods" and "silvicultural systems" often use the same terminology. Also...

  7. Comparison of two on-orbit attitude sensor alignment methods

    NASA Technical Reports Server (NTRS)

    Krack, Kenneth; Lambertson, Michael; Markley, F. Landis

    1990-01-01

    Compared here are two methods of on-orbit alignment of vector attitude sensors. The first method uses the angular difference between simultaneous measurements from two or more sensors. These angles are compared to the angular differences between the respective reference positions of the sensed objects. The alignments of the sensors are adjusted to minimize the difference between the two sets of angles. In the second method, the sensor alignment is part of a state vector that includes the attitude. The alignments are adjusted along with the attitude to minimize all observation residuals. It is shown that the latter method can result in much less alignment uncertainty when gyroscopes are used for attitude propagation during the alignment estimation. The additional information for this increased accuracy comes from knowledge of relative attitude obtained from the spacecraft gyroscopes. The theoretical calculations of this difference in accuracy are presented. Also presented are numerical estimates of the alignment uncertainties of the fixed-head star trackers on the Extreme Ultraviolet Explorer spacecraft using both methods.

  8. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    PubMed

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
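
    As an illustration of the kind of calculation discussed above, the following is a minimal Python sketch of a Hsieh-style approximation for simple logistic regression with one standardized normal covariate; the function name, the use of scipy, and the exact formula are assumptions for illustration and this is not the authors' modified method.

```python
import math
from scipy.stats import norm

def lr_sample_size_hsieh(p1, beta_star, alpha=0.05, power=0.80):
    """Approximate sample size for simple logistic regression with one
    standard-normal covariate (Hsieh-style formula).

    p1        : event rate when the covariate equals its population mean
    beta_star : effect size, i.e. log odds ratio per 1 SD of the covariate
    """
    z_a = norm.ppf(1 - alpha / 2)   # two-sided significance level
    z_b = norm.ppf(power)           # power = 1 - beta
    n = (z_a + z_b) ** 2 / (p1 * (1 - p1) * beta_star ** 2)
    return math.ceil(n)

# e.g. event rate 0.1 at the covariate mean, odds ratio 1.5 per SD of X
n = lr_sample_size_hsieh(p1=0.10, beta_star=math.log(1.5))
```

    Note that, as the abstract points out, this style of formula takes the event rate at the covariate mean (p1) rather than the overall population prevalence.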

  9. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1997-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
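
    To make the double-difference idea concrete, here is a minimal sketch (assuming integer-valued data sets; the function names are illustrative, not from the patent) of a cross-delta followed by an adjacent-delta, together with the inverse post-decoding step.

```python
import numpy as np

def double_difference(a, b):
    """Double-difference set from two M-member data sets: a cross-delta
    between the sets followed by an adjacent-delta along the result."""
    cross = b.astype(np.int64) - a.astype(np.int64)   # cross-delta
    return np.diff(cross, prepend=0)                   # adjacent-delta

def recover_second_set(a, dd):
    """Inverse post-decoding: rebuild the second data set from the first
    set and the double-difference set."""
    cross = np.cumsum(dd)                              # undo adjacent-delta
    return a.astype(np.int64) + cross                  # undo cross-delta

a = np.array([10, 12, 15, 19], dtype=np.int64)   # e.g. spectral band 1
b = np.array([11, 14, 18, 23], dtype=np.int64)   # e.g. adjacent band 2
dd = double_difference(a, b)                      # typically low-entropy
assert np.array_equal(recover_second_set(a, dd), b)
```

    The resulting double-difference set would then be handed to an entropy coder (or a lossy scheme), which is where the compression gain described in the abstract comes from.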

  10. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.

  11. Evaluation of different distortion correction methods and interpolation techniques for an automated classification of celiac disease

    PubMed Central

    Gadermayr, M.; Liedlgruber, M.; Uhl, A.; Vécsei, A.

    2013-01-01

    Due to the optics used in endoscopes, a typical degradation observed in endoscopic images is barrel-type distortion. In this work we investigate the impact of methods used to correct such distortions on the classification accuracy in the context of automated celiac disease classification. For this purpose we compare various distortion correction methods and apply them to endoscopic images, which are subsequently classified. Since the interpolation used in such methods is also assumed to influence the resulting classification accuracies, we also investigate different interpolation methods and their impact on the classification performance. In order to make solid statements about the benefit of distortion correction, we use several different feature extraction methods to obtain features for the classification. Our experiments show that it is not possible to make a clear statement about the usefulness of distortion correction methods in the context of an automated diagnosis of celiac disease. This is mainly because any potential benefit of distortion correction depends strongly on the feature extraction method used for the classification. PMID:23981585

  12. A Comparison of Methods to Analyze Aquatic Heterotrophic Flagellates of Different Taxonomic Groups.

    PubMed

    Jeuck, Alexandra; Nitsche, Frank; Wylezich, Claudia; Wirth, Olaf; Bergfeld, Tanja; Brutscher, Fabienne; Hennemann, Melanie; Monir, Shahla; Scherwaß, Anja; Troll, Nicole; Arndt, Hartmut

    2017-08-01

    Heterotrophic flagellates contribute significantly to the matter flux in aquatic and terrestrial ecosystems. Still today, their quantification and taxonomic classification pose several problems in field studies, though these methodological problems seem to be increasingly ignored in current ecological studies. Here we describe and test different methods: the live-counting technique, different fixation techniques, cultivation methods like the liquid aliquot method (LAM), and a molecular single-cell survey called aliquot PCR (aPCR). All these methods have been tested either using aquatic field samples or cultures of freshwater and marine taxa. Each of the described methods has its advantages and disadvantages, which have to be considered in every single case. With the live-counting technique, detection of living cells up to the morphospecies level is possible. Fixation of cells and staining methods are advantageous because they allow long-term storage and observation of samples. Cultivation methods (LAM) offer the possibility of subsequent molecular analyses, and aPCR tools may compensate for the inability of LAM to detect non-cultivable flagellates. In summary, we propose a combination of several investigation techniques to mitigate the different methodological problems. Copyright © 2017 Elsevier GmbH. All rights reserved.

  13. Evaluation of different methods for determining growing degree-day thresholds in apricot cultivars

    NASA Astrophysics Data System (ADS)

    Ruml, Mirjana; Vuković, Ana; Milatović, Dragan

    2010-07-01

    The aim of this study was to examine different methods for determining growing degree-day (GDD) threshold temperatures for two phenological stages (full bloom and harvest) and to select the optimal thresholds for a greater number of apricot (Prunus armeniaca L.) cultivars grown in the Belgrade region. A 10-year data series was used to conduct the study. Several commonly used methods to determine the threshold temperatures from field observations were evaluated: (1) the least standard deviation in GDD; (2) the least standard deviation in days; (3) the least coefficient of variation in GDD; (4) the regression coefficient; (5) the least standard deviation in days with a mean temperature above the threshold; (6) the least coefficient of variation in days with a mean temperature above the threshold; and (7) the smallest root mean square error between the observed and predicted number of days. In addition, two methods for calculating daily GDD and two methods for calculating daily mean air temperature were tested to emphasize the differences that can arise from different interpretations of the basic GDD equation. The best agreement with observations was attained by method (7). The lower threshold temperature obtained by this method differed among cultivars from -5.6 to -1.7°C for full bloom, and from -0.5 to 6.6°C for harvest. However, the “Null” method (lower threshold set to 0°C) and the “Fixed Value” method (lower threshold set to -2°C for full bloom and to 3°C for harvest) also gave very good results. The limitations of the widely used method (1), and of methods (5) and (6), which generally performed worst, are discussed in the paper.
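
    As a sketch of how a threshold search in the spirit of method (7) could be implemented, the following assumes daily mean temperatures and observed phenological dates per year, a simple GDD formula (daily mean minus base temperature, floored at zero), and illustrative function names; it is not the authors' exact procedure.

```python
import numpy as np

def gdd_to_stage(tmean, doy_stage, base):
    """Accumulated GDD from day 1 to the observed phenological date,
    using daily mean temperature and max(Tmean - base, 0)."""
    daily = np.maximum(tmean[:doy_stage] - base, 0.0)
    return daily.sum()

def best_threshold(tmean_by_year, doy_by_year, candidates):
    """Pick the base temperature with the smallest RMSE (in days) between
    observed and predicted dates, in the spirit of 'method (7)'."""
    best = None
    for base in candidates:
        # mean GDD requirement for this candidate threshold
        req = np.mean([gdd_to_stage(t, d, base)
                       for t, d in zip(tmean_by_year, doy_by_year)])
        errors = []
        for t, d in zip(tmean_by_year, doy_by_year):
            cum = np.cumsum(np.maximum(t - base, 0.0))
            pred = int(np.searchsorted(cum, req)) + 1   # first day reaching the requirement
            errors.append(pred - d)
        rmse = np.sqrt(np.mean(np.square(errors)))
        if best is None or rmse < best[1]:
            best = (base, rmse)
    return best   # (threshold, RMSE in days)
```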

  14. Comparability of river suspended-sediment sampling and laboratory analysis methods

    USGS Publications Warehouse

    Groten, Joel T.; Johnson, Gregory D.

    2018-03-06

    Accurate measurements of suspended sediment, a leading water-quality impairment in many Minnesota rivers, are important for managing and protecting water resources; however, water-quality standards for suspended sediment in Minnesota are based on grab field sampling and total suspended solids (TSS) laboratory analysis methods that have underrepresented concentrations of suspended sediment in rivers compared to U.S. Geological Survey equal-width-increment or equal-discharge-increment (EWDI) field sampling and suspended sediment concentration (SSC) laboratory analysis methods. Because of this underrepresentation, the U.S. Geological Survey, in collaboration with the Minnesota Pollution Control Agency, collected concurrent grab and EWDI samples at eight sites to compare results obtained using different combinations of field sampling and laboratory analysis methods.Study results determined that grab field sampling and TSS laboratory analysis results were biased substantially low compared to EWDI sampling and SSC laboratory analysis results, respectively. Differences in both field sampling and laboratory analysis methods caused grab and TSS methods to be biased substantially low. The difference in laboratory analysis methods was slightly greater than field sampling methods.Sand-sized particles had a strong effect on the comparability of the field sampling and laboratory analysis methods. These results indicated that grab field sampling and TSS laboratory analysis methods fail to capture most of the sand being transported by the stream. The results indicate there is less of a difference among samples collected with grab field sampling and analyzed for TSS and concentration of fines in SSC. Even though differences are present, the presence of strong correlations between SSC and TSS concentrations provides the opportunity to develop site specific relations to address transport processes not captured by grab field sampling and TSS laboratory analysis methods.

  15. Two spectrophotometric methods for simultaneous determination of some antihyperlipidemic drugs

    PubMed Central

    Abdelwahab, Nada S.; El-Zeiny, Badr A.; Tohamy, Salwa I.

    2012-01-01

    Two simple, accurate, precise and economic spectrophotometric methods have been developed for simultaneous determination of Atorvastatin calcium (ATR) and Ezetimibe (EZ) in their bulk powder and pharmaceutical dosage form. Method (I) is based on dual wavelength analysis while method (II) is the mean centering of ratio spectra spectrophotometric (MCR) method. In method (I), two wavelengths were selected for each drug in such a way that the difference in absorbance was zero for the second drug. At wavelengths 226.6 and 244 nm EZ had equal absorbance values; therefore, these two wavelengths have been used to determine ATR; on a similar basis 228.6 and 262.8 nm were selected to determine EZ in their binary mixtures. In method II, the absorption spectra of both ATR and EZ with different concentrations were recorded over the range 200–350 nm, divided by the spectrum of a suitable divisor of both ATR and EZ and then the obtained ratio spectra were mean centered. The concentrations of active components were then determined from the calibration graphs obtained by measuring the amplitudes at 215–260 nm (peak to peak) for both ATR and EZ. Accuracy and precision of the developed methods have been tested; in addition recovery studies have been carried out in order to confirm their accuracy. On the other hand, selectivities of the methods were tested by application for determination of different synthetic mixtures containing different ratios of the studied drugs. The developed methods have been successfully used for determination of ATR and EZ in their combined dosage form and statistical comparison of the developed methods with the reported spectrophotometric one using F and Student's t-tests showed no significant difference regarding both accuracy and precision. PMID:29403754

  16. Two spectrophotometric methods for simultaneous determination of some antihyperlipidemic drugs.

    PubMed

    Abdelwahab, Nada S; El-Zeiny, Badr A; Tohamy, Salwa I

    2012-08-01

    Two simple, accurate, precise and economic spectrophotometric methods have been developed for simultaneous determination of Atorvastatin calcium (ATR) and Ezetimibe (EZ) in their bulk powder and pharmaceutical dosage form. Method (I) is based on dual wavelength analysis while method (II) is the mean centering of ratio spectra spectrophotometric (MCR) method. In method (I), two wavelengths were selected for each drug in such a way that the difference in absorbance was zero for the second drug. At wavelengths 226.6 and 244 nm EZ had equal absorbance values; therefore, these two wavelengths have been used to determine ATR; on a similar basis 228.6 and 262.8 nm were selected to determine EZ in their binary mixtures. In method II, the absorption spectra of both ATR and EZ with different concentrations were recorded over the range 200-350 nm, divided by the spectrum of a suitable divisor of both ATR and EZ and then the obtained ratio spectra were mean centered. The concentrations of active components were then determined from the calibration graphs obtained by measuring the amplitudes at 215-260 nm (peak to peak) for both ATR and EZ. Accuracy and precision of the developed methods have been tested; in addition recovery studies have been carried out in order to confirm their accuracy. On the other hand, selectivities of the methods were tested by application for determination of different synthetic mixtures containing different ratios of the studied drugs. The developed methods have been successfully used for determination of ATR and EZ in their combined dosage form and statistical comparison of the developed methods with the reported spectrophotometric one using F and Student's t-tests showed no significant difference regarding both accuracy and precision.
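
    A minimal sketch of the dual wavelength idea described above: at two wavelengths where the interfering component absorbs equally, the absorbance difference depends only on the analyte, so a straight-line calibration of ΔA versus concentration can be used. The function names and the least-squares calibration are assumptions for illustration.

```python
import numpy as np

def dual_wavelength_calibration(conc, abs_l1, abs_l2):
    """Fit dA = slope*conc + intercept, where dA = A(lambda1) - A(lambda2)
    and the two wavelengths are chosen so the interferent absorbs equally."""
    dA = np.asarray(abs_l1, float) - np.asarray(abs_l2, float)
    slope, intercept = np.polyfit(np.asarray(conc, float), dA, 1)
    return slope, intercept

def predict_concentration(abs_l1, abs_l2, slope, intercept):
    """Invert the calibration line for an unknown sample."""
    dA = np.asarray(abs_l1, float) - np.asarray(abs_l2, float)
    return (dA - intercept) / slope
```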

  17. Novel two wavelength spectrophotometric methods for simultaneous determination of binary mixtures with severely overlapping spectra

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam M.; Saleh, Sarah S.; Hassan, Nagiba Y.; Salem, Hesham

    2015-02-01

    This work presents the application of different spectrophotometric techniques based on two wavelengths for the determination of severely overlapped spectral components in a binary mixture without prior separation. Four novel spectrophotometric methods were developed, namely: the induced dual wavelength method (IDW), the dual wavelength resolution technique (DWRT), the advanced amplitude modulation method (AAM) and the induced amplitude modulation method (IAM). The results of the novel methods were compared to those of three well-established methods: the dual wavelength method (DW), Vierordt's method (VD) and the bivariate method (BV). The developed methods were applied for the analysis of the binary mixture of hydrocortisone acetate (HCA) and fusidic acid (FSA) formulated as a topical cream, accompanied by the determination of the methyl paraben and propyl paraben present as preservatives. The specificity of the novel methods was investigated by analyzing laboratory-prepared mixtures and the combined dosage form. The methods were validated as per ICH guidelines, where accuracy, repeatability, inter-day precision and robustness were found to be within the acceptable limits. The results obtained from the proposed methods were statistically compared with the official ones, and no significant difference was observed. No difference was observed between the obtained results when compared to the reported HPLC method, which proved that the developed methods could be an alternative to HPLC techniques in quality control laboratories.

  18. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    NASA Astrophysics Data System (ADS)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

    3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from the FDM is solved iteratively using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods for different grid sizes and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using the parallel Jacobi (PJ) method is examined in relation to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take prohibitively long to converge. The PJ method, however, reduces the computational time to some extent for large grid sizes.
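
    For reference, a minimal serial NumPy sketch of the Jacobi iteration for the 3D Poisson equation with homogeneous Dirichlet boundaries is given below; the grid, tolerance and names are assumptions, and this is not the parallel MATLAB implementation used in the study.

```python
import numpy as np

def jacobi_poisson_3d(f, h, tol=1e-6, max_iter=10_000):
    """Solve -laplacian(u) = f on a cubic grid with u = 0 on the boundary,
    using Jacobi iteration. f is an (n, n, n) array, h the grid spacing."""
    u = np.zeros_like(f, dtype=float)
    for it in range(max_iter):
        u_new = u.copy()
        # 7-point stencil: average of the six neighbours plus the source term
        u_new[1:-1, 1:-1, 1:-1] = (
            u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1] +
            u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1] +
            u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2] +
            h * h * f[1:-1, 1:-1, 1:-1]
        ) / 6.0
        if np.max(np.abs(u_new - u)) < tol:
            return u_new, it + 1
        u = u_new
    return u, max_iter
```

    A Gauss-Seidel sweep differs only in that it uses freshly updated neighbour values within the same sweep, which typically converges in fewer iterations but makes the update order sequential; the independent updates of Jacobi are what make it easy to parallelize, matching the trade-off discussed above.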

  19. Comparison between iteration schemes for three-dimensional coordinate-transformed saturated-unsaturated flow model

    NASA Astrophysics Data System (ADS)

    An, Hyunuk; Ichikawa, Yutaka; Tachikawa, Yasuto; Shiiba, Michiharu

    2012-11-01

    Three different iteration methods for a three-dimensional coordinate-transformed saturated-unsaturated flow model are compared in this study. The Picard and Newton iteration methods are the common approaches for solving Richards' equation. The Picard method is simple to implement and cost-efficient on an individual iteration basis; however, it converges more slowly than the Newton method. On the other hand, although the Newton method converges faster, it is more complex to implement and consumes more CPU resources per iteration than the Picard method. The comparison of the two methods in finite-element models (FEM) for saturated-unsaturated flow has been well evaluated in previous studies. However, the two iteration methods might exhibit different behavior in a coordinate-transformed finite-difference model (FDM). In addition, the Newton-Krylov method could be a suitable alternative for the coordinate-transformed FDM, because there the Newton method requires the evaluation of a 19-point stencil Jacobian matrix, and the formation of a 19-point stencil is quite a complex and laborious procedure. Instead, the Newton-Krylov method calculates the matrix-vector product, which can be easily approximated by calculating differences of the original nonlinear function. In this respect, the Newton-Krylov method might be the most appropriate iteration method for the coordinate-transformed FDM. However, it involves the additional cost of taking such an approximation at each Krylov iteration. In this paper, we evaluated the efficiency and robustness of three iteration methods (the Picard, Newton, and Newton-Krylov methods) for simulating saturated-unsaturated flow through porous media using a three-dimensional coordinate-transformed FDM.
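
    The trade-off between the two classical schemes can be illustrated on a generic nonlinear system F(u) = 0; the sketch below (Picard as fixed-point iteration, Newton with an explicit Jacobian) uses an illustrative test function and is not the Richards'-equation solver from the study.

```python
import numpy as np

def picard(g, u0, tol=1e-10, max_iter=200):
    """Fixed-point (Picard) iteration: u_{k+1} = g(u_k)."""
    u = np.asarray(u0, dtype=float)
    for k in range(max_iter):
        u_new = g(u)
        if np.linalg.norm(u_new - u) < tol:
            return u_new, k + 1
        u = u_new
    return u, max_iter

def newton(f, jac, u0, tol=1e-10, max_iter=50):
    """Newton iteration: solve J(u_k) du = -F(u_k), then u_{k+1} = u_k + du."""
    u = np.asarray(u0, dtype=float)
    for k in range(max_iter):
        r = f(u)
        if np.linalg.norm(r) < tol:
            return u, k
        du = np.linalg.solve(jac(u), -r)
        u = u + du
    return u, max_iter

# Illustrative test problem: F(u) = u - cos(u) = 0, with g(u) = cos(u)
f = lambda u: u - np.cos(u)
jac = lambda u: np.array([[1.0 + np.sin(u[0])]])
g = lambda u: np.cos(u)
print(picard(g, [0.5]))       # converges, but at a linear rate
print(newton(f, jac, [0.5]))  # converges in a few iterations (quadratic rate)
```

    A Newton-Krylov variant replaces the explicit Jacobian solve with a Krylov solver that only needs Jacobian-vector products, which can be approximated by finite differences of F; this is the appeal noted in the abstract, at the cost of one extra function evaluation per Krylov iteration.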

  20. Determination of the pure silicon monocarbide content of silicon carbide and products based on silicon carbide

    NASA Technical Reports Server (NTRS)

    Prost, L.; Pauillac, A.

    1978-01-01

    Experience has shown that different methods of analysis of SiC products give different results. Methods identified as AFNOR, FEPA, and manufacturer P, currently used to detect SiC, free C, free Si, free Fe, and SiO2 are reviewed. The AFNOR method gives lower SiC content, attributed to destruction of SiC by grinding. Two products sent to independent labs for analysis by the AFNOR and FEPA methods showed somewhat different results, especially for SiC, SiO2, and Al2O3 content, whereas an X-ray analysis showed a SiC content approximately 10 points lower than by chemical methods.

  1. Spectrophotometric Methods for Simultaneous Determination of Oxytetracycline HCl and Flunixin Meglumine in Their Veterinary Pharmaceutical Formulation.

    PubMed

    Merey, Hanan A; Abd-Elmonem, Mahmmoud S; Nazlawy, Hagar N; Zaazaa, Hala E

    2017-01-01

    Four precise, accurate, selective, and sensitive UV-spectrophotometric methods were developed and validated for the simultaneous determination of a binary mixture of Oxytetracycline HCl (OXY) and Flunixin Meglumine (FLU). The first method, dual wavelength (DW), depends on measuring the difference in absorbance (ΔA 273.4-327 nm) for the determination of OXY, where the difference for FLU is zero, while FLU is determined at ΔA 251.7-275.7 nm. The second method, the first-derivative spectrophotometric method (1D), depends on measuring the peak amplitude of the first derivative selectively at 377 and 266.7 nm for the determination of OXY and FLU, respectively. The third method, the ratio difference method, depends on the difference in amplitudes of the ratio spectra at ΔP 286.5-324.8 nm and ΔP 249.6-286.3 nm for the determination of OXY and FLU, respectively. The fourth method, the first derivative of ratio spectra method (1DD), depends on measuring the peak-to-peak amplitude of the first derivative of the ratio spectra at 296.7 to 369 nm and 259.1 to 304.7 nm for the determination of OXY and FLU, respectively. Different factors affecting the applied spectrophotometric methods were studied. The proposed methods were validated according to ICH guidelines. Satisfactory results were obtained for determination of both drugs in laboratory-prepared mixtures and the pharmaceutical dosage form. The developed methods compare favourably with the official ones.

  2. The Demirjian versus the Willems method for dental age estimation in different populations: A meta-analysis of published studies

    PubMed Central

    2017-01-01

    Background: The accuracy of radiographic methods for dental age estimation is important for biological growth research and forensic applications. Accuracy of the two most commonly used systems (Demirjian and Willems) has been evaluated with conflicting results. This study investigates the accuracies of these methods for dental age estimation in different populations. Methods: A search of PubMed, Scopus, Ovid, Database of Open Access Journals and Google Scholar was undertaken. Eligible studies published before December 28, 2016 were reviewed and analyzed. Meta-analysis was performed on 28 published articles using the Demirjian and/or Willems methods to estimate chronological age in 14,109 children (6,581 males, 7,528 females) age 3–18 years in studies using Demirjian’s method and 10,832 children (5,176 males, 5,656 females) age 4–18 years in studies using Willems’ method. The weighted mean difference at 95% confidence interval was used to assess accuracies of the two methods in predicting the chronological age. Results: The Demirjian method significantly overestimated chronological age (p<0.05) in males age 3–15 and females age 4–16 when studies were pooled by age cohorts and sex. The majority of studies using Willems’ method did not report significant overestimation of ages in either sex. Overall, Demirjian’s method significantly overestimated chronological age compared to the Willems method (p<0.05). The weighted mean difference for the Demirjian method was 0.62 for males and 0.72 for females, while that of the Willems method was 0.26 for males and 0.29 for females. Conclusion: The Willems method provides more accurate estimation of chronological age in different populations, while Demirjian’s method has a broad application in terms of determining maturity scores. However, accuracy of Demirjian age estimations is confounded by population variation when converting maturity scores to dental ages. For highest accuracy of age estimation, population-specific standards, rather than a universal standard or methods developed on other populations, need to be employed. PMID:29117240
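
    For readers unfamiliar with the pooling step, the following is a minimal sketch of a fixed-effect, inverse-variance weighted mean difference of the kind reported above; the per-study inputs and the fixed-effect choice are assumptions, not the paper's exact meta-analytic model.

```python
import numpy as np
from scipy.stats import norm

def pooled_weighted_mean_difference(md, se):
    """Fixed-effect inverse-variance pooling of per-study mean differences.

    md : per-study mean difference (e.g., estimated age - chronological age, years)
    se : per-study standard error of that difference
    """
    md, se = np.asarray(md, float), np.asarray(se, float)
    w = 1.0 / se**2                       # inverse-variance weights
    wmd = np.sum(w * md) / np.sum(w)      # pooled weighted mean difference
    se_pooled = np.sqrt(1.0 / np.sum(w))
    ci = (wmd - 1.96 * se_pooled, wmd + 1.96 * se_pooled)
    z = wmd / se_pooled
    p = 2 * (1 - norm.cdf(abs(z)))        # two-sided p-value
    return wmd, ci, p
```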

  3. The Effects of Process Oriented Guided Inquiry Learning on Secondary Student ACT Science Scores

    NASA Astrophysics Data System (ADS)

    Judd, William Lindsey

    The purpose of this study was to examine any significant difference on secondary school chemistry students' ACT Science Test scores between students taught by the Process Oriented Guided Inquiry Learning (POGIL) method versus students taught by traditional, teacher-centered pedagogy. This study also examined any difference between students taught by the POGIL method versus students taught by traditional, teacher-centered pedagogy in regard to the three different types of questions on the ACT Science Test: data representation, research summaries, and conflicting viewpoints. The sample consisted of sophomore-level students at two private, suburban Christian schools. A pretest-posttest design was used to compare the mean difference in scores from ACT issued sample test booklets before and after each group had received instruction via the POGIL method or more traditional methods. This study found that there was no significant difference in the mean difference of test scores between the two groups. This study also found that there was not a significant difference in the mean difference of scores in regard to the three different types of questions on the ACT Science Test. Further implications of this study are discussed.

  4. Assessing muscular oxygenation during incremental exercise using near-infrared spectroscopy: comparison of three different methods.

    PubMed

    Agbangla, N F; Audiffren, M; Albinet, C T

    2017-12-20

    Using continuous-wave near-infrared spectroscopy (NIRS), this study compared three different methods, namely the slope method (SM), the amplitude method (AM), and the area under the curve (AUC) method to determine the variations of intramuscular oxygenation level as a function of workload. Ten right-handed subjects (22 ± 4 years) performed one isometric contraction at each of three different workloads (30 %, 50 % and 90 % of maximal voluntary strength) during a period of twenty seconds. Changes in oxyhemoglobin (Δ[HbO2]) and deoxyhemoglobin (Δ[HHb]) concentrations in the superficial flexor of fingers were recorded using continuous-wave NIRS. The results showed a strong consistency between the three methods, with standardized Cronbach alphas of 0.87 for Δ[HHb] and 0.95 for Δ[HbO2]. No significant differences between the three methods were observed concerning Δ[HHb] as a function of workload. However, only the SM showed sufficient sensitivity to detect a significant decrease in Δ[HbO2] between 30 % and 50 % of workload (p<0.01). Among these three methods, the SM appeared to be the only method that was well adapted and sensitive enough to determine slight changes in Δ[HbO2]. Theoretical and methodological implications of these results are discussed.

  5. Comparing different methods for assessing contaminant bioavailability during sediment remediation.

    PubMed

    Jia, Fang; Liao, Chunyang; Xue, Jiaying; Taylor, Allison; Gan, Jay

    2016-12-15

    Sediment contamination by persistent organic pollutants from historical episodes is widespread and remediation is often needed to clean up severely contaminated sites. Measuring contaminant bioavailability in a before-and-after manner lends to improved assessment of remediation effectiveness. However, a number of bioavailability measurement methods have been developed, posing a challenge in method selection for practitioners. In this study, three different bioavailability measurement methods, i.e., solid phase microextraction (SPME), Tenax desorption, and isotope dilution method (IDM), were compared in evaluating changes in bioavailability of DDT and its degradates in sediment following simulated remediation treatments. When compared to the unamended sediments, all three methods predicted essentially the same degrees of changes in bioavailability after amendment with activated carbon, charcoal or sand. After normalizing over the unamended control, measurements by different methods were linearly correlated with each other, with slopes close to 1. The same observation was further made with a Superfund site marine sediment. This finding suggests that different methods may be used in evaluating remediation efficiency. However, Tenax desorption or IDM consistently offered better sensitivity than SPME in detecting bioavailability changes. Results from this study highlight the value of considering bioavailability when evaluating remediation effectiveness and provide guidance on the selection of bioavailability measurement methods in such assessments. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. A Comparative Analysis of Pitch Detection Methods Under the Influence of Different Noise Conditions.

    PubMed

    Sukhostat, Lyudmila; Imamverdiyev, Yadigar

    2015-07-01

    Pitch is one of the most important components in various speech processing systems. The aim of this study was to evaluate different pitch detection methods under various noise conditions. Prospective study. For evaluation of pitch detection algorithms, time-domain, frequency-domain, and hybrid methods were considered by using the Keele and CSTR speech databases. Each of them has its own advantages and disadvantages. Experiments have shown that the BaNa method achieves the highest pitch detection accuracy. The development of pitch detection methods that are robust to additive noise at different signal-to-noise ratios is an important field of research, with many opportunities for enhancing modern methods. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  7. Investigation of diffusion length distribution on polycrystalline silicon wafers via photoluminescence methods

    PubMed Central

    Lou, Shishu; Zhu, Huishi; Hu, Shaoxu; Zhao, Chunhua; Han, Peide

    2015-01-01

    Characterization of the diffusion length of solar cells in space has been widely studied using various methods, but few studies have focused on a fast, simple way to obtain the quantified diffusion length distribution on a silicon wafer. In this work, we present two different facile methods of doing this by fitting photoluminescence images taken in two different wavelength ranges or from different sides. These methods, which are based on measuring the ratio of two photoluminescence images, yield absolute values of the diffusion length and are less sensitive to the inhomogeneity of the incident laser beam. A theoretical simulation and experimental demonstration of this method are presented. The diffusion length distributions on a polycrystalline silicon wafer obtained by the two methods show good agreement. PMID:26364565

  8. Comparative Results of Using Different Methods for Discovery of Microorganisms in very Ancient Layers of the Central Antarctic Glacier above the Lake Vostok

    NASA Technical Reports Server (NTRS)

    Abyzov, S. S.; Hoover, R. B.; Imura, S.; Mitskevich, I. N.; Naganuma, T.; Poglazova, M. N.; Ivanov, M. V.

    2002-01-01

    The ice sheet of the Central Antarctic is considered by the scientific community worldwide as a model to elaborate on different methods to search for life outside Earth. This became especially significant in connection with the discovery of the underglacial lake in the vicinity of the Russian Antarctic Station Vostok. Lake Vostok is considered by many scientists as an analog of the ice-covered seas of Jupiter's satellite Europa. According to the opinion of many researchers, there is the possibility that relict forms of microorganisms, well preserved since the Ice Age, may be present in this lake. Investigations throughout the thickness of the ice sheet above Lake Vostok show the presence of microorganisms belonging to different well-known taxonomic groups, even in the very ancient horizons close to the floor of the glacier. Different methods were used to search for microorganisms that are rarely found in the deep ancient layers of an ice sheet. The method of aseptic sampling from the ice cores, and the controls for sterile conditions at all stages of these investigations, are described in detail in previous reports. Primary investigations tried the usual methods of sowing samples onto different nutrient media, with the result that only a few microorganisms grew on the media used. Isolating the organisms obtained for further investigation using modern methods, including DNA analysis, therefore appears preferable. Further investigations of the very ancient layers of the ice sheet by radioisotopic, luminescence, and scanning electron microscopy methods in different modifications revealed the quantity and morphological diversity of the cells of microorganisms distributed in the different horizons. Investigations over many years have shown that the microflora in the very ancient strata of the Antarctic ice cover, nearest to the bedrock, supports the effectiveness of using a combination of different methods to search for signs of life in ancient icy formations, which might play a role in the long-term preservation and transportation of microbial life throughout the Universe.

  9. Comparative results of using different methods for discovery of microorganisms in very ancient layers of the Central Antartic Glacier above the Lake Vostok

    NASA Astrophysics Data System (ADS)

    Abyzov, S.; Hoover, R.; Imura, S.; Mitskevich, I.; Naganuma, T.; Poglazova, M.; Ivanov, M.

    The ice sheet of the Central Antarctic is considered by the worldwide scientific community as a model for elaborating different methods to search for life outside the Earth. This problem became especially significant in connection with the discovery of the subglacial lake in the vicinity of the Russian Antarctic Station Vostok. This lake, later named "Lake Vostok", is considered by many scientists as an analog of the ice-covered seas of Jupiter's satellite Europa. According to the opinion of many researchers, there is a great possibility that relict forms of microorganisms, well preserved since the Ice Age, are present in this lake. Investigations throughout the thickness of the ice sheet above Lake Vostok show the presence of microorganisms belonging to well-known taxonomic groups, even in the very ancient horizons close to the floor of the glacier. Different methods were used to search for the microorganisms, which were rarely found in the deep ancient layers of the ice sheet. The method of aseptic sampling from the ice cores, and the sterility controls applied at all stages of these investigations, are described in detail in previous reports. Primary investigations using the usual methods of sowing samples onto different nutrient media recovered only a small part of the microorganisms, namely those able to grow on the media used. Isolating the obtained organisms for further investigation using modern methods, including DNA analysis, therefore appears to be of preferential importance. In further investigations of the very ancient layers of the ice sheet by radioisotopic, luminescence and scanning electron microscopy methods of different modifications, both the quantity of microorganisms distributed over the different horizons and the morphological diversity of the obtained cells were determined. Many years of investigation of the microflora in the very ancient strata of the Antarctic ice cover close to the bedrock testify to the effectiveness of combining different methods to search for signs of life in ancient icy formations, which may evidently preserve and transport life in the Universe.

  10. Dosimetric comparison of lung stereotactic body radiotherapy treatment plans using averaged computed tomography and end-exhalation computed tomography images: Evaluation of the effect of different dose-calculation algorithms and prescription methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitsuyoshi, Takamasa; Nakamura, Mitsuhiro, E-mail: m_nkmr@kuhp.kyoto-u.ac.jp; Matsuo, Yukinori

    The purpose of this article is to quantitatively evaluate differences in dose distributions calculated using various computed tomography (CT) datasets, dose-calculation algorithms, and prescription methods in stereotactic body radiotherapy (SBRT) for patients with early-stage lung cancer. Data on 29 patients with early-stage lung cancer treated with SBRT were retrospectively analyzed. Averaged CT (Ave-CT) and expiratory CT (Ex-CT) images were reconstructed for each patient using 4-dimensional CT data. Dose distributions were initially calculated using the Ave-CT images and recalculated (in the same monitor units [MUs]) by employing Ex-CT images with the same beam arrangements. The dose-volume parameters, including D95, D90, D50, and D2 of the planning target volume (PTV), were compared between the 2 image sets. To explore the influence of dose-calculation algorithms and prescription methods on the differences in dose distributions evident between Ave-CT and Ex-CT images, we calculated dose distributions using the following 3 different algorithms: x-ray Voxel Monte Carlo (XVMC), Acuros XB (AXB), and the anisotropic analytical algorithm (AAA). We also used 2 different dose-prescription methods: the isocenter prescription and the PTV periphery prescription methods. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data were within 3 percentage points (%pts) employing the isocenter prescription method, and within 1.5%pts using the PTV periphery prescription method, irrespective of which of the 3 algorithms (XVMC, AXB, and AAA) was employed. The frequencies of dose-volume parameters differing by >1%pt when the XVMC and AXB were used were greater than those associated with the use of the AAA, regardless of the dose-prescription method employed. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data on patients who underwent lung SBRT were within 3%pts, regardless of the dose-calculation algorithm or the dose-prescription method employed.

  11. Three Dimensional Time Dependent Stochastic Method for Cosmic-ray Modulation

    NASA Astrophysics Data System (ADS)

    Pei, C.; Bieber, J. W.; Burger, R. A.; Clem, J. M.

    2009-12-01

    A proper understanding of the different behavior of galactic cosmic ray intensities in different solar cycle phases requires solving the modulation equation with time dependence. We present a detailed description of our newly developed stochastic approach for cosmic ray modulation, which we believe is the first attempt to solve the time-dependent Parker equation in 3D; it evolves from our 3D steady-state stochastic approach, which has been benchmarked extensively against the finite difference method. Our 3D stochastic method differs from other stochastic approaches in the literature (Ball et al. 2005, Miyake et al. 2005, and Florinski 2008) in several ways. For example, we employ spherical coordinates, which makes the code much more efficient by reducing coordinate transformations. In addition, our stochastic differential equations are different, because our map from Parker's original equation to the Fokker-Planck equation extends the method used by Jokipii and Levy (1977), although all 3D stochastic methods are essentially based on the Ito formula. The advantage of the stochastic approach is that, besides the intensities, it also gives probability information on the travel times and path lengths of cosmic rays. We show that excellent agreement exists between solutions obtained by our steady-state stochastic method and by the traditional finite difference method. We also show time-dependent solutions for an idealized heliosphere with a Parker magnetic field, a planar current sheet, and a simple initial condition.
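
    The core numerical ingredient of such stochastic approaches is the integration of stochastic differential equations that correspond, via the Ito/Fokker-Planck relation, to the transport equation. As a generic stand-in (not the 3D Parker-equation mapping used in the paper), here is a minimal Euler-Maruyama sketch for a 1D drift-diffusion SDE.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, dt, n_steps, rng=None):
    """Integrate dX = drift(X) dt + diffusion(X) dW with the Euler-Maruyama scheme.
    The full trajectory is returned, so path lengths and travel times can be inspected."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))          # Wiener increment
        x[k + 1] = x[k] + drift(x[k]) * dt + diffusion(x[k]) * dw
    return x

# Purely illustrative Ornstein-Uhlenbeck-like trajectory
traj = euler_maruyama(lambda x: -0.5 * x, lambda x: 1.0, x0=1.0, dt=0.01, n_steps=1000)
```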

  12. Helicopter Fatigue. A Review of Current Requirements and Substantiation Procedures

    DTIC Science & Technology

    1979-01-01

    which the applications differ between contractors based on their individual experience. Load Application: The ideal method of measuring flight loads would... method is different for the parts mainly dimensioned by high cycle fatigue (rotors and gearboxes) and for those subjected to low cycle fatigue (e.g...into damage per hour. 2.3. Calculation of the service life: Two methods are available, both with advantages and drawbacks. They only differ by

  13. Compositions and methods for detecting single nucleotide polymorphisms

    DOEpatents

    Yeh, Hsin-Chih; Werner, James; Martinez, Jennifer S.

    2016-11-22

    Described herein are nucleic acid-based probes and methods for discriminating and detecting single nucleotide variants in nucleic acid molecules (e.g., DNA). The methods include the use of a pair of probes to detect and identify polymorphisms, for example single nucleotide polymorphisms in DNA. The pair of probes emits a different fluorescent wavelength of light depending on the association and alignment of the probes when hybridized to a target nucleic acid molecule. Each pair of probes is capable of discriminating at least two different nucleic acid molecules that differ by at least a single nucleotide. The methods and probes can be used, for example, for detection of DNA polymorphisms that are indicative of a particular disease or condition.

  14. Image scanning fluorescence emission difference microscopy based on a detector array.

    PubMed

    Li, Y; Liu, S; Liu, D; Sun, S; Kuang, C; Ding, Z; Liu, X

    2017-06-01

    We propose a novel imaging method that significantly enhances the three-dimensional resolution of confocal microscopy, and we experimentally demonstrate, for the first time, a new fluorescence emission difference method based on parallel detection with a detector array. Following the principles of photon reassignment in image scanning microscopy, the images captured by the detector array were rearranged, and by selecting appropriate reassignment patterns, an imaging result with enhanced resolution can be achieved with the fluorescence emission difference method. Two specific methods are proposed in this paper, showing that the difference between an image scanning microscopy image and a confocal image achieves an improvement in transverse resolution of approximately 43% compared with confocal microscopy, while the axial resolution can also be enhanced by at least 22% experimentally and 35% theoretically. Moreover, the methods presented in this paper can improve the lateral resolution by around 10% compared with fluorescence emission difference and by 15% compared with Airyscan. The mechanism of our methods is verified by numerical simulations and experimental results, and it has significant potential in biomedical applications. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
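
    The subtraction step common to fluorescence emission difference schemes can be sketched as below (a confocal-equivalent image minus a scaled second image, with negative pixels clipped); the scaling factor and image sources are assumptions, and this is not the authors' full pixel-reassignment pipeline.

```python
import numpy as np

def emission_difference(img_confocal, img_second, gamma=0.7):
    """FED-style difference image: I = I_confocal - gamma * I_second,
    with negative pixels clipped to zero to suppress subtraction artefacts."""
    diff = img_confocal.astype(float) - gamma * img_second.astype(float)
    return np.clip(diff, 0.0, None)
```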

  15. Evaluation of Methods for the Extraction of DNA from Drinking Water Distribution System Biofilms

    PubMed Central

    Hwang, Chiachi; Ling, Fangqiong; Andersen, Gary L.; LeChevallier, Mark W.; Liu, Wen-Tso

    2012-01-01

    While drinking water biofilms have been characterized in various drinking water distribution systems (DWDS), little is known about the impact of different DNA extraction methods on the subsequent analysis of microbial communities in drinking water biofilms. Since different DNA extraction methods have been shown to affect the outcome of microbial community analysis in other environments, it is necessary to select a DNA extraction method prior to the application of molecular tools to characterize the complex microbial ecology of the DWDS. This study compared the quantity and quality of DNA yields from selected DWDS bacteria with different cell wall properties using five widely used DNA extraction methods. These were further selected and evaluated for their efficiency and reproducibility of DNA extraction from DWDS samples. Terminal restriction fragment length analysis and the 454 pyrosequencing technique were used to interpret the differences in microbial community structure and composition, respectively, from extracted DNA. Such assessments serve as a concrete step towards the determination of an optimal DNA extraction method for drinking water biofilms, which can then provide a reliable comparison of the meta-analysis results obtained in different laboratories. PMID:22075624

  16. Development and Validation of Different Ultraviolet-Spectrophotometric Methods for the Estimation of Besifloxacin in Different Simulated Body Fluids.

    PubMed

    Singh, C L; Singh, A; Kumar, S; Kumar, M; Sharma, P K; Majumdar, D K

    2015-01-01

    In the present study a simple, accurate, precise, economical and specific UV-spectrophotometric method for the estimation of besifloxacin in bulk and in different pharmaceutical formulations has been developed. The drug shows a λmax of 289 nm in distilled water, simulated tears and phosphate buffered saline. The linearity of the developed methods was in the range of 3-30 μg/ml of drug, with correlation coefficients (r(2)) of 0.9992, 0.9989 and 0.9984 in distilled water, simulated tears and phosphate buffered saline, respectively. Reproducibility, expressed as %RSD, was found to be less than 2%. The limit of detection in the different media was found to be 0.62, 0.72 and 0.88 μg/ml, respectively. The limit of quantification was found to be 1.88, 2.10 and 2.60 μg/ml, respectively. The proposed method was validated statistically according to International Conference on Harmonization guidelines with respect to specificity, linearity, range, accuracy, precision and robustness. The proposed methods were found to be accurate and highly specific for the estimation of besifloxacin in different pharmaceutical formulations.

  17. Developmental Competence and Epigenetic Profile of Porcine Embryos Produced by Two Different Cloning Methods.

    PubMed

    Liu, Ying; Lucas-Hahn, Andrea; Petersen, Bjoern; Li, Rong; Hermann, Doris; Hassel, Petra; Ziegler, Maren; Larsen, Knud; Niemann, Heiner; Callesen, Henrik

    2017-06-01

    The "Dolly" based cloning (classical nuclear transfer, [CNT]) and the handmade cloning (HMC) are methods that are nowadays routinely used for somatic cloning of large domestic species. Both cloning protocols share several similarities, but differ with regard to the required in vitro culture, which in turn results in different time intervals until embryo transfer. It is not yet known whether the differences between cloned embryos from the two protocols are due to the cloning methods themselves or the in vitro culture, as some studies have shown detrimental effects of in vitro culture on conventionally produced embryos. The goal of this study was to unravel putative differences between two cloning methods, with regard to developmental competence, expression profile of a panel of developmentally important genes and epigenetic profile of porcine cloned embryos produced by either CNT or HMC, either with (D5 or D6) or without (D0) in vitro culture. Embryos cloned by these two methods had a similar morphological appearance on D0, but displayed different cleavage rates and different quality of blastocysts, with HMC embryos showing higher blastocyst rates (HMC vs. CNT: 35% vs. 10%, p < 0.05) and cell numbers per blastocyst (HMC vs. CNT: 31 vs. 23 on D5 and 42 vs. 18 on D6, p < 0.05) compared to CNT embryos. With regard to histone acetylation and gene expression, CNT and HMC derived cloned embryos were similar on D0, but differed on D6. In conclusion, both cloning methods and the in vitro culture may affect porcine embryo development and epigenetic profile. The two cloning methods essentially produce embryos of similar quality on D0 and after 5 days in vitro culture, but thereafter both histone acetylation and gene expression differ between the two types of cloned embryos.

  18. THE PSTD ALGORITHM: A TIME-DOMAIN METHOD REQUIRING ONLY TWO CELLS PER WAVELENGTH. (R825225)

    EPA Science Inventory

    A pseudospectral time-domain (PSTD) method is developed for solutions of Maxwell's equations. It uses the fast Fourier transform (FFT), instead of the finite differences of conventional finite-difference time-domain (FDTD) methods, to represent spatial derivatives. Because the Fourie...

  19. Integrating rangeland and pastureland assessment methods into a national grazingland assessment approach

    USDA-ARS?s Scientific Manuscript database

    Grazingland resource allocation and decision making at the national scale need to be based on comparable metrics. However, in the USA, rangelands and pasturelands have traditionally been assessed using different methods and indicators. These differences in assessment methods limit the ability to con...

  20. Accessibility of long-term family planning methods: a comparison study between Output Based Approach (OBA) clients verses non-OBA clients in the voucher supported facilities in Kenya.

    PubMed

    Oyugi, Boniface; Kioko, Urbanus; Kaboro, Stephen Mbugua; Gikonyo, Shadrack; Okumu, Clarice; Ogola-Munene, Sarah; Kalsi, Shaminder; Thiani, Simon; Korir, Julius; Odundo, Paul; Baltazaar, Billy; Ranji, Moses; Muraguri, Nicholas; Nzioka, Charles

    2017-03-27

    The study seeks to evaluate the difference in access to long-term family planning (LTFP) methods between output-based approach (OBA) and non-OBA clients within OBA facilities. The study utilises a quasi-experimental design. A two-tailed unpaired t-test with unequal variance is used to test for significant variation in mean access. Difference-in-differences (DiD) estimation of the program effect on long-term family planning methods is used to estimate the causal effect by exploiting group-level differences on two or more dimensions. The study also uses a linear regression model to evaluate the predictors of choice of long-term family planning methods. Data were analysed using SPSS version 17. All the methods (bilateral tubal ligation (BTL), vasectomy, intrauterine contraceptive device (IUCD), implants, and total or combined long-term family planning methods (LTFP)) showed a statistically significant difference in mean utilization between OBA and non-OBA clients. The difference-in-differences estimates reveal that the difference in access between OBA and non-OBA clients can significantly be attributed to the implementation of the OBA program for the intrauterine contraceptive device (p = 0.002), implants (p = 0.004), and total or combined long-term family planning methods (p = 0.001). The county of residence is a significant determinant of access to all long-term family planning methods except vasectomy, and the year of registration is a significant determinant of access, especially for implants and total or combined long-term family planning methods. The management level and facility type do not play a role in determining the type of long-term family planning method preferred; however, non-governmental organisations (NGOs) as a management level influence the choice of all methods (bilateral tubal ligation, intrauterine contraceptive device, implants, and combined methods) except vasectomy. The adjusted R² value, representing the percentage of the variance explained by the various models, is larger than 18% for implants and total or combined long-term family planning. The study showed that voucher services in Kenya have been effective in providing long-term family planning services and improving access to care for women of reproductive age. Therefore, a voucher scheme can be used as a tool for bridging the gap in unmet family planning needs in Kenya and could potentially be more effective if rolled out to other counties.
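
    For illustration, the difference-in-differences program effect described above can be estimated as the interaction term of a two-way linear model; the column names and the use of statsmodels are assumptions, not the study's actual SPSS specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

def did_estimate(df: pd.DataFrame):
    """DiD program effect = coefficient on treated:post in
    uptake ~ treated + post + treated:post.
    df is assumed to have numeric 0/1 columns 'treated' and 'post'
    and an outcome column 'uptake' (e.g., LTFP method uptake)."""
    model = smf.ols("uptake ~ treated + post + treated:post", data=df).fit()
    return model.params["treated:post"], model.pvalues["treated:post"]
```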

  1. Practical dose point-based methods to characterize dose distribution in a stationary elliptical body phantom for a cone-beam C-arm CT system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Jang-Hwan, E-mail: jhchoi21@stanford.edu; Constantin, Dragos; Ganguly, Arundhuti

    2015-08-15

    Purpose: To propose new dose point measurement-based metrics to characterize the dose distributions and the mean dose from a single partial rotation of an automatic exposure control-enabled, C-arm-based, wide cone angle computed tomography system over a stationary, large, body-shaped phantom. Methods: A small 0.6 cm³ ion chamber (IC) was used to measure the radiation dose in an elliptical body-shaped phantom made of tissue-equivalent material. The IC was placed at 23 well-distributed holes in the central and peripheral regions of the phantom and dose was recorded for six acquisition protocols with different combinations of minimum kVp (109 and 125 kVp) and z-collimator aperture (full: 22.2 cm; medium: 14.0 cm; small: 8.4 cm). Monte Carlo (MC) simulations were carried out to generate complete 2D dose distributions in the central plane (z = 0). The MC model was validated at the 23 dose points against IC experimental data. The planar dose distributions were then estimated using subsets of the point dose measurements using two proposed methods: (1) the proximity-based weighting method (method 1) and (2) the dose point surface fitting method (method 2). Twenty-eight different dose point distributions with six different point number cases (4, 5, 6, 7, 14, and 23 dose points) were evaluated to determine the optimal number of dose points and their placement in the phantom. The performances of the methods were determined by comparing their results with those of the validated MC simulations. The performances of the methods in the presence of measurement uncertainties were evaluated. Results: The 5-, 6-, and 7-point cases had differences below 2%, ranging from 1.0% to 1.7% for both methods, which is a performance comparable to that of the methods with a relatively large number of points, i.e., the 14- and 23-point cases. However, with the 4-point case, the performances of the two methods decreased sharply. Among the 4-, 5-, 6-, and 7-point cases, the 7-point case (1.0% [±0.6%] difference) and the 6-point case (0.7% [±0.6%] difference) performed best for method 1 and method 2, respectively. Moreover, method 2 demonstrated high-fidelity surface reconstruction with as few as 5 points, showing pixelwise absolute differences of 3.80 mGy (±0.32 mGy). Although the performance was shown to be sensitive to the phantom displacement from the isocenter, the performance changed by less than 2% for shifts up to 2 cm in the x- and y-axes in the central phantom plane. Conclusions: With as few as five points, method 1 and method 2 were able to compute the mean dose with reasonable accuracy, demonstrating differences of 1.7% (±1.2%) and 1.3% (±1.0%), respectively. A larger number of points do not necessarily guarantee better performance of the methods; optimal choice of point placement is necessary. The performance of the methods is sensitive to the alignment of the center of the body phantom relative to the isocenter. In body applications where dose distributions are important, method 2 is a better choice than method 1, as it reconstructs the dose surface with high fidelity, using as few as five points.
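
    The record does not spell out the proximity-based weighting scheme; as a purely illustrative stand-in for the general idea, the sketch below estimates a planar dose map and mean dose from a handful of point measurements using inverse-distance weighting. All names and the weighting exponent are assumptions, not the authors' exact scheme.

```python
import numpy as np

def idw_dose_map(points, doses, grid_x, grid_y, power=2.0, eps=1e-9):
    """Estimate a 2D dose distribution from sparse point measurements by
    inverse-distance weighting (an illustrative proximity-based scheme).

    points : iterable of (x, y) measurement locations in the phantom plane
    doses  : measured doses at those locations
    """
    xx, yy = np.meshgrid(grid_x, grid_y)
    est = np.zeros_like(xx, dtype=float)
    wsum = np.zeros_like(xx, dtype=float)
    for (px, py), d in zip(points, doses):
        w = 1.0 / (np.hypot(xx - px, yy - py) ** power + eps)
        est += w * d
        wsum += w
    dose_map = est / wsum
    mean_dose = dose_map.mean()   # crude mean dose over the phantom plane
    return dose_map, mean_dose
```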

  2. Recent Developments in Computational Techniques for Applied Hydrodynamics.

    DTIC Science & Technology

    1979-12-07

    Keywords (from the report documentation page): numerical methods, fluids, incompressible flow, finite difference methods, Poisson equation, convective equations. Abstract (fragment): ... weaknesses of the different approaches are analyzed. Finite-difference techniques have particularly attractive properties in this framework. Hence it will ... be worthwhile to correct, at least partially, the difficulties from which Eulerian and Lagrangian finite-difference techniques suffer, discussed in ...

  3. Moisture Transport in Composites during Repair Work,

    DTIC Science & Technology

    1983-09-01

    Contents (fragment): Finite Difference Equations; Initial and Boundary Conditions; Reasonable First ...; ... During Drying and Curing; Convergence of Finite Difference Method Using Different Δt; Convergence of FDA Method for Same Δt. Abstract (fragment): ... transport we will use a finite difference approach, changing the Fickian equation to a finite number of linear algebraic equations that can be solved by ...
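
    Only fragments of the report's discretisation survive above. As a generic illustration of turning the Fickian diffusion equation into a set of algebraic update equations, the sketch below uses an explicit forward-time, central-space scheme for 1D moisture diffusion with a made-up constant diffusivity; it is not the report's formulation.

      import numpy as np

      # Explicit FTCS discretisation of dC/dt = D * d2C/dx2 (Fickian diffusion).
      D = 1.0e-6        # hypothetical diffusivity, cm^2/s
      L = 0.2           # laminate thickness, cm
      nx = 51
      dx = L / (nx - 1)
      dt = 0.4 * dx**2 / D          # satisfies the stability limit D*dt/dx^2 <= 0.5

      c = np.zeros(nx)              # initially dry laminate (normalized content)
      c[0] = c[-1] = 1.0            # saturated boundary concentration

      for _ in range(20000):
          # Interior update: one linear algebraic equation per grid point.
          c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
          c[0] = c[-1] = 1.0        # re-impose boundary conditions

      print("moisture content at mid-thickness:", c[nx // 2])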

  4. Integrated method for chaotic time series analysis

    DOEpatents

    Hively, Lee M.; Ng, Esmond G.

    1998-01-01

    Methods and apparatus for automatically detecting differences between similar but different states in a nonlinear process monitor nonlinear data. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated.

  5. Characterization of Graphite Oxide and Reduced Graphene Oxide Obtained from Different Graphite Precursors and Oxidized by Different Methods Using Raman Spectroscopy.

    PubMed

    Muzyka, Roksana; Drewniak, Sabina; Pustelny, Tadeusz; Chrubasik, Maciej; Gryglewicz, Grażyna

    2018-06-21

    In this paper, the influences of the graphite precursor and the oxidation method on the resulting reduced graphene oxide (especially its composition and morphology) are shown. Three types of graphite were used to prepare samples for analysis, and each of the precursors was oxidized by two different methods (all samples were reduced by the same method of thermal reduction). Each obtained graphite oxide and reduced graphene oxide was analysed by X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS) and Raman spectroscopy (RS).

  6. Measuring the distance between multiple sequence alignments.

    PubMed

    Blackburne, Benjamin P; Whelan, Simon

    2012-02-15

    Multiple sequence alignment (MSA) is a core method in bioinformatics. The accuracy of such alignments may influence the success of downstream analyses such as phylogenetic inference, protein structure prediction, and functional prediction. The importance of MSA has led to the proliferation of MSA methods, with different objective functions and heuristics to search for the optimal MSA. Different methods of inferring MSAs produce different results in all but the most trivial cases. By measuring the differences between inferred alignments, we may be able to develop an understanding of how these differences (i) relate to the objective functions and heuristics used in MSA methods, and (ii) affect downstream analyses. We introduce four metrics to compare MSAs, which include the position in a sequence where a gap occurs or the location on a phylogenetic tree where an insertion or deletion (indel) event occurs. We use both real and synthetic data to explore the information given by these metrics and demonstrate how the different metrics in combination can yield more information about MSA methods and the differences between them. MetAl is a free software implementation of these metrics in Haskell. Source and binaries for Windows, Linux and Mac OS X are available from http://kumiho.smith.man.ac.uk/whelan/software/metal/.
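
    A common way to compare two alignments of the same sequences is to compare the sets of residue pairs they declare homologous. The sketch below implements such a pair-based distance for illustration only; it is a generic homology-set distance, not necessarily one of the four metrics defined in the paper or implemented in MetAl.

      def aligned_pairs(msa):
          """Return the set of homology statements implied by an alignment.

          msa is a list of equal-length strings over residues and '-' gaps; each
          element of the returned set records that residue a of sequence i is
          aligned with residue b of sequence j.
          """
          index = []
          for row in msa:
              count, col_to_res = 0, []
              for ch in row:
                  col_to_res.append(count if ch != "-" else None)
                  if ch != "-":
                      count += 1
              index.append(col_to_res)

          pairs = set()
          for col in range(len(msa[0])):
              for i in range(len(msa)):
                  for j in range(i + 1, len(msa)):
                      a, b = index[i][col], index[j][col]
                      if a is not None and b is not None:
                          pairs.add((i, a, j, b))
          return pairs

      def pair_distance(msa1, msa2):
          """Fraction of homology pairs not shared by the two alignments."""
          p1, p2 = aligned_pairs(msa1), aligned_pairs(msa2)
          union = p1 | p2
          return len(p1 ^ p2) / len(union) if union else 0.0

      # Two alternative alignments of the same three toy sequences.
      print(pair_distance(["AC-GT", "A-CGT", "ACCGT"],
                          ["ACG-T", "AC-GT", "ACCGT"]))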

  7. Virtual and stereoscopic anatomy: when virtual reality meets medical education.

    PubMed

    de Faria, Jose Weber Vieira; Teixeira, Manoel Jacobsen; de Moura Sousa Júnior, Leonardo; Otoch, Jose Pinhata; Figueiredo, Eberval Gadelha

    2016-11-01

    OBJECTIVE The authors sought to construct, implement, and evaluate an interactive and stereoscopic resource for teaching neuroanatomy, accessible from personal computers. METHODS Forty fresh brains (80 hemispheres) were dissected. Images of areas of interest were captured using a manual turntable and processed and stored in a 5337-image database. Pedagogic evaluation was performed on 84 graduate medical students, divided into 3 groups: 1 (conventional method), 2 (interactive nonstereoscopic), and 3 (interactive and stereoscopic). The method was evaluated through a written theory test and a lab practicum. RESULTS Groups 2 and 3 showed the highest mean scores in pedagogic evaluations and differed significantly from Group 1 (p < 0.05). Group 2 did not differ statistically from Group 3 (p > 0.05). Effect sizes, measured as differences in scores before and after lectures, indicate the effectiveness of the method. ANOVA results showed a significant difference (p < 0.05) between groups, and the Tukey test showed statistical differences between Group 1 and the other 2 groups (p < 0.05). No statistical differences between Groups 2 and 3 were found in the practicum. However, there were significant differences when Groups 2 and 3 were compared with Group 1 (p < 0.05). CONCLUSIONS The authors conclude that this method promoted further improvement in knowledge for students and fostered significantly higher learning when compared with traditional teaching resources.

  8. Spine surgeon's kinematics during discectomy, part II: operating table height and visualization methods, including microscope.

    PubMed

    Park, Jeong Yoon; Kim, Kyung Hyun; Kuh, Sung Uk; Chin, Dong Kyu; Kim, Keun Su; Cho, Yong Eun

    2014-05-01

    The surgeon's spine angle during surgery has been studied ergonomically, and the kinematics of the surgeon's spine has been related to musculoskeletal fatigue and pain. Spine angles varied depending on operating table height and visualization method, and in a previous paper we showed that the use of a loupe and a table height at the midpoint between the umbilicus and the sternum are optimal for reducing musculoskeletal loading. However, no studies have previously included a microscope as a possible visualization method. The objective of this study is to assess differences in surgeon spine angles depending on operating table height and visualization method, including microscope. We enrolled 18 experienced spine surgeons for this study, who each performed a discectomy using a spine surgery simulator. Three different methods were used to visualize the surgical field (naked eye, loupe, microscope) and three different operating table heights (anterior superior iliac spine, umbilicus, the midpoint between the umbilicus and the sternum) were studied. Whole spine angles were compared for three different views during the discectomy simulation: midline, ipsilateral, and contralateral. A 16-camera optoelectronic motion analysis system was used, and 16 markers were placed from the head to the pelvis. Lumbar lordosis, thoracic kyphosis, cervical lordosis, and occipital angle were compared between the different operating table heights and visualization methods as well as a natural standing position. Whole spine angles differed significantly depending on visualization method. All parameters were closer to natural standing values when discectomy was performed with a microscope, and there were no differences between the naked eye and the loupe. Whole spine angles were also found to differ from the natural standing position depending on operating table height, and became closer to natural standing position values as the operating table height increased, independent of the visualization method. When using a microscope, lumbar lordosis, thoracic kyphosis, and cervical lordosis showed no differences according to table heights above the umbilicus. This study suggests that the use of a microscope and a table height above the umbilicus are optimal for reducing surgeon musculoskeletal fatigue.

  9. Repeatability in Color Measurements of a Spectrophotometer using Different Positioning Devices.

    PubMed

    Hemming, Michael; Kwon, So Ran; Qian, Fang

    2015-12-01

    This study aimed to evaluate the repeatability of color measurements of an intraoral spectrophotometer with the use of three different methods by two operators. A total of 60 teeth were obtained, comprising 30 human maxillary teeth [central incisors (n = 10); canines (n = 10); molars (n = 10)] and 30 artificial teeth [lateral incisors (n = 10); premolars (n = 20)]. Multiple repeated color measurements were obtained from each tooth using three measuring methods by each of the two operators. Five typodonts with alternating artificial and human teeth were made. Measurements were taken by two operators with the Vita EasyShade spectrophotometer using the custom tray (CT), custom jig (CJ) and free hand (FH) method, twice, at an interval of 2 to 7 days. The Friedman test was used to detect differences among the three color measuring methods. A post hoc Wilcoxon signed-rank test with Bonferroni correction was used for pair-wise comparison of color measurements among the three methods. Additionally, a paired-sample t-test was used to assess a significant difference between the two duplicated measurements made on the same tooth by the same operator for each color parameter and measuring method. For operator A, the mean (SD) overall perceived color change ΔE* for FH, CT and CJ was 2.21 (2.00), 2.39 (1.58) and 2.86 (1.92), respectively. There was a statistically significant difference in perceived ΔE* for FH vs CJ (p = 0.0107). However, there were no significant differences between FH and CT (p = 0.2829) or between CT and CJ (p = 0.1159). For operator B, the mean (SD) ΔE* for FH, CT and CJ was 3.24 (3.46), 1.95 (1.19) and 2.45 (1.56), respectively. There was a significant difference between FH and CT (p = 0.0031). However, there were no statistically significant differences in ΔE* for FH vs CJ (p = 0.3696) or CT vs CJ (p = 0.0809). The repeatability of color measurements differed among the three measuring methods and between operators. Overall, the CT method worked well for both operators. The use of a custom tray with apertures can improve the repeatability of color measurements of an intraoral spectrophotometer.
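
    The ΔE* values reported above are distances between two measurements in CIELAB color space. The record does not state which ΔE* formula was used; the sketch below shows the classic CIE76 definition with made-up L*a*b* readings.

      import math

      def delta_e_cie76(lab1, lab2):
          """CIE76 color difference between two (L*, a*, b*) triplets."""
          return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

      # Hypothetical repeated readings of the same tooth by one operator.
      first_reading = (78.2, 1.4, 22.6)
      second_reading = (77.1, 1.9, 24.0)
      print("dE* =", round(delta_e_cie76(first_reading, second_reading), 2))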

  10. Estimation of Slow Crack Growth Parameters for Constant Stress-Rate Test Data of Advanced Ceramics and Glass by the Individual Data and Arithmetic Mean Methods

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Salem, Jonathan A.; Holland, Frederic A.

    1997-01-01

    The two estimation methods, individual data and arithmetic mean methods, were used to determine the slow crack growth (SCG) parameters (n and D) of advanced ceramics and glass from a large number of room- and elevated-temperature constant stress-rate ('dynamic fatigue') test data. For ceramic materials with Weibull modulus greater than 10, the difference in the SCG parameters between the two estimation methods was negligible; whereas, for glass specimens exhibiting Weibull modulus of about 3, the difference was amplified, resulting in a maximum difference of 16 and 13 %, respectively, in n and D. Of the two SCG parameters, the parameter n was more sensitive to the estimation method than the other. The coefficient of variation in n was found to be somewhat greater in the individual data method than in the arithmetic mean method.
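
    In constant stress-rate testing, fracture strength is commonly related to the applied stress rate by log(strength) = [1/(n+1)] log(stress rate) + log D, so n and D follow from a straight-line fit on log-log axes. The sketch below shows that fit on made-up data; it reproduces neither the individual-data nor the arithmetic-mean estimation scheme compared in the report.

      import numpy as np

      # Hypothetical dynamic-fatigue data: applied stress rates (MPa/s) and the
      # mean fracture strengths (MPa) measured at each rate.
      stress_rate = np.array([0.1, 1.0, 10.0, 100.0])
      strength = np.array([205.0, 228.0, 251.0, 280.0])

      # log10(strength) = (1/(n+1)) * log10(stress_rate) + log10(D)
      slope, intercept = np.polyfit(np.log10(stress_rate), np.log10(strength), 1)
      n = 1.0 / slope - 1.0
      D = 10.0 ** intercept
      print(f"n = {n:.1f}, D = {D:.1f} MPa")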

  11. The spa typing of methicillin-resistant Staphylococcus aureus isolates by High Resolution Melting (HRM) analysis.

    PubMed

    Fasihi, Yasser; Fooladi, Saba; Mohammadi, Mohammad Ali; Emaneini, Mohammad; Kalantar-Neyestanaki, Davood

    2017-09-06

    Molecular typing is an important tool for the control and prevention of infection. A suitable molecular typing method for epidemiological investigation must be easy to perform, highly reproducible, inexpensive, rapid and easy to interpret. In this study, two molecular typing methods, the conventional PCR-sequencing method and high resolution melting (HRM) analysis, were used for staphylococcal protein A (spa) typing of 30 methicillin-resistant Staphylococcus aureus (MRSA) isolates recovered from clinical samples. Based on the PCR-sequencing results, 16 different spa types were identified among the 30 MRSA isolates. Of these 16 spa types, 14 were separated by the HRM method; two spa types, t4718 and t2894, were not separated from each other. According to our results, spa typing based on HRM analysis is very rapid, easy to perform and cost-effective, but the method must be standardized for different regions, spa types, and real-time PCR instruments.

  12. Method selection for sustainability assessments: The case of recovery of resources from waste water.

    PubMed

    Zijp, M C; Waaijers-van der Loop, S L; Heijungs, R; Broeren, M L M; Peeters, R; Van Nieuwenhuijzen, A; Shen, L; Heugens, E H W; Posthuma, L

    2017-07-15

    Sustainability assessments provide scientific support in decision procedures towards sustainable solutions. However, in order to contribute in identifying and choosing sustainable solutions, the sustainability assessment has to fit the decision context. Two complicating factors exist. First, different stakeholders tend to have different views on what a sustainability assessment should encompass. Second, a plethora of sustainability assessment methods exist, due to the multi-dimensional characteristic of the concept. Different methods provide other representations of sustainability. Based on a literature review, we present a protocol to facilitate method selection together with stakeholders. The protocol guides the exploration of i) the decision context, ii) the different views of stakeholders and iii) the selection of pertinent assessment methods. In addition, we present an online tool for method selection. This tool identifies assessment methods that meet the specifications obtained with the protocol, and currently contains characteristics of 30 sustainability assessment methods. The utility of the protocol and the tool are tested in a case study on the recovery of resources from domestic waste water. In several iterations, a combination of methods was selected, followed by execution of the selected sustainability assessment methods. The assessment results can be used in the first phase of the decision procedure that leads to a strategic choice for sustainable resource recovery from waste water in the Netherlands. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Automatic 3D kidney segmentation based on shape constrained GC-OAAM

    NASA Astrophysics Data System (ADS)

    Chen, Xinjian; Summers, Ronald M.; Yao, Jianhua

    2011-03-01

    The kidney can be classified into three main tissue types: renal cortex, renal medulla and renal pelvis (or collecting system). Dysfunction of different renal tissue types may cause different kidney diseases. Therefore, accurate and efficient segmentation of kidney into different tissue types plays a very important role in clinical research. In this paper, we propose an automatic 3D kidney segmentation method which segments the kidney into the three different tissue types: renal cortex, medulla and pelvis. The proposed method synergistically combines active appearance model (AAM), live wire (LW) and graph cut (GC) methods, GC-OAAM for short. Our method consists of two main steps. First, a pseudo 3D segmentation method is employed for kidney initialization in which the segmentation is performed slice-by-slice via a multi-object oriented active appearance model (OAAM) method. An improved iterative model refinement algorithm is proposed for the AAM optimization, which synergistically combines the AAM and LW method. Multi-object strategy is applied to help the object initialization. The 3D model constraints are applied to the initialization result. Second, the object shape information generated from the initialization step is integrated into the GC cost computation. A multi-label GC method is used to segment the kidney into cortex, medulla and pelvis. The proposed method was tested on 19 clinical arterial phase CT data sets. The preliminary results showed the feasibility and efficiency of the proposed method.

  14. SU-E-I-08: Investigation of Deconvolution Methods for Blocker-Based CBCT Scatter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, C; Jin, M; Ouyang, L

    2015-06-15

    Purpose: To investigate whether deconvolution methods can improve the scatter estimation under different blurring and noise conditions for blocker-based scatter correction methods for cone-beam X-ray computed tomography (CBCT). Methods: An “ideal” projection image with scatter was first simulated for blocker-based CBCT data acquisition by assuming no blurring effect and no noise. The ideal image was then convolved with long-tail point spread functions (PSF) with different widths to mimic the blurring effect from the finite focal spot and detector response. Different levels of noise were also added. Three deconvolution methods: (1) inverse filtering; (2) Wiener; and (3) Richardson-Lucy, were used to recover the scatter signal in the blocked region. The root mean square error (RMSE) of estimated scatter serves as a quantitative measure for the performance of different methods under different blurring and noise conditions. Results: Due to the blurring effect, the scatter signal in the blocked region is contaminated by the primary signal in the unblocked region. The direct use of the signal in the blocked region to estimate scatter (“direct method”) leads to large RMSE values, which increase with the increased width of PSF and increased noise. The inverse filtering is very sensitive to noise and practically useless. The Wiener and Richardson-Lucy deconvolution methods significantly improve scatter estimation compared to the direct method. For a typical medium PSF and medium noise condition, both methods (∼20 RMSE) can achieve 4-fold improvement over the direct method (∼80 RMSE). The Wiener method deals better with large noise and Richardson-Lucy works better on wide PSF. Conclusion: We investigated several deconvolution methods to recover the scatter signal in the blocked region for blocker-based scatter correction for CBCT. Our simulation results demonstrate that Wiener and Richardson-Lucy deconvolution can significantly improve the scatter estimation compared to the direct method.
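
    Richardson-Lucy deconvolution iteratively corrects an estimate by the ratio of the observed signal to the current estimate re-blurred with the point spread function. The sketch below is a minimal 1D version on synthetic data, not the authors' implementation; library routines (e.g. in scikit-image) provide equivalent functionality.

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(observed, psf, iterations=50):
          """Minimal 1D Richardson-Lucy deconvolution."""
          psf_mirror = psf[::-1]
          estimate = np.full_like(observed, observed.mean())
          for _ in range(iterations):
              blurred = fftconvolve(estimate, psf, mode="same")
              ratio = observed / np.maximum(blurred, 1e-12)
              estimate *= fftconvolve(ratio, psf_mirror, mode="same")
          return estimate

      # Synthetic "scatter" profile blurred by a long-tailed PSF plus noise.
      x = np.linspace(-1, 1, 256)
      truth = np.exp(-x**2 / 0.1)
      psf = 1.0 / (1.0 + (x / 0.05) ** 2)
      psf /= psf.sum()
      observed = fftconvolve(truth, psf, mode="same")
      observed += np.random.default_rng(0).normal(0, 0.01, observed.size)

      recovered = richardson_lucy(np.clip(observed, 0, None), psf)
      print("RMSE blurred:  ", np.sqrt(np.mean((observed - truth) ** 2)))
      print("RMSE recovered:", np.sqrt(np.mean((recovered - truth) ** 2)))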

  15. The influence of different black carbon and sulfate mixing methods on their optical and radiative properties

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Zhou, Chen; Wang, Zhili; Zhao, Shuyun; Li, Jiangnan

    2015-08-01

    Three different internal mixing methods (Core-Shell, Maxwell-Garnett, and Bruggeman) and one external mixing method are used to study the impact of mixing methods of black carbon (BC) with sulfate aerosol on their optical properties, radiative flux, and heating rate. The optical properties of a mixture of BC and sulfate aerosol particles are considered for three typical bands. The results show that mixing methods, the volume ratio of BC to sulfate, and relative humidity have a strong influence on the optical properties of mixed aerosols. Compared to internal mixing, external mixing underestimates the particle mass absorption coefficient by 20-70% and the particle mass scattering coefficient by up to 50%, whereas it overestimates the particle single scattering albedo by 20-50% in most cases. However, the asymmetry parameter is strongly sensitive to the equivalent particle radius, but is only weakly sensitive to the different mixing methods. Of the internal methods, there is less than 2% difference in all optical properties between the Maxwell-Garnett and Bruggeman methods in all bands; however, the differences between the Core-Shell and Maxwell-Garnett/Bruggeman methods are usually larger than 15% in the ultraviolet and visible bands. A sensitivity test is conducted with the Beijing Climate Center Radiation transfer model (BCC-RAD) using a simulated BC concentration that is typical of east-central China and a sulfate volume ratio of 75%. The results show that the internal mixing methods could reduce the radiative flux more effectively because they produce a higher absorption. The annual mean instantaneous radiative forcing due to BC-sulfate aerosol is about -3.18 W/m2 for the external method and -6.91 W/m2 for the internal methods at the surface, and -3.03/-1.56/-1.85 W/m2 for the external/Core-Shell/(Maxwell-Garnett/Bruggeman) methods, respectively, at the tropopause.
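
    The Maxwell-Garnett rule combines the complex permittivities of an inclusion (here BC) and a host (here the sulfate solution) weighted by the inclusion volume fraction. The refractive indices and volume fraction below are made up purely to show the calculation; they are not the optical constants or band scheme used in the study.

      import numpy as np

      def maxwell_garnett(eps_host, eps_incl, f_incl):
          """Effective permittivity of inclusions (volume fraction f_incl) in a host."""
          num = eps_incl + 2.0 * eps_host + 2.0 * f_incl * (eps_incl - eps_host)
          den = eps_incl + 2.0 * eps_host - f_incl * (eps_incl - eps_host)
          return eps_host * num / den

      # Hypothetical refractive indices at a visible wavelength.
      m_bc = 1.85 + 0.71j          # black carbon (absorbing inclusion)
      m_sulfate = 1.43 + 1e-8j     # sulfate solution (nearly non-absorbing host)
      f_bc = 0.25                  # BC volume fraction (sulfate ratio 75%)

      eps_eff = maxwell_garnett(m_sulfate**2, m_bc**2, f_bc)
      m_eff = np.sqrt(eps_eff)     # effective refractive index for Mie calculations
      print("effective refractive index:", m_eff)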

  16. Towards the optimal fusion of high-resolution Digital Elevation Models for detailed urban flood assessment

    NASA Astrophysics Data System (ADS)

    Leitão, J. P.; de Sousa, L. M.

    2018-06-01

    Newly available, more detailed and accurate elevation data sets, such as Digital Elevation Models (DEMs) generated on the basis of imagery from terrestrial LiDAR (Light Detection and Ranging) systems or Unmanned Aerial Vehicles (UAVs), can be used to improve flood-model input data and consequently increase the accuracy of the flood modelling results. This paper presents the first application of the MBlend merging method and assesses the impact of combining different DEMs on flood modelling results. It was demonstrated that different raster merging methods can have different and substantial impacts on these results. In addition to the influence associated with the method used to merge the original DEMs, the magnitude of the impact also depends on (i) the systematic horizontal and vertical differences of the DEMs, and (ii) the orientation between the DEM boundary and the terrain slope. The greatest water depth and flow velocity differences between the flood modelling results obtained using the reference DEM and the merged DEMs ranged from -9.845 to 0.002 m, and from 0.003 to 0.024 m s-1 respectively; these differences can have a significant impact on flood hazard estimates. In most of the cases investigated in this study, the differences from the reference DEM results were smaller for the MBlend method than for the results of the two conventional methods. This study highlighted the importance of DEM merging when conducting flood modelling and provided hints on the best DEM merging methods to use.

  17. [Analyzing and modeling methods of near infrared spectroscopy for in-situ prediction of oil yield from oil shale].

    PubMed

    Liu, Jie; Zhang, Fu-Dong; Teng, Fei; Li, Jun; Wang, Zhi-Hong

    2014-10-01

    To detect the oil yield of oil shale in situ using portable near-infrared spectroscopy, modeling and analysis methods for in-situ detection were studied with 66 rock core samples from the No. 2 well of the Fuyu oil shale base in Jilin. Spectra in three data formats (reflectance, absorbance and K-M function) were acquired with the developed portable spectrometer. Four modeling-data optimization methods were used: principal component-Mahalanobis distance (PCA-MD) for eliminating abnormal samples, uninformative variables elimination (UVE) for wavelength selection, and their combinations PCA-MD + UVE and UVE + PCA-MD. Combined with two modeling methods, partial least squares (PLS) and back-propagation artificial neural network (BPANN), and the same data pre-processing, modeling and analysis experiments were performed to determine the optimum analysis model and method. The results show that the data format, the modeling-data optimization method and the modeling method all affect the analysis precision of the model. Whether or not an optimization method is used, reflectance or the K-M function is the proper spectrum format of the modeling database for both modeling methods. With the two modeling methods and the four data optimization methods, the precisions of models built from the same database differ. For the PLS modeling method, the PCA-MD and UVE + PCA-MD optimization methods can improve the modeling precision when the K-M function spectrum format is used. For the BPANN modeling method, the UVE, UVE + PCA-MD and PCA-MD + UVE optimization methods can improve the modeling precision for any of the three spectrum formats. Apart from the combination of reflectance spectra and the PCA-MD optimization method, modeling precision with the BPANN method is better than with the PLS method. Modeling with reflectance spectra, the UVE optimization method and the BPANN modeling method gives the highest analysis precision, with a correlation coefficient (Rp) of 0.92 and a standard error of prediction (SEP) of 0.69%.
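
    As a schematic stand-in for the PLS modeling step, the sketch below fits a partial least squares regression to synthetic spectra with scikit-learn; the data, number of components, and train/test split are all invented and do not reproduce the reported workflow or its preprocessing.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      # Synthetic stand-in for the modeling database: 66 "core samples" with
      # 200-channel reflectance spectra and an oil yield driven by two bands.
      spectra = rng.normal(size=(66, 200))
      oil_yield = 5.0 + 2.0 * spectra[:, 40] - 1.5 * spectra[:, 120] + rng.normal(0, 0.3, 66)

      X_train, X_test, y_train, y_test = train_test_split(
          spectra, oil_yield, test_size=0.25, random_state=0)

      pls = PLSRegression(n_components=5)
      pls.fit(X_train, y_train)

      pred = pls.predict(X_test).ravel()
      r_p = np.corrcoef(pred, y_test)[0, 1]
      sep = np.sqrt(np.mean((pred - y_test) ** 2))
      print(f"correlation coefficient Rp = {r_p:.2f}, SEP = {sep:.2f}")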

  18. Comparison of three different prehospital wrapping methods for preventing hypothermia - a crossover study in humans

    PubMed Central

    2011-01-01

    Background Accidental hypothermia increases mortality and morbidity in trauma patients. Various methods for insulating and wrapping hypothermic patients are used worldwide. The aim of this study was to compare the thermal insulating effects and comfort of bubble wrap, ambulance blankets / quilts, and Hibler's method, a low-cost method combining a plastic outer layer with an insulating layer. Methods Eight volunteers were dressed in moistened clothing, exposed to a cold and windy environment, then wrapped using one of the three different insulation methods in random order on three different days. They rested quietly on their backs for 60 minutes in a cold climatic chamber. Skin temperature, rectal temperature, and oxygen consumption were measured, and metabolic heat production was calculated. A questionnaire was used for a subjective evaluation of comfort, thermal sensation, and shivering. Results Skin temperature was significantly higher 15 minutes after wrapping using Hibler's method compared with wrapping with ambulance blankets / quilts or bubble wrap. There were no differences in core temperature between the three insulating methods. The subjects reported more shivering, felt colder, were more uncomfortable, and had higher heat production when using bubble wrap compared with the other two methods. Hibler's method was the volunteers' preferred method for preventing hypothermia. Bubble wrap was the least effective insulating method, and seemed to require significantly higher heat production to compensate for increased heat loss. Conclusions This study demonstrated that a combination of a vapour-tight layer and an additional dry insulating layer (Hibler's method) is the most efficient wrapping method to prevent heat loss, as shown by increased skin temperatures, lower metabolic rate and better thermal comfort. This should then be the method of choice when wrapping a wet patient at risk of developing hypothermia in prehospital environments. PMID:21699720

  19. The Split Coefficient Matrix method for hyperbolic systems of gasdynamic equations

    NASA Technical Reports Server (NTRS)

    Chakravarthy, S. R.; Anderson, D. A.; Salas, M. D.

    1980-01-01

    The Split Coefficient Matrix (SCM) finite difference method for solving hyperbolic systems of equations is presented. This new method is based on the mathematical theory of characteristics. The development of the method from characteristic theory is presented. Boundary point calculation procedures consistent with the SCM method used at interior points are explained. The split coefficient matrices that define the method for steady supersonic and unsteady inviscid flows are given for several examples. The SCM method is used to compute several flow fields to demonstrate its accuracy and versatility. The similarities and differences between the SCM method and the lambda-scheme are discussed.
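
    The split-coefficient idea separates the coefficient matrix according to the signs of its characteristic speeds and differences each part in its own upwind direction. The sketch below shows only the scalar analogue for linear advection, where the "matrix" reduces to a single signed wave speed; it is an illustration of the principle, not the SCM formulation for the gasdynamic equations.

      import numpy as np

      # Scalar analogue of coefficient splitting: u_t + a*u_x = 0 with a = a+ + a-,
      # a+ = max(a, 0) differenced backward, a- = min(a, 0) differenced forward.
      a = 1.0
      nx = 200
      dx = 1.0 / nx
      dt = 0.4 * dx / abs(a)                      # CFL condition

      x = np.linspace(0.0, 1.0, nx, endpoint=False)
      u = np.exp(-200.0 * (x - 0.3) ** 2)         # initial pulse

      a_plus, a_minus = max(a, 0.0), min(a, 0.0)
      for _ in range(200):
          backward = (u - np.roll(u, 1)) / dx     # one-sided difference from the left
          forward = (np.roll(u, -1) - u) / dx     # one-sided difference from the right
          u = u - dt * (a_plus * backward + a_minus * forward)

      print("pulse peak now near x =", x[np.argmax(u)])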

  20. A Multifunctional Interface Method for Coupling Finite Element and Finite Difference Methods: Two-Dimensional Scalar-Field Problems

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.

    2002-01-01

    A multifunctional interface method with capabilities for variable-fidelity modeling and multiple method analysis is presented. The methodology provides an effective capability by which domains with diverse idealizations can be modeled independently to exploit the advantages of one approach over another. The multifunctional method is used to couple independently discretized subdomains, and it is used to couple the finite element and the finite difference methods. The method is based on a weighted residual variational method and is presented for two-dimensional scalar-field problems. A verification test problem and a benchmark application are presented, and the computational implications are discussed.

  1. Divergence preserving discrete surface integral methods for Maxwell's curl equations using non-orthogonal unstructured grids

    NASA Technical Reports Server (NTRS)

    Madsen, Niel K.

    1992-01-01

    Several new discrete surface integral (DSI) methods for solving Maxwell's equations in the time-domain are presented. These methods, which allow the use of general nonorthogonal mixed-polyhedral unstructured grids, are direct generalizations of the canonical staggered-grid finite difference method. These methods are conservative in that they locally preserve divergence or charge. Employing mixed polyhedral cells (hexahedral, tetrahedral, etc.), these methods allow more accurate modeling of non-rectangular structures and objects because the traditional stair-stepped boundary approximations associated with the orthogonal grid based finite difference methods can be avoided. Numerical results demonstrating the accuracy of these new methods are presented.

  2. Evaluation of algorithm methods for fluorescence spectra of cancerous and normal human tissues

    NASA Astrophysics Data System (ADS)

    Pu, Yang; Wang, Wubao; Alfano, Robert R.

    2016-03-01

    The paper focuses on algorithms that unravel fluorescence spectra by unmixing methods to distinguish cancerous from normal human tissues in measured fluorescence spectra. The biochemical or morphologic changes that cause fluorescence spectral variations appear earlier than changes detectable by the histological approach; fluorescence spectroscopy therefore holds great promise as an in vivo clinical tool for diagnosing early-stage carcinomas and other diseases. The method can further identify tissue biomarkers by decomposing the spectral contributions of different fluorescent molecules of interest. In this work, we investigate the performance of blind source unmixing methods (backward model) and spectral fitting approaches (forward model) in decomposing the contributions of key fluorescent molecules from the tissue mixture background when a selected excitation wavelength is applied. Pairs of adenocarcinoma and normal tissues confirmed by a pathologist were excited at a wavelength of 340 nm. The emission spectra of resected fresh tissue were used to evaluate the relative changes of collagen, reduced nicotinamide adenine dinucleotide (NADH), and flavin by various spectral unmixing methods. Two categories of algorithms, forward methods and blind source separation [such as principal component analysis (PCA), independent component analysis (ICA), and nonnegative matrix factorization (NMF)], are introduced and evaluated. The purpose of the spectral analysis is to discard the redundant information that conceals the difference between these two types of tissues while keeping their diagnostic significance. The predictions of the different methods were compared with the gold standard of histopathology. The results indicate that key fluorophores within tissue, e.g. tryptophan, collagen, NADH, and flavin, show differences in relative content between different types of human cancer and normal tissues. Sensitivity, specificity, and the receiver operating characteristic (ROC) are employed as the criteria to evaluate the efficacy of these methods in cancer detection. The underlying physical and biological basis for these optical approaches is discussed with examples. This ex vivo preliminary trial demonstrates that these criteria from the different methods can distinguish carcinoma from normal tissues with good sensitivity and specificity, and among them ICA appears to be the superior method in prediction accuracy.
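
    Of the blind source separation methods named above, nonnegative matrix factorization is the most direct to sketch, since emission spectra and their mixing weights are both nonnegative. The snippet below factors synthetic mixed spectra into component spectra and per-sample weights using scikit-learn; the basis spectra standing in for collagen, NADH, and flavin are made up, not measured tissue data.

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(1)
      wavelengths = np.linspace(380, 600, 220)

      def band(center, width):
          return np.exp(-((wavelengths - center) / width) ** 2)

      # Hypothetical basis spectra standing in for collagen, NADH, and flavin.
      components = np.vstack([band(400, 25), band(460, 35), band(525, 30)])

      # Mixed emission spectra for 40 "tissue samples" with nonnegative weights.
      weights = rng.uniform(0.0, 1.0, size=(40, 3))
      mixtures = weights @ components + rng.uniform(0, 0.01, size=(40, 220))

      # Factor the mixtures back into 3 nonnegative components and their weights.
      model = NMF(n_components=3, init="nndsvda", max_iter=2000, random_state=0)
      estimated_weights = model.fit_transform(mixtures)
      estimated_spectra = model.components_

      print("reconstruction error:", model.reconstruction_err_)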

  3. A probabilistic method for testing and estimating selection differences between populations.

    PubMed

    He, Yungang; Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Xu, Shuhua; Jin, Li

    2015-12-01

    Human populations around the world encounter various environmental challenges and, consequently, develop genetic adaptations to different selection forces. Identifying the differences in natural selection between populations is critical for understanding the roles of specific genetic variants in evolutionary adaptation. Although numerous methods have been developed to detect genetic loci under recent directional selection, a probabilistic solution for testing and quantifying selection differences between populations is lacking. Here we report the development of a probabilistic method for testing and estimating selection differences between populations. By use of a probabilistic model of genetic drift and selection, we showed that logarithm odds ratios of allele frequencies provide estimates of the differences in selection coefficients between populations. The estimates approximate a normal distribution, and variance can be estimated using genome-wide variants. This allows us to quantify differences in selection coefficients and to determine the confidence intervals of the estimate. Our work also revealed the link between genetic association testing and hypothesis testing of selection differences. It therefore supplies a solution for hypothesis testing of selection differences. This method was applied to a genome-wide data analysis of Han and Tibetan populations. The results confirmed that both the EPAS1 and EGLN1 genes are under statistically different selection in Han and Tibetan populations. We further estimated differences in the selection coefficients for genetic variants involved in melanin formation and determined their confidence intervals between continental population groups. Application of the method to empirical data demonstrated the outstanding capability of this novel approach for testing and quantifying differences in natural selection. © 2015 He et al.; Published by Cold Spring Harbor Laboratory Press.
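
    The central quantity in the approach above is the log odds ratio of an allele's frequencies in the two populations, with its null spread calibrated from genome-wide variants to give a z-test for a selection difference at a candidate locus. The sketch below implements that logic in a simplified form with made-up frequencies; it is an illustration, not the authors' estimator.

      import numpy as np
      from scipy.stats import norm

      def log_odds_ratio(p1, p2):
          """Log odds ratio of allele frequencies in two populations."""
          return np.log(p1 / (1 - p1)) - np.log(p2 / (1 - p2))

      rng = np.random.default_rng(2)

      # Hypothetical genome-wide frequencies used to calibrate the null spread
      # of the statistic (drift only), plus one candidate locus to test.
      p_pop1 = rng.uniform(0.05, 0.95, 10000)
      p_pop2 = np.clip(p_pop1 + rng.normal(0, 0.03, 10000), 0.01, 0.99)
      null_sd = np.std(log_odds_ratio(p_pop1, p_pop2))

      candidate = log_odds_ratio(0.85, 0.35)      # e.g. a strongly differentiated SNP
      z = candidate / null_sd
      p_value = 2 * norm.sf(abs(z))
      print(f"z = {z:.1f}, two-sided p = {p_value:.2e}")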

  4. Alternatives for Measuring the Unexplained Wage Gap.

    ERIC Educational Resources Information Center

    Toutkoushian, Robert K.; Hoffman, Emily P.

    2002-01-01

    Reviews several different methods that analysts can use to measure gender- and race-based pay differences for academic employees, and how they are interrelated. Discusses the advantages and disadvantages of each method, and shows how they can give rise to different estimates of pay disparity. (EV)

  5. Pest measurement and management

    USDA-ARS?s Scientific Manuscript database

    Pest scouting, whether it is done only with ground scouting methods or using remote sensing with some ground-truthing, is an important tool to aid site-specific crop management. Different pests may be monitored at different times and using different methods. Remote sensing has the potential to provi...

  6. Effects of alpha-amylase reaction mechanisms on analysis of resistant-starch contents.

    PubMed

    Moore, Samuel A; Ai, Yongfeng; Chang, Fengdan; Jane, Jay-lin

    2015-01-22

    This study aimed to understand differences in the resistant starch (RS) contents of native and modified starches obtained using two standard methods of RS content analysis: AOAC Method 991.43 and 2002.02. The largest differences were observed in native potato starch, cross-linked wheat distarch phosphate, and high-amylose corn starch stearic-acid complex (RS5) between using AOAC Method 991.43 with Bacillus licheniformis α-amylase (BL) and AOAC Method 2002.02 with porcine pancreatic α-amylase (PPA). To determine possible reasons for these differences, we hydrolyzed raw-starch granules with BL and PPA with equal activity at pH 6.9 and 37°C for up to 84 h and observed the starch granules displayed distinct morphological differences after the hydrolysis. Starches hydrolyzed by BL showed erosion on the surface of the granules; those hydrolyzed by PPA showed pitting on granule surfaces. These results suggested that enzyme reaction mechanisms, including the sizes of the binding sites and the reaction patterns of the two enzymes, contributed to the differences in the RS contents obtained using different methods of RS analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Qualitative and quantitative evaluation of the genomic DNA extracted from GMO and non-GMO foodstuffs with four different extraction methods.

    PubMed

    Peano, Clelia; Samson, Maria Cristina; Palmieri, Luisa; Gulli, Mariolina; Marmiroli, Nelson

    2004-11-17

    The presence of DNA in foodstuffs derived from or containing genetically modified organisms (GMO) is the basic requirement for labeling of GMO foods in Council Directive 2001/18/CE (Off. J. Eur. Communities 2001, L1 06/2). In this work, four different methods for DNA extraction were evaluated and compared. To rank the different methods, the quality and quantity of DNA extracted from standards, containing known percentages of GMO material and from different food products, were considered. The food products analyzed derived from both soybean and maize and were chosen on the basis of the mechanical, technological, and chemical treatment they had been subjected to during processing. Degree of DNA degradation at various stages of food production was evaluated through the amplification of different DNA fragments belonging to the endogenous genes of both maize and soybean. Genomic DNA was extracted from Roundup Ready soybean and maize MON810 standard flours, according to four different methods, and quantified by real-time Polymerase Chain Reaction (PCR), with the aim of determining the influence of the extraction methods on the DNA quantification through real-time PCR.

  8. Normative Data for an Instrumental Assessment of the Upper-Limb Functionality.

    PubMed

    Caimmi, Marco; Guanziroli, Eleonora; Malosio, Matteo; Pedrocchi, Nicola; Vicentini, Federico; Molinari Tosatti, Lorenzo; Molteni, Franco

    2015-01-01

    Upper-limb movement analysis is important for objectively monitoring rehabilitation interventions, contributing to improved overall treatment outcomes. Simple, fast, easy-to-use, and applicable methods are required to allow routine functional evaluation of patients with different pathologies and clinical conditions. This paper describes the Reaching and Hand-to-Mouth Evaluation Method, a fast procedure to assess upper-limb motor control and functional ability, providing a set of normative data from 42 healthy subjects of different ages, evaluated for both dominant and nondominant limb motor performance. Sixteen of them were reevaluated after two weeks to perform test-retest reliability analysis. Data were clustered into three subgroups of different ages to test the method's sensitivity to motor control differences. Experimental data show notable test-retest reliability in all tasks. Data from older and younger subjects show significant differences in the measures related to coordination ability, demonstrating the high sensitivity of the method to motor control differences. The presented method, provided with control data from healthy subjects, appears to be a suitable and reliable tool for upper-limb functional assessment in the clinical environment.

  9. Normative Data for an Instrumental Assessment of the Upper-Limb Functionality

    PubMed Central

    Caimmi, Marco; Guanziroli, Eleonora; Malosio, Matteo; Pedrocchi, Nicola; Vicentini, Federico; Molinari Tosatti, Lorenzo; Molteni, Franco

    2015-01-01

    Upper-limb movement analysis is important for objectively monitoring rehabilitation interventions, contributing to improved overall treatment outcomes. Simple, fast, easy-to-use, and applicable methods are required to allow routine functional evaluation of patients with different pathologies and clinical conditions. This paper describes the Reaching and Hand-to-Mouth Evaluation Method, a fast procedure to assess upper-limb motor control and functional ability, providing a set of normative data from 42 healthy subjects of different ages, evaluated for both dominant and nondominant limb motor performance. Sixteen of them were reevaluated after two weeks to perform test-retest reliability analysis. Data were clustered into three subgroups of different ages to test the method's sensitivity to motor control differences. Experimental data show notable test-retest reliability in all tasks. Data from older and younger subjects show significant differences in the measures related to coordination ability, demonstrating the high sensitivity of the method to motor control differences. The presented method, provided with control data from healthy subjects, appears to be a suitable and reliable tool for upper-limb functional assessment in the clinical environment. PMID:26539500

  10. Comparison of risk assessment procedures used in OCRA and ULRA methods

    PubMed Central

    Roman-Liu, Danuta; Groborz, Anna; Tokarski, Tomasz

    2013-01-01

    The aim of this study was to analyse the convergence of two methods by comparing exposure and the assessed risk of developing musculoskeletal disorders at 18 repetitive task workstations. The already established occupational repetitive actions (OCRA) and the recently developed upper limb risk assessment (ULRA) produce correlated results (R = 0.84, p = 0.0001). A discussion of the factors that influence the values of the OCRA index and ULRA's repetitive task indicator shows that both similarities and differences in the results produced by the two methods can arise from the concepts that underlie them. The assessment procedure and mathematical calculations that the basic parameters are subjected to are crucial to the results of risk assessment. The way the basic parameters are defined influences the assessment of exposure and risk assessment to a lesser degree. The analysis also proved that not always do great differences in load indicator values result in differences in risk zones. Practitioner Summary: We focused on comparing methods that, even though based on different concepts, serve the same purpose. The results proved that different methods with different assumptions can produce similar assessment of upper limb load; sharp criteria in risk assessment are not the best solution. PMID:24041375

  11. Chemometrics Methods for Specificity, Authenticity and Traceability Analysis of Olive Oils: Principles, Classifications and Applications.

    PubMed

    Messai, Habib; Farman, Muhammad; Sarraj-Laabidi, Abir; Hammami-Semmar, Asma; Semmar, Nabil

    2016-11-17

    Olive oils (OOs) show high chemical variability due to several factors of genetic, environmental and anthropic types. Genetic and environmental factors are responsible for natural compositions and polymorphic diversification resulting in different varietal patterns and phenotypes. Anthropic factors, however, are at the origin of different blends' preparation leading to normative, labelled or adulterated commercial products. Control of complex OO samples requires their (i) characterization by specific markers; (ii) authentication by fingerprint patterns; and (iii) monitoring by traceability analysis. These quality control and management aims require the use of several multivariate statistical tools: specificity highlighting requires ordination methods; authentication checking calls for classification and pattern recognition methods; traceability analysis implies the use of network-based approaches able to separate or extract mixed information and memorized signals from complex matrices. This chapter presents a review of different chemometrics methods applied for the control of OO variability from metabolic and physical-chemical measured characteristics. The different chemometrics methods are illustrated by different study cases on monovarietal and blended OO originated from different countries. Chemometrics tools offer multiple ways for quantitative evaluations and qualitative control of complex chemical variability of OO in relation to several intrinsic and extrinsic factors.

  12. Multi-task linear programming discriminant analysis for the identification of progressive MCI individuals.

    PubMed

    Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang

    2014-01-01

    Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the existence of a lot of missing data in many subjects. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for the incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all different classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve the similar estimated mean difference between the two classes (under classification) for those shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, instead of constraining different classification tasks to choose a common feature subset for those shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects. We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and PET images). Experimental results show very promising performance of our proposed MLPD method.

  13. Multi-Task Linear Programming Discriminant Analysis for the Identification of Progressive MCI Individuals

    PubMed Central

    Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang

    2014-01-01

    Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the existence of a lot of missing data in many subjects. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for the incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all different classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve the similar estimated mean difference between the two classes (under classification) for those shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, instead of constraining different classification tasks to choose a common feature subset for those shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects. We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and PET images). Experimental results show very promising performance of our proposed MLPD method. PMID:24820966

  14. Measurement of biochemical oxygen demand of the leachates.

    PubMed

    Fulazzaky, Mohamad Ali

    2013-06-01

    Biochemical oxygen demand (BOD) of leachates originating from different types of landfill sites was studied based on data measured using two manometric methods. Measurements of BOD using the dilution method were carried out to assess the typical physicochemical and biological characteristics of the leachates together with some other parameters. Linear regression analysis was used to predict rate constants for the biochemical reactions and ultimate BOD values of the different leachates. The rate of a biochemical reaction implicated in microbial biodegradation of pollutants depends on the leachate characteristics, the mass of contaminant in the leachate, and the nature of the leachate. The character of leachate samples subjected to BOD analysis using the different methods may differ significantly during the experimental period, resulting in different BOD values. This work aims to verify the effect of the different dilutions used in the manometric tests on the BOD concentrations of the leachate samples, contributing to the assessment of reaction rate and microbial consumption of oxygen.
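
    BOD exerted over time is commonly modelled as first-order consumption, BOD(t) = L0 * (1 - exp(-k*t)), so the rate constant k and the ultimate BOD L0 can be estimated from a time series of manometric readings. The sketch below uses a nonlinear least-squares fit on made-up readings rather than the linearised regression and measured data of the study.

      import numpy as np
      from scipy.optimize import curve_fit

      def bod_curve(t, L0, k):
          """First-order BOD model: oxygen consumed by day t."""
          return L0 * (1.0 - np.exp(-k * t))

      # Hypothetical manometric readings (mg/L) over a 10-day incubation.
      days = np.arange(1, 11, dtype=float)
      bod = np.array([118, 210, 280, 332, 371, 400, 421, 437, 449, 458], dtype=float)

      (L0, k), _ = curve_fit(bod_curve, days, bod, p0=[500.0, 0.2])
      print(f"ultimate BOD = {L0:.0f} mg/L, rate constant k = {k:.2f} 1/day")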

  15. Measuring Glial Metabolism in Repetitive Brain Trauma and Alzheimer’s Disease

    DTIC Science & Technology

    2016-09-01

    Abstract (fragments): A range of different denoising methods for dynamic MRS was compared; six denoising methods were considered: singular value decomposition (SVD), wavelet, sliding window, sliding window with Gaussian weighting, spline, and spectral improvements. ... project by improving the software required for the data analysis by developing six different denoising methods. He also assisted with the testing ...

  16. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine variance difference between maximum likelihood and expected A posteriori estimation methods viewed from number of test items of aptitude test. The variance presents an accuracy generated by both maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  17. Atmospheric Blocking and Intercomparison of Objective Detection Methods: Flow Field Characteristics

    NASA Astrophysics Data System (ADS)

    Pinheiro, M. C.; Ullrich, P. A.; Grotjahn, R.

    2017-12-01

    A number of objective methods for identifying and quantifying atmospheric blocking have been developed over the last couple of decades, but there is variable consensus on the resultant blocking climatology. This project examines blocking climatologies as produced by three different methods: two anomaly-based methods, and the geopotential height gradient method of Tibaldi and Molteni (1990). The results highlight the differences in blocking that arise from the choice of detection method, with emphasis on the physical characteristics of the flow field and the subsequent effects on the blocking patterns that emerge.

  18. Non-standard finite difference and Chebyshev collocation methods for solving fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Agarwal, P.; El-Sayed, A. A.

    2018-06-01

    In this paper, a new numerical technique for solving the fractional-order diffusion equation is introduced. The technique relies on the non-standard finite difference (NSFD) method and the Chebyshev collocation method, with the fractional derivatives described in the Caputo sense. The Chebyshev collocation method combined with the NSFD method converts the problem into a system of algebraic equations, which are then solved numerically using Newton's iteration method. The applicability, reliability, and efficiency of the presented technique are demonstrated through several numerical examples.

  19. A comparison of the finite difference and finite element methods for heat transfer calculations

    NASA Technical Reports Server (NTRS)

    Emery, A. F.; Mortazavi, H. R.

    1982-01-01

    The finite difference method and finite element method for heat transfer calculations are compared by describing their bases and their application to some common heat transfer problems. In general it is noted that neither method is clearly superior, and in many instances, the choice is quite arbitrary and depends more upon the codes available and upon the personal preference of the analyst than upon any well defined advantages of one method. Classes of problems for which one method or the other is better suited are defined.

  20. Combination of the discontinuous Galerkin method with finite differences for simulation of seismic wave propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lisitsa, Vadim, E-mail: lisitsavv@ipgg.sbras.ru; Novosibirsk State University, Novosibirsk; Tcheverda, Vladimir

    We present an algorithm for the numerical simulation of seismic wave propagation in models with a complex near surface part and free surface topography. The approach is based on the combination of finite differences with the discontinuous Galerkin method. The discontinuous Galerkin method can be used on polyhedral meshes; thus, it is easy to handle the complex surfaces in the models. However, this approach is computationally intense in comparison with finite differences. Finite differences are computationally efficient, but in general, they require rectangular grids, leading to the stair-step approximation of the interfaces, which causes strong diffraction of the wavefield. In this research we present a hybrid algorithm where the discontinuous Galerkin method is used in a relatively small upper part of the model and finite differences are applied to the main part of the model.

  1. [Heart rate variability study based on a novel RdR RR Intervals Scatter Plot].

    PubMed

    Lu, Hongwei; Lu, Xiuyun; Wang, Chunfang; Hua, Youyuan; Tian, Jiajia; Liu, Shihai

    2014-08-01

    On the basis of the Poincare scatter plot and the first-order difference scatter plot, a novel heart rate variability (HRV) analysis method based on scatter plots of RR intervals and the first-order difference of RR intervals (namely, RdR) was proposed. The abscissa (x-axis) of the RdR scatter plot is the RR interval and the ordinate (y-axis) is the difference between successive RR intervals. The RdR scatter plot thus combines the information of the RR intervals and the differences between successive RR intervals, capturing more HRV information. By RdR scatter plot analysis of records from the MIT-BIH arrhythmia database, we found that the scatter plots of uncoupled premature ventricular contractions (PVC), coupled ventricular bigeminy and ventricular trigeminy PVC had specific graphic characteristics. The RdR scatter plot method has higher detection performance than the Poincare scatter plot method, and is simpler and more intuitive than the first-order difference method.
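
    The RdR plot described above is straightforward to reproduce: each beat contributes a point whose x-coordinate is an RR interval and whose y-coordinate is the difference to the next RR interval. The sketch below builds the plot from a made-up RR series rather than an MIT-BIH record.

      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(3)

      # Hypothetical RR-interval series (seconds): normal sinus beats with jitter,
      # plus a few premature beats (short RR followed by a compensatory pause).
      rr = 0.8 + rng.normal(0, 0.02, 300)
      for i in (50, 120, 200):
          rr[i], rr[i + 1] = 0.55, 1.05

      x = rr[:-1]             # RR(n)
      y = np.diff(rr)         # RR(n+1) - RR(n), the first-order difference

      plt.scatter(x, y, s=8)
      plt.xlabel("RR interval (s)")
      plt.ylabel("successive RR difference (s)")
      plt.title("RdR scatter plot (synthetic data)")
      plt.show()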

  2. Periodic solutions of second-order nonlinear difference equations containing a small parameter. IV - Multi-discrete time method

    NASA Technical Reports Server (NTRS)

    Mickens, Ronald E.

    1987-01-01

    It is shown that a discrete multi-time method can be constructed to obtain approximations to the periodic solutions of a special class of second-order nonlinear difference equations containing a small parameter. Three examples illustrating the method are presented.

  3. 40 CFR Appendix A to Part 63 - Test Methods

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... components by a different analyst). 3.3Surrogate Reference Materials. The analyst may use surrogate compounds... the variance of the proposed method is significantly different from that of the validated method by... variables can be determined in eight experiments rather than 128 (W.J. Youden, Statistical Manual of the...

  4. Wood versus metal in airplane construction

    NASA Technical Reports Server (NTRS)

    Seehase, H

    1923-01-01

    The aim of this article is to present, in broad outline, a scientific method for solving the problem, "Wood or Metal." It will be shown that structural methods have by no means reached their final perfection. The strength of the different materials is discussed as well as different construction methods.

  5. Single Laboratory Comparison of Host-Specific PCR Assays for the Detection of Bovine Fecal Pollution

    EPA Science Inventory

    There are numerous PCR-based methods available to detect bovine fecal pollution in ambient waters. Each method targets a different gene and microorganism leading to differences in method performance, making it difficult to determine which approach is most suitable for field appl...

  6. Influence of Three Different Methods of Teaching Physics on the Gain in Students' Development of Reasoning

    ERIC Educational Resources Information Center

    Marusic, Mirko; Slisko, Josip

    2012-01-01

    The Lawson Classroom Test of Scientific Reasoning (LCTSR) was used to gauge the relative effectiveness of three different methods of pedagogy, "Reading, Presenting, and Questioning" (RPQ), "Experimenting and Discussion" (ED), and "Traditional Methods" (TM), on increasing students' level of scientific thinking. The…

  7. Infusing Mathematics Content into a Methods Course: Impacting Content Knowledge for Teaching

    ERIC Educational Resources Information Center

    Burton, Megan; Daane, C. J.; Giesen, Judy

    2008-01-01

    This study compared content knowledge for teaching mathematics differences between elementary pre-service teachers in a traditional versus an experimental mathematics methods course. The experimental course replaced 20 minutes of traditional methods, each class, with an intervention of elementary mathematics content. The difference between groups…

  8. 78 FR 47307 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-05

    ... obtain data on the pedagogical methods of National Guard Youth ChalleNGe program teachers. The data will be used by DoD to evaluate how differences in classroom teaching methods impact program outcomes. The... September 4, 2013. Title; Associated Form; and OMB Number: How Differences in Pedagogical Methods Impact...

  9. Testing Different Model Building Procedures Using Multiple Regression.

    ERIC Educational Resources Information Center

    Thayer, Jerome D.

    The stepwise regression method of selecting predictors for computer assisted multiple regression analysis was compared with forward, backward, and best subsets regression, using 16 data sets. The results indicated the stepwise method was preferred because of its practical nature, when the models chosen by different selection methods were similar…

  10. On the Inclusion of Difference Equation Problems and Z Transform Methods in Sophomore Differential Equation Classes

    ERIC Educational Resources Information Center

    Savoye, Philippe

    2009-01-01

    In recent years, I started covering difference equations and z-transform methods in my introductory differential equations course. This allowed my students to extend the "classical" methods for ordinary differential equations (ODEs) to discrete-time problems arising in many applications.
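
    As a small illustration of the kind of discrete-time problem involved, the sketch below iterates a first-order linear difference equation and checks it against the closed-form solution that a z-transform derivation yields; the coefficients are arbitrary.

    ```python
    # First-order difference equation y[n+1] = a*y[n] + b, y[0] = y0.
    # The z-transform (or direct iteration) gives the closed form
    # y[n] = a**n * y0 + b * (1 - a**n) / (1 - a)   for a != 1.
    a, b, y0, N = 0.5, 1.0, 0.0, 10

    y = y0
    recursion = []
    for n in range(N + 1):
        recursion.append(y)
        y = a * y + b

    closed_form = [a**n * y0 + b * (1 - a**n) / (1 - a) for n in range(N + 1)]
    print(all(abs(r - c) < 1e-12 for r, c in zip(recursion, closed_form)))  # True
    ```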

  11. Sensitivity of different Trypanosoma vivax specific primers for the diagnosis of livestock trypanosomosis using different DNA extraction methods.

    PubMed

    Gonzales, J L; Loza, A; Chacon, E

    2006-03-15

    There are several T. vivax-specific primers developed for PCR diagnosis. Most of these primers were validated under different DNA extraction methods and study designs, leading to heterogeneity of results. The objective of the present study was to validate PCR as a diagnostic test for T. vivax trypanosomosis by determining the test sensitivity of different published specific primers with different sample preparations. Four different DNA extraction methods were used to test the sensitivity of PCR with four different primer sets. DNA was extracted directly from whole blood samples, blood dried on filter papers or blood dried on FTA cards. The results showed that the sensitivity of PCR with each primer set was highly dependent on the sample preparation and DNA extraction method. The highest sensitivities for all the primers tested were obtained using DNA extracted from whole blood samples, while the lowest sensitivities were obtained when DNA was extracted from filter paper preparations. To conclude, the obtained results are discussed and a protocol for diagnosis and surveillance of T. vivax trypanosomosis is recommended.

  12. The Roche Immunoturbidimetric Albumin Method on Cobas c 501 Gives Higher Values Than the Abbott and Roche BCP Methods When Analyzing Patient Plasma Samples.

    PubMed

    Helmersson-Karlqvist, Johanna; Flodin, Mats; Havelka, Aleksandra Mandic; Xu, Xiao Yan; Larsson, Anders

    2016-09-01

    Serum/plasma albumin is a widely used laboratory marker, and it is important that albumin is measured correctly and without bias. We had indications that the immunoturbidimetric method on Cobas c 501 and the bromocresol purple (BCP) method on Architect 16000 differed, so we decided to study these methods more closely. A total of 1,951 patient requests with albumin measured with both the Architect BCP and Cobas immunoturbidimetric methods were extracted from the laboratory system. A comparison with fresh plasma samples was also performed that included immunoturbidimetric and BCP methods on Cobas c 501 and analysis of the international protein calibrator ERM-DA470k/IFCC. The median difference between the Abbott BCP and Roche immunoturbidimetric methods was 3.3 g/l, and the Roche method overestimated ERM-DA470k/IFCC by 2.2 g/l. The Roche immunoturbidimetric method gave higher values than the Roche BCP method: y = 1.111x - 0.739, R² = 0.971. The Roche immunoturbidimetric albumin method gives clearly higher values than the Abbott and Roche BCP methods when analyzing fresh patient samples. The differences between the two methods were similar at normal and low albumin levels. © 2016 Wiley Periodicals, Inc.

  13. Exploration of Analysis Methods for Diagnostic Imaging Tests: Problems with ROC AUC and Confidence Scores in CT Colonography

    PubMed Central

    Mallett, Susan; Halligan, Steve; Collins, Gary S.; Altman, Doug G.

    2014-01-01

    Background Different methods of evaluating diagnostic performance when comparing diagnostic tests may lead to different results. We compared two such approaches, sensitivity and specificity with area under the Receiver Operating Characteristic curve (ROC AUC), for the evaluation of CT colonography for the detection of polyps, either with or without computer-assisted detection. Methods In a multireader multicase study of 10 readers and 107 cases we compared sensitivity and specificity, using radiological reporting of the presence or absence of polyps, to ROC AUC calculated from confidence scores concerning the presence of polyps. Both methods were assessed against a reference standard. Here we focus on five readers, selected to illustrate issues in design and analysis. We compared diagnostic measures within readers, showing that differences in results are due to statistical methods. Results Reader performance varied widely depending on whether sensitivity and specificity or ROC AUC was used. There were problems using confidence scores: in assigning scores to all cases; in the use of zero scores when no polyps were identified; in the bimodal, non-normal distribution of scores; in fitting ROC curves due to extrapolation beyond the study data; and in the undue influence of a few false-positive results. Variation due to use of different ROC methods exceeded differences between test results for ROC AUC. Conclusions The confidence scores recorded in our study violated many assumptions of ROC AUC methods, rendering these methods inappropriate. The problems we identified will apply to other detection studies using confidence scores. We found sensitivity and specificity were a more reliable and clinically appropriate method to compare diagnostic tests. PMID:25353643
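
    The hedged sketch below contrasts the two summaries on simulated reader data: sensitivity and specificity computed from binary polyp calls versus ROC AUC computed from confidence scores. The simulated scores, threshold, and case mix are assumptions and do not reproduce the study data.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    truth = rng.integers(0, 2, 107)                 # 1 = polyp present (reference standard)

    # Simulated reader output: a 0-100 confidence score and a derived binary call.
    scores = np.where(truth == 1,
                      rng.normal(70, 20, truth.size),
                      rng.normal(30, 20, truth.size)).clip(0, 100)
    calls = (scores >= 50).astype(int)

    sensitivity = (calls[truth == 1] == 1).mean()
    specificity = (calls[truth == 0] == 0).mean()
    auc = roc_auc_score(truth, scores)

    print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, ROC AUC={auc:.2f}")
    ```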

  14. Learning methods and strategies of anatomy among medical students in two different Institutions in Riyadh, Saudi Arabia.

    PubMed

    Al-Mohrej, Omar A; Al-Ayedh, Noura K; Masuadi, Emad M; Al-Kenani, Nader S

    2017-04-01

    Anatomy instructors adopt individual teaching methods and strategies to convey anatomical information to medical students for learning. Students also exhibit their own individual learning preferences. Preferences for instructional methods vary between instructors and students across different institutions. In an attempt to bridge the gap between teaching methods and students' learning preferences, this study aimed to identify students' learning methods and different strategies of studying anatomy in two different Saudi medical schools in Riyadh. A cross-sectional study, conducted in Saudi Arabia in April 2015, utilized a three-section questionnaire, which was distributed to a consecutive sample of 883 medical students to explore their methods and strategies in learning and teaching anatomy in two separate institutions in Riyadh, Saudi Arabia. Medical students' learning styles and preferences were found to be predominantly affected by different cultural backgrounds, gender, and level of study. Many students found it easier to understand and remember anatomy components using study aids. In addition, almost half of the students felt confident to ask their teachers questions after class. The study also showed that more than half of the students found it easier to study by concentrating on a particular part of the body rather than on systems. Students' methods of learning were distributed equally between memorizing facts and learning by hands-on dissection. In addition, the study showed that two-thirds of the students felt satisfied with their learning method and believed it was well suited for anatomy. There is no single teaching method that proves beneficial; instructors should be flexible in their teaching in order to optimize students' academic achievements.

  15. A method for addressing differences in concentrations of fipronil and three degradates obtained by two different laboratory methods

    USGS Publications Warehouse

    Crawford, Charles G.; Martin, Jeffrey D.

    2017-07-21

    In October 2012, the U.S. Geological Survey (USGS) began measuring the concentration of the pesticide fipronil and three of its degradates (desulfinylfipronil, fipronil sulfide, and fipronil sulfone) by a new laboratory method using direct aqueous-injection liquid chromatography tandem mass spectrometry (DAI LC–MS/MS). This method replaced the previous method, in use since 2002, that used gas chromatography/mass spectrometry (GC/MS). The performance of the two methods is not comparable for fipronil and the three degradates. Concentrations of these four chemical compounds determined by the DAI LC–MS/MS method are substantially lower than those determined by the GC/MS method. A method was developed to correct for the difference in concentrations obtained by the two laboratory methods, based on a methods-comparison field study done in 2012. For this study, environmental and field matrix spike samples from 48 stream sites across the United States were collected approximately three times each and analyzed by both methods. These data were used to develop a relation between the two laboratory methods for each compound using regression analysis. The relations were used to calibrate data obtained by the older method to the new method in order to remove any biases attributable to differences in the methods. The coefficients of the equations obtained from the regressions were used to calibrate over 16,600 observations of fipronil and the three degradates determined by the GC/MS method, retrieved from the USGS National Water Information System. The calibrated values were then compared to over 7,800 observations of fipronil and the three degradates determined by the DAI LC–MS/MS method, also retrieved from the National Water Information System. The original and calibrated values from the GC/MS method, along with measures of uncertainty in the calibrated values and the original values from the DAI LC–MS/MS method, are provided in an accompanying data release.
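
    A minimal sketch of the calibration idea is given below: fit a regression between paired results from the two methods and use it to adjust historical observations. The simulated concentrations and the simple least-squares fit are assumptions; the USGS report may use a different regression model and uncertainty treatment.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Paired concentrations (ng/L) measured on the same samples by both methods.
    new_method = rng.lognormal(mean=1.0, sigma=0.8, size=150)       # DAI LC-MS/MS (reference)
    old_method = 1.6 * new_method + rng.normal(0, 0.3, 150)         # GC/MS reads higher

    # Fit new ~ f(old) so historical GC/MS data can be calibrated to the new method.
    fit = stats.linregress(old_method, new_method)

    historical_gcms = np.array([2.0, 5.0, 12.0])
    calibrated = fit.intercept + fit.slope * historical_gcms
    print(np.round(calibrated, 2))
    ```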

  16. Comparative measurements of ambient atmospheric concentrations of ice nucleating particles using multiple immersion freezing methods and a continuous flow diffusion chamber

    NASA Astrophysics Data System (ADS)

    DeMott, Paul J.; Hill, Thomas C. J.; Petters, Markus D.; Bertram, Allan K.; Tobo, Yutaka; Mason, Ryan H.; Suski, Kaitlyn J.; McCluskey, Christina S.; Levin, Ezra J. T.; Schill, Gregory P.; Boose, Yvonne; Rauker, Anne Marie; Miller, Anna J.; Zaragoza, Jake; Rocci, Katherine; Rothfuss, Nicholas E.; Taylor, Hans P.; Hader, John D.; Chou, Cedric; Huffman, J. Alex; Pöschl, Ulrich; Prenni, Anthony J.; Kreidenweis, Sonia M.

    2017-09-01

    A number of new measurement methods for ice nucleating particles (INPs) have been introduced in recent years, and it is important to address how these methods compare. Laboratory comparisons of instruments sampling major INP types are common, but few comparisons have occurred for ambient aerosol measurements exploring the utility, consistency and complementarity of different methods to cover the large dynamic range of INP concentrations that exists in the atmosphere. In this study, we assess the comparability of four offline immersion freezing measurement methods (Colorado State University ice spectrometer, IS; North Carolina State University cold stage, CS; National Institute for Polar Research Cryogenic Refrigerator Applied to Freezing Test, CRAFT; University of British Columbia micro-orifice uniform deposit impactor-droplet freezing technique, MOUDI-DFT) and an online method (continuous flow diffusion chamber, CFDC) used in a manner deemed to promote/maximize immersion freezing, for the detection of INPs in ambient aerosols at different locations and in different sampling scenarios. We also investigated the comparability of different aerosol collection methods used with offline immersion freezing instruments. Excellent agreement between all methods could be obtained for several cases of co-sampling with perfect temporal overlap. Even for sampling periods that were not fully equivalent, the deviations between atmospheric INP number concentrations measured with different methods were mostly less than 1 order of magnitude. In some cases, however, the deviations were larger and not explicable without sampling and measurement artifacts. Overall, the immersion freezing methods seem to effectively capture INPs that activate as single particles in the modestly supercooled temperature regime (> -20 °C), although more comparisons are needed in this temperature regime that is difficult to access with online methods. Relative to the CFDC method, three immersion freezing methods that disperse particles into a bulk liquid (IS, CS, CRAFT) exhibit a positive bias in measured INP number concentrations below -20 °C, increasing with decreasing temperature. This bias was present but much less pronounced for a method that condenses separate water droplets onto limited numbers of particles prior to cooling and freezing (MOUDI-DFT). Potential reasons for the observed differences are discussed, and further investigations proposed to elucidate the role of all factors involved.

  17. Finding consistent patterns: A nonparametric approach for identifying differential expression in RNA-Seq data

    PubMed Central

    Li, Jun; Tibshirani, Robert

    2015-01-01

    We discuss the identification of features that are associated with an outcome in RNA-Sequencing (RNA-Seq) and other sequencing-based comparative genomic experiments. RNA-Seq data takes the form of counts, so models based on the normal distribution are generally unsuitable. The problem is especially challenging because different sequencing experiments may generate quite different total numbers of reads, or ‘sequencing depths’. Existing methods for this problem are based on Poisson or negative binomial models: they are useful but can be heavily influenced by ‘outliers’ in the data. We introduce a simple, nonparametric method with resampling to account for the different sequencing depths. The new method is more robust than parametric methods. It can be applied to data with quantitative, survival, two-class or multiple-class outcomes. We compare our proposed method to Poisson and negative binomial-based methods in simulated and real data sets, and find that our method discovers more consistent patterns than competing methods. PMID:22127579
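
    The hedged sketch below illustrates the general idea (not the authors' exact procedure): resample counts down to a common sequencing depth and score each gene with a rank-based test. The toy count matrix, the binomial down-sampling, and the Mann-Whitney test are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(3)

    # Toy count matrix: 200 genes x 8 samples, first 4 samples class A, last 4 class B.
    counts = rng.poisson(lam=50, size=(200, 8))
    counts[:20, 4:] *= 3                       # 20 genes up-regulated in class B
    depths = counts.sum(axis=0)

    # Resample each sample down to the smallest depth to remove depth differences,
    # then score each gene with a rank-based (Mann-Whitney) test.
    target = depths.min()
    resampled = np.vstack([rng.binomial(counts[:, j], target / depths[j])
                           for j in range(counts.shape[1])]).T

    pvals = np.array([mannwhitneyu(g[:4], g[4:], alternative="two-sided").pvalue
                      for g in resampled])
    print((pvals < 0.05).sum(), "genes flagged")
    ```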

  18. Effect of joint spacing and joint dip on the stress distribution around tunnels using different numerical methods

    NASA Astrophysics Data System (ADS)

    Nikadat, Nooraddin; Fatehi Marji, Mohammad; Rahmannejad, Reza; Yarahmadi Bafghi, Alireza

    2016-11-01

    The stability of tunnels may be affected by the geometry (spacing and orientation) of joints in the surrounding rock mass. In this study, the effects of joint spacing and joint dip on the stress distribution around rock tunnels are studied numerically by comparing the results obtained by three numerical methods, i.e., the finite element method (Phase2), the discrete element method (UDEC) and the indirect boundary element method (TFSDDM). These comparisons indicate the validity of the stress analyses around circular rock tunnels. They also reveal that, for a semi-continuous environment, the boundary element method gives more accurate results than the finite element and distinct element methods. In the indirect boundary element method, the displacements due to joints of different spacing and dips are estimated by using displacement discontinuity (DD) formulations, and the total stress distribution around the tunnel is obtained by using fictitious stress (FS) formulations.

  19. Comparison of manual and automated nucleic acid extraction methods from clinical specimens for microbial diagnosis purposes.

    PubMed

    Wozniak, Aniela; Geoffroy, Enrique; Miranda, Carolina; Castillo, Claudia; Sanhueza, Francia; García, Patricia

    2016-11-01

    The choice of nucleic acid (NA) extraction method for molecular diagnosis in microbiology is of major importance because of the low microbial load and the differing natures of microorganisms and clinical specimens. The NA yield of different extraction methods has mostly been studied using spiked samples. However, information from real human clinical specimens is scarce. The purpose of this study was to compare the performance of a manual low-cost extraction method (Qiagen kit or salting-out extraction method) with the automated high-cost MagNAPure Compact method. According to cycle threshold values for different pathogens, MagNAPure is as efficient as Qiagen for NA extraction from noncomplex clinical specimens (nasopharyngeal swab, skin swab, plasma, respiratory specimens). In contrast, according to cycle threshold values for RNAseP, the MagNAPure method may not be appropriate for NA extraction from blood. We believe that the MagNAPure's versatility, reduced risk of cross-contamination, and reduced hands-on time compensate for its high cost. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Comparison of the dye method with the thermocouple psychrometer for measuring leaf water potentials.

    PubMed

    Knipling, E B; Kramer, P J

    1967-10-01

    The dye method for measuring water potential was examined and compared with the thermocouple psychrometer method in order to evaluate its usefulness for measuring leaf water potentials of forest trees and common laboratory plants. Psychrometer measurements are assumed to represent the true leaf water potentials. Because of the contamination of test solutions by cell sap and leaf surface residues, dye method values of most species varied about 1 to 5 bars from psychrometer values over the leaf water potential range of 0 to -30 bars. The dye method is useful for measuring changes and relative values in leaf potential. Because of species differences in the relationships of dye method values to true leaf water potentials, dye method values should be interpreted with caution when comparing different species or the same species growing in widely different environments. Despite its limitations the dye method has a usefulness to many workers because it is simple, requires no elaborate equipment, and can be used in both the laboratory and field.

  1. FDDO and DSMC analyses of rarefied gas flow through 2D nozzles

    NASA Technical Reports Server (NTRS)

    Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren; Penko, Paul F.

    1992-01-01

    Two different approaches, the finite-difference method coupled with the discrete-ordinate method (FDDO), and the direct-simulation Monte Carlo (DSMC) method, are used in the analysis of the flow of a rarefied gas expanding through a two-dimensional nozzle and into a surrounding low-density environment. In the FDDO analysis, by employing the discrete-ordinate method, the Boltzmann equation simplified by a model collision integral is transformed to a set of partial differential equations which are continuous in physical space but are point functions in molecular velocity space. The set of partial differential equations is solved by means of a finite-difference approximation. In the DSMC analysis, the variable hard sphere model is used as the molecular model and the no-time-counter method is employed as the collision sampling technique. The results of both the FDDO and the DSMC methods show good agreement. The FDDO method requires less computational effort than the DSMC method by factors of 10 to 40 in CPU time, depending on the degree of rarefaction.

  2. A comparative study of cultural methods for the detection of Salmonella in feed and feed ingredients

    PubMed Central

    Koyuncu, Sevinc; Haggblom, Per

    2009-01-01

    Background Animal feed as a source of infection for food-producing animals is much debated. In order to increase our present knowledge about possible feed transmission, it is important to know that the present isolation methods for Salmonella are reliable also for feed materials. In a comparative study, the standard method used for isolation of Salmonella in feed in the Nordic countries, the NMKL71 method (Nordic Committee on Food Analysis), was compared to the Modified Semisolid Rappaport Vassiliadis (MSRV) method and the international standard method (EN ISO 6579:2002). Five different feed materials were investigated, namely wheat grain, soybean meal, rape seed meal, palm kernel meal, pellets of pig feed, and scrapings from a feed mill elevator. Four different levels of the Salmonella serotypes S. Typhimurium, S. Cubana and S. Yoruba were added to each feed material. For all methods, pre-enrichment in Buffered Peptone Water (BPW) was carried out, followed by enrichment in the different selective media and finally plating on selective agar media. Results The results obtained with all three methods showed no differences in detection levels, with an accuracy and sensitivity of 65% and 56%, respectively. However, Müller-Kauffmann tetrathionate-novobiocin broth (MKTTn) performed less well due to many false-negative results on Brilliant Green agar (BGA) plates. Compared to other feed materials, palm kernel meal showed a higher detection level with all serotypes and methods tested. Conclusion The results of this study showed that the accuracy, sensitivity and specificity of the investigated cultural methods were equivalent. However, the detection levels for different feed and feed ingredients varied considerably. PMID:19192298

  3. New decision criteria for selecting delta check methods based on the ratio of the delta difference to the width of the reference range can be generally applicable for each clinical chemistry test item.

    PubMed

    Park, Sang Hyuk; Kim, So-Young; Lee, Woochang; Chun, Sail; Min, Won-Ki

    2012-09-01

    Many laboratories use 4 delta check methods: delta difference, delta percent change, rate difference, and rate percent change. However, guidelines regarding decision criteria for selecting delta check methods have not yet been provided. We present new decision criteria for selecting delta check methods for each clinical chemistry test item. We collected 811,920 and 669,750 paired (present and previous) test results for 27 clinical chemistry test items from inpatients and outpatients, respectively. We devised new decision criteria for the selection of delta check methods based on the ratio of the delta difference to the width of the reference range (DD/RR). Delta check methods based on these criteria were compared with those based on the CV% of the absolute delta difference (ADD) as well as those reported in 2 previous studies. The delta check methods suggested by new decision criteria based on the DD/RR ratio corresponded well with those based on the CV% of the ADD except for only 2 items each in inpatients and outpatients. Delta check methods based on the DD/RR ratio also corresponded with those suggested in the 2 previous studies, except for 1 and 7 items in inpatients and outpatients, respectively. The DD/RR method appears to yield more feasible and intuitive selection criteria and can easily explain changes in the results by reflecting both the biological variation of the test item and the clinical characteristics of patients in each laboratory. We suggest this as a measure to determine delta check methods.
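
    A hedged sketch of the four delta check quantities and the DD/RR ratio is given below. The formulas follow common laboratory definitions, and the potassium example and reference range are illustrative assumptions, not values from the paper.

    ```python
    from datetime import datetime

    def delta_checks(prev, curr, prev_time, curr_time, ref_low, ref_high):
        """Common delta check quantities plus the DD/RR ratio."""
        hours = (curr_time - prev_time).total_seconds() / 3600.0
        delta_difference = curr - prev
        delta_percent_change = 100.0 * delta_difference / prev
        rate_difference = delta_difference / hours
        rate_percent_change = delta_percent_change / hours
        dd_rr = abs(delta_difference) / (ref_high - ref_low)   # ratio to reference-range width
        return dict(delta_difference=delta_difference,
                    delta_percent_change=delta_percent_change,
                    rate_difference=rate_difference,
                    rate_percent_change=rate_percent_change,
                    dd_rr=dd_rr)

    # Example: serum potassium 4.1 -> 5.3 mmol/L over 12 h, reference range 3.5-5.1 mmol/L.
    print(delta_checks(4.1, 5.3,
                       datetime(2024, 1, 1, 8), datetime(2024, 1, 1, 20),
                       ref_low=3.5, ref_high=5.1))
    ```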

  4. Application and validation of superior spectrophotometric methods for simultaneous determination of ternary mixture used for hypertension management.

    PubMed

    Mohamed, Heba M; Lamie, Nesrine T

    2016-02-15

    Telmisartan (TL), Hydrochlorothiazide (HZ) and Amlodipine besylate (AM) are co-formulated together for hypertension management. Three smart, specific and precise spectrophotometric methods were applied and validated for simultaneous determination of the three cited drugs. Method A is the ratio isoabsorptive point and ratio difference in subtracted spectra (RIDSS) method, which is based on dividing the ternary mixture spectrum of the studied drugs by the spectrum of AM to get the division spectrum, from which the concentration of AM can be obtained by measuring the amplitude values in the plateau region at 360 nm. The amplitude value of the plateau region is then subtracted from the division spectrum, and the HZ concentration is obtained by measuring the difference in amplitude values at 278.5 and 306 nm (corresponding to zero difference for TL), while the total concentration of HZ and TL in the mixture is measured at their isoabsorptive point in the division spectrum at 278.5 nm (Aiso). The TL concentration is then obtained by subtraction. Method B is double divisor ratio spectra derivative spectrophotometry (RS-DS), and Method C is mean centering of ratio spectra (MCR) spectrophotometry. The proposed methods did not require any initial separation steps prior to the analysis of the three drugs. A comparative study was done between the three methods regarding their simplicity, sensitivity and limitations. Specificity was investigated by analyzing synthetic mixtures containing different ratios of the three studied drugs and their tablet dosage form. Statistical comparison of the obtained results with those found by the official methods was done; differences were non-significant with regard to accuracy and precision. The three methods were validated in accordance with ICH guidelines and can be used in quality control laboratories for TL, HZ and AM. Copyright © 2015 Elsevier B.V. All rights reserved.
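
    The sketch below illustrates only the first arithmetic step of such a ratio-spectra approach on synthetic Gaussian absorption bands: divide the mixture spectrum by the divisor (AM) spectrum, read the plateau amplitude, and subtract it to remove the divisor's contribution. All spectra, wavelengths, and band shapes are synthetic assumptions, not the published calibration data.

    ```python
    import numpy as np

    wl = np.arange(220.0, 400.0, 0.5)                        # wavelength grid (nm)
    band = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)   # synthetic absorption band

    # Synthetic unit spectra standing in for AM, HZ and TL (not real absorptivities).
    am, hz, tl = band(365, 20), band(271, 15), band(296, 18)
    mix = 0.8 * am + 1.2 * hz + 0.5 * tl                     # ternary mixture spectrum

    division = mix / am                                      # division (ratio) spectrum
    plateau = division[np.argmin(np.abs(wl - 360.0))]        # read in the AM plateau region
    print(round(plateau, 3))                                 # ~0.8 -> proportional to AM

    # Subtracting the constant plateau removes the AM contribution, leaving the
    # HZ + TL ratio spectrum on which the amplitude-difference step would be applied
    # (real methods restrict the working wavelength range where the divisor absorbs).
    residual = division - plateau
    ```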

  5. Application and validation of superior spectrophotometric methods for simultaneous determination of ternary mixture used for hypertension management

    NASA Astrophysics Data System (ADS)

    Mohamed, Heba M.; Lamie, Nesrine T.

    2016-02-01

    Telmisartan (TL), Hydrochlorothiazide (HZ) and Amlodipine besylate (AM) are co-formulated together for hypertension management. Three smart, specific and precise spectrophotometric methods were applied and validated for simultaneous determination of the three cited drugs. Method A is the ratio isoabsorptive point and ratio difference in subtracted spectra (RIDSS) method, which is based on dividing the ternary mixture spectrum of the studied drugs by the spectrum of AM to get the division spectrum, from which the concentration of AM can be obtained by measuring the amplitude values in the plateau region at 360 nm. The amplitude value of the plateau region is then subtracted from the division spectrum, and the HZ concentration is obtained by measuring the difference in amplitude values at 278.5 and 306 nm (corresponding to zero difference for TL), while the total concentration of HZ and TL in the mixture is measured at their isoabsorptive point in the division spectrum at 278.5 nm (Aiso). The TL concentration is then obtained by subtraction. Method B is double divisor ratio spectra derivative spectrophotometry (RS-DS), and Method C is mean centering of ratio spectra (MCR) spectrophotometry. The proposed methods did not require any initial separation steps prior to the analysis of the three drugs. A comparative study was done between the three methods regarding their simplicity, sensitivity and limitations. Specificity was investigated by analyzing synthetic mixtures containing different ratios of the three studied drugs and their tablet dosage form. Statistical comparison of the obtained results with those found by the official methods was done; differences were non-significant with regard to accuracy and precision. The three methods were validated in accordance with ICH guidelines and can be used in quality control laboratories for TL, HZ and AM.

  6. Evaluation of different methods for assessing bioavailability of DDT residues during soil remediation.

    PubMed

    Wang, Jie; Taylor, Allison; Xu, Chenye; Schlenk, Daniel; Gan, Jay

    2018-07-01

    Compared to the total chemical concentration, bioavailability is a better measurement of the risks of hydrophobic organic contaminants (HOCs) to biota in contaminated soil or sediment. Many different bioavailability estimation methods have been introduced to assess the effectiveness of remediation treatments. However, to date the different methods have rarely been evaluated against each other, leading to confusion in method selection. In this study, four different bioavailability estimation methods, including solid phase microextraction (SPME) and polyethylene passive sampling (PE) aiming to detect the free chemical concentration (Cfree), and Tenax desorption and the isotope dilution method (IDM) aiming to measure chemical accessibility, were used in parallel to estimate the bioavailability of DDT residues (DDXs) in a historically contaminated soil after addition of different black carbon sorbents. Bioaccumulation into earthworm (Eisenia fetida) was measured concurrently for verification. Activated carbon or biochar amendment at 0.2-2% decreased earthworm bioaccumulation of DDXs by 83.9-99.4%, while multi-walled carbon nanotubes had a limited effect (4.3-20.7%). While all methods correctly predicted changes in DDX bioavailability after black carbon amendment, passive samplers offered more accurate predictions. Predicted levels of DDXs in earthworm lipid using the estimated bioavailability and empirical BCFs matched closely with the experimentally derived tissue concentrations. However, Tenax and IDM overestimated bioavailability when the available DDX levels were low. Our findings suggest that both passive samplers and bioaccessibility methods can be used in assessing remediation efficiency, presenting flexibility in method selection. While accessibility-oriented methods offer better sensitivity and shorter sampling time, passive samplers may be more advantageous because of their better performance and compatibility for in situ deployment. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Research on registration algorithm for check seal verification

    NASA Astrophysics Data System (ADS)

    Wang, Shuang; Liu, Tiegen

    2008-03-01

    Nowadays seals play an important role in China. With the development of the social economy, the traditional method of manual check seal identification can no longer meet the needs of banking transactions. This paper focuses on pre-processing and registration algorithms for check seal verification using the theory of image processing and pattern recognition. First, the complex characteristics of check seals are analyzed. To eliminate the differences caused by varying production conditions and the disturbance caused by background and writing in the check image, several methods are used in the pre-processing for check seal verification, such as color component transformation, a linear transform to a gray-scale image, median filtering, Otsu thresholding, and the closing and labeling operations of mathematical morphology. After these processes, a good binary seal image can be obtained. On the basis of the traditional registration algorithm, a double-level registration method, including rough and precise registration, is proposed. The deflection angle of the precise registration method can be resolved to 0.1°. This paper introduces the concepts of difference inside and difference outside and uses the percentages of difference inside and difference outside to judge whether the seal is real or fake. Experimental results on a large number of check seals are satisfactory. They show that the methods and algorithms presented have good robustness to noisy sealing conditions and satisfactory tolerance of within-class differences.
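
    A hedged sketch of a comparable pre-processing chain with OpenCV is shown below (channel selection, median filtering, Otsu thresholding, morphological closing, and connected-component labeling). The file name, kernel size, and area threshold are placeholders, and this is not the authors' code.

    ```python
    import cv2
    import numpy as np

    img = cv2.imread("check_seal.png")                  # placeholder path
    # Red seal strokes appear dark in the green channel; a fuller pipeline would use
    # colour information to also suppress black handwriting and background print.
    green = img[:, :, 1]
    green = cv2.medianBlur(green, 5)                    # median filter against noise

    # Otsu threshold to a binary image (dark strokes -> 255).
    _, binary = cv2.threshold(green, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Morphological closing to bridge small gaps in the seal strokes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

    # Connected-component labeling; very small components are treated as residual noise.
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(closed)
    keep = np.flatnonzero(stats[:, cv2.CC_STAT_AREA] > 30)
    clean = np.where(np.isin(labels, keep) & (labels > 0), 255, 0).astype(np.uint8)
    cv2.imwrite("check_seal_binary.png", clean)
    ```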

  8. Pirogow's Amputation: A Modification of the Operation Method

    PubMed Central

    Bueschges, M.; Muehlberger, T.; Mauss, K. L.; Bruck, J. C.; Ottomann, C.

    2013-01-01

    Introduction. Pirogow's amputation at the ankle presents a valuable alternative to lower leg amputation for patients with the corresponding indications. Although this method offers the ability to stay mobile without the use of a prosthesis, it is rarely performed. This paper proposes a modification of the operation method of the Pirogow amputation. The results of the modified operation method on ten patients were objectified 12 months after the operation using a patient questionnaire (Ankle Score). Material and Methods. We modified the original method by rotating the calcaneus. To fix the calcaneus to the tibia, Kirschner wire and a 3/0 spongiosa tension screw as well as a Fixateur externe were used. Results. 70% of those questioned who were amputated following the modified Pirogow method indicated an excellent or very good result in total points, whereas in the control group (original Pirogow amputation) only 40% reported an excellent or very good result. In addition, the level of pain experienced one year after the operation favoured the group operated on with the modified method. Furthermore, patients in both groups showed differences in radiological results, postoperative leg length difference, and postoperative mobility. Conclusion. The modified Pirogow amputation presents a valuable alternative to the original amputation method for patients with the corresponding indications. The benefits include significantly reduced pain, fewer radiological complications, increased mobility without a prosthesis, and a reduced postoperative leg length difference. PMID:23606976

  9. Confidence in Altman-Bland plots: a critical review of the method of differences.

    PubMed

    Ludbrook, John

    2010-02-01

    1. Altman and Bland argue that the virtue of plotting differences against averages in method-comparison studies is that 95% confidence limits for the differences can be constructed. These allow authors and readers to judge whether one method of measurement could be substituted for another. 2. The technique is often misused. So I have set out, by statistical argument and worked examples, to advise pharmacologists and physiologists how best to construct these limits. 3. First, construct a scattergram of differences on averages, then calculate the line of best fit for the linear regression of differences on averages. If the slope of the regression is shown to differ from zero, there is proportional bias. 4. If there is no proportional bias and if the scatter of differences is uniform (homoscedasticity), construct 'classical' 95% confidence limits. 5. If there is proportional bias yet homoscedasticity, construct hyperbolic 95% confidence limits (prediction interval) around the line of best fit. 6. If there is proportional bias and the scatter of values for differences increases progressively as the average values increase (heteroscedasticity), log-transform the raw values from the two methods and replot differences against averages. If this eliminates proportional bias and heteroscedasticity, construct 'classical' 95% confidence limits. Otherwise, construct horizontal V-shaped 95% confidence limits around the line of best fit of differences on averages or around the weighted least products line of best fit to the original data. 7. In designing a method-comparison study, consult a qualified biostatistician, obey the rules of randomization and make replicate observations.
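
    The sketch below implements points 3-5 on simulated paired measurements: plot differences against averages, regress differences on averages to check for proportional bias, and, when none is found, construct classical 95% limits. The simulated data and the 1.96 multiplier are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import stats
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(4)
    method_a = rng.normal(100, 15, 60)
    method_b = method_a + rng.normal(2, 5, 60)          # method B reads ~2 units higher

    avg = (method_a + method_b) / 2
    diff = method_a - method_b

    # Regression of differences on averages; a non-zero slope indicates proportional bias.
    fit = stats.linregress(avg, diff)
    print(f"slope={fit.slope:.3f}, p={fit.pvalue:.3f}")

    # With no proportional bias and homoscedastic scatter: classical 95% limits.
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    plt.scatter(avg, diff, s=10)
    for y in (bias, bias - half_width, bias + half_width):
        plt.axhline(y, linestyle="--")
    plt.xlabel("Average of the two methods")
    plt.ylabel("Difference (A - B)")
    plt.show()
    ```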

  10. A multi-method assessment of bone maintenance and loss in an Imperial Roman population: Implications for future studies of age-related bone loss in the past.

    PubMed

    Beauchesne, Patrick; Agarwal, Sabrina C

    2017-09-01

    One of the hallmarks of contemporary osteoporosis and bone loss is the dramatically higher prevalence of loss and fragility in females post-menopause. In contrast, bioarchaeological studies of bone loss have found a greater diversity of age- and sex-related patterns of bone loss in past populations. We argue that the differing findings may relate to the fact that most studies use only a single methodology to quantify bone loss and do not account for the heterogeneity and complexity of bone maintenance across the skeleton and over the life course. We test the hypothesis that bone mass and maintenance in trabecular bone sites versus cortical bone sites will show differing patterns of age-related bone loss, with cortical bone sites showing sex differences in bone loss that are similar to those in contemporary Western populations, and trabecular bone loss at earlier ages. We investigated this hypothesis in the Imperial Roman population of Velia using three methods: radiogrammetry of the second metacarpal (N = 71), bone histology of ribs (N = 70), and computerized tomography of trabecular bone architecture (N = 47). All three methods were used to explore sex and age differences in patterns of bone loss. The suite of methods utilized reveals differences in the timing of bone loss with age, but no method found statistically significant differences in age-related bone loss. We argue that a multi-method approach reduces the influence of confounding factors by building a reconstruction of bone turnover over the life cycle that a limited single-method project cannot provide. The implications of using multiple methods beyond studies of bone loss are also discussed. © 2017 Wiley Periodicals, Inc.

  11. Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images

    PubMed Central

    Lin, Chih-Lung; Wang, Shih-Hung; Cheng, Hsu-Yung; Fan, Kuo-Chin; Hsu, Wei-Lieh; Lai, Chin-Rong

    2015-01-01

    In this paper, we present a reliable and robust biometric verification method based on bimodal physiological characteristics of palms, including the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the digital wavelet transform and inverse wavelet transform to fuse palmprint and vein pattern images; (3) extracting the line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method possesses four advantages: first, both modal images are captured in peg-free scenarios to improve the user-friendliness of the verification device. Second, palmprint and vein pattern images are captured using a low-resolution digital scanner and infrared (IR) camera. The use of low-resolution images results in a smaller database. In addition, the vein pattern images are captured through the invisible IR spectrum, which improves antispoofing. Third, since the physiological characteristics of palmprint and vein pattern images are different, a hybrid fusing rule can be introduced to fuse the decomposition coefficients of different bands. The proposed method fuses decomposition coefficients at different decomposed levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically and hence no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%. These results demonstrate the validity and excellent performance of the proposed method compared with other methods. PMID:26703596
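
    A hedged sketch of step (2), wavelet-domain fusion of two registered images with PyWavelets, is given below. The fusion rule used here (average the approximation band, keep the larger-magnitude detail coefficients) is a common choice and only stands in for the hybrid rule described in the paper; the file names are placeholders.

    ```python
    import numpy as np
    import pywt
    import cv2

    palm = cv2.imread("palmprint_roi.png", cv2.IMREAD_GRAYSCALE).astype(float)
    vein = cv2.imread("vein_roi.png", cv2.IMREAD_GRAYSCALE).astype(float)   # same-size, registered ROIs

    # Single-level 2D DWT of both modalities.
    cA1, (cH1, cV1, cD1) = pywt.dwt2(palm, "db2")
    cA2, (cH2, cV2, cD2) = pywt.dwt2(vein, "db2")

    # Fusion rule: average the approximation bands, keep the larger-magnitude detail coefficients.
    fuse = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    fused = pywt.idwt2(((cA1 + cA2) / 2.0,
                        (fuse(cH1, cH2), fuse(cV1, cV2), fuse(cD1, cD2))), "db2")

    cv2.imwrite("fused_roi.png", np.clip(fused, 0, 255).astype(np.uint8))
    ```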

  12. Beyond the hype: deep neural networks outperform established methods using a ChEMBL bioactivity benchmark set.

    PubMed

    Lenselink, Eelke B; Ten Dijke, Niels; Bongers, Brandon; Papadatos, George; van Vlijmen, Herman W T; Kowalczyk, Wojtek; IJzerman, Adriaan P; van Westen, Gerard J P

    2017-08-14

    The increase of publicly available bioactivity data in recent years has fueled and catalyzed research in chemogenomics, data mining, and modeling approaches. As a direct result, over the past few years a multitude of different methods have been reported and evaluated, such as target fishing, nearest neighbor similarity-based methods, and Quantitative Structure Activity Relationship (QSAR)-based protocols. However, such studies are typically conducted on different datasets, using different validation strategies and different metrics. In this study, different methods were compared using one single standardized dataset obtained from ChEMBL, which is made available to the public, using standardized metrics (BEDROC and Matthews Correlation Coefficient). Specifically, the performance of Naïve Bayes, Random Forests, Support Vector Machines, Logistic Regression, and Deep Neural Networks was assessed using QSAR and proteochemometric (PCM) methods. All methods were validated using both a random split validation and a temporal validation, with the latter being a more realistic benchmark of expected prospective execution. Deep Neural Networks are the top-performing classifiers, highlighting the added value of Deep Neural Networks over other more conventional methods. Moreover, the best method ('DNN_PCM') performed significantly better, at almost one standard deviation above the mean performance. Furthermore, Multi-task and PCM implementations were shown to improve performance over single-task Deep Neural Networks. Conversely, target prediction performed almost two standard deviations below the mean performance. Random Forests, Support Vector Machines, and Logistic Regression performed around the mean performance. Finally, using an ensemble of DNNs, alongside additional tuning, enhanced the relative performance by another 27% (compared with unoptimized 'DNN_PCM'). Here, a standardized set to test and evaluate different machine learning algorithms in the context of multi-task learning is offered by providing the data and the protocols.
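
    The hedged sketch below reproduces only the evaluation pattern on synthetic data: train several scikit-learn classifiers on a common split and compare them with the Matthews Correlation Coefficient. The ChEMBL descriptors, the BEDROC metric, the temporal split, and the deep neural networks themselves are omitted.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import matthews_corrcoef
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for bioactivity descriptors and active/inactive labels.
    X, y = make_classification(n_samples=2000, n_features=50, n_informative=15, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "NaiveBayes": GaussianNB(),
        "LogisticRegression": LogisticRegression(max_iter=1000),
        "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    }
    for name, model in models.items():
        mcc = matthews_corrcoef(y_te, model.fit(X_tr, y_tr).predict(X_te))
        print(f"{name}: MCC = {mcc:.3f}")
    ```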

  13. Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images.

    PubMed

    Lin, Chih-Lung; Wang, Shih-Hung; Cheng, Hsu-Yung; Fan, Kuo-Chin; Hsu, Wei-Lieh; Lai, Chin-Rong

    2015-12-12

    In this paper, we present a reliable and robust biometric verification method based on bimodal physiological characteristics of palms, including the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the digital wavelet transform and inverse wavelet transform to fuse palmprint and vein pattern images; (3) extracting the line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method possesses four advantages: first, both modal images are captured in peg-free scenarios to improve the user-friendliness of the verification device. Second, palmprint and vein pattern images are captured using a low-resolution digital scanner and infrared (IR) camera. The use of low-resolution images results in a smaller database. In addition, the vein pattern images are captured through the invisible IR spectrum, which improves antispoofing. Third, since the physiological characteristics of palmprint and vein pattern images are different, a hybrid fusing rule can be introduced to fuse the decomposition coefficients of different bands. The proposed method fuses decomposition coefficients at different decomposed levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically and hence no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%. These results demonstrate the validity and excellent performance of the proposed method compared with other methods.

  14. Discovery and Nurturance of Giftedness in the Culturally Different.

    ERIC Educational Resources Information Center

    Torrance, E. Paul

    Discussed in the monograph are methods for identifying and developing programs for culturally different gifted students. In an overview section, the important issues and trends associated with the discovery and nurturance of giftedness among the culturally different are considered; and screening methods which involve modified traditional…

  15. Diurnal temperature asymmetries and fog at Churchill, Manitoba

    NASA Astrophysics Data System (ADS)

    Gough, William A.; He, Dianze

    2015-07-01

    A variety of methods are available to calculate daily mean temperature. We explore how the difference between two commonly used methods provides insight into the local climate of Churchill, Manitoba. In particular, we found that these differences related closely to seasonal fog. A strong, statistically significant correlation was found between fog frequency (hours per day) and the diurnal asymmetry of the surface temperature, measured as the difference between the min/max and 24-h methods of daily temperature calculation. The relationship was particularly strong for winter, spring and summer. Autumn appears to experience the joint effect of fog formation and the radiative effect of snow cover. The results of this study suggest that subtle variations in the diurnality of temperature, as measured by the difference between the two methods of calculating the mean temperature, may be used as a proxy for fog detection in the Hudson Bay region. These results also provide a cautionary note for the spatial analysis of mean temperatures using data derived from the two different methods, particularly in areas that are fog prone.
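
    A small sketch of the two daily-mean definitions on synthetic hourly data is given below; their difference is the diurnal temperature asymmetry examined in the study. The synthetic temperature series is an illustrative assumption.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(5)
    hours = pd.date_range("2014-01-01", periods=24 * 30, freq="h")
    # Synthetic hourly temperatures with a diurnal cycle plus noise.
    temp = -10 + 6 * np.sin(2 * np.pi * (hours.hour - 15) / 24) + rng.normal(0, 1.5, hours.size)
    series = pd.Series(temp, index=hours)

    daily = series.resample("D")
    mean_24h = daily.mean()                              # 24-h method
    mean_minmax = (daily.min() + daily.max()) / 2        # min/max method
    asymmetry = mean_minmax - mean_24h                   # diurnal temperature asymmetry

    print(asymmetry.describe().round(2))
    ```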

  16. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review.

    PubMed

    Mathes, Tim; Klaßen, Pauline; Pieper, Dawid

    2017-11-28

    Our objective was to assess the frequency of data extraction errors and their potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of the methodological literature in PubMed and the Cochrane methodological registry, supplemented by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted in standardized tables by one reviewer and verified by a second. The analysis included six studies: four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had a moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to get deeper insights into the influence of different extraction methods.

  17. Brief report: Sex differences in suicide rates and suicide methods among adolescents in South Korea, Japan, Finland, and the US.

    PubMed

    Park, Subin

    2015-04-01

    Sex differences in suicide rates and suicide methods were compared among adolescents in South Korea, Japan, Finland, and the United States. This study analyzed suicide rates and suicide methods of adolescents aged 15-19 years in four countries, using the World Health Organization mortality database. Among both male and female adolescents, the most common method of suicide was jumping from heights in South Korea and hanging in Japan. In Finland, jumping in front of moving objects and firearms were frequently used by males, but not by females. In the United States, males were more likely to use firearms, and females were more likely to use poison. The male to female ratio of suicide rates was higher in the United States (3.8) and Finland (3.6) than in Korea (1.3) and Japan (1.9). Sex differences in suicide methods may contribute to differences in the suicide rates among male and female adolescents in different countries. Copyright © 2015 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  18. Excitation-resolved multispectral method for imaging pharmacokinetic parameters in dynamic fluorescent molecular tomography

    NASA Astrophysics Data System (ADS)

    Chen, Maomao; Zhou, Yuan; Su, Han; Zhang, Dong; Luo, Jianwen

    2017-04-01

    Imaging of the pharmacokinetic parameters in dynamic fluorescence molecular tomography (DFMT) can provide three-dimensional metabolic information for biological studies and drug development. However, owing to the ill-posed nature of the FMT inverse problem, the relatively low quality of the parametric images makes it difficult to investigate the different metabolic processes of fluorescent targets separated by small distances. An excitation-resolved multispectral DFMT method is proposed; it is based on the fact that fluorescent targets with different concentrations show different variations in the excitation spectral domain and can be considered independent signal sources. With an independent component analysis method, the spatial locations of different fluorescent targets can be decomposed, and the fluorescent yields of the targets at different time points can be recovered. Therefore, the metabolic process of each component can be independently investigated. Simulations and phantom experiments are carried out to evaluate the performance of the proposed method. The results demonstrated that the proposed excitation-resolved multispectral method can effectively improve the reconstruction accuracy of the parametric images in DFMT.
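
    The hedged sketch below illustrates only the decomposition idea, using scikit-learn's FastICA on synthetic excitation-resolved measurements formed as linear mixtures of two spectral sources; the FMT forward model, tomographic reconstruction, and pharmacokinetic fitting are omitted, and all profiles are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(6)

    # Two fluorescent targets with different excitation-spectrum profiles (sources).
    n_excitation = 40
    grid = np.arange(n_excitation)
    s1 = np.exp(-0.5 * ((grid - 12) / 4.0) ** 2)
    s2 = np.exp(-0.5 * ((grid - 26) / 6.0) ** 2)
    S = np.column_stack([s1, s2])                        # (excitations, sources)

    # Measurements at many detector positions are linear mixtures of the sources plus noise.
    A = rng.uniform(0.2, 1.0, size=(100, 2))             # mixing (sensitivity) matrix
    X = S @ A.T + 0.01 * rng.standard_normal((n_excitation, 100))

    ica = FastICA(n_components=2, random_state=0)
    S_est = ica.fit_transform(X)                         # recovered profiles (up to scale and order)
    print(S_est.shape)                                   # (40, 2)
    ```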

  19. Determination of arsenic in traditional Chinese medicine by microwave digestion with flow injection-inductively coupled plasma mass spectrometry (FI-ICP-MS).

    PubMed

    Ong, E S; Yong, Y L; Woo, S O

    1999-01-01

    A simple, rapid, and sensitive method with high sample throughput was developed for determining arsenic in traditional Chinese medicine (TCM) in the form of uncoated tablets, sugar-coated tablets, black pills, capsules, powders, and syrups. The method involves microwave digestion with flow injection-inductively coupled plasma mass spectrometry (FI-ICP-MS). Method precision was 2.7-10.1% (relative standard deviation, n = 6) for different concentrations of arsenic in different TCM samples analyzed by different analysts on different days. Method accuracy was checked with a certified reference material (sea lettuce, Ulva lactuca, BCR CRM 279) for external calibration and by spiking arsenic standard into different TCMs. Recoveries of 89-92% were obtained for the certified reference material and higher than 95% for spiked TCMs. Matrix interference was insignificant for samples analyzed by the method of standard addition. Hence, no correction equation was used in the analysis of arsenic in the samples studied. Sample preparation using microwave digestion gave results that were very similar to those obtained by conventional wet acid digestion using nitric acid.

  20. Bennett's acceptance ratio and histogram analysis methods enhanced by umbrella sampling along a reaction coordinate in configurational space.

    PubMed

    Kim, Ilsoo; Allen, Toby W

    2012-04-28

    Free energy perturbation, a method for computing the free energy difference between two states, is often combined with non-Boltzmann biased sampling techniques in order to accelerate the convergence of free energy calculations. Here we present a new extension of the Bennett acceptance ratio (BAR) method by combining it with umbrella sampling (US) along a reaction coordinate in configurational space. In this approach, which we call Bennett acceptance ratio with umbrella sampling (BAR-US), the conditional histogram of energy difference (a mapping of the 3N-dimensional configurational space via a reaction coordinate onto 1D energy difference space) is weighted for marginalization with the associated population density along a reaction coordinate computed by US. This procedure produces marginal histograms of energy difference, from forward and backward simulations, with higher overlap in energy difference space, rendering free energy difference estimations using BAR statistically more reliable. In addition to BAR-US, two histogram analysis methods, termed Bennett overlapping histograms with US (BOH-US) and Bennett-Hummer (linear) least square with US (BHLS-US), are employed as consistency and convergence checks for free energy difference estimation by BAR-US. The proposed methods (BAR-US, BOH-US, and BHLS-US) are applied to a 1-dimensional asymmetric model potential, as has been used previously to test free energy calculations from non-equilibrium processes. We then consider the more stringent test of a 1-dimensional strongly (but linearly) shifted harmonic oscillator, which exhibits no overlap between two states when sampled using unbiased Brownian dynamics. We find that the efficiency of the proposed methods is enhanced over the original Bennett's methods (BAR, BOH, and BHLS) through fast uniform sampling of energy difference space via US in configurational space. We apply the proposed methods to the calculation of the electrostatic contribution to the absolute solvation free energy (excess chemical potential) of water. We then address the controversial issue of ion selectivity in the K(+) ion channel, KcsA. We have calculated the relative binding affinity of K(+) over Na(+) within a binding site of the KcsA channel for which different, though adjacent, K(+) and Na(+) configurations exist, ideally suited to these US-enhanced methods. Our studies demonstrate that the significant improvements in free energy calculations obtained using the proposed methods can have serious consequences for elucidating biological mechanisms and for the interpretation of experimental data.
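
    As a hedged toy sketch of the plain Bennett acceptance ratio estimator (without the umbrella-sampling extension proposed here), the code below solves the BAR self-consistency equation for samples drawn from two 1D harmonic states whose exact reduced free energy difference is known. Equal sample sizes are assumed so the sample-size term drops out.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    rng = np.random.default_rng(7)

    # Two 1D harmonic states with reduced potentials u_i(x) = 0.5 * k_i * x^2.
    k0, k1, n = 1.0, 4.0, 20000
    x0 = rng.normal(0.0, 1.0 / np.sqrt(k0), n)           # samples from state 0
    x1 = rng.normal(0.0, 1.0 / np.sqrt(k1), n)           # samples from state 1

    w_F = 0.5 * (k1 - k0) * x0**2                        # forward work u1 - u0 on state-0 samples
    w_R = 0.5 * (k0 - k1) * x1**2                        # reverse work u0 - u1 on state-1 samples

    fermi = lambda x: 1.0 / (1.0 + np.exp(x))

    # BAR self-consistency (equal sample sizes): sum f(w_F - df) = sum f(w_R + df).
    g = lambda df: fermi(w_F - df).sum() - fermi(w_R + df).sum()
    df_bar = brentq(g, -20.0, 20.0)

    print(f"BAR estimate: {df_bar:.4f}, exact: {0.5 * np.log(k1 / k0):.4f}")
    ```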

  1. Mutual information based feature selection for medical image retrieval

    NASA Astrophysics Data System (ADS)

    Zhi, Lijia; Zhang, Shaomin; Li, Yan

    2018-04-01

    In this paper, the authors propose a mutual information based method for lung CT image retrieval. The method is designed to adapt to different datasets and different retrieval tasks. With practical application in mind, it avoids the need for a large amount of training data; instead, a well-designed training process together with robust fundamental features and measurements yields promising performance while keeping the training computation economical. Experimental results show that the method has potential practical value for routine clinical application.
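
    A minimal sketch of the generic idea, assuming a precomputed matrix of low-level image features: rank features by their mutual information with the class label using scikit-learn and keep the top-scoring subset. This is the textbook technique only; the authors' retrieval pipeline, features and training process are not reproduced here.

    ```python
    # Generic sketch of mutual-information-based feature selection (not the
    # authors' exact retrieval pipeline): rank candidate image features by their
    # mutual information with the class label and keep the top-k.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, mutual_info_classif

    # Stand-in for a matrix of low-level CT image features (rows = images).
    X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                               random_state=0)

    mi = mutual_info_classif(X, y, random_state=0)   # MI of each feature with the label
    top = np.argsort(mi)[::-1][:10]
    print("Top-10 features by mutual information:", top)

    # Equivalent selector usable inside an sklearn pipeline.
    selector = SelectKBest(score_func=mutual_info_classif, k=10).fit(X, y)
    X_reduced = selector.transform(X)
    print("Reduced feature matrix shape:", X_reduced.shape)
    ```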

  2. Assessment of the effect of population and diary sampling methods on estimation of school-age children exposure to fine particles.

    PubMed

    Che, W W; Frey, H Christopher; Lau, Alexis K H

    2014-12-01

    Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling, diversity and autocorrelation, and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5 ) are quantified via case studies for children in Wake County, NC for July 2002. The estimated mean daily average exposure is 12.9 μg/m(3) for simulated children using the stratified population sampling method, and 12.2 μg/m(3) using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposures exceeding a benchmark of concern of 25 μg/m(3) due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative. In case studies evaluated here, the MCC method led to 10% higher estimation of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern. © 2014 Society for Risk Analysis.

  3. Integrated method for chaotic time series analysis

    DOEpatents

    Hively, L.M.; Ng, E.G.

    1998-09-29

    Methods and apparatus are disclosed for automatically detecting differences between similar but different states in a nonlinear process by monitoring nonlinear data. The steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time-serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated. 8 figs.
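
    As a hedged illustration of the general workflow (not the patented algorithm itself), the sketch below computes one common nonlinear measure, the correlation sum of a time-delay embedding, and compares it between two synthetic dynamical regimes; the embedding dimension, delay and radius are arbitrary example values.

    ```python
    # Illustrative sketch (not the patented algorithm): compute one common
    # nonlinear measure -- the correlation sum of a time-delay embedding -- and
    # compare it between two regimes of a synthetic nonlinear process.
    import numpy as np

    def delay_embed(x, dim=3, tau=5):
        """Return time-delay embedding vectors of a scalar series."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

    def correlation_sum(x, radius, dim=3, tau=5):
        """Fraction of embedded point pairs closer than `radius` (a simple
        nonlinear measure; a change in it can signal a change of state)."""
        emb = delay_embed(x, dim, tau)
        d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        iu = np.triu_indices(len(emb), k=1)
        return np.mean(d[iu] < radius)

    # Two synthetic regimes of the logistic map with slightly different parameters.
    def logistic(r, n, x0=0.4):
        x = np.empty(n); x[0] = x0
        for i in range(1, n):
            x[i] = r * x[i - 1] * (1.0 - x[i - 1])
        return x

    state_a = logistic(3.91, 800)
    state_b = logistic(3.97, 800)
    for name, series in [("state A", state_a), ("state B", state_b)]:
        print(name, "correlation sum:", round(correlation_sum(series, radius=0.1), 4))
    ```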

  4. Sine Rotation Vector Method for Attitude Estimation of an Underwater Robot

    PubMed Central

    Ko, Nak Yong; Jeong, Seokki; Bae, Youngchul

    2016-01-01

    This paper describes a method for estimating the attitude of an underwater robot. The method employs a new concept of sine rotation vector and uses both an attitude heading and reference system (AHRS) and a Doppler velocity log (DVL) for the purpose of measurement. First, the acceleration and magnetic-field measurements are transformed into sine rotation vectors and combined. The combined sine rotation vector is then transformed into the differences between the Euler angles of the measured attitude and the predicted attitude; the differences are used to correct the predicted attitude. The method was evaluated according to field-test data and simulation data and compared to existing methods that calculate angular differences directly without a preceding sine rotation vector transformation. The comparison verifies that the proposed method improves the attitude estimation performance. PMID:27490549

  5. A comparison of two closely-related approaches to aerodynamic design optimization

    NASA Technical Reports Server (NTRS)

    Shubin, G. R.; Frank, P. D.

    1991-01-01

    Two related methods for aerodynamic design optimization are compared. The methods, called the implicit gradient approach and the variational (or optimal control) approach, both attempt to obtain gradients necessary for numerical optimization at a cost significantly less than that of the usual black-box approach that employs finite difference gradients. While the two methods are seemingly quite different, they are shown to differ (essentially) in that the order of discretizing the continuous problem, and of applying calculus, is interchanged. Under certain circumstances, the two methods turn out to be identical. We explore the relationship between these methods by applying them to a model problem for duct flow that has many features in common with transonic flow over an airfoil. We find that the gradients computed by the variational method can sometimes be sufficiently inaccurate to cause the optimization to fail.
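
    The cost argument can be made concrete on a toy problem. Assuming a linear discrete state equation A q = B a (a stand-in, not the paper's duct-flow model), the sketch below compares a black-box central finite-difference gradient, which needs two state solves per design variable, with an adjoint-style gradient that recovers the whole gradient from a single extra linear solve.

    ```python
    # Toy illustration (not the paper's duct-flow model) of why gradient methods
    # beat black-box finite differences: for a discrete state equation A q = B a,
    # one adjoint solve yields the full gradient of J = 0.5*||q - q_t||^2, whereas
    # finite differences need extra state solves per design variable.
    import numpy as np

    rng = np.random.default_rng(1)
    n_state, n_design = 50, 8
    A = np.eye(n_state) + 0.1 * rng.standard_normal((n_state, n_state))
    B = rng.standard_normal((n_state, n_design))
    q_target = rng.standard_normal(n_state)

    def solve_state(a):
        return np.linalg.solve(A, B @ a)

    def objective(a):
        q = solve_state(a)
        return 0.5 * np.sum((q - q_target) ** 2)

    a = rng.standard_normal(n_design)

    # Adjoint gradient: solve A^T lam = (q - q_target), then dJ/da = B^T lam.
    q = solve_state(a)
    lam = np.linalg.solve(A.T, q - q_target)
    grad_adjoint = B.T @ lam

    # Black-box central finite-difference gradient (2 * n_design state solves).
    h = 1e-6
    grad_fd = np.array([
        (objective(a + h * e) - objective(a - h * e)) / (2 * h)
        for e in np.eye(n_design)
    ])

    print("max |adjoint - finite difference|:", np.max(np.abs(grad_adjoint - grad_fd)))
    ```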

  6. Finite Element Analysis of Increasing Column Section and CFRP Reinforcement Method under Different Axial Compression Ratio

    NASA Astrophysics Data System (ADS)

    Jinghai, Zhou; Tianbei, Kang; Fengchi, Wang; Xindong, Wang

    2017-11-01

    Eight frame joints with fewer stirrups in the core area are simulated with the ABAQUS finite element software. The joints are strengthened with carbon fiber, by increasing the column section, or by a composite of the two methods, and the reinforced specimens are analyzed at axial compression ratios of 0.3, 0.45 and 0.6. Analysis of the load-displacement curves, ductility and stiffness shows that the axial compression ratio has a great influence on the bearing capacity of the increased-column-section strengthening method and little influence on the carbon fiber reinforcement method. All strengthening schemes improve the ultimate bearing capacity and ductility of the frame joints to some extent: the composite reinforcement gives the most significant improvement, followed by increasing the column section, while carbon fiber reinforcement of the joints gives the smallest.

  7. Comparison of the Reveal 20-hour method and the BAM culture method for the detection of Escherichia coli O157:H7 in selected foods and environmental swabs: collaborative study.

    PubMed

    Bird, C B; Hoerner, R J; Restaino, L

    2001-01-01

    Four different food types along with environmental swabs were analyzed by the Reveal for E. coli O157:H7 test (Reveal) and the Bacteriological Analytical Manual (BAM) culture method for the presence of Escherichia coli O157:H7. Twenty-seven laboratories representing academia and private industry in the United States and Canada participated. Sample types were inoculated with E. coli O157:H7 at 2 different levels. Of the 1,095 samples and controls analyzed and confirmed, 459 were positive and 557 were negative by both methods. No statistical differences (p <0.05) were observed between the Reveal and BAM methods.

  8. Reverse design of a bull's eye structure for oblique incidence and wider angular transmission efficiency.

    PubMed

    Yamada, Akira; Terakawa, Mitsuhiro

    2015-04-10

    We present a design method of a bull's eye structure with asymmetric grooves for focusing oblique incident light. The design method is capable of designing transmission peaks to a desired oblique angle with capability of collecting light from a wider range of angles. The bull's eye groove geometry for oblique incidence is designed based on the electric field intensity pattern around an isolated subwavelength aperture on a thin gold film at oblique incidence, calculated by the finite difference time domain method. Wide angular transmission efficiency is successfully achieved by overlapping two different bull's eye groove patterns designed with different peak angles. Our novel design method would overcome the angular limitations of the conventional methods.

  9. A descriptive review on methods to prioritize outcomes in a health care context.

    PubMed

    Janssen, Inger M; Gerhardus, Ansgar; Schröer-Günther, Milly A; Scheibler, Fülöp

    2015-12-01

    Evidence synthesis has seen major methodological advances in reducing uncertainty and estimating the sizes of effects. Much less is known about how to assess the relative value of different outcomes. The objective of this review was to identify studies that assessed preferences for outcomes in health conditions. We searched MEDLINE, EMBASE, PsycINFO and the Cochrane Library in February 2014. Eligible studies investigated the preferences of patients, family members, the general population or healthcare professionals for health outcomes. The review was intended to include studies that focus on theoretical alternatives; studies that assessed preferences for distinct treatments were excluded. Study characteristics such as study objective, health condition, participants, elicitation method, and the outcomes assessed were extracted. One hundred and twenty-four studies were identified and categorized into four groups: (1) multi-criteria decision analysis (MCDA) (n = 71), (2) rating or ranking (n = 25), (3) utility elicitation (n = 5) and (4) studies comparing different methods (n = 23). The number of outcomes assessed varied by method group. Comparison of different methods or subgroups within one study often resulted in different hierarchies of outcomes. A dominant method most suitable for application in evidence syntheses was not identified. As the preferences of patients differ from those of other stakeholders (especially medical professionals), the choice of the group to be questioned is consequential. Further research needs to focus on the validity and applicability of the identified methods. © 2014 John Wiley & Sons Ltd.

  10. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples.

    PubMed

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-05

    Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the measurements of the spectra may be performed on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model using the spectra of the samples measured on two instruments, named the master and slave instrument, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. As a consequence, the coefficients of the linear models constructed from the spectra measured on different instruments are similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments were used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary, the method may be more convenient in practical use. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Restoring the missing features of the corrupted speech using linear interpolation methods

    NASA Astrophysics Data System (ADS)

    Rassem, Taha H.; Makbol, Nasrin M.; Hasan, Ali Muttaleb; Zaki, Siti Syazni Mohd; Girija, P. N.

    2017-10-01

    One of the main challenges in Automatic Speech Recognition (ASR) is noise. The performance of an ASR system degrades significantly if the speech is corrupted by noise. In the spectrogram representation of a speech signal, deleting low Signal to Noise Ratio (SNR) elements leaves an incomplete spectrogram. One option is for the speech recognizer to modify its processing to cope with the missing elements; another is to restore the missing elements before performing recognition, which can be done using different spectrogram reconstruction methods. In this paper, the geometrical spectrogram reconstruction methods suggested by previous researchers are implemented as a toolbox. In these geometrical reconstruction methods, linear interpolation along time or along frequency is used to predict the missing elements between adjacent observed elements in the spectrogram. Moreover, a new linear interpolation method using time and frequency together is presented. The CMU Sphinx III software is used in the experiments to test the performance of the linear interpolation reconstruction methods. The experiments are conducted under different conditions, such as different window lengths and different utterance lengths. The speech corpus consists of 20 males and 20 females, each contributing two different utterances. As a result, 80% recognition accuracy is achieved at a 25% SNR.
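
    A minimal sketch of the interpolation idea on a synthetic spectrogram, assuming a simple boolean mask of observed bins: missing elements are filled by linear interpolation along time, along frequency, and by averaging the two directions. The CMU Sphinx experiments and the exact geometrical toolbox are not reproduced.

    ```python
    # Minimal sketch of the interpolation idea (not the CMU Sphinx experiments):
    # fill missing (low-SNR) spectrogram bins by linear interpolation along time,
    # along frequency, and by averaging the two ("time and frequency together").
    import numpy as np

    def interp_1d(vec, observed):
        """Linearly interpolate missing entries of a 1-D array between observed ones."""
        idx = np.arange(len(vec))
        if observed.sum() < 2:
            return vec
        return np.interp(idx, idx[observed], vec[observed])

    def restore(spec, mask):
        """spec: freq x time spectrogram; mask: True where the bin was observed."""
        along_time = np.array([interp_1d(row, m) for row, m in zip(spec, mask)])
        along_freq = np.array([interp_1d(col, m) for col, m in zip(spec.T, mask.T)]).T
        combined = 0.5 * (along_time + along_freq)   # time and frequency together
        out = spec.copy()
        out[~mask] = combined[~mask]                 # keep observed bins untouched
        return out

    rng = np.random.default_rng(0)
    spec = rng.random((64, 100))                     # toy spectrogram
    mask = rng.random(spec.shape) > 0.3              # ~30% of bins "deleted"
    restored = restore(spec, mask)
    print("mean abs error on missing bins:",
          np.abs(restored[~mask] - spec[~mask]).mean())
    ```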

  12. COMPARISON OF DIFFERENT TRUNK ENDURANCE TESTING METHODS IN COLLEGE‐AGED INDIVIDUALS

    PubMed Central

    Krier, Amber D.; Nelson, Julie A.; Rogers, Michael A.; Stuke, Zachariah O.; Smith, Barbara S.

    2012-01-01

    Objective: Determine the reliability of two different modified (MOD1 and MOD2) testing methods compared to a standard method (ST) for testing trunk flexion and extension endurance. Participants: Twenty‐eight healthy individuals (age 26.4 ± 3.2 years, height 1.75 ± m, weight 71.8 ± 10.3 kg, body mass index 23.6 ± 3.4 m/kg2). Method: Trunk endurance time was measured in seconds for flexion and extension under the three different stabilization conditions. The MOD1 testing procedure utilized a female clinician (70.3 kg) and MOD2 utilized a male clinician (90.7 kg) to provide stabilization as opposed to the ST method of belt stabilization. Results: No significant differences occurred between flexion and extension times. Intraclass correlations (ICCs3,1) for the different testing conditions ranged from .79 to .95 (p <.000) and are found in Table 3. Concurrent validity using the ST flexion times as the gold standard coefficients were .95 for MOD1 and .90 for MOD2. For ST extension, coefficients were .91 and .80, for MOD1 and MOD2 respectively (p <.01). Conclusions: These methods proved to be a reliable substitute for previously accepted ST testing methods in normal college‐aged individuals. These modified testing procedures can be implemented in athletic training rooms and weight rooms lacking appropriate tables for the ST testing. Level of Evidence: 3 PMID:23091786

  13. Evaluating different methods used in ethnobotanical and ecological studies to record plant biodiversity

    PubMed Central

    2014-01-01

    Background This study compares the efficiency of identifying the plants in an area of semi-arid Northeast Brazil by methods that a) access the local knowledge used in ethnobotanical studies using semi-structured interviews conducted within the entire community, an inventory interview conducted with two participants using the previously collected vegetation inventory, and a participatory workshop presenting exsiccates and photographs to 32 people and b) inventory the vegetation (phytosociology) in locations with different histories of disturbance using rectangular plots and quadrant points. Methods The proportion of species identified using each method was then compared with Cochran’s Q test. We calculated the use value (UV) of each species using semi-structured interviews; this quantitative index was correlated against values of the vegetation’s structural importance obtained from the sample plot method and point-centered quarter method applied in two areas with different historical usage. The analysis sought to correlate the relative importance of plants to the local community (use value - UV) with the ecological importance of the plants in the vegetation structure (importance value - IV; relative density - RD) by using different sampling methods to analyze the two areas. Results With regard to the methods used for accessing the local knowledge, a difference was observed among the ethnobotanical methods of surveying species (Q = 13.37, df = 2, p = 0.0013): 44 species were identified in the inventory interview, 38 in the participatory workshop and 33 in the semi-structured interviews with the community. There was either no correlation between the UV, relative density (RD) and importance value (IV) of some species, or this correlation was negative. Conclusion It was concluded that the inventory interview was the most efficient method for recording species and their uses, as it allowed more plants to be identified in their original environment. To optimize researchers’ time in future studies, the use of the point-centered quarter method rather than the sample plot method is recommended. PMID:24916833

  14. A review of hybrid implicit explicit finite difference time domain method

    NASA Astrophysics Data System (ADS)

    Chen, Juan

    2018-06-01

    The finite-difference time-domain (FDTD) method has been used extensively to simulate a variety of electromagnetic interaction problems. However, because of its Courant-Friedrichs-Lewy (CFL) condition, the maximum time step size of the method is limited by the minimum cell size used in the computational domain, which makes the FDTD method inefficient for simulating electromagnetic problems with very fine structures. To deal with this problem, the Hybrid Implicit Explicit (HIE)-FDTD method was developed. The HIE-FDTD method uses a hybrid implicit-explicit difference in the direction containing the fine structures to remove the constraint of the fine spatial mesh on the time step size. This gives it much higher computational efficiency than the FDTD method and makes it extremely useful for problems with fine structures in one direction. In this paper, the basic formulations, time stability condition and dispersion error of the HIE-FDTD method are presented. The implementations of several boundary conditions, including the connect boundary, absorbing boundary and periodic boundary, are described, and some applications and important developments of this method are provided. The goal of this paper is to provide a historical overview and future prospects of the HIE-FDTD method.
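
    The time-step argument is easy to quantify. The sketch below evaluates the standard 3-D FDTD CFL limit and, for comparison, the commonly quoted HIE-FDTD limit in which the implicitly treated (fine) direction drops out of the constraint; the cell sizes are illustrative and the HIE expression should be checked against the paper's stability analysis.

    ```python
    # Sketch of the time-step limits discussed above (cell sizes are illustrative).
    # Standard 3-D FDTD:  dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2))
    # HIE-FDTD (implicit along z, as commonly reported in the literature):
    #                     dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2))
    import math

    c = 2.998e8                  # speed of light, m/s
    dx = dy = 1e-3               # coarse cells: 1 mm
    dz = 1e-6                    # fine structure resolved with 1 um cells

    dt_fdtd = 1.0 / (c * math.sqrt(1/dx**2 + 1/dy**2 + 1/dz**2))
    dt_hie  = 1.0 / (c * math.sqrt(1/dx**2 + 1/dy**2))

    print(f"standard FDTD dt limit: {dt_fdtd:.3e} s")
    print(f"HIE-FDTD dt limit:      {dt_hie:.3e} s")
    print(f"gain in allowed time step: {dt_hie / dt_fdtd:.0f}x")
    ```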

  15. Spectrophotometric Methods for Simultaneous Determination of Oxytetracycline HCl and Flunixin Meglumine in Their Veterinary Pharmaceutical Formulation

    PubMed Central

    Abd-Elmonem, Mahmmoud S.; Nazlawy, Hagar N.; Zaazaa, Hala E.

    2017-01-01

    Four precise, accurate, selective, and sensitive UV-spectrophotometric methods were developed and validated for the simultaneous determination of a binary mixture of Oxytetracycline HCl (OXY) and Flunixin Meglumine (FLU). The first method, dual wavelength (DW), depends on measuring the difference in absorbance (ΔA 273.4–327 nm) for the determination of OXY where FLU is zero while FLU is determined at ΔA 251.7–275.7 nm. The second method, first-derivative spectrophotometric method (1D), depends on measuring the peak amplitude of the first derivative selectively at 377 and 266.7 nm for the determination of OXY and FLU, respectively. The third method, ratio difference method, depends on the difference in amplitudes of the ratio spectra at ΔP 286.5–324.8 nm and ΔP 249.6–286.3 nm for the determination of OXY and FLU, respectively. The fourth method, first derivative of ratio spectra method (1DD), depends on measuring the amplitude peak to peak of the first derivative of ratio spectra at 296.7 to 369 nm and 259.1 to 304.7 nm for the determination of OXY and FLU, respectively. Different factors affecting the applied spectrophotometric methods were studied. The proposed methods were validated according to ICH guidelines. Satisfactory results were obtained for determination of both drugs in laboratory prepared mixture and pharmaceutical dosage form. The developed methods are compared favourably with the official ones. PMID:28811956
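
    The ratio-difference step used in the third method can be illustrated with synthetic spectra. In the sketch below, a mixture spectrum is divided by the divisor (interferent) spectrum, and the amplitude difference of the ratio spectrum at two wavelengths depends only on the analyte concentration; the band shapes, wavelengths and concentrations are invented and are not the OXY/FLU values reported above.

    ```python
    # Synthetic illustration of the ratio-difference principle (band shapes and
    # wavelengths are invented; they are not the OXY/FLU values reported above).
    import numpy as np

    wl = np.linspace(220, 400, 901)                   # wavelength grid, nm
    band = lambda center, width: np.exp(-0.5 * ((wl - center) / width) ** 2)
    eps_X = band(280, 18)        # absorptivity profile of analyte X
    eps_Y = band(320, 25)        # profile of interferent Y (used as divisor)
    w1, w2 = 265.0, 300.0        # the two wavelengths used for the difference
    i1, i2 = np.argmin(np.abs(wl - w1)), np.argmin(np.abs(wl - w2))

    def ratio_difference(cX, cY):
        mixture = cX * eps_X + cY * eps_Y             # Beer-Lambert additivity
        ratio = mixture / eps_Y                       # divide by the divisor spectrum
        return ratio[i1] - ratio[i2]                  # delta P at the two wavelengths

    # The interferent contributes only a constant to the ratio spectrum, so the
    # amplitude difference depends on cX alone -- a calibration line through zero.
    for cX in (5, 10, 20, 40):
        print(cX, round(ratio_difference(cX, cY=15), 3), round(ratio_difference(cX, cY=30), 3))
    ```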

  16. Variational finite-difference methods in linear and nonlinear problems of the deformation of metallic and composite shells (review)

    NASA Astrophysics Data System (ADS)

    Maksimyuk, V. A.; Storozhuk, E. A.; Chernyshenko, I. S.

    2012-11-01

    Variational finite-difference methods of solving linear and nonlinear problems for thin and nonthin shells (plates) made of homogeneous isotropic (metallic) and orthotropic (composite) materials are analyzed and their classification principles and structure are discussed. Scalar and vector variational finite-difference methods that implement the Kirchhoff-Love hypotheses analytically or algorithmically using Lagrange multipliers are outlined. The Timoshenko hypotheses are implemented in a traditional way, i.e., analytically. The stress-strain state of metallic and composite shells of complex geometry is analyzed numerically. The numerical results are presented in the form of graphs and tables and used to assess the efficiency of using the variational finite-difference methods to solve linear and nonlinear problems of the statics of shells (plates).

  17. Personalized Privacy-Preserving Frequent Itemset Mining Using Randomized Response

    PubMed Central

    Sun, Chongjing; Fu, Yan; Zhou, Junlin; Gao, Hui

    2014-01-01

    Frequent itemset mining is the important first step of association rule mining, which discovers interesting patterns from massive data. There are increasing concerns about the privacy problem in frequent itemset mining, and some works have been proposed to handle this kind of problem. In this paper, we introduce a personalized privacy problem, in which different attributes may need different levels of privacy protection. To solve this problem, we give a personalized privacy-preserving method using the randomized response technique. By providing different privacy levels for different attributes, this method achieves higher accuracy in frequent itemset mining than the traditional method that provides the same privacy level for all attributes. Finally, our experimental results show that our method gives better results in frequent itemset mining while preserving personalized privacy. PMID:25143989

  18. Personalized privacy-preserving frequent itemset mining using randomized response.

    PubMed

    Sun, Chongjing; Fu, Yan; Zhou, Junlin; Gao, Hui

    2014-01-01

    Frequent itemset mining is the important first step of association rule mining, which discovers interesting patterns from massive data. There are increasing concerns about the privacy problem in frequent itemset mining, and some works have been proposed to handle this kind of problem. In this paper, we introduce a personalized privacy problem, in which different attributes may need different levels of privacy protection. To solve this problem, we give a personalized privacy-preserving method using the randomized response technique. By providing different privacy levels for different attributes, this method achieves higher accuracy in frequent itemset mining than the traditional method that provides the same privacy level for all attributes. Finally, our experimental results show that our method gives better results in frequent itemset mining while preserving personalized privacy.
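
    A minimal sketch of the underlying randomized response technique with per-attribute privacy levels, assuming binary attributes and a fair-coin perturbation: each value is reported truthfully with an attribute-specific probability p, and item supports are recovered with the usual unbiasing correction. This shows the general mechanism only, not the paper's exact personalized scheme.

    ```python
    # Generic sketch of randomized response with per-attribute (personalized)
    # privacy levels; this shows the basic technique, not the paper's exact scheme.
    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_items = 20_000, 4
    true = rng.random((n_users, n_items)) < np.array([0.40, 0.25, 0.10, 0.55])

    # Per-attribute probability of reporting the TRUE value (lower p = more privacy).
    p = np.array([0.9, 0.7, 0.6, 0.8])

    keep = rng.random(true.shape) < p        # report truthfully with probability p
    coin = rng.random(true.shape) < 0.5      # otherwise report a fair coin flip
    reported = np.where(keep, true, coin)

    # Unbiased estimate of the true support of each item:
    #   E[reported] = p*f + (1-p)*0.5  =>  f_hat = (mean(reported) - (1-p)/2) / p
    f_hat = (reported.mean(axis=0) - (1 - p) / 2) / p
    print("true support:     ", true.mean(axis=0).round(3))
    print("estimated support:", f_hat.round(3))
    ```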

  19. Use of methods for specifying the target difference in randomised controlled trial sample size calculations: Two surveys of trialists' practice.

    PubMed

    Cook, Jonathan A; Hislop, Jennifer M; Altman, Doug G; Briggs, Andrew H; Fayers, Peter M; Norrie, John D; Ramsay, Craig R; Harvey, Ian M; Vale, Luke D

    2014-06-01

    Central to the design of a randomised controlled trial (RCT) is a calculation of the number of participants needed. This is typically achieved by specifying a target difference, which enables the trial to identify a difference of a particular magnitude should one exist. Seven methods have been proposed for formally determining what the target difference should be. However, in practice, it may be driven by convenience or some other informal basis. It is unclear how aware the trialist community is of these formal methods or whether they are used. To determine current practice regarding the specification of the target difference by surveying trialists. Two surveys were conducted: (1) Members of the Society for Clinical Trials (SCT): participants were invited to complete an online survey through the society's email distribution list. Respondents were asked about their awareness, use of, and willingness to recommend methods; (2) Leading UK- and Ireland-based trialists: the survey was sent to UK Clinical Research Collaboration registered Clinical Trials Units, Medical Research Council UK Hubs for Trial Methodology Research, and the Research Design Services of the National Institute for Health Research. This survey also included questions about the most recent trial developed by the respondent's group. Survey 1: Of the 1182 members on the SCT membership email distribution list, 180 responses were received (15%). Awareness of methods ranged from 69 (38%) for health economic methods to 162 (90%) for pilot study. Willingness to recommend among those who had used a particular method ranged from 56% for the opinion-seeking method to 89% for the review of evidence-base method. Survey 2: Of the 61 surveys sent out, 34 (56%) responses were received. Awareness of methods ranged from 33 (97%) for the review of evidence-base and pilot methods to 14 (41%) for the distribution method. The highest level of willingness to recommend among users was for the anchor method (87%). Based upon the most recent trial, the target difference was usually one viewed as important by a stakeholder group, mostly also viewed as a realistic difference given the interventions under evaluation, and sometimes one that led to an achievable sample size. The response rates achieved were relatively low despite the surveys being short, well presented, and having utilised reminders. Substantial variations in practice exist with awareness, use, and willingness to recommend methods varying substantially. The findings support the view that sample size calculation is a more complex process than would appear to be the case from trial reports and protocols. Guidance on approaches for sample size estimation may increase both awareness and use of appropriate formal methods. © The Author(s), 2014.
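
    For context, once a target difference has been specified, the conventional two-arm formula for a continuous outcome is n per arm = 2*sigma^2*(z_{1-alpha/2} + z_{1-beta})^2 / delta^2. The sketch below is a textbook calculation of that formula and is not drawn from the surveys themselves; the example numbers are invented.

    ```python
    # Standard two-arm sample-size formula once a target difference is specified
    # (a textbook calculation, included for context; not taken from the surveys).
    #   n per arm = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2
    import math
    from scipy.stats import norm

    def n_per_arm(target_diff, sd, alpha=0.05, power=0.9):
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return math.ceil(2 * (sd * (z_a + z_b) / target_diff) ** 2)

    # Example: a target difference of 5 points on an outcome with SD 12,
    # 5% two-sided alpha and 90% power -> about 122 participants per arm.
    print(n_per_arm(target_diff=5, sd=12, alpha=0.05, power=0.9))
    ```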

  20. Evaluation of Lysis Methods for the Extraction of Bacterial DNA for Analysis of the Vaginal Microbiota.

    PubMed

    Gill, Christina; van de Wijgert, Janneke H H M; Blow, Frances; Darby, Alistair C

    2016-01-01

    Recent studies on the vaginal microbiota have employed molecular techniques such as 16S rRNA gene sequencing to describe the bacterial community as a whole. These techniques require the lysis of bacterial cells to release DNA before purification and PCR amplification of the 16S rRNA gene. Currently, methods for the lysis of bacterial cells are not standardised and there is potential for introducing bias into the results if some bacterial species are lysed less efficiently than others. This study aimed to compare the results of vaginal microbiota profiling using four different pretreatment methods for the lysis of bacterial samples (30 min of lysis with lysozyme, 16 hours of lysis with lysozyme, 60 min of lysis with a mixture of lysozyme, mutanolysin and lysostaphin and 30 min of lysis with lysozyme followed by bead beating) prior to chemical and enzyme-based DNA extraction with a commercial kit. After extraction, DNA yield did not significantly differ between methods with the exception of lysis with lysozyme combined with bead beating which produced significantly lower yields when compared to lysis with the enzyme cocktail or 30 min lysis with lysozyme only. However, this did not result in a statistically significant difference in the observed alpha diversity of samples. The beta diversity (Bray-Curtis dissimilarity) between different lysis methods was statistically significantly different, but this difference was small compared to differences between samples, and did not affect the grouping of samples with similar vaginal bacterial community structure by hierarchical clustering. An understanding of how laboratory methods affect the results of microbiota studies is vital in order to accurately interpret the results and make valid comparisons between studies. Our results indicate that the choice of lysis method does not prevent the detection of effects relating to the type of vaginal bacterial community one of the main outcome measures of epidemiological studies. However, we recommend that the same method is used on all samples within a particular study.

  1. Propensity-score matching in economic analyses: comparison with regression models, instrumental variables, residual inclusion, differences-in-differences, and decomposition methods.

    PubMed

    Crown, William H

    2014-02-01

    This paper examines the use of propensity score matching in economic analyses of observational data. Several excellent papers have previously reviewed practical aspects of propensity score estimation and other aspects of the propensity score literature. The purpose of this paper is to compare the conceptual foundation of propensity score models with alternative estimators of treatment effects. References are provided to empirical comparisons among methods that have appeared in the literature. These comparisons are available for a subset of the methods considered in this paper. However, in some cases, no pairwise comparisons of particular methods are yet available, and there are no examples of comparisons across all of the methods surveyed here. Irrespective of the availability of empirical comparisons, the goal of this paper is to provide some intuition about the relative merits of alternative estimators in health economic evaluations where nonlinearity, sample size, availability of pre/post data, heterogeneity, and missing variables can have important implications for choice of methodology. Also considered is the potential combination of propensity score matching with alternative methods such as differences-in-differences and decomposition methods that have not yet appeared in the empirical literature.
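
    A hedged sketch of the basic propensity-score matching estimator on synthetic data: fit a logistic model for treatment, match each treated unit to the nearest control on the estimated score, and contrast outcomes within matched pairs. It illustrates the mechanics only and is not any analysis or comparison from the paper.

    ```python
    # Generic sketch of 1:1 nearest-neighbour propensity-score matching on
    # synthetic data (for intuition only; not any analysis from the paper).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    n = 5_000
    X = rng.standard_normal((n, 3))                       # observed confounders
    p_treat = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
    treated = rng.random(n) < p_treat
    y = 2.0 * treated + X @ np.array([1.0, -1.0, 0.5]) + rng.standard_normal(n)

    naive = y[treated].mean() - y[~treated].mean()        # confounded contrast

    # 1) Estimate propensity scores, 2) match each treated unit to the control
    #    with the closest score, 3) contrast outcomes within matched pairs.
    ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
    att = (y[treated] - y[~treated][idx.ravel()]).mean()

    print(f"true effect: 2.0, naive: {naive:.2f}, matched (ATT): {att:.2f}")
    ```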

  2. Wavelength selection for portable noninvasive blood component measurement system based on spectral difference coefficient and dynamic spectrum

    NASA Astrophysics Data System (ADS)

    Feng, Ximeng; Li, Gang; Yu, Haixia; Wang, Shaohui; Yi, Xiaoqing; Lin, Ling

    2018-03-01

    Noninvasive blood component analysis by spectroscopy has been a hotspot in biomedical engineering in recent years. The dynamic spectrum provides an excellent idea for noninvasive blood component measurement, but studies have been limited to the application of broadband light sources and high-resolution spectroscopy instruments. In order to remove redundant information, a more effective wavelength selection method is presented in this paper. In contrast to many common wavelength selection methods, this method is based on the sensing mechanism, so it is clearly interpretable and can effectively avoid noise from the acquisition system. The spectral difference coefficient was theoretically proved to have guiding significance for wavelength selection. Following this theoretical analysis, a multi-band spectral difference coefficient wavelength selection method combined with the dynamic spectrum was proposed. An experimental analysis based on clinical trial data from 200 volunteers was conducted to illustrate the effectiveness of the method. An extreme learning machine was used to develop calibration models between the dynamic spectrum data and hemoglobin concentration. The experimental results show that the prediction precision of hemoglobin concentration is higher with the multi-band spectral difference coefficient wavelength selection method than with other methods.

  3. A multi-strategy approach to informative gene identification from gene expression data.

    PubMed

    Liu, Ziying; Phan, Sieu; Famili, Fazel; Pan, Youlian; Lenferink, Anne E G; Cantin, Christiane; Collins, Catherine; O'Connor-McCourt, Maureen D

    2010-02-01

    An unsupervised multi-strategy approach has been developed to identify informative genes from high throughput genomic data. Several statistical methods have been used in the field to identify differentially expressed genes. Since different methods generate different lists of genes, it is very challenging to determine the most reliable gene list and the appropriate method. This paper presents a multi-strategy method, in which a combination of several data analysis techniques are applied to a given dataset and a confidence measure is established to select genes from the gene lists generated by these techniques to form the core of our final selection. The remainder of the genes, which form the peripheral region, are subject to exclusion from or inclusion into the final selection. This paper demonstrates this methodology through its application to an in-house cancer genomics dataset and a public dataset. The results indicate that our method provides a more reliable list of genes, which is validated using biological knowledge, biological experiments, and literature search. We further evaluated our multi-strategy method by consolidating two pairs of independent datasets, each pair for the same disease but generated by different labs using different platforms. The results showed that our method produced far better results.

  4. Simultaneous determination of Fluticasone propionate and Azelastine hydrochloride in the presence of pharmaceutical dosage form additives

    NASA Astrophysics Data System (ADS)

    Merey, Hanan A.; El-Mosallamy, Sally S.; Hassan, Nagiba Y.; El-Zeany, Badr A.

    2016-05-01

    Fluticasone propionate (FLU) and Azelastine hydrochloride (AZE) are co-formulated with phenylethyl alcohol (PEA) and benzalkonium chloride (BENZ) as preservatives in a pharmaceutical dosage form for the treatment of seasonal allergies. Different spectrophotometric methods were used for the simultaneous determination of the cited drugs in the dosage form. A direct spectrophotometric method was used for the determination of AZE, while derivative of double divisor of ratio spectra (DD-RS), ratio subtraction coupled with ratio difference (RS-RD) and mean centering of ratio spectra (MCR) were used for the determination of FLU. The linearity of the proposed methods was investigated in the ranges of 5.00-40.00 and 5.00-80.00 μg/mL for FLU and AZE, respectively. The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures containing different ratios of the cited drugs in addition to PEA, as well as their pharmaceutical dosage form. The validity of the proposed methods was assessed using the standard addition technique. The obtained results were statistically compared with those obtained by the official or the reported method for FLU or AZE, respectively, showing no significant difference with respect to accuracy and precision at p = 0.05.

  5. Novel approach for the simultaneous detection of DNA from different fish species based on a nuclear target: quantification potential.

    PubMed

    Prado, Marta; Boix, Ana; von Holst, Christoph

    2012-07-01

    The development of DNA-based methods for the identification and quantification of fish in food and feed samples is frequently focused on a specific fish species and/or on the detection of mitochondrial DNA of fish origin. However, a quantitative method for the most common fish species used by the food and feed industry is needed for official control purposes, and such a method should rely on the use of a single-copy nuclear DNA target owing to its more stable copy number in different tissues. In this article, we report on the development of a real-time PCR method based on the use of a nuclear gene as a target for the simultaneous detection of fish DNA from different species and on the evaluation of its quantification potential. The method was tested in 22 different fish species, including those most commonly used by the food and feed industry, and in negative control samples, which included 15 animal species and nine feed ingredients. The results show that the method reported here complies with the requirements concerning specificity and with the criteria required for real-time PCR methods with high sensitivity.

  6. Effectiveness of different tutorial recitation teaching methods and its implications for TA training

    NASA Astrophysics Data System (ADS)

    Endorf, Robert

    2008-04-01

    We present results from a comparative study of student understanding for students who attended recitation classes that used different teaching methods. The purpose of the study was to evaluate which teaching methods would be the most effective for recitation classes associated with large lectures in introductory physics courses. Student volunteers from our introductory calculus-based physics course at the University of Cincinnati attended a special recitation class that was taught using one of four different teaching methods. A total of 272 students were divided into approximately equal groups for each method. Students in each class were taught the same topic, ``Changes in Energy and Momentum,'' from ``Tutorials in Introductory Physics'' by Lillian McDermott, Peter Shaffer and the Physics Education Group at the University of Washington. The different teaching methods varied in the amount of student and teacher engagement. Student understanding was evaluated through pretests and posttests. Our results demonstrate the importance of the instructor's role in teaching recitation classes. The most effective teaching method was for students working in cooperative learning groups with the instructors questioning the groups using Socratic dialogue. In addition, we investigated student preferences of modes of instruction through an open-ended survey. Our results provide guidance and evidence for the teaching methods which should be emphasized in training course instructors.

  7. Comparison of methods for determination of total oil sands-derived naphthenic acids in water samples.

    PubMed

    Hughes, Sarah A; Huang, Rongfu; Mahaffey, Ashley; Chelme-Ayala, Pamela; Klamerth, Nikolaus; Meshref, Mohamed N A; Ibrahim, Mohamed D; Brown, Christine; Peru, Kerry M; Headley, John V; Gamal El-Din, Mohamed

    2017-11-01

    There are several established methods for the determination of naphthenic acids (NAs) in waters associated with oil sands mining operations. Due to their highly complex nature, measured concentration and composition of NAs vary depending on the method used. This study compared different common sample preparation techniques, analytical instrument methods, and analytical standards to measure NAs in groundwater and process water samples collected from an active oil sands operation. In general, the high- and ultrahigh-resolution methods, namely high performance liquid chromatography time-of-flight mass spectrometry (UPLC-TOF-MS) and Orbitrap mass spectrometry (Orbitrap-MS), were within an order of magnitude of the Fourier transform infrared spectroscopy (FTIR) methods. The gas chromatography mass spectrometry (GC-MS) methods consistently had the highest NA concentrations and greatest standard error. Total NAs concentration was not statistically different between sample preparation of solid phase extraction and liquid-liquid extraction. Calibration standards influenced quantitation results. This work provided a comprehensive understanding of the inherent differences in the various techniques available to measure NAs and hence the potential differences in measured amounts of NAs in samples. Results from this study will contribute to the analytical method standardization for NA analysis in oil sands related water samples. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Comparison of Different Drying Methods for Recovery of Mushroom DNA.

    PubMed

    Wang, Shouxian; Liu, Yu; Xu, Jianping

    2017-06-07

    Several methods have been reported for drying mushroom specimens for population genetic, taxonomic, and phylogenetic studies. However, most methods have not been directly compared for their effectiveness in preserving mushroom DNA. In this study, we compared silica gel drying at ambient temperature and oven drying at seven different temperatures. Two mushroom species representing two types of fruiting bodies were examined: the fleshy button mushroom Agaricus bisporus and the leathery shelf fungus Trametes versicolor. For each species dried with the eight methods, we assessed the mushroom water loss rate, the quality and quantity of extracted DNA, and the effectiveness of using the extracted DNA as a template for PCR amplification of two DNA fragments (ITS and a single copy gene). Dried specimens from all tested methods yielded sufficient DNA for PCR amplification of the two genes in both species. However, differences among the methods for the two species were found in: (i) the time required by different drying methods for the fresh mushroom tissue to reach a stable weight; and (ii) the relative quality and quantity of the extracted genomic DNA. Among these methods, oven drying at 70 °C for 3-4 h seemed the most efficient for preserving field mushroom samples for subsequent molecular work.

  9. A Comparative Study of Four Methods for the Detection of Nematode Eggs and Large Protozoan Cysts in Mandrill Faecal Material.

    PubMed

    Pouillevet, Hanae; Dibakou, Serge-Ely; Ngoubangoye, Barthélémy; Poirotte, Clémence; Charpentier, Marie J E

    2017-01-01

    Coproscopical methods like sedimentation and flotation techniques are widely used in the field for studying simian gastrointestinal parasites. Four parasites of known zoonotic potential were studied in a free-ranging, non-provisioned population of mandrills (Mandrillus sphinx): 2 nematodes (Necator americanus/Oesophagostomum sp. complex and Strongyloides sp.) and 2 protozoan species (Balantidium coli and Entamoeba coli). Different coproscopical techniques are available, but they are rarely compared to evaluate their efficiency in retrieving parasites. In this study 4 different field-friendly methods were compared. A sedimentation method and 3 different McMaster methods (using sugar, salt, and zinc sulphate solutions) were performed on 47 faecal samples collected from different individuals of both sexes and all ages. First, we show that McMaster flotation methods are appropriate for detecting and thus quantifying large protozoan cysts. Second, zinc sulphate McMaster flotation allows the retrieval of a higher number of parasite taxa compared to the other 3 methods. This method further shows the highest probability of detecting each of the studied parasite taxa. Altogether our results show that zinc sulphate McMaster flotation appears to be the best technique to use when studying nematodes and large protozoa. © 2017 S. Karger AG, Basel.

  10. Comparison of parameter-adapted segmentation methods for fluorescence micrographs.

    PubMed

    Held, Christian; Palmisano, Ralf; Häberle, Lothar; Hensel, Michael; Wittenberg, Thomas

    2011-11-01

    Interpreting images from fluorescence microscopy is often a time-consuming task with poor reproducibility. Various image processing routines that can help investigators evaluate the images are therefore useful. The critical aspect for a reliable automatic image analysis system is a robust segmentation algorithm that can perform accurate segmentation for different cell types. In this study, several image segmentation methods were therefore compared and evaluated in order to identify the most appropriate segmentation schemes that are usable with little new parameterization and robustly with different types of fluorescence-stained cells for various biological and biomedical tasks. The study investigated, compared, and enhanced four different methods for segmentation of cultured epithelial cells. The maximum-intensity linking (MIL) method, an improved MIL, a watershed method, and an improved watershed method based on morphological reconstruction were used. Three manually annotated datasets consisting of 261, 817, and 1,333 HeLa or L929 cells were used to compare the different algorithms. The comparisons and evaluations showed that the segmentation performance of methods based on the watershed transform was significantly superior to the performance of the MIL method. The results also indicate that using morphological opening by reconstruction can improve the segmentation of cells stained with a marker that exhibits the dotted surface of cells. Copyright © 2011 International Society for Advancement of Cytometry.
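
    As a generic baseline for the watershed family discussed above, the sketch below follows the standard scikit-image recipe for marker-controlled watershed on a distance transform; it is not the authors' enhanced MIL or morphological-reconstruction variants, and the toy image stands in for a fluorescence micrograph.

    ```python
    # Standard marker-controlled watershed recipe from scikit-image (a generic
    # baseline; not the authors' enhanced MIL/watershed variants or their data).
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def segment_cells(binary_mask, min_distance=10):
        """Split touching foreground objects in a binary fluorescence mask."""
        distance = ndi.distance_transform_edt(binary_mask)
        coords = peak_local_max(distance, min_distance=min_distance, labels=binary_mask)
        markers_mask = np.zeros(distance.shape, dtype=bool)
        markers_mask[tuple(coords.T)] = True
        markers, _ = ndi.label(markers_mask)
        return watershed(-distance, markers, mask=binary_mask)

    # Toy example: two overlapping discs that plain connected components would merge.
    yy, xx = np.mgrid[0:100, 0:100]
    binary = ((xx - 40) ** 2 + (yy - 50) ** 2 < 18 ** 2) | \
             ((xx - 65) ** 2 + (yy - 50) ** 2 < 18 ** 2)
    labels = segment_cells(binary, min_distance=15)
    print("objects found:", labels.max())    # expected: 2
    ```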

  11. Knowledge, beliefs and use of nursing methods in preventing pressure sores in Dutch hospitals.

    PubMed

    Halfens, R J; Eggink, M

    1995-02-01

    Different methods have been developed in the past to prevent patients from developing pressure sores. The consensus guidelines developed in the Netherlands make a distinction between preventive methods useful for all patients, methods useful only in individual cases, and methods which are not useful at all. This study explores the extent of use of the different methods within Dutch hospitals, and the knowledge and beliefs of nurses regarding the usefulness of these methods. A mail questionnaire was sent to a representative sample of nurses working within Dutch hospitals. A total of 373 questionnaires were returned and used for the analyses. The results showed that many methods judged by the consensus report as not useful, or only useful in individual cases, are still being used. Some methods which are judged as useful, like the use of a risk assessment scale, are used on only a few wards. The opinions of nurses regarding the usefulness of the methods differ from the guidelines of the consensus committee. Although there is agreement about most of the useful methods, there is less agreement about the methods which are useful in individual cases or the methods which are not useful at all. In particular, the use of massage and cream is, in the opinion of the nurses, useful in individual cases or in all cases.

  12. Comparing four methods to estimate usual intake distributions.

    PubMed

    Souverein, O W; Dekkers, A L; Geelen, A; Haubrock, J; de Vries, J H; Ocké, M C; Harttig, U; Boeing, H; van 't Veer, P

    2011-07-01

    The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As 'true' usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data from the European Food Consumption Validation (EFCOVAL) Study in which two 24-h dietary recalls (24-HDRs) and food frequency data were collected. The methods being compared were the Iowa State University Method (ISU), National Cancer Institute Method (NCI), Multiple Source Method (MSM) and Statistical Program for Age-adjusted Dietary Assessment (SPADE). Simulation data were constructed with varying numbers of subjects (n), different values for the Box-Cox transformation parameter (λ(BC)) and different values for the ratio of the within- and between-person variance (r(var)). All data were analyzed with the four different methods and the estimated usual mean intake and selected percentiles were obtained. Moreover, the 2-day within-person mean was estimated as an additional 'method'. These five methods were compared in terms of the mean bias, which was calculated as the mean of the differences between the estimated value and the known true value. The application of data from the EFCOVAL Project included calculations of nutrients (that is, protein, potassium, protein density) and foods (that is, vegetables, fruit and fish). Overall, the mean bias of the ISU, NCI, MSM and SPADE Methods was small. However, for all methods, the mean bias and the variation of the bias increased with smaller sample size, higher variance ratios and with more pronounced departures from normality. Serious mean bias (especially in the 95th percentile) was seen using the NCI Method when r(var) = 9, λ(BC) = 0 and n = 1000. The ISU Method and MSM showed a somewhat higher s.d. of the bias compared with NCI and SPADE Methods, indicating a larger method uncertainty. Furthermore, whereas the ISU, NCI and SPADE Methods produced unimodal density functions by definition, MSM produced distributions with 'peaks', when sample size was small, because of the fact that the population's usual intake distribution was based on estimated individual usual intakes. The application to the EFCOVAL data showed that all estimates of the percentiles and mean were within 5% of each other for the three nutrients analyzed. For vegetables, fruit and fish, the differences were larger than that for nutrients, but overall the sample mean was estimated reasonably. The four methods that were compared seem to provide good estimates of the usual intake distribution of nutrients. Nevertheless, care needs to be taken when a nutrient has a high within-person variation or has a highly skewed distribution, and when the sample size is small. As the methods offer different features, practical reasons may exist to prefer one method over the other.

  13. The study on biomass fraction estimate methodology of municipal solid waste incinerator in Korea.

    PubMed

    Kang, Seongmin; Kim, Seungjin; Lee, Jeongwoo; Yun, Hyunki; Kim, Ki-Hyun; Jeon, Eui-Chan

    2016-10-01

    In Korea, the amount of greenhouse gases released due to waste materials was 14,800,000 t CO2eq in 2012, which increased from 5,000,000 t CO2eq in 2010. This included the amount released due to incineration, which has gradually increased since 2010. Incineration was found to be the biggest contributor to greenhouse gases, with 7,400,000 t CO2eq released in 2012. Therefore, with regard to the greenhouse gas emissions trading initiated in 2015 and the writing of the national inventory report, it is important to increase the reliability of the measurements related to the incineration of waste materials. This research explored methods for estimating the biomass fraction at Korean MSW incinerator facilities and compared the biomass fractions obtained with the different biomass fraction estimation methods. The biomass fraction was estimated by the method using default values of fossil carbon fraction suggested by the IPCC, the method using the solid waste composition, and the method using incinerator flue gas. The highest biomass fractions in Korean municipal solid waste incinerator facilities were estimated by the IPCC Default method, followed by the MSW analysis method and the Flue gas analysis method. Therefore, the difference in the biomass fraction estimate was the greatest between the IPCC Default and the Flue gas analysis methods. The difference between the MSW analysis and the Flue gas analysis methods was smaller than the difference with the IPCC Default method. This suggested that the IPCC default method cannot reflect the characteristics of Korean waste incinerator facilities and Korean MSW. Incineration is one of the most effective methods for disposal of municipal solid waste (MSW). This paper investigates the applicability of using biomass content to estimate the amount of CO2 released, and compares the biomass contents determined by different methods in order to establish a method for estimating biomass in the MSW incinerator facilities of Korea. After analyzing the biomass contents of the collected solid waste samples and the flue gas samples, the results were compared with the Intergovernmental Panel on Climate Change (IPCC) method, and it appears better to calculate the biomass fraction with the flue gas analysis method than with the IPCC method. These results are valuable for the design and operation of new incineration power plants, especially for the estimation of greenhouse gas emissions.

  14. Testing the Difference of Correlated Agreement Coefficients for Statistical Significance

    ERIC Educational Resources Information Center

    Gwet, Kilem L.

    2016-01-01

    This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…

  15. Comparison of macro-gravimetric and micro-colorimetric lipid determination methods.

    PubMed

    Inouye, Laura S; Lotufo, Guiherme R

    2006-10-15

    In order to validate a method for lipid analysis of small tissue samples, the standard macro-gravimetric method of Bligh-Dyer (1959) [E.G. Bligh, W.J. Dyer, Can. J. Biochem. Physiol. 37 (1959) 911] and a modification of the micro-colorimetric assay developed by Van Handel (1985) [E. Van Handel, J. Am. Mosq. Control Assoc. 1 (1985) 302] were compared. No significant differences were observed for wet tissues of two species of fish. However, limited analysis of wet tissue of the amphipod, Leptocheirus plumulosus, indicated that the Bligh-Dyer gravimetric method generated higher lipid values, most likely due to the inclusion of non-lipid materials. Additionally, significant differences between the methods were observed with dry tissues, with the micro-colorimetric method consistently reporting calculated lipid values greater than those reported by the gravimetric method. This was most likely due to poor extraction of dry tissue in the standard Bligh-Dyer method, as no significant differences were found when analyzing a single composite extract. The data presented support the conclusion that the micro-colorimetric method described in this paper is accurate, rapid, and minimizes time and solvent use.

  16. A Review of High-Order and Optimized Finite-Difference Methods for Simulating Linear Wave Phenomena

    NASA Technical Reports Server (NTRS)

    Zingg, David W.

    1996-01-01

    This paper presents a review of high-order and optimized finite-difference methods for numerically simulating the propagation and scattering of linear waves, such as electromagnetic, acoustic, or elastic waves. The spatial operators reviewed include compact schemes, non-compact schemes, schemes on staggered grids, and schemes which are optimized to produce specific characteristics. The time-marching methods discussed include Runge-Kutta methods, Adams-Bashforth methods, and the leapfrog method. In addition, the following fourth-order fully-discrete finite-difference methods are considered: a one-step implicit scheme with a three-point spatial stencil, a one-step explicit scheme with a five-point spatial stencil, and a two-step explicit scheme with a five-point spatial stencil. For each method studied, the number of grid points per wavelength required for accurate simulation of wave propagation over large distances is presented. Recommendations are made with respect to the suitability of the methods for specific problems and practical aspects of their use, such as appropriate Courant numbers and grid densities. Avenues for future research are suggested.
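
    The points-per-wavelength argument can be demonstrated with the familiar second- and fourth-order central-difference stencils for a first derivative. The sketch below differentiates a sine wave on progressively finer periodic grids; it is a generic illustration, not one of the specific optimized or compact schemes reviewed in the paper.

    ```python
    # Sketch: 2nd- vs 4th-order central differences for d/dx of a sine wave,
    # illustrating why higher-order stencils need fewer points per wavelength.
    import numpy as np

    def derivative(u, h, order):
        """First derivative on a periodic grid with central differences."""
        if order == 2:
            return (np.roll(u, -1) - np.roll(u, 1)) / (2 * h)
        if order == 4:
            return (-np.roll(u, -2) + 8 * np.roll(u, -1)
                    - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * h)
        raise ValueError("order must be 2 or 4")

    wavelength = 2 * np.pi
    for ppw in (8, 16, 32):                      # points per wavelength
        x = np.linspace(0.0, wavelength, ppw, endpoint=False)
        h = wavelength / ppw
        u = np.sin(x)
        for order in (2, 4):
            err = np.max(np.abs(derivative(u, h, order) - np.cos(x)))
            print(f"ppw={ppw:3d} order={order}: max error = {err:.2e}")
    ```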

  17. Methods of scaling threshold color difference using printed samples

    NASA Astrophysics Data System (ADS)

    Huang, Min; Cui, Guihua; Liu, Haoxue; Luo, M. Ronnier

    2012-01-01

    A series of printed samples on a semi-gloss paper substrate, with color differences near the perceptibility threshold, were prepared in order to scale visual color difference and to evaluate the performance of different scaling methods. The probabilities of perceptibility were normalized to Z-scores, and the different color differences were scaled against these Z-scores. The resulting visual color-difference scale was obtained and checked with the STRESS factor. The results indicated that only the scales changed, while the relative scaling between pairs in the data was preserved.
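
    The probit step described above is straightforward to reproduce. Assuming invented perceptibility proportions for a few sample pairs, the sketch below converts each proportion to a Z-score with the inverse normal cumulative distribution function.

    ```python
    # Minimal sketch of the probit step described above: convert the proportion
    # of observers who perceive a difference into a Z-score (numbers invented).
    from scipy.stats import norm

    # proportion of "perceptible" judgements for each printed sample pair
    proportions = [0.18, 0.35, 0.50, 0.71, 0.86]
    z_scores = [norm.ppf(p) for p in proportions]

    for p, z in zip(proportions, z_scores):
        print(f"p = {p:.2f}  ->  Z = {z:+.3f}")   # 0.50 maps to Z = 0, by definition
    ```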

  18. Smart Methods for Linezolid Determination in the Presence of Alkaline and Oxidative Degradation Products Utilizing Their Overlapped Spectral Bands

    NASA Astrophysics Data System (ADS)

    Abd El-Monem Hegazy, M.; Shaaban Eissa, M.; Abd El-Sattar, O. I.; Abd El-Kawy, M. M.

    2014-09-01

    Linezolid (LIN) is considered the first available oxazolidinone antibacterial agent. It is susceptible to hydrolysis and oxidation. Five simple, accurate, sensitive and validated UV spectrophotometric methods were developed for LIN determination in the presence of its alkaline (ALK) and oxidative (OXD) degradation products in bulk powder and pharmaceutical formulation. Method A is a second derivative one (D2) in which LIN is determined at 240.9 nm. Method B is a pH-induced differential derivative one where LIN is determined using the fourth derivative (D4) of the difference spectra (ΔA) at 285.3 nm. Methods C, D, and E manipulate ratio spectra: C is the double divisor-ratio difference spectrophotometric method (DD-RD), in which LIN was determined by calculating the amplitude difference at 243.7 and 267.6 nm of the ratio spectra. Method D is the double divisor-first derivative of ratio spectra (DD-DD1) in which LIN was determined at 270.2 nm. Method E is a mean centering of ratio spectra one (MCR) in which LIN was determined at 318.0 nm. The developed methods have been validated according to ICH guidelines. The results were statistically compared to those of a reported HPLC method, and there was no significant difference regarding either accuracy or precision.

  19. Carbon storage in Chinese grassland ecosystems: Influence of different integrative methods.

    PubMed

    Ma, Anna; He, Nianpeng; Yu, Guirui; Wen, Ding; Peng, Shunlei

    2016-02-17

    The accurate estimate of grassland carbon (C) is affected by many factors at the large scale. Here, we used six methods (three spatial interpolation methods and three grassland classification methods) to estimate C storage of Chinese grasslands based on published data from 2004 to 2014, and assessed the uncertainty resulting from different integrative methods. The uncertainty (coefficient of variation, CV, %) of grassland C storage was approximately 4.8% for the six methods tested, which was mainly determined by soil C storage. C density and C storage to the soil layer depth of 100 cm were estimated to be 8.46 ± 0.41 kg C m(-2) and 30.98 ± 1.25 Pg C, respectively. Ecosystem C storage was composed of 0.23 ± 0.01 (0.7%) above-ground biomass, 1.38 ± 0.14 (4.5%) below-ground biomass, and 29.37 ± 1.2 (94.8%) Pg C in the 0-100 cm soil layer. Carbon storage calculated by the grassland classification methods (18 grassland types) was closer to the mean value than those calculated by the spatial interpolation methods. Differences in integrative methods may partially explain the high uncertainty in C storage estimates in different studies. This first evaluation demonstrates the importance of multi-methodological approaches to accurately estimate C storage in large-scale terrestrial ecosystems.

  20. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has attracted more and more attention. This paper studies a face recognition system comprising face detection, feature extraction, and face recognition, examining the theory and key technologies of various preprocessing methods in the face detection process; using the KPCA method, it focuses on how recognition results differ under different preprocessing methods. We choose the YCbCr color space for skin segmentation and integral projection for face location. We preprocess face images with erosion and dilation (opening and closing operations) and an illumination compensation method, and then apply a face recognition method based on kernel principal component analysis; experiments were carried out on a typical face database, with all algorithms implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel extension of the PCA algorithm extracts features that represent the original image information better, because it is a nonlinear feature extraction method, and therefore yields a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can produce different results and hence different recognition rates in the recognition stage. In addition, in kernel principal component analysis, the degree (power) of the polynomial kernel function can affect the recognition result.
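
    The sketch below illustrates the general KPCA-plus-classifier pipeline and the effect of the polynomial kernel degree, using scikit-learn's Olivetti faces as a stand-in data set and a 1-nearest-neighbour classifier. The data set, classifier choice, number of components, and degrees are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import fetch_olivetti_faces   # stand-in face data set
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load a small face data set and split it into training and test images.
faces = fetch_olivetti_faces()
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.3, random_state=0, stratify=faces.target)

# Vary the degree (the "power") of the polynomial kernel, which the abstract
# notes can affect the recognition result.
for degree in (2, 3, 4):
    kpca = KernelPCA(n_components=100, kernel="poly", degree=degree)
    Z_train = kpca.fit_transform(X_train)
    Z_test = kpca.transform(X_test)
    clf = KNeighborsClassifier(n_neighbors=1).fit(Z_train, y_train)
    print(f"poly degree {degree}: accuracy = {clf.score(Z_test, y_test):.3f}")
```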

  1. Methods for measuring denitrification: Diverse approaches to a difficult problem

    USGS Publications Warehouse

    Groffman, Peter M.; Altabet, Mary A.; Böhlke, J.K.; Butterbach-Bahl, Klaus; David, Mary B.; Firestone, Mary K.; Giblin, Anne E.; Kana, Todd M.; Nielsen, Lars Peter; Voytek, Mary A.

    2006-01-01

    Denitrification, the reduction of the nitrogen (N) oxides, nitrate (NO3−) and nitrite (NO2−), to the gases nitric oxide (NO), nitrous oxide (N2O), and dinitrogen (N2), is important to primary production, water quality, and the chemistry and physics of the atmosphere at ecosystem, landscape, regional, and global scales. Unfortunately, this process is very difficult to measure, and existing methods are problematic for different reasons in different places at different times. In this paper, we review the major approaches that have been taken to measure denitrification in terrestrial and aquatic environments and discuss the strengths, weaknesses, and future prospects for the different methods. Methodological approaches covered include (1) acetylene-based methods, (2) 15N tracers, (3) direct N2 quantification, (4) N2:Ar ratio quantification, (5) mass balance approaches, (6) stoichiometric approaches, (7) methods based on stable isotopes, (8) in situ gradients with atmospheric environmental tracers, and (9) molecular approaches. Our review makes it clear that the prospects for improved quantification of denitrification vary greatly in different environments and at different scales. While current methodology allows for the production of accurate estimates of denitrification at scales relevant to water and air quality and ecosystem fertility questions in some systems (e.g., aquatic sediments, well-defined aquifers), methodology for other systems, especially upland terrestrial areas, still needs development. Comparison of mass balance and stoichiometric approaches that constrain estimates of denitrification at large scales with point measurements (made using multiple methods), in multiple systems, is likely to propel more improvement in denitrification methods over the next few years.

  2. Evaluation of methods to assess physical activity in free-living conditions.

    PubMed

    Leenders, N Y; Sherman, W M; Nagaraja, H N; Kien, C L

    2001-07-01

    The purpose of this study was to compare different methods of measuring physical activity (PA) in women against the doubly labeled water (DLW) method. Thirteen subjects participated in a 7-d protocol during which total daily energy expenditure (TDEE) was measured with DLW. Body composition, basal metabolic rate (BMR), and peak oxygen consumption were also measured. Physical activity-related energy expenditure (PAEE) was then calculated by subtracting measured BMR and the estimated thermic effect of food from TDEE. Simultaneously, over the 7 d, PA was assessed via a 7-d Physical Activity Recall questionnaire (PAR), and subjects wore, secured at the waist, a Tritrac-R3D (Madison, WI), a Computer Science Application Inc. activity monitor (CSA; Shalimar, FL), and a Yamax Digi Walker-500 (Tokyo, Japan). Pearson product-moment correlations were calculated to determine the relationships among the different methods for estimating PAEE. Paired t-tests with appropriate adjustments were used to compare the different methods with DLW-PAEE. There was no significant difference between PAEE determined from PAR and DLW. The differences between the two methods ranged from -633 to 280 kcal.d(-1). Compared with DLW, PAEE determined from CSA, Tritrac, and Yamax was significantly underestimated by 59% (-495 kcal.d(-1)), 35% (-320 kcal.d(-1)) and 59% (-497 kcal.d(-1)), respectively. VO2peak explained 43% of the variation in DLW-PAEE. Although the group average for PAR-PAEE agreed with DLW-PAEE, there were differences between the methods among the subjects. PAEE determined by Tritrac, CSA, and Yamax significantly underestimates free-living PAEE in women.
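
    A minimal sketch of the PAEE bookkeeping and the statistics described above, using invented numbers; the 10% thermic-effect-of-food approximation and all example values are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject values (kcal/day); TEF is approximated as 10% of TDEE,
# a common assumption (the study estimated the thermic effect of food).
tdee_dlw = np.array([2400, 2550, 2300, 2650, 2500], dtype=float)   # from DLW
bmr      = np.array([1350, 1400, 1300, 1450, 1380], dtype=float)   # measured BMR
paee_dlw = tdee_dlw - bmr - 0.10 * tdee_dlw                        # criterion PAEE

paee_monitor = np.array([450, 520, 400, 560, 480], dtype=float)    # e.g. accelerometer-based estimate

# Pearson correlation and a paired t-test against the DLW criterion,
# mirroring the analysis described in the abstract.
r, _ = stats.pearsonr(paee_monitor, paee_dlw)
t, p = stats.ttest_rel(paee_monitor, paee_dlw)
bias = np.mean(paee_monitor - paee_dlw)
print(f"r = {r:.2f}, mean bias = {bias:.0f} kcal/day, paired t-test p = {p:.3f}")
```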

  3. Methods for specifying spatial boundaries of cities in the world: The impacts of delineation methods on city sustainability indices.

    PubMed

    Uchiyama, Yuta; Mori, Koichiro

    2017-08-15

    The purpose of this paper is to analyze how different definitions and methods for delineating the spatial boundaries of cities have an impact on the values of city sustainability indicators. It is necessary to distinguish the inside of cities from the outside when calculating the values of sustainability indicators that assess the impacts of human activities within cities on areas beyond their boundaries. For this purpose, spatial boundaries of cities should be practically detected on the basis of a relevant definition of a city. Although no definition of a city is commonly shared among academic fields, three practical methods for identifying urban areas are available in remote sensing science. Those practical methods are based on population density, landcover, and night-time lights. These methods are correlated, but non-negligible differences exist in their determination of urban extents and urban population. Furthermore, critical and statistically significant differences in some urban environmental sustainability indicators result from the three different urban detection methods. For example, the average values of CO2 emissions per capita and PM10 concentration in cities with more than 1 million residents are significantly different among the definitions. When analyzing city sustainability indicators and disseminating the implication of the results, the values based on the different definitions should be simultaneously investigated. It is necessary to carefully choose a relevant definition to analyze sustainability indicators for policy making. Otherwise, ineffective and inefficient policies will be developed. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Downscaling Global Emissions and Its Implications Derived from Climate Model Experiments

    PubMed Central

    Abe, Manabu; Kinoshita, Tsuguki; Hasegawa, Tomoko; Kawase, Hiroaki; Kushida, Kazuhide; Masui, Toshihiko; Oka, Kazutaka; Shiogama, Hideo; Takahashi, Kiyoshi; Tatebe, Hiroaki; Yoshikawa, Minoru

    2017-01-01

    In climate change research, future scenarios of greenhouse gas and air pollutant emissions generated by integrated assessment models (IAMs) are used in climate models (CMs) and earth system models to analyze future interactions and feedback between human activities and climate. However, the spatial resolutions of IAMs and CMs differ. IAMs usually disaggregate the world into 10–30 aggregated regions, whereas CMs require a grid-based spatial resolution. Therefore, downscaling emissions data from IAMs into a finer scale is necessary to input the emissions into CMs. In this study, we examined whether differences in downscaling methods significantly affect climate variables such as temperature and precipitation. We tested two downscaling methods using the same regionally aggregated sulfur emissions scenario obtained from the Asian-Pacific Integrated Model/Computable General Equilibrium (AIM/CGE) model. The downscaled emissions were fed into the Model for Interdisciplinary Research on Climate (MIROC). One of the methods assumed a strong convergence of national emissions intensity (e.g., emissions per gross domestic product), while the other was based on inertia (i.e., the base-year remained unchanged). The emissions intensities in the downscaled spatial emissions generated from the two methods markedly differed, whereas the emissions densities (emissions per area) were similar. We investigated whether the climate change projections of temperature and precipitation would significantly differ between the two methods by applying a field significance test, and found little evidence of a significant difference between the two methods. Moreover, there was no clear evidence of a difference between the climate simulations based on these two downscaling methods. PMID:28076446

  5. Methods for environmental change; an exploratory study.

    PubMed

    Kok, Gerjo; Gottlieb, Nell H; Panne, Robert; Smerecnik, Chris

    2012-11-28

    While the interest of health promotion researchers in change methods directed at the target population has a long tradition, interest in change methods directed at the environment is still developing. In this survey, the focus is on methods for environmental change; especially how these are composed of methods for individual change ('Bundling') and how, within one environmental level, organizations, methods differ when directed at the management ('At') or applied by the management ('From'). The first part of this online survey dealt with examining the 'bundling' of individual level methods to methods at the environmental level. The question asked was to what extent the use of an environmental level method would involve the use of certain individual level methods. In the second part of the survey the question was whether there are differences between applying methods directed 'at' an organization (for instance, by a health promoter) versus 'from' within an organization itself. All of the 20 respondents are experts in the field of health promotion. Methods at the individual level are frequently bundled together as part of a method at a higher ecological level. A number of individual level methods are popular as part of most of the environmental level methods, while others are not chosen very often. Interventions directed at environmental agents often have a strong focus on the motivational part of behavior change. There are different approaches targeting a level or being targeted from a level. The health promoter will use combinations of motivation and facilitation. The manager will use individual level change methods focusing on self-efficacy and skills. Respondents think that any method may be used under the right circumstances, although few endorsed coercive methods. Taxonomies of theoretical change methods for environmental change should include combinations of individual level methods that may be bundled and separate suggestions for methods targeting a level or being targeted from a level. Future research needs to cover more methods to rate and to be rated. Qualitative data may explain some of the surprising outcomes, such as the lack of large differences and the avoidance of coercion. Taxonomies should include the theoretical parameters that limit the effectiveness of the method.

  6. Experiential Learning Methods, Simulation Complexity and Their Effects on Different Target Groups

    ERIC Educational Resources Information Center

    Kluge, Annette

    2007-01-01

    This article empirically supports the thesis that there is no clear and unequivocal argument in favor of simulations and experiential learning. Instead the effectiveness of simulation-based learning methods depends strongly on the target group's characteristics. Two methods of supporting experiential learning are compared in two different complex…

  7. Exploratory methods for truck re-identification in a statewide network based on axle weight and axle spacing data to enhance freight metrics : phase II.

    DOT National Transportation Integrated Search

    2012-05-01

    Vehicle re-identification methods can be used to anonymously match vehicles crossing two different locations based on vehicle attribute data. : This research builds upon a previous study and investigates different methods for solving the re-identific...

  8. Comparing Performance of Methods to Deal with Differential Attrition in Lottery Based Evaluations

    ERIC Educational Resources Information Center

    Zamarro, Gema; Anderson, Kaitlin; Steele, Jennifer; Miller, Trey

    2016-01-01

    The purpose of this study is to study the performance of different methods (inverse probability weighting and estimation of informative bounds) to control for differential attrition by comparing the results of different methods using two datasets: an original dataset from Portland Public Schools (PPS) subject to high rates of differential…

  9. A comparison of in-house real-time LAMP assays with a commercial assay for the detection of pathogenic bacteria

    USDA-ARS?s Scientific Manuscript database

    Molecular detection of bacterial pathogens based on LAMP methods is a faster and simpler approach than conventional culture methods. Although different LAMP-based methods for pathogenic bacterial detection are available, a systematic comparison of these different LAMP assays has not been performed. ...

  10. Improving the Teaching of Microsoft Excel: Traditional Book versus Online Platform

    ERIC Educational Resources Information Center

    Brooks, Stoney; Taylor, Joseph

    2016-01-01

    The authors explore the differences between traditional, book-based methods of teaching Excel and online, platform-supported methods by comparing teaching students in different locations, with and without online support. As Excel is a critical skill for business majors, the authors investigate which methods and locations provide the highest…

  11. How Learning Designs, Teaching Methods and Activities Differ by Discipline in Australian Universities

    ERIC Educational Resources Information Center

    Cameron, Leanne

    2017-01-01

    This paper reports on the learning designs, teaching methods and activities most commonly employed within the disciplines in six universities in Australia. The study sought to establish if there were significant differences between the disciplines in learning designs, teaching methods and teaching activities in the current Australian context, as…

  12. Comparison of different classification methods for analyzing electronic nose data to characterize sesame oils and blends.

    PubMed

    Shao, Xiaolong; Li, Hui; Wang, Nan; Zhang, Qiang

    2015-10-21

    An electronic nose (e-nose) was used to characterize sesame oils processed by three different methods (hot-pressed, cold-pressed, and refined), as well as blends of the sesame oils and soybean oil. Seven classification and prediction methods, namely PCA, LDA, PLS, KNN, SVM, LASSO and RF, were used to analyze the e-nose data. The classification accuracy and MAUC were employed to evaluate the performance of these methods. The results indicated that sesame oils processed with different methods resulted in different sensor responses, with cold-pressed sesame oil producing the strongest sensor signals, followed by the hot-pressed sesame oil. The blends of pressed sesame oils with refined sesame oil were more difficult to distinguish than the blends of pressed sesame oils and refined soybean oil. LDA, KNN, and SVM outperformed the other classification methods in distinguishing sesame oil blends. KNN, LASSO, PLS, SVM (with linear kernel), and RF models could adequately predict the adulteration level (% of added soybean oil) in the sesame oil blends. Among the prediction models, KNN with k = 1 and 2 yielded the best prediction results.
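
    A minimal sketch of the classifier-comparison workflow (not the authors' data or full model set), assuming synthetic sensor responses for three processing methods and cross-validated accuracy as the score.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical e-nose data: rows are samples, columns are sensor responses,
# labels encode the processing method (0 = hot-pressed, 1 = cold-pressed, 2 = refined).
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 10)) + np.repeat(np.arange(3), 30)[:, None] * 0.8
y = np.repeat(np.arange(3), 30)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(n_neighbors=1),
    "SVM (linear)": make_pipeline(StandardScaler(), SVC(kernel="linear")),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```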

  13. Analysis of the essential oils of Alpiniae Officinarum Hance in different extraction methods

    NASA Astrophysics Data System (ADS)

    Yuan, Y.; Lin, L. J.; Huang, X. B.; Li, J. H.

    2017-09-01

    A method was developed for the analysis of the essential oils of Alpiniae Officinarum Hance extracted by steam distillation (SD), ultrasonic-assisted solvent extraction (UAE) and supercritical fluid extraction (SFE), using gas chromatography-mass spectrometry (GC-MS) combined with the retention index (RI) method. Multiple volatile components were identified in the oils extracted by each of the three above-mentioned methods, and each component was quantified by the area normalization method. The results indicated that the content of 1,8-Cineole, the index constituent, obtained by SD was similar to that obtained by SFE and higher than that obtained by UAE. Although UAE was less time consuming and consumed less energy, the oil quality was poorer because the organic solvents used are difficult to degrade. In addition, some constituents could be obtained by SFE but not by SD. In conclusion, the essential oils obtained by the different extraction methods from the same batch of material proved broadly similar, but there were some differences in composition and component ratios. Therefore, the extraction method should be selected according to the functional requirements of the product.

  14. Segmentation of mouse dynamic PET images using a multiphase level set method

    NASA Astrophysics Data System (ADS)

    Cheng-Liao, Jinxiu; Qi, Jinyi

    2010-11-01

    Image segmentation plays an important role in medical diagnosis. Here we propose an image segmentation method for four-dimensional mouse dynamic PET images. We consider that voxels inside each organ have similar time activity curves. The use of tracer dynamic information allows us to separate regions that have similar integrated activities in a static image but with different temporal responses. We develop a multiphase level set method that utilizes both the spatial and temporal information in a dynamic PET data set. Different weighting factors are assigned to each image frame based on the noise level and activity difference among organs of interest. We used a weighted absolute difference function in the data matching term to increase the robustness of the estimate and to avoid over-partitioning of regions with high contrast. We validated the proposed method using computer simulated dynamic PET data, as well as real mouse data from a microPET scanner, and compared the results with those of a dynamic clustering method. The results show that the proposed method produces smoother segments with fewer misclassified voxels.
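
    The sketch below illustrates only the weighted absolute-difference data-matching idea described above, applied to a single voxel's time-activity curve against two hypothetical region means; the frame weights and values are invented, and the full multiphase level set evolution is not shown.

```python
import numpy as np

def weighted_abs_difference(tac_voxel, tac_region, frame_weights):
    """Data-matching cost between a voxel time-activity curve (TAC) and a
    candidate region's mean TAC, using a weighted absolute difference rather
    than a squared difference (more robust to outliers and less prone to
    over-partitioning high-contrast regions)."""
    return np.sum(frame_weights * np.abs(tac_voxel - tac_region))

# Hypothetical example: assign a voxel to the region whose mean TAC it matches best.
weights = np.array([0.5, 0.8, 1.0, 1.0, 0.9, 0.7])     # e.g. noise-based frame weights
voxel_tac = np.array([2.0, 5.0, 8.0, 7.5, 6.0, 5.5])
region_tacs = {
    "myocardium": np.array([1.8, 5.2, 8.1, 7.4, 6.1, 5.4]),
    "liver":      np.array([3.5, 4.0, 4.5, 4.8, 5.0, 5.1]),
}
costs = {name: weighted_abs_difference(voxel_tac, tac, weights)
         for name, tac in region_tacs.items()}
print(min(costs, key=costs.get), costs)
```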

  15. EIT Imaging of admittivities with a D-bar method and spatial prior: experimental results for absolute and difference imaging.

    PubMed

    Hamilton, S J

    2017-05-22

    Electrical impedance tomography (EIT) is an emerging imaging modality that uses harmless electrical measurements taken on electrodes at a body's surface to recover information about the internal electrical conductivity and/or permittivity. The image reconstruction task of EIT is a highly nonlinear inverse problem that is sensitive to noise and modeling errors, making the reconstruction challenging. D-bar methods solve the nonlinear problem directly, bypassing the need for detailed and time-intensive forward models, to provide absolute (static) as well as time-difference EIT images. Coupling the D-bar methodology with the inclusion of high confidence a priori data results in a noise-robust regularized image reconstruction method. In this work, the a priori D-bar method for complex admittivities is demonstrated to be effective on experimental tank data for absolute imaging for the first time. Additionally, the method is adjusted for, and tested on, time-difference imaging scenarios. The ability of the method to be used for conductivity, permittivity, absolute as well as time-difference imaging provides the user with great flexibility without a high computational cost.

  16. Investigating the Importance of the Pocket-estimation Method in Pocket-based Approaches: An Illustration Using Pocket-ligand Classification.

    PubMed

    Caumes, Géraldine; Borrel, Alexandre; Abi Hussein, Hiba; Camproux, Anne-Claude; Regad, Leslie

    2017-09-01

    Small molecules interact with their protein target on surface cavities known as binding pockets. Pocket-based approaches are very useful in all of the phases of drug design. Their first step is estimating the binding pocket based on protein structure. The available pocket-estimation methods produce different pockets for the same target. The aim of this work is to investigate the effects of different pocket-estimation methods on the results of pocket-based approaches. We focused on the effect of three pocket-estimation methods on a pocket-ligand (PL) classification. This pocket-based approach is useful for understanding the correspondence between the pocket and ligand spaces and to develop pharmacological profiling models. We found pocket-estimation methods yield different binding pockets in terms of boundaries and properties. These differences are responsible for the variation in the PL classification results that can have an impact on the detected correspondence between pocket and ligand profiles. Thus, we highlighted the importance of the pocket-estimation method choice in pocket-based approaches. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Evaluation of angiogram visualization methods for fast and reliable aneurysm diagnosis

    NASA Astrophysics Data System (ADS)

    Lesar, Žiga; Bohak, Ciril; Marolt, Matija

    2015-03-01

    In this paper we present the results of an evaluation of different visualization methods for angiogram volumetric data: ray casting, marching cubes, and multi-level partition of unity implicits. There are several options available with ray-casting: isosurface extraction, maximum intensity projection and alpha compositing, each producing fundamentally different results. Different visualization methods are suitable for different needs, so this choice is crucial in diagnosis and decision making processes. We also evaluate visual effects such as ambient occlusion, screen space ambient occlusion, and depth of field. Some visualization methods include transparency, so we address the question of relevancy of this additional visual information. We employ transfer functions to map data values to color and transparency, allowing us to view or hide particular tissues. All the methods presented in this paper were developed using OpenCL, striving for real-time rendering and quality interaction. An evaluation has been conducted to assess the suitability of the visualization methods. Results show superiority of isosurface extraction with ambient occlusion effects. Visual effects may positively or negatively affect perception of depth, motion, and relative positions in space.
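
    As a minimal, CPU-only illustration of two of the ray-casting options mentioned above (maximum intensity projection and first-hit isosurface depth) on a synthetic vessel-like volume; this is not the authors' OpenCL implementation, and the volume, isovalue, and viewing axis are assumptions.

```python
import numpy as np

def maximum_intensity_projection(volume, axis=2):
    """Maximum intensity projection (MIP): for parallel rays along one axis,
    keep the largest voxel value encountered along each ray."""
    return volume.max(axis=axis)

def isosurface_depth(volume, iso_value, axis=2):
    """First-hit depth map for a simple isosurface rendering: index of the
    first voxel along each ray whose value exceeds the isovalue."""
    hits = volume >= iso_value
    depth = np.argmax(hits, axis=axis).astype(float)
    depth[~hits.any(axis=axis)] = np.nan          # rays that never hit the surface
    return depth

# Hypothetical angiogram-like volume: background noise plus a bright "vessel".
rng = np.random.default_rng(1)
vol = rng.normal(0.1, 0.02, size=(64, 64, 64))
vol[30:34, 10:54, 28:36] += 1.0                   # synthetic vessel segment

mip = maximum_intensity_projection(vol)
depth = isosurface_depth(vol, iso_value=0.5)
print(mip.shape, np.nanmin(depth), np.nanmax(depth))
```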

  18. Health condition identification of multi-stage planetary gearboxes using a mRVM-based method

    NASA Astrophysics Data System (ADS)

    Lei, Yaguo; Liu, Zongyao; Wu, Xionghui; Li, Naipeng; Chen, Wu; Lin, Jing

    2015-08-01

    Multi-stage planetary gearboxes are widely applied in aerospace, automotive and heavy industries. Their key components, such as gears and bearings, can easily suffer from damage due to tough working environment. Health condition identification of planetary gearboxes aims to prevent accidents and save costs. This paper proposes a method based on multiclass relevance vector machine (mRVM) to identify health condition of multi-stage planetary gearboxes. In this method, a mRVM algorithm is adopted as a classifier, and two features, i.e. accumulative amplitudes of carrier orders (AACO) and energy ratio based on difference spectra (ERDS), are used as the input of the classifier to classify different health conditions of multi-stage planetary gearboxes. To test the proposed method, seven health conditions of a two-stage planetary gearbox are considered and vibration data is acquired from the planetary gearbox under different motor speeds and loading conditions. The results of three tests based on different data show that the proposed method obtains an improved identification performance and robustness compared with the existing method.

  19. Comparative analysis of two methods for measuring sales volumes during malaria medicine outlet surveys

    PubMed Central

    2013-01-01

    Background There is increased interest in using commercial providers for improving access to quality malaria treatment. Understanding their current role is an essential first step, notably in terms of the volume of diagnostics and anti-malarials they sell. Sales volume data can be used to measure the importance of different provider and product types, frequency of parasitological diagnosis and impact of interventions. Several methods for measuring sales volumes are available, yet all have methodological challenges and evidence is lacking on the comparability of different methods. Methods Using sales volume data on anti-malarials and rapid diagnostic tests (RDTs) for malaria collected through provider recall (RC) and retail audits (RA), this study measures the degree of agreement between the two methods at wholesale and retail commercial providers in Cambodia following the Bland-Altman approach. Relative strengths and weaknesses of the methods were also investigated through qualitative research with fieldworkers. Results A total of 67 wholesalers and 107 retailers were sampled. Wholesale sales volumes were estimated through both methods for 62 anti-malarials and 23 RDTs and retail volumes for 113 anti-malarials and 33 RDTs. At wholesale outlets, RA estimates for anti-malarial sales were on average higher than RC estimates (mean difference of four adult equivalent treatment doses (95% CI 0.6-7.2)), equivalent to 30% of mean sales volumes. For RDTs at wholesalers, the between-method mean difference was not statistically significant (one test, 95% CI −6.0-4.0). At retail outlets, between-method differences for both anti-malarials and RDTs increased with larger volumes being measured, so mean differences were not a meaningful measure of agreement between the methods. Qualitative research revealed that in Cambodia where sales volumes are small, RC had key advantages: providers were perceived to remember more easily their sales volumes and find RC less invasive; fieldworkers found it more convenient; and it was cheaper to implement than RA. Discussion/conclusions Both RA and RC had implementation challenges and were prone to data collection errors. Choice of empirical methods is likely to have important implications for data quality depending on the study context. PMID:24010526
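
    A minimal sketch of the Bland-Altman agreement calculation used in the comparison, applied to invented outlet-level volumes; the numbers are illustrative only and are not the study's data.

```python
import numpy as np

def bland_altman(recall, audit):
    """Bland-Altman agreement statistics between two measurement methods:
    mean difference (bias) and 95% limits of agreement."""
    recall, audit = np.asarray(recall, float), np.asarray(audit, float)
    diff = audit - recall
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical outlet-level anti-malarial volumes (adult equivalent treatment doses).
recall_volumes = [10, 4, 25, 0, 12, 7, 30, 5]     # provider recall (RC)
audit_volumes  = [14, 5, 28, 1, 15, 6, 38, 7]     # retail audit (RA)

bias, loa = bland_altman(recall_volumes, audit_volumes)
print(f"mean RA-RC difference = {bias:.1f} doses, "
      f"95% limits of agreement = ({loa[0]:.1f}, {loa[1]:.1f})")
```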

  20. [Hand and wrist bone maturation in children with central precocious puberty and idiopathic short stature].

    PubMed

    Wang, Anru; Yang, Fangling; Yu, Baosheng; Shan, Ye; Gao, Lanying; Zhang, Xiaoxiao; Peng, Ya

    2013-07-01

    To investigate the maturation of individual bones of the hand and wrist in children with central precocious puberty (CPP) and idiopathic short stature (ISS). Hand and wrist films of 25 children with CPP, 29 children with ISS and 21 normal controls were evaluated by the conventional Greulich-Pyle (GP) atlas method and by an individual bone assessment method, in which all twenty bones of the hand and wrist were evaluated based on the GP atlas, including the radius and ulna (2), 7 carpal bones, and 11 metacarpal and phalangeal bones, and the average bone age (BA) was calculated. The differences between groups were analyzed by independent samples t test. The differences between the two methods were analyzed by paired sample t test. The differences between BA and chronological age (CA) were analyzed by ROC with SPSS 17.0. Compared with the normal control group, the advance of BA in the CPP group was 0.70-2.26 y (1.48 ± 0.78) by the GP atlas method, while it was 0.28-2.00 y (1.14 ± 0.86) by the individual bone evaluation method. Among all twenty bones, the advance of metacarpal and phalangeal BA was the greatest [0.34-2.06 y (1.2 ± 0.86)]. In the ISS group, the delay of BA was 0.47-2.91 y (-1.69 ± 1.22) by the GP atlas method, while it was 0.48-2.50 y (-1.49 ± 1.01) by the individual bone evaluation method. The delay of carpal BA was the greatest [0.59-2.73 y (-1.66 ± 1.07)] among all twenty bones. In the ISS group and the normal control group, there were no statistically significant differences between the two methods. In the CPP group, a statistically significant difference was found between the two methods. There were no statistically significant differences in the areas under the ROC curves between the two methods. The advance of metacarpal and phalangeal BA is the greatest in the CPP group and the delay of carpal BA is the greatest in the ISS group. Both methods provide diagnostic information for bone age in CPP and ISS children.

  1. Equivalence of internal and external mixture schemes of single scattering properties in vector radiative transfer

    PubMed Central

    Mukherjee, Lipi; Zhai, Peng-Wang; Hu, Yongxiang; Winker, David M.

    2018-01-01

    Polarized radiation fields in a turbid medium are influenced by single-scattering properties of scatterers. It is common that media contain two or more types of scatterers, which makes it essential to properly mix single-scattering properties of different types of scatterers in the vector radiative transfer theory. The vector radiative transfer solvers can be divided into two basic categories: the stochastic and deterministic methods. The stochastic method is basically the Monte Carlo method, which can handle scatterers with different scattering properties explicitly. This mixture scheme is called the external mixture scheme in this paper. The deterministic methods, however, can only deal with a single set of scattering properties in the smallest discretized spatial volume. The single-scattering properties of different types of scatterers have to be averaged before they are input to deterministic solvers. This second scheme is called the internal mixture scheme. The equivalence of these two different mixture schemes of scattering properties has not been demonstrated so far. In this paper, polarized radiation fields for several scattering media are solved using the Monte Carlo and successive order of scattering (SOS) methods and scattering media contain two types of scatterers: Rayleigh scatterers (molecules) and Mie scatterers (aerosols). The Monte Carlo and SOS methods employ external and internal mixture schemes of scatterers, respectively. It is found that the percentage differences between radiances solved by these two methods with different mixture schemes are of the order of 0.1%. The differences of Q/I, U/I, and V/I are of the order of 10−5 ~ 10−4, where I, Q, U, and V are the Stokes parameters. Therefore, the equivalence between these two mixture schemes is confirmed to the accuracy level of the radiative transfer numerical benchmarks. This result provides important guidelines for many radiative transfer applications that involve the mixture of different scattering and absorptive particles. PMID:29047543
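
    A minimal sketch of the internal mixture scheme described above, assuming a simple Rayleigh phase function and a Henyey-Greenstein aerosol phase function: extinction coefficients add, while the single-scattering albedo and phase function are averaged with scattering-coefficient weights. The specific coefficients and phase functions are assumptions for illustration, not the paper's inputs.

```python
import numpy as np

def internal_mixture(ext_coeffs, ssalbs, phase_funcs):
    """Average the single-scattering properties of several scatterer types into
    one effective set (the 'internal mixture' used by deterministic solvers).
    Extinction adds linearly; the single-scattering albedo and phase function
    are averaged with scattering-coefficient weights."""
    ext_coeffs = np.asarray(ext_coeffs, float)       # extinction coefficients
    ssalbs = np.asarray(ssalbs, float)               # single-scattering albedos
    phase_funcs = np.asarray(phase_funcs, float)     # rows: P(angle) per scatterer type
    sca = ext_coeffs * ssalbs                        # scattering coefficients
    ext_mix = ext_coeffs.sum()
    ssalb_mix = sca.sum() / ext_mix
    phase_mix = (sca[:, None] * phase_funcs).sum(axis=0) / sca.sum()
    return ext_mix, ssalb_mix, phase_mix

# Hypothetical layer containing Rayleigh scatterers (molecules) and Mie-like aerosols.
angles = np.linspace(0.0, np.pi, 181)
p_rayleigh = 0.75 * (1.0 + np.cos(angles) ** 2)                     # Rayleigh phase function
g = 0.7                                                             # assumed aerosol asymmetry
p_hg = (1 - g**2) / (1 + g**2 - 2 * g * np.cos(angles)) ** 1.5      # Henyey-Greenstein
ext_mix, ssalb_mix, p_mix = internal_mixture([0.1, 0.2], [1.0, 0.95], [p_rayleigh, p_hg])
print(ext_mix, round(ssalb_mix, 3), p_mix[:3])
```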

  2. Statistical Considerations of Data Processing in Giovanni Online Tool

    NASA Technical Reports Server (NTRS)

    Suhung, Shen; Leptoukh, G.; Acker, J.; Berrick, S.

    2005-01-01

    The GES DISC Interactive Online Visualization and Analysis Infrastructure (Giovanni) is a web-based interface for the rapid visualization and analysis of gridded data from a number of remote sensing instruments. The GES DISC currently employs several Giovanni instances to analyze various products, such as Ocean-Giovanni for ocean products from SeaWiFS and MODIS-Aqua; TOMS & OMI Giovanni for atmospheric chemical trace gases from TOMS and OMI; and MOVAS for aerosols from MODIS, etc. (http://giovanni.gsfc.nasa.gov). Foremost among the Giovanni statistical functions is data averaging. Two aspects of this function are addressed here. The first deals with the accuracy of averaging gridded mapped products vs. averaging from the ungridded Level 2 data. Some mapped products contain mean values only; others contain additional statistics, such as number of pixels (NP) for each grid, standard deviation, etc. Since NP varies spatially and temporally, averaging with or without weighting by NP will be different. In this paper, we address differences of various weighting algorithms for some datasets utilized in Giovanni. The second aspect is related to different averaging methods affecting data quality and interpretation for data with non-normal distribution. The present study demonstrates results of different spatial averaging methods using gridded SeaWiFS Level 3 mapped monthly chlorophyll a data. Spatial averages were calculated using three different methods: arithmetic mean (AVG), geometric mean (GEO), and maximum likelihood estimator (MLE). Biogeochemical data, such as chlorophyll a, are usually considered to have a log-normal distribution. The study determined that differences between methods tend to increase with increasing size of a selected coastal area, with no significant differences in most open oceans. The GEO method consistently produces values lower than AVG and MLE. The AVG method produces values larger than MLE in some cases, but smaller in other cases. Further studies indicated that significant differences between AVG and MLE methods occurred in coastal areas where data have large spatial variations and a log-bimodal distribution instead of log-normal distribution.
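
    A minimal sketch of the three spatial-averaging estimators compared above, applied to synthetic log-normally distributed chlorophyll-a values; the distribution and its parameters are assumptions for illustration, not SeaWiFS data.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical chlorophyll-a values (mg m^-3) drawn from a log-normal distribution,
# typical of biogeochemical data.
chl = rng.lognormal(mean=-1.0, sigma=1.2, size=500)

avg = chl.mean()                                   # arithmetic mean (AVG)
geo = np.exp(np.log(chl).mean())                   # geometric mean (GEO)
mu, sigma = np.log(chl).mean(), np.log(chl).std(ddof=1)
mle = np.exp(mu + 0.5 * sigma**2)                  # log-normal MLE of the mean

print(f"AVG = {avg:.3f}, GEO = {geo:.3f}, MLE = {mle:.3f}")
# GEO is systematically the lowest of the three; AVG and MLE can differ in either
# direction, and the spread between methods grows with the variance of the data.
```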

  3. Comparison of fractionation methods for nitrogen and starch in maize and grass silages.

    PubMed

    Ali, M; de Jonge, L H; Cone, J W; van Duinkerken, G; Blok, M C; Bruinenberg, M H; Hendriks, W H

    2016-06-01

    In the in situ nylon bag technique, many feed evaluation systems use a washing machine method (WMM) to determine the washout (W) fraction and to wash the rumen-incubated nylon bags. As this method has some disadvantages, an alternate modified method (MM) was recently introduced. The aim of this study was to determine and compare the W and non-washout (D+U) fractions of nitrogen (N) and/or starch of maize and grass silages, using the WMM and the MM. Ninety-nine maize silage and 99 grass silage samples were selected with a broad range in chemical composition. The results showed a large range in the W, soluble (S) and D+U fractions of N of maize and grass silages and the W, insoluble washout (W-S) and D+U fractions of starch of maize silages, determined by both methods, due to variation in their chemical composition. The values for the N fractions of maize and grass silages obtained with the two methods were found to be different (p < 0.001). Large differences (p < 0.001) were found in the D+U fraction of starch of maize silages, which might be due to different methodological approaches, such as different rinsing procedures (washing vs. shaking), duration of rinsing (40 min vs. 60 min) and different solvents (water vs. buffer solution). The large differences (p < 0.001) in the W-S and D+U fractions of starch determined with the two methods can lead to different predicted values for the effective rumen starch degradability. In conclusion, the MM with one recommended shaking procedure, performed under identical and controlled experimental conditions, can give more reliable results compared to the WMM, which uses varying washing programs and procedures. Journal of Animal Physiology and Animal Nutrition © 2015 Blackwell Verlag GmbH.

  4. Boundary conditions for the solution of compressible Navier-Stokes equations by an implicit factored method

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Smith, G. E.; Springer, G. S.; Rimon, Y.

    1983-01-01

    A method is presented for formulating the boundary conditions in implicit finite-difference form needed for obtaining solutions to the compressible Navier-Stokes equations by the Beam and Warming implicit factored method. The usefulness of the method was demonstrated (a) by establishing the boundary conditions applicable to the analysis of the flow inside an axisymmetric piston-cylinder configuration and (b) by calculating velocities and mass fractions inside the cylinder for different geometries and different operating conditions. Stability, selection of time step and grid sizes, and computer time requirements are discussed in reference to the piston-cylinder problem analyzed.

  5. Machining Data Handbook. 3rd Edition. Volume 2

    DTIC Science & Technology

    1980-01-01

    17.2 The power required in machining can be determined by several different methods, as provided in figures 17.2-3 through 17.2-10 for determining the ... patterns and require different methods of checking ... have not caused problems, and there have been reports of improved die life in special applications ... microinches Ra (1.25 to 5.0 µm Ra); in application of this method the surface is subjected to melting ... and wide differences in recast layer thicknesses

  6. Nuclear Quadrupole Resonance (NQR) Method and Probe for Generating RF Magnetic Fields in Different Directions to Distinguish NQR from Acoustic Ringing Induced in a Sample

    DTIC Science & Technology

    1997-08-01

    TITLE OF THE INVENTION: NUCLEAR QUADRUPOLE RESONANCE (NQR) METHOD AND PROBE FOR GENERATING RF MAGNETIC FIELDS IN DIFFERENT DIRECTIONS TO ... DISTINGUISH NQR FROM ACOUSTIC RINGING INDUCED IN A SAMPLE. BACKGROUND OF THE INVENTION: 1. Field of the Invention. The present invention relates to a ... nuclear quadrupole resonance (NQR) method and probe for generating RF magnetic fields in different directions towards a sample. More specifically

  7. Analysis of drift correction in different simulated weighing schemes

    NASA Astrophysics Data System (ADS)

    Beatrici, A.; Rebelo, A.; Quintão, D.; Cacais, F. L.; Loayza, V. M.

    2015-10-01

    In the calibration of high accuracy mass standards some weighing schemes are used to reduce or eliminate the zero drift effects in mass comparators. There are different sources for the drift and different methods for its treatment. By using numerical methods, drift functions were simulated and a random term was included in each function. The comparison between the results obtained from ABABAB and ABBA weighing series was carried out. The results show a better efficacy of ABABAB method for drift with smooth variation and small randomness.
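
    A minimal simulation sketch of the comparison described above, assuming a simple quadratic drift model, Gaussian comparator noise, and the usual drift-cancelling estimators for the ABBA and ABABAB reading orders; the mass difference, drift function, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
true_diff = 0.050          # assumed true mass difference A - B (mg)
noise_sd = 0.002           # assumed comparator noise (mg)

def drift(t):
    """Smoothly varying zero drift (mg) as a function of reading index."""
    return 0.010 * t + 0.002 * t**2

def reading(mass, t):
    return mass + drift(t) + rng.normal(0.0, noise_sd)

def abba_estimate():
    # Reading order A B B A at times 0..3; linear drift cancels exactly.
    a1, b1, b2, a2 = (reading(m, t) for t, m in enumerate([true_diff, 0.0, 0.0, true_diff]))
    return (a1 - b1 - b2 + a2) / 2.0

def ababab_estimate():
    # Reading order A B A B A B at times 0..5; each B is compared with the mean
    # of its two neighbouring A readings, which also cancels linear drift.
    y = [reading(m, t) for t, m in enumerate([true_diff, 0.0] * 3)]
    d1 = (y[0] + y[2]) / 2.0 - y[1]
    d2 = (y[2] + y[4]) / 2.0 - y[3]
    return (d1 + d2) / 2.0

abba_err = np.mean([abba_estimate() - true_diff for _ in range(2000)])
ababab_err = np.mean([ababab_estimate() - true_diff for _ in range(2000)])
print(f"mean error ABBA   = {abba_err * 1000:.3f} ug")
print(f"mean error ABABAB = {ababab_err * 1000:.3f} ug")
```

    With this smoothly varying (quadratic) drift, the residual bias of the ABABAB series is smaller than that of the ABBA series, which is consistent with the abstract's finding.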

  8. Extension of moment projection method to the fragmentation process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Shaohua; Yapp, Edward K.Y.; Akroyd, Jethro

    2017-04-15

    The method of moments is a simple but efficient method of solving the population balance equation which describes particle dynamics. Recently, the moment projection method (MPM) was proposed and validated for particle inception, coagulation, growth and, more importantly, shrinkage; here the method is extended to include the fragmentation process. The performance of MPM is tested for 13 different test cases for different fragmentation kernels, fragment distribution functions and initial conditions. Comparisons are made with the quadrature method of moments (QMOM), hybrid method of moments (HMOM) and a high-precision stochastic solution calculated using the established direct simulation algorithm (DSA), and advantages of MPM are drawn.

  9. Optimisation of reconstruction--reprojection-based motion correction for cardiac SPECT.

    PubMed

    Kangasmaa, Tuija S; Sohlberg, Antti O

    2014-07-01

    Cardiac motion is a challenging cause of image artefacts in myocardial perfusion SPECT. A wide range of motion correction methods have been developed over the years, and so far automatic algorithms based on the reconstruction--reprojection principle have proved to be the most effective. However, these methods have not been fully optimised in terms of their free parameters and implementational details. Two slightly different implementations of reconstruction--reprojection-based motion correction techniques were optimised for effective, good-quality motion correction and then compared with each other. The first of these methods (Method 1) was the traditional reconstruction-reprojection motion correction algorithm, where the motion correction is done in projection space, whereas the second algorithm (Method 2) performed motion correction in reconstruction space. The parameters that were optimised include the type of cost function (squared difference, normalised cross-correlation and mutual information) that was used to compare measured and reprojected projections, and the number of iterations needed. The methods were tested with motion-corrupt projection datasets, which were generated by adding three different types of motion (lateral shift, vertical shift and vertical creep) to motion-free cardiac perfusion SPECT studies. Method 2 performed slightly better overall than Method 1, but the difference between the two implementations was small. The execution time for Method 2 was much longer than for Method 1, which limits its clinical usefulness. The mutual information cost function gave clearly the best results for all three motion sets for both correction methods. Three iterations were sufficient for a good quality correction using Method 1. The traditional reconstruction--reprojection-based method with three update iterations and mutual information cost function is a good option for motion correction in clinical myocardial perfusion SPECT.
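
    The sketch below shows only the three candidate cost functions named above, evaluated between a hypothetical measured projection and its reprojection; it is not the reconstruction-reprojection algorithm itself, and the synthetic images and histogram settings are assumptions.

```python
import numpy as np

def squared_difference(a, b):
    return np.sum((a - b) ** 2)

def normalized_cross_correlation(a, b):
    a0, b0 = a - a.mean(), b - b.mean()
    return np.sum(a0 * b0) / (np.linalg.norm(a0) * np.linalg.norm(b0))

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images (in nats)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))

# Hypothetical measured and reprojected projections (e.g. 64x64 views).
rng = np.random.default_rng(3)
measured = rng.random((64, 64))
reprojected = measured + rng.normal(0.0, 0.05, measured.shape)   # slightly noisy copy

print("SSD :", squared_difference(measured, reprojected))
print("NCC :", normalized_cross_correlation(measured, reprojected))
print("MI  :", mutual_information(measured, reprojected))
```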

  10. Effectiveness evaluation of whole-body electromyostimulation as a post-exercise recovery method.

    PubMed

    DE LA Camara, Miguel A; Pardos, Ana I; Veiga, Óscar L

    2018-01-04

    Whole-body electromyostimulation (WB-EMS) devices are now being used in health and sports training, although there are few studies investigating their benefits. The objective of this research was to evaluate the effectiveness of WB-EMS as a post-exercise recovery method, and compare it with other methods like active and passive recovery. The study included nine trained men (age = 21 ± 1 years, height = 1.77 ± 0.4 m, mass = 62 ± 7 kg). Three trials were performed in three different sessions, 1 week apart. In each trial, the participants completed the same exercise protocol and a different recovery method. A repeated measures design was used to check the basal reestablishing on several physiological variables [lactate, heart rate, percentage of tissue hemoglobin saturation, temperature, and neuromuscular fatigue] and to evaluate the quality of recovery. The non-parametric Wilcoxon and Friedman ANOVA tests were used to examine the differences between recovery methods. The results showed no differences between methods in the physiological and psychological variables analyzed. Although, the blood lactate concentration showed borderline statistical significance between methods (P = 0.050). Likewise, WB-EMS failed to recover baseline blood lactate concentration (P = 0.021) and percentage of tissue hemoglobin saturation (P = 0.023), in contrast to the other two methods. These findings suggest that WB-EMS is not a good recovery method because the power of reestablishing of several physiological and psychological parameters is not superior to other recovery methods like active and passive recovery.

  11. Efficacy of Conventional Laser Irradiation Versus a New Method for Gingival Depigmentation (Sieve Method): A Clinical Trial.

    PubMed

    Houshmand, Behzad; Janbakhsh, Noushin; Khalilian, Fatemeh; Talebi Ardakani, Mohammad Reza

    2017-01-01

    Introduction: Diode laser irradiation has recently shown promising results for treatment of gingival pigmentation. This study sought to compare the efficacy of 2 diode laser irradiation protocols for treatment of gingival pigmentations, namely the conventional method and the sieve method. Methods: In this split-mouth clinical trial, 15 patients with gingival pigmentation were selected and their pigmentation intensity was determined using Dummett's oral pigmentation index (DOPI) in different dental regions. Diode laser (980 nm wavelength and 2 W power) was irradiated through a stipple pattern (sieve method) and conventionally in the other side of the mouth. Level of pain and satisfaction with the outcome (both patient and periodontist) were measured using a 0-10 visual analog scale (VAS) for both methods. Patients were followed up at 2 weeks, one month and 3 months. Pigmentation levels were compared using repeated measures of analysis of variance (ANOVA). The difference in level of pain and satisfaction between the 2 groups was analyzed by sample t test and general estimate equation model. Results: No significant differences were found regarding the reduction of pigmentation scores and pain and scores between the 2 groups. The difference in satisfaction with the results at the three time points was significant in both conventional and sieve methods in patients ( P = 0.001) and periodontists ( P = 0.015). Conclusion: Diode laser irradiation in both methods successfully eliminated gingival pigmentations. The sieve method was comparable to conventional technique, offering no additional advantage.

  12. Influence of photoactivation method and mold for restoration on the Knoop hardness of resin composite restorations.

    PubMed

    Brandt, William Cunha; Silva-Concilio, Lais Regiane; Neves, Ana Christina Claro; de Souza-Junior, Eduardo Jose Carvalho; Sinhoreti, Mario Alexandre Coelho

    2013-09-01

    The aim of this study was to evaluate in vitro the Knoop hardness at the top and bottom of composites photoactivated by different methods when different mold materials were used. Z250 composite (3M ESPE) and an XL2500 halogen unit (3M ESPE) were used. For the hardness test, conical restorations were made in extracted bovine incisors (tooth mold) and also in a metal mold (approximately 2 mm top diameter × 1.5 mm bottom diameter × 2 mm in height). Different photoactivation methods were tested: high-intensity continuous (HIC), low-intensity continuous (LIC), soft-start, or pulse-delay (PD), with constant radiant exposure. Knoop readings were performed on the top and bottom restoration surfaces. Data were submitted to two-way ANOVA and Tukey's test (p = 0.05). On the top, regardless of the mold used, no significant difference in Knoop hardness (Knoop hardness number, in kilograms-force per square millimeter) was observed between the photoactivation methods. On the bottom surface, the HIC photoactivation method showed higher mean hardness than LIC when tooth and metal molds were used. Significant differences in hardness at the top and the bottom were detected between tooth and metal molds. The LIC photoactivation method and the mold material can interfere with the hardness values of composite restorations.

  13. A reproducible accelerated in vitro release testing method for PLGA microspheres.

    PubMed

    Shen, Jie; Lee, Kyulim; Choi, Stephanie; Qu, Wen; Wang, Yan; Burgess, Diane J

    2016-02-10

    The objective of the present study was to develop a discriminatory and reproducible accelerated in vitro release method for long-acting PLGA microspheres with inner structure/porosity differences. Risperidone was chosen as a model drug. Qualitatively and quantitatively equivalent PLGA microspheres with different inner structure/porosity were obtained using different manufacturing processes. Physicochemical properties as well as degradation profiles of the prepared microspheres were investigated. Furthermore, in vitro release testing of the prepared risperidone microspheres was performed using the most common in vitro release methods (i.e., sample-and-separate and flow through) for this type of product. The obtained compositionally equivalent risperidone microspheres had similar drug loading but different inner structure/porosity. When microsphere particle size appeared similar, porous risperidone microspheres showed faster microsphere degradation and drug release compared with less porous microspheres. Both in vitro release methods investigated were able to differentiate risperidone microsphere formulations with differences in porosity under real-time (37 °C) and accelerated (45 °C) testing conditions. Notably, only the accelerated USP apparatus 4 method showed good reproducibility for highly porous risperidone microspheres. These results indicated that the accelerated USP apparatus 4 method is an appropriate fast quality control tool for long-acting PLGA microspheres (even with porous structures). Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Echo movement and evolution from real-time processing.

    NASA Technical Reports Server (NTRS)

    Schaffner, M. R.

    1972-01-01

    Preliminary experimental data on the effectiveness of conventional radars in measuring the movement and evolution of meteorological echoes when the radar is connected to a programmable real-time processor are examined. In the processor, programming is accomplished by conceiving abstract machines which constitute the actual programs used in the methods employed. An analysis of these methods, such as the center of gravity method, the contour-displacement method, the method of slope, the cross-section method, the contour cross-correlation method, the method of echo evolution at each point, and three-dimensional measurements, shows that the motions deduced from them may differ notably (since each method determines different quantities), but the plurality of measurements may give additional information on the characteristics of the precipitation.

  15. A comparison study of different facial soft tissue analysis methods.

    PubMed

    Kook, Min-Suk; Jung, Seunggon; Park, Hong-Ju; Oh, Hee-Kyun; Ryu, Sun-Youl; Cho, Jin-Hyoung; Lee, Jae-Seo; Yoon, Suk-Ja; Kim, Min-Soo; Shin, Hyo-Keun

    2014-07-01

    The purpose of this study was to evaluate several different facial soft tissue measurement methods. After marking 15 landmarks in the facial area of 12 mannequin heads of different sizes and shapes, facial soft tissue measurements were performed by the following 5 methods: Direct anthropometry, Digitizer, 3D CT, 3D scanner, and DI3D system. With these measurement methods, 10 measurement values representing the facial width, height, and depth were determined twice with a one week interval by one examiner. These data were analyzed with the SPSS program. The position created based on multi-dimensional scaling showed that direct anthropometry, 3D CT, digitizer, 3D scanner demonstrated relatively similar values, while the DI3D system showed slightly different values. All 5 methods demonstrated good accuracy and had a high coefficient of reliability (>0.92) and a low technical error (<0.9 mm). The measured value of the distance between the right and left medial canthus obtained by using the DI3D system was statistically significantly different from that obtained by using the digital caliper, digitizer and laser scanner (p < 0.05), but the other measured values were not significantly different. On evaluating the reproducibility of measurement methods, two measurement values (Ls-Li, G-Pg) obtained by using direct anthropometry, one measurement value (N'-Prn) obtained by using the digitizer, and four measurement values (EnRt-EnLt, AlaRt-AlaLt, ChRt-ChLt, Sn-Pg) obtained by using the DI3D system, were statistically significantly different. However, the mean measurement error in every measurement method was low (<0.7 mm). All measurement values obtained by using the 3D CT and 3D scanner did not show any statistically significant difference. The results of this study show that all 3D facial soft tissue analysis methods demonstrate favorable accuracy and reproducibility, and hence they can be used in clinical practice and research studies. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  16. Aspects of numerical and representational methods related to the finite-difference simulation of advective and dispersive transport of freshwater in a thin brackish aquifer

    USGS Publications Warehouse

    Merritt, M.L.

    1993-01-01

    The simulation of the transport of injected freshwater in a thin brackish aquifer, overlain and underlain by confining layers containing more saline water, is shown to be influenced by the choice of the finite-difference approximation method, the algorithm for representing vertical advective and dispersive fluxes, and the values assigned to parametric coefficients that specify the degree of vertical dispersion and molecular diffusion that occurs. Computed potable water recovery efficiencies will differ depending upon the choice of algorithm and approximation method, as will dispersion coefficients estimated based on the calibration of simulations to match measured data. A comparison of centered and backward finite-difference approximation methods shows that substantially different transition zones between injected and native waters are depicted by the different methods, and computed recovery efficiencies vary greatly. Standard and experimental algorithms and a variety of values for molecular diffusivity, transverse dispersivity, and vertical scaling factor were compared in simulations of freshwater storage in a thin brackish aquifer. Computed recovery efficiencies vary considerably, and appreciable differences are observed in the distribution of injected freshwater in the various cases tested. The results demonstrate both a qualitatively different description of transport using the experimental algorithms and the interrelated influences of molecular diffusion and transverse dispersion on simulated recovery efficiency. When simulating natural aquifer flow in cross-section, flushing of the aquifer occurred for all tested coefficient choices using both standard and experimental algorithms. © 1993.
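
    A generic 1D illustration (not the simulator used in the study) of why centered and backward finite-difference approximations of advection depict transition zones so differently: for a sharp front, the backward (upwind) difference smears the front through numerical dispersion, while the centered difference with explicit time stepping produces oscillations and can go unstable. The grid, Courant number, and initial profile are assumptions for illustration.

```python
import numpy as np

def advect(u0, c, dx, dt, steps, scheme):
    """Advance the 1D advection equation u_t + c u_x = 0 on a periodic grid,
    using either a centered or a backward (upwind, for c > 0) spatial
    difference with forward-Euler time stepping."""
    u = u0.copy()
    for _ in range(steps):
        if scheme == "centered":
            dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
        elif scheme == "backward":
            dudx = (u - np.roll(u, 1)) / dx
        else:
            raise ValueError(scheme)
        u = u - c * dt * dudx
    return u

# Hypothetical freshwater "front": a sharp concentration step advected to the right.
n, c = 200, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.4 * dx / c                       # Courant number 0.4
u0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)

u_backward = advect(u0, c, dx, dt, steps=100, scheme="backward")
u_centered = advect(u0, c, dx, dt, steps=100, scheme="centered")
# The backward (upwind) result stays bounded but smears the front, while the
# centered explicit scheme develops growing oscillations.
print("backward min/max:", u_backward.min(), u_backward.max())
print("centered min/max:", u_centered.min(), u_centered.max())
```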

  17. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population

    PubMed Central

    CARVALHO, Suzana Papile Maciel; BRITO, Liz Magalhães; de PAIVA, Luiz Airton Saavedra; BICUDO, Lucilene Arilho Ribeiro; CROSATO, Edgard Michel; de OLIVEIRA, Rogério Nogueira

    2013-01-01

    Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which the population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. Objective: This study aimed to estimate the gender of skeletons by application of the method of Oliveira et al. (1995), previously used in a population sample from Northeast Brazil. Material and Methods: The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. Results: The results demonstrated that the application of the method of Oliveira et al. (1995) in this population achieved very different outcomes between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of the measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. Conclusion: It was concluded that methods involving physical anthropology present a high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for the different populations due to differences in ethnic patterns, which are directly related to the phenotypic aspects. In this specific case, the method of Oliveira et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), previous methodological adjustment is recommended, as demonstrated in this study. PMID:24037076

  18. ICASE Semiannual Report, October 1, 1992 through March 31, 1993

    DTIC Science & Technology

    1993-06-01

    NUMERICAL MATHEMATICS Saul Abarbanel Further results have been obtained regarding long time integration of high order compact finite difference schemes...overall accuracy. These problems are common to all numerical methods: finite differences, finite elements and spectral methods. It should be noted that...fourth order finite difference scheme. * In the same case, the D6 wavelets provide a sixth order finite difference, noncompact formula. * The wavelets

  19. Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel; Wang, Z. J.

    2004-01-01

    A new, high-order, conservative, and efficient method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. A discussion on the Discontinuous Spectral Difference (SD) Method, locations of the unknowns and flux points and numerical results are also presented.

  20. Comparison of surface freshwater fluxes from different climate forecasts produced through different ensemble generation schemes.

    NASA Astrophysics Data System (ADS)

    Romanova, Vanya; Hense, Andreas; Wahl, Sabrina; Brune, Sebastian; Baehr, Johanna

    2016-04-01

    The decadal variability of the surface net freshwater fluxes, and its predictability, are compared in a set of retrospective predictions, all using the same model setup and differing only in the implemented ocean initialisation method and ensemble generation method. The basic aim is to deduce the differences between the initialization/ensemble generation methods in view of the uncertainty of the verifying observational data sets. The analysis will give an approximation of the uncertainties of the net freshwater fluxes, which up to now appear to be one of the most uncertain products in observational data and model outputs. All ensemble generation methods are implemented into the MPI-ESM earth system model in the framework of the ongoing MiKlip project (www.fona-miklip.de). Hindcast experiments are initialised annually between 2000 and 2004, and from each start year 10 ensemble members are initialized for 5 years each. Four different ensemble generation methods are compared: (i) a method based on the Anomaly Transform method (Romanova and Hense, 2015), in which the initial oceanic perturbations represent orthogonal and balanced anomaly structures in space and time and between the variables, taken from a control run; (ii) one-day-lagged ocean states from the MPI-ESM-LR baseline system; (iii) one-day-lagged ocean and atmospheric states with preceding full-field nudging to re-analysis in both the atmospheric and the oceanic components of the system (the MPI-ESM-LR baseline system); (iv) an Ensemble Kalman Filter (EnKF) implemented into the oceanic part of MPI-ESM (Brune et al. 2015), assimilating monthly subsurface oceanic temperature and salinity (EN3) using the Parallel Data Assimilation Framework (PDAF). The hindcasts are evaluated probabilistically using freshwater flux data from four different reanalysis data sets: MERRA, NCEP-R1, GFDL ocean reanalysis and GECCO2. The assessments show no clear differences in the evaluation scores on regional scales. However, on the global scale the physically motivated methods (i) and (iv) provide probabilistic hindcasts with a consistently higher reliability than the lagged initialization methods (ii)/(iii), despite the large uncertainties in the verifying observations and in the simulations.

  1. Establishing Statistical Equivalence of Data from Different Sampling Approaches for Assessment of Bacterial Phenotypic Antimicrobial Resistance

    PubMed Central

    2018-01-01

    ABSTRACT To assess phenotypic bacterial antimicrobial resistance (AMR) in different strata (e.g., host populations, environmental areas, manure, or sewage effluents) for epidemiological purposes, isolates of target bacteria can be obtained from a stratum using various sample types. Also, different sample processing methods can be applied. The MIC of each target antimicrobial drug for each isolate is measured. Statistical equivalence testing of the MIC data for the isolates allows evaluation of whether different sample types or sample processing methods yield equivalent estimates of the bacterial antimicrobial susceptibility in the stratum. We demonstrate this approach on the antimicrobial susceptibility estimates for (i) nontyphoidal Salmonella spp. from ground or trimmed meat versus cecal content samples of cattle in processing plants in 2013-2014 and (ii) nontyphoidal Salmonella spp. from urine, fecal, and blood human samples in 2015 (U.S. National Antimicrobial Resistance Monitoring System data). We found that the sample types for cattle yielded nonequivalent susceptibility estimates for several antimicrobial drug classes and thus may gauge distinct subpopulations of salmonellae. The quinolone and fluoroquinolone susceptibility estimates for nontyphoidal salmonellae from human blood are nonequivalent to those from urine or feces, conjecturally due to the fluoroquinolone (ciprofloxacin) use to treat infections caused by nontyphoidal salmonellae. We also demonstrate statistical equivalence testing for comparing sample processing methods for fecal samples (culturing one versus multiple aliquots per sample) to assess AMR in fecal Escherichia coli. These methods yield equivalent results, except for tetracyclines. Importantly, statistical equivalence testing provides the MIC difference at which the data from two sample types or sample processing methods differ statistically. Data users (e.g., microbiologists and epidemiologists) may then interpret practical relevance of the difference. IMPORTANCE Bacterial antimicrobial resistance (AMR) needs to be assessed in different populations or strata for the purposes of surveillance and determination of the efficacy of interventions to halt AMR dissemination. To assess phenotypic antimicrobial susceptibility, isolates of target bacteria can be obtained from a stratum using different sample types or employing different sample processing methods in the laboratory. The MIC of each target antimicrobial drug for each of the isolates is measured, yielding the MIC distribution across the isolates from each sample type or sample processing method. We describe statistical equivalence testing for the MIC data for evaluating whether two sample types or sample processing methods yield equivalent estimates of the bacterial phenotypic antimicrobial susceptibility in the stratum. This includes estimating the MIC difference at which the data from the two approaches differ statistically. Data users (e.g., microbiologists, epidemiologists, and public health professionals) can then interpret whether that present difference is practically relevant. PMID:29475868

  2. Establishing Statistical Equivalence of Data from Different Sampling Approaches for Assessment of Bacterial Phenotypic Antimicrobial Resistance.

    PubMed

    Shakeri, Heman; Volkova, Victoriya; Wen, Xuesong; Deters, Andrea; Cull, Charley; Drouillard, James; Müller, Christian; Moradijamei, Behnaz; Jaberi-Douraki, Majid

    2018-05-01

    To assess phenotypic bacterial antimicrobial resistance (AMR) in different strata (e.g., host populations, environmental areas, manure, or sewage effluents) for epidemiological purposes, isolates of target bacteria can be obtained from a stratum using various sample types. Also, different sample processing methods can be applied. The MIC of each target antimicrobial drug for each isolate is measured. Statistical equivalence testing of the MIC data for the isolates allows evaluation of whether different sample types or sample processing methods yield equivalent estimates of the bacterial antimicrobial susceptibility in the stratum. We demonstrate this approach on the antimicrobial susceptibility estimates for (i) nontyphoidal Salmonella spp. from ground or trimmed meat versus cecal content samples of cattle in processing plants in 2013-2014 and (ii) nontyphoidal Salmonella spp. from urine, fecal, and blood human samples in 2015 (U.S. National Antimicrobial Resistance Monitoring System data). We found that the sample types for cattle yielded nonequivalent susceptibility estimates for several antimicrobial drug classes and thus may gauge distinct subpopulations of salmonellae. The quinolone and fluoroquinolone susceptibility estimates for nontyphoidal salmonellae from human blood are nonequivalent to those from urine or feces, conjecturally due to the fluoroquinolone (ciprofloxacin) use to treat infections caused by nontyphoidal salmonellae. We also demonstrate statistical equivalence testing for comparing sample processing methods for fecal samples (culturing one versus multiple aliquots per sample) to assess AMR in fecal Escherichia coli. These methods yield equivalent results, except for tetracyclines. Importantly, statistical equivalence testing provides the MIC difference at which the data from two sample types or sample processing methods differ statistically. Data users (e.g., microbiologists and epidemiologists) may then interpret practical relevance of the difference. IMPORTANCE Bacterial antimicrobial resistance (AMR) needs to be assessed in different populations or strata for the purposes of surveillance and determination of the efficacy of interventions to halt AMR dissemination. To assess phenotypic antimicrobial susceptibility, isolates of target bacteria can be obtained from a stratum using different sample types or employing different sample processing methods in the laboratory. The MIC of each target antimicrobial drug for each of the isolates is measured, yielding the MIC distribution across the isolates from each sample type or sample processing method. We describe statistical equivalence testing for the MIC data for evaluating whether two sample types or sample processing methods yield equivalent estimates of the bacterial phenotypic antimicrobial susceptibility in the stratum. This includes estimating the MIC difference at which the data from the two approaches differ statistically. Data users (e.g., microbiologists, epidemiologists, and public health professionals) can then interpret whether that present difference is practically relevant. Copyright © 2018 Shakeri et al.
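
    A minimal sketch of the kind of two-one-sided-tests (TOST) equivalence comparison described above, applied to log2-transformed MIC values from two hypothetical sample types with a one-dilution equivalence margin; the data, the margin, and the pooled-t formulation are assumptions and do not reproduce the authors' exact procedure.

```python
import numpy as np
from scipy import stats

# Two one-sided tests (TOST) for equivalence of MIC data from two sample types,
# on the log2 scale (one unit = one two-fold dilution). Data are synthetic.
rng = np.random.default_rng(1)
log2_mic_a = rng.normal(1.0, 1.0, 60)    # e.g., isolates from sample type A
log2_mic_b = rng.normal(1.2, 1.0, 55)    # e.g., isolates from sample type B
delta = 1.0                              # equivalence margin: one dilution step

na, nb = len(log2_mic_a), len(log2_mic_b)
diff = log2_mic_a.mean() - log2_mic_b.mean()
sp2 = ((na - 1) * log2_mic_a.var(ddof=1) + (nb - 1) * log2_mic_b.var(ddof=1)) / (na + nb - 2)
se = np.sqrt(sp2 * (1 / na + 1 / nb))
df = na + nb - 2

p_lower = 1 - stats.t.cdf((diff + delta) / se, df)   # H0: true difference <= -delta
p_upper = stats.t.cdf((diff - delta) / se, df)       # H0: true difference >= +delta
p_tost = max(p_lower, p_upper)
print(f"mean log2 difference = {diff:.2f}, TOST p = {p_tost:.4f}")
print("equivalent within +/-1 dilution" if p_tost < 0.05 else "not shown equivalent")
```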

  3. Memory-optimized shift operator alternating direction implicit finite difference time domain method for plasma

    NASA Astrophysics Data System (ADS)

    Song, Wanjun; Zhang, Hou

    2017-11-01

    Through introducing the alternating direction implicit (ADI) technique and the memory-optimized algorithm to the shift operator (SO) finite difference time domain (FDTD) method, the memory-optimized SO-ADI FDTD for nonmagnetized collisional plasma is proposed and the corresponding formulae of the proposed method for programming are deduced. In order to further the computational efficiency, the iteration method rather than Gauss elimination method is employed to solve the equation set in the derivation of the formulae. Complicated transformations and convolutions are avoided in the proposed method compared with the Z transforms (ZT) ADI FDTD method and the piecewise linear JE recursive convolution (PLJERC) ADI FDTD method. The numerical dispersion of the SO-ADI FDTD method with different plasma frequencies and electron collision frequencies is analyzed and the appropriate ratio of grid size to the minimum wavelength is given. The accuracy of the proposed method is validated by the reflection coefficient test on a nonmagnetized collisional plasma sheet. The testing results show that the proposed method is advantageous for improving computational efficiency and saving computer memory. The reflection coefficient of a perfect electric conductor (PEC) sheet covered by multilayer plasma and the RCS of the objects coated by plasma are calculated by the proposed method and the simulation results are analyzed.

  4. Methods for assessing geodiversity

    NASA Astrophysics Data System (ADS)

    Zwoliński, Zbigniew; Najwer, Alicja; Giardino, Marco

    2017-04-01

    The accepted systematics of geodiversity assessment methods will be presented in three categories: qualitative, quantitative and qualitative-quantitative. Qualitative methods are usually descriptive methods that are suited to nominal and ordinal data. Quantitative methods use a different set of parameters and indicators to determine the characteristics of geodiversity in the area being researched. Qualitative-quantitative methods are a good combination of the collection of quantitative data (i.e. digital) and cause-effect data (i.e. relational and explanatory). It seems that at the current stage of the development of geodiversity research methods, qualitative-quantitative methods are the most advanced and best assess the geodiversity of the study area. Their particular advantage is the integration of data from different sources and with different substantive content. Among the distinguishing features of the quantitative and qualitative-quantitative methods for assessing geodiversity are their wide use within geographic information systems, both at the stage of data collection and data integration, as well as numerical processing and their presentation. The unresolved problem for these methods, however, is the possibility of their validation. It seems that currently the best method of validation is direct field confrontation. Looking to the next few years, the development of qualitative-quantitative methods connected with cognitive issues should be expected, oriented towards ontology and the Semantic Web.

  5. Statistical differences between relative quantitative molecular fingerprints from microbial communities.

    PubMed

    Portillo, M C; Gonzalez, J M

    2008-08-01

    Molecular fingerprints of microbial communities are a common method for the analysis and comparison of environmental samples. The significance of differences between microbial community fingerprints was analyzed considering the presence of different phylotypes and their relative abundance. A method is proposed by simulating coverage of the analyzed communities as a function of sampling size applying a Cramér-von Mises statistic. Comparisons were performed by a Monte Carlo testing procedure. As an example, this procedure was used to compare several sediment samples from freshwater ponds using a relative quantitative PCR-DGGE profiling technique. The method was able to discriminate among different samples based on their molecular fingerprints, and confirmed the lack of differences between aliquots from a single sample.
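
    The sketch below illustrates the general idea of a Monte Carlo comparison of two relative-abundance fingerprints using a Cramér-von Mises-type statistic; the band intensities are hypothetical and the resampling scheme is a simplification, not the authors' coverage-based procedure.

```python
import numpy as np

# Monte Carlo comparison of two relative-abundance fingerprints with a
# Cramér-von Mises-type statistic on cumulative relative abundances.
rng = np.random.default_rng(0)

def cvm_stat(p, q):
    return np.sum((np.cumsum(p) - np.cumsum(q)) ** 2)

# Hypothetical DGGE band intensities (phylotype abundances) for two samples
a = np.array([120, 80, 60, 30, 25, 10, 5, 3], float)
b = np.array([100, 95, 40, 45, 15, 12, 8, 1], float)
obs = cvm_stat(a / a.sum(), b / b.sum())

# Null hypothesis: both fingerprints are drawn from the same pooled community
pooled = (a + b) / (a + b).sum()
n_sim, na, nb = 9999, int(a.sum()), int(b.sum())
null = np.empty(n_sim)
for i in range(n_sim):
    sa = rng.multinomial(na, pooled) / na
    sb = rng.multinomial(nb, pooled) / nb
    null[i] = cvm_stat(sa, sb)

p_value = (1 + np.sum(null >= obs)) / (n_sim + 1)
print(f"observed statistic = {obs:.4f}, Monte Carlo p = {p_value:.4f}")
```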

  6. Matrix effects in pesticide multi-residue analysis by liquid chromatography-mass spectrometry.

    PubMed

    Kruve, Anneli; Künnapas, Allan; Herodes, Koit; Leito, Ivo

    2008-04-11

    Three sample preparation methods, the Luke method (AOAC 985.22), QuEChERS (quick, easy, cheap, effective, rugged and safe) and matrix solid-phase dispersion (MSPD), were applied to different fruits and vegetables for analysis of 14 pesticide residues by high-performance liquid chromatography with electrospray ionization-mass spectrometry (HPLC/ESI/MS). Matrix effect, recovery and process efficiency of the sample preparation methods applied to different fruits and vegetables were compared. The Luke method was found to produce the least matrix effect. On average, the best recoveries were obtained with the QuEChERS method. MSPD gave unsatisfactory recoveries for some basic pesticide residues. Comparison of matrix effects for different apple varieties showed high variability for some residues. It was demonstrated that the amount of co-extracting compounds that cause ionization suppression of aldicarb depends on the apple variety as well as on the sample preparation method employed.

  7. Effect of synthesis methods with different annealing temperatures on micro structure, cations distribution and magnetic properties of nano-nickel ferrite

    NASA Astrophysics Data System (ADS)

    El-Sayed, Karimat; Mohamed, Mohamed Bakr; Hamdy, Sh.; Ata-Allah, S. S.

    2017-02-01

    Nano-crystalline NiFe2O4 was synthesized by citrate and sol-gel methods at different annealing temperatures and the results were compared with a bulk sample prepared by the ceramic method. The effects of the preparation methods and different annealing temperatures on the crystallite size, strain, bond lengths, bond angles, cation distribution and degree of inversion were investigated by X-ray powder diffraction, high resolution transmission electron microscopy, Mössbauer effect spectrometry and vibrating sample magnetometry. The cation distributions were determined at both octahedral and tetrahedral sites using both Mössbauer effect spectroscopy and a modified Bertaut method with the Rietveld method. The Mössbauer effect spectra showed a regular decrease in the hyperfine field with decreasing particle size. Saturation magnetization and coercivity are found to be affected by the particle size and the cation distribution.

  8. Comparison of performance due to guided hyperlearning, unguided hyperlearning, and conventional learning in mathematics: an empirical study

    NASA Astrophysics Data System (ADS)

    Fathurrohman, Maman; Porter, Anne; Worthy, Annette L.

    2014-07-01

    In this paper, the use of guided hyperlearning, unguided hyperlearning, and conventional learning methods in mathematics are compared. The design of the research involved a quasi-experiment with a modified single-factor multiple treatment design comparing the three learning methods, guided hyperlearning, unguided hyperlearning, and conventional learning. The participants were from three first-year university classes, numbering 115 students in total. Each group received guided, unguided, or conventional learning methods in one of the three different topics, namely number systems, functions, and graphing. The students' academic performance differed according to the type of learning. Evaluation of the three methods revealed that only guided hyperlearning and conventional learning were appropriate methods for the psychomotor aspects of drawing in the graphing topic. There was no significant difference between the methods when learning the cognitive aspects involved in the number systems topic and the functions topic.

  9. Evaluation of Eight Methods for Aligning Orientation of Two Coordinate Systems.

    PubMed

    Mecheri, Hakim; Robert-Lachaine, Xavier; Larue, Christian; Plamondon, André

    2016-08-01

    The aim of this study was to evaluate eight methods for aligning the orientation of two different local coordinate systems. Alignment is very important when combining two different systems of motion analysis. Two of the methods were developed specifically for biomechanical studies, and because there have been at least three decades of algorithm development in robotics, it was decided to include six methods from this field. To compare these methods, an Xsens sensor and two Optotrak clusters were attached to a Plexiglas plate. The first optical marker cluster was fixed on the sensor and 20 trials were recorded. The error of alignment was calculated for each trial, and the mean, the standard deviation, and the maximum values of this error over all trials were reported. One-way repeated measures analysis of variance revealed that the alignment error differed significantly across the eight methods. Post-hoc tests showed that the alignment error from the methods based on angular velocities was significantly lower than for the other methods. The method using angular velocities performed the best, with an average error of 0.17 ± 0.08 deg. We therefore recommend this method, which is easy to perform and provides accurate alignment.
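
    One common way to solve the underlying alignment problem from angular velocities is a least-squares (Kabsch/SVD) rotation fit between the two sets of simultaneously measured angular velocity vectors, sketched below on synthetic data; this is a generic illustration and not necessarily the specific algorithm evaluated in the paper.

```python
import numpy as np

# Angular-velocity-based alignment: find the rotation R mapping angular velocities
# expressed in sensor frame A onto the same velocities expressed in cluster frame B.
rng = np.random.default_rng(3)

def kabsch(P, Q):
    """Rotation R minimizing sum ||R @ p_i - q_i||^2 for row vectors in P, Q."""
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# True misalignment between the two local coordinate systems (20 deg about z)
c, s = np.cos(np.radians(20)), np.sin(np.radians(20))
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])

w_A = rng.normal(0, 1.0, (200, 3))                     # angular velocities, frame A
w_B = w_A @ R_true.T + rng.normal(0, 0.02, (200, 3))   # same signal seen in frame B

R_est = kabsch(w_A, w_B)
err_deg = np.degrees(np.arccos(np.clip((np.trace(R_est @ R_true.T) - 1) / 2, -1, 1)))
print(f"alignment error = {err_deg:.3f} deg")
```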

  10. Three-dimensional forward modeling of DC resistivity using the aggregation-based algebraic multigrid method

    NASA Astrophysics Data System (ADS)

    Chen, Hui; Deng, Ju-Zhi; Yin, Min; Yin, Chang-Chun; Tang, Wen-Wu

    2017-03-01

    To speed up three-dimensional (3D) DC resistivity modeling, we present a new multigrid method, the aggregation-based algebraic multigrid method (AGMG). We first discretize the differential equation of the secondary potential field with mixed boundary conditions by using a seven-point finite-difference method to obtain a large sparse system of linear equations. Then, we introduce the theory behind the pairwise aggregation algorithms for AGMG and use the conjugate-gradient method with the V-cycle AGMG preconditioner (AGMG-CG) to solve the linear equations. We use typical geoelectrical models to test the proposed AGMG-CG method and compare the results with analytical solutions and the 3DDCXH algorithm for 3D DC modeling (3DDCXH). In addition, we apply the AGMG-CG method to different grid sizes and geoelectrical models and compare it to different iterative methods, such as ILU-BICGSTAB, ILU-GCR, and SSOR-CG. The AGMG-CG method yields nearly linearly decreasing errors, whereas the number of iterations increases slowly with increasing grid size. The AGMG-CG method is precise and converges fast, and thus can improve the computational efficiency in forward modeling of three-dimensional DC resistivity.
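
    The sketch below shows the overall solution pattern, a preconditioned conjugate-gradient iteration on a 7-point finite-difference operator, using a simple Jacobi preconditioner as a stand-in for the AGMG preconditioner; the grid size and right-hand side are arbitrary and this is not the authors' code.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# Preconditioned conjugate gradient on a 3-D 7-point finite-difference Laplacian,
# standing in for the discretized secondary-potential system. The Jacobi (diagonal)
# preconditioner is only a placeholder for the AGMG preconditioner.
n = 24                                        # cells per direction (small demo grid)
I1 = sp.identity(n)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.kron(T, I1), I1) +            # 7-point Laplacian via Kronecker sums
     sp.kron(sp.kron(I1, T), I1) +
     sp.kron(sp.kron(I1, I1), T)).tocsr()

b = np.random.default_rng(0).normal(size=A.shape[0])
M = LinearOperator(A.shape, matvec=lambda x: x / A.diagonal())   # Jacobi preconditioner

iters = []
x, info = cg(A, b, M=M, maxiter=2000, callback=lambda xk: iters.append(1))
print("info =", info, "| iterations:", len(iters))
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```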

  11. Comparison of infusion pumps calibration methods

    NASA Astrophysics Data System (ADS)

    Batista, Elsa; Godinho, Isabel; do Céu Ferreira, Maria; Furtado, Andreia; Lucas, Peter; Silva, Claudia

    2017-12-01

    Nowadays, several types of infusion pump are commonly used for drug delivery, such as syringe pumps and peristaltic pumps. These instruments present different measuring features and capacities according to their use and therapeutic application. In order to ensure the metrological traceability of these flow and volume measuring equipment, it is necessary to use suitable calibration methods and standards. Two different calibration methods can be used to determine the flow error of infusion pumps. One is the gravimetric method, considered as a primary method, commonly used by National Metrology Institutes. The other calibration method, a secondary method, relies on an infusion device analyser (IDA) and is typically used by hospital maintenance offices. The suitability of the IDA calibration method was assessed by testing several infusion instruments at different flow rates using the gravimetric method. In addition, a measurement comparison between Portuguese Accredited Laboratories and hospital maintenance offices was performed under the coordination of the Portuguese Institute for Quality, the National Metrology Institute. The obtained results were directly related to the used calibration method and are presented in this paper. This work has been developed in the framework of the EURAMET projects EMRP MeDD and EMPIR 15SIP03.
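
    A minimal sketch of the gravimetric calculation underlying the primary method: the mass of water collected over a timed interval is converted to volume and compared with the programmed flow rate. Buoyancy and evaporation corrections used in full metrological procedures are omitted, and all numbers are hypothetical.

```python
# Gravimetric flow check for an infusion pump (simplified, hypothetical values).
set_flow_ml_h = 10.0        # programmed flow rate
mass_g = 4.9720             # balance reading after the collection interval
duration_s = 1800.0         # 30 min collection
rho_g_ml = 0.99705          # density of water at approx. 25 degC

volume_ml = mass_g / rho_g_ml
measured_flow_ml_h = volume_ml / (duration_s / 3600.0)
error_pct = 100.0 * (measured_flow_ml_h - set_flow_ml_h) / set_flow_ml_h
print(f"measured flow = {measured_flow_ml_h:.3f} mL/h, error = {error_pct:+.2f} %")
```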

  12. Comparison of prosthetic models produced by traditional and additive manufacturing methods.

    PubMed

    Park, Jin-Young; Kim, Hae-Young; Kim, Ji-Hwan; Kim, Jae-Hong; Kim, Woong-Chul

    2015-08-01

    The purpose of this study was to verify the clinical feasibility of additive manufacturing by comparing the accuracy of four different manufacturing methods for metal copings: the conventional lost-wax technique (CLWT); a subtractive method, wax blank milling (WBM); and two additive methods, multi jet modeling (MJM) and micro-stereolithography (Micro-SLA). Thirty study models were created using an acrylic model with the maxillary upper right canine, first premolar, and first molar teeth. Based on the scan files from a non-contact blue light scanner (Identica; Medit Co. Ltd., Seoul, Korea), thirty cores were produced using the WBM, MJM, and Micro-SLA methods, respectively, and another thirty frameworks were produced using the CLWT method. To measure the marginal and internal gap, the silicone replica method was adopted, and the silicone images obtained were evaluated using a digital microscope (KH-7700; Hirox, Tokyo, Japan) at 140X magnification. Analyses were performed using two-way analysis of variance (ANOVA) and the Tukey post hoc test (α=.05). The mean marginal gaps and internal gaps showed significant differences according to tooth type (P<.001 and P<.001, respectively) and manufacturing method (P<.037 and P<.001, respectively). Unlike the WBM and MJM methods, Micro-SLA did not show any significant difference from CLWT regarding the mean marginal gap. The mean values of the gaps resulting from the four different manufacturing methods were within a clinically allowable range, and, thus, the clinical use of additive manufacturing methods is acceptable as an alternative to the traditional lost-wax technique and subtractive manufacturing.

  13. Ultrasound-Assist Extrusion Methods for the Fabrication of Polymer Nanocomposites Based on Polypropylene/Multi-Wall Carbon Nanotubes

    PubMed Central

    Ávila-Orta, Carlos A.; Quiñones-Jurado, Zoe V.; Waldo-Mendoza, Miguel A.; Rivera-Paz, Erika A.; Cruz-Delgado, Víctor J.; Mata-Padilla, José M.; González-Morones, Pablo; Ziolo, Ronald F.

    2015-01-01

    Isotactic polypropylenes (iPP) with different melt flow indexes (MFI) were used to fabricate nanocomposites (NCs) with 10 wt % loadings of multi-wall carbon nanotubes (MWCNTs) using ultrasound-assisted extrusion methods to determine their effect on the morphology, melt flow, and electrical properties of the NCs. Three different types of iPPs were used with MFIs of 2.5, 34 and 1200 g/10 min. Four different NC fabrication methods based on melt extrusion were used. In the first method melt extrusion fabrication without ultrasound assistance was used. In the second and third methods, an ultrasound probe attached to a hot chamber located at the exit of the die was used to subject the sample to fixed frequency and variable frequency, respectively. The fourth method is similar to the first method, with the difference being that the carbon nanotubes were treated in a fluidized air-bed with an ultrasound probe before being used in the fabrication of the NCs with no ultrasound assistance during extrusion. The samples were characterized by MFI, Optical microscopy (OM), Scanning electron microscopy (SEM), Transmission electron microscopy (TEM), electrical surface resistivity, and electric charge. MFI decreases in all cases with addition of MWCNTs with the largest decrease observed for samples with the highest MFI. The surface resistivity, which ranged from 10^13 to 10^5 Ω/sq, and electric charge, were observed to depend on the ultrasound-assisted fabrication method as well as on the melt flow index of the iPP. A relationship between agglomerate size and area ratio with electric charge was found. Several trends in the overall data were identified and are discussed in terms of MFI and the different fabrication methods. PMID:28793686

  14. Marginal and Internal Adaptation of Zirconia Crowns: A Comparative Study of Assessment Methods.

    PubMed

    Cunali, Rafael Schlögel; Saab, Rafaella Caramori; Correr, Gisele Maria; Cunha, Leonardo Fernandes da; Ornaghi, Bárbara Pick; Ritter, André V; Gonzaga, Carla Castiglia

    2017-01-01

    Marginal and internal adaptation is critical for the success of indirect restorations. New imaging systems make it possible to evaluate these parameters with precision and non-destructively. This study evaluated the marginal and internal adaptation of zirconia copings fabricated with two different systems using both silicone replica and microcomputed tomography (micro-CT) assessment methods. A metal master model, representing a preparation for an all-ceramic full crown, was digitally scanned and polycrystalline zirconia copings were fabricated with either Ceramill Zi (Amann-Girrbach) or inCoris Zi (Dentsply-Sirona), n=10. For each coping, marginal and internal gaps were evaluated by the silicone replica and micro-CT assessment methods. Four assessment points of each replica cross-section and micro-CT image were evaluated using imaging software: marginal gap (MG), axial wall (AW), axio-occlusal angle (AO) and mid-occlusal wall (MO). Data were statistically analyzed by factorial ANOVA and the Tukey test (α=0.05). There was no statistically significant difference between the methods for MG and AW. For AO, there were significant differences between methods for Amann copings, while for Dentsply-Sirona copings similar values were observed. For MO, both methods presented statistically significant differences. A positive correlation was observed between the MG values determined by the two assessment methods. In conclusion, the assessment method influenced the evaluation of marginal and internal adaptation of zirconia copings. Micro-CT showed lower marginal and internal gap values when compared to the silicone replica technique, although the difference was not always statistically significant. The marginal gap and axial wall assessment points showed the lowest gap values, regardless of ceramic system and assessment method used.

  15. Comparison of four extraction/methylation analytical methods to measure fatty acid composition by gas chromatography in meat.

    PubMed

    Juárez, M; Polvillo, O; Contò, M; Ficco, A; Ballico, S; Failla, S

    2008-05-09

    Four different extraction-derivatization methods commonly used for fatty acid analysis in meat (in situ or one-step method, saponification method, classic method and a combination of classic extraction and saponification derivatization) were tested. The in situ method had low recovery and variation. The saponification method showed the best balance between recovery, precision, repeatability and reproducibility. The classic method had high recovery and acceptable variation values, except for the polyunsaturated fatty acids, showing higher variation than the former methods. The combination of extraction and methylation steps had great recovery values, but the precision, repeatability and reproducibility were not acceptable. Therefore the saponification method would be more convenient for polyunsaturated fatty acid analysis, whereas the in situ method would be an alternative for fast analysis. However the classic method would be the method of choice for the determination of the different lipid classes.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Hyun-Ju; Chung, Chin-Wook, E-mail: joykang@hanyang.ac.kr; Choi, Hyeok

    A modified central difference method (MCDM) is proposed to obtain the electron energy distribution functions (EEDFs) in single Langmuir probes. Numerical calculation of the EEDF with MCDM is simple and introduces less noise. This method provides the second derivative at a given point as the weighted average of second-order central difference derivatives calculated at different voltage intervals, weighting each by the square of the interval. In this paper, the EEDFs obtained from MCDM are compared to those calculated via the averaged central difference method. It is found that MCDM effectively suppresses the noise in the EEDF, while the same number of points is used to calculate the second derivative.
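
    Based on the description above, a sketch of the weighted-average second derivative is given below and compared with an ordinary numerical second derivative on a synthetic, noisy probe characteristic; the number of voltage intervals, the test curve, and the noise level are assumptions, and no EEDF scaling constants are applied.

```python
import numpy as np

# Weighted-average second derivative: central second differences at several voltage
# intervals, each weighted by the square of that interval (as described above).
def mcdm_second_derivative(I, dV, k_max=4):
    n = len(I)
    d2 = np.full(n, np.nan)
    for i in range(k_max, n - k_max):
        num, den = 0.0, 0.0
        for k in range(1, k_max + 1):
            h = k * dV
            num += I[i + k] - 2.0 * I[i] + I[i - k]   # equals (k*dV)^2 * central diff.
            den += h * h                              # weight = interval squared
        d2[i] = num / den
    return d2

# Synthetic, noisy electron-retardation current (arbitrary units)
V = np.arange(-20.0, 0.0, 0.1)
I = np.exp(V / 3.0) + np.random.default_rng(0).normal(0, 2e-4, V.size)

d2_mcdm = mcdm_second_derivative(I, dV=0.1, k_max=4)
d2_plain = np.gradient(np.gradient(I, V), V)          # ordinary numerical derivative

true_d2 = np.exp(V / 3.0) / 9.0
err = lambda d2: np.sqrt(np.nanmean((d2[10:-10] - true_d2[10:-10]) ** 2))
print("RMS error, plain second derivative:", err(d2_plain))
print("RMS error, MCDM                   :", err(d2_mcdm))
```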

  17. Parallel realities: exploring poverty dynamics using mixed methods in rural Bangladesh.

    PubMed

    Davisa, Peter; Baulch, Bob

    2011-01-01

    This paper explores the implications of using two methodological approaches to study poverty dynamics in rural Bangladesh. Using data from a unique longitudinal study, we show how different methods lead to very different assessments of socio-economic mobility. We suggest five ways of reconciling these differences: considering assets in addition to expenditures, proximity to the poverty line, other aspects of well-being, household division, and qualitative recall errors. Considering assets and proximity to the poverty line along with expenditures resolves three-fifths of the qualitative and quantitative differences. Use of such integrated mixed-methods can therefore improve the reliability of poverty dynamics research.

  18. Experimental research on showing automatic disappearance pen handwriting based on spectral imaging technology

    NASA Astrophysics Data System (ADS)

    Su, Yi; Xu, Lei; Liu, Ningning; Huang, Wei; Xu, Xiaojing

    2016-10-01

    Purpose: to find an efficient, non-destructive examination method for revealing words that disappear after being written with an automatic-disappearance pen. Method: an imaging spectrometer is used to reveal the latent, disappeared words on the paper surface, exploiting the different reflection and absorption properties of various substances in different spectral bands. Results: words written with different disappearance pens on the same paper, or with the same disappearance pen on different papers, could both be revealed well using the spectral imaging examination method. Conclusion: spectral imaging technology can reveal words that have disappeared after being written with an automatic-disappearance pen.

  19. Reduction of aflatoxin in rice by different cooking methods.

    PubMed

    Sani, Ali Mohamadi; Azizi, Eisa Gholampour; Salehi, Esmaeel Ataye; Rahimi, Khadije

    2014-07-01

    Rice (Oryza sativa Linn) is one of the basic diets in the north of Iran. The aim of present study was to detect total aflatoxin (AFT) in domestic and imported rice in Amol (in the north of Iran) and to evaluate the effect of different cooking methods on the levels of the toxin. For this purpose, 42 rice samples were collected from retail stores. The raw samples were analysed by enzyme-linked immunosorbent assay (ELISA) technique for toxin assessment and then submitted to two different cooking methods including traditional local method and in rice cooker. After treatment, AFT was determined. Results show that the average concentration of AFT in domestic and imported samples was 1.08 ± 0.02 and 1.89 ± 0.87 ppb, respectively, which is lower than national and European Union standards. The highest AFT reduction (24.8%) was observed when rice samples were cooked by rice cooker but the difference with local method was not statistically significant (p > 0.05). © The Author(s) 2012.

  20. The use of spectral methods in bidomain studies.

    PubMed

    Trayanova, N; Pilkington, T

    1992-01-01

    A Fourier transform method is developed for solving the bidomain coupled differential equations governing the intracellular and extracellular potentials on a finite sheet of cardiac cells undergoing stimulation. The spectral formulation converts the system of differential equations into a "diagonal" system of algebraic equations. Solving the algebraic equations directly and taking the inverse transform of the potentials proved numerically less expensive than solving the coupled differential equations by means of traditional numerical techniques, such as finite differences; the comparison between the computer execution times showed that the Fourier transform method was about 40 times faster than the finite difference method. By application of the Fourier transform method, transmembrane potential distributions in the two-dimensional myocardial slice were calculated. For a tissue characterized by a ratio of the intra- to extracellular conductivities that is different in all principal directions, the transmembrane potential distribution exhibits a rather complicated geometrical pattern. The influence of the different anisotropy ratios, the finite tissue size, and the stimuli configuration on the pattern of membrane polarization is investigated.
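
    The core idea, that the Fourier transform turns a constant-coefficient coupled system into an independent small algebraic system per wavenumber, can be sketched on a toy pair of coupled elliptic equations as below; the equations, coefficients, and forcing only mimic the structure of the bidomain system and are not the paper's model.

```python
import numpy as np

# Toy coupled system on a periodic grid:
#   a1*Lap(u) - c*(u - v) = f,   a2*Lap(v) + c*(u - v) = -f
# After a 2-D FFT each wavenumber gives an independent 2x2 algebraic system.
n, L = 64, 1.0
a1, a2, c = 1.0, 0.5, 10.0
x = np.arange(n) * L / n
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.sin(2 * np.pi * X) * np.cos(4 * np.pi * Y)      # zero-mean "stimulus"

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2

F, G = np.fft.fft2(f), np.fft.fft2(-f)
U, V = np.zeros_like(F), np.zeros_like(F)
det = (a1 * k2 + c) * (a2 * k2 + c) - c * c            # determinant of each 2x2 block
m = k2 > 0                                             # zero mode fixed at zero mean
U[m] = (-(a2 * k2[m] + c) * F[m] - c * G[m]) / det[m]
V[m] = (-c * F[m] - (a1 * k2[m] + c) * G[m]) / det[m]

u, v = np.real(np.fft.ifft2(U)), np.real(np.fft.ifft2(V))
lap = lambda w: np.real(np.fft.ifft2(-k2 * np.fft.fft2(w)))
print("max residual, eq. 1:", np.abs(a1 * lap(u) - c * (u - v) - f).max())
print("max residual, eq. 2:", np.abs(a2 * lap(v) + c * (u - v) + f).max())
```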

  1. Comparative Study on Two Different Methods for Determination of Hydraulic Conductivity of HeLa Cells During Freezing.

    PubMed

    Li, Lei; Gao, Cai; Zhao, Gang; Shu, Zhiquan; Cao, Yunxia; Gao, Dayong

    2016-12-01

    The measurement of hydraulic conductivity of the cell membrane is very important for optimizing the protocol of cryopreservation and cryosurgery. There are two different methods using differential scanning calorimetry (DSC) to measure the freezing response of cells and tissues. Devireddy et al. presented the slow-fast-slow (SFS) cooling method, in which the difference of the heat release during the freezing process between the osmotically active and inactive cells is used to obtain the cell membrane hydraulic conductivity and activation energy. Luo et al. simplified the procedure and introduced the single-slow (SS) cooling protocol, which requires only one cooling process although different cytocrits are required for the determination of the membrane transport properties. To the best of our knowledge, there is still a lack of comparison of experimental processes and requirements for experimental conditions between these two methods. This study made a systematic comparison between these two methods from the aforementioned aspects in detail. The SFS and SS cooling methods mentioned earlier were utilized to obtain the reference hydraulic conductivity (L pg ) and activation energy (E Lp ) of HeLa cells by fitting the model to DSC data. With the SFS method, it was determined that L pg  = 0.10 μm/(min·atm) and E Lp  = 22.9 kcal/mol; whereas the results obtained by the SS cooling method showed that L pg  = 0.10 μm/(min·atm) and E Lp  = 23.6 kcal/mol. The results indicated that the values of the water transport parameters measured by two methods were comparable. In other words, the two parameters can be obtained by comparing the heat releases between two slow cooling processes of the same sample according to the SFS method. However, the SS method required analyzing heat releases of samples with different cytocrits. Thus, more experimental time was required.

  2. Multiple attenuation to reflection seismic data using Radon filter and Wave Equation Multiple Rejection (WEMR) method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erlangga, Mokhammad Puput

    Separation between signal and noise, incoherent or coherent, is important in seismic data processing. Even after the seismic data have been processed, coherent noise still mixes with the primary signal. Multiple reflections are a kind of coherent noise. In this research, we processed seismic data to attenuate multiple reflections in both synthetic and real seismic data from Mentawai. There are several methods to attenuate multiple reflections; one of them is the Radon filter method, which discriminates between primary and multiple reflections in the τ-p domain based on the moveout difference between them. However, in cases where the moveout difference is too small, the Radon filter method is not enough to attenuate the multiple reflections. The Radon filter also produces artifacts on the gather data. Besides the Radon filter method, we also use the Wave Equation Multiple Elimination (WEMR) method to attenuate the long-period multiple reflections. The WEMR method attenuates long-period multiple reflections based on wave equation inversion. From the inversion of the wave equation and the magnitude of the seismic wave amplitude observed at the free surface, we obtain the water-bottom reflectivity, which is used to eliminate the multiple reflections. The WEMR method does not depend on the moveout difference to attenuate long-period multiple reflections. Therefore, the WEMR method can be applied to seismic data with a small moveout difference, such as the Mentawai seismic data. The small moveout difference in the Mentawai seismic data is caused by the restricted far offset, which is only 705 meters. We compared the real multiple-free stacked data after processing with the Radon filter and with the WEMR process. The conclusion is that the WEMR method attenuates long-period multiple reflections more effectively than the Radon filter method on the real (Mentawai) seismic data.

  3. A hydrostatic weighing method using total lung capacity and a small tank.

    PubMed Central

    Warner, J G; Yeater, R; Sherwood, L; Weber, K

    1986-01-01

    The purpose of this study was to establish the validity and reliability of a hydrostatic weighing method using total lung capacity (measuring vital capacity with a respirometer at the time of weighing) the prone position, and a small oblong tank. The validity of the method was established by comparing the TLC prone (tank) method against three hydrostatic weighing methods administered in a pool. The three methods included residual volume seated, TLC seated and TLC prone. Eighty male and female subjects were underwater weighed using each of the four methods. Validity coefficients for per cent body fat between the TLC prone (tank) method and the RV seated (pool), TLC seated (pool) and TLC prone (pool) methods were .98, .99 and .99, respectively. A randomised complete block ANOVA found significant differences between the RV seated (pool) method and each of the three TLC methods with respect to both body density and per cent body fat. The differences were negligible with respect to HW error. Reliability of the TLC prone (tank) method was established by weighing twenty subjects three different times with ten-minute time intervals between testing. Multiple correlations yielded reliability coefficients for body density and per cent body fat values of .99 and .99, respectively. It was concluded that the TLC prone (tank) method is valid, reliable and a favourable method of hydrostatic weighing. PMID:3697596

  4. A hydrostatic weighing method using total lung capacity and a small tank.

    PubMed

    Warner, J G; Yeater, R; Sherwood, L; Weber, K

    1986-03-01

    The purpose of this study was to establish the validity and reliability of a hydrostatic weighing method using total lung capacity (measuring vital capacity with a respirometer at the time of weighing) the prone position, and a small oblong tank. The validity of the method was established by comparing the TLC prone (tank) method against three hydrostatic weighing methods administered in a pool. The three methods included residual volume seated, TLC seated and TLC prone. Eighty male and female subjects were underwater weighed using each of the four methods. Validity coefficients for per cent body fat between the TLC prone (tank) method and the RV seated (pool), TLC seated (pool) and TLC prone (pool) methods were .98, .99 and .99, respectively. A randomised complete block ANOVA found significant differences between the RV seated (pool) method and each of the three TLC methods with respect to both body density and per cent body fat. The differences were negligible with respect to HW error. Reliability of the TLC prone (tank) method was established by weighing twenty subjects three different times with ten-minute time intervals between testing. Multiple correlations yielded reliability coefficients for body density and per cent body fat values of .99 and .99, respectively. It was concluded that the TLC prone (tank) method is valid, reliable and a favourable method of hydrostatic weighing.
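
    For reference, the standard underwater-weighing computation with the lung volume taken as total lung capacity is sketched below; the input values are hypothetical (note that at TLC the net underwater reading is typically negative because the subject is buoyant), gastrointestinal gas is fixed at a conventional 0.1 L, and Siri's equation is assumed for converting density to per cent fat.

```python
# Underwater (hydrostatic) weighing at total lung capacity (simplified sketch).
mass_air_kg = 75.0          # body mass in air
mass_water_kg = -1.3        # net underwater reading at TLC (negative: subject floats)
water_density = 0.9957      # kg/L at the tank temperature (approx. 30 degC)
tlc_l = 6.8                 # total lung capacity at the time of weighing
gi_gas_l = 0.1              # conventional allowance for gastrointestinal gas

body_volume_l = (mass_air_kg - mass_water_kg) / water_density - (tlc_l + gi_gas_l)
body_density = mass_air_kg / body_volume_l            # kg/L
percent_fat = 495.0 / body_density - 450.0            # Siri (1961)
print(f"body density = {body_density:.4f} kg/L, body fat = {percent_fat:.1f} %")
```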

  5. A rapid method to extract Seebeck coefficient under a large temperature difference

    NASA Astrophysics Data System (ADS)

    Zhu, Qing; Kim, Hee Seok; Ren, Zhifeng

    2017-09-01

    The Seebeck coefficient is one of the three important properties of thermoelectric materials. Since thermoelectric materials usually work under large temperature differences in real applications, we propose a quasi-steady state method to accurately measure the Seebeck coefficient under a large temperature gradient. Compared to other methods, this method is not only highly accurate but also less time consuming. It can measure the Seebeck coefficient in both the heating-up and cooling-down processes. In this work, a Zintl material (Mg3.15Nb0.05Sb1.5Bi0.49Te0.01) was tested to extract the Seebeck coefficient from room temperature to 573 K. Compared with a commercialized Seebeck coefficient measurement device (ZEM-3), there is a ±5% difference between the values from ZEM-3 and those from this method.
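
    A minimal sketch of the basic extraction step, fitting the open-circuit voltage against the measured temperature difference and taking the Seebeck coefficient from the slope, is shown below on synthetic data; sign conventions, probe-wire contributions, and the quasi-steady corrections of the actual method are not reproduced.

```python
import numpy as np

# Extract a Seebeck coefficient as the (negative) slope of voltage versus
# temperature difference while dT drifts slowly. Data are synthetic.
rng = np.random.default_rng(2)
dT = np.linspace(2.0, 60.0, 120)                        # K, large temperature difference
S_true = -150e-6                                        # V/K (n-type-like sample)
V = -S_true * dT + rng.normal(0, 2e-6, dT.size)         # measured open-circuit voltage

slope, intercept = np.polyfit(dT, V, 1)
S_est = -slope
print(f"estimated S = {S_est * 1e6:.1f} uV/K")
```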

  6. Radiative opacities of iron using a difference algebraic converging method at temperatures near solar convection zone

    NASA Astrophysics Data System (ADS)

    Fan, Zhixiang; Sun, Weiguo; Zhang, Yi; Fu, Jia; Hu, Shide; Fan, Qunchao

    2018-03-01

    An interpolation method named the difference algebraic converging method for opacity (DACMo) is proposed to study the opacities and transmissions of metal plasmas. The studies on iron plasmas at temperatures near the solar convection zone show that (1) the DACMo values reproduce most spectral structures and magnitudes of experimental opacities and transmissions. (2) The DACMo can be used to predict unknown opacities at another temperature Te' and density ρ' using the opacity constants obtained at (Te, ρ). (3) The DACMo may predict reasonable opacities that are not available experimentally, whereas the least-squares (LS) method does not. (4) The computational speed of the DACMo is at least 10 times faster than that of the original difference converging method for opacity.

  7. Discrimination of Fritillary according to geographical origin with Fourier transform infrared spectroscopy and two-dimensional correlation IR spectroscopy.

    PubMed

    Hua, Rui; Sun, Su-Qin; Zhou, Qun; Noda, Isao; Wang, Bao-Qin

    2003-09-19

    Fritillaria is a traditional Chinese herbal medicine for eliminating phlegm and relieving cough, with a long history in China and some other Asian countries. The objective of this study is to develop a nondestructive and accurate method to discriminate Fritillaria of different geographical origins, which is a troublesome task for existing analytical methods. We conducted a systematic study on five kinds of Fritillaria by Fourier transform infrared spectroscopy, second derivative infrared spectroscopy, and two-dimensional (2D) correlation infrared spectroscopy under thermal perturbation. Because Fritillaria contain a large amount of starch, the conventional IR spectra of different Fritillaria have only very limited spectral feature differences. Based on these differences, we can separate different Fritillaria to a limited extent, but this method was deemed not very practical. The second derivative IR spectra of Fritillaria could enhance spectral resolution, amplify the differences between the IR spectra of different Fritillaria, and provide some dissimilarity in their starch content when compared with the spectrum of pure starch. Finally, we applied thermal perturbation to Fritillaria and analyzed the resulting spectra by the 2D correlation method to distinguish different Fritillaria easily and clearly. The distinction of very similar Fritillaria was possible because the spectral resolution was greatly enhanced by the 2D correlation spectroscopy. In addition, with the dynamic information on molecular structure provided by 2D correlation IR spectra, we studied the differences in the stability of active components of Fritillaria. The differences were reflected mainly in the intensity ratio of the auto-peak at 985 cm^-1 and other auto-peaks. The 2D correlation IR spectroscopy (2D IR) of Fritillaria can be a new and powerful method to discriminate Fritillaria.
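
    The generalized 2D correlation maps referred to above are commonly computed from the mean-centered (dynamic) spectra as below; the synthetic band model, perturbation steps, and noise are assumptions, and only the standard synchronous/asynchronous (Hilbert-Noda) construction is shown, not the authors' processing.

```python
import numpy as np

# Generalized 2-D correlation spectroscopy (Noda): synchronous map from a
# covariance-like product of dynamic spectra, asynchronous map via the
# Hilbert-Noda matrix. The spectra below are synthetic.
rng = np.random.default_rng(4)
m, n = 12, 400                             # perturbation steps, wavenumber points
wn = np.linspace(400, 1800, n)             # cm^-1 axis (illustrative)
t = np.linspace(0, 1, m)[:, None]

band = lambda c, w: np.exp(-0.5 * ((wn - c) / w) ** 2)
Y = (1.0 - 0.5 * t) * band(985, 15) + (0.3 + 0.6 * t) * band(1080, 20)
Y += rng.normal(0, 0.01, Y.shape)          # m x n matrix of spectra

Yd = Y - Y.mean(axis=0)                    # dynamic (mean-centered) spectra
sync = Yd.T @ Yd / (m - 1)                 # synchronous 2-D correlation map

j = np.arange(m)
N = np.zeros((m, m))
off = j[:, None] != j[None, :]
N[off] = 1.0 / (np.pi * (j[None, :] - j[:, None]))[off]
asyn = Yd.T @ (N @ Yd) / (m - 1)           # asynchronous map (Hilbert-Noda transform)

i985, i1080 = np.abs(wn - 985).argmin(), np.abs(wn - 1080).argmin()
print("sync auto-peak at 985 cm^-1 :", sync[i985, i985])
print("sync cross-peak (985, 1080) :", sync[i985, i1080])
print("async cross-peak (985, 1080):", asyn[i985, i1080])
```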

  8. Comparing K-mer based methods for improved classification of 16S sequences.

    PubMed

    Vinje, Hilde; Liland, Kristian Hovde; Almøy, Trygve; Snipen, Lars

    2015-07-01

    The need for precise and stable taxonomic classification is highly relevant in modern microbiology. Parallel to the explosion in the amount of sequence data accessible, there has also been a shift in focus for classification methods. Previously, alignment-based methods were the most applicable tools. Now, methods based on counting K-mers by sliding windows are the most interesting classification approach with respect to both speed and accuracy. Here, we present a systematic comparison of five different K-mer based classification methods for the 16S rRNA gene. The methods differ from each other both in data usage and modelling strategies. We have based our study on the commonly known and well-used naïve Bayes classifier from the RDP project, and four other methods were implemented and tested on two different data sets, on full-length sequences as well as fragments of typical read-length. The differences in classification error obtained by the methods seemed to be small, but they were stable across both data sets tested. The Preprocessed nearest-neighbour (PLSNN) method performed best for full-length 16S rRNA sequences, significantly better than the naïve Bayes RDP method. On fragmented sequences the naïve Bayes Multinomial method performed best, significantly better than all other methods. For both data sets explored, and on both full-length and fragmented sequences, all five methods reached an error plateau. We conclude that no K-mer based method is universally best for classifying both full-length sequences and fragments (reads). All methods approach an error plateau, indicating that improved training data are needed to improve classification from here. Classification errors occur most frequently for genera with few sequences present. For improving the taxonomy and testing new classification methods, the need for a better and more universal and robust training data set is crucial.
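
    As a generic example of one of the model families compared here, the sketch below trains a small multinomial naïve Bayes classifier on K-mer counts; the toy sequences, K = 4, and Laplace smoothing are assumptions, and this is not the RDP classifier or the authors' implementations.

```python
import numpy as np
from itertools import product

# Multinomial naive Bayes on K-mer counts (generic sketch with toy sequences).
K = 4
kmers = ["".join(p) for p in product("ACGT", repeat=K)]
index = {kmer: i for i, kmer in enumerate(kmers)}

def kmer_counts(seq):
    v = np.zeros(len(kmers))
    for i in range(len(seq) - K + 1):
        j = index.get(seq[i:i + K])
        if j is not None:                     # skip K-mers with ambiguous bases
            v[j] += 1
    return v

def train(seqs, labels):
    X = np.array([kmer_counts(s) for s in seqs])
    y = np.array(labels)
    logp, prior = {}, {}
    for c in sorted(set(labels)):
        counts = X[y == c].sum(axis=0) + 1.0  # Laplace smoothing
        logp[c] = np.log(counts / counts.sum())
        prior[c] = np.log(np.mean(y == c))
    return logp, prior

def classify(seq, logp, prior):
    x = kmer_counts(seq)
    scores = {c: prior[c] + x @ logp[c] for c in logp}
    return max(scores, key=scores.get)

train_seqs = ["ACGTACGTGGCCTTAAGGCC", "ACGTTGCAGGCCTTAAGGCA",
              "TTTTAAAACCCCGGGGTTAA", "TTTAAAAGCCCCGGGTTTAA"]
train_labels = ["GenusA", "GenusA", "GenusB", "GenusB"]
logp, prior = train(train_seqs, train_labels)
print(classify("ACGTACGAGGCCTTAAGGCC", logp, prior))   # should print GenusA
```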

  9. TL and ESR based identification of gamma-irradiated frozen fish using different hydrolysis techniques

    NASA Astrophysics Data System (ADS)

    Ahn, Jae-Jun; Akram, Kashif; Shahbaz, Hafiz Muhammad; Kwon, Joong-Ho

    2014-12-01

    Frozen fish fillets (walleye Pollack and Japanese Spanish mackerel) were selected as samples for irradiation (0-10 kGy) detection trials using different hydrolysis methods. Photostimulated luminescence (PSL)-based screening analysis for gamma-irradiated frozen fillets showed low sensitivity due to limited silicate mineral contents on the samples. Same limitations were found in the thermoluminescence (TL) analysis on mineral samples isolated by density separation method. However, acid (HCl) and alkali (KOH) hydrolysis methods were effective in getting enough minerals to carry out TL analysis, which was reconfirmed through the normalization step by calculating the TL ratios (TL1/TL2). For improved electron spin resonance (ESR) analysis, alkali and enzyme (alcalase) hydrolysis methods were compared in separating minute-bone fractions. The enzymatic method provided more clear radiation-specific hydroxyapatite radicals than that of the alkaline method. Different hydrolysis methods could extend the application of TL and ESR techniques in identifying the irradiation history of frozen fish fillets.

  10. A hybrid method for accurate star tracking using star sensor and gyros.

    PubMed

    Lu, Jiazhen; Yang, Lie; Zhang, Hao

    2017-10-01

    Star tracking is the primary operating mode of star sensors. To improve tracking accuracy and efficiency, a hybrid method using a star sensor and gyroscopes is proposed in this study. In this method, the dynamic conditions of an aircraft are determined first by the estimated angular acceleration. Under low dynamic conditions, the star sensor is used to measure the star vector and the vector difference method is adopted to estimate the current angular velocity. Under high dynamic conditions, the angular velocity is obtained by the calibrated gyros. The star position is predicted based on the estimated angular velocity and calibrated gyros using the star vector measurements. The results of the semi-physical experiment show that this hybrid method is accurate and feasible. In contrast with the star vector difference and gyro-assisted methods, the star position prediction result of the hybrid method is verified to be more accurate in two different cases under the given random noise of the star centroid.
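
    The vector-difference idea can be sketched as a least-squares problem: for a star whose inertial direction is fixed, its body-frame unit vector obeys dv/dt = -omega x v, so stacking the finite-difference version of this relation over several tracked stars yields the angular velocity. The star directions, rates, noise level, and update interval below are synthetic assumptions.

```python
import numpy as np

# Estimate the body angular velocity from the difference of star unit vectors
# measured over one sensor interval, via least squares over several stars.
rng = np.random.default_rng(5)
dt = 0.05                                   # s, star-sensor update interval
omega_true = np.array([0.02, -0.05, 0.08])  # rad/s

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

v1 = rng.normal(size=(6, 3))
v1 /= np.linalg.norm(v1, axis=1, keepdims=True)         # 6 tracked star vectors
v2 = v1 + dt * np.cross(v1, omega_true)                 # small-angle propagation
v2 += rng.normal(0, 1e-5, v2.shape)                     # centroiding noise
v2 /= np.linalg.norm(v2, axis=1, keepdims=True)

A = np.vstack([skew(v) for v in v1])                    # (3*N) x 3 design matrix
b = ((v2 - v1) / dt).ravel()
omega_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated omega [rad/s]:", np.round(omega_est, 4))
```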

  11. Application of dietary fiber method AOAC 2011.25 in fruit and comparison with AOAC 991.43 method.

    PubMed

    Tobaruela, Eric de C; Santos, Aline de O; Almeida-Muradian, Ligia B de; Araujo, Elias da S; Lajolo, Franco M; Menezes, Elizabete W

    2018-01-01

    AOAC 2011.25 method enables the quantification of most of the dietary fiber (DF) components according to the definition proposed by Codex Alimentarius. This study aimed to compare the DF content in fruits analyzed by the AOAC 2011.25 and AOAC 991.43 methods. Plums (Prunus salicina), atemoyas (Annona x atemoya), jackfruits (Artocarpus heterophyllus), and mature coconuts (Cocos nucifera) from different Brazilian regions (3 lots/fruit) were analyzed for DF, resistant starch, and fructans contents. The AOAC 2011.25 method was evaluated for precision, accuracy, and linearity in different food matrices and carbohydrate standards. The DF contents of plums, atemoyas, and jackfruits obtained by AOAC 2011.25 was higher than those obtained by AOAC 991.43 due to the presence of fructans. The DF content of mature coconuts obtained by the same methods did not present a significant difference. The AOAC 2011.25 method is recommended for fruits with considerable fructans content because it achieves more accurate values. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Effect of different cooking methods on total phenolic contents and antioxidant activities of four Boletus mushrooms.

    PubMed

    Sun, Liping; Bai, Xue; Zhuang, Yongliang

    2014-11-01

    The influences of cooking methods (steaming, pressure-cooking, microwaving, frying and boiling) on the total phenolic contents and antioxidant activities of the fruit bodies of Boletus mushrooms (B. aereus, B. badius, B. pinophilus and B. edulis) have been evaluated. The results showed that microwaving was better at retaining total phenolics than the other cooking methods, while boiling significantly decreased the contents of total phenolics in the samples under study. The effects of the different cooking methods on the phenolic acid profiles of Boletus mushrooms varied with both the species of mushroom and the cooking method. Effects of cooking treatments on the antioxidant activities of Boletus mushrooms were evaluated by in vitro assays of hydroxyl radical (OH·)-scavenging activity, reducing power and 1,1-diphenyl-2-picrylhydrazyl radical (DPPH·)-scavenging activity. Results indicated that the changes in antioxidant activities of the four Boletus mushrooms differed among the five cooking methods. This study could provide some information to encourage the food industry to recommend particular cooking methods.

  13. Numerical simulation of rarefied gas flow through a slit

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Jeng, Duen-Ren; De Witt, Kenneth J.; Chung, Chan-Hong

    1990-01-01

    Two different approaches, the finite-difference method coupled with the discrete-ordinate method (FDDO) and the direct-simulation Monte Carlo (DSMC) method, are used in the analysis of the flow of a rarefied gas from one reservoir to another through a two-dimensional slit. The cases considered are hard-vacuum downstream pressure, finite pressure ratios, and isobaric pressure with thermal diffusion, which are not well established in spite of the simplicity of the flow field. In the FDDO analysis, by employing the discrete-ordinate method, the Boltzmann equation simplified by a model collision integral is transformed into a set of partial differential equations that are continuous in physical space but are point functions in molecular velocity space. This set of partial differential equations is solved by means of a finite-difference approximation. In the DSMC analysis, three collision sampling techniques are used: the time counter (TC) method, the null collision (NC) method, and the no time counter (NTC) method.
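
    For orientation only, the following is a minimal one-dimensional sketch of the FDDO idea under simplifying assumptions that are not taken from the paper: a BGK-type model collision term with a fixed relaxation time, a uniform grid of discrete velocities in place of a proper quadrature, and a first-order explicit upwind finite-difference update; the grid sizes, time step, and boundary treatment are illustrative placeholders.

        import numpy as np

        # Model kinetic equation  df/dt + v df/dx = (f_eq - f) / tau  solved with
        # discrete ordinates in v and an explicit upwind finite difference in x.
        NX, NV = 100, 33                      # hypothetical space cells / discrete velocities
        L, TAU, DT = 1.0, 0.1, 1.0e-3
        x = np.linspace(0.0, L, NX)
        v = np.linspace(-4.0, 4.0, NV)        # discrete ordinates (thermal-speed units)
        dx, dv = x[1] - x[0], v[1] - v[0]
        maxwell = np.exp(-0.5 * v**2) / np.sqrt(2.0 * np.pi)

        # Initial state: density step between the two reservoirs, local Maxwellians.
        rho0 = np.where(x < 0.5 * L, 1.0, 0.1)
        f = rho0[:, None] * maxwell[None, :]

        def step(f):
            rho = f.sum(axis=1) * dv                       # density by quadrature over v
            f_eq = rho[:, None] * maxwell[None, :]         # isothermal equilibrium
            dfdx = np.zeros_like(f)
            pos, neg = v > 0.0, v <= 0.0                   # upwind directions
            dfdx[1:, pos] = (f[1:, pos] - f[:-1, pos]) / dx
            dfdx[:-1, neg] = (f[1:, neg] - f[:-1, neg]) / dx
            return f + DT * (-v[None, :] * dfdx + (f_eq - f) / TAU)

        for _ in range(2000):
            f = step(f)
        print("sampled density profile:", (f.sum(axis=1) * dv)[::10].round(3))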

  14. Consistency of extreme flood estimation approaches

    NASA Astrophysics Data System (ADS)

    Felder, Guido; Paquet, Emmanuel; Penot, David; Zischg, Andreas; Weingartner, Rolf

    2017-04-01

    Estimates of low-probability flood events are frequently used for the planning of infrastructure as well as for determining the dimensions of flood protection measures. There are several well-established methodical procedures to estimate low-probability floods. However, a global assessment of the consistency of these methods is difficult to achieve because the "true value" of an extreme flood is not observable. Nevertheless, a detailed comparison performed on a given case study provides useful information about the statistical and hydrological processes involved in the different methods. In this study, three different approaches for estimating low-probability floods are compared: a purely statistical approach (ordinary extreme value statistics), a statistical approach based on stochastic rainfall-runoff simulation (the SCHADEX method), and a deterministic approach (physically based PMF estimation). These methods are tested on two different Swiss catchments. The results and some intermediate variables are used to assess the potential strengths and weaknesses of each method, as well as to evaluate the consistency of the methods.
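
    As a pointer to what the purely statistical approach involves, the sketch below fits a generalized extreme value distribution to synthetic annual maxima and reads off a low-probability quantile; the data, return period, and distribution choice are illustrative assumptions, not values from the study.

        import numpy as np
        from scipy import stats

        # Hypothetical annual maximum discharges (m^3/s) for a single catchment.
        rng = np.random.default_rng(0)
        annual_maxima = rng.gumbel(loc=300.0, scale=80.0, size=60)

        # Ordinary extreme value statistics: fit a GEV distribution to the maxima.
        shape, loc, scale = stats.genextreme.fit(annual_maxima)

        # Return level for a chosen return period, e.g. the 1000-year flood
        # corresponds to a non-exceedance probability of 1 - 1/1000.
        return_period = 1000.0
        q_extreme = stats.genextreme.ppf(1.0 - 1.0 / return_period, shape, loc, scale)
        print(f"estimated {return_period:.0f}-year flood: {q_extreme:.1f} m^3/s")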

  15. Methods of verifying net carbon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClung, M.

    1996-10-01

    Problems currently exist with using net carbon as an industrial standard to gauge smelter performance. First, throughout the industry there are a number of different methods used for determining net carbon. Also, until recently there has not been a viable method to cross-check or predict change in net carbon. This inherently leads to differences, and most likely inaccuracies, when comparing the performances of different plants using a net carbon number. Ravenswood uses specific methods when calculating the net carbon balance. The R and D Carbon, Ltd. formula developed by Verner Fisher et al. to predict and cross-check net carbon based on baked carbon core analysis has been used successfully. Another method, based on raw material (coke and pitch) usage relative to the metal produced, is used as a cross-check. The combination of these methods gives a definitive representation of the carbon performance in the reduction cell. This report details the methods Ravenswood Aluminum uses and the information derived from them.

  16. The comparative analysis of the current-meter method and the pressure-time method used for discharge measurements in the Kaplan turbine penstocks

    NASA Astrophysics Data System (ADS)

    Adamkowski, A.; Krzemianowski, Z.

    2012-11-01

    The paper presents experience gathered during many years of using the current-meter and pressure-time methods for flow rate measurements in many hydropower plants. The integration techniques used in both methods differ from those recommended in the relevant international standards, mainly the graphical and arithmetical ones. The results of a comparative analysis of both methods, applied at the same time during the hydraulic performance tests of two Kaplan turbines in one of the Polish hydropower plants, are presented in the final part of the paper. In the case of the pressure-time method, the concrete penstocks of the tested turbines required installing special measuring instrumentation inside the penstock. The comparison has shown satisfactory agreement between the results of the discharge measurements obtained with the two methods: the maximum differences between the discharge values did not exceed 1.0% and the average differences were not greater than 0.5%.
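
    The basic integration underlying the pressure-time (Gibson) method can be sketched as follows; the formula used is the simplified textbook form Q0 ≈ (A / (ρ L)) ∫ Δp dt with friction and dynamic correction terms omitted, and the penstock geometry and pressure record are invented for illustration.

        import numpy as np

        def pressure_time_discharge(dp, t, area, length, rho=1000.0, q_leak=0.0):
            """Simplified pressure-time estimate of the initial discharge from the
            pressure-difference record dp(t) measured between two penstock sections
            a distance `length` apart while the flow is being cut off. Friction and
            dynamic correction terms are omitted in this sketch."""
            integral = np.sum(0.5 * (dp[1:] + dp[:-1]) * np.diff(t))   # trapezoidal rule
            return area / (rho * length) * integral + q_leak

        # Hypothetical 10 s gate-closure record with a bell-shaped pressure transient.
        t = np.linspace(0.0, 10.0, 1001)                  # s
        dp = 2.0e4 * np.exp(-((t - 5.0) / 1.5) ** 2)      # Pa
        print(pressure_time_discharge(dp, t, area=3.5, length=40.0))   # m^3/s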

  17. Genetic potential of common bean progenies selected for crude fiber content obtained through different breeding methods.

    PubMed

    Júnior, V A P; Melo, P G S; Pereira, H S; Bassinello, P Z; Melo, L C

    2015-05-29

    Gastrointestinal health is of great importance due to the increasing consumption of functional foods, especially diets rich in fiber. The common bean is valued as a nutritious food because of its appreciable fiber content and the fact that it is consumed in many countries. The current study aimed to evaluate and compare the genetic potential, for crude fiber content, of common bean progenies of the carioca group developed through different breeding methods. The progenies originated from hybridization of two advanced strains, CNFC 7812 and CNFC 7829, advanced up to the F7 generation using three breeding methods: bulk-population, bulk within F2 families, and single seed descent. Fifteen F8 progenies were evaluated for each method, as well as two check cultivars and both parents, using a 7 x 7 simple lattice design with experimental plots comprising two 4-m long rows. Field trials were conducted in eleven environments encompassing four Brazilian states and three different sowing times during 2009 and 2010. Estimates of genetic parameters indicate differences among the breeding methods, which seem to be related to the different processes for sampling the advanced progenies inherent to each method, given that the trait in question is not subject to natural selection. Variability among progenies occurred within all three breeding methods, and there was also a significant effect of environment on the progenies for all methods. Progenies developed by the bulk-population method attained the highest estimates of genetic parameters, had less interaction with the environment, and showed greater variability.

  18. Exploration of analysis methods for diagnostic imaging tests: problems with ROC AUC and confidence scores in CT colonography.

    PubMed

    Mallett, Susan; Halligan, Steve; Collins, Gary S; Altman, Doug G

    2014-01-01

    Different methods of evaluating diagnostic performance when comparing diagnostic tests may lead to different results. We compared two such approaches, sensitivity and specificity versus area under the receiver operating characteristic curve (ROC AUC), for the evaluation of CT colonography for the detection of polyps, either with or without computer-assisted detection. In a multireader, multicase study of 10 readers and 107 cases we compared sensitivity and specificity, using radiological reporting of the presence or absence of polyps, with ROC AUC calculated from confidence scores concerning the presence of polyps. Both methods were assessed against a reference standard. Here we focus on five readers, selected to illustrate issues in design and analysis. We compared diagnostic measures within readers, showing that differences in results are due to the statistical methods. Reader performance varied widely depending on whether sensitivity and specificity or ROC AUC was used. There were problems using confidence scores: in assigning scores to all cases; in the use of zero scores when no polyps were identified; in the bimodal, non-normal distribution of scores; in fitting ROC curves, due to extrapolation beyond the study data; and in the undue influence of a few false-positive results. Variation due to the use of different ROC methods exceeded the differences between test results for ROC AUC. The confidence scores recorded in our study violated many assumptions of ROC AUC methods, rendering these methods inappropriate. The problems we identified will apply to other detection studies using confidence scores. We found sensitivity and specificity to be a more reliable and clinically appropriate method for comparing diagnostic tests.
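
    The two analysis routes being contrasted can be sketched minimally as below, on invented per-case data for a single hypothetical reader: sensitivity and specificity are computed from the binary polyp reports, while ROC AUC is computed from the confidence scores (note the many zero scores for cases in which nothing was reported, one of the issues raised in the record).

        import numpy as np
        from sklearn.metrics import roc_auc_score

        # Invented data for one reader: 1 = polyp present (reference standard).
        truth      = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
        reported   = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0])      # binary radiological report
        confidence = np.array([90, 70, 55, 0, 40, 0, 0, 5, 0, 0])  # 0 when nothing identified

        # Sensitivity / specificity from the binary reports.
        tp = np.sum((reported == 1) & (truth == 1))
        tn = np.sum((reported == 0) & (truth == 0))
        sensitivity = tp / np.sum(truth == 1)
        specificity = tn / np.sum(truth == 0)

        # ROC AUC from the confidence scores.
        auc = roc_auc_score(truth, confidence)
        print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} AUC={auc:.2f}")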

  19. Alaska: Beaufort Sea

    Atmospheric Science Data Center

    2014-05-15

    Images from the Multi-angle Imaging SpectroRadiometer (MISR), acquired on March 19, 2001, illustrate different methods that may be used to assess sea ice type in the Beaufort Sea. Project: MISR.

  20. Does the Classroom Delivery Method Make a Difference?

    ERIC Educational Resources Information Center

    Bunn, Esther; Fischer, Mary; Marsh, Treba

    2014-01-01

    This study seeks to determine whether a difference exists in student performance and participation between an online and a face-to-face Accounting Intermediate I class taught by the same professor. Even though students self-selected the course section in which to enroll, no significant difference was found to exist between the delivery methods of the two courses…
