Sample records for methods give comparable

  1. Comparison of individual-based model output to data using a model of walleye pollock early life history in the Gulf of Alaska

    NASA Astrophysics Data System (ADS)

    Hinckley, Sarah; Parada, Carolina; Horne, John K.; Mazur, Michael; Woillez, Mathieu

    2016-10-01

    Biophysical individual-based models (IBMs) have been used to study aspects of the early life history of marine fishes such as recruitment, connectivity of spawning and nursery areas, and marine reserve design. However, there is no consistent approach to validating the spatial outputs of these models. In this study, we aim to rectify this gap. We document additions to an existing individual-based biophysical model for Alaska walleye pollock (Gadus chalcogrammus), some simulations made with this model, and methods that were used to describe and compare spatial output of the model versus field data derived from ichthyoplankton surveys in the Gulf of Alaska. We used visual methods (e.g. distributional centroids with directional ellipses), several indices (such as a Normalized Difference Index (NDI) and an Overlap Coefficient (OC)), and several statistical methods: the Syrjala method, the Getis-Ord Gi* statistic, and a geostatistical method for comparing spatial indices. We assess the utility of these different methods in analyzing spatial output and comparing model output to data, and give recommendations for their appropriate use. Visual methods are useful for initial comparisons of model and data distributions. Metrics such as the NDI and OC give useful measures of co-location and overlap, but care must be taken in discretizing the fields into bins. The Getis-Ord Gi* statistic is useful for determining the patchiness of the fields. The Syrjala method is an easily implemented statistical measure of the difference between the fields, but does not give information on the details of the distributions. Finally, the geostatistical comparison of spatial indices gives good information on the details of the distributions and on whether they differ significantly between the model and the data. We conclude that each technique gives quite different information about the model-data distribution comparison, and that some are easy to apply while others are more complex. We also give recommendations for a multistep process to validate spatial output from IBMs.
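A minimal sketch of the binned overlap metrics mentioned above, using assumed (not the authors') formulas: the Overlap Coefficient is taken as the sum of per-bin minima of the two normalized fields, and the Normalized Difference Index as the normalized sum of absolute bin differences:

```python
import numpy as np

def normalize(field):
    """Convert a binned abundance field to proportions summing to 1."""
    field = np.asarray(field, dtype=float)
    return field / field.sum()

def overlap_coefficient(model, data):
    """Assumed OC: sum of per-bin minima of normalized fields (1 = identical)."""
    p, q = normalize(model), normalize(data)
    return np.minimum(p, q).sum()

def normalized_difference_index(model, data):
    """Assumed NDI: sum|p - q| / sum(p + q); 0 = identical, 1 = disjoint."""
    p, q = normalize(model), normalize(data)
    return np.abs(p - q).sum() / (p + q).sum()

# Two toy 4x4 gridded fields (e.g. larval counts per survey cell)
model = np.array([[4, 2, 0, 0],
                  [2, 4, 2, 0],
                  [0, 2, 4, 2],
                  [0, 0, 2, 4]])
data = np.array([[2, 4, 2, 0],
                 [0, 2, 4, 2],
                 [0, 0, 2, 4],
                 [0, 0, 0, 2]])
print(overlap_coefficient(model, data))
print(normalized_difference_index(model, data))
```

Both metrics depend on the bin grid, which is why the abstract cautions about discretization.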

  2. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  3. Application of work sampling technique to analyze logging operations.

    Treesearch

    Edwin S. Miyata; Helmuth M. Steinhilb; Sharon A. Winsauer

    1981-01-01

    Discusses the advantages and disadvantages of various time-study methods for determining efficiency and productivity in logging. Compares the work sampling method with the continuous time-study method. Gives the feasibility, capabilities, and limitations of the work sampling method.

  4. Measurement System Analyses - Gauge Repeatability and Reproducibility Methods

    NASA Astrophysics Data System (ADS)

    Cepova, Lenka; Kovacikova, Andrea; Cep, Robert; Klaput, Pavel; Mizera, Ondrej

    2018-02-01

    The submitted article focuses on a detailed explanation of the average and range method (the Automotive Industry Action Group, Measurement System Analysis approach) and of the honest Gauge Repeatability and Reproducibility method (the Evaluating the Measurement Process approach). The measured data (thickness of plastic parts) were evaluated by both methods and their results were compared on the basis of numerical evaluation. The two methods were additionally compared and their advantages and disadvantages discussed. One difference between the methods is the calculation of variation components: the AIAG method calculates the variation components from standard deviations (so the variation components do not sum to 100 %), whereas the honest GRR study calculates them from variances, where the sum of all variation components (part-to-part variation, EV & AV) gives 100 % of the total variation. Acceptance of both methods within the professional community, future use, and acceptance by the manufacturing industry were also discussed. Nowadays, the AIAG method is the leading one in industry.
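The difference between the two percentage schemes can be illustrated with a short sketch; the variance components below are hypothetical, not from the study:

```python
import math

# Hypothetical variance components from a gauge study (squared units):
EV = 0.0016   # equipment variation (repeatability)
AV = 0.0009   # appraiser variation (reproducibility)
PV = 0.0100   # part-to-part variation
TV = EV + AV + PV   # total variance

# Variance-based %contribution (honest GRR / EMP style): sums to 100 %
contrib = {k: 100 * v / TV for k, v in dict(EV=EV, AV=AV, PV=PV).items()}
print(contrib, sum(contrib.values()))

# Std-dev-based %StudyVar (AIAG style): sigma / sigma_total * 100, does NOT sum to 100
studyvar = {k: 100 * math.sqrt(v / TV) for k, v in dict(EV=EV, AV=AV, PV=PV).items()}
print(studyvar, sum(studyvar.values()))
```

Because standard deviations do not add while variances do, the AIAG-style percentages here total well over 100 %.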

  5. COMPARISON OF METHODS FOR MEASURING CONCENTRATIONS OF SEMIVOLATILE PARTICULATE MATTER

    EPA Science Inventory

    The paper gives results of a comparison of methods for measuring concentrations of semivolatile particulate matter (PM) from indoor-environment, small, combustion sources. Particle concentration measurements were compared for methods using filters and a small electrostatic precip...

  6. Determining the optimal number of Kanban in multi-products supply chain system

    NASA Astrophysics Data System (ADS)

    Widyadana, G. A.; Wee, H. M.; Chang, Jer-Yuan

    2010-02-01

    Kanban, a key element of the just-in-time system, is a re-order card or signboard giving an instruction or triggering the pull system to manufacture or supply a component based on actual usage of material. There are two types of Kanban: production Kanban and withdrawal Kanban. This study uses optimal and meta-heuristic methods to determine the Kanban quantity and withdrawal lot sizes in a supply chain system. Although the mixed integer programming (MIP) method gives an optimal solution, it is not time efficient. For this reason, meta-heuristic methods are suggested. In this study, a genetic algorithm (GA) and a hybrid of genetic algorithm and simulated annealing (GASA) are used. The study compares the performance of GA and GASA with that of the optimal method using MIP. The test problems show that both GA and GASA result in near-optimal solutions, and they outdo the optimal method in terms of run time. In addition, the GASA heuristic method gives a better performance than the GA heuristic method.
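As a rough, hypothetical illustration of the GA approach (the cost model, parameters, and demand figures below are invented, not the authors'), a minimal genetic algorithm searching integer Kanban counts might look like:

```python
import random
random.seed(1)

# Hypothetical per-product minimum Kanban counts implied by demand
demand_min = [3, 5, 2]
HOLD, SHORT = 2.0, 10.0   # toy holding and shortage cost rates

def cost(k):
    """Toy cost: holding cost per Kanban, plus a shortage penalty below the minimum."""
    return sum(HOLD * ki + (SHORT * (m - ki) if ki < m else 0.0)
               for ki, m in zip(k, demand_min))

def mutate(k):
    """Shift one product's Kanban count by +/-1 (never below 1)."""
    k = list(k)
    i = random.randrange(len(k))
    k[i] = max(1, k[i] + random.choice([-1, 1]))
    return k

def crossover(a, b):
    """One-point crossover of two Kanban-count vectors."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(1, 10) for _ in demand_min] for _ in range(30)]
for _ in range(100):
    pop.sort(key=cost)
    elite = pop[:10]                      # elitism: keep the 10 cheapest
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]
best = min(pop, key=cost)
print(best, cost(best))
```

The MIP formulation would solve this exactly; the GA trades guaranteed optimality for run time, as the abstract notes.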

  7. Minimum Description Length Block Finder, a Method to Identify Haplotype Blocks and to Compare the Strength of Block Boundaries

    PubMed Central

    Mannila, H.; Koivisto, M.; Perola, M.; Varilo, T.; Hennah, W.; Ekelund, J.; Lukk, M.; Peltonen, L.; Ukkonen, E.

    2003-01-01

    We describe a new probabilistic method for finding haplotype blocks that is based on the use of the minimum description length (MDL) principle. We give a rigorous definition of the quality of a segmentation of a genomic region into blocks and describe a dynamic programming algorithm for finding the optimal segmentation with respect to this measure. We also describe a method for finding the probability of a block boundary for each pair of adjacent markers: this gives a tool for evaluating the significance of each block boundary. We have applied the method to the published data of Daly and colleagues. The results expose some problems that exist in the current methods for the evaluation of the significance of predicted block boundaries. Our method, MDL block finder, can be used to compare block borders in different sample sets, and we demonstrate this by applying the MDL-based method to define the block structure in chromosomes from population isolates. PMID:12761696
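The dynamic programming idea described above can be sketched generically; the stand-in block cost below (a per-block penalty plus the within-block sum of squared deviations) is an illustrative assumption, not the actual MDL code length used by the authors:

```python
def optimal_segmentation(xs, block_cost):
    """O(n^2) DP: best[j] = min over i < j of best[i] + block_cost(xs[i:j])."""
    n = len(xs)
    best = [0.0] + [float("inf")] * n
    back = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + block_cost(xs[i:j])
            if c < best[j]:
                best[j], back[j] = c, i
    # Recover block boundaries by walking the backpointers
    bounds, j = [], n
    while j > 0:
        bounds.append((back[j], j))
        j = back[j]
    return best[n], list(reversed(bounds))

def sse_cost(block, penalty=2.0):
    """Stand-in cost: fixed per-block penalty plus within-block squared error."""
    m = sum(block) / len(block)
    return penalty + sum((x - m) ** 2 for x in block)

data = [0, 0, 1, 0, 5, 5, 6, 5, 0, 1, 0]
total, blocks = optimal_segmentation(data, sse_cost)
print(total, blocks)
```

With a true MDL cost in place of `sse_cost`, the same recursion yields the globally optimal segmentation the abstract describes.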

  8. Minimum description length block finder, a method to identify haplotype blocks and to compare the strength of block boundaries.

    PubMed

    Mannila, H; Koivisto, M; Perola, M; Varilo, T; Hennah, W; Ekelund, J; Lukk, M; Peltonen, L; Ukkonen, E

    2003-07-01

    We describe a new probabilistic method for finding haplotype blocks that is based on the use of the minimum description length (MDL) principle. We give a rigorous definition of the quality of a segmentation of a genomic region into blocks and describe a dynamic programming algorithm for finding the optimal segmentation with respect to this measure. We also describe a method for finding the probability of a block boundary for each pair of adjacent markers: this gives a tool for evaluating the significance of each block boundary. We have applied the method to the published data of Daly and colleagues. The results expose some problems that exist in the current methods for the evaluation of the significance of predicted block boundaries. Our method, MDL block finder, can be used to compare block borders in different sample sets, and we demonstrate this by applying the MDL-based method to define the block structure in chromosomes from population isolates.

  9. Comparing the Effectiveness of Peer Instruction to Individual Learning during a Chromatography Course

    ERIC Educational Resources Information Center

    Morice, J.; Michinov, N.; Delaval, M.; Sideridou, A.; Ferrières, V.

    2015-01-01

    Peer instruction has been recognized as an instructional method having a positive impact on learning compared to traditional lectures in science. This method has been widely supported by the socio-constructivist approach to learning giving a positive role to interaction between peers in the construction of knowledge. As far as we know, no study…

  10. Toward a Definition of the Engineering Method.

    ERIC Educational Resources Information Center

    Koen, Billy Vaughn

    1984-01-01

    Defines the engineering method by: (1) giving a preliminary definition and examples of its essential term (heuristics); (2) comparing the definition to a popular alternative; and (3) presenting a simple form of the definition. This definition states that the engineering method is the use of engineering heuristics. (JN)

  11. Simple and sensitive method for the quantification of total bilirubin in human serum using 3-methyl-2-benzothiazolinone hydrazone hydrochloride as a chromogenic probe

    NASA Astrophysics Data System (ADS)

    Nagaraja, Padmarajaiah; Avinash, Krishnegowda; Shivakumar, Anantharaman; Dinesh, Rangappa; Shrestha, Ashwinee Kumar

    2010-11-01

    We describe a new spectrophotometric method for measuring total bilirubin in serum. The method is based on the cleavage of bilirubin to give formaldehyde, which further reacts with diazotized 3-methyl-2-benzothiazolinone hydrazone hydrochloride to give a blue-colored solution with maximum absorbance at 630 nm. The sensitivity of the developed method was compared with the Jendrassik-Grof assay procedure, and its applicability was tested with human serum samples. Good correlation was attained between the two methods, with a slope of 0.994, an intercept of 0.015, and R2 = 0.997. Beer's law was obeyed in the range 0.068-17.2 μM with good linearity, absorbance y = 0.044 Cbil + 0.003. The relative standard deviation was 0.006872; within-day precision ranged from 0.3 to 1.2% and day-to-day precision from 1 to 6%. Recovery of the method varied from 97 to 102%. The proposed method has higher sensitivity with less interference. The obtained product was extracted and spectrally characterized for structural confirmation by FT-IR and 1H NMR.
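Using the calibration line reported above (A = 0.044 Cbil + 0.003, linear over 0.068-17.2 μM), recovering a concentration from an absorbance reading can be sketched as:

```python
SLOPE, INTERCEPT = 0.044, 0.003   # reported fit: A = 0.044 * C_bil + 0.003
LOW, HIGH = 0.068, 17.2           # reported Beer's-law (linear) range, micromolar

def bilirubin_concentration(absorbance_630nm):
    """Invert the calibration line; reject readings outside the validated range."""
    c = (absorbance_630nm - INTERCEPT) / SLOPE
    if not (LOW <= c <= HIGH):
        raise ValueError(f"concentration {c:.3f} uM outside linear range")
    return c

print(bilirubin_concentration(0.223))  # (0.223 - 0.003) / 0.044 = 5.0 uM
```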

  12. Diffusion blotting: a rapid and simple method for production of multiple blots from a single gel.

    PubMed

    Olsen, Ingrid; Wiker, Harald G

    2015-01-01

    A very simple and fast method for diffusion blotting of proteins from precast SDS-PAGE gels on a solid plastic support was developed. Diffusion blotting for 3 min gives a quantitative transfer of 10 % compared to 1-h electroblotting. For each subsequent blot from the same gel a doubling of transfer time is necessary to obtain the same amount of protein onto each blot. High- and low-molecular-weight components are transferred equally efficiently when compared to electroblotting. However, both methods do give a higher total transfer of the low-molecular-weight proteins compared to the large proteins. The greatest advantage of diffusion blotting is that several blots can be made from each lane, thus enabling testing of multiple antisera on virtually identical blots. The gel remains on the plastic support, which prevents it from stretching or shrinking. This ensures identical blots and facilitates more reliable molecular weight determination. Furthermore the proteins remaining in the gel can be stained with Coomassie Brilliant Blue or other methods for exact and easy comparison with the developed blots. These advantages make diffusion blotting the method of choice when quantitative protein transfer is not required.

  13. An argument for the use of Aristotelian method in bioethics.

    PubMed

    Allmark, Peter

    2006-01-01

    The main claim of this paper is that the method outlined and used in Aristotle's Ethics is an appropriate and credible one to use in bioethics. Here "appropriate" means that the method is capable of establishing claims and developing concepts in bioethics, and "credible" that the method has some plausibility; that is, it is not open to obvious and immediate objection. It begins by suggesting why this claim matters and then gives a brief outline of Aristotle's method. The main argument is made in three stages. First, it is argued that Aristotelian method is credible because it compares favourably with alternatives. In this section it is shown that Aristotelian method is not vulnerable to criticisms that are made both of methods that give a primary place to moral theory (such as utilitarianism) and of those that eschew moral theory (such as casuistry and social science approaches). As such, it compares favourably with these other approaches, which are vulnerable to at least some of these criticisms. Second, the appropriateness of Aristotelian method is indicated by outlining how it would deal with a particular case. Finally, it is argued that the success of Aristotle's philosophy is suggestive of both the credibility and appropriateness of his method.

  14. Comparing Videotapes and Written Narrative Records of Second Grade Reading Classes: Selecting Methods for Particular Observational Goals.

    ERIC Educational Resources Information Center

    Gardner, C. H.; And Others

    The classroom behaviors recorded during three second grade reading lessons provide suitable evidence for comparing the relative merits of using narrative observations versus videotapes as data collection techniques. The comparative analysis illustrates the detail and precision of videotape. Primarily, videotape gives a true picture of linear time,…

  15. Direct imaging of small scatterers using reduced time dependent data

    NASA Astrophysics Data System (ADS)

    Cakoni, Fioralba; Rezac, Jacob D.

    2017-06-01

    We introduce qualitative methods for locating small objects using time dependent acoustic near field waves. These methods have reduced data collection requirements compared to typical qualitative imaging techniques. In particular, we only collect scattered field data in a small region surrounding the location from which an incident field was transmitted. The new methods are partially theoretically justified and numerical simulations demonstrate their efficacy. We show that these reduced data techniques give comparable results to methods which require full multistatic data and that these time dependent methods require less scattered field data than their time harmonic analogs.

  16. Margin-maximizing feature elimination methods for linear and nonlinear kernel-based discriminant functions.

    PubMed

    Aksu, Yaman; Miller, David J; Kesidis, George; Yang, Qing X

    2010-05-01

    Feature selection for classification in high-dimensional spaces can improve generalization, reduce classifier complexity, and identify important, discriminating feature "markers." For support vector machine (SVM) classification, a widely used technique is recursive feature elimination (RFE). We demonstrate that RFE is not consistent with margin maximization, central to the SVM learning approach. We thus propose explicit margin-based feature elimination (MFE) for SVMs and demonstrate both improved margin and improved generalization, compared with RFE. Moreover, for the case of a nonlinear kernel, we show that RFE assumes that the squared weight vector 2-norm is strictly decreasing as features are eliminated. We demonstrate this is not true for the Gaussian kernel and, consequently, RFE may give poor results in this case. MFE for nonlinear kernels gives better margin and generalization. We also present an extension which achieves further margin gains, by optimizing only two degrees of freedom--the hyperplane's intercept and its squared 2-norm--with the weight vector orientation fixed. We finally introduce an extension that allows margin slackness. We compare against several alternatives, including RFE and a linear programming method that embeds feature selection within the classifier design. On high-dimensional gene microarray data sets, University of California at Irvine (UCI) repository data sets, and Alzheimer's disease brain image data, MFE methods give promising results.

  17. Metal–organic complexation in the marine environment

    PubMed Central

    Luther, George W; Rozan, Timothy F; Witter, Amy; Lewis, Brent

    2001-01-01

    We discuss the voltammetric methods that are used to assess metal–organic complexation in seawater. These consist of titration methods using anodic stripping voltammetry (ASV) and cathodic stripping voltammetry competitive ligand experiments (CSV-CLE). These approaches and a kinetic approach using CSV-CLE give similar information on the amount of excess ligand to metal in a sample and the conditional metal ligand stability constant for the excess ligand bound to the metal. CSV-CLE data using different ligands to measure Fe(III) organic complexes are similar. All these methods give conditional stability constants for which the side reaction coefficient for the metal can be corrected but not that for the ligand. Another approach, pseudovoltammetry, provides information on the actual metal–ligand complex(es) in a sample by doing ASV experiments where the deposition potential is varied more negatively in order to destroy the metal–ligand complex. This latter approach gives concentration information on each actual ligand bound to the metal as well as the thermodynamic stability constant of each complex in solution when compared to known metal–ligand complexes. In this case the side reaction coefficients for the metal and ligand are corrected. Thus, this method may not give identical information to the titration methods because the excess ligand in the sample may not be identical to some of the actual ligands binding the metal in the sample. PMID:16759421

  18. A theory of meaning of caregiving for parents of mentally ill children in Taiwan, a qualitative study.

    PubMed

    Yen, Wen-Jiuan; Teng, Ching-Hwa; Huang, Xuan-Yi; Ma, Wei-Fen; Lee, Sheuan; Tseng, Hsiu-Chih

    2010-01-01

    The aim of this study is to generate a theory of the meaning of care-giving for parents of mentally ill children in Taiwan. Studies indicate that the meaning of care-giving plays an important role in the psychological adjustment of care-givers to care-giving. With a positive meaning of care-giving, care-givers can accept their roles and adapt to them more readily. The research employs the qualitative method of grounded theory; the inquiry is based on symbolic interactionism. Twenty parental care-givers of children with schizophrenia were recruited at a private hospital in central Taiwan. Semi-structured interviews were conducted. A comparative method was used to analyse the text and field notes. Responsibility (zeren) emerges as the core category or concept. Responsibility broadly expresses the behavioural principles that are culturally prescribed and centred on familial ethics and values. Related concepts and principles that influence caregiver actions and affections include a return of karma, challenges from local gods, and fate. By maintaining their culturally prescribed interpretations of care-giving, parents hope to give care indefinitely without complaint. The findings clearly suggest that the meaning of care-giving is determined through a process of internal debate that is shaped by culturally specific concepts. The paper attempts to explain some of these culturally specific determinants and explanations of care-giving behaviour. The theory contributes knowledge about the meaning of care-giving for parents of mentally ill children in Taiwan. It should be a useful reference for mental health professionals who provide counselling services to ethnically Taiwanese care-givers.

  19. Linear regression analysis for comparing two measurers or methods of measurement: but which regression?

    PubMed

    Ludbrook, John

    2010-07-01

    1. There are two reasons for wanting to compare measurers or methods of measurement. One is to calibrate one method or measurer against another; the other is to detect bias. Fixed bias is present when one method gives higher (or lower) values across the whole range of measurement. Proportional bias is present when one method gives values that diverge progressively from those of the other. 2. Linear regression analysis is a popular method for comparing methods of measurement, but the familiar ordinary least squares (OLS) method is rarely acceptable. The OLS method requires that the x values are fixed by the design of the study, whereas it is usual that both y and x values are free to vary and are subject to error. In this case, special regression techniques must be used. 3. Clinical chemists favour techniques such as major axis regression ('Deming's method'), the Passing-Bablok method or the bivariate least median squares method. Other disciplines, such as allometry, astronomy, biology, econometrics, fisheries research, genetics, geology, physics and sports science, have their own preferences. 4. Many Monte Carlo simulations have been performed to try to decide which technique is best, but the results are almost uninterpretable. 5. I suggest that pharmacologists and physiologists should use ordinary least products regression analysis (geometric mean regression, reduced major axis regression): it is versatile, can be used for calibration or to detect bias and can be executed by hand-held calculator or by using the loss function in popular, general-purpose, statistical software.
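A minimal sketch of the ordinary least products (geometric mean, reduced major axis) regression recommended above; the paired method readings are invented for illustration:

```python
def olp_regression(x, y):
    """Ordinary least products (geometric mean / reduced major axis) regression.

    slope = sign(cov(x, y)) * sd(y) / sd(x); intercept = mean(y) - slope * mean(x).
    A slope differing from 1 suggests proportional bias; an intercept differing
    from 0 suggests fixed bias between the two measurers/methods.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = (syy / sxx) ** 0.5 * (1 if sxy >= 0 else -1)
    return slope, my - slope * mx

# Hypothetical paired readings from two measurement methods
a = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [1.1, 2.0, 3.2, 3.9, 5.3]
slope, intercept = olp_regression(a, b)
print(slope, intercept)
```

Unlike ordinary least squares, this treats x and y symmetrically, which is why it suits the method-comparison setting where both variables carry error.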

  20. Comparison Of Methods Used In Cartography For The Skeletonisation Of Areal Objects

    NASA Astrophysics Data System (ADS)

    Szombara, Stanisław

    2015-12-01

    The article presents a method for comparing skeletonisation methods for areal objects. The skeleton of an areal object, being its linear representation, is used, among others, in cartographic visualisation. The method allows any skeletonisation methods to be compared in terms of, on the one hand, the deviations of the distance differences between the object's skeleton and its border and, on the other, the distortions introduced by skeletonisation. Five methods were compared: Voronoi diagrams, densified Voronoi diagrams, constrained Delaunay triangulation, Straight Skeleton, and Medial Axis (Transform). The results of the comparison are presented using several areal objects as examples. The comparison showed that in all the analysed objects the Medial Axis (Transform) gives the smallest distortion and deviation values, which allows us to recommend it.

  1. [A comparative evaluation of the methods for determining nitrogen dioxide in an industrial environment].

    PubMed

    Panev, T

    1991-01-01

    The present work aims to make a comparative evaluation of different types of detector tubes (analysis, long-term, and passive) for determining NO2, comparing the results with those obtained by the spectrophotometric method using Saltzman's reagent. Studies were performed in the hall of a garage for the repair of diesel buses during one working shift. The results indicate that the analysing tubes for NO2 agree well with the spectrophotometric method. The average-shift concentrations of NO2 measured by the long-term and passive tubes are comparable to the average values obtained with the analytical tubes and with the analytical method.

  2. Measuring ammonia concentrations and emissions from agricultural land and liquid surfaces: a review.

    PubMed

    Shah, Sanjay B; Westerman, Philip W; Arogo, Jactone

    2006-07-01

    Aerial ammonia concentrations (Cg) are measured using acid scrubbers, filter packs, denuders, or optical methods. Using Cg and wind speed or airflow rate, ammonia emission rate or flux can be directly estimated using enclosures or micrometeorological methods. Using nitrogen (N) recovery is not recommended, mainly because the different gaseous N components cannot be separated. Although low cost and replicable, chambers modify environmental conditions and are suitable only for comparing treatments. Wind tunnels do not modify environmental conditions as much as chambers, but they may not be appropriate for determining ammonia fluxes; however, they can be used to compare emissions and test models. Larger wind tunnels that also simulate natural wind profiles may be more useful for comparing treatments than micrometeorological methods because the latter require larger plots and are, thus, difficult to replicate. For determining absolute ammonia flux, the micrometeorological methods are the most suitable because they are nonintrusive. For use with micrometeorological methods, both the passive denuders and optical methods give comparable accuracies, although the latter give real-time Cg but at a higher cost. The passive denuder is wind weighted and also costs less than forced-air Cg measurement methods, but it requires calibration. When ammonia contamination during sample preparation and handling is a concern and separating the gas-phase ammonia and aerosol ammonium is not required, the scrubber is preferred over the passive denuder. The photothermal interferometer, because of its low detection limit and robustness, may hold potential for use in agriculture, but it requires evaluation. With its simpler theoretical basis and fewer restrictions, the integrated horizontal flux (IHF) method is preferable over other micrometeorological methods, particularly for lagoons, where berms and land-lagoon boundaries modify wind flow and flux gradients. With uniform wind flow, the ZINST method requiring measurement at one predetermined height may perform comparably to the IHF method but at a lower cost.
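The IHF method mentioned above can be sketched numerically: the emission flux is the height-integrated product of wind speed and concentration excess, divided by the upwind fetch. All profile values below are invented for illustration:

```python
# Measurement heights (m), wind speeds (m/s), and NH3 concentration
# excess over background (ug/m^3) - hypothetical profile data
z = [0.4, 0.8, 1.6, 3.2]
u = [1.5, 2.0, 2.6, 3.1]
c = [120.0, 60.0, 25.0, 8.0]
fetch = 50.0   # upwind fetch distance x (m)

def trapz(ys, xs):
    """Trapezoidal integration of ys over xs."""
    return sum((ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i]) / 2
               for i in range(len(xs) - 1))

# IHF: F = (1 / x) * integral of u(z) * c(z) dz
horizontal_flux = trapz([ui * ci for ui, ci in zip(u, c)], z)  # ug m^-1 s^-1
emission_flux = horizontal_flux / fetch                         # ug m^-2 s^-1
print(emission_flux)
```

In practice the integration runs from the surface to the height where the concentration excess vanishes; this sketch only covers the measured heights.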

  3. Ab initio phonon thermal transport in monolayer InSe, GaSe, GaS, and alloys

    NASA Astrophysics Data System (ADS)

    Pandey, Tribhuwan; Parker, David S.; Lindsay, Lucas

    2017-11-01

    We compare vibrational properties and phonon thermal conductivities (κ) of monolayer InSe, GaSe, and GaS systems using density functional theory and Peierls-Boltzmann transport methods. In going from InSe to GaSe to GaS, system mass decreases, giving both increasing acoustic phonon velocities and decreasing scattering of these heat-carrying modes with optic phonons, ultimately giving κ(InSe) < κ(GaSe) < κ(GaS). This behavior is demonstrated by correlating the scattering phase space limited by fundamental conservation conditions with mode scattering rates and phonon dispersions for each material. We also show that, unlike flat monolayer systems such as graphene, in InSe, GaSe, and GaS thermal transport is governed by in-plane vibrations. Alloying of InSe, GaSe, and GaS systems provides an effective method for modulating their κ through intrinsic vibrational modifications and phonon scattering from mass disorder, giving reductions of ~2-3.5 times. This disorder also suppresses phonon mean free paths in the alloy systems compared to those in their crystalline counterparts. This work provides fundamental insights into lattice thermal transport from basic vibrational properties for an interesting set of two-dimensional materials.

  4. Solar radiation over Egypt: Comparison of predicted and measured meteorological data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamel, M.A.; Shalaby, S.A.; Mostafa, S.S.

    1993-06-01

    Measurements of global solar irradiance on a horizontal surface at five meteorological stations in Egypt for the three years 1987, 1988, and 1989 are compared with their corresponding values computed by two independent methods. The first method is based on the Angstrom formula, which correlates relative solar irradiance H/H0 to the corresponding relative duration of bright sunshine n/N. Regional regression coefficients are obtained and used for prediction of global solar irradiance. Good agreement with measurements is obtained. The second method employs an empirical relation that takes sunshine duration and the noon altitude of the sun as inputs, together with an appropriate choice of zone parameters; this also gives good agreement with the measurements. Comparison shows that the first method gives a better fit to the experimental data.
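The Angstrom formula described above, H/H0 = a + b(n/N), can be sketched as follows; the regression coefficients and inputs are hypothetical, not the regional values derived in the study:

```python
A, B = 0.25, 0.50   # hypothetical regional regression coefficients

def global_irradiance(H0, n, N):
    """Angstrom formula: H = H0 * (a + b * n/N).

    H0 = extraterrestrial irradiance, n = actual bright-sunshine hours,
    N = maximum possible sunshine hours (day length).
    """
    return H0 * (A + B * n / N)

# e.g. H0 = 30 MJ m^-2 day^-1, 9 h of sunshine out of a possible 12 h
print(global_irradiance(30.0, 9.0, 12.0))  # 30 * (0.25 + 0.5 * 0.75) = 18.75
```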

  5. Quantitative Analysis of Ca, Mg, and K in the Roots of Angelica pubescens f. biserrata by Laser-Induced Breakdown Spectroscopy Combined with Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Wang, J.; Shi, M.; Zheng, P.; Xue, Sh.; Peng, R.

    2018-03-01

    Laser-induced breakdown spectroscopy has been applied for the quantitative analysis of Ca, Mg, and K in the roots of Angelica pubescens Maxim. f. biserrata Shan et Yuan used in traditional Chinese medicine. The Ca II 317.993 nm, Mg I 517.268 nm, and K I 769.896 nm spectral lines were chosen to set up calibration models for the analysis using the external standard and artificial neural network methods. The linear correlation coefficients of the predicted versus standard concentrations for six samples determined by the artificial neural network method are 0.9896, 0.9945, and 0.9911 for Ca, Mg, and K, respectively, which are better than those for the external standard method. The artificial neural network method also performs better than the external standard method in terms of the average and maximum relative errors, average relative standard deviations, and most maximum relative standard deviations of the predicted concentrations of Ca, Mg, and K in the six samples. Finally, it is shown that the artificial neural network method gives better performance than the external standard method for the quantitative analysis of Ca, Mg, and K in the roots of Angelica pubescens.

  6. Factors Associated with Student Pharmacist Philanthropy to the College Before and After Graduation

    PubMed Central

    Spivey, Christina A.

    2015-01-01

    Objective. To examine the early stages of a student giving program, to evaluate the program’s influence on college donations, and to evaluate other factors that may affect student and year-one postgraduation giving at a college or school of pharmacy. Methods. A review of student records for graduates of 2013 and 2014 was conducted. Data included student characteristics, scholarship support, international program participation, senior class gift participation, and postgraduation giving. Mann-Whitney U, Kruskal-Wallis, Wilcoxon signed-rank, and Chi-square analyses were performed. Results. Of 273 graduates, the majority were female (57.1%) and white (74%). Class of 2014 contributed a higher amount to the class gift than the prior class. In 2014, those who received scholarships gave higher amounts to the class gift. For the combined classes, there was an association between the number of students who gave pregraduation and the number who gave postgraduation. In the 2013 class, individuals gave greater amounts postgraduation compared to pregraduation, and a higher percentage of alumni (8%) gave postgraduation compared to alumni from the previous class (<1%). Conclusion. Participation in year-one postgraduation giving increased after implementation of the senior class gift program. Receiving scholarships influenced giving to the class gift but was not associated with postgraduation giving. Future studies are needed to develop a more comprehensive understanding of student and alumni philanthropy. PMID:27168615

  7. Factors Associated with Student Pharmacist Philanthropy to the College Before and After Graduation.

    PubMed

    Chisholm-Burns, Marie A; Spivey, Christina A

    2015-09-25

    Objective. To examine the early stages of a student giving program, to evaluate the program's influence on college donations, and to evaluate other factors that may affect student and year-one postgraduation giving at a college or school of pharmacy. Methods. A review of student records for graduates of 2013 and 2014 was conducted. Data included student characteristics, scholarship support, international program participation, senior class gift participation, and postgraduation giving. Mann-Whitney U, Kruskal-Wallis, Wilcoxon signed-rank, and Chi-square analyses were performed. Results. Of 273 graduates, the majority were female (57.1%) and white (74%). Class of 2014 contributed a higher amount to the class gift than the prior class. In 2014, those who received scholarships gave higher amounts to the class gift. For the combined classes, there was an association between the number of students who gave pregraduation and the number who gave postgraduation. In the 2013 class, individuals gave greater amounts postgraduation compared to pregraduation, and a higher percentage of alumni (8%) gave postgraduation compared to alumni from the previous class (<1%). Conclusion. Participation in year-one postgraduation giving increased after implementation of the senior class gift program. Receiving scholarships influenced giving to the class gift but was not associated with postgraduation giving. Future studies are needed to develop a more comprehensive understanding of student and alumni philanthropy.

  8. A New Frequency-Domain Method for Bunch Length Measurement

    NASA Astrophysics Data System (ADS)

    Ferianis, M.; Pros, M.

    1997-05-01

    A new method for bunch length measurements has been developed at Elettra. It is based on a spectral observation of the synchrotron radiation light pulses. The single pulse spectrum is shaped by means of an optical process which gives the method an increased sensitivity compared to the usual spectral observations. Some simulations have been carried out to check the method in non-ideal conditions. The results of the first measurements are also presented.

  9. Particle simulation of Coulomb collisions: Comparing the methods of Takizuka and Abe and Nanbu

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chiaming; Lin, Tungyou; Caflisch, Russel

    2008-04-20

    The interactions of charged particles in a plasma are governed by long-range Coulomb collisions. We compare two widely used Monte Carlo models for Coulomb collisions. One was developed by Takizuka and Abe in 1977, the other by Nanbu in 1997. We perform deterministic and statistical error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time step errors. Error comparisons between these two methods are presented.
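The Takizuka-Abe model pairs particles at random each time step and rotates each pair's relative velocity by a sampled scattering angle. Below is a minimal sketch of one such binary collision, assuming the variance of tan(θ/2) is supplied directly rather than assembled from density, charges, ln Λ, and the time step as in the full model; momentum and kinetic energy are conserved by construction, since the update is a pure rotation of the relative velocity.

```python
import numpy as np

rng = np.random.default_rng(0)

def takizuka_abe_pair(v1, v2, m1, m2, var_delta):
    """One Takizuka-Abe binary Coulomb collision (sketch).

    var_delta is the variance of delta = tan(theta/2); in the full model it
    is built from plasma parameters and the time step (assumed here).
    """
    u = v1 - v2                              # relative velocity
    umag = np.linalg.norm(u)
    uperp = np.hypot(u[0], u[1])
    delta = rng.normal(0.0, np.sqrt(var_delta))
    sin_t = 2.0 * delta / (1.0 + delta**2)           # sin(theta)
    one_m_cos = 2.0 * delta**2 / (1.0 + delta**2)    # 1 - cos(theta)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    # Change of the relative velocity: a rotation, so |u| is preserved.
    du = np.array([
        (u[0] / uperp) * u[2] * sin_t * np.cos(phi)
        - (u[1] / uperp) * umag * sin_t * np.sin(phi)
        - u[0] * one_m_cos,
        (u[1] / uperp) * u[2] * sin_t * np.cos(phi)
        + (u[0] / uperp) * umag * sin_t * np.sin(phi)
        - u[1] * one_m_cos,
        -uperp * sin_t * np.cos(phi) - u[2] * one_m_cos,
    ])
    mu = m1 * m2 / (m1 + m2)                 # reduced mass
    return v1 + (mu / m1) * du, v2 - (mu / m2) * du

v1 = np.array([1.0, 0.5, -0.2])
v2 = np.array([-0.3, 0.1, 0.4])
w1, w2 = takizuka_abe_pair(v1, v2, 1.0, 4.0, 0.05)
```

Nanbu's model differs in how the cumulative scattering angle over a time step is sampled, which is the source of its smaller time-step errors.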

  10. Analysis of polyethylene terephthalate PET plastic bottle jointing system using finite element method (FEM)

    NASA Astrophysics Data System (ADS)

    Zaidi, N. A.; Rosli, Muhamad Farizuan; Effendi, M. S. M.; Abdullah, Mohamad Hariri

    2017-09-01

    The strength, durability, and stiffness of polyethylene terephthalate (PET) in injection-molded jointing systems for wood furniture were analyzed using the finite element method (FEM). The FEM was utilized to analyze PET jointing systems for oak and pine as the wood-based furniture materials. Different PET joint pattern designs give different values of furniture strength. The results show that the wood specimen with grooves and an ellipse-pattern PET joint gives a lower global estimated error of 28.90%, compared with a global estimated error of 63.21% for the rectangular, non-grooved wood specimen.

  11. On physical optics for calculating scattering from coated bodies

    NASA Technical Reports Server (NTRS)

    Baldauf, J.; Lee, S. W.; Ling, H.; Chou, R.

    1989-01-01

    The familiar physical optics (PO) approximation is no longer valid when a perfectly conducting scatterer is coated with dielectric material. This paper reviews several possible PO formulations. Comparison of the PO formulations with the moment method solution based on the impedance boundary condition for the coated cone-sphere shows that a PO formulation using both electric and magnetic currents consistently gives the best numerical results. Comparisons of the exact moment method with the PO formulation using the impedance boundary condition and the PO formulation using the Fresnel reflection coefficient for scattering from the cone-ellipsoid demonstrate that the Fresnel reflection coefficient gives the best numerical results in general.

  12. Practical theories for service life prediction of critical aerospace structural components

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Monaghan, Richard C.; Jackson, Raymond H.

    1992-01-01

    A new second-order theory was developed for predicting the service lives of aerospace structural components. The predictions based on this new theory were compared with those based on the Ko first-order theory and the classical theory of service life predictions. The new theory gives very accurate service life predictions. An equivalent constant-amplitude stress cycle method was proposed for representing the random load spectrum for crack growth calculations. This method predicts the most conservative service life. The proposed use of minimum detectable crack size, instead of proof load established crack size as an initial crack size for crack growth calculations, could give a more realistic service life.
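The crack growth calculations mentioned above are commonly based on a Paris-type growth law; the abstract does not give the authors' equations, so the sketch below uses the generic Paris law da/dN = C(ΔK)^m with hypothetical constants to show how a service life (cycle count) follows from an initial detectable crack size under an equivalent constant-amplitude stress cycle.

```python
import math

# Hypothetical Paris-law parameters for an equivalent constant-amplitude
# stress cycle: da/dN = C * (dK)^m with dK = Y * dS * sqrt(pi * a).
C, m = 1.0e-11, 3.0          # material constants (assumed)
Y, dS = 1.0, 100.0           # geometry factor and stress range (assumed)
a0, ac = 0.001, 0.02         # minimum detectable and critical crack sizes

def cycles_to_failure(a0, ac, n_steps=200000):
    """Numerically integrate dN = da / (C * (Y*dS*sqrt(pi*a))**m)."""
    da = (ac - a0) / n_steps
    N, a = 0.0, a0
    for _ in range(n_steps):
        dK = Y * dS * math.sqrt(math.pi * (a + 0.5 * da))  # midpoint rule
        N += da / (C * dK**m)
        a += da
    return N

# Closed form for m != 2, used as a cross-check on the integration:
p = 1.0 - m / 2.0
N_exact = (ac**p - a0**p) / (C * (Y * dS * math.sqrt(math.pi))**m * p)
```

Starting the integration from the minimum detectable crack size rather than a proof-load established size, as the abstract proposes, simply changes a0 and (since growth is slowest at small a) lengthens the predicted life.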

  13. Evaluation of methods of determining humic acids in nucleic acid samples for molecular biological analysis.

    PubMed

    Wang, Yong; Fujii, Takeshi

    2011-01-01

    It is important in molecular biological analyses to evaluate contamination by co-extracted humic acids in DNA/RNA extracted from soil. We compared the sensitivity of various methods for measuring humic acids, and the influence of DNA/RNA and proteins on the measurements. Based on the results, we give suggestions on the choice of methods for measuring humic acids in molecular biological analyses.

  14. Pyocin-sensitivity testing as a method of typing Pseudomonas aeruginosa: use of "phage-free" preparations of pyocin.

    PubMed

    Rampling, A; Whitby, J L; Wildy, P

    1975-11-01

    A method for pyocin-sensitivity typing by means of "phage-free" preparations of pyocin is described. The method was tested on 227 isolates of P. aeruginosa, collected from 34 different foci of infection in hospitals in the British Isles and the results were compared with those for combined serological and phage typing of all strains and pyocin production of 105 of the isolates. It is concluded that pyocin-sensitivity typing is a simple and reliable method giving a high degree of discrimination, comparable to that of combined serological and phage typing, and it is suitable for use in routine hospital laboratories.

  15. Comparative evaluation of ultrasound scanner accuracy in distance measurement

    NASA Astrophysics Data System (ADS)

    Branca, F. P.; Sciuto, S. A.; Scorza, A.

    2012-10-01

    The aim of the present study is to develop and compare two different automatic methods for accuracy evaluation in ultrasound phantom measurements on B-mode images. Both return the relative error e between distances measured by 14 brand-new ultrasound medical scanners and the nominal distances among nylon wires embedded in a reference test object. The first method is based on least squares estimation, while the second applies the mean value of the same distance evaluated at different locations in the ultrasound image (same-distance method). Results for both methods are presented and explained.
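A minimal sketch of the two criteria, with hypothetical nominal and measured distances; the least squares fit here is a simple through-origin scale fit, one plausible reading of the abstract, not necessarily the authors' exact formulation.

```python
import numpy as np

# Hypothetical distances: nominal wire spacings in the test object (mm) and
# the corresponding distances measured on the B-mode image by one scanner.
nominal = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
measured = np.array([10.2, 20.3, 30.5, 40.9, 51.0])

# Method 1 (least squares): fit measured = k * nominal through the origin;
# the relative error is then e = k - 1.
k = np.sum(measured * nominal) / np.sum(nominal * nominal)
e_ls = k - 1.0

# Method 2 (same-distance style): average the per-distance relative errors.
e_mean = np.mean((measured - nominal) / nominal)
```

For a well-calibrated scanner both estimates should be small and close to each other; a systematic scale error in the scanner shows up directly as a nonzero e.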

  16. A Comparison of the Predictive Capabilities of the Embedded-Atom Method and Modified Embedded-Atom Method Potentials for Lithium

    DOE PAGES

    Vella, Joseph R.; Stillinger, Frank H.; Panagiotopoulos, Athanassios Z.; ...

    2015-07-23

    Here, we compare six lithium potentials by examining their ability to predict coexistence properties and liquid structure using molecular dynamics. All potentials are of the embedded-atom-method (EAM) type. The coexistence properties we focus on are the melting curve, vapor pressure, saturated liquid density, and vapor-liquid surface tension. For each property studied, the simulation results are compared to available experimental data in order to properly assess the accuracy of each potential. We find that the Cui 2NN MEAM is the most robust potential, giving adequate agreement with most of the properties examined. For example, the zero-pressure melting point of this potential is shown to be around 443 K, while experimentally it is about 454 K. This potential also gives excellent agreement with saturated liquid densities, even though no liquid properties were used in the fitting procedure. Our study allows us to conclude that the Cui 2NN MEAM should be used for further simulations of lithium.

  17. Quantitative Determination of Photosynthetic Pigments in Green Beans Using Thin-Layer Chromatography and a Flatbed Scanner as Densitometer

    ERIC Educational Resources Information Center

    Valverde, Juan; This, Herve; Vignolle, Marc

    2007-01-01

    A simple method for the quantitative determination of photosynthetic pigments extracted from green beans using thin-layer chromatography is proposed. Various extraction methods are compared, and it is shown how a simple flatbed scanner and free software for image processing can give a quantitative determination of pigments. (Contains 5 figures.)

  18. Evaluation of Explosive Strength for Young and Adult Athletes

    ERIC Educational Resources Information Center

    Viitasalo, Jukka T.

    1988-01-01

    The reliability of new electrical measurements of vertical jumping height and of throwing velocity was tested. These results were compared to traditional measurement techniques. The new method was found to give reliable results from children to adults. Methodology is discussed. (Author/JL)

  19. Performance of the Effective Core Potentials of Ca, Hg and Pb in Complexes with Ligands Containing N and O Donor Atoms.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramirez, Jose Z.; Vargas, Rubicelia; Garza, Jorge

    This paper presents a systematic study of the performance of the relativistic effective core potentials (RECPs) proposed by Stoll-Preuss, Christiansen-Ermler and Hay-Wadt for Ca2+, Hg2+ and Pb2+. The RECP performance is studied when these cations are combined with ethylene glycol, 2-aminoethanol and ethylenediamine to form bidentate complexes. First, the description of the bidentate ligands is analyzed with the Kohn-Sham method using the SVWN, BLYP and B3LYP exchange-correlation functionals and compared with Moeller-Plesset perturbation theory (MP2); for all these methods the TZVP basis set was used. We found that the BLYP exchange-correlation functional gives results similar to those obtained by the B3LYP and MP2 methods. Thus, the bidentate metal complexes were studied with the BLYP method combined with the RECPs. In order to compare RECP performance, all the systems considered in this work were also studied with the relativistic all-electron Douglas-Kroll (DK3) method. We observed that the Christiansen-Ermler RECPs give the best energetic and geometrical description for the Ca and Hg complexes when compared with the all-electron method. For the Pb complexes, the spin-orbit interaction and the basis set superposition error must be taken into account in the RECP. In general, the trend shown in the complexation energies with the all-electron method is followed by the complexation energies computed with all the pseudopotentials tested in this work. Battelle operates PNNL for the USDOE.

  20. Enhance Video Film using Retnix method

    NASA Astrophysics Data System (ADS)

    Awad, Rasha; Al-Zuky, Ali A.; Al-Saleh, Anwar H.; Mohamad, Haidar J.

    2018-05-01

    An enhancement technique is used to improve the quality of the studied video. Statistics such as the mean and standard deviation are used as criteria in this paper, applied to each video clip, which is divided into 80 images. The studied filming environments have different light intensities (315, 566, and 644 lux), which approximates the variability of outdoor filming. The outputs of the suggested algorithm are compared with the results before applying it. The method is applied in two ways: first, to the full video clip to obtain the enhanced film; second, to every individual image, with the enhanced images then compiled into the enhanced film. This paper shows that the enhancement technique gives good-quality video film based on a statistical assessment, and it is recommended for use in different applications.
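As an illustration of the statistical criterion described (mean and standard deviation computed per frame, before and after enhancement), the sketch below applies a simple per-frame contrast stretch to synthetic frames; the paper's actual Retinex-style enhancement is not reproduced here, and the frame data are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a clip split into 80 grayscale frames (as in the abstract);
# real frames would come from a video decoder.
frames = rng.integers(40, 90, size=(80, 48, 64)).astype(float)

def stretch(frame, lo=0.0, hi=255.0):
    """Simple per-frame contrast stretch to the full intensity range."""
    fmin, fmax = frame.min(), frame.max()
    return (frame - fmin) * (hi - lo) / (fmax - fmin) + lo

enhanced = np.stack([stretch(f) for f in frames])

# Mean and standard deviation per frame serve as the quality criteria,
# compared before and after enhancement.
std_before = frames.std(axis=(1, 2)).mean()
std_after = enhanced.std(axis=(1, 2)).mean()
```

A higher per-frame standard deviation after enhancement indicates increased contrast, which is the direction of improvement the abstract's criterion is tracking.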

  1. Freestanding midwifery units versus obstetric units: does the effect of place of birth differ with level of social disadvantage?

    PubMed Central

    2012-01-01

    Background Social inequity in perinatal and maternal health is a well-documented health problem even in countries with a high level of social equality. We aimed to study whether the effect of birthplace on perinatal and maternal morbidity, birth interventions and use of pain relief among low risk women intending to give birth in two freestanding midwifery units (FMU) versus two obstetric units in Denmark differed by level of social disadvantage. Methods The study was designed as a cohort study with a matched control group. It included 839 low-risk women intending to give birth in an FMU, who were prospectively and individually matched on nine selected obstetric/socio-economic factors to 839 low-risk women intending OU birth. Educational level was chosen as a proxy for social position. Analysis was by intention-to-treat. Results Women intending to give birth in an FMU had a significantly higher likelihood of uncomplicated, spontaneous birth with good outcomes for mother and infant compared to women intending to give birth in an OU. The likelihood of intact perineum, use of upright position for birth and water birth was also higher. No difference was found in perinatal morbidity or third/fourth degree tears, while birth interventions including caesarean section and epidural analgesia were significantly less frequent among women intending to give birth in an FMU. In our sample of healthy low-risk women with spontaneous onset of labour at term after an uncomplicated pregnancy, the positive results of intending to give birth in an FMU as compared to an OU were found to hold for both women with post-secondary education and the potentially vulnerable group of FMU women without post-secondary education. In all cases, women without post-secondary education intending to give birth in an FMU had comparable and, in some respects, more favourable outcomes when compared to women with the same level of education intending to give birth in an OU. 
In this sample of low-risk women, we found that the effect of intended place on birth outcomes did not differ with women’s level of education. Conclusion FMU care appears to offer important benefits for birthing women with no additional risk to the infant. Both for women with and without post-secondary education, intending to give birth in an FMU significantly increased the likelihood of a spontaneous, uncomplicated birth with good outcomes for mother and infant compared to women intending to give birth in an OU. All women should be provided with adequate information about different care models and supported in making an informed decision about the place of birth. PMID:22726575

  2. Conjugate-Gradient Neural Networks in Classification of Multisource and Very-High-Dimensional Remote Sensing Data

    NASA Technical Reports Server (NTRS)

    Benediktsson, J. A.; Swain, P. H.; Ersoy, O. K.

    1993-01-01

    Application of neural networks to the classification of remote sensing data is discussed. Conventional two-layer backpropagation is found to give good results in classification of remote sensing data but is not efficient in training. A more efficient variant, based on conjugate-gradient optimization, is used for classification of multisource remote sensing and geographic data and very-high-dimensional data. The conjugate-gradient neural networks give excellent performance in classification of multisource data, but do not compare as well with statistical methods in classification of very-high-dimensional data.

  3. Ab initio phonon thermal transport in monolayer InSe, GaSe, GaS, and alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandey, Tribhuwan; Parker, David S.; Lindsay, Lucas

    We compare vibrational properties and phonon thermal conductivities (κ) of monolayer InSe, GaSe, and GaS using density functional theory and Peierls-Boltzmann transport methods. In going from InSe to GaSe to GaS, the system mass decreases, giving both increasing acoustic phonon velocities and decreasing scattering of these heat-carrying modes with optic phonons, ultimately giving κ(InSe) < κ(GaSe) < κ(GaS). This behavior is demonstrated by correlating the scattering phase space limited by fundamental conservation conditions with mode scattering rates and phonon dispersions for each material. We also show that, unlike flat monolayer systems such as graphene, thermal transport is governed by in-plane vibrations in InSe, GaSe, and GaS, similar to buckled monolayer materials such as silicene. Alloying of the InSe, GaSe, and GaS systems provides an effective method for modulating their κ through intrinsic vibrational modifications and phonon scattering from mass disorder, giving reductions of ~2-3.5 times. This disorder also suppresses phonon mean free paths in the alloy systems compared to those in their crystalline counterparts. This work provides fundamental insights into lattice thermal transport from basic vibrational properties for an interesting set of two-dimensional materials.

  4. Effectiveness of a Television Advertisement Campaign on Giving Cigarettes in a Chinese Population

    PubMed Central

    Qin, Yu; Su, Jian; Xiang, Quanyong; Hu, Yihe; Xu, Guanqun; Ma, Jiuhua; Shi, Zumin

    2014-01-01

    Background Anti-tobacco television advertisement campaigns may convey messages on smoking-related health consequences and create norms against giving cigarettes. Methods Altogether, 156 and 112 slots of a television advertisement, “Giving cigarettes is giving harm,” were aired in Suzhou and Yizheng, respectively, over one month in 2010. Participants were recruited from 15 locations in Suzhou and 8 locations in Yizheng using a street-intercept method. Overall, 2306 residents aged 18–45 years completed questionnaires, including 1142 before the campaign and 1164 after, with respective response rates of 79.1% and 79.7%. Chi-square tests were used to compare differences between categorical variables. Results After the campaign, 36.0% of subjects recalled that they had seen the advertisement. Residents of Suzhou had a higher recall rate than those of Yizheng (47.6% vs. 20.6%, P < 0.001). The rate of not giving cigarettes dropped from 32.1% before the campaign to 28.5% after (P = 0.05). In the post-campaign evaluation, participants who reported seeing the advertisement were more likely not to give cigarettes in the future than those who reported not seeing it (38.7% vs. 27.5%, P < 0.001). Conclusions Our study showed that an anti-tobacco television advertisement campaign helped change societal norms and improve health behavior. Continuous and adequate funding of anti-tobacco media campaigns targeted at different levels of the general population is needed, in conjunction with a comprehensive tobacco control effort. PMID:25196169
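As a sketch of the chi-square comparison used above, the code computes the Pearson statistic for a 2x2 recall-by-city table. The counts are hypothetical, chosen only to mirror the reported 47.6% vs. 20.6% recall rates, since the abstract does not give the underlying cell counts.

```python
# Hypothetical 2x2 table of advertisement recall by city.
# rows: Suzhou, Yizheng; cols: recalled, did not recall
table = [[333, 367], [96, 368]]

def chi_square(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

stat = chi_square(table)
# Critical value for 1 degree of freedom at alpha = 0.05 is 3.841.
significant = stat > 3.841
```

With a recall gap this large the statistic far exceeds the 5% critical value, consistent with the P < 0.001 reported in the abstract.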

  5. A study on the performance comparison of metaheuristic algorithms on the learning of neural networks

    NASA Astrophysics Data System (ADS)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2017-08-01

    The learning or training process of neural networks entails finding the optimal set of parameters, which includes translation vectors, dilation parameters, synaptic weights, and bias terms. Apart from the traditional gradient descent-based methods, metaheuristic methods can also be used for this learning purpose. Since the inception of the genetic algorithm half a century ago, the last decade has witnessed an explosion of novel metaheuristic algorithms, such as the harmony search algorithm, bat algorithm, and whale optimization algorithm. Despite the proof of the no free lunch theorem in the discipline of optimization, a survey of the machine learning literature gives contrasting results. Some researchers report that certain metaheuristic algorithms are superior to others, whereas others argue that different metaheuristic algorithms give comparable performance. As such, this paper aims to investigate whether a certain metaheuristic algorithm will outperform the others. In this work, three metaheuristic algorithms, namely genetic algorithms, particle swarm optimization, and the harmony search algorithm, are considered. The algorithms are incorporated in the learning of neural networks, and their classification results on the benchmark UCI machine learning data sets are compared. It is found that all three metaheuristic algorithms give similar and comparable performance, as captured in the average overall classification accuracy. The results corroborate the findings reported by previous researchers. Several recommendations are given, including the need for statistical analysis to verify the results and further theoretical work to support the obtained empirical results.
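To illustrate how a metaheuristic can drive such training, the sketch below implements the core particle swarm optimization loop. A toy quadratic loss stands in for the network's training loss over its weight vector, and all parameter values (inertia 0.72, acceleration coefficients 1.49, swarm size) are conventional textbook choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def loss(w):
    """Toy stand-in for a network training loss (sphere function)."""
    return np.sum(w * w, axis=-1)

def pso(dim=6, n_particles=30, iters=200, inertia=0.72, c1=1.49, c2=1.49):
    """Minimal particle swarm optimization loop (a sketch)."""
    x = rng.uniform(-5.0, 5.0, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_f = x.copy(), loss(x)                    # personal bests
    g = pbest[np.argmin(pbest_f)]                         # global best
    for _ in range(iters):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v = inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = loss(x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]
    return g, loss(g)

best_w, best_f = pso()
```

In the paper's setting, `loss` would be the network's classification error on the training set as a function of its flattened parameter vector; the swarm structure is unchanged.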

  6. Particle Simulation of Coulomb Collisions: Comparing the Methods of Takizuka & Abe and Nanbu

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, C; Lin, T; Caflisch, R

    2007-05-22

    The interactions of charged particles in a plasma are governed by long-range Coulomb collisions. We compare two widely used Monte Carlo models for Coulomb collisions. One was developed by Takizuka and Abe in 1977, the other by Nanbu in 1997. We perform deterministic and stochastic error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time step errors. Error comparisons between these two methods are presented.

  7. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
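The INTLAB toolbox referenced above is MATLAB-based; as a language-neutral illustration of the idea, here is a minimal interval type in Python. The measured values are hypothetical, and only the operations needed for the example are implemented.

```python
class Interval:
    """Minimal interval type: tracks guaranteed lower/upper bounds."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The product's bounds are the extremes of the corner products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def width(self):
        return self.hi - self.lo

# Measured values 3.0 +/- 0.1 and 2.0 +/- 0.1 carried as intervals:
a = Interval(2.9, 3.1)
b = Interval(1.9, 2.1)
area = a * b                       # guaranteed enclosure [5.51, 6.51]
```

For this product, worst-case linear error propagation gives 6.0 ± 0.5, while the interval result [5.51, 6.51] additionally captures the second-order 0.1 × 0.1 term, so the two agree closely with far less bookkeeping than a symbolic propagation formula.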

  8. Elastic models: a comparative study applied to retinal images.

    PubMed

    Karali, E; Lambropoulou, S; Koutsouris, D

    2011-01-01

    In this work various parametric elastic model methods are compared, namely the classical snake, the gradient vector field snake (GVF snake) and the topology-adaptive snake (t-snake), as well as the self-affine mapping system as an alternative to elastic models. We also give a brief overview of the methods used. The self-affine mapping system is implemented using an adaptive scheme with minimum distance as the optimization criterion, which is more suitable for weak-edge detection. All methods are applied to glaucomatous retinal images with the purpose of segmenting the optic disk. The methods are compared in terms of segmentation accuracy and speed, as derived from cross-correlation coefficients between real and algorithm-extracted contours and from segmentation time, respectively. The self-affine mapping system presents adequate segmentation time and segmentation accuracy, and significant independence from initialization.

  9. Comparison of different particles and methods for magnetic isolation of circulating tumor cells

    NASA Astrophysics Data System (ADS)

    Sieben, S.; Bergemann, C.; Lübbe, A.; Brockmann, B.; Rescheleit, D.

    2001-01-01

    A more effective method for tumor cell separation from peripheral blood was established. The results for optimized magnetic particles, verified by analyzing the yield, purity and viability of isolated epithelial tumor cells, were compared with those of a commercial kit for immunomagnetic cell separation. Porous silica particles of 230 nm were found to give the best recovery rates and high viability of the extracted cells.

  10. Comparison of Different Protein Extraction Methods for Gel-Based Proteomic Analysis of Ganoderma spp.

    PubMed

    Al-Obaidi, Jameel R; Saidi, Noor Baity; Usuldin, Siti Rokhiyah Ahmad; Hussin, Siti Nahdatul Isnaini Said; Yusoff, Noornabeela Md; Idris, Abu Seman

    2016-04-01

    Ganoderma species are a group of fungi that have the ability to degrade lignin polymers, cause severe diseases such as stem and root rot, and can infect economically important plants and perennial crops such as oil palm, especially in tropical countries such as Malaysia. Unfortunately, very little is known about the complex interplay between oil palm and Ganoderma in the pathogenesis of these diseases. Proteomic technologies are simple yet powerful tools for comparing protein profiles and have been widely used to study plant-fungus interactions. A critical step in good proteome research is to establish a method that gives the best quality and the widest coverage of total proteins. Despite the availability of various protein extraction protocols for pathogenic fungi in the literature, no single extraction method has been found suitable for all types of pathogenic fungi. To develop an optimized protein extraction protocol for 2-DE gel analysis of Ganoderma spp., three previously reported protein extraction protocols were compared: trichloroacetic acid, sucrose, and phenol/ammonium acetate in methanol. The third method was found to give the most reproducible gels and the highest protein concentration. Using the latter method, a total of 10 protein spots (5 from each species) were successfully identified. Hence, the results from this study propose phenol/ammonium acetate in methanol as the most effective protein extraction method for 2-DE proteomic studies of Ganoderma spp.

  11. Radiative corrections to quantum sticking on graphene

    NASA Astrophysics Data System (ADS)

    Sengupta, Sanghita; Clougherty, Dennis P.

    2017-07-01

    We study the sticking rate of atomic hydrogen to suspended graphene using four different methods that include contributions from processes with multiphonon emission. We compare the numerical results of the sticking rate obtained by: (i) the loop expansion of the atom self-energy; (ii) the noncrossing approximation (NCA); (iii) the independent boson model approximation (IBMA); and (iv) a leading-order soft-phonon resummation method (SPR). The loop expansion reveals an infrared problem, analogous to the infamous infrared problem in QED. The two-loop contribution to the sticking rate gives a result that tends to diverge for large membranes. The latter three methods remedy this infrared problem and give results that are finite in the limit of an infinite membrane. We find that for micromembranes (sizes ranging 100 nm to 10 μ m ), the latter three methods give results that are in good agreement with each other and yield sticking rates that are mildly suppressed relative to the lowest-order golden rule rate. Lastly, we find that the SPR sticking rate decreases slowly to zero with increasing membrane size, while both the NCA and IBMA rates tend to a nonzero constant in this limit. Thus, approximations to the sticking rate can be sensitive to the effects of soft-phonon emission for large membranes.

  12. DFT Study of Small Gold Clusters, Au_n (2 ≤ n ≤ 6): Stability and Charge Distribution Using M08-SO Functional

    NASA Astrophysics Data System (ADS)

    Carvalho, F. S.; Braga, J. P.

    2018-05-01

    We have investigated the most stable structures of small gold clusters, Au_n (2 ≤ n ≤ 6), using the density functional theory method. Two functionals used in the literature, the well-known B3LYP and M06-L, were compared with one that has not yet been used for this system, M08-SO, and the results for the dimer were compared with experimental data. It was found that M08-SO gives the best results for the effective core potential and basis set tested. Therefore, the M08-SO functional was used for the other structures. The planar geometries were found to have the lowest energies. After geometry optimization, Mulliken population analysis (MPA) and natural population analysis (NPA) were carried out, and the results for the charge distribution in the gold trimer and tetramer were compared with data found in the literature. The MPA calculation does not give results in agreement with the literature; the NPA calculation, on the other hand, gives coherent data. The results showed that the charge distribution will not always predict the most favorable site of interaction.

  13. A Simple Method for Principal Strata Effects When the Outcome Has Been Truncated Due to Death

    PubMed Central

    Chiba, Yasutaka; VanderWeele, Tyler J.

    2011-01-01

    In randomized trials with follow-up, outcomes such as quality of life may be undefined for individuals who die before the follow-up is complete. In such settings, restricting analysis to those who survive can give rise to biased outcome comparisons. An alternative approach is to consider the “principal strata effect” or “survivor average causal effect” (SACE), defined as the effect of treatment on the outcome among the subpopulation that would have survived under either treatment arm. The authors describe a very simple technique that can be used to assess the SACE. They give both a sensitivity analysis technique and conditions under which a crude comparison provides a conservative estimate of the SACE. The method is illustrated using data from the ARDSnet (Acute Respiratory Distress Syndrome Network) clinical trial comparing low-volume ventilation and traditional ventilation methods for individuals with acute respiratory distress syndrome. PMID:21354986

  14. A simple method for principal strata effects when the outcome has been truncated due to death.

    PubMed

    Chiba, Yasutaka; VanderWeele, Tyler J

    2011-04-01

    In randomized trials with follow-up, outcomes such as quality of life may be undefined for individuals who die before the follow-up is complete. In such settings, restricting analysis to those who survive can give rise to biased outcome comparisons. An alternative approach is to consider the "principal strata effect" or "survivor average causal effect" (SACE), defined as the effect of treatment on the outcome among the subpopulation that would have survived under either treatment arm. The authors describe a very simple technique that can be used to assess the SACE. They give both a sensitivity analysis technique and conditions under which a crude comparison provides a conservative estimate of the SACE. The method is illustrated using data from the ARDSnet (Acute Respiratory Distress Syndrome Network) clinical trial comparing low-volume ventilation and traditional ventilation methods for individuals with acute respiratory distress syndrome.
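A sketch of the crude survivor comparison the authors analyze, on hypothetical trial records; whether this comparison is conservative for the SACE depends on the monotonicity-type conditions stated in the paper, which the code does not check.

```python
# Hypothetical trial records: (treated, survived, outcome-if-survived).
# Outcome is None for those who died, i.e. it is truncated by death.
records = [
    (1, 1, 70.0), (1, 1, 65.0), (1, 0, None), (1, 1, 80.0),
    (0, 1, 60.0), (0, 0, None), (0, 1, 55.0), (0, 1, 62.0),
]

def crude_survivor_comparison(records):
    """Difference in mean outcome among survivors, by treatment arm.

    Under the conditions of Chiba & VanderWeele, this crude comparison
    gives a conservative estimate of the survivor average causal effect.
    """
    def arm_mean(t):
        ys = [y for (tr, s, y) in records if tr == t and s == 1]
        return sum(ys) / len(ys)
    return arm_mean(1) - arm_mean(0)

diff = crude_survivor_comparison(records)
```

The key point of the method is that this quantity is computable from observed data alone, unlike the SACE itself, which conditions on the unobservable principal stratum of always-survivors.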

  15. An Assessment of Artificial Compressibility and Pressure Projection Methods for Incompressible Flow Simulations

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, C.; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The performance of two commonly used numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, is compared. These formulations are selected primarily because they are designed for three-dimensional applications. The computational procedures are compared by obtaining steady-state solutions of a wake vortex and unsteady solutions of a curved duct flow. For steady computations, the artificial compressibility method was very efficient in terms of computing time and robustness. For unsteady flows, which require a small physical time step, the pressure projection method was found to be computationally more efficient than the artificial compressibility method. This comparison is intended to give some basis for selecting a method or a flow solution code for large three-dimensional applications where computing resources become a critical issue.

  16. Autocorrelated process control: Geometric Brownian Motion approach versus Box-Jenkins approach

    NASA Astrophysics Data System (ADS)

    Salleh, R. M.; Zawawi, N. I.; Gan, Z. F.; Nor, M. E.

    2018-04-01

    The existence of autocorrelation has a significant effect on the performance and accuracy of process control if it is not handled carefully. When dealing with an autocorrelated process, the Box-Jenkins method is often preferred because of its popularity. However, the computation involved in the Box-Jenkins method is complicated and challenging, which makes it time-consuming. Therefore, an alternative method known as Geometric Brownian Motion (GBM) is introduced to monitor the autocorrelated process. A real case study of furnace temperature data is conducted to compare the performance of the Box-Jenkins and GBM methods in monitoring an autocorrelated process. Both methods give the same results in terms of model accuracy and monitoring process control. Yet, GBM is superior to the Box-Jenkins method due to its simplicity and practicality, with shorter computational time.
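The appeal of the GBM route can be conveyed with a short sketch (an illustrative reading of the approach, not the authors' implementation): if a series follows geometric Brownian motion, its log-ratios are i.i.d. normal, so an ordinary Shewhart chart on the log-ratios sidesteps the autocorrelation in the raw series without fitting any ARIMA model.

```python
import numpy as np

def gbm_control_limits(x, k=3.0):
    """If x follows geometric Brownian motion, the log-ratios
    r_t = ln(x_t / x_{t-1}) are i.i.d. normal, so a plain k-sigma
    Shewhart chart on r_t handles the autocorrelation in x."""
    r = np.diff(np.log(np.asarray(x, dtype=float)))
    mu, sd = r.mean(), r.std(ddof=1)
    return mu - k * sd, mu + k * sd

def out_of_control(x, lcl, ucl):
    """Indices of log-ratios falling outside the control limits."""
    r = np.diff(np.log(np.asarray(x, dtype=float)))
    return np.flatnonzero((r < lcl) | (r > ucl))
```

A Box-Jenkins chart would instead fit an ARIMA model and monitor its residuals; the GBM chart reaches a similar monitoring statistic with far less model-building effort.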

  17. Estimating Body Composition in Adolescent Sprint Athletes: Comparison of Different Methods in a 3 Years Longitudinal Design

    PubMed Central

    Aerenhouts, Dirk

    2015-01-01

    A recommended field method to assess body composition in adolescent sprint athletes is currently lacking. Existing methods developed for non-athletic adolescents were not longitudinally validated and do not take maturation status into account. This longitudinal study compared two field methods, i.e., bioelectrical impedance analysis (BIA) and a skinfold-based equation, with underwater densitometry to track body fat percentage relative to years from age at peak height velocity in adolescent sprint athletes. In this study, adolescent sprint athletes (34 girls, 35 boys) were measured every 6 months over 3 years (age at start = 14.8 ± 1.5 yrs in girls and 14.7 ± 1.9 yrs in boys). Body fat percentage was estimated in 3 different ways: 1) using BIA with the TANITA TBF 410; 2) using a skinfold-based equation; 3) using underwater densitometry, which was considered the reference method. Height for age since birth was used to estimate age at peak height velocity. Cross-sectional analyses were performed using repeated measures ANOVA and Pearson correlations between measurement methods at each occasion. Data were analyzed longitudinally using a multilevel cross-classified model with the PROC Mixed procedure. In boys, compared to underwater densitometry, the skinfold-based formula revealed comparable values for body fatness during the study period, whereas BIA showed a different pattern, leading to an overestimation of body fatness starting from 4 years after age at peak height velocity. In girls, both the skinfold-based formula and BIA overestimated body fatness across the whole range of years from peak height velocity. The skinfold-based method appears to give an acceptable estimation of body composition during growth as compared to underwater densitometry in male adolescent sprinters. In girls, caution is warranted when interpreting estimations of body fatness by both BIA and a skinfold-based formula, since both methods tend to give an overestimation. PMID:26317426

  18. Disinfection of Escherichia coli bacteria using hybrid method of ozonation and hydrodynamic cavitation with orifice plate

    NASA Astrophysics Data System (ADS)

    Karamah, Eva F.; Ghaudenson, Rioneli; Amalia, Fitri; Bismo, Setijo

    2017-11-01

    This research aims to evaluate the performance of a hybrid method of ozonation and hydrodynamic cavitation with an orifice plate for E. coli disinfection. In this research, ozone dose, circulation flowrate, and disinfection method were varied. Ozone was produced by a commercial ozonator at doses of 64.83 mg/hour, 108.18 mg/hour, and 135.04 mg/hour. Meanwhile, hydrodynamic cavitation was generated by an orifice plate. The disinfection methods compared in this research were: hydrodynamic cavitation, ozonation, and the combination of both. The best result for each method was achieved at the 60th minute with a circulation flowrate of 7 L/min. The hybrid method attained a final concentration of 0 CFU/mL from an initial concentration of 2.10 × 10⁵ CFU/mL. The ozonation method attained a final concentration of 0 CFU/mL from an initial concentration of 1.32 × 10⁵ CFU/mL. The cavitation method gives the least disinfection, with a final concentration of 5.20 × 10⁴ CFU/mL from an initial concentration of 2.17 × 10⁵ CFU/mL. In conclusion, the hybrid method gives faster and better disinfection of E. coli than either method on its own.

  19. Reduced reciprocal giving in social anxiety - Evidence from the Trust Game.

    PubMed

    Anderl, Christine; Steil, Regina; Hahn, Tim; Hitzeroth, Patricia; Reif, Andreas; Windmann, Sabine

    2018-06-01

    Social anxiety is known to impair interpersonal relationships. These impairments are thought to partly arise from difficulties to engage in affiliative interactions with others, such as sharing favors or reciprocating prosocial acts. Here, we examined whether individuals high compared to low in social anxiety differ in giving towards strangers in an economic game paradigm. One hundred and twenty seven non-clinical participants who had been pre-screened to be either particularly high or low in social anxiety played an incentivized Trust Game to assess trustful and reciprocal giving towards strangers in addition to providing information on real life interpersonal functioning (perceived social support and attachment style). We found that reciprocal, but not trustful giving, was significantly decreased among highly socially anxious individuals. Both social anxiety and reciprocal giving furthermore showed significant associations with self-reported real life interpersonal functioning. Participants played the Trust Game with the strategy method; results need replication with a clinical sample. Individuals high in social anxiety showed reduced reciprocal, but intact trustful giving, pointing to a constraint in responsiveness. The research may contribute to the development of new treatment and prevention programs to reduce the interpersonal impairments in socially anxious individuals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Quality of Rapeseed Bio-Fuel Waste: Optical Properties

    NASA Astrophysics Data System (ADS)

    Sujak, Agnieszka; Muszyński, Siemowit; Kachel-Jakubowska, Magdalena

    2014-04-01

    The objective of the presented work was to examine the optical properties of selected bio-fuel waste. Three independent optical methods, UV-Vis spectroscopy, infrared spectroscopy, and chromametric measurements, were applied to establish a possible quality control test for the obtained substances. The following by-products were tested: distilled glycerine, technical glycerine, and the matter organic non-glycerine fraction from rapeseed oil bio-fuel production. The results show that analysis of UV-Vis spectra can give rapid information about the purity of distilled glycerine, while no direct information can be obtained concerning the concentration and kind of impurities. Transmission mode is more useful than absorption, given the detection abilities of average UV-Vis spectrometers. Infrared spectroscopy can be used as a complementary method for determining impurities/admixtures in samples. Measurements of chroma give the quickest data for comparing the colour of biofuel by-products obtained from different producers, on the condition that the products are obtained through the same or similar chemical processes and that a well-defined measuring background is used. All the discussed analyses are quick, cheap and non-destructive, and can help to compare the quality of products.

  1. Lecture 3: Some Suggestions and Remarks upon Observing Children

    ERIC Educational Resources Information Center

    Montessori, Maria

    2016-01-01

    These next two lectures succinctly discuss the necessary preparation and methods for observation. Using the naturalist Fabre as an example of scientific training of the faculties for sharp observation, Montessori compares the observer to a researcher and gives many suggestions for conducting thorough yet unobtrusive observation. Self-awareness of…

  2. Management Education: An Experimental Course.

    ERIC Educational Resources Information Center

    Gutelius, Paul Payne

    The thesis describes the design, implementation, and evaluation of a course in the theory and practice of management. It gives an appraisal of programmed learning techniques and compares three methods of teaching management--by readings, by cases, and by computer gaming. Additionally, it relates student reactions to the opportunity to select one…

  3. REVIEW AND EVALUATION OF CURRENT METHODS AND USER NEEDS FOR OTHER STATIONARY COMBUSTION SOURCES

    EPA Science Inventory

    The report gives results of Phase 1 of an effort to develop improved methodologies for estimating area source emissions of air pollutants from stationary combustion sources. The report (1) evaluates Area and Mobile Source (AMS) subsystem methodologies; (2) compares AMS results w...

  4. A Comparison of Methods to Measure the Magnetic Moment of Magnetotactic Bacteria through Analysis of Their Trajectories in External Magnetic Fields

    PubMed Central

    Fradin, Cécile

    2013-01-01

    Magnetotactic bacteria possess organelles called magnetosomes that confer a magnetic moment on the cells, resulting in their partial alignment with external magnetic fields. Here we show that analysis of the trajectories of cells exposed to an external magnetic field can be used to measure the average magnetic dipole moment of a cell population in at least five different ways. We apply this analysis to movies of Magnetospirillum magneticum AMB-1 cells, and compare the values of the magnetic moment obtained in this way to those obtained by direct measurement of magnetosome dimensions from electron micrographs. We find that methods relying on the viscous relaxation of the cell orientation give results comparable to those obtained by magnetosome measurements, whereas methods relying on statistical mechanics assumptions give systematically lower values of the magnetic moment. Since the observed distribution of magnetic moments in the population is not sufficient to explain this discrepancy, our results suggest that non-thermal random noise is present in the system, implying that a magnetotactic bacterial population should not be considered as similar to a paramagnetic material. PMID:24349185
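One member of the viscous-relaxation family mentioned above can be sketched as follows. Assuming an overdamped cell with rotational drag coefficient f_r turning toward a field B, the torque balance f_r·dθ/dt = −mB·sin θ integrates to ln tan(θ/2) = ln tan(θ₀/2) − (mB/f_r)·t, so the moment follows from the slope of a straight-line fit. The symbols and the drag value in the test are illustrative choices, not values from the paper.

```python
import numpy as np

def moment_from_relaxation(t, theta, B, f_r):
    """Estimate the magnetic moment m from the viscous rotation of a
    cell toward the field direction.  The overdamped torque balance
    f_r * dtheta/dt = -m * B * sin(theta) implies that
    ln tan(theta/2) decays linearly in t with slope -m*B/f_r,
    so m is recovered from a least-squares line fit."""
    y = np.log(np.tan(np.asarray(theta, dtype=float) / 2.0))
    slope = np.polyfit(np.asarray(t, dtype=float), y, 1)[0]
    return -slope * f_r / B
```

In practice θ(t) would be extracted frame by frame from the cell trajectory movies; here the fit is exercised on a synthetic relaxation curve.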

  5. Normal uniform mixture differential gene expression detection for cDNA microarrays

    PubMed Central

    Dean, Nema; Raftery, Adrian E

    2005-01-01

    Background One of the primary tasks in analysing gene expression data is finding genes that are differentially expressed in different samples. Multiple testing issues due to the thousands of tests run make some of the more popular methods for doing this problematic. Results We propose a simple method, Normal Uniform Differential Gene Expression (NUDGE) detection for finding differentially expressed genes in cDNA microarrays. The method uses a simple univariate normal-uniform mixture model, in combination with new normalization methods for spread as well as mean that extend the lowess normalization of Dudoit, Yang, Callow and Speed (2002) [1]. It takes account of multiple testing, and gives probabilities of differential expression as part of its output. It can be applied to either single-slide or replicated experiments, and it is very fast. Three datasets are analyzed using NUDGE, and the results are compared to those given by other popular methods: unadjusted and Bonferroni-adjusted t tests, Significance Analysis of Microarrays (SAM), and Empirical Bayes for microarrays (EBarrays) with both Gamma-Gamma and Lognormal-Normal models. Conclusion The method gives a high probability of differential expression to genes known/suspected a priori to be differentially expressed and a low probability to the others. In terms of known false positives and false negatives, the method outperforms all multiple-replicate methods except for the Gamma-Gamma EBarrays method to which it offers comparable results with the added advantages of greater simplicity, speed, fewer assumptions and applicability to the single replicate case. An R package called nudge to implement the methods in this paper will be made available soon at . PMID:16011807
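The flavor of the normal-uniform mixture can be conveyed with a compact EM sketch. This is a generic two-component fit on already-normalized log-ratios, not the NUDGE package itself; the initial values and the choice to fix the uniform component to the data range are assumptions of this sketch.

```python
import numpy as np

def normal_uniform_em(z, n_iter=200):
    """Fit a normal + uniform mixture to normalized log-ratios z.
    The uniform component (fixed to the data range) models
    differentially expressed genes; the normal component models the
    non-differential bulk.  Returns P(differential) for each gene."""
    z = np.asarray(z, dtype=float)
    u = 1.0 / (z.max() - z.min())            # fixed uniform density
    p, mu, sigma = 0.1, z.mean(), z.std()    # initial guesses
    for _ in range(n_iter):
        phi = np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
        w = p * u / (p * u + (1.0 - p) * phi)        # E-step: P(diff | z)
        p = w.mean()                                  # M-step updates
        mu = np.sum((1.0 - w) * z) / np.sum(1.0 - w)
        sigma = np.sqrt(np.sum((1.0 - w) * (z - mu) ** 2) / np.sum(1.0 - w))
    return w
```

The returned posterior probabilities play the role of the per-gene probabilities of differential expression the abstract describes.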

  6. Timing performance comparison of digital methods in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Aykac, Mehmet; Hong, Inki; Cho, Sanghee

    2010-11-01

    Accurate timing information is essential in positron emission tomography (PET). Recent improvements in high-speed electronics have made digital methods more attractive as alternative ways to create a time mark for an event. Two new digital methods (mean PMT pulse model, MPPM, and median filtered zero crossing method, MFZCM) were introduced in this work and compared to traditional methods such as digital leading edge (LE) and digital constant fraction discrimination (CFD). In addition, the performance of all four digital methods was compared to analog-based LE and CFD. The time resolution values for MPPM and MFZCM were measured below 300 ps at 1.6 GS/s; above that rate, the results were similar to the analog-based coincidence timing results. In addition, the two digital methods were insensitive to changes in the threshold setting, which might give some improvement in system dead time.
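The two traditional digital time pick-off methods named above can be sketched on a sampled pulse. The pulse shape, sampling rate, delay, and fraction below are illustrative choices, not the parameters used in the paper.

```python
import numpy as np

def _zero_cross(t, y):
    """Time of the first negative-to-positive crossing of y,
    refined by linear interpolation between the bracketing samples."""
    s = np.flatnonzero((y[:-1] < 0) & (y[1:] >= 0))[0]
    return t[s] - y[s] * (t[s + 1] - t[s]) / (y[s + 1] - y[s])

def leading_edge(t, pulse, threshold):
    """Digital leading edge: time mark at a fixed-threshold crossing.
    Simple, but the mark walks with pulse amplitude."""
    return _zero_cross(t, pulse - threshold)

def digital_cfd(t, pulse, delay_samples=4, fraction=0.3):
    """Digital CFD: zero crossing of (delayed pulse - fraction * pulse),
    a bipolar signal whose crossing is independent of pulse amplitude."""
    bipolar = np.empty_like(pulse)
    bipolar[delay_samples:] = pulse[:-delay_samples]
    bipolar[:delay_samples] = 0.0
    bipolar -= fraction * pulse
    return _zero_cross(t, bipolar)
```

The test below samples two Gaussian-like pulses at 1.6 GS/s (0.625 ns spacing) and checks that both methods recover a known shift, and that only the CFD mark is amplitude-independent.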

  7. Birth Outcomes of Latin Americans in Two Countries with Contrasting Immigration Admission Policies: Canada and Spain

    PubMed Central

    Urquia, Marcelo L.

    2015-01-01

    Background We delved into the selective migration hypothesis on health by comparing birth outcomes of Latin American immigrants giving birth in two receiving countries with dissimilar immigration admission policies: Canada and Spain. We hypothesized that a stronger immigrant selection in Canada will reflect more favourable outcomes among Latin Americans giving birth in Canada than among their counterparts giving birth in Spain. Materials and Methods We conducted a cross-sectional bi-national comparative study. We analyzed birth data of singleton infants born in Canada (2000–2005) (N = 31,767) and Spain (1998–2007) (N = 150,405) to mothers born in Spanish-speaking Latin American countries. We compared mean birthweight at 37–41 weeks gestation, and low birthweight and preterm birth rates between Latin American immigrants to Canada vs. Spain. Regression analysis for aggregate data was used to obtain Odds Ratios and Mean birthweight differences adjusted for infant sex, maternal age, parity, marital status, and father born in same source country. Results Latin American women in Canada had heavier newborns than their same-country counterparts giving birth in Spain, overall [adjusted mean birthweight difference: 101 grams; 95% confidence interval (CI): 98, 104], and within each maternal country of origin. Latin American women in Canada had fewer low birthweight and preterm infants than those giving birth in Spain [adjusted Odds Ratio: 0.88; 95% CI: 0.82, 0.94 for low birthweight, and 0.88; 95% CI: 0.84, 0.93 for preterm birth, respectively]. Conclusion Latin American immigrant women had better birth outcomes in Canada than in Spain, suggesting a more selective migration in Canada than in Spain. PMID:26308857

  8. Consistent characterization of semiconductor saturable absorber mirrors with single-pulse and pump-probe spectroscopy.

    PubMed

    Fleischhaker, R; Krauss, N; Schättiger, F; Dekorsy, T

    2013-03-25

    We study the comparability of the two most important measurement methods used for the characterization of semiconductor saturable absorber mirrors (SESAMs). For both methods, single-pulse spectroscopy (SPS) and pump-probe spectroscopy (PPS), we analyze in detail the time-dependent saturation dynamics inside a SESAM. Based on this analysis, we find that fluence-dependent PPS at complete spatial overlap and zero time delay is equivalent to SPS. We confirm our findings experimentally by comparing data from SPS and PPS of two samples. We show how to interpret this data consistently and we give explanations for possible deviations.

  9. Comparison of on-site field measured inorganic arsenic in rice with laboratory measurements using a field deployable method: Method validation.

    PubMed

    Mlangeni, Angstone Thembachako; Vecchi, Valeria; Norton, Gareth J; Raab, Andrea; Krupp, Eva M; Feldmann, Joerg

    2018-10-15

    A commercial arsenic field kit designed to measure inorganic arsenic (iAs) in water was modified into a field deployable method (FDM) to measure iAs in rice. While the method has been validated to give precise and accurate results in the laboratory, its on-site field performance has not been evaluated. This study was designed to test the method on-site in Malawi in order to evaluate its accuracy and precision in determining iAs on-site, by comparing it with a validated reference method and giving original data on inorganic arsenic in Malawian rice and rice-based products. The method was validated against the established laboratory-based HPLC-ICP-MS. Statistical tests indicated there were no significant differences between on-site and laboratory iAs measurements determined using the FDM (p = 0.263, α = 0.05) or between on-site measurements and measurements determined using HPLC-ICP-MS (p = 0.299, α = 0.05). This method allows quick (within 1 h) and efficient on-site screening of iAs concentrations in rice. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Comparison of Climatological Planetary Boundary Layer Depth Estimates Using the GEOS-5 AGCM

    NASA Technical Reports Server (NTRS)

    Mcgrath-Spangler, Erica Lynn; Molod, Andrea M.

    2014-01-01

    Planetary boundary layer (PBL) processes, including those influencing the PBL depth, control many aspects of weather and climate, and accurate models of these processes are important for forecasting future changes. However, evaluation of model estimates of PBL depth is difficult because no consensus on the definition of PBL depth currently exists, and the various methods for estimating this parameter can give results that differ by hundreds of meters or more. In order to facilitate comparisons between the Goddard Earth Observing System (GEOS-5) and other modeling and observational systems, seven PBL depth estimation methods are used to produce PBL depth climatologies and are evaluated and compared here. All seven methods evaluate the same atmosphere, so all differences are related solely to the definition chosen. These methods depend on the scalar diffusivity, bulk and local Richardson numbers, and the diagnosed horizontal turbulent kinetic energy (TKE). Results are aggregated by climate class in order to allow broad generalizations. The various PBL depth estimations give similar midday results, with some exceptions. One method based on horizontal TKE produces deeper PBL depths in the winter, associated with winter storms. In warm, moist conditions, the method based on a bulk Richardson number gives results that are shallower than those given by the methods based on the scalar diffusivity. The impact of turbulence driven by radiative cooling at cloud top is most significant during the evening transition and over several regions across the oceans, and methods sensitive to this cooling produce deeper PBL depths where it is most active. Additionally, Richardson number-based methods collapse better at night than methods that depend on the scalar diffusivity. This feature potentially affects tracer transport.
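As one concrete example of how such definitions are computed, the bulk-Richardson-number approach can be sketched as follows. The critical value 0.25 and the idealized profile in the test are common illustrative choices, not the GEOS-5 configuration.

```python
import numpy as np

def pbl_depth_bulk_ri(z, theta_v, u, v, ri_crit=0.25, g=9.81):
    """PBL depth as the lowest level where the bulk Richardson number,
    computed from the surface upward, first exceeds ri_crit, with
    linear interpolation between the two bracketing levels."""
    z, theta_v, u, v = (np.asarray(a, dtype=float) for a in (z, theta_v, u, v))
    shear2 = np.maximum(u ** 2 + v ** 2, 1e-6)     # guard against zero wind
    ri = g * (theta_v - theta_v[0]) * (z - z[0]) / (theta_v[0] * shear2)
    above = np.flatnonzero(ri > ri_crit)
    if above.size == 0:
        return z[-1]                               # never exceeded in column
    k = above[0]
    if k == 0:
        return z[0]
    frac = (ri_crit - ri[k - 1]) / (ri[k] - ri[k - 1])
    return z[k - 1] + frac * (z[k] - z[k - 1])
```

Definitions based on the scalar diffusivity or TKE would instead scan different model fields for a threshold, which is exactly why the resulting depths can differ by hundreds of meters for the same atmosphere.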

  11. The Influence of Training Strategy and Physical Condition toward Forehand Drive Ability in Table Tennis

    NASA Astrophysics Data System (ADS)

    Langitan, F. W.

    2018-02-01

    The objective of this research is to find out the influence of training strategy and physical condition on forehand drive ability in table tennis among students of the Faculty of Sport at Manado State University, Department of Health and Recreation Education. The method used in this research was a 2x2 factorial design. The sample was taken from the students of the Faculty of Sport at Manado State University, Indonesia, in 2017, with 76 students in the research sample. The results of this research show that: In general, the wall bounce training strategy has a better influence on forehand drive ability in table tennis than the pair training strategy. For students who have strong forehand muscles, the wall bounce training strategy has a better influence on their forehand drive ability in table tennis. For students who have weak forehand muscles, the pair training strategy has a better influence on forehand drive ability than wall bounce training. There is an interaction between training strategy and hand muscle strength in their effect on the forehand drive training result in table tennis.

  12. Tuning the photophysical properties of BODIPY dyes through extended aromatic pyrroles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swavey, Shawn; Quinn, John; Coladipietro, Michael

    Three new BODIPY dyes have been synthesized by a two-step synthetic route, expanding the series to nine different BODIPY dyes made by this method. Naphtha[1,2-c]pyrrole was combined with 1-pyrenecarboxaldehyde to give a symmetric dipyrrin, followed by reaction with boron trifluoride to give a symmetric, highly conjugated BODIPY dye. Expanding this synthetic route to a more conjugated pyrrole, fluorantho[2,3-c]pyrrole was combined with 1-pyrenecarboxaldehyde, followed by reaction with boron trifluoride, to give the asymmetric BODIPY dye (9). Dyes with the more highly conjugated fluoranthopyrrole exhibited a bathochromic shift of ca. 50 nm in the electronic absorption and showed greater stability of the LUMO energy, as determined by electrochemical measurements, compared to their naphthapyrrole analogs. All of the dyes synthesized by this method display molar absorptivities greater than 100,000 M⁻¹ cm⁻¹ with photoluminescence quantum efficiencies of 0.8–1.0. Excited state lifetimes of the dyes in dichloromethane are modest, ranging from 3.2 ns to 4.3 ns.

  13. Tuning the photophysical properties of BODIPY dyes through extended aromatic pyrroles

    DOE PAGES

    Swavey, Shawn; Quinn, John; Coladipietro, Michael; ...

    2016-12-22

    Three new BODIPY dyes have been synthesized by a two-step synthetic route, expanding the series to nine different BODIPY dyes made by this method. Naphtha[1,2-c]pyrrole was combined with 1-pyrenecarboxaldehyde to give a symmetric dipyrrin, followed by reaction with boron trifluoride to give a symmetric, highly conjugated BODIPY dye. Expanding this synthetic route to a more conjugated pyrrole, fluorantho[2,3-c]pyrrole was combined with 1-pyrenecarboxaldehyde, followed by reaction with boron trifluoride, to give the asymmetric BODIPY dye (9). Dyes with the more highly conjugated fluoranthopyrrole exhibited a bathochromic shift of ca. 50 nm in the electronic absorption and showed greater stability of the LUMO energy, as determined by electrochemical measurements, compared to their naphthapyrrole analogs. All of the dyes synthesized by this method display molar absorptivities greater than 100,000 M⁻¹ cm⁻¹ with photoluminescence quantum efficiencies of 0.8–1.0. Excited state lifetimes of the dyes in dichloromethane are modest, ranging from 3.2 ns to 4.3 ns.

  14. Improved Decision Making for School Organization. What and What for

    ERIC Educational Resources Information Center

    Myers, Donald A.; Sinclair, Robert

    1973-01-01

    A framework of 13 decision criteria to help educators compare the relative merits of different forms of school organization. The methods of school organization judged to be in widespread use and defined in the article are the self-contained classroom, team teaching, departmentalization, modular scheduling, differentiated staffing,…

  15. Efficient Synthesis of 4,8-Ditoluoyl-1,5-Dihydroxynaphthalene

    NASA Technical Reports Server (NTRS)

    Tyson, Daniel S.; Meador, Michael A.

    2003-01-01

    4,8-Ditoluoyl-1,5-dihydroxynaphthalene was synthesized in quantitative yield from the corresponding methylenequinone via base-catalyzed hydration. Alkaline treatment gives the title compound in one step with a 99% yield, an improvement of 80% compared to the acidic, two-step literature method for preparing 4,8-dibenzoyl-1,5-dihydroxynaphthalene.

  16. Comparing Societies from the 1500s in the Sixth Grade

    ERIC Educational Resources Information Center

    Matson, Trista; Henning, Mary Beth

    2008-01-01

    Inquiry is the process by which teachers give students an open-ended question, and then students investigate the evidence and draw conclusions based upon their findings. This method promotes critical thinking, as students cite evidence to support their opinions. Inquiry is most effective when it builds upon students' prior knowledge. To promote…

  17. Hot-compress: A new postdeposition treatment for ZnO-based flexible dye-sensitized solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haque Choudhury, Mohammad Shamimul, E-mail: shamimul129@gmail.com; Department of Electrical and Electronic Engineering, International Islamic University Chittagong, b154/a, College Road, Chittagong 4203; Kishi, Naoki

    2016-08-15

    Highlights: • A new postdeposition treatment named hot-compress is introduced. • Hot-compression gives a homogeneous compact-layer ZnO photoanode. • I-V and EIS analysis data confirm the efficacy of this method. • Charge transport resistance was reduced by the application of hot-compression. - Abstract: This article introduces a new postdeposition treatment named hot-compress for flexible zinc oxide-based dye-sensitized solar cells. This postdeposition treatment consists of applying compression pressure at an elevated temperature. The optimum compression pressure of 130 MPa at an optimum compression temperature of 70 °C gives better photovoltaic performance compared to conventional cells. The aptness of this method was confirmed by scanning electron microscopy imaging, X-ray diffraction, current-voltage, and electrochemical impedance spectroscopy analysis of the prepared cells. Proper heating during compression lowers the charge transport resistance and lengthens the electron lifetime of the device. As a result, the overall power conversion efficiency of the device was improved by about 45% compared to the conventional room-temperature compressed cell.

  18. Strong Measurements Give a Better Direct Measurement of the Quantum Wave Function.

    PubMed

    Vallone, Giuseppe; Dequal, Daniele

    2016-01-29

    Weak measurements have thus far been considered instrumental in the so-called direct measurement of the quantum wave function [J. S. Lundeen et al., Nature (London) 474, 188 (2011)]. Here we show that a direct measurement of the wave function can be obtained by using measurements of arbitrary strength. In particular, in the case of strong measurements, i.e., those in which the coupling between the system and the measuring apparatus is maximal, we compare the precision and the accuracy of the two methods, showing that strong measurements outperform weak measurements in both, for arbitrary quantum states, in most cases. We also give the exact expression for the difference between the original and the reconstructed wave function obtained by the weak measurement approach; this will allow one to define the range of applicability of such a method.

  19. Evaluation of an alternative extraction procedure for enterotoxin determination in dairy products.

    PubMed

    Meyrand, A; Atrache, V; Bavai, C; Montet, M P; Vernozy-Rozand, C

    1999-06-01

    A concentration protocol based on trichloroacetic acid (TCA) precipitation was evaluated and compared with the reference method using dialysis concentration. Different quantities of purified staphylococcal enterotoxins were added to pasteurized Camembert-type cheeses. Detection of enterotoxins in these cheeses was performed using an automated detection system. Raw goat milk Camembert-type cheeses involved in a staphylococcal food poisoning were also tested. Both enterotoxin extraction methods allowed detection of the lowest enterotoxin concentration level used in this study (0.5 ng g⁻¹). Compared with the dialysis concentration method, TCA precipitation of staphylococcal enterotoxins was 'user-friendly' and less time-consuming. These results suggest that TCA precipitation is a rapid (1 h), simple and reliable method of extracting enterotoxin from food which gives excellent recovery from dairy products.

  20. Multicritical points for spin-glass models on hierarchical lattices.

    PubMed

    Ohzeki, Masayuki; Nishimori, Hidetoshi; Berker, A Nihat

    2008-06-01

    The locations of multicritical points on many hierarchical lattices are numerically investigated by renormalization group analysis. The results are compared with an analytical conjecture derived by using duality, gauge symmetry, and the replica method. We find that the conjecture does not give the exact answer but leads to locations slightly away from the numerically reliable data. We propose an improved conjecture that gives more precise predictions of the multicritical points than the conventional one. This improvement is inspired by a different point of view coming from the renormalization group and succeeds in deriving answers highly consistent with many numerical data.

  1. Analytical and Experimental Vibration Analysis of a Faulty Gear System.

    DTIC Science & Technology

    1994-10-01

    The Wigner-Ville Distribution (WVD) was used to give a comprehensive comparison of the predicted and...experimental results. The WVD method applied to the experimental results was also compared to other fault detection techniques to verify the WVD's ability to...of the damaged test gear and the predicted vibration from the model with simulated gear tooth pitting damage. Results also verified that the WVD method can successfully detect and locate gear tooth wear and pitting damage.

  2. Ciona Genetics

    PubMed Central

    Veeman, Michael T.; Chiba, Shota; Smith, William C.

    2010-01-01

    Ascidians, such as Ciona, are invertebrate chordates with simple embryonic body plans and small, relatively non-redundant genomes. Ciona genetics is in its infancy compared to many other model systems, but it provides a powerful method for studying this important vertebrate outgroup. Here we give basic methods for genetic analysis of Ciona, including protocols for controlled crosses both by natural spawning and by the surgical isolation of gametes; the identification and propagation of mutant lines; and strategies for positional cloning. PMID:21805273

  3. Influence of Prosolv and Prosolv:Mannitol 200 direct compression fillers on the physicomechanical properties of atorvastatin oral dispersible tablets.

    PubMed

    Gowda, Veeran; Pabari, Ritesh M; Kelly, John G; Ramtoola, Zebunnissa

    2015-06-01

    The objective of the present study was to evaluate the influence of Prosolv® and Prosolv®:Mannitol 200 direct compression (DC) fillers on the physicomechanical characteristics of oral dispersible tablets (ODTs) of crystalline atorvastatin calcium. ODTs were formulated by DC and were analyzed for weight uniformity, hardness, friability, drug content, disintegration and dissolution. Three disintegration time (DT) test methods were compared as part of this study: the European Pharmacopoeia (EP) method for conventional tablets (Method 1), a modification of this method (Method 2), and the EP method for oral lyophilisates (Method 3). All ODTs showed low weight variation of <2.5%. Prosolv®-only ODTs showed the highest tablet hardness, of ∼73 N; hardness decreased with increasing mannitol content. Friability of all formulations was <1%, although friability of Prosolv®:Mannitol ODTs was higher than for pure Prosolv®. DT of all ODTs was <30 s. Method 2 showed the fastest DT. Method 3 was non-discriminatory, giving a DT of 13-15 s for all formulations. Atorvastatin dissolution from all ODTs was >60% within 5 min despite the drug being crystalline. Prosolv® and Prosolv®:Mannitol-based fillers are suitable for ODT formulation by DC, giving ODTs with high mechanical strength and rapid disintegration and dissolution.

  4. A short note on the paper of Liu et al. (2012). A relative Lempel-Ziv complexity: Application to comparing biological sequences. Chemical Physics Letters, volume 530, 19 March 2012, pages 107-112

    NASA Astrophysics Data System (ADS)

    Arit, Turkan; Keskin, Burak; Firuzan, Esin; Cavas, Cagin Kandemir; Liu, Liwei; Cavas, Levent

    2018-04-01

    The paper entitled "L. Liu, D. Li, F. Bai, A relative Lempel-Ziv complexity: Application to comparing biological sequences, Chem. Phys. Lett. 530 (2012) 107-112" describes the construction of phylogenetic trees based on the Lempel-Ziv algorithm. However, the method explained in that paper does not give promising results on a data set for the invasive Caulerpa taxifolia in the Mediterranean Sea. In this short note, phylogenetic trees are obtained by the proposed method of the aforementioned paper.

  5. Comparative study of landslides susceptibility mapping methods: Multi-Criteria Decision Making (MCDM) and Artificial Neural Network (ANN)

    NASA Astrophysics Data System (ADS)

    Salleh, S. A.; Rahman, A. S. A. Abd; Othman, A. N.; Mohd, W. M. N. Wan

    2018-02-01

    As different approaches produce different results, it is crucial to determine which methods are accurate in order to analyse the event. This research aims to compare the Rank Reciprocal (MCDM) and Artificial Neural Network (ANN) analysis techniques in determining zones susceptible to landslide hazard. The study is based on data obtained from various sources, such as the local authority, Dewan Bandaraya Kuala Lumpur (DBKL), Jabatan Kerja Raya (JKR) and other agencies. The data were analysed and processed using ArcGIS. The results were compared by quantifying the risk ranking and area differential, and were also compared with the zonation map classified by DBKL. The results suggested that the ANN method gives better accuracy than MCDM, with an accuracy assessment 18.18% higher than that of the MCDM approach. This indicates that ANN provides more reliable results, probably owing to its ability to learn from the environment, thus portraying realistic and accurate results.
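    The Rank Reciprocal scheme on the MCDM side of this comparison weights each causal factor by the reciprocal of its importance rank. The sketch below is a hedged illustration only: the factor names and ranks are invented, not taken from the study.

```python
# Rank Reciprocal weighting, a simple MCDM weighting scheme:
# w_i = (1/rank_i) / sum_j (1/rank_j).
# Factor names and ranks below are illustrative assumptions only.

def rank_reciprocal_weights(ranks):
    """Weight each criterion by 1/rank, normalised to sum to 1."""
    inv = [1.0 / r for r in ranks]
    total = sum(inv)
    return [v / total for v in inv]

# e.g. four landslide factors ranked 1 (most important) to 4
factors = ["slope", "lithology", "land_use", "rainfall"]
weights = rank_reciprocal_weights([1, 2, 3, 4])
print(dict(zip(factors, [round(w, 2) for w in weights])))
```

    The highest-ranked factor receives weight 1/(1 + 1/2 + 1/3 + 1/4) ≈ 0.48, and weights fall off with rank.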

  6. Efficiently estimating salmon escapement uncertainty using systematically sampled data

    USGS Publications Warehouse

    Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.

    2007-01-01

    Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, poor choice of variance estimator can needlessly increase the uncertainty managers have to deal with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared to the other estimators, the least biased estimator reduced bias by 12% to 98% on average. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.
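    One member of the family of variance estimators for nonreplicated systematic samples is the successive-difference estimator. The sketch below is illustrative only: it is not claimed to be the least-biased estimator identified by the authors, and the counts are invented.

```python
# Successive-difference variance estimator for a nonreplicated
# systematic sample of n of N units (counts below are invented).

def systematic_total_and_variance(sample, N):
    """Estimate the population total and its variance, approximating
    the unit variance from squared successive differences."""
    n = len(sample)
    total_hat = N / n * sum(sample)
    sd2 = sum((sample[i + 1] - sample[i]) ** 2
              for i in range(n - 1)) / (2 * (n - 1))
    var_hat = N ** 2 * (1 - n / N) * sd2 / n
    return total_hat, var_hat

# hourly passage counts sampled every 6th hour over a 24-hour day
counts = [120, 180, 260, 210]
total, var = systematic_total_and_variance(counts, N=24)
print(total, var)
```

    Because successive differences track local trend, this estimator is less inflated by diurnal patterns than the simple-random-sampling variance formula, which is the kind of effect the study quantifies.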

  7. Watching fat digestion: a microscopic method assessing intraluminal lipolysis.

    PubMed

    Alliet, P; Eggermont, E

    1990-01-01

    We investigated the utility of a microscopic method for assessing the lipolytic activity of duodenal fluid. The method is based on microscopically evaluating physicochemical changes over time when olive oil is mixed with duodenal fluid in the presence of excess bile salts (13 mM) and calcium ions (8 mM) at pH 6.5. Data were analyzed on duodenal aspirates from 155 children referred for failure to thrive or gastrointestinal disorders. The "fat digestion index" (FDI) is the percentage of intact olive oil droplets that undergo complete hydrolysis or are transformed into amorphous reticular bodies (ARB) at steady state. In all patients with a proven exocrine pancreatic disorder, an FDI of less than 25% was found; this value was thus taken as a cut-off. When no microscopic lipolysis (FDI = 0) was observed, exocrine pancreatic enzyme assays were suggestive of total exocrine pancreatic insufficiency. In the group of children with FDI ranging from 5% to 25%, however, no statistical difference in exocrine pancreatic enzymes could be found compared to control values. Our test thus evaluates fat digestion in a dynamic way. It further seems to give additional information on intraluminal lipolysis compared to exocrine pancreatic enzyme concentrations, since it gives an idea of the integrated action of (co)lipase and bile salts.

  8. On the equivalence of spherical splines with least-squares collocation and Stokes's formula for regional geoid computation

    NASA Astrophysics Data System (ADS)

    Ophaug, Vegard; Gerlach, Christian

    2017-11-01

    This work is an investigation of three methods for regional geoid computation: Stokes's formula, least-squares collocation (LSC), and spherical radial base functions (RBFs) using the spline kernel (SK). It is a first attempt to compare the three methods theoretically and numerically in a unified framework. While Stokes integration and LSC may be regarded as classic methods for regional geoid computation, RBFs may still be regarded as a modern approach. All methods are theoretically equal when applied globally, and we therefore expect them to give comparable results in regional applications. However, it has been shown by de Min (Bull Géod 69:223-232, 1995. doi: 10.1007/BF00806734) that the equivalence of Stokes's formula and LSC does not hold in regional applications without modifying the cross-covariance function. In order to make all methods comparable in regional applications, the corresponding modification has been introduced also in the SK. Ultimately, we present numerical examples comparing Stokes's formula, LSC, and SKs in a closed-loop environment using synthetic noise-free data, to verify their equivalence. All agree on the millimeter level.

  9. Comparisons of Upper Tropospheric Humidity Retrievals from TOVS and METEOSAT

    NASA Technical Reports Server (NTRS)

    Escoffier, C.; Bates, J.; Chedin, A.; Rossow, W. B.; Schmetz, J.

    1999-01-01

    Two different methods for retrieving Upper Tropospheric Humidities (UTH) from the TOVS (TIROS Operational Vertical Sounder) instruments aboard NOAA polar orbiting satellites are presented and compared. The first one, from the Environmental Technology Laboratory, computed by J. Bates and D. Jackson (hereafter BJ method), estimates UTH from a simplified radiative transfer analysis of the upper tropospheric infrared water vapor channel measured by HIRS (6.3 micrometers). The second one results from a neural network analysis of the TOVS (HIRS and MSU) data developed at the Laboratoire de Meteorologie Dynamique (hereafter the 3I (Improved Initialization Inversion) method). Although the two methods give very similar retrievals in temperate regions (30-60 N and S), an absolute bias of up to 16% appears in the convective zone of the tropics. The two datasets have also been compared with UTH retrievals from infrared radiance measurements in the 6.3 micrometer channel from the geostationary satellite METEOSAT (hereafter MET method). The METEOSAT retrievals are systematically drier than the TOVS-based results by an absolute bias between 5 and 25%. Despite the biases, the spatial and temporal correlations are very good. The purpose of this study is to explain the deviations observed between the three datasets. The sensitivity of UTH to air temperature and humidity profiles is analysed, as are cloud effects. Overall, the comparison of the three retrievals gives an assessment of the current uncertainties in water vapor amounts in the upper troposphere as determined from NOAA and METEOSAT satellites.

  10. Pilot Study: Impact of Computer Simulation on Students' Economic Policy Performance. Pilot Study.

    ERIC Educational Resources Information Center

    Domazlicky, Bruce; France, Judith

    Fiscal and monetary policies taught in macroeconomic principles courses are concepts that might require both lecture and simulation methods. The simulation models, which apply the principles gleaned from comparative statics to a dynamic world, may give students an appreciation for the problems facing policy makers. This paper is a report of a…

  11. Expected Utility Illustrated: A Graphical Analysis of Gambles with More than Two Possible Outcomes

    ERIC Educational Resources Information Center

    Chen, Frederick H.

    2010-01-01

    The author presents a simple geometric method to graphically illustrate the expected utility from a gamble with more than two possible outcomes. This geometric result gives economics students a simple visual aid for studying expected utility theory and enables them to analyze a richer set of decision problems under uncertainty compared to what…
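    The quantity the graphical method illustrates can be stated numerically in a few lines: the expected utility of a gamble with more than two outcomes is the probability-weighted sum of the outcome utilities. The payoffs, probabilities, and log utility below are assumptions for illustration, not examples from the article.

```python
# Expected utility of a three-outcome gamble under log utility.
# Payoffs and probabilities are illustrative assumptions.
import math

def expected_utility(outcomes, probs, u):
    """Probability-weighted sum of utilities over all outcomes."""
    return sum(p * u(x) for x, p in zip(outcomes, probs))

payoffs = [50, 100, 400]
probs = [0.25, 0.5, 0.25]
eu = expected_utility(payoffs, probs, math.log)
ce = math.exp(eu)                 # certainty equivalent under log utility
ev = sum(p * x for x, p in zip(payoffs, probs))
print(round(ce, 2), ev)          # risk aversion: ce falls below the mean ev
```

    With these numbers the certainty equivalent is about 118.9 while the expected value is 162.5, the gap being the risk premium a concave utility induces.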

  12. The Influences of Delay and Severity of Intellectual Disability on Event Memory in Children

    ERIC Educational Resources Information Center

    Brown, Deirdre A.; Lewis, Charlie N.; Lamb, Michael E.; Stephens, Emma

    2012-01-01

    Objective: To examine the ability of children with intellectual disabilities to give reliable accounts of personally experienced events, considering the effects of delay, severity of disability, and the types of interview prompt used. Method: In a between-subjects design, we compared children with intellectual disabilities (7-12 years) that fell…

  13. Underground Mining Method Selection Using WPM and PROMETHEE

    NASA Astrophysics Data System (ADS)

    Balusa, Bhanu Chander; Singam, Jayanthu

    2018-04-01

    The aim of this paper is to present a solution to the problem of selecting a suitable underground mining method for the mining industry. This is achieved using two multi-attribute decision-making techniques: the weighted product method (WPM) and the preference ranking organization method for enrichment evaluation (PROMETHEE). The analytic hierarchy process is used to calculate the weights of the attributes (i.e., the parameters used in this paper). Mining method selection depends on physical, mechanical, economical and technical parameters. The WPM and PROMETHEE techniques have the ability to consider the relationships between the parameters and the mining methods. The proposed techniques give higher accuracy and faster computation when compared with other decision-making techniques. They are applied to determine the effective mining method for a bauxite mine, and their results are compared with methods used in earlier research works. The results show that the conventional cut-and-fill method is the most suitable mining method.
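    The WPM scoring step named in the abstract can be sketched briefly. This is a hedged illustration: the alternative names, criterion values, and weights below are invented (in the paper the weights come from the analytic hierarchy process), and all criteria are treated as benefit criteria.

```python
# Weighted product method (WPM): score each alternative as the product
# of its normalised criterion values raised to the criterion weights.
# All names and numbers below are illustrative assumptions.

def wpm_scores(matrix, weights):
    """Rows of `matrix` are alternatives, columns are benefit criteria;
    each column is normalised by its maximum before exponentiation."""
    cols = list(zip(*matrix))
    maxes = [max(c) for c in cols]
    scores = []
    for row in matrix:
        s = 1.0
        for x, m, w in zip(row, maxes, weights):
            s *= (x / m) ** w
        scores.append(s)
    return scores

alternatives = ["cut_and_fill", "sublevel_stoping", "block_caving"]
matrix = [[0.9, 0.7, 0.8],
          [0.6, 0.9, 0.7],
          [0.8, 0.5, 0.9]]
weights = [0.5, 0.3, 0.2]        # must sum to 1, e.g. from AHP
scores = wpm_scores(matrix, weights)
print(alternatives[scores.index(max(scores))],
      [round(s, 3) for s in scores])
```

    The multiplicative form makes WPM dimensionless and insensitive to the units of individual criteria, which is one reason it pairs naturally with AHP-derived weights.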

  14. Numerical solution of the unsteady diffusion-convection-reaction equation based on improved spectral Galerkin method

    NASA Astrophysics Data System (ADS)

    Zhong, Jiaqi; Zeng, Cheng; Yuan, Yupeng; Zhang, Yuzhe; Zhang, Ye

    2018-04-01

    The aim of this paper is to present an explicit numerical algorithm based on an improved spectral Galerkin method for solving the unsteady diffusion-convection-reaction equation. The principal characteristic of this approach is that it gives explicit eigenvalues and eigenvectors, based on the time-space separation method and boundary-condition analysis. With the help of Fourier series and Galerkin truncation, we can obtain the finite-dimensional ordinary differential equations that facilitate system analysis and controller design. The numerical solutions are demonstrated via two examples and compared with the finite element method. It is shown that the proposed method is effective.

  15. Researcher’s Perspective of Substitution Method on Text Steganography

    NASA Astrophysics Data System (ADS)

    Zamir Mansor, Fawwaz; Mustapha, Aida; Azah Samsudin, Noor

    2017-08-01

    Linguistic steganography is still at an early stage of development. This paper presents several substitution-based text steganography methods from the researchers’ perspective; the relevant scholarly papers are analysed and compared. The objective of this paper is to give basic information on the substitution method in text-domain steganography as applied by previous researchers. The typical variants of this method are also identified, in order to reveal the most effective method in text-domain steganography. Finally, the general advantages and drawbacks of these techniques are also presented.

  16. Multistage Spectral Relaxation Method for Solving the Hyperchaotic Complex Systems

    PubMed Central

    Saberi Nik, Hassan; Rebelo, Paulo

    2014-01-01

    We present a pseudospectral method application for solving hyperchaotic complex systems. The proposed method, called the multistage spectral relaxation method (MSRM), is based on a technique of extending Gauss-Seidel-type relaxation ideas to systems of nonlinear differential equations and using Chebyshev pseudospectral methods to solve the resulting system on a sequence of multiple intervals. In this new application, the MSRM is used to solve famous hyperchaotic complex systems such as the hyperchaotic complex Lorenz system and the complex permanent magnet synchronous motor. We compare this approach to the Runge-Kutta-based ode45 solver to show that the MSRM gives accurate results. PMID:25386624

  17. An evaluation of a bioelectrical impedance analyser for the estimation of body fat content.

    PubMed Central

    Maughan, R J

    1993-01-01

    Measurement of body composition is an important part of any assessment of health or fitness. Hydrostatic weighing is generally accepted as the most reliable method for the measurement of body fat content, but is inconvenient. Electrical impedance analysers have recently been proposed as an alternative to the measurement of skinfold thickness. Both these latter methods are convenient, but give values based on estimates obtained from population studies. This study compared values of body fat content obtained by hydrostatic weighing, skinfold thickness measurement and electrical impedance on 50 (28 women, 22 men) healthy volunteers. Mean(s.e.m.) values obtained by the three methods were: hydrostatic weighing, 20.5(1.2)%; skinfold thickness, 21.8(1.0)%; impedance, 20.8(0.9)%. The results indicate that the correlation between the skinfold method and hydrostatic weighing (0.931) is somewhat higher than that between the impedance method and hydrostatic weighing (0.830). This is, perhaps, not surprising given the fact that the impedance method is based on an estimate of total body water which is then used to calculate body fat content. The skinfold method gives an estimate of body density, and the assumptions involved in the conversion from body density to body fat content are the same for both methods. PMID:8457817
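    The abstract notes that the skinfold method first estimates body density and then converts it to body fat content. A widely used conversion is the Siri equation; using it here is an assumption on our part, since the abstract does not name the equation applied.

```python
# Siri equation: percent body fat from whole-body density (g/ml).
# Using this particular density-to-fat conversion is an assumption;
# the study does not state which equation was applied.

def siri_percent_fat(density_g_per_ml):
    """Convert whole-body density to percent body fat (Siri, 1961)."""
    return 495.0 / density_g_per_ml - 450.0

print(round(siri_percent_fat(1.052), 1))  # -> 20.5 for a typical density
```

    A density of about 1.05 g/ml maps to roughly 20% fat, in the range of the group means reported above; higher density (more lean mass) gives a lower percentage.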

  18. 34 CFR 403.111 - How must funds be used under the Secondary School Vocational Education Program and the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... individuals who are members of special populations. Examples: Methods by which an eligible recipient may give... special populations include, but are not limited to, the following: Example 1: Method to give priority to...: Method to give priority to a limited number of program areas. Based on data from the preceding fiscal...

  19. 34 CFR 403.111 - How must funds be used under the Secondary School Vocational Education Program and the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... individuals who are members of special populations. Examples: Methods by which an eligible recipient may give... special populations include, but are not limited to, the following: Example 1: Method to give priority to...: Method to give priority to a limited number of program areas. Based on data from the preceding fiscal...

  20. 34 CFR 403.111 - How must funds be used under the Secondary School Vocational Education Program and the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... individuals who are members of special populations. Examples: Methods by which an eligible recipient may give... special populations include, but are not limited to, the following: Example 1: Method to give priority to...: Method to give priority to a limited number of program areas. Based on data from the preceding fiscal...

  1. Methods for Multiloop Identification of Visual and Neuromuscular Pilot Responses.

    PubMed

    Olivari, Mario; Nieuwenhuizen, Frank M; Venrooij, Joost; Bülthoff, Heinrich H; Pollini, Lorenzo

    2015-12-01

    In this paper, identification methods are proposed to estimate the neuromuscular and visual responses of a multiloop pilot model. A conventional and widely used technique for simultaneous identification of the neuromuscular and visual systems makes use of cross-spectral density estimates. This paper shows that this technique requires a specific noninterference hypothesis, often implicitly assumed, that may be difficult to meet in actual experimental designs. A mathematical justification of the necessity of the noninterference hypothesis is given. Furthermore, two methods are proposed that do not share this limitation. The first method is based on autoregressive models with exogenous inputs, whereas the second combines cross-spectral estimators with interpolation in the frequency domain. The two identification methods are validated by offline simulations and contrasted with the classic method. The results reveal that the classic method fails when the noninterference hypothesis is not fulfilled; in contrast, the two proposed techniques give reliable estimates. Finally, the three identification methods are applied to experimental data from a closed-loop control task with pilots. The two proposed techniques give comparable estimates, different from those obtained by the classic method. The differences match those found in the simulations. Thus, the two identification methods provide a good alternative to the classic method and make it possible to simultaneously estimate a human's neuromuscular and visual responses in cases where the classic method fails.

  2. Comparing photonic band structure calculation methods for diamond and pyrochlore crystals.

    PubMed

    Vermolen, E C M; Thijssen, J H J; Moroz, A; Megens, M; van Blaaderen, A

    2009-04-27

    The photonic band diagrams of close-packed colloidal diamond and pyrochlore structures have been studied using Korringa-Kohn-Rostoker (KKR) and plane-wave calculations. In addition, the occurrence of a band gap has been investigated for the binary Laves structures and their constituent large- and small-sphere substructures. It was recently shown that these Laves structures make it possible to fabricate the diamond and pyrochlore structures by self-organization. The comparison of the two calculation methods makes it possible to study the validity and convergence of the results, which have been an issue for diamond-related structures in the past. The KKR calculations systematically give a lower value for the gap width than the plane-wave calculations. This difference can partly be ascribed to a convergence issue in the plane-wave code when a contact point of two spheres coincides with the grid.

  3. Effects of tunnelling and asymmetry for system-bath models of electron transfer

    NASA Astrophysics Data System (ADS)

    Mattiat, Johann; Richardson, Jeremy O.

    2018-03-01

    We apply the newly derived nonadiabatic golden-rule instanton theory to asymmetric models describing electron-transfer in solution. The models go beyond the usual spin-boson description and have anharmonic free-energy surfaces with different values for the reactant and product reorganization energies. The instanton method gives an excellent description of the behaviour of the rate constant with respect to asymmetry for the whole range studied. We derive a general formula for an asymmetric version of the Marcus theory based on the classical limit of the instanton and find that this gives significant corrections to the standard Marcus theory. A scheme is given to compute this rate based only on equilibrium simulations. We also compare the rate constants obtained by the instanton method with its classical limit to study the effect of tunnelling and other quantum nuclear effects. These quantum effects can increase the rate constant by orders of magnitude.

  4. Properties of a Formal Method for Prediction of Emergent Behaviors in Swarm-based Systems

    NASA Technical Reports Server (NTRS)

    Rouff, Christopher; Vanderbilt, Amy; Hinchey, Mike; Truszkowski, Walt; Rash, James

    2004-01-01

    Autonomous intelligent swarms of satellites are being proposed for NASA missions that have complex behaviors and interactions. The emergent properties of swarms make these missions powerful, but at the same time make them more difficult to design and to assure that proper behaviors will emerge. This paper gives the results of research into formal methods techniques for verification and validation of NASA swarm-based missions. Multiple formal methods were evaluated to determine their effectiveness in modeling and assuring the behavior of swarms of spacecraft. The NASA ANTS mission was used as an example of swarm intelligence to which to apply the formal methods. We present the evaluation of these formal methods, give partial specifications of the ANTS mission using four selected methods, and then identify the properties a formal method needs for effective specification and prediction of emergent behavior in swarm-based systems.

  5. Information for patients with cancer. Does personalization make a difference? Pilot study results and randomised trial in progress.

    PubMed Central

    Jones, R.; Pearson, J.; Cawsey, A.; Barrett, A.

    1996-01-01

    Although there are a number of groups working on the provision of personalized patient information there has been little evaluation. We have developed and piloted a method of giving patients on-line access to their own medical records with associated explanations. We are comparing, in a randomised trial, personalized with general computer based information for patients undergoing radiotherapy for cancer. We present results from the pilot study and the evaluation methods to be employed. PMID:8947701

  6. Development of the triplet singularity for the analysis of wings and bodies in supersonic flow

    NASA Technical Reports Server (NTRS)

    Woodward, F. A.

    1981-01-01

    A supersonic triplet singularity was developed which eliminates internal waves generated by panels having supersonic edges. The triplet is a linear combination of source and vortex distributions which gives directional properties to the perturbation flow field surrounding the panel. The theoretical development of the triplet singularity is described together with its application to the calculation of surface pressures on wings and bodies. Examples are presented comparing the results of the new method with other supersonic methods and with experimental data.

  7. Propeller flow visualization techniques

    NASA Technical Reports Server (NTRS)

    Stefko, G. L.; Paulovich, F. J.; Greissing, J. P.; Walker, E. D.

    1982-01-01

    Propeller flow visualization techniques were tested. The actual operating blade shape, which determines the actual propeller performance and noise, was established. The ability to photographically determine advanced propeller blade tip deflections and local flow field conditions, and to gain insight into aeroelastic instability, is demonstrated. The analytical prediction methods being developed can be compared with experimental data. These comparisons contribute to the verification of these improved methods and give improved capability for designing future advanced propellers with enhanced performance and noise characteristics.

  8. Reactions of Chinese adults to warning labels on cigarette packages: A survey in Jiangsu Province

    PubMed Central

    2011-01-01

    Background To compare reactions to warning labels presented on cigarette packages, with a specific focus on whether the new Chinese warning labels are better than the old labels and international labels. Methods Participants aged 18 and over were recruited in two cities of Jiangsu Province in 2008, and 876 face-to-face interviews were completed. Participants were shown six types of warning labels found on cigarette packages: one old Chinese label, one new label used within the Chinese market, one Chinese label used overseas, and three foreign brand labels. Participants were asked about the impact of the warning labels on their knowledge of the harm from smoking, on giving cigarettes as a gift, and on quitting smoking. Results Compared with the old Chinese label, a higher proportion of participants said the new label provided clear information on the harm caused by smoking (31.2% vs 18.3%). Participants were less likely to give cigarettes with the new label on the package compared with the old label (25.2% vs 20.8%). These proportions were higher when compared to the international labels. Overall, 26.8% of participants would quit smoking based on information from the old label and 31.5% from the new label. When comparing the Chinese overseas label and the other foreign labels to the new Chinese label with regard to providing knowledge of harm, impact on quitting smoking, and giving cigarettes as a gift, the overseas labels were more effective. Conclusion Neither the old nor the new Chinese warning label is effective in this target population. PMID:21349205

  9. Variational approach to studying solitary waves in the nonlinear Schrödinger equation with complex potentials

    DOE PAGES

    Mertens, Franz G.; Cooper, Fred; Arevalo, Edward; ...

    2016-09-15

    In this paper, we discuss the behavior of solitary wave solutions of the nonlinear Schrödinger equation (NLSE) as they interact with complex potentials, using a four-parameter variational approximation based on a dissipation functional formulation of the dynamics. We concentrate on spatially periodic potentials with the periods of the real and imaginary parts being either the same or different. Our results for the time evolution of the collective coordinates of our variational ansatz are in good agreement with direct numerical simulation of the NLSE. We compare our method with a collective coordinate approach of Kominis and give examples where the two methods give qualitatively different answers. In our variational approach, we are able to give analytic results for the small-oscillation frequency of the solitary wave's oscillating parameters, which agree with the numerical solution of the collective coordinate equations. We also verify that instabilities set in when the slope dp(t)/dv(t) becomes negative when plotted parametrically as a function of time, where p(t) is the momentum of the solitary wave and v(t) the velocity.

  11. Abdominal fat thickness measurement using Focused Impedance Method (FIM) - phantom study

    NASA Astrophysics Data System (ADS)

    Haowlader, Salahuddin; Baig, Tanveer Noor; Siddique-e Rabbani, K.

    2010-04-01

    Abdominal fat thickness is a risk indicator of heart disease, diabetes, etc., and its measurement is therefore important from the point of view of preventive care. Tetrapolar electrical impedance measurement (TPIM) could offer a simple and low-cost alternative for such measurement compared to conventional techniques using CT scan and MRI, and has been tried by different groups. The Focused Impedance Method (FIM) appears attractive as it can give localised information. An intuitive physical model was developed and experimental work was performed on a phantom designed to simulate the abdominal subcutaneous fat layer (SFL) in a body. TPIM measurements were performed with varying electrode separations. For small separations of current and potential electrodes, the measured impedance changed little, but started to decrease sharply beyond a certain separation, eventually diminishing gradually to negligible values. The finding could be explained using the intuitive physical model and gives important practical information: TPIM and FIM may be useful for measurement of SFL thickness only if the electrode separations are within a certain specific range, and will fail to give reliable results beyond this range. Further work, both analytical and experimental, is needed to establish this technique on a sound footing.

  12. Can the prevalence of high blood drug concentrations in a population be estimated by analysing oral fluid? A study of tetrahydrocannabinol and amphetamine.

    PubMed

    Gjerde, Hallvard; Verstraete, Alain

    2010-02-25

    To study several methods for estimating the prevalence of high blood concentrations of tetrahydrocannabinol and amphetamine in a population of drug users by analysing oral fluid (saliva), five methods were compared, including simple calculation procedures that divide the drug concentrations in oral fluid by average or median oral fluid/blood (OF/B) drug concentration ratios or by linear regression coefficients, and more complex Monte Carlo simulations. Populations of 311 cannabis users and 197 amphetamine users from the Rosita-2 Project were studied. The results of a feasibility study suggested that the Monte Carlo simulations might give better accuracy than the simple calculations if good data on OF/B ratios are available. Using only 20 randomly selected OF/B ratios, a Monte Carlo simulation gave the best accuracy but not the best precision. Dividing by the OF/B regression coefficient gave acceptable accuracy and precision, and was therefore the best method. None of the methods gave acceptable accuracy if the prevalence of high blood drug concentrations was less than 15%. Dividing the drug concentration in oral fluid by the OF/B regression coefficient gave an acceptable estimate of the prevalence of high blood drug concentrations in a population, and may therefore give valuable additional information on possible drug impairment, e.g. in roadside surveys of drugs and driving. If good data on the distribution of OF/B ratios are available, a Monte Carlo simulation may give better accuracy. 2009 Elsevier Ireland Ltd. All rights reserved.
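The two estimator families compared above (dividing by a reference OF/B ratio versus a Monte Carlo simulation over the ratio distribution) can be sketched on synthetic data. The lognormal distributions, sample sizes and threshold below are illustrative assumptions, not the Rosita-2 data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example (not Rosita-2 data): "true" blood concentrations and
# lognormal OF/B ratios are assumptions for illustration only.
n = 311
blood = rng.lognormal(mean=1.0, sigma=0.6, size=n)    # true blood concentrations
ratios = rng.lognormal(mean=0.5, sigma=0.4, size=n)   # per-subject OF/B ratios
oral_fluid = blood * ratios                            # what is actually measured
threshold = 3.0                                        # "high" blood cutoff (assumed)
true_prev = np.mean(blood > threshold)

# Simple method: divide by a summary statistic (here the median) of a small
# reference sample of OF/B ratios, then count exceedances.
ref_ratios = rng.lognormal(mean=0.5, sigma=0.4, size=20)  # 20 reference ratios
est_simple = np.mean(oral_fluid / np.median(ref_ratios) > threshold)

# Monte Carlo method: for each subject, repeatedly draw a random reference
# ratio, convert to a blood estimate, and average the exceedance indicator.
draws = rng.choice(ref_ratios, size=(5000, n))
est_mc = np.mean(oral_fluid / draws > threshold)
```

Both estimators return a prevalence in [0, 1]; the Monte Carlo version propagates the spread of the OF/B distribution instead of collapsing it to one number.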

  13. Magnetic field effect on the energy levels of an exciton in a GaAs quantum dot: Application for excitonic lasers.

    PubMed

    Jahan, K Luhluh; Boda, A; Shankar, I V; Raju, Ch Narasimha; Chatterjee, Ashok

    2018-03-22

    The problem of an exciton trapped in a Gaussian quantum dot (QD) of GaAs is studied in both two and three dimensions in the presence of an external magnetic field using the Ritz variational method, the 1/N expansion method and the shifted 1/N expansion method. The ground state energy and the binding energy of the exciton are obtained as a function of the quantum dot size, confinement strength and the magnetic field and compared with those available in the literature. While the variational method gives the upper bound to the ground state energy, the 1/N expansion method gives the lower bound. The results obtained from the shifted 1/N expansion method are shown to match very well with those obtained from the exact diagonalization technique. The variation of the exciton size and the oscillator strength of the exciton are also studied as a function of the size of the quantum dot. The excited states of the exciton are computed using the shifted 1/N expansion method and it is suggested that a given number of stable excitonic bound states can be realized in a quantum dot by tuning the quantum dot parameters. This can open up the possibility of having quantum dot lasers using excitonic states.

  14. Effect of joint spacing and joint dip on the stress distribution around tunnels using different numerical methods

    NASA Astrophysics Data System (ADS)

    Nikadat, Nooraddin; Fatehi Marji, Mohammad; Rahmannejad, Reza; Yarahmadi Bafghi, Alireza

    2016-11-01

    The geometry (spacing and orientation) of joints in the surrounding rock mass may affect the stability of tunnels under different conditions. In this study, the effects of joint spacing and joint dip on the stress distribution around rock tunnels are studied numerically by comparing the results of three numerical methods: the finite element method (Phase2), the discrete element method (UDEC) and the indirect boundary element method (TFSDDM). These comparisons indicate the validity of the stress analyses around circular rock tunnels, and also reveal that for a semi-continuous medium the boundary element method gives more accurate results than the finite element and discrete element methods. In the indirect boundary element method, the displacements due to joints of different spacings and dips are estimated using displacement discontinuity (DD) formulations, and the total stress distribution around the tunnel is obtained using fictitious stress (FS) formulations.

  15. Program VSAERO theory document: A computer program for calculating nonlinear aerodynamic characteristics of arbitrary configurations

    NASA Technical Reports Server (NTRS)

    Maskew, Brian

    1987-01-01

    The VSAERO low order panel method formulation is described for the calculation of subsonic aerodynamic characteristics of general configurations. The method is based on piecewise constant doublet and source singularities. Two forms of the internal Dirichlet boundary condition are discussed and the source distribution is determined by the external Neumann boundary condition. A number of basic test cases are examined. Calculations are compared with higher order solutions for a number of cases. It is demonstrated that for comparable density of control points where the boundary conditions are satisfied, the low order method gives comparable accuracy to the higher order solutions. It is also shown that problems associated with some earlier low order panel methods, e.g., leakage in internal flows and junctions and also poor trailing edge solutions, do not appear for the present method. Further, the application of the Kutta conditions is extremely simple; no extra equation or trailing edge velocity point is required. The method has very low computing costs and this has made it practical for application to nonlinear problems requiring iterative solutions for wake shape and surface boundary layer effects.

  16. 14 CFR 221.140 - Method of giving concurrence.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Aviation shall be used by a carrier to give authority to another carrier to issue and file with the... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Method of giving concurrence. 221.140 Section 221.140 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION...

  17. 14 CFR 221.140 - Method of giving concurrence.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Aviation shall be used by a carrier to give authority to another carrier to issue and file with the... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Method of giving concurrence. 221.140 Section 221.140 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION...

  18. 14 CFR 221.140 - Method of giving concurrence.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Aviation shall be used by a carrier to give authority to another carrier to issue and file with the... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Method of giving concurrence. 221.140 Section 221.140 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION...

  19. 14 CFR 221.140 - Method of giving concurrence.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Aviation shall be used by a carrier to give authority to another carrier to issue and file with the... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Method of giving concurrence. 221.140 Section 221.140 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION...

  20. A comparison of three radiation models for the calculation of nozzle arcs

    NASA Astrophysics Data System (ADS)

    Dixon, C. M.; Yan, J. D.; Fang, M. T. C.

    2004-12-01

    Three radiation models, the semi-empirical model based on net emission coefficients (Zhang et al 1987 J. Phys. D: Appl. Phys. 20 386-79), the five-band P1 model (Eby et al 1998 J. Phys. D: Appl. Phys. 31 1578-88), and the method of partial characteristics (Aubrecht and Lowke 1994 J. Phys. D: Appl. Phys. 27 2066-73, Sevast'yanenko 1979 J. Eng. Phys. 36 138-48), are used to calculate the radiation transfer in an SF6 nozzle arc. The temperature distributions computed by the three models are compared with the measurements of Leseberg and Pietsch (1981 Proc. 4th Int. Symp. on Switching Arc Phenomena (Lodz, Poland) pp 236-40) and Leseberg (1982 PhD Thesis RWTH Aachen, Germany). It has been found that all three models give similar distributions of radiation loss per unit time and volume. For arcs burning in axially dominated flow, such as arcs in nozzle flow, the semi-empirical model and the P1 model give accurate predictions when compared with experimental results. The prediction by the method of partial characteristics is poorest. The computational cost is the lowest for the semi-empirical model.

  1. An examination of the hexokinase method for serum glucose assay using external quality assessment data.

    PubMed

    Westwood, A; Bullock, D G; Whitehead, T P

    1986-01-01

    Hexokinase methods for serum glucose assay appeared to give slightly but consistently higher inter-laboratory coefficients of variation than all methods combined in the UK External Quality Assessment Scheme; their performance over a two-year period was therefore compared with that for three groups of glucose oxidase methods. This assessment showed no intrinsic inferiority in the hexokinase method. The greater variation may be due to the more heterogeneous group of instruments, particularly discrete analysers, on which the method is used. The Beckman Glucose Analyzer and Astra group (using a glucose oxidase method) showed the least inter-laboratory variability but also the lowest mean value. No comment is offered on the absolute accuracy of any of the methods.

  2. Development and application of a local linearization algorithm for the integration of quaternion rate equations in real-time flight simulation problems

    NASA Technical Reports Server (NTRS)

    Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.

    1973-01-01

    High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure of integrating dynamic system equations when using a digital computer in real-time. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives significant improvement in accuracy over the classical second-order integration methods.
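The local linearization idea can be sketched for the quaternion rate equations: over one step the body rates are frozen, so the update is the exact matrix exponential of the quaternion rate matrix, which has a closed form because the rate matrix squares to a negative multiple of the identity. This is a minimal sketch of the approach (not the NASA implementation; step counts and rates are illustrative), compared against a classical second-order step:

```python
import numpy as np

def omega_matrix(w):
    # 4x4 skew matrix for the quaternion rate equation qdot = 0.5 * Omega(w) @ q,
    # with q = [q0, q1, q2, q3] and body rates w = (p, q, r).
    p, q, r = w
    return np.array([[0.0,  -p,  -q,  -r],
                     [p,   0.0,   r,  -q],
                     [q,    -r, 0.0,   p],
                     [r,     q,  -p, 0.0]])

def step_linearized(quat, w, dt):
    # Local linearization: w is held constant over the step, so the exact
    # update is exp(0.5*Omega*dt).  Since Omega^2 = -|w|^2 I, the matrix
    # exponential reduces to the closed form below.
    wn = np.linalg.norm(w)
    if wn < 1e-12:
        return quat
    th = 0.5 * wn * dt
    M = np.cos(th) * np.eye(4) + (np.sin(th) / wn) * omega_matrix(w)
    out = M @ quat
    return out / np.linalg.norm(out)

def step_rk2(quat, w, dt):
    # Classical second-order (Heun) step, for comparison.
    f = lambda q: 0.5 * omega_matrix(w) @ q
    k1 = f(quat)
    k2 = f(quat + dt * k1)
    out = quat + 0.5 * dt * (k1 + k2)
    return out / np.linalg.norm(out)

# Spin at a high rate about the body x-axis; after one full revolution the
# quaternion should reach [-1, 0, 0, 0] (the double cover of the identity).
w = np.array([50.0, 0.0, 0.0])        # rad/s, deliberately fast
dt = (2 * np.pi / 50.0) / 400         # 400 steps per revolution
q_lin = q_rk2 = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(400):
    q_lin = step_linearized(q_lin, w, dt)
    q_rk2 = step_rk2(q_rk2, w, dt)

target = np.array([-1.0, 0.0, 0.0, 0.0])
err_lin = np.linalg.norm(q_lin - target)
err_rk2 = np.linalg.norm(q_rk2 - target)
```

Because each linearized step is exact for constant rates, its error stays at round-off level even at high angular rates, while the second-order step accumulates a visible phase error.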

  3. A Fast and Effective Pyridine-Free Method for the Determination of Hydroxyl Value of Hydroxyl-Terminated Polybutadiene and Other Hydroxy Compounds

    NASA Astrophysics Data System (ADS)

    Alex, Ancy Smitha; Kumar, Vijendra; Sekkar, V.; Bandyopadhyay, G. G.

    2017-07-01

    Hydroxyl-terminated polybutadiene (HTPB) is the workhorse propellant binder for launch vehicle and missile applications. Accurate determination of the hydroxyl value (OHV) of HTPB is crucial for tailoring the ultimate mechanical and ballistic properties of the propellant derived. This article describes a fast and effective methodology free of pyridine based on acetic anhydride, N-methyl imidazole, and toluene for the determination of OHV of nonpolar polymers like HTPB and other hydroxyl compounds. This method gives accurate and reproducible results comparable to standard methods and is superior to existing methods in terms of user friendliness, efficiency, and time requirement.

  4. Image object recognition based on the Zernike moment and neural networks

    NASA Astrophysics Data System (ADS)

    Wan, Jianwei; Wang, Ling; Huang, Fukan; Zhou, Liangzhu

    1998-03-01

    This paper first gives a comprehensive discussion of the concept of artificial neural networks, their research methods, and their relation to information processing. On this basis, we expound the mathematical similarity between artificial neural networks and information processing. The paper then presents a new method of image object recognition based on invariant features and a neural network, using the image Zernike transform. The method is invariant to rotation, shift and scale of the image object, and also has good fault tolerance and robustness. It is also compared with a statistical classifier and an invariant-moments recognition method.

  5. An efficient computational method for the approximate solution of nonlinear Lane-Emden type equations arising in astrophysics

    NASA Astrophysics Data System (ADS)

    Singh, Harendra

    2018-04-01

    The key purpose of this article is to introduce an efficient computational method for the approximate solution of homogeneous as well as non-homogeneous nonlinear Lane-Emden type equations. Using the proposed computational method, the given nonlinear equation is converted into a set of nonlinear algebraic equations whose solution gives the approximate solution of the Lane-Emden type equation. Various nonlinear cases of Lane-Emden type equations, like the standard Lane-Emden equation, the isothermal gas sphere equation and the white-dwarf equation, are discussed. Results are compared with some well-known numerical methods, and it is observed that our results are more accurate.
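For a concrete benchmark of the kind used to validate such solvers: the standard Lane-Emden equation with index n = 1 has the exact solution θ(ξ) = sin(ξ)/ξ. The sketch below is not the paper's method of converting the equation to algebraic equations; it is a plain RK4 reference solution, started slightly off ξ = 0 with a series expansion to avoid the coordinate singularity:

```python
import numpy as np

def lane_emden_rk4(n, xi_max, h=1e-3):
    # Integrate theta'' + (2/xi) theta' + theta^n = 0, theta(0)=1, theta'(0)=0,
    # with classical RK4.  Start at xi = h using the series theta ~ 1 - xi^2/6,
    # theta' ~ -xi/3, since the ODE is singular at xi = 0.
    def f(xi, y):
        theta, dtheta = y
        # sign*abs handles fractional n for slightly negative theta
        return np.array([dtheta,
                         -2.0 * dtheta / xi - np.sign(theta) * abs(theta) ** n])
    xi = h
    y = np.array([1.0 - xi**2 / 6.0, -xi / 3.0])
    while xi < xi_max:
        k1 = f(xi, y)
        k2 = f(xi + h/2, y + h/2 * k1)
        k3 = f(xi + h/2, y + h/2 * k2)
        k4 = f(xi + h,   y + h * k3)
        y = y + h / 6.0 * (k1 + 2*k2 + 2*k3 + k4)
        xi += h
    return xi, y[0]

# For n = 1 the exact solution is theta(xi) = sin(xi)/xi.
xi_end, theta_num = lane_emden_rk4(1, 2.0)
theta_exact = np.sin(xi_end) / xi_end
```

Checking a new scheme against this closed-form case is the usual first accuracy test before moving to the isothermal gas sphere and white-dwarf variants, which have no closed-form solution.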

  6. Comparison Study of Three Different Image Reconstruction Algorithms for MAT-MI

    PubMed Central

    Xia, Rongmin; Li, Xu

    2010-01-01

    We report a theoretical study on magnetoacoustic tomography with magnetic induction (MAT-MI). Based on a description of the signal generation mechanism using Green's function, an acoustic dipole model is proposed to describe the acoustic source excited by the Lorentz force. Using Green's function, three reconstruction algorithms based on different models of the acoustic source (potential energy, vectored acoustic pressure, and divergence of the Lorentz force) are deduced, and corresponding numerical simulations were conducted to compare them. The computer simulation results indicate that the potential energy method and the vectored pressure method can directly reconstruct the Lorentz force distribution and give a more accurate reconstruction of the electrical conductivity. PMID:19846363

  7. Comparative study on deposition of fluorine-doped tin dioxide thin films by conventional and ultrasonic spray pyrolysis methods for dye-sensitized solar modules

    NASA Astrophysics Data System (ADS)

    Icli, Kerem Cagatay; Kocaoglu, Bahadir Can; Ozenbas, Macit

    2018-01-01

    Fluorine-doped tin dioxide (FTO) thin films were produced via conventional spray pyrolysis and ultrasonic spray pyrolysis (USP) using alcohol-based solutions. The prepared films were compared in terms of crystal structure, morphology, surface roughness, visible-light transmittance, and electronic properties. Upon investigation of the grain structures and morphologies, the films prepared by the ultrasonic spray method exhibited relatively larger grains and, as a result, slightly higher carrier mobilities. Dye-sensitized solar cells and 10×10 cm modules were prepared using commercially available and USP-deposited FTO/glass substrates, and their solar performances were compared. No notable efficiency difference is observed for either cells or modules: the module efficiency on the USP-deposited FTO glass substrates is 3.06%, compared to 2.85% on the commercial substrate under identical conditions. We demonstrated that USP deposition is a low-cost and versatile method for depositing commercial-quality FTO thin films on the large substrates employed in large-area dye-sensitized solar modules and other thin-film technologies.

  8. Classification of accelerometer wear and non-wear events in seconds for monitoring free-living physical activity

    PubMed Central

    Zhou, Shang-Ming; Hill, Rebecca A; Morgan, Kelly; Stratton, Gareth; Gravenor, Mike B; Bijlsma, Gunnar; Brophy, Sinead

    2015-01-01

    Objective To classify wear and non-wear time of accelerometer data for accurately quantifying physical activity in public health or population level research. Design A bi-moving-window-based approach was used to combine acceleration and skin temperature data to identify wear and non-wear time events in triaxial accelerometer data that monitor physical activity. Setting Local residents in Swansea, Wales, UK. Participants 50 participants aged under 16 years (n=23) and over 17 years (n=27) were recruited in two phases: phase 1: design of the wear/non-wear algorithm (n=20) and phase 2: validation of the algorithm (n=30). Methods Participants wore a triaxial accelerometer (GeneActiv) against the skin surface on the wrist (adults) or ankle (children). Participants kept a diary to record the timings of wear and non-wear and were asked to ensure that events of wear/non-wear last for a minimum of 15 min. Results The overall sensitivity of the proposed method was 0.94 (95% CI 0.90 to 0.98) and specificity 0.91 (95% CI 0.88 to 0.94). It performed equally well for children compared with adults, and females compared with males. Using surface skin temperature data in combination with acceleration data significantly improved the classification of wear/non-wear time when compared with methods that used acceleration data only (p<0.01). Conclusions Using either accelerometer seismic information or temperature information alone is prone to considerable error. Combining both sources of data can give accurate estimates of non-wear periods thus giving better classification of sedentary behaviour. This method can be used in population studies of physical activity in free-living environments. PMID:25968000
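The core idea of combining windowed acceleration variability with windowed skin temperature can be sketched as follows. This is a loose illustration on synthetic data; the window length, thresholds, sampling rate and signal levels are assumptions, not the published algorithm's values:

```python
import numpy as np

def rolling(x, w):
    # Centered moving average via cumulative sums (edge-padded).
    pad = np.pad(x, (w // 2, w - w // 2 - 1), mode='edge')
    c = np.cumsum(np.insert(pad, 0, 0.0))
    return (c[w:] - c[:-w]) / w

def classify_wear(accel, temp, w=60, temp_thresh=30.0, act_thresh=0.02):
    # Wear if the windowed mean temperature looks like skin contact OR the
    # windowed acceleration variability indicates movement.
    # Thresholds here are illustrative, not the paper's calibrated values.
    act = rolling(np.abs(np.diff(accel, prepend=accel[0])), w)
    warm = rolling(temp, w) > temp_thresh
    moving = act > act_thresh
    return warm | moving

# Synthetic trace: 600 samples of wear (warm skin, movement) followed by
# 600 samples of non-wear (cool, still device).
rng = np.random.default_rng(1)
n = 600
temp = np.r_[np.full(n, 33.0), np.full(n, 24.0)] + rng.normal(0, 0.2, 2 * n)
accel = np.r_[rng.normal(0, 0.3, n), np.zeros(n)] + rng.normal(0, 0.005, 2 * n)

wear = classify_wear(accel, temp)
sensitivity = wear[:n].mean()         # fraction of wear time flagged as wear
specificity = 1.0 - wear[n:].mean()   # fraction of non-wear flagged as non-wear
```

Using either signal alone misclassifies still-but-worn periods (no movement) or warm-shelf periods (residual heat); the OR of the two windowed criteria is what recovers both, which is the intuition behind the paper's result that the combination outperforms acceleration-only methods.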

  9. Radar analysis of free oscillations of rail for diagnostics defects

    NASA Astrophysics Data System (ADS)

    Shaydurov, G. Y.; Kudinov, D. S.; Kokhonkova, E. A.; Potylitsyn, V. S.

    2018-05-01

    A key goal in developing and deploying rail flaw-detection devices is to minimize the influence of the human factor during operation. At present, rail inspection systems do not probe the rail to sufficient depth, while ultrasonic diagnostic systems require contact between the sensor and the surface under study, which leads to low productivity. The article gives a comparative analysis of existing non-contact methods of flaw detection and proposes a contactless diagnostic method based on exciting acoustic waves and extracting information about defects from the frequencies of free rail oscillations using a radar method.

  10. Infrared target recognition based on improved joint local ternary pattern

    NASA Astrophysics Data System (ADS)

    Sun, Junding; Wu, Xiaosheng

    2016-05-01

    This paper presents a simple, efficient, yet robust approach, named joint orthogonal combination of local ternary patterns, for automatic forward-looking infrared target recognition. By fusing a variety of scales, it describes macroscopic and microscopic textures better than traditional LBP-based methods, and it can effectively reduce the feature dimensionality. Further, the rotation-invariant and uniform scheme, the robust LTP, and soft concave-convex partition are introduced to enhance its discriminative power. Experimental results demonstrate that the proposed method achieves competitive results compared with state-of-the-art methods.

  11. NΩ interaction from two approaches in lattice QCD

    NASA Astrophysics Data System (ADS)

    Etminan, Faisal; Firoozabadi, Mohammad Mehdi

    2014-10-01

    We compare the standard finite-volume method of Lüscher with the potential method of the HAL QCD collaboration by calculating the ground-state energy of the N(nucleon)-Ω(omega) system in the 5S2 channel. We employ 2+1 flavor full QCD configurations on a (1.9 fm)3×3.8 fm lattice at lattice spacing a≃0.12 fm, whose ud(s) quark mass corresponds to mπ = 875(1) (mK = 916(1)) MeV. We find that both methods give reasonably consistent results, namely that there is one NΩ bound state at these parameters.

  12. Comparing Value of Urban Green Space Using Contingent Valuation and Travel Cost Methods

    NASA Astrophysics Data System (ADS)

    Chintantya, Dea; Maryono

    2018-02-01

    Urban green open spaces are an important element of the city. They give multiple benefits for social life, human health, biodiversity, air quality, carbon sequestration, and water management. The Travel Cost Method (TCM) and the Contingent Valuation Method (CVM) are the methods most frequently used in studies that assess environmental goods and services in monetary terms for valuing urban green space. Both methods determine the value of urban green space through willingness to pay (WTP) for ecosystem benefits, with data collected through direct interviews and questionnaires. The findings of this study show the weaknesses and strengths of both methods for valuing urban green space and identify the factors influencing the probability of users' willingness to pay in each method.

  13. Curve fitting methods for solar radiation data modeling

    NASA Astrophysics Data System (ADS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods will be used as a starting point for the construction of mathematical models of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
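A minimal sketch of fitting a two-term Gaussian model and scoring it with RMSE and R2, the two goodness-of-fit statistics named above. The model form mirrors a standard two-term Gaussian fit type; the synthetic irradiance-like data and the initial guess are assumptions, not the UTP measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2(x, a1, b1, c1, a2, b2, c2):
    # Two-term Gaussian model: sum of two Gaussian bumps.
    return (a1 * np.exp(-((x - b1) / c1) ** 2)
            + a2 * np.exp(-((x - b2) / c2) ** 2))

def goodness(y, yhat):
    # RMSE and R^2, the goodness-of-fit statistics used in the paper.
    rmse = np.sqrt(np.mean((y - yhat) ** 2))
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return rmse, 1.0 - ss_res / ss_tot

# Synthetic daily irradiance-like curve over hours 6..20 (illustrative only):
# a midday Gaussian peak plus a small oscillatory component.
x = np.linspace(6, 20, 60)
y = 900 * np.exp(-((x - 13) / 3.5) ** 2) + 5 * np.sin(x)

p0 = [800, 12, 3, 100, 15, 3]  # rough initial guess for the six parameters
popt, _ = curve_fit(gauss2, x, y, p0=p0, maxfev=20000)
rmse, r2 = goodness(y, gauss2(x, *popt))
```

Repeating the same fit-and-score loop for each candidate model (polynomial, sine series, Gaussian series) and ranking by RMSE/R2 is the selection procedure the abstract describes.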

  14. On the numerical calculation of hydrodynamic shock waves in atmospheres by an FCT method

    NASA Astrophysics Data System (ADS)

    Schmitz, F.; Fleck, B.

    1993-11-01

    The numerical calculation of vertically propagating hydrodynamic shock waves in a plane atmosphere with the ETBFCT version of the Flux-Corrected Transport (FCT) method of Boris and Book is discussed. The results are compared with results obtained by a characteristic method with shock fitting. We show that using the internal energy density as a dependent variable instead of the total energy density can give very inaccurate results. Consistent discretization rules for the gravitational source terms are derived. The improvement of the results by an additional iteration step is discussed. The FCT method proves to be an excellent method for the accurate calculation of shock waves in an atmosphere.

  15. Mean-field approximation for spacing distribution functions in classical systems

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2012-01-01

    We propose a mean-field method to calculate approximately the spacing distribution functions p(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed.
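As a small illustration of the quantity being approximated: for uncorrelated (Poisson) points in one dimension, the nearest-neighbour spacing distribution is p(0)(s) = exp(-s), so the fraction of normalized spacings below 1 should be 1 - 1/e. The sketch below computes the empirical spacing distribution for this benchmark only; it is not the paper's mean-field Langevin calculation:

```python
import numpy as np

rng = np.random.default_rng(2)

def spacing_distribution(x):
    # Normalized nearest-neighbour spacings s_i = (x_{i+1} - x_i) / <gap>,
    # i.e. the empirical sample underlying p^(0)(s) for an ordered configuration.
    x = np.sort(x)
    s = np.diff(x)
    return s / s.mean()

# Uncorrelated (Poisson) points: p(0)(s) = exp(-s), so P(s < 1) = 1 - 1/e.
s_poisson = spacing_distribution(rng.uniform(0.0, 1.0, 200000))
frac_below_1 = np.mean(s_poisson < 1.0)
```

Plotting a histogram of `s_poisson` against exp(-s), and against the Wigner surmise for a repulsive system, is the standard visual check when comparing approximations such as the independent interval approximation or a mean-field p(n)(s).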

  16. Curve fitting methods for solar radiation data modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods will be used as a starting point for the construction of mathematical models of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  17. A perceptive method for handwritten text segmentation

    NASA Astrophysics Data System (ADS)

    Lemaitre, Aurélie; Camillerapp, Jean; Coüasnon, Bertrand

    2011-01-01

    This paper presents a new method to address the problem of segmenting handwritten text into text lines and words. We propose a method based on the cooperation among points of view, which enables the localization of the text lines in a low-resolution image and then associates the pixels at a higher level of resolution. Thanks to the combination of levels of vision, we can detect overlapping characters and re-segment the connected components during the analysis. We then propose a segmentation of lines into words based on the cooperation between digital data and symbolic knowledge. The digital data are obtained from distances inside a Delaunay graph, which gives a precise distance between connected components at the pixel level. We introduce structural rules in order to take into account generic knowledge about the organization of a text page. This cooperation among sources of information gives greater expressive power and ensures the global coherence of the recognition. We validate this work using the metrics and the database proposed for the segmentation contest of ICDAR 2009, and show that our method obtains very good results compared with the other methods in the literature. More precisely, we are able to deal with slope and curvature, overlapping text lines and varied kinds of writing, which are the main difficulties met by the other methods.
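The Delaunay-graph distance step can be sketched as follows: triangulate the component centroids, measure edge lengths, and flag unusually long edges as candidate word gaps. The centroids and the 1.5×median gap rule below are illustrative assumptions, not the paper's symbolic rules:

```python
import numpy as np
from scipy.spatial import Delaunay

# Centroids of hypothetical connected components on one text line:
# two "words" of three characters each, with a wider inter-word gap.
pts = np.array([[0, 0], [10, 1], [20, 0],           # word 1
                [55, 0], [65, 1], [75, 0]], float)  # word 2

# Build the Delaunay graph and collect its unique edges.
tri = Delaunay(pts)
edges = set()
for simplex in tri.simplices:
    for i in range(3):
        a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
        edges.add((a, b))

# Edge lengths are the "precise distances between connected components".
dists = {e: np.linalg.norm(pts[e[0]] - pts[e[1]]) for e in edges}

# Illustrative gap rule: edges much longer than the median spacing
# are taken as word boundaries.
med = np.median(list(dists.values()))
word_breaks = [e for e, d in dists.items() if d > 1.5 * med]
```

On this toy line, the long edge between components 2 and 3 (the inter-word gap) is flagged; in the paper, such distance evidence is then arbitrated by the structural rules about page organization.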

  18. Sensitivity enhancement by chromatographic peak concentration with ultra-high performance liquid chromatography-nuclear magnetic resonance spectroscopy for minor impurity analysis.

    PubMed

    Tokunaga, Takashi; Akagi, Ken-Ichi; Okamoto, Masahiko

    2017-07-28

    High performance liquid chromatography can be coupled with nuclear magnetic resonance (NMR) spectroscopy to give a powerful analytical method known as liquid chromatography-nuclear magnetic resonance (LC-NMR) spectroscopy, which can be used to determine the chemical structures of the components of complex mixtures. However, intrinsic limitations in the sensitivity of NMR spectroscopy have restricted the scope of this procedure, and resolving these limitations remains a critical problem for analysis. In this study, we coupled ultra-high performance liquid chromatography (UHPLC) with NMR to give a simple and versatile analytical method with higher sensitivity than conventional LC-NMR. UHPLC separation enabled the concentration of individual peaks into a volume similar to that of the NMR flow cell, thereby maximizing the sensitivity to the theoretical upper limit. The UHPLC concentration of compound peaks present at typical impurity levels (5.0-13.1 nmol) in a mixture led to up to a three-fold increase in the signal-to-noise ratio compared with LC-NMR. Furthermore, we demonstrated the use of UHPLC-NMR for obtaining structural information on a minor impurity in a reaction mixture in actual laboratory-scale development of a synthetic process. Using UHPLC-NMR, the experimental run times for chromatography and NMR were greatly reduced compared with LC-NMR. UHPLC-NMR thus overcomes the difficulties associated with analyses of minor components in a complex mixture by LC-NMR, which are problematic even when an ultra-high-field magnet and cryogenic probe are used. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Comparison of air space measurement imaged by CT, small-animal CT, and hyperpolarized Xe MRI

    NASA Astrophysics Data System (ADS)

    Madani, Aniseh; White, Steven; Santyr, Giles; Cunningham, Ian

    2005-04-01

    Lung disease is the third leading cause of death in the western world. Lung air volume measurements are thought to be early indicators of lung disease and markers in pharmaceutical research. The purpose of this work is to develop a lung phantom for assessing and comparing the quantitative accuracy of hyperpolarized xenon-129 magnetic resonance imaging (HP 129Xe MRI), conventional computed tomography (HRCT), and high-resolution small-animal CT (μCT) in measuring lung gas volumes. We developed a lung phantom consisting of solid cellulose acetate spheres (1, 2, 3, 4 and 5 mm diameter) uniformly packed in circulated air or HP 129Xe gas. Air volume is estimated with a simple thresholding algorithm. Truth is calculated from the sphere diameters and validated using μCT. While this phantom is not anthropomorphic, it enables us to directly measure air space volume and compare these imaging methods as a function of sphere diameter for the first time. HP 129Xe MRI requires partial-volume analysis to distinguish regions with and without 129Xe gas; results are within 5% of truth, but settling of the heavy 129Xe gas complicates this analysis. Conventional CT demonstrated partial-volume artifacts for the 1 mm spheres. μCT gives the most accurate air-volume results. Conventional CT and HP 129Xe MRI give similar results, although non-uniform densities of 129Xe require more sophisticated algorithms than simple thresholding. The threshold required to give the true air volume in both HRCT and μCT varies with sphere diameter, calling into question the validity of the thresholding method.
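The threshold dependence noted above is easy to reproduce on a synthetic volume: blur a binary sphere-in-air cube to mimic partial-volume averaging, then count "air" voxels at different thresholds. The geometry, blur kernel and threshold values below are assumptions for illustration, not the authors' phantom or protocols:

```python
import numpy as np

# Synthetic "phantom": a solid sphere (intensity 1) in air (intensity 0)
# inside a cube, blurred to mimic partial-volume averaging.
n = 64
ax = np.arange(n) - (n - 1) / 2
X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')
r = 20.0
vol = (X**2 + Y**2 + Z**2 <= r**2).astype(float)

# Crude partial-volume blur: average each voxel with its axis neighbours.
blur = vol.copy()
for axis in range(3):
    blur = (np.roll(blur, 1, axis) + blur + np.roll(blur, -1, axis)) / 3.0

true_air = vol.size - vol.sum()  # air voxel count before blurring

def air_volume(img, thresh):
    # Thresholding estimate: voxels below thresh count as air.
    return np.sum(img < thresh)

est_low = air_volume(blur, 0.2)   # strict threshold: boundary counted as solid
est_mid = air_volume(blur, 0.5)   # midpoint threshold
est_high = air_volume(blur, 0.8)  # loose threshold: boundary counted as air
```

The estimate moves monotonically with the threshold, and only the midpoint value lands near truth; with finer structures (smaller spheres relative to voxel size) the boundary shell dominates and the "correct" threshold drifts, which is the sphere-diameter dependence the study reports.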

  20. A Simplified and Reliable Damage Method for the Prediction of the Composites Pieces

    NASA Astrophysics Data System (ADS)

    Viale, R.; Coquillard, M.; Seytre, C.

    2012-07-01

    Structural engineers are often faced with test results on composite structures that are considerably tougher than predicted. To address this frequent gap, a survey of several extensive synthesis works on prediction methods and failure criteria was carried out. The inquiry dealt with the plane stress state only. All classical methods have strong and weak points with respect to practicality and reliability. The main conclusion is that, in the plane stress case, the best usual industrial methods give rather similar predictions; but in general they do not explain the often large discrepancies with respect to the tests, mainly in cases of strong stress gradients or bi-axial laminate loadings. It seems that only methods that consider the complexity of composite damage (so-called physical methods, or Continuum Damage Mechanics, "CDM") bring a clear improvement over the usual methods. The only drawback of these methods is their relative intricacy, particularly under pressing industrial conditions. A method with an approximate but simplified representation of the CDM phenomenology is presented. It was compared to tests and to other methods: it brings a fair improvement in correlation with tests over the usual industrial methods, and it gives results very similar to the painstaking CDM methods and very close to the test results. Several examples are provided. In addition, this method is economical with respect to material characterization as well as modelling and computational effort.

  1. Comparative study of novel versus conventional two-wavelength spectrophotometric methods for analysis of spectrally overlapping binary mixture.

    PubMed

    Lotfy, Hayam M; Hegazy, Maha A; Rezk, Mamdouh R; Omran, Yasmin Rostom

    2015-09-05

    Smart spectrophotometric methods have been applied and validated for the simultaneous determination of a binary mixture of chloramphenicol (CPL) and prednisolone acetate (PA) without preliminary separation. Two novel methods have been developed; the first method depends upon advanced absorbance subtraction (AAS), while the other relies on advanced amplitude modulation (AAM); these are in addition to the well-established dual wavelength (DW), ratio difference (RD) and constant center coupled with spectrum subtraction (CC-SS) methods. Accuracy, precision and linearity ranges of these methods were determined. Moreover, selectivity was assessed by analyzing synthetic mixtures of both drugs. The proposed methods were successfully applied to the assay of the drugs in their pharmaceutical formulations. No interference was observed from common additives, and the validity of the methods was tested. The obtained results have been statistically compared with those of the official spectrophotometric methods, leading to the conclusion that there is no significant difference between the proposed methods and the official ones with respect to accuracy and precision. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Tobacco information in two grade school newsweeklies: a content analysis.

    PubMed Central

    Balbach, E D; Glantz, S A

    1995-01-01

    OBJECTIVES. This study compared tobacco-related articles from two elementary school publications, Weekly Reader and Scholastic News, published in 1989 through 1994. METHODS. Articles for grades 4 through 6 were evaluated, and the publications were compared with each other. Also, issues of Weekly Reader published after acquisition by K-III, which is owned by the firm that formerly owned RJR Tobacco, were compared with the earlier ones. RESULTS. Weekly Reader was less likely than Scholastic News to mention short-term consequences of smoking (32% vs 64%) or to give a clear "no-use" message (35% vs 79%). Weekly Reader was more likely to give the tobacco industry position (68% vs 32%). Post-K-III issues of Weekly Reader were less likely to provide a clear no-use message than earlier ones (62% vs 24%). CONCLUSIONS. Health professionals need to monitor the health information carried in these publications, which reach between 1 and 2 million students per grade level each week. Although neither publication had perfect tobacco coverage, Scholastic News was significantly better than Weekly Reader. PMID:7503339

  3. Quality management benchmarking: FDA compliance in pharmaceutical industry.

    PubMed

    Jochem, Roland; Landgraf, Katja

    2010-01-01

    By analyzing and comparing industry and business best practice, processes can be optimized and become more successful, mainly because efficiency and competitiveness increase. This paper aims to focus on some examples. Case studies are used to show knowledge exchange in the pharmaceutical industry. Best practice solutions were identified in two companies using a benchmarking method and a five-stage model. Despite large administrative structures, there is much potential regarding business process organization. This project makes it possible for participants to fully understand their business processes. The benchmarking method gives an opportunity to critically analyze value chains (a string of companies or players working together to satisfy market demands for a special product). Knowledge exchange is interesting for companies that want to be global players. Benchmarking supports information exchange and improves competitive ability between different enterprises. Findings suggest that the five-stage model improves efficiency and effectiveness. Furthermore, the model increases the chances of reaching targets. The method gives security to partners that did not have benchmarking experience. The study identifies new quality management procedures. Process management, and especially benchmarking, is shown to support pharmaceutical industry improvements.

  4. Low Dose PET Image Reconstruction with Total Variation Using Alternating Direction Method.

    PubMed

    Yu, Xingjian; Wang, Chenye; Hu, Hongjie; Liu, Huafeng

    2016-01-01

    In this paper, a total variation (TV) minimization strategy is proposed to overcome the problem of sparse spatial resolution and large amounts of noise in low dose positron emission tomography (PET) imaging reconstruction. Two types of objective function were established based on two statistical models of measured PET data, least-square (LS) TV for the Gaussian distribution and Poisson-TV for the Poisson distribution. To efficiently obtain high quality reconstructed images, the alternating direction method (ADM) is used to solve these objective functions. As compared with the iterative shrinkage/thresholding (IST) based algorithms, the proposed ADM can make full use of the TV constraint and its convergence rate is faster. The performance of the proposed approach is validated through comparisons with the expectation-maximization (EM) method using synthetic and experimental biological data. In the comparisons, the results of both LS-TV and Poisson-TV are taken into consideration to find which models are more suitable for PET imaging, in particular low-dose PET. To evaluate the results quantitatively, we computed bias, variance, and the contrast recovery coefficient (CRC) and drew profiles of the reconstructed images produced by the different methods. The results show that both Poisson-TV and LS-TV can provide a high visual quality at a low dose level. The bias and variance of the proposed LS-TV and Poisson-TV methods are 20% to 74% less at all counting levels than those of the EM method. Poisson-TV gives the best performance in terms of high-accuracy reconstruction with the lowest bias and variance as compared to the ground truth (14.3% less bias and 21.9% less variance). In contrast, LS-TV gives the best performance in terms of the high contrast of the reconstruction with the highest CRC.
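    The LS-TV objective and its alternating direction (ADMM) solution can be sketched for the one-dimensional case. This is a generic toy illustration of the alternating direction method applied to a TV-regularized least-squares problem, not the authors' PET reconstruction code, and all parameter values are hypothetical:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding (the proximal operator of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_denoise_admm(b, lam=1.0, rho=1.0, iters=200):
    """LS-TV denoising of a 1D signal via ADMM:
        minimize 0.5*||x - b||^2 + lam*||D x||_1,
    where D is the first-difference operator. Splitting z = D x gives the
    classic three-step alternation: x-update (linear solve), z-update
    (soft-thresholding), dual update."""
    n = len(b)
    D = np.diff(np.eye(n), axis=0)         # (n-1) x n difference matrix
    A = np.eye(n) + rho * D.T @ D          # x-update system matrix
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)
    x = b.copy()
    for _ in range(iters):
        x = np.linalg.solve(A, b + rho * D.T @ (z - u))
        z = soft(D @ x + u, lam / rho)
        u += D @ x - z
    return x

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.0], 30)     # piecewise-constant "phantom"
noisy = clean + 0.1 * rng.standard_normal(clean.size)
denoised = tv_denoise_admm(noisy, lam=0.5)
```

    The Poisson-TV variant replaces the quadratic data term with the Poisson negative log-likelihood, which changes the x-update but leaves the alternation structure intact.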

  5. Low Dose PET Image Reconstruction with Total Variation Using Alternating Direction Method

    PubMed Central

    Yu, Xingjian; Wang, Chenye; Hu, Hongjie; Liu, Huafeng

    2016-01-01

    In this paper, a total variation (TV) minimization strategy is proposed to overcome the problem of sparse spatial resolution and large amounts of noise in low dose positron emission tomography (PET) imaging reconstruction. Two types of objective function were established based on two statistical models of measured PET data, least-square (LS) TV for the Gaussian distribution and Poisson-TV for the Poisson distribution. To efficiently obtain high quality reconstructed images, the alternating direction method (ADM) is used to solve these objective functions. As compared with the iterative shrinkage/thresholding (IST) based algorithms, the proposed ADM can make full use of the TV constraint and its convergence rate is faster. The performance of the proposed approach is validated through comparisons with the expectation-maximization (EM) method using synthetic and experimental biological data. In the comparisons, the results of both LS-TV and Poisson-TV are taken into consideration to find which models are more suitable for PET imaging, in particular low-dose PET. To evaluate the results quantitatively, we computed bias, variance, and the contrast recovery coefficient (CRC) and drew profiles of the reconstructed images produced by the different methods. The results show that both Poisson-TV and LS-TV can provide a high visual quality at a low dose level. The bias and variance of the proposed LS-TV and Poisson-TV methods are 20% to 74% less at all counting levels than those of the EM method. Poisson-TV gives the best performance in terms of high-accuracy reconstruction with the lowest bias and variance as compared to the ground truth (14.3% less bias and 21.9% less variance). In contrast, LS-TV gives the best performance in terms of the high contrast of the reconstruction with the highest CRC. PMID:28005929

  6. Applying TM-polarization geoelectric exploration for study of low-contrast three-dimensional targets

    NASA Astrophysics Data System (ADS)

    Zlobinskiy, Arkadiy; Mogilatov, Vladimir; Shishmarev, Roman

    2018-03-01

    Using new field and theoretical data, it has been shown that applying the electromagnetic field of transverse magnetic (TM) polarization gives new opportunities for electrical prospecting by the transient electromagnetic method. Only applying a pure field of TM polarization permits low-contrast three-dimensional objects (the sought metalliferous deposits) to be revealed in a host horizontally layered medium. This position has good theoretical grounds. A description is given of the transient electromagnetic method that uses only the TM-polarization field. The pure TM mode is excited by a special source termed a circular electric dipole (CED). The results of three-dimensional simulation (by the finite element method) are discussed for three real geological situations, in which the application of electromagnetic fields of transverse electric (TE) and transverse magnetic (TM) polarization is compared. It has been shown that applying the TE mode gives no positive results, while applying the TM-polarization field permits the problem to be tackled. Finally, the results of field work are presented, which showed the inefficiency of the classical TEM method, whereas, in contrast, applying the field of TM polarization makes it easy to identify the target.

  7. Globescope: Student Involvement in Culture Trait Studies as Part of the Social Studies Curriculum in Grades 5-12.

    ERIC Educational Resources Information Center

    Peters, Richard

    The program described in this guide provides a method of researching and comparing diverse cultures for middle and high school students. Teams of students investigate cultures from around the world and present findings to the entire class. The team approach enables the class to be exposed to a variety of materials and gives students experience in…

  8. 'Nose method' of calculating critical cooling rates for glass formation

    NASA Technical Reports Server (NTRS)

    Weinberg, Michael C.; Uhlmann, Donald R.; Zanotto, Edgar D.

    1989-01-01

    The use of the so-called 'nose method' for computing critical cooling rates for glass formation is examined and compared with other methods, presenting data for the glass-forming systems SiO2, GeO2, and P2O5. It is shown that, for homogeneous crystallization, the nose method will give an overestimate of Rc, a conclusion which was drawn after assessing the influence of a range of values for the parameters which control crystal growth and nucleation. The paper also proposes an alternative simple procedure (termed the 'cutoff method') for computing critical cooling rates from T-T-T diagrams, which was shown in the SiO2 and GeO2 systems to be superior to the nose method.

  9. Warm ''pasta'' phase in the Thomas-Fermi approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avancini, Sidney S.; Menezes, Debora P.; Chiacchiera, Silvia

    In the present article, the "pasta" phase is studied at finite temperatures within a Thomas-Fermi (TF) approach. Relativistic mean-field models, both with constant and density-dependent couplings, are used to describe this frustrated system. We compare the present results with previous ones obtained within a phase-coexistence description and conclude that the TF approximation gives rise to a richer inner "pasta" phase structure and that homogeneous matter appears at higher densities. Finally, the transition density calculated within TF is compared with the results for this quantity obtained with other methods.

  10. Neural network-based run-to-run controller using exposure and resist thickness adjustment

    NASA Astrophysics Data System (ADS)

    Geary, Shane; Barry, Ronan

    2003-06-01

    This paper describes the development of a run-to-run control algorithm using a feedforward neural network, trained using the backpropagation training method. The algorithm is used to predict the critical dimension of the next lot using previous lot information. It is compared to a common prediction algorithm - the exponentially weighted moving average (EWMA) and is shown to give superior prediction performance in simulations. The manufacturing implementation of the final neural network showed significantly improved process capability when compared to the case where no run-to-run control was utilised.
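    The EWMA baseline against which the neural network is compared can be stated compactly. This is the standard EWMA recursion; the lot measurements and weight are hypothetical, not data from the paper:

```python
def ewma_predict(measurements, weight=0.3):
    """Exponentially weighted moving average (EWMA) run-to-run predictor,
    the common baseline the paper compares its neural network against.
    Returns the running prediction after each lot; the last entry is the
    prediction for the next lot."""
    pred = measurements[0]            # initialize with the first lot
    preds = []
    for m in measurements:
        pred = weight * m + (1 - weight) * pred
        preds.append(pred)
    return preds

# Hypothetical critical-dimension measurements (nm) for four lots.
cds = [100.0, 102.0, 101.0, 103.0]
next_cd = ewma_predict(cds)[-1]
```

    The neural-network controller replaces this fixed linear recursion with a learned nonlinear mapping from previous-lot information to the next critical dimension, which is where the reported prediction gains come from.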

  11. New equation of state models for hydrodynamic applications

    NASA Astrophysics Data System (ADS)

    Young, David A.; Barbee, Troy W.; Rogers, Forrest J.

    1998-07-01

    Two new theoretical methods for computing the equation of state of hot, dense matter are discussed. The ab initio phonon theory gives a first-principles calculation of lattice frequencies, which can be used to compare theory and experiment for isothermal and shock compression of solids. The ACTEX dense plasma theory has been improved to allow it to be compared directly with ultrahigh pressure shock data on low-Z materials. The comparisons with experiment are good, suggesting that these models will be useful in generating global EOS tables for hydrodynamic simulations.

  12. Virtual edge illumination and one dimensional beam tracking for absorption, refraction, and scattering retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vittoria, Fabio A., E-mail: fabio.vittoria.12@ucl.ac.uk; Diemoz, Paul C.; Research Complex at Harwell, Harwell Oxford Campus, OX11 0FA Didcot

    2014-03-31

    We propose two different approaches to retrieve x-ray absorption, refraction, and scattering signals using a one dimensional scan and a high resolution detector. The first method can be easily implemented in existing procedures developed for edge illumination to retrieve absorption and refraction signals, giving comparable image quality while reducing exposure time and delivered dose. The second method tracks the variations of the beam intensity profile on the detector through a multi-Gaussian interpolation, allowing the additional retrieval of the scattering signal.

  13. An efficient synthesis of 3,4-Dihydropyrimidin-2(1H)-ones and thiones catalyzed by a novel Brønsted acidic ionic liquid under solvent-free conditions.

    PubMed

    Zhang, Yonghong; Wang, Bin; Zhang, Xiaomei; Huang, Jianbin; Liu, Chenjiang

    2015-02-26

    We report here an efficient and green method for Biginelli condensation reaction of aldehydes, β-ketoesters and urea or thiourea catalyzed by Brønsted acidic ionic liquid [Btto][p-TSA] under solvent-free conditions. Compared to the classical Biginelli reaction conditions, the present method has the advantages of giving good yields, short reaction times, near room temperature conditions and the avoidance of the use of organic solvents and metal catalyst.

  14. Validation of light water reactor calculation methods and JEF-1-based data libraries by TRX and BAPL critical experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paratte, J.M.; Pelloni, S.; Grimm, P.

    1991-04-01

    This paper analyzes the capability of various code systems and JEF-1-based nuclear data libraries to compute light water reactor lattices by comparing calculations with results from thermal reactor benchmark experiments TRX and BAPL and with previously published values. With the JEF-1 evaluation, eigenvalues are generally well predicted within 8 mk (1 mk = 0.001) or less by all code systems, and all methods give reasonable results for the measured reaction rate ratios within, or not too far from, the experimental uncertainty.

  15. Dimension from covariance matrices.

    PubMed

    Carroll, T L; Byers, J M

    2017-02-01

    We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
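    The core computation the abstract describes, the eigenvalues of a covariance matrix built from an embedded signal, can be sketched as follows. The statistical comparison against the Gaussian reference process is omitted, and the signal and embedding parameters are illustrative:

```python
import numpy as np

def embed(signal, dim, delay=1):
    """Time-delay embedding of a 1D signal into `dim` dimensions."""
    n = len(signal) - (dim - 1) * delay
    return np.column_stack(
        [signal[i * delay:i * delay + n] for i in range(dim)])

def covariance_eigenvalues(signal, dim, delay=1):
    """Eigenvalues of the covariance matrix of the embedded signal,
    sorted in descending order. The paper's test compares these against
    the eigenvalue distribution of a Gaussian random process with the
    same dimension and number of points."""
    X = embed(signal, dim, delay)
    C = np.cov(X, rowvar=False)
    return np.sort(np.linalg.eigvalsh(C))[::-1]

# A pure sine embedded in 5 dimensions lies on a ~2-D subspace, so only
# the two leading eigenvalues should be appreciably nonzero.
t = np.linspace(0, 20 * np.pi, 2000)
ev = covariance_eigenvalues(np.sin(t), dim=5, delay=10)
```

    A rapid drop-off after the first few eigenvalues, relative to what a Gaussian process of the same size would produce, is what signals a low embedding dimension.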

  16. Study on Synchronization of the Heart in a Nursing Art.

    PubMed

    Sakaki, Soh; Ishigame, Atsushi; Majima, Yukie

    2016-01-01

    Compared to rookie nurses, it is often said that a skilled nurse's injection is less degree of pain. The authors believe that the reason why the pain is reduced is because skilled nurses can make themselves relaxed and synchronize their state to the patients. So, if we can make people relaxed and synchronized intentionally by giving artificial stimulation, the technique will be so valuable not only in the inheritance of injection skills but also in various medical situations including the care of aged, nursing of infant and so on. In this paper, we focused on the synchronization of brain waves, and examined the method of inducing the relaxed state and the synchronization in brain waves of subjects by giving a vibratory stimulation.

  17. Uncertainty in Random Forests: What does it mean in a spatial context?

    NASA Astrophysics Data System (ADS)

    Klump, Jens; Fouedjio, Francky

    2017-04-01

    Geochemical surveys are an important part of exploration for mineral resources and in environmental studies. The samples and chemical analyses are often laborious and difficult to obtain and therefore come at a high cost. As a consequence, these surveys are characterised by datasets with large numbers of variables but relatively few data points when compared to conventional big data problems. With more remote sensing platforms and sensor networks being deployed, large volumes of auxiliary data of the surveyed areas are becoming available. The use of these auxiliary data has the potential to improve the prediction of chemical element concentrations over the whole study area. Kriging is a well established geostatistical method for the prediction of spatial data but requires significant pre-processing and makes some basic assumptions about the underlying distribution of the data. Some machine learning algorithms, on the other hand, may require less data pre-processing and are non-parametric. In this study we used a dataset provided by Kirkwood et al. [1] to explore the potential use of Random Forest in geochemical mapping. We chose Random Forest because it is a well understood machine learning method and has the advantage that it provides us with a measure of uncertainty. By comparing Random Forest to Kriging we found that both methods produced comparable maps of estimated values for our variables of interest. Kriging outperformed Random Forest for variables of interest with relatively strong spatial correlation. The measure of uncertainty provided by Random Forest seems to be quite different to the measure of uncertainty provided by Kriging. In particular, the lack of spatial context can give misleading results in areas without ground truth data. 
In conclusion, our preliminary results show that the model driven approach in geostatistics gives us more reliable estimates for our target variables than Random Forest for variables with relatively strong spatial correlation. However, in cases of weak spatial correlation Random Forest, as a nonparametric method, may give the better results once we have a better understanding of the meaning of its uncertainty measures in a spatial context. References [1] Kirkwood, C., M. Cave, D. Beamish, S. Grebby, and A. Ferreira (2016), A machine learning approach to geochemical mapping, Journal of Geochemical Exploration, 163, 28-40, doi:10.1016/j.gexplo.2016.05.003.
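    The kind of ensemble-spread uncertainty that Random Forest provides can be illustrated with a toy bootstrap ensemble. This is a conceptual sketch only, using 1-nearest-neighbour members rather than decision trees and hypothetical data, not the study's actual model:

```python
import numpy as np

def ensemble_predict(x_train, y_train, x_query, n_members=50, seed=0):
    """Ensemble-based uncertainty in the Random Forest spirit: each member
    is a 1-nearest-neighbour predictor fit on a bootstrap resample, and the
    spread across members is the uncertainty. Note this spread reflects
    member disagreement, not the kriging variance, which by construction
    grows with distance from the data -- the contrast the abstract draws."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_members):
        idx = rng.integers(0, len(x_train), len(x_train))   # bootstrap
        xb, yb = x_train[idx], y_train[idx]
        nearest = np.abs(xb[:, None] - x_query[None, :]).argmin(axis=0)
        preds.append(yb[nearest])
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)   # estimate, uncertainty

x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x)
mean, sd = ensemble_predict(x, y, np.array([0.25, 0.75]))
```

    In a region with no training data, members that all extrapolate the same way can agree closely, giving a deceptively small spread; this is one way the "lack of spatial context" noted above can mislead.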

  18. Subsurface water parameters: optimization approach to their determination from remotely sensed water color data.

    PubMed

    Jain, S C; Miller, J R

    1976-04-01

    A method using an optimization scheme has been developed for the interpretation of spectral albedo (or spectral reflectance) curves obtained from remotely sensed water color data. This method uses a two-flow model of the radiation flow and solves for the albedo. Optimization fitting of predicted to observed reflectance data is performed by a quadratic interpolation method for the variables chlorophyll concentration and scattering coefficient. The technique is applied to airborne water color data obtained from the Kawartha Lakes, the Sargasso Sea, and the Nova Scotia coast. The modeled spectral albedo curves are compared to those obtained experimentally, and the computed optimum water parameters are compared to ground truth values. It is shown that the backscattered spectral signal contains information that can be interpreted to give quantitative estimates of the chlorophyll concentration and turbidity in the waters studied.
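    The quadratic interpolation step used in the optimization fitting can be written out explicitly. This is the standard successive parabolic interpolation formula, shown on a hypothetical misfit function rather than the paper's reflectance misfit:

```python
def quadratic_min(f, a, b, c):
    """One step of quadratic interpolation minimization: fit a parabola
    through (a, f(a)), (b, f(b)), (c, f(c)) and return the abscissa of its
    vertex. Iterating this (replacing the worst point each time) drives a
    1-D variable such as chlorophyll concentration toward the best fit."""
    fa, fb, fc = f(a), f(b), f(c)
    num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
    den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
    return b - 0.5 * num / den

# For an exactly quadratic misfit, the vertex is found in a single step.
xmin = quadratic_min(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 1.0, 3.0)
```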

  19. PIXE Analysis of Ceramic Artifacts

    NASA Astrophysics Data System (ADS)

    High, Elizabeth; Lamm, Larry; Schurr, Mark; Stech, Edward; Wiescher, Michael

    2009-10-01

    Particle Induced X-ray Emission, or PIXE, is a non-destructive nuclear physics technique for material analysis which gives a detailed and comprehensive profile of the elemental composition of a target. Using the University of Notre Dame KN and FN accelerators in the ISNAP laboratory, a beam of particles (here protons) is accelerated and used to knock out electrons from lower orbitals within the target, resulting in characteristic X-rays. Under optimum operating conditions, data from PIXE can give not only information about which elements are present in a sample but also their relative abundances in parts per million. In a previous run, done in collaboration with the Anthropology Department at the University of Notre Dame, pottery shards from the Collier Lodge site in northwest Indiana were analyzed, and only relative abundances could be compared between samples. We are now implementing a new setup in the beam line which will incorporate the ability to take Rutherford Back Scattering (RBS) measurements of the beam during the PIXE runs, allowing a standard normalization for the runs and giving the facility the ability to acquire a more absolute and quantitative analysis of the data. Initial results using the same pottery shards as a comparative data set will be presented.

  20. Automated Visual Inspection Of Integrated Circuits

    NASA Astrophysics Data System (ADS)

    Noppen, G.; Oosterlinck, Andre J.

    1989-07-01

    One of the major application fields of image processing techniques is visual inspection. For a number of reasons, the automated visual inspection of Integrated Circuits (ICs) has drawn a lot of attention: their very strict design makes them very suitable for automated inspection; there is already a lot of experience in the comparable Printed Circuit Board (PCB) and mask inspection; the mechanical handling of wafers and dice is already an established technology; military and medical ICs should be 100% failproof; and IC inspection gives a high and almost immediate payback. In this paper we will try to give an outline of the problems involved in IC inspection, and of the algorithms and methods used to overcome these problems. We will not go into detail, but we will try to give a general understanding. Our attention will go to the following topics: an overview of the inspection process, with an emphasis on the second visual inspection; the problems encountered in IC inspection, as opposed to the comparable PCB and mask inspection; the image acquisition devices that can be used to obtain 'inspectable' images; a general overview of the algorithms that can be used; and a short description of the algorithms developed at the ESAT-MI2 division of the Katholieke Universiteit Leuven.

  1. Determination of accuracy of winding deformation method using kNN based classifier used for 3 MVA transformer

    NASA Astrophysics Data System (ADS)

    Ahmed, Mustafa Wasir; Baishya, Manash Jyoti; Sharma, Sasanka Sekhor; Hazarika, Manash

    2018-04-01

    This paper presents a fault-detection system for power transformers covering the transformer winding, core and on-load tap changer (OLTC). The accuracy of winding deformation detection is determined using a kNN-based classifier. Winding deformation in a power transformer can be measured using sweep frequency response analysis (SFRA), which can enhance the diagnosis accuracy to a large degree. The results suggest that minor deformation faults can be detected over the frequency range of 1 mHz to 2 MHz. The values of the RCL parameters change when faults occur, and hence the frequency response of the winding changes accordingly. The SFRA data of the tested transformer is compared with a reference trace; the difference between the two graphs indicates faults in the transformer. Deviation between 1 mHz and 1 kHz indicates winding deformation, between 1 kHz and 100 kHz core deformation, and between 100 kHz and 2 MHz OLTC deformation.
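    The frequency-band interpretation quoted in the abstract can be expressed as a simple lookup. The band edges are taken verbatim from the abstract (including its millihertz lower bound), and the function itself is purely illustrative:

```python
def classify_sfra_band(freq_hz):
    """Map a frequency (Hz) at which the SFRA trace deviates from the
    reference to the transformer component implicated, per the bands
    quoted in the abstract: 1 mHz-1 kHz winding, 1 kHz-100 kHz core,
    100 kHz-2 MHz OLTC."""
    if 1e-3 <= freq_hz < 1e3:
        return "winding deformation"
    if 1e3 <= freq_hz < 1e5:
        return "core deformation"
    if 1e5 <= freq_hz <= 2e6:
        return "OLTC deformation"
    return "out of measured range"
```

    In the paper itself, the kNN classifier operates on features of the measured-versus-reference deviation; this lookup only encodes the band-to-component mapping the abstract states.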

  2. A comparative study between shielded and open coplanar waveguide discontinuities

    NASA Technical Reports Server (NTRS)

    Dib, Nihad I.; Harokopus, W. P., Jr.; Ponchak, G. E.; Katehi, L. P. B.

    1993-01-01

    A comparative study between open and shielded coplanar waveguide (CPW) discontinuities is presented. The space domain integral equation method is used to characterize several discontinuities such as the open-end CPW and CPW series stubs. Two different geometries of CPW series stubs (straight and bent stubs) are compared with respect to resonant frequency and radiation loss. In addition, the encountered radiation loss due to different CPW shunt stubs is evaluated experimentally. The notion of forced radiation simulation is presented, and the results of such a simulation are compared to the actual radiation loss obtained rigorously. It is shown that such a simulation cannot give reliable results concerning radiation loss from printed circuits.

  3. A comparative study on generating simulated Landsat NDVI images using data fusion and regression method-the case of the Korean Peninsula.

    PubMed

    Lee, Mi Hee; Lee, Soo Bong; Eo, Yang Dam; Kim, Sun Woong; Woo, Jung-Hun; Han, Soo Hee

    2017-07-01

    Landsat optical images have enough spatial and spectral resolution to analyze vegetation growth characteristics. However, clouds and water vapor quite often degrade image quality, which limits the availability of usable images for time-series vegetation vitality measurement. To overcome this shortcoming, simulated images are used as an alternative. In this study, the weighted average method, the spatial and temporal adaptive reflectance fusion model (STARFM) method, and the multilinear regression analysis method were tested to produce simulated Landsat normalized difference vegetation index (NDVI) images of the Korean Peninsula. The test results showed that the weighted average method produced the images most similar to the actual images, provided that images were available within 1 month before and after the target date. The STARFM method gives good results when the input image date is close to the target date; careful regional and seasonal consideration is required in selecting input images. During the summer season, due to clouds, it is very difficult to get images close enough to the target date. Multilinear regression analysis gives meaningful results even when the input image date is not so close to the target date. Average R² values for the weighted average method, STARFM, and multilinear regression analysis were 0.741, 0.70, and 0.61, respectively.
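    The weighted average method can be sketched as follows. The abstract does not specify the weighting scheme, so inverse temporal distance is assumed here, and the pixel values and dates are hypothetical:

```python
def simulate_ndvi(before, after, days_before, days_after):
    """Weighted-average simulation of a per-pixel NDVI image for a target
    date, in the spirit of the first method compared in the paper: each
    available image is weighted by the inverse of its temporal distance
    (in days) to the target date. The inverse-distance weighting is an
    assumption; the paper only names the method."""
    w_b = 1.0 / days_before
    w_a = 1.0 / days_after
    total = w_b + w_a
    return [(w_b * b + w_a * a) / total for b, a in zip(before, after)]

# Hypothetical 3-pixel scene: images 10 days before and 30 days after
# the target date; the nearer image dominates.
sim = simulate_ndvi([0.2, 0.4, 0.6], [0.4, 0.4, 0.2], 10, 30)
```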

  4. Effectiveness of a television advertisement campaign on giving cigarettes in a chinese population.

    PubMed

    Qin, Yu; Su, Jian; Xiang, Quanyong; Hu, Yihe; Xu, Guanqun; Ma, Jiuhua; Shi, Zumin

    2014-01-01

    Anti-tobacco television advertisement campaigns may convey messages on smoking-related health consequences and create norms against giving cigarettes. Altogether, 156 and 112 slots of a television advertisement, "Giving cigarettes is giving harm," were aired in Suzhou and Yizheng, respectively, over one month in 2010. Participants were recruited from 15 locations in Suzhou and 8 locations in Yizheng using a street intercept method. Overall, 2306 residents aged 18-45 years completed questionnaires, including 1142 before the campaign and 1164 after, with respective response rates of 79.1% and 79.7%. Chi-square tests were used to compare the difference between categorical variables. After the campaign, 36.0% of subjects recalled that they had seen the advertisement. Residents of Suzhou had a higher recall rate than those of Yizheng (47.6% vs. 20.6%, P < 0.001). The rate of not giving cigarettes dropped from 32.1% before the campaign to 28.5% after (P = 0.05). In the post-campaign evaluation, participants who reported seeing the advertisement were more likely not to give cigarettes in the future than those who reported not seeing it (38.7% vs. 27.5%, P < 0.001). Our study showed that an anti-tobacco television advertisement helped change societal norms and improve health behavior. Continuous and adequate funding of anti-tobacco media campaigns targeted at different levels of the general population is needed, in conjunction with a comprehensive tobacco control effort.

  5. A rapid method combining Golgi and Nissl staining to study neuronal morphology and cytoarchitecture.

    PubMed

    Pilati, Nadia; Barker, Matthew; Panteleimonitis, Sofoklis; Donga, Revers; Hamann, Martine

    2008-06-01

    The Golgi silver impregnation technique gives detailed information on neuronal morphology of the few neurons it labels, whereas the majority remain unstained. In contrast, the Nissl staining technique allows for consistent labeling of the whole neuronal population but gives very limited information on neuronal morphology. Most studies characterizing neuronal cell types in the context of their distribution within the tissue slice tend to use the Golgi silver impregnation technique for neuronal morphology followed by deimpregnation as a prerequisite for showing that neuron's histological location by subsequent Nissl staining. Here, we describe a rapid method combining Golgi silver impregnation with cresyl violet staining that provides a useful and simple approach to combining cellular morphology with cytoarchitecture without the need for deimpregnating the tissue. Our method allowed us to identify neurons of the facial nucleus and the supratrigeminal nucleus, as well as assessing cellular distribution within layers of the dorsal cochlear nucleus. With this method, we also have been able to directly compare morphological characteristics of neuronal somata at the dorsal cochlear nucleus when labeled with cresyl violet with those obtained with the Golgi method, and we found that cresyl violet-labeled cell bodies appear smaller at high cellular densities. Our observation suggests that cresyl violet staining is inadequate to quantify differences in soma sizes.

  6. Preliminary studies of the CHIM electrogeochemical method at the Kokomo Mine, Russell Gulch, Colorado

    USGS Publications Warehouse

    Smith, D.B.; Hoover, D.B.; Sanzolone, R.F.

    1993-01-01

    The CHIM electrogeochemical exploration technique was developed in the former Soviet Union about 20 years ago and is claimed to be effective in exploration for concealed mineral deposits that are not detectable by other geochemical or geophysical techniques. The method involves providing a high-voltage direct current to an anode and an array of special collector cathodes. Cations mobile in the electric field are collected at the cathodes and their concentrations determined. The U.S. Geological Survey started a study of the CHIM method by conducting tests over a precious- and base-metal-bearing quartz vein covered with 3 m of colluvial soil and weathered bedrock near the Kokomo Mine, Colorado. The tests show that the CHIM method gives better definition of the vein than conventional soil geochemistry based on a total-dissolution technique. The CHIM technique gives reproducible geochemical anomaly patterns, but the absolute concentrations depend on local site variability as well as temporal variations. Weak partial dissolutions of soils at the Kokomo Mine by an enzyme leach, a dilute acetic acid leach, and a dilute hydrochloric acid leach show results comparable to those from the CHIM method. This supports the idea that the CHIM technique is essentially a weak in-situ partial extraction involving only ions able to move in a weak electric field. © 1993.

  7. Optimization of cutting parameters for machining time in turning process

    NASA Astrophysics Data System (ADS)

    Mavliutov, A. R.; Zlotnikov, E. G.

    2018-03-01

    This paper describes the most effective methods for nonlinear constrained optimization of cutting parameters in the turning process. Among them are the Linearization Programming Method with a Dual-Simplex algorithm, the Interior Point method, and the Augmented Lagrangian Genetic Algorithm (ALGA). Each of them is tested on a practical example: the minimization of machining time in the turning process. The computation was conducted in the MATLAB environment. The comparative results show that the optimal value of the linearized objective matches that of the original function. ALGA gives sufficiently accurate values; however, when the algorithm uses a hybrid function with the Interior Point algorithm, the resulting values are the most accurate.
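
A minimal sketch of the kind of constrained cutting-parameter optimization described above, using SciPy's SLSQP solver rather than the MATLAB routines the paper uses. The one-pass machining time T = pi*D*L/(1000*v*f) is minimized over cutting speed v (m/min) and feed f (mm/rev); the power-style constraint and all numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize

D, L = 50.0, 200.0  # workpiece diameter and length (mm), illustrative

def machining_time(x):
    # One-pass turning time: T = pi * D * L / (1000 * v * f)
    v, f = x
    return np.pi * D * L / (1000.0 * v * f)

# Hypothetical machine limits: power-like constraint v*f <= 60,
# plus box bounds on speed and feed.
cons = [{"type": "ineq", "fun": lambda x: 60.0 - x[0] * x[1]}]
bounds = [(50.0, 300.0), (0.05, 0.4)]

res = minimize(machining_time, x0=[100.0, 0.2], method="SLSQP",
               bounds=bounds, constraints=cons)
v_opt, f_opt = res.x  # optimum lies on the active constraint v*f = 60
```

Because the objective depends only on the product v*f, every point on the active constraint v*f = 60 is optimal; SLSQP converges to one such point.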

  8. Efficient Jacobi-Gauss collocation method for solving initial value problems of Bratu type

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Bhrawy, A. H.; Baleanu, D.; Hafez, R. M.

    2013-09-01

    In this paper, we propose a shifted Jacobi-Gauss collocation spectral method for solving initial value problems of Bratu type, which arise widely in fuel-ignition problems in combustion theory and in heat transfer. The spatial approximation is based on shifted Jacobi polynomials J_n^(α,β)(x) with α, β ∈ (-1, ∞), x ∈ [0, 1], and n the polynomial degree. The shifted Jacobi-Gauss points are used as collocation nodes. Illustrative examples demonstrate the validity and applicability of the proposed technique. A comparison of the numerical results of the proposed method with some well-known results shows that the method is efficient and gives excellent numerical results.
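
A Bratu-type initial value problem of the kind targeted above has a convenient closed-form solution that makes such comparisons possible: u'' - 2e^u = 0 with u(0) = u'(0) = 0 is solved by u(x) = -2 ln(cos x). The sketch below verifies this reference solution with a standard Runge-Kutta integrator (not the paper's Jacobi-Gauss collocation method).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Bratu-type IVP: u'' - 2*exp(u) = 0, u(0) = u'(0) = 0,
# with closed-form solution u(x) = -2*ln(cos(x)) on [0, 1].
def rhs(x, y):
    u, up = y
    return [up, 2.0 * np.exp(u)]

xs = np.linspace(0.0, 1.0, 101)
sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0], t_eval=xs,
                rtol=1e-10, atol=1e-12)
# Maximum deviation from the exact solution
err = np.max(np.abs(sol.y[0] + 2.0 * np.log(np.cos(xs))))
```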

  9. Denoising imaging polarimetry by adapted BM3D method.

    PubMed

    Tibbs, Alexander B; Daly, Ilse M; Roberts, Nicholas W; Bull, David R

    2018-04-01

    In addition to the visual information contained in intensity and color, imaging polarimetry allows visual information to be extracted from the polarization of light. A major challenge of imaging polarimetry, however, is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method, Polarization-BM3D (PBM3D), based on BM3D (Block Matching 3D). PBM3D gives visual quality superior to the state of the art across all images and noise standard deviations tested. By comparison with spectral polarimetry measurements, we show that denoising polarization images with PBM3D allows the degree of polarization to be calculated more accurately.
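
The degree of polarization mentioned above is typically derived from intensity images taken through polarizers at 0, 45, 90, and 135 degrees; noise in those images propagates directly into it, which is what denoising is meant to protect. A minimal sketch with synthetic (Malus-law) pixel values; the numbers are illustrative, not from the paper:

```python
import numpy as np

# Synthetic pixel: total intensity I_t, degree of linear polarization d,
# angle of polarization phi.
I_t, d, phi = 1.0, 0.3, np.deg2rad(20.0)
angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])
I = 0.5 * I_t * (1.0 + d * np.cos(2.0 * angles - 2.0 * phi))

# Stokes parameters from the four polarizer measurements
S0 = I.sum() / 2.0
S1 = I[0] - I[2]
S2 = I[1] - I[3]
dolp = np.sqrt(S1 ** 2 + S2 ** 2) / S0  # degree of linear polarization
aop = 0.5 * np.arctan2(S2, S1)          # angle of polarization
```

For noise-free data the computation recovers d and phi exactly; with noisy images, errors in the four intensities inflate the DoLP estimate, motivating denoising before this step.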

  10. Elemental analysis by IBA and NAA — A critical comparison

    NASA Astrophysics Data System (ADS)

    Watterson, J. I. W.

    1988-12-01

    In this review, neutron activation analysis (NAA) and ion beam analysis (IBA) are compared in the context of the entire field of analytical science using the discipline of scientometrics, as developed by Braun and Lyon. This perspective on the relative achievements of the two methods is refined by comparing their particular attributes and characteristics, especially their differing degrees of maturity. The assessment shows that NAA, as the more mature method, is the most widely applied nuclear technique. The special capabilities of IBA, by contrast, give it a unique ability to provide information about surface composition and elemental distribution; however, IBA is still relatively immature, and it is not yet possible to define its ultimate role with any confidence.

  11. Mutual Comparative Filtering for Change Detection in Videos with Unstable Illumination Conditions

    NASA Astrophysics Data System (ADS)

    Sidyakin, Sergey V.; Vishnyakov, Boris V.; Vizilter, Yuri V.; Roslov, Nikolay I.

    2016-06-01

    In this paper we propose a new approach for change detection and moving-object detection in videos with unstable, abrupt illumination changes. The approach is based on mutual comparative filters and background normalization. We give the definitions of mutual comparative filters and outline their strong advantages for change detection purposes. The presented approach deals with changing illumination conditions in a simple and efficient way and avoids the drawbacks of models that assume particular color transformation laws. The proposed procedure can be used to improve a number of background modelling methods that are not specifically designed to work under illumination changes.

  12. Comparison of Zirconium Phosphonate-Modified Surfaces for Immobilizing Phosphopeptides and Phosphate-Tagged Proteins.

    PubMed

    Forato, Florian; Liu, Hao; Benoit, Roland; Fayon, Franck; Charlier, Cathy; Fateh, Amina; Defontaine, Alain; Tellier, Charles; Talham, Daniel R; Queffélec, Clémence; Bujoli, Bruno

    2016-06-07

    Different routes for preparing zirconium phosphonate-modified surfaces for immobilizing biomolecular probes are compared. Two chemical-modification approaches were explored to form self-assembled monolayers on commercially available primary amine-functionalized slides, and the resulting surfaces were compared to well-characterized zirconium phosphonate monolayer-modified supports prepared using Langmuir-Blodgett methods. When using POCl3 as the amine phosphorylating agent followed by treatment with zirconyl chloride, the result was not a zirconium-phosphonate monolayer, as commonly assumed in the literature; rather, the process gives adsorbed zirconium oxide/hydroxide species and, to a lesser extent, adsorbed zirconium phosphate and/or phosphonate. Reactions giving rise to these products were modeled in homogeneous-phase studies. Nevertheless, each of the three modified surfaces effectively immobilized phosphopeptides and phosphopeptide tags fused to an affinity protein. Unexpectedly, the zirconium oxide/hydroxide modified surface, formed by treating the amine-coated slides with POCl3/Zr(4+), afforded better immobilization of the peptides and proteins and efficient capture of their targets.

  13. Effects of Artificial Viscosity on the Accuracy of High-reynolds-number Kappa-epsilon Turbulence Model

    NASA Technical Reports Server (NTRS)

    Chitsomboon, Tawit

    1994-01-01

    Wall functions, as used in the typical high Reynolds number k-epsilon turbulence model, can be implemented in various ways. A least disruptive method (to the flow solver) is to directly solve for the flow variables at the grid point next to the wall while prescribing the values of k and epsilon. For the centrally-differenced finite-difference scheme employing artificial viscosity (AV) as a stabilizing mechanism, this methodology proved to be totally useless. This is because the AV gives rise to a large error at the wall due to too steep a velocity gradient resulting from the use of a coarse grid as required by the wall function methodology. This error can be eliminated simply by extrapolating velocities at the wall, instead of using the physical no-slip velocities (i.e. the zero value). The applicability of the technique used in this paper is demonstrated by solving a flow over a flat plate and comparing the results with those of experiments. It was also observed that AV gives rise to a velocity overshoot (about 1 percent) near the edge of the boundary layer. This small velocity error, however, can yield as much as 10 percent error in the momentum thickness. A method which integrates the boundary layer only up to the edge of the boundary layer (instead of to infinity) was proposed and demonstrated to give better results than the standard method.
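
At the heart of the wall-function methodology above is the log law, which an implementation solves at the first grid point to obtain the friction velocity. A minimal sketch with the commonly used constants kappa = 0.41 and E = 9.0; the flow numbers are illustrative assumptions, not from the paper:

```python
import numpy as np
from scipy.optimize import brentq

# Log-law wall function: U / u_tau = (1/kappa) * ln(E * y+),
# with y+ = y * u_tau / nu. Solve for the friction velocity u_tau
# given the tangential velocity U at the first grid point.
kappa, E = 0.41, 9.0
nu = 1.5e-5           # kinematic viscosity of air, m^2/s (illustrative)
y, U = 1.0e-3, 10.0   # first-cell distance (m) and velocity (m/s)

def log_law(u_tau):
    return U - (u_tau / kappa) * np.log(E * y * u_tau / nu)

u_tau = brentq(log_law, 1e-4, 5.0)  # root-find for friction velocity
y_plus = y * u_tau / nu             # should land in the log-law range
```

The wall-function approach assumes the first cell center sits in the logarithmic region (roughly 30 < y+ < 300), which is why the grid must be coarse near the wall.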

  14. 3D Parallel Multigrid Methods for Real-Time Fluid Simulation

    NASA Astrophysics Data System (ADS)

    Wan, Feifei; Yin, Yong; Zhang, Suiyu

    2018-03-01

    The multigrid method is widely used in fluid simulation because of its strong convergence. Besides accuracy, computational efficiency is an important consideration for enabling real-time fluid simulation in computer graphics. For this problem, we compared the performance of Algebraic Multigrid and Geometric Multigrid in the V-Cycle and Full-Cycle schemes, respectively, and analyzed the convergence and speed of the different methods. All calculations in this paper are performed in parallel on the GPU. Finally, we run experiments on 3D grids at each scale and report the experimental results.
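
The V-cycle compared above follows a fixed pattern: smooth, restrict the residual to a coarser grid, recurse, then prolong the coarse correction back and smooth again. A self-contained 1D geometric multigrid sketch for the Poisson problem (serial NumPy, not the paper's 3D GPU implementation):

```python
import numpy as np

# Geometric multigrid V-cycle for -u'' = f on (0,1), homogeneous
# Dirichlet boundaries, on grids of size 2^k + 1.
def smooth(u, f, h, iters=3, omega=2.0 / 3.0):
    # Weighted-Jacobi smoothing sweeps.
    for _ in range(iters):
        v = u.copy()
        v[1:-1] = ((1 - omega) * u[1:-1]
                   + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]))
        u = v
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    # Full-weighting restriction onto the next coarser grid.
    rc = np.zeros((len(r) - 1) // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
    return rc

def prolong(e):
    # Linear-interpolation prolongation onto the next finer grid.
    ef = np.zeros(2 * (len(e) - 1) + 1)
    ef[::2] = e
    ef[1::2] = 0.5 * (e[:-1] + e[1:])
    return ef

def v_cycle(u, f, h):
    if len(u) <= 3:                  # coarsest grid: solve exactly
        u[1] = 0.5 * h * h * f[1]
        return u
    u = smooth(u, f, h)              # pre-smoothing
    rc = restrict(residual(u, f, h))
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)
    u = u + prolong(ec)              # coarse-grid correction
    return smooth(u, f, h)           # post-smoothing

# Manufactured solution u(x) = sin(pi x), so f = pi^2 sin(pi x).
n = 128
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

Each V-cycle reduces the algebraic error by a roughly grid-independent factor, which is the "strong convergence" the abstract refers to; after a few cycles the error is dominated by the O(h^2) discretization error.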

  15. Suggestions for presenting the results of data analyses

    USGS Publications Warehouse

    Anderson, David R.; Link, William A.; Johnson, Douglas H.; Burnham, Kenneth P.

    2001-01-01

    We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentist methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management.

  16. Passive quantum error correction of linear optics networks through error averaging

    NASA Astrophysics Data System (ADS)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks using unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof-of-principle examples, including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and to probe the related error thresholds. Finally, we discuss some of the potential uses of this scheme.

  17. Structural and dynamic properties of liquid tin from a new modified embedded-atom method force field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vella, Joseph R.; Chen, Mohan; Stillinger, Frank H.

    We developed a new modified embedded-atom method (MEAM) force field for liquid tin. Starting from the Ravelo and Baskes force field [Phys. Rev. Lett. 79, 2482 (1997)], the parameters are adjusted using a simulated annealing optimization procedure in order to obtain better agreement with liquid-phase data. The predictive capabilities of the new model and the Ravelo and Baskes force field are evaluated using molecular dynamics by comparing to a wide range of first-principles and experimental data. The quantities studied include crystal properties (cohesive energy, bulk modulus, equilibrium density, and lattice constant of various crystal structures), melting temperature, liquid structure, liquid density, self-diffusivity, viscosity, and vapor-liquid surface tension. We show that although the Ravelo and Baskes force field generally gives better agreement with the properties related to the solid phases of tin, the new MEAM force field gives better agreement with liquid tin properties.

  18. Determination of capacity of single-toggle jaw crusher, taking into account parameters of kinematics of its working mechanism

    NASA Astrophysics Data System (ADS)

    Golikov, N. S.; Timofeev, I. P.

    2018-05-01

    Increasing the efficiency of jaw crushers rests on establishing rational kinematics and stiffening the elements of the machine. Establishing rational kinematics requires connecting the operating-mode parameters of the crusher with its technical characteristics, and the main purpose of this research is to establish such a connection. This article therefore presents an analytical procedure for relating the operating-mode parameters of the crusher to its capacity. Theoretical, empirical, and semi-empirical methods for determining the capacity of a single-toggle jaw crusher are given, taking into account the physico-mechanical properties of the crushed material and the kinematics of the working mechanism. In developing the mathematical model, the method of closed vector polygons by V. A. Zinoviev was used. The expressions obtained give an opportunity to solve important scientific and technical problems connected with finding the rational kinematics of the jaw crusher mechanism, carrying out a comparative assessment of different crushers, and giving recommendations for updating available jaw crushers.

  19. Intracochlear pressure measurements to study bone conduction transmission: State of the art and proof of concept of the experimental procedure

    NASA Astrophysics Data System (ADS)

    Borgers, Charlotte; van Wieringen, Astrid; D'hondt, Christiane; Verhaert, Nicolas

    2018-05-01

    The cochlea is the main contributor in bone conduction perception. Measurements of differential pressure in the cochlea give a good estimation of the cochlear input provided by bone conduction stimulation. Recent studies have proven the feasibility of intracochlear pressure measurements in chinchillas and in human temporal bones to study bone conduction. However, similar measurements in fresh-frozen whole human cadaveric heads could give a more realistic representation of the five different transmission pathways of bone conduction to the cochlea compared to human temporal bones. The aim of our study is to develop and validate a framework for intracochlear pressure measurements to evaluate different aspects of bone conduction in whole human cadaveric heads. A proof of concept describing our experimental setup is provided together with the procedure. Additionally, we also present a method to fix the stapes footplate in order to simulate otosclerosis in human temporal bones. The effectiveness of this method is verified by some preliminary results.

  20. Structural and dynamic properties of liquid tin from a new modified embedded-atom method force field

    NASA Astrophysics Data System (ADS)

    Vella, Joseph R.; Chen, Mohan; Stillinger, Frank H.; Carter, Emily A.; Debenedetti, Pablo G.; Panagiotopoulos, Athanassios Z.

    2017-02-01

    A new modified embedded-atom method (MEAM) force field is developed for liquid tin. Starting from the Ravelo and Baskes force field [Phys. Rev. Lett. 79, 2482 (1997), 10.1103/PhysRevLett.79.2482], the parameters are adjusted using a simulated annealing optimization procedure in order to obtain better agreement with liquid-phase data. The predictive capabilities of the new model and the Ravelo and Baskes force field are evaluated using molecular dynamics by comparing to a wide range of first-principles and experimental data. The quantities studied include crystal properties (cohesive energy, bulk modulus, equilibrium density, and lattice constant of various crystal structures), melting temperature, liquid structure, liquid density, self-diffusivity, viscosity, and vapor-liquid surface tension. It is shown that although the Ravelo and Baskes force field generally gives better agreement with the properties related to the solid phases of tin, the new MEAM force field gives better agreement with liquid tin properties.

  1. Structural and dynamic properties of liquid tin from a new modified embedded-atom method force field

    DOE PAGES

    Vella, Joseph R.; Chen, Mohan; Stillinger, Frank H.; ...

    2017-02-01

    We developed a new modified embedded-atom method (MEAM) force field for liquid tin. Starting from the Ravelo and Baskes force field [Phys. Rev. Lett. 79, 2482 (1997)], the parameters are adjusted using a simulated annealing optimization procedure in order to obtain better agreement with liquid-phase data. The predictive capabilities of the new model and the Ravelo and Baskes force field are evaluated using molecular dynamics by comparing to a wide range of first-principles and experimental data. The quantities studied include crystal properties (cohesive energy, bulk modulus, equilibrium density, and lattice constant of various crystal structures), melting temperature, liquid structure, liquid density, self-diffusivity, viscosity, and vapor-liquid surface tension. We show that although the Ravelo and Baskes force field generally gives better agreement with the properties related to the solid phases of tin, the new MEAM force field gives better agreement with liquid tin properties.

  2. Flow and Force Equations for a Body Revolving in a Fluid

    NASA Technical Reports Server (NTRS)

    Zahm, A F

    1930-01-01

    Part I gives a general method for finding the steady-flow velocity relative to a body in plane curvilinear motion, whence the pressure is found by Bernoulli's energy principle. Integration of the pressure supplies basic formulas for the zonal forces and moments on the revolving body. Part II, applying this steady-flow method, finds the velocity and pressure at all points of the flow inside and outside an ellipsoid and some of its limiting forms, and graphs those quantities for the latter forms. Part III finds the pressure, and thence the zonal force and moment, on hulls in plane curvilinear flight. Part IV derives general equations for the resultant fluid forces and moments on trisymmetrical bodies moving through a perfect fluid, and in some cases compares the moment values with those found for bodies moving in air. Part V furnishes ready formulas for potential coefficients and inertia coefficients for an ellipsoid and its limiting forms. Thence are derived tables giving numerical values of those coefficients for a comprehensive range of shapes.

  3. DISTANCES TO DARK CLOUDS: COMPARING EXTINCTION DISTANCES TO MASER PARALLAX DISTANCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, Jonathan B.; Jackson, James M.; Stead, Joseph J.

    We test two different methods of using near-infrared extinction to estimate distances to dark clouds in the first quadrant of the Galaxy using large near-infrared (Two Micron All Sky Survey and UKIRT Infrared Deep Sky Survey) surveys. Very long baseline interferometry parallax measurements of masers around massive young stars provide the most direct and bias-free measurement of the distance to these dark clouds. We compare the extinction distance estimates to these maser parallax distances. We also compare these distances to kinematic distances, including recent re-calibrations of the Galactic rotation curve. The extinction distance methods agree with the maser parallax distances (within the errors) between 66% and 100% of the time (depending on method and input survey) and between 85% and 100% of the time outside of the crowded Galactic center. Although the sample size is small, extinction distance methods reproduce maser parallax distances better than kinematic distances; furthermore, extinction distance methods do not suffer from the kinematic distance ambiguity. This validation gives us confidence that these extinction methods may be extended to additional dark clouds where maser parallaxes are not available.

  4. Determination of NH2 concentration on 3-aminopropyl tri-ethoxy silane layers and cyclopropylamine plasma polymers by liquid-phase derivatization with 5-iodo 2-furaldehyde

    NASA Astrophysics Data System (ADS)

    Manakhov, Anton; Čechal, Jan; Michlíček, Miroslav; Shtansky, Dmitry V.

    2017-08-01

    The quantification of primary amine concentration, e.g., in plasma-polymerized layers, is an important task in surface analysis. However, the commonly used procedures, such as gas-phase derivatization with benzaldehydes, have several drawbacks, the most important of which are side-reaction effects. In the present study we propose and validate a liquid-phase derivatization using 5-iodo 2-furaldehyde (IFA). We demonstrate that the content of NH2 groups can be determined from the atomic concentrations measured by X-ray photoelectron spectroscopy (XPS), in particular from the ratio of the I 3d and N 1s peak intensities. First, we demonstrate the method on a prototypical system, a 3-aminopropyl tri-ethoxy silane (APTES) layer. Here, XPS analysis carried out after reaction of the APTES layer with IFA gives a fraction of primary amines (NH2/N) of 38.3 ± 7.9%. Comparing this value with the 40.9 ± 9.5% of amine groups obtained by N 1s curve fitting of the APTES layer, it can be concluded that all primary amines were derivatized by reaction with IFA. The second system used to demonstrate the method comprises cyclopropylamine (CPA) plasma polymers free from conjugated imines. In this case the method gives an NH2 fraction of ∼8.5%, closely matching the NH2/N ratio estimated by 4-trifluoromethyl benzaldehyde (TFBA) derivatization. The reaction of IFA with a CPA plasma polymer exhibiting a high density of conjugated imines revealed an NH2/N fraction of ∼10.8%, significantly lower than the 17.3% estimated by TFBA derivatization. As the overestimated density of primary amines measured by TFBA derivatization is probably related to the side reaction of benzaldehydes with conjugated imines, the proposed IFA derivatization of primary amines can serve as an alternative procedure for the quantification of surface amine groups.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Anupam; Department of Chemistry, Indian Institute of Technology Bombay, Powai, Mumbai 400076; Higham, Jonathan

    A range of methods are presented to calculate a solute's hydration shell from computer simulations of dilute solutions of monatomic ions and noble gas atoms. The methods are designed to be parameter-free and instantaneous so as to make them more general, accurate, and consequently applicable to disordered systems. One method is a modified nearest-neighbor method, another considers solute-water Lennard-Jones overlap followed by hydrogen-bond rearrangement, while three methods compare various combinations of water-solute and water-water forces. The methods are tested on a series of monatomic ions and solutes and compared with the values from cutoffs in the radial distribution function, the nearest-neighbor distribution functions, and the strongest-acceptor hydrogen bond definition for anions. The Lennard-Jones overlap method and one of the force-comparison methods are found to give a hydration shell for cations which is in reasonable agreement with that using a cutoff in the radial distribution function. Further modifications would be required, though, to make them capture the neighboring water molecules of noble-gas solutes if these weakly interacting molecules are considered to constitute the hydration shell.

  6. Acoustic vector tomography and its application to magnetoacoustic tomography with magnetic induction (MAT-MI).

    PubMed

    Li, Xu; Xia, Rongmin; He, Bin

    2008-01-01

    A new tomographic algorithm is proposed for reconstructing a curl-free vector field whose divergence serves as the acoustic source. It is shown that, under certain conditions, the scalar acoustic measurements obtained on a surface enclosing the source area can be vectorized according to the known measurement geometry and then used to reconstruct the vector field. The proposed method is validated by numerical experiments and can be easily applied to magnetoacoustic tomography with magnetic induction (MAT-MI). A simulation study of applying the method to MAT-MI shows that, compared to existing methods, it gives an accurate estimation of the induced current distribution and a better reconstruction of the electrical conductivity within an object.

  7. Mean-field approximation for spacing distribution functions in classical systems.

    PubMed

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T L

    2012-01-01

    We propose a mean-field method to calculate approximately the spacing distribution functions p^(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p^(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed. © 2012 American Physical Society
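
For context on the benchmark mentioned above: the nearest-neighbor Wigner surmise p(s) = (pi/2) s exp(-pi s^2 / 4) is normalized and has unit mean spacing, two properties any approximation to p^(n)(s) should preserve. A quick numerical check:

```python
import numpy as np
from scipy.integrate import quad

# Wigner surmise for nearest-neighbour spacings (mean spacing scaled to 1):
# p(s) = (pi/2) * s * exp(-pi * s^2 / 4)
def wigner(s):
    return 0.5 * np.pi * s * np.exp(-0.25 * np.pi * s ** 2)

norm, _ = quad(wigner, 0.0, np.inf)              # should be 1
mean, _ = quad(lambda s: s * wigner(s), 0.0, np.inf)  # should be 1
```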

  8. Dynamical analysis of the avian-human influenza epidemic model using the semi-analytical method

    NASA Astrophysics Data System (ADS)

    Jabbari, Azizeh; Kheiri, Hossein; Bekir, Ahmet

    2015-03-01

    In this work, we present the dynamic behavior of the avian-human influenza epidemic model using an efficient computational algorithm, namely the multistage differential transform method (MsDTM). The MsDTM is used here as an algorithm for approximating the solutions of the avian-human influenza epidemic model in a sequence of time intervals. To show the efficiency of the method, the obtained numerical results are compared with fourth-order Runge-Kutta method (RK4M) and differential transform method (DTM) solutions. It is shown that the MsDTM has the advantage of giving an analytical form of the solution within each time interval, which is not possible with purely numerical techniques like RK4M.

  9. Linear Elastic and Cohesive Fracture Analysis to Model Hydraulic Fracture in Brittle and Ductile Rocks

    NASA Astrophysics Data System (ADS)

    Yao, Yao

    2012-05-01

    Hydraulic fracturing technology is widely used within the oil and gas industry for both waste injection and unconventional gas production wells. It is essential to predict the behavior of hydraulic fractures accurately based on an understanding of the fundamental mechanisms. The prevailing approach to hydraulic fracture modeling continues to rely on computational methods based on Linear Elastic Fracture Mechanics (LEFM). Generally, these methods give reasonable predictions for hard-rock hydraulic fracture processes, but they still have inherent limitations, especially when fluid injection is performed in soft rock/sand or other non-conventional formations. These methods typically give very conservative predictions of fracture geometry and inaccurate estimates of the required fracture pressure. One reason the LEFM-based methods fail to give accurate predictions for these materials is that the fracture process zone ahead of the crack tip and the softening effect should not be neglected in ductile rock fracture analysis. A 3D pore pressure cohesive zone model has been developed and applied to predict hydraulic fracturing under fluid injection. The cohesive zone method is a numerical tool developed to model crack initiation and growth in quasi-brittle materials, taking the material softening effect into account. The pore pressure cohesive zone model has been applied to investigate hydraulic fractures with different rock properties. The hydraulic fracture predictions of a three-layer water injection case have been compared using the pore pressure cohesive zone model with revised parameters, an LEFM-based pseudo-3D model, a Perkins-Kern-Nordgren (PKN) model, and an analytical solution. Based on the size of the fracture process zone and its effect on crack extension in ductile rock, the fundamental mechanical difference between LEFM and cohesive fracture mechanics-based methods is discussed. An effective fracture toughness method has been proposed to account for the fracture process zone effect on ductile rock fracture.

  10. Moments of inertia of several airplanes

    NASA Technical Reports Server (NTRS)

    Miller, Marvel P; Soule, Hartley A

    1931-01-01

    This paper, which is the first of a series presenting the results of such measurements, gives the momental ellipsoids of ten army and naval biplanes and one commercial monoplane. The data were obtained by the use of a pendulum method, previously described. The moments of inertia are expressed in coefficient as well as in dimensional form, so that those for airplanes of widely different weights and dimensions can be compared.
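
The pendulum method referred to above rests on the compound-pendulum relation: an airplane swung about a pivot a distance d from its center of gravity with period T has I_pivot = m g d T^2 / (4 pi^2), and the parallel-axis theorem gives I_cg = I_pivot - m d^2. A round-trip check with illustrative numbers (not NACA data):

```python
import numpy as np

g = 9.81                  # m/s^2
m, d = 1200.0, 1.5        # mass (kg) and pivot-to-CG distance (m), illustrative
I_cg_true = 2500.0        # assumed moment of inertia about the CG (kg m^2)

# Predicted swing period from the compound-pendulum relation
I_pivot = I_cg_true + m * d ** 2
T = 2.0 * np.pi * np.sqrt(I_pivot / (m * g * d))

# Recover the CG moment of inertia from the "measured" period
I_cg = m * g * d * T ** 2 / (4.0 * np.pi ** 2) - m * d ** 2
```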

  11. Image Analysis Using Quantum Entropy Scale Space and Diffusion Concepts

    DTIC Science & Technology

    2009-11-01

    images using a combination of analytic methods and prototype Matlab and Mathematica programs. We investigated concepts of generalized entropy and Schmidt strength from quantum logic gate decomposition. This form of entropy gives a measure of the nonlocal content of an entangling logic gate. We recall that the Schmidt number is an indicator of entanglement, but not a measure of entanglement.

  12. Modelling the aggregation process of cellular slime mold by the chemical attraction.

    PubMed

    Atangana, Abdon; Vermeulen, P D

    2014-01-01

    We apply a comparatively new analytical technique, the homotopy decomposition method (HDM), to solve a system of nonlinear partial differential equations arising in an attractor one-dimensional Keller-Segel dynamics system. Numerical solutions are given, and some properties exhibit biologically realistic dependence on the parameter values. The reliability of the HDM and the reduction in computation give the HDM wider applicability.

  13. Numerical simulation of tunneling through arbitrary potential barriers applied on MIM and MIIM rectenna diodes

    NASA Astrophysics Data System (ADS)

    Abdolkader, Tarek M.; Shaker, Ahmed; Alahmadi, A. N. M.

    2018-07-01

    With the continuous miniaturization of electronic devices, quantum-mechanical effects such as tunneling become more significant in many device applications. In this paper, a numerical simulation tool is developed in a MATLAB environment to calculate the tunneling probability and current through an arbitrary potential barrier, comparing three different numerical techniques: the finite difference method, the transfer matrix method, and the transmission line method. For benchmarking, the tool is applied to several case studies, such as the rectangular single barrier, the rectangular double barrier, and a continuous bell-shaped potential barrier, each compared to analytical solutions and giving the dependence of the error on the number of mesh points. In addition, a thorough study of the J-V characteristics of MIM and MIIM diodes, used as rectifiers for rectenna solar cells, is presented, and simulations are compared to experimental results, showing satisfactory agreement. On the undergraduate level, the tool provides deeper insight for students to compare numerical techniques used to solve various tunneling problems and helps students choose a suitable technique for a given application.
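
One of the three techniques compared above, the transfer matrix method, can be sketched compactly for a piecewise-constant potential and benchmarked against the closed-form rectangular-barrier result, much as the paper does. Units with hbar^2/(2m) = 1 and all numbers below are illustrative assumptions, not the paper's:

```python
import numpy as np

# Transfer-matrix transmission through a piecewise-constant potential.
# Local wavevector k = sqrt(E - V), imaginary inside a barrier.
def transmission(E, interfaces, potentials):
    # potentials: V in the n regions; interfaces: the n-1 boundaries.
    k = np.sqrt(E - np.asarray(potentials, dtype=complex))
    M = np.eye(2, dtype=complex)
    for j, x0 in enumerate(interfaces):
        k1, k2 = k[j], k[j + 1]
        r = k1 / k2
        # Interface matrix from continuity of psi and psi' at x0
        m = 0.5 * np.array(
            [[(1 + r) * np.exp(1j * (k1 - k2) * x0),
              (1 - r) * np.exp(-1j * (k1 + k2) * x0)],
             [(1 - r) * np.exp(1j * (k1 + k2) * x0),
              (1 + r) * np.exp(-1j * (k1 - k2) * x0)]])
        M = m @ M
    # Incident wave from the left, no left-moving wave on the right
    return float(abs(np.linalg.det(M) / M[1, 1]) ** 2 * (k[-1] / k[0]).real)

# Rectangular single barrier: height V0, width a, energy E < V0
E, V0, a = 1.0, 2.0, 1.0
T_num = transmission(E, [0.0, a], [0.0, V0, 0.0])

# Closed-form transmission for the same barrier
kappa = np.sqrt(V0 - E)
T_exact = 1.0 / (1.0 + V0 ** 2 * np.sinh(kappa * a) ** 2
                 / (4.0 * E * (V0 - E)))
```

Because the potential is exactly piecewise constant, the transfer matrix result matches the analytic formula to machine precision; for smooth barriers the method is applied to a staircase approximation, and the error then depends on the number of mesh points, as studied in the paper.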

  14. High-frequency asymptotic methods for analyzing the EM scattering by open-ended waveguide cavities

    NASA Technical Reports Server (NTRS)

    Burkholder, R. J.; Pathak, P. H.

    1989-01-01

    Four high-frequency methods are described for analyzing the electromagnetic (EM) scattering by electrically large open-ended cavities. They are: (1) a hybrid combination of waveguide modal analysis and high-frequency asymptotics, (2) geometrical optics (GO) ray shooting, (3) Gaussian beam (GB) shooting, and (4) the generalized ray expansion (GRE) method. The hybrid modal method gives very accurate results but is limited to cavities which are made up of sections of uniform waveguides for which the modal fields are known. The GO ray shooting method can be applied to much more arbitrary cavity geometries and can handle absorber treated interior walls, but it generally only predicts the major trends of the RCS pattern and not the details. Also, a very large number of rays need to be tracked for each new incidence angle. Like the GO ray shooting method, the GB shooting method can handle more arbitrary cavities, but it is much more efficient and generally more accurate than the GO method because it includes the fields diffracted by the rim at the open end which enter the cavity. However, due to beam divergence effects the GB method is limited to cavities which are not very long compared to their width. The GRE method overcomes the length-to-width limitation of the GB method by replacing the GB's with GO ray tubes which are launched in the same manner as the GB's to include the interior rim diffracted field. This method gives good accuracy and is generally more efficient than the GO method, but a large number of ray tubes needs to be tracked.

  15. Fourth order difference methods for hyperbolic IBVP's

    NASA Technical Reports Server (NTRS)

    Gustafsson, Bertil; Olsson, Pelle

    1994-01-01

    Fourth order difference approximations of initial-boundary value problems for hyperbolic partial differential equations are considered. We use the method of lines approach with both explicit and compact implicit difference operators in space. The explicit operator satisfies an energy estimate leading to strict stability. For the implicit operator we develop boundary conditions and give a complete proof of strong stability using the Laplace transform technique. We also present numerical experiments for the linear advection equation and Burgers' equation with discontinuities in the solution or in its derivative. The first equation is used for modeling contact discontinuities in fluid dynamics, the second one for modeling shocks and rarefaction waves. The time discretization is done with a third order Runge-Kutta TVD method. For solutions with discontinuities in the solution itself we add a filter based on second order viscosity. In the case of the non-linear Burgers' equation we use a flux splitting technique that results in an energy estimate for certain difference approximations, in which case an entropy condition is also fulfilled. In particular we demonstrate that the unsplit conservative form produces a non-physical shock instead of the physically correct rarefaction wave. In the numerical experiments we compare our fourth order methods with a standard second order one and with a third order TVD method. The results show that the fourth order methods are the only ones that give good results for all of the considered test problems.
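
The payoff of a fourth-order spatial operator over a second-order one can be seen in a generic sketch (not the authors' compact implicit scheme): measuring the convergence rates of standard central differences on a smooth periodic function.

```python
import numpy as np

def d2(u, dx):
    # second-order central difference on a periodic grid
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

def d4(u, dx):
    # fourth-order central difference on a periodic grid
    return (-np.roll(u, -2) + 8 * np.roll(u, -1)
            - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * dx)

errs = {}
for n in (32, 64):
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dx = x[1] - x[0]
    u, du = np.sin(x), np.cos(x)   # exact derivative for comparison
    errs[n] = (np.abs(d2(u, dx) - du).max(), np.abs(d4(u, dx) - du).max())

# doubling the resolution should divide the errors by ~4 and ~16 respectively
order2 = np.log2(errs[32][0] / errs[64][0])
order4 = np.log2(errs[32][1] / errs[64][1])
```

The measured orders come out close to 2 and 4, which is why, on smooth parts of the solution, the fourth-order operators in the paper can match a second-order scheme's accuracy on a much coarser grid.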

  16. A new method for calculation of the chlorine demand of natural and treated waters.

    PubMed

    Warton, Ben; Heitz, Anna; Joll, Cynthia; Kagi, Robert

    2006-08-01

    Conventional methods of calculating chlorine demand are dose dependent, making intercomparison of samples difficult, especially in cases where the samples contain substantially different concentrations of dissolved organic carbon (DOC), or other chlorine-consuming species. Using the method presented here, the values obtained for chlorine demand are normalised, allowing valid comparison of chlorine demand between samples, independent of the chlorine dose. Since the method is not dose dependent, samples with substantially differing water quality characteristics can be reliably compared. In our method, we dosed separate aliquots of a water sample with different chlorine concentrations, and periodically measured the residual chlorine concentrations in these subsamples. The chlorine decay data obtained in this way were then fitted to first-order exponential decay functions, corresponding to short-term demand (0-4 h) and long-term demand (4-168 h). From the derived decay functions, the residual concentrations at a given time within the experimental time window were calculated and plotted against the corresponding initial chlorine concentrations, giving a linear relationship. From this linear function, it was then possible to determine the residual chlorine concentration for any initial concentration (i.e. dose). Thus, using this method, the initial chlorine dose required to give any residual chlorine concentration can be calculated for any time within the experimental time window, from a single set of experimental data.
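
The fit-then-interpolate procedure described above can be sketched with synthetic data. All concentrations, times, and the decay constant below are hypothetical, chosen only to illustrate the dose-residual linearity; this is not the paper's dataset.

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 4.0])          # sampling times, h (short-term window)
doses = np.array([1.0, 2.0, 3.0, 4.0])      # initial chlorine doses, mg/L
k = 0.2                                      # assumed first-order decay constant, 1/h

resid_2h = []
for C0 in doses:
    C = C0 * np.exp(-k * t)                  # synthetic decay data for this aliquot
    slope, intercept = np.polyfit(t, np.log(C), 1)    # fit ln C = ln C0 - k t
    resid_2h.append(np.exp(intercept + slope * 2.0))  # residual at t = 2 h

# residual at a fixed time is linear in the initial dose
m, b = np.polyfit(doses, np.array(resid_2h), 1)
dose_needed = (0.5 - b) / m   # dose giving a 0.5 mg/L residual at 2 h
```

Inverting the fitted line, as in the last step, is exactly the paper's point: one set of decay experiments yields the dose required for any target residual at any time inside the experimental window.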

  17. Full Waveform Inversion Using Student's t Distribution: a Numerical Study for Elastic Waveform Inversion and Simultaneous-Source Method

    NASA Astrophysics Data System (ADS)

    Jeong, Woodon; Kang, Minji; Kim, Shinwoong; Min, Dong-Joo; Kim, Won-Ki

    2015-06-01

    Seismic full waveform inversion (FWI) has primarily been based on least-squares optimization of data residuals. However, the least-squares objective function is sensitive to noise and outliers. There have been numerous studies to enhance the robustness of FWI by using robust objective functions, such as l1-norm-based objective functions. However, the l1-norm can suffer from a singularity problem when the residual wavefield is very close to zero. Recently, Student's t distribution has been applied to acoustic FWI and gives reasonable results for noisy data. Student's t distribution has an overdispersed density function compared with the normal distribution, and is thus useful for data with outliers. In this study, we investigate the feasibility of Student's t distribution for elastic FWI by comparing its basic properties with those of the l2-norm and l1-norm objective functions and by applying the three methods to noisy data. Our experiments show that the l2-norm is sensitive to noise, whereas the l1-norm and Student's t distribution objective functions give relatively stable and reasonable results for noisy data. When noise patterns are complicated, e.g., a combination of missing traces, unexpected outliers, and random noise, FWI based on Student's t distribution gives better results than l1- and l2-norm FWI. We also examine the application of simultaneous-source methods to acoustic FWI based on Student's t distribution. Computing the expectation of the coefficients of the gradient and crosstalk noise terms and plotting the signal-to-noise ratio against iteration, we confirm that crosstalk noise is suppressed as the iterations progress, even when simultaneous-source FWI is combined with Student's t distribution. From our experiments, we conclude that FWI based on Student's t distribution can retrieve subsurface material properties with less distortion from noise than l1- and l2-norm FWI, and that the simultaneous-source method can be adopted to improve its computational efficiency.
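
The robustness argument can be made concrete by comparing influence functions (the derivative of each misfit with respect to the residual), which is what drives the adjoint source in FWI. The Student's t form below is the standard negative log-likelihood up to a constant; the ν and scale values are hypothetical.

```python
import numpy as np

def influence_l2(r):
    return r                               # unbounded: outliers dominate the gradient

def influence_l1(r):
    return np.sign(r)                      # bounded, but singular at r = 0

def influence_t(r, nu=2.0, s=1.0):
    # derivative of 0.5*(nu + 1)*log(1 + r^2/(nu*s^2)):
    # smooth at r = 0 and decaying for large residuals
    return (nu + 1.0) * r / (nu * s**2 + r**2)

outlier = 100.0
g2, g1, gt = influence_l2(outlier), influence_l1(outlier), influence_t(outlier)
```

An l2 adjoint source scales linearly with an outlier residual, the l1 influence saturates at ±1 but is non-smooth at zero, and the Student's t influence both vanishes smoothly at zero and decays for large residuals, which is consistent with the stability the abstract reports for complicated noise.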

  18. Comparison of viscous-shock-layer solutions by time-asymptotic and steady-state methods. [flow distribution around a Jupiter entry probe

    NASA Technical Reports Server (NTRS)

    Gupta, R. N.; Moss, J. N.; Simmonds, A. L.

    1982-01-01

    Two flow-field codes employing time- and space-marching numerical techniques were evaluated. Both methods were used to analyze the flow field around a massively blown Jupiter entry probe under perfect-gas conditions. In order to obtain a direct point-by-point comparison, the computations were made using identical grids and turbulence models. For the same degree of accuracy, the space-marching scheme takes much less computer time than the time-marching method and would appear to provide accurate results for problems with nonequilibrium chemistry, free from the accumulation of local time-differencing error in the final solution that is inherent in time-marching methods. With the time-marching method, however, solutions are obtainable for realistic entry probe shapes with massive or uniform surface blowing rates, whereas with the space-marching technique it is difficult to obtain converged solutions for such flow conditions. The choice of numerical method is therefore problem dependent. Both methods give equally good results in the cases where results are compared with experimental data.

  19. Comparison of different static methods for assessment of AMD generation potential in mining waste dumps in the Muteh Gold Mines, Iran.

    PubMed

    Mohammadi, Zohreh; Modabberi, Soroush; Jafari, Mohammad Reza; Ajayebi, Kimia Sadat

    2015-06-01

    Acid mine drainage (AMD) gives rise to several problems in sulfide-bearing mineral deposits, whether in an ore body or in mining wastes and tailings. Hence, several methods and parameters have been proposed to evaluate the acid-producing and acid-neutralizing potential of a material. This research compares common static methods for evaluating the acid-production potential of mining wastes in the Muteh gold mines, using 62 samples taken from six waste dumps around the Senjedeh and Chah-Khatoun mines. According to a detailed mineralogical study, the waste materials are composed of mica-schist and quartz veins with a high amount of pyrite; they are therefore expected to be susceptible to acid production and, upon rainfall, to release acid drainage. All parameters introduced in the different methods, including APP, NNP, MPA, NPR, and NAGpH, were calculated and compared in order to predict the acid-generating and neutralization potential. Based on the analytical results and the calculation of the different parameters, all methods agree that the DWS-02 and DWS-03 waste dumps are acid-forming, which is clearly attributable to the high pyrite content of the samples. DWS-04 is considered non-acid-forming by all methods except method 8, which is uncertain about its acid-forming potential, and method 7, which assigns it a low potential. DWC-01 is acid-forming according to all methods except 8, 9, 10, and 11, which are likewise uncertain. The methods do not reach a consensus on the DWS-01 and DWC-02 waste dumps. Method 7 appears to give the most conservative results in all cases, while method 8 is unable to decide in some cases. We recommend relying on the results of methods 1, 2, 3, and 12 when deciding on further studies. According to the static tests used, the above criteria in the selected methods can therefore be used with confidence as a rule-of-thumb estimate.
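
The static-test parameters named in the abstract combine in a standard way; the sketch below uses rule-of-thumb screening thresholds from the general AMD literature, not the paper's specific methods 1-12, and the helper name and sample values are hypothetical.

```python
def amd_static_screen(total_s_pct, np_value):
    # Common static-test screening parameters:
    #   MPA = 31.25 * total sulfur (%)   [kg CaCO3 equivalent / tonne]
    #   NNP = NP - MPA  (net neutralization potential)
    #   NPR = NP / MPA  (neutralization potential ratio)
    mpa = 31.25 * total_s_pct
    nnp = np_value - mpa
    npr = np_value / mpa if mpa > 0 else float("inf")
    if nnp < -20 or npr < 1:
        return "likely acid-forming", nnp, npr
    if nnp > 20 and npr > 2:
        return "non-acid-forming", nnp, npr
    return "uncertain", nnp, npr

# pyrite-rich waste: high sulfur, little neutralization potential
label_acid, nnp_a, npr_a = amd_static_screen(3.0, 10.0)
# carbonate-buffered waste: low sulfur, ample neutralization potential
label_safe, nnp_s, npr_s = amd_static_screen(0.1, 50.0)
```

The "uncertain" band between the thresholds is exactly where the abstract reports the published methods disagreeing (e.g. on DWS-01 and DWC-02).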

  20. Semi-empirical spectrophotometric (SESp) method for the indirect determination of the ratio of cationic micellar binding constants of counterions X⁻ and Br⁻(K(X)/K(Br)).

    PubMed

    Khan, Mohammad Niyaz; Yusof, Nor Saadah Mohd; Razak, Norazizah Abdul

    2013-01-01

    The semi-empirical spectrophotometric (SESp) method, for the indirect determination of ion exchange constants (K(X)(Br)) of ion exchange processes occurring between counterions (X⁻ and Br⁻) at the cationic micellar surface, is described in this article. The method uses an anionic spectrophotometric probe molecule, N-(2-methoxyphenyl)phthalamate ion (1⁻), which measures the effects of varying concentrations of inert inorganic or organic salt (Na(v)X, v = 1, 2) on absorbance, (A(ob)) at 310 nm, of samples containing constant concentrations of 1⁻, NaOH and cationic micelles. The observed data fit satisfactorily to an empirical equation which gives the values of two empirical constants. These empirical constants lead to the determination of K(X)(Br) (= K(X)/K(Br) with K(X) and K(Br) representing cationic micellar binding constants of counterions X and Br⁻). This method gives values of K(X)(Br) for both moderately hydrophobic and hydrophilic X⁻. The values of K(X)(Br), obtained by using this method, are comparable with the corresponding values of K(X)(Br), obtained by the use of semi-empirical kinetic (SEK) method, for different moderately hydrophobic X. The values of K(X)(Br) for X = Cl⁻ and 2,6-Cl₂C6H₃CO₂⁻, obtained by the use of SESp and SEK methods, are similar to those obtained by the use of other different conventional methods.

  1. Determining the cost effectiveness of a smoke alarm give-away program using data from a randomized controlled trial.

    PubMed

    Ginnelly, Laura; Sculpher, Mark; Bojke, Chris; Roberts, Ian; Wade, Angie; Diguiseppi, Carolyn

    2005-10-01

    In 2001, 486 deaths and 17,300 injuries occurred in domestic fires in the UK. Domestic fires represent a significant cost to the UK economy, with the value of property loss alone estimated at £375 million in 1999. In 2001 in the US, there were 383,500 home fires, resulting in 3,110 deaths, 15,200 injuries and $5.5 billion in direct property damage. A cluster RCT was conducted to determine whether a smoke alarm give-away program, directed at an inner-city UK population, is effective and cost-effective in reducing the risk of fire-related deaths and injuries. Forty areas were randomized to the give-away or control group. The number of injuries/deaths and the number of fires in each ward were collected prospectively. Cost-effectiveness analysis was undertaken to relate the number of deaths/injuries to resource use (damage, fire service, healthcare and give-away costs). The analytical methods reflected the characteristics of the trial data, including the cluster design and a large number of zero costs and effects. The mean cost for a household in a give-away ward, including the cost of the program, was £12.76, compared with £10.74 for a control ward. The total mean number of deaths and injuries was greater in the intervention wards than in the control wards (6.45 vs 5.17). When an injury/death avoided is valued at £1,000, a smoke alarm give-away has a probability of being cost-effective of 0.15. A smoke alarm give-away program, as administered in the trial, is unlikely to represent a cost-effective use of resources.

  2. Comparative Study of Wing Lift Distribution Analysis for High Altitude Long Endurance (HALE) Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    Silitonga, Faber Y.; Agoes Moelyadi, M.

    2018-04-01

    The development of High Altitude Long Endurance (HALE) Unmanned Aerial Vehicles (UAVs) has emerged for both civil and military purposes. The ability to operate at high altitude with long endurance is important in supporting maritime applications. A preliminary analysis of the wing lift distribution of a HALE UAV is presented to give decisive consideration for its early development, ensuring that the generated lift is enough to compensate for its own weight. A theoretical approach using Prandtl's non-linear lifting line theory is compared with a modern numerical approach using Computational Fluid Dynamics (CFD). The wing lift distributions calculated by both methods are compared to assess their reliability. The HALE UAV ITB has a high-aspect-ratio wing and is analyzed at the cruise flight condition. The results indicate differences between the non-linear lifting line and CFD methods.

  3. A comparative physical evaluation of four X-ray films.

    PubMed

    Egyed, M; Shearer, D R

    1981-09-01

    In this study, four general purpose radiographic films (Agfa Gevaert Curix RP-1, duPont Cronex 4, Fuji RX, and Kodak XRP-1) were compared using three independent techniques. By examining the characteristic curves for the four films, film speed and contrast were compared over the diagnostically useful density range. These curves were generated using three methods: (1) irradiation of a standard film cassette lined with high-speed screens, covered by a twelve-step aluminum wedge; (2) direct exposure of film strips to an electro-luminescent sensitometer; and (3) direct irradiation of a standard film cassette lined with high-speed screens. The latter technique provided quantitative values for film speed and relative contrast. All three techniques provided virtually identical results and indicate that, under properly controlled conditions, simplified methods of film testing can give results equivalent to those obtained by more sophisticated techniques.

  4. Repressing the effects of variable speed harmonic orders in operational modal analysis

    NASA Astrophysics Data System (ADS)

    Randall, R. B.; Coats, M. D.; Smith, W. A.

    2016-10-01

    Discrete frequency components such as machine shaft orders can disrupt the operation of normal Operational Modal Analysis (OMA) algorithms. With constant-speed machines, they have been removed using time synchronous averaging (TSA). This paper compares two approaches for varying-speed machines. In one method, signals are transformed into the order domain and, after the removal of shaft-speed-related components by a cepstral notching method, are transformed back to the time domain to allow normal OMA. In the other, simpler approach, an exponential short-pass lifter is applied directly to the time-domain cepstrum to enhance the modal information at the expense of other disturbances. For simulated gear signals with speed variations of both ±5% and ±15%, the simpler approach was found to give better results. The TSA method is shown not to work in either case. The paper compares the results with those obtained using stationary random excitation.
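
The exponential short-pass lifter idea can be sketched in a few lines: transform to the cepstrum, down-weight high quefrency with an exponential window, and transform back to a smoothed log spectrum. The signal and parameter values below are hypothetical, not the paper's gear simulations.

```python
import numpy as np

n = 1024
t = np.arange(n) / n
# one decaying "mode" (broad spectral peak) plus one steady "shaft order" (sharp line)
sig = np.exp(-5.0 * t) * np.sin(2 * np.pi * 60 * t) + np.sin(2 * np.pi * 200 * t)

log_mag = np.log(np.abs(np.fft.fft(sig)) + 1e-12)    # log magnitude spectrum
ceps = np.fft.ifft(log_mag).real                     # real cepstrum
q = np.minimum(np.arange(n), n - np.arange(n))       # symmetric quefrency index
tau = 30.0                                           # lifter time constant (samples)
log_lift = np.fft.fft(ceps * np.exp(-q / tau)).real  # liftered (smoothed) log spectrum
```

Sharp discrete lines correspond to slowly decaying high-quefrency content, so the exponential lifter suppresses them while the broad modal peaks, concentrated at low quefrency, survive largely intact.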

  5. Identifying the 630 nm auroral arc emission height: A comparison of the triangulation, FAC profile, and electron density methods

    NASA Astrophysics Data System (ADS)

    Megan Gillies, D.; Knudsen, D.; Donovan, E.; Jackel, B.; Gillies, R.; Spanswick, E.

    2017-08-01

    We present a comprehensive survey of 630 nm (red-line) emission discrete auroral arcs using the newly deployed Redline Emission Geospace Observatory. In this study we discuss the need for observations of 630 nm aurora and issues with the large-altitude range of the red-line aurora. We compare field-aligned currents (FACs) measured by the Swarm constellation of satellites with the location of 10 red-line (630 nm) auroral arcs observed by all-sky imagers (ASIs) and find that a characteristic emission height of 200 km applied to the ASI maps gives optimal agreement between the two observations. We also compare the new FAC method against the traditional triangulation method using pairs of all-sky imagers (ASIs), and against electron density profiles obtained from the Resolute Bay Incoherent Scatter Radar-Canadian radar, both of which are consistent with a characteristic emission height of 200 km.

  6. Element free Galerkin formulation of composite beam with longitudinal slip

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmad, Dzulkarnain; Mokhtaram, Mokhtazul Haizad; Badli, Mohd Iqbal

    2015-05-15

    The behaviour of the two materials in a composite beam is assumed to be partially interacting when longitudinal slip at the interfacial surfaces is considered. While such beams are commonly analysed by mesh-based formulations, this study applies a meshless formulation, the Element Free Galerkin (EFG) method, to the numerical analysis of beam partial interaction. As a meshless formulation discretises the problem domain by nodes only, the EFG method uses a Moving Least Squares (MLS) approach for the shape function formulation, with its weak form developed using a variational method. The essential boundary conditions are enforced by Lagrange multipliers. The proposed EFG formulation gives comparable results, verified against the analytical solution, thus demonstrating its applicability to partial interaction problems. Based on the numerical test results, the cubic spline and quartic spline weight functions yield better accuracy for the EFG formulation than the other weight functions considered.
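
The two weight functions the study found most accurate are standard choices in the EFG literature; a sketch of their usual forms follows, with r the normalized distance |x − x_i|/d_i to node i (the function names are ours, and these are the textbook definitions rather than anything specific to this paper).

```python
import numpy as np

def cubic_spline_weight(r):
    # standard EFG cubic spline weight: C1-continuous, compact support on r <= 1
    r = np.asarray(r, dtype=float)
    w = np.where(r <= 0.5,
                 2.0 / 3.0 - 4.0 * r**2 + 4.0 * r**3,
                 4.0 / 3.0 - 4.0 * r + 4.0 * r**2 - (4.0 / 3.0) * r**3)
    return np.where(r <= 1.0, w, 0.0)

def quartic_spline_weight(r):
    # quartic spline weight: also compactly supported, with zero slope at r = 1
    r = np.asarray(r, dtype=float)
    return np.where(r <= 1.0, 1.0 - 6.0 * r**2 + 8.0 * r**3 - 3.0 * r**4, 0.0)
```

Both weights are smooth, peak at the node, and vanish at the edge of the support, which is what makes the MLS shape functions well conditioned; the choice between them mainly trades off continuity order against locality.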

  7. Three-dimensional organization of block copolymers on "DNA-minimal" scaffolds.

    PubMed

    McLaughlin, Christopher K; Hamblin, Graham D; Hänni, Kevin D; Conway, Justin W; Nayak, Manoj K; Carneiro, Karina M M; Bazzi, Hassan S; Sleiman, Hanadi F

    2012-03-07

    Here, we introduce a 3D-DNA construction method that assembles a minimum number of DNA strands in quantitative yield, to give a scaffold with a large number of single-stranded arms. This DNA frame is used as a core structure to organize other functional materials in 3D as the shell. We use the ring-opening metathesis polymerization (ROMP) to generate block copolymers that are covalently attached to DNA strands. Site-specific hybridization of these DNA-polymer chains on the single-stranded arms of the 3D-DNA scaffold gives efficient access to DNA-block copolymer cages. These biohybrid cages possess polymer chains that are programmably positioned in three dimensions on a DNA core and display increased nuclease resistance as compared to unfunctionalized DNA cages. © 2012 American Chemical Society

  8. A comparative study of amplitude calibrations for the East Asia VLBI Network: A priori and template spectrum methods

    NASA Astrophysics Data System (ADS)

    Cho, Ilje; Jung, Taehyun; Zhao, Guang-Yao; Akiyama, Kazunori; Sawada-Satoh, Satoko; Kino, Motoki; Byun, Do-Young; Sohn, Bong Won; Shibata, Katsunori M.; Hirota, Tomoya; Niinuma, Kotaro; Yonekura, Yoshinori; Fujisawa, Kenta; Oyama, Tomoaki

    2017-12-01

    We present the results of a comparative study of amplitude calibrations for the East Asia VLBI Network (EAVN) at 22 and 43 GHz using two different methods, an "a priori" method and a "template spectrum" method, with particular attention to lower-declination sources. Using data sets from early EAVN observations, we investigated the elevation dependence of the gain values at the seven stations of KaVA (the KVN and VERA Array) and three additional telescopes in Japan (Takahagi 32 m, Yamaguchi 32 m, and Nobeyama 45 m). By comparing the gain values obtained independently from these two methods, we found that they were consistent within 10% at elevations higher than 10°. We also found that the total flux densities of two images produced from the different amplitude calibrations agreed within 10% at both 22 and 43 GHz. Furthermore, by using the template spectrum method, additional radio telescopes can participate in KaVA (i.e., EAVN), giving a notable sensitivity increase. Our results therefore help constrain the conditions for measuring VLBI amplitudes reliably with EAVN, and we discuss the potential expansion to the telescopes comprising EAVN.

  9. An easy access to nanocrystalline alkaline earth metal fluorides - just by shaking

    NASA Astrophysics Data System (ADS)

    Dreger, M.; Scholz, G.; Kemnitz, E.

    2012-04-01

    High energy ball milling as fast, direct and solvent free method allows an easy access to nanocrystalline alkaline earth metal fluorides MF2 (M: Mg, Ca, Sr, Ba). Comparable metal sources (acetates, carbonates, hydroxides, alkoxides) were used for the reaction with NH4F as fluorinating agent. Even very simple manual shaking experiments between NH4F and the corresponding hydroxides in the stoichiometric ratio (M:F = 1:2, M: Ca, Sr, Ba) give phase pure fluorides. Moreover, comparable classical thermal reactions in closed crucibles at higher temperatures provide phase pure crystalline fluorides in nearly all cases as well.

  10. Near-surface shear-wave velocity measurements in unlithified sediment

    USGS Publications Warehouse

    Richards, B.T.; Steeples, D.; Miller, R.; Ivanov, J.; Peterie, S.; Sloan, S.D.; McKenna, J.R.

    2011-01-01

    S-wave velocity can be directly correlated to material stiffness and lithology, making it a valuable physical property with uses in construction, engineering, and environmental projects. This study compares different methods for measuring S-wave velocities, investigates and identifies the differences among the methods' results, and prioritizes the methods for optimal S-wave use at the U.S. Army's Yuma Proving Ground (YPG). Multichannel Analysis of Surface Waves (MASW) and S-wave tomography were used to generate S-wave velocity profiles; each method has advantages and disadvantages. A strong signal-to-noise ratio at the study site gives the MASW method promising resolution. S-wave first arrivals were picked on impulsive sledgehammer data, which were then used for the tomography process. Three-component downhole seismic data were collected in-line with a locking geophone, providing ground truth against which to compare the data sets and draw conclusions about the validity of each. Results from these S-wave measurement techniques are compared with borehole seismic data and with lithology data from continuous samples to help ascertain the accuracy, and therefore applicability, of each method. This study helps select the best methods for obtaining S-wave velocities in media much like the unconsolidated sediments at YPG. © 2011 Society of Exploration Geophysicists.

  11. Integration of Educational and Research Activities of Medical Students (Experience of the Medical Faculty of Saint Petersburg State University).

    PubMed

    Balakhonov, Aleksei V; Churilov, Leonid P; Erman, Mikhail V; Shishkin, Aleksandr N; Slepykh, Lyudmila A; Stroev, Yuri I; Utekhin, Vladimir J; Basantsova, Natalia Y

    2017-12-01

    The article is devoted to the role of research activity of medical students in the higher education of physicians. The teaching of physicians in classical universities and in specialized medical schools is compared. The history of physicians' training in Russia in the imperial, Soviet and post-Soviet periods is reviewed and compared with the development of higher medical education in other countries. The article describes all of the failed attempts to establish a Medical Faculty within the oldest classical university of Russia, crowned by the history of the last, successful attempt at its establishment. The authors' experience of joining education and research in the curriculum and extra-curricular life of this Medical Faculty is discussed. The problems of specialization and fundamentalization of medical education are analysed. Clinical reasoning and the reasoning of the scholar-experimentalist are compared. The article reviews the role of term and course papers and the significance of self-studies and the graduation thesis in the education of a physician. The paper gives an original definition of interactive learning, and discusses the methods and pathways of intermingling fundamental science and clinical medicine in medical teaching to achieve the combined competencies of medical doctor and biomedical researcher.

  12. Effects of nitrogenous substituent groups on the benzene dication

    NASA Astrophysics Data System (ADS)

    Forgy, C. C.; Schlimgen, A. W.; Mazziotti, D. A.

    2018-05-01

    The benzene dication possesses a pentagonal-pyramidal structure with a hexacoordinated carbon. In contrast, halogenated benzene dications retain a structure similar to their parent molecules. In this work, we report theoretical studies of the structures of benzene dications with nitrogenous substituents. We find that the nitrobenzene dication favours a near-ideal pentagonal-pyramidal structure, while the aniline dication favours a flat, hexagonal structure. Reduced-density-matrix methods give predictions in agreement with available ab initio calculations and experiment. These results are also compared with those from the Hartree-Fock method and density functional theory.

  13. Inventory control of raw material using silver meal heuristic method in PR. Trubus Alami Malang

    NASA Astrophysics Data System (ADS)

    Ikasari, D. M.; Lestari, E. R.; Prastya, E.

    2018-03-01

    The purpose of this study was to compare the total inventory cost calculated using the method applied by PR. Trubus Alami with that of the Silver Meal Heuristic (SMH) method. The study started by forecasting cigarette demand from July 2016 to June 2017 (48 weeks) using the additive decomposition forecasting method. Additive decomposition was used because it has the lowest Mean Absolute Deviation (MAD) and Mean Squared Deviation (MSD) compared with other methods such as multiplicative decomposition, moving average, single exponential smoothing, and double exponential smoothing. The forecasting results were then converted into raw material needs, and the inventory cost was calculated using the SMH method. As expected, the results show that the order frequency using the SMH method was smaller than that of the method applied by Trubus Alami, which affected the total inventory cost. Using the SMH method gave a 29.41% lower inventory cost, a difference of IDR 21,290,622. The findings therefore indicate that PR. Trubus Alami should apply the SMH method if the company wants to reduce its total inventory cost.
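
The Silver-Meal rule itself is simple to state: keep adding periods to the current lot while the average cost per period (setup plus accumulated holding cost) still decreases, then start a new lot. A sketch with hypothetical demand and cost figures (not the company's data):

```python
def silver_meal(demand, setup_cost, holding_cost):
    # Silver-Meal lot-sizing heuristic: extend each lot while the average
    # cost per covered period keeps decreasing.
    n, orders, j = len(demand), [], 0
    while j < n:
        best_avg, T = float("inf"), 0
        cost = setup_cost
        while j + T < n:
            if T > 0:
                # demand for period j+T is held in stock for T periods
                cost += holding_cost * T * demand[j + T]
            avg = cost / (T + 1)
            if avg > best_avg:
                break            # average cost rose: stop extending this lot
            best_avg, T = avg, T + 1
        orders.append((j, sum(demand[j:j + T])))  # (order period, lot size)
        j += T
    return orders

# hypothetical weekly demands, setup cost 50 per order, holding cost 1/unit/period
orders = silver_meal([18, 30, 42, 5, 20], 50, 1)
```

Here the heuristic places three orders, of 48, 47, and 20 units, trading fewer setups against holding cost; this is the same mechanism that reduces the order frequency (and hence total inventory cost) in the study.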

  14. Variational symplectic algorithm for guiding center dynamics in the inner magnetosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Jinxing; Pu Zuyin; Xie Lun

    Charged particle dynamics in the magnetosphere is multiscale in time and space; therefore, numerical accuracy over a long integration time is required. A variational symplectic integrator (VSI) [H. Qin and X. Guan, Phys. Rev. Lett. 100, 035006 (2008); H. Qin, X. Guan, and W. M. Tang, Phys. Plasmas 16, 042510 (2009)] for the guiding-center motion of charged particles in a general magnetic field is applied to study the dynamics of charged particles in the magnetosphere. Instead of discretizing the differential equations of the guiding-center motion, the action of the guiding-center motion is discretized and minimized to obtain the iteration rules for advancing the dynamics. The VSI conserves exactly a discrete Lagrangian symplectic structure and has better numerical properties over a long integration time than standard integrators, such as the standard and adaptive fourth-order Runge-Kutta (RK4) methods. Applying the VSI method to guiding-center dynamics in the inner magnetosphere, we can accurately calculate particles' orbits for an arbitrarily long simulation time with good conservation properties. When a time-independent convection and corotation electric field is considered, the VSI method gives the accurate single-particle orbit, while the RK4 method gives an incorrect orbit due to its intrinsic error accumulation over a long integration time.
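
The long-time advantage of symplectic integration can be seen even on a toy problem. The sketch below contrasts a first-order symplectic step (semi-implicit Euler) with a non-symplectic explicit Euler step on a harmonic oscillator; it illustrates the generic conservation point only, not the paper's guiding-center VSI or its RK4 comparison.

```python
def explicit_euler(q, p, dt, steps):      # non-symplectic reference
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q
    return q, p

def symplectic_euler(q, p, dt, steps):    # semi-implicit (symplectic) Euler
    for _ in range(steps):
        p = p - dt * q                    # kick
        q = q + dt * p                    # drift with the updated momentum
    return q, p

def energy(q, p):                          # H = (q^2 + p^2) / 2 for the oscillator
    return 0.5 * (q * q + p * p)

dt, steps = 0.05, 20000                    # roughly 160 oscillation periods
qe, pe = explicit_euler(1.0, 0.0, dt, steps)
qs, ps = symplectic_euler(1.0, 0.0, dt, steps)
drift_explicit = abs(energy(qe, pe) - 0.5)    # grows exponentially with time
drift_symplectic = abs(energy(qs, ps) - 0.5)  # stays bounded for all time
```

The symplectic map conserves a nearby "shadow" energy exactly, so its energy error oscillates but never accumulates, while the explicit Euler energy grows by a factor (1 + dt²) every step; the same structural argument underlies the VSI's advantage over RK4 in long magnetospheric runs.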

  15. Who gives pain relief to children?

    PubMed Central

    Spedding, R L; Harley, D; Dunn, F J; McKinney, L A

    1999-01-01

    OBJECTIVE: To compare pre-hospital parental administration of pain relief for children with that of the accident and emergency (A&E) department staff and to ascertain the reason why pre-hospital analgesia is not being given. DESIGN/METHODS: An anonymous prospective questionnaire was given to parents/guardians of children < 17 years. The children were all self referred with head injuries or limb problems including burns. The first part asked for details of pain relief before attendance in the A&E department. The second part of the questionnaire contained a section for the examining doctor and triage nurse to fill in. The duration of the survey was 28 days. RESULTS: Altogether 203 of 276 (74%) of children did not receive pain relief before attendance at the A&E department. Reasons for parents not giving pain relief included 57/203 (28%) who thought that giving painkillers would be harmful; 43/203 (21%) who did not give painkillers because the accident did not happen at home; and 15/203 (7%) who thought analgesia was the responsibility of the hospital. Eighty eight of the 276 (32%) did not have any painkillers, suitable for children, at home. A&E staff administered pain relief in 189/276 (68%). CONCLUSIONS: Parents often do not give their children pain relief before attending the A&E department. Parents think that giving painkillers may be harmful and often do not have simple analgesics at home. Some parents do not perceive that their child is in pain. Parents require education about appropriate pre-hospital pain relief for their children. PMID:10417932

  16. Interaction between human blood platelets, viruses and antibodies. IV. Post-Rubella thrombocytopenic purpura and platelet aggregation by Rubella antigen–antibody interaction

    PubMed Central

    Myllylä, G.; Vaheri, A.; Vesikari, T.; Penttinen, K.

    1969-01-01

    A new method of measuring antibodies by observing sedimentation patterns of platelets has been compared with the complement fixation and haemagglutination inhibition techniques in ten cases of Rubella and seven cases of post-Rubella thrombocytopenic purpura. The method is based on the aggregation of platelets by the joint action of antibody and small size antigens. The platelet aggregation method gave exceptionally high titres in cases of post-Rubella thrombocytopenic purpura. Other serologic methods did not give these high titres. The hypothesis that small size virus antigen and antibody against it are both needed to induce thrombocytopenia during the recovery period is discussed. Large amounts of both may result in clinical symptoms. PMID:5814719

  17. A new serotyping method for Klebsiella species: evaluation of the technique.

    PubMed Central

    Riser, E; Noone, P; Bonnet, M L

    1976-01-01

    A new indirect fluorescent typing method for Klebsiella species is compared with an established method, capsular swelling. The fluorescent antibody (FA) technique was tested with standards and unknowns, and the results were checked by capsular swelling. Several unknowns were sent away for confirmation of typing by capsular swelling. The FA method was also tried by a technician in the routine department for blind identification of standards. Fluorescence typing gives close correlation with the established capsular swelling technique but has greater sensitivity; allows more economical use of expensive antisera; possesses greater objectivity, as it requires less operator skill in the reading of results; resolves most of the cross reactions observed with capsular swelling; and has a higher per cent success rate in identification. PMID:777043

  18. Steering Quantum Dynamics of a Two-Qubit System via Optimal Bang-Bang Control

    NASA Astrophysics Data System (ADS)

    Hu, Juju; Ke, Qiang; Ji, Yinghua

    2018-02-01

    Optimizing the control time of quantum systems has attracted decades of attention in control science, as it improves efficiency and suppresses environment-induced decoherence. After analyzing the advantages and disadvantages of existing Lyapunov control, we use a bang-bang optimal control technique to investigate fast state control in a closed two-qubit quantum system, and give three optimized control field design methods. Numerical simulation experiments indicate the effectiveness of the methods. Compared to the standard Lyapunov control or standard bang-bang control method, the optimized control field design methods effectively shorten the state control time and avoid the high-frequency oscillation that occurs in bang-bang control.

  19. THTM: A template matching algorithm based on HOG descriptor and two-stage matching

    NASA Astrophysics Data System (ADS)

    Jiang, Yuanjie; Ruan, Li; Xiao, Limin; Liu, Xi; Yuan, Feng; Wang, Haitao

    2018-04-01

    We propose a novel method for template matching named THTM - a template matching algorithm based on the HOG (histogram of oriented gradients) descriptor and two-stage matching. We rely on fast construction of the HOG descriptor and on two-stage matching, which jointly yield a highly accurate matching approach. THTM builds on HOG and introduces a two-stage matching procedure, whereas traditional methods match only once. Our contribution is to apply HOG to template matching successfully and to present two-stage matching, which markedly improves matching accuracy based on the HOG descriptor. We analyze the key features of THTM and compare it to other commonly used alternatives on a challenging real-world dataset. Experiments show that our method outperforms the comparison methods.
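    The coarse-then-refine idea in the abstract can be sketched independently of the HOG details, which the abstract does not spell out. The sketch below is a minimal illustration using a sum-of-absolute-differences score on raw intensities as a stand-in for the paper's HOG-based score; the function names and the stride parameter are illustrative assumptions, not the authors' implementation.

    ```python
    def sad(img, tpl, y, x):
        """Sum of absolute differences between the template and the image patch at (y, x)."""
        th, tw = len(tpl), len(tpl[0])
        return sum(abs(img[y + i][x + j] - tpl[i][j])
                   for i in range(th) for j in range(tw))

    def two_stage_match(img, tpl, stride=2):
        """Stage 1: coarse scan on a stride grid; stage 2: refine around the best coarse hit."""
        h, w = len(img), len(img[0])
        th, tw = len(tpl), len(tpl[0])
        # Stage 1: evaluate only every `stride`-th position.
        coarse_positions = [(y, x) for y in range(0, h - th + 1, stride)
                                   for x in range(0, w - tw + 1, stride)]
        cy, cx = min(coarse_positions, key=lambda p: sad(img, tpl, *p))
        # Stage 2: exhaustive search in a small window around the coarse winner.
        fine_positions = [(y, x)
                          for y in range(max(0, cy - stride), min(h - th, cy + stride) + 1)
                          for x in range(max(0, cx - stride), min(w - tw, cx + stride) + 1)]
        return min(fine_positions, key=lambda p: sad(img, tpl, *p))
    ```

    The two-stage split trades a small refinement window for skipping most of the full search grid, which is where the speed-up over single-pass matching comes from.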

  20. The integral line-beam method for gamma skyshine analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Bassett, M.S.

    1991-03-01

    This paper presents a refinement of a simplified method, based on line-beam response functions, for performing skyshine calculations for shielded and collimated gamma-ray sources. New coefficients for an empirical fit to the line-beam response function are provided and a prescription for making the response function continuous in energy and emission direction is introduced. For a shielded source, exponential attenuation and a buildup factor correction for scattered photons in the shield are used. Results of the new integral line-beam method of calculation are compared to a variety of benchmark experimental data and calculations and are found to give generally excellent agreement at a small fraction of the computational expense required by other skyshine methods.
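    The shielded-source treatment described above, exponential attenuation times a buildup factor, can be sketched as a single correction factor. The linear buildup model and its coefficient below are illustrative assumptions, not values from the paper.

    ```python
    import math

    def shield_transmission(mu, t, buildup):
        """Shielded-source factor applied before the line-beam response:
        exponential attenuation exp(-mu*t) times a buildup correction
        accounting for photons scattered within the shield.
        mu: attenuation coefficient (1/cm), t: shield thickness (cm)."""
        return buildup(mu * t) * math.exp(-mu * t)

    # Hypothetical linear buildup model B(x) = 1 + a*x; the coefficient 0.8
    # is an assumed illustration, not a fitted value from the paper.
    def linear_buildup(x, a=0.8):
        return 1.0 + a * x
    ```

    With no shield (mu*t = 0) the factor is exactly 1; for thicker shields the buildup term partially offsets the exponential attenuation, as scattered photons still reach the detector.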

  1. Study by AES, EELS Spectroscopy of electron Irradiation on InP and InPO4/InP in comparison with Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Lounis, Z.; Bouslama, M.; Hamaida, K.; Jardin, C.; Abdellaoui, A.; Ouerdane, A.; Ghaffour, M.; Berrouachedi, N.

    2012-02-01

    We characterize InP and InPO4/InP surfaces subjected to electron-beam irradiation using Auger Electron Spectroscopy (AES) combined with Electron Energy Loss Spectroscopy (EELS). The incident electrons break In-P chemical bonds, and the beam also stimulates oxidation of the top layers of the InP surface. Moreover, the InPO4 oxide grown on InP proves very sensitive to electron-beam irradiation, as shown by monitoring EELS spectra recorded as a function of irradiation time: a new oxide, thought to be In2O3, appears. We use the CASINO simulation package (Monte Carlo simulation of electron trajectories in solids) to determine accurately the energy loss of backscattered electrons, and compare the results with those obtained by EELS spectroscopy. The spectroscopic techniques alone cannot establish the depth affected during the interaction process, so we use this simulation method to determine the interaction of electrons with the matter.

  2. Activity Recognition for Personal Time Management

    NASA Astrophysics Data System (ADS)

    Prekopcsák, Zoltán; Soha, Sugárka; Henk, Tamás; Gáspár-Papanek, Csaba

    We describe an accelerometer-based activity recognition system for mobile phones with a special focus on personal time management. We compare several data mining algorithms for the automatic recognition task in single-user and multiuser scenarios, and improve accuracy with heuristics and advanced data mining methods. The results show that daily activities can be recognized with high accuracy and that integration with the RescueTime software can give good insights for personal time management.

  3. Absolute rate of the reaction of Cl(p-2) with molecular hydrogen from 200 - 500 K

    NASA Technical Reports Server (NTRS)

    Whytock, D. A.; Lee, J. H.; Michael, J. V.; Payne, W. A.; Stief, L. J.

    1976-01-01

    Rate constants for the reaction of atomic chlorine with hydrogen are measured from 200 - 500 K using the flash photolysis-resonance fluorescence technique. The results are compared with previous work and are discussed with particular reference to the equilibrium constant for the reaction and to relative rate data for chlorine atom reactions. Theoretical calculations, using the BEBO method with tunneling, give excellent agreement with experiment.

  4. Gamma spectrometry in the ITWG CMX-4 exercise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lakosi, L.; Zsigrai, J.; Kocsonya, A.

    Low enriched uranium samples of unknown origin were analyzed by 16 laboratories in the context of a Collaborative Materials Exercise (CMX), organized by the Nuclear Forensics International Technical Working Group (ITWG). The purpose was to compare and prioritize nuclear forensic methods and techniques, and to evaluate attribution capabilities among participants. This paper gives a snapshot of the gamma spectrometric capabilities of the participating laboratories and summarizes the results achieved by gamma spectrometry.

  5. Improved Models for Precession and Nutation

    DTIC Science & Technology

    2000-03-01

    in the process of constructing the series. A series due to Shirai and Fukushima (2000) also gives a somewhat comparable fit to data, improving on the...IERS 1996 have been effected recently by Shirai and Fukushima (2000) through refinements of the method and the use of more extensive data, in their...once these series are implemented in the software used for estimation of nutation amplitudes from VLBI data. It is known (Fukushima, 1991) that general

  6. Gamma spectrometry in the ITWG CMX-4 exercise

    DOE PAGES

    Lakosi, L.; Zsigrai, J.; Kocsonya, A.; ...

    2018-01-05

    Low enriched uranium samples of unknown origin were analyzed by 16 laboratories in the context of a Collaborative Materials Exercise (CMX), organized by the Nuclear Forensics International Technical Working Group (ITWG). The purpose was to compare and prioritize nuclear forensic methods and techniques, and to evaluate attribution capabilities among participants. This paper gives a snapshot of the gamma spectrometric capabilities of the participating laboratories and summarizes the results achieved by gamma spectrometry.

  7. Global analysis of bacterial transcription factors to predict cellular target processes.

    PubMed

    Doerks, Tobias; Andrade, Miguel A; Lathe, Warren; von Mering, Christian; Bork, Peer

    2004-03-01

    Whole-genome sequences are now available for >100 bacterial species, giving unprecedented power to comparative genomics approaches. We have applied genome-context methods to predict target processes that are regulated by transcription factors (TFs). Of 128 orthologous groups of proteins annotated as TFs, to date, 36 are functionally uncharacterized; in our analysis we predict a probable cellular target process or biochemical pathway for half of these functionally uncharacterized TFs.

  8. Simulated breeding with QU-GENE graphical user interface.

    PubMed

    Hathorn, Adrian; Chapman, Scott; Dieters, Mark

    2014-01-01

    Comparing the efficiencies of breeding methods with field experiments is a costly, long-term process. QU-GENE is a highly flexible genetic and breeding simulation platform capable of simulating the performance of a range of different breeding strategies and for a continuum of genetic models ranging from simple to complex. In this chapter we describe some of the basic mechanics behind the QU-GENE user interface and give a simplified example of how it works.

  9. A New Approach to Detect Mover Position in Linear Motors Using Magnetic Sensors

    PubMed Central

    Paul, Sarbajit; Chang, Junghwan

    2015-01-01

    A new method to detect the mover position of a linear motor is proposed in this paper. The method employs a simple, inexpensive Hall-effect sensor-based magnetic sensor unit to detect the mover position of the linear motor. As the linear motor moves, Hall-effect sensor modules separated by 120° electrically, following the idea of the three-phase balanced condition (va + vb + vc = 0), produce three phase signals. The amplitudes of the sensor output voltage signals are adjusted to unit amplitude to minimize amplitude errors. A three-to-two phase transformation is then applied to the unit-amplitude signals to reduce the harmonic components at multiples of three. The final output thus obtained is converted to position data by use of the arctangent function. The measurement accuracy of the new method is analyzed by experiments and compared with the conventional two-phase method. Using the same number of sensor modules as the conventional two-phase method, the proposed method gives more accurate position information than the conventional system, where sensors are separated by 90° electrical angles. PMID:26506348
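    The signal chain described above (three signals 120° apart, amplitude normalization, three-to-two phase transformation, arctangent) can be sketched with a generic Clarke transform. This is a textbook-style sketch consistent with the abstract, not the authors' exact code; it assumes the three inputs have already been normalized to unit amplitude.

    ```python
    import math

    def position_angle(va, vb, vc):
        """Estimate the electrical angle from three unit-amplitude signals
        separated by 120 deg. Clarke (three-to-two phase) transform followed
        by an arctangent; the common scale factor cancels in atan2."""
        alpha = va - 0.5 * (vb + vc)             # in-phase component (1.5*cos(theta))
        beta = (math.sqrt(3) / 2.0) * (vb - vc)  # quadrature component (1.5*sin(theta))
        return math.atan2(beta, alpha)
    ```

    Because alpha and beta carry the same 1.5 scale factor, atan2 recovers the angle exactly for ideal balanced inputs, which is why the transform also suppresses any common-mode (triplen) harmonic shared by the three signals.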

  10. Methods for engaging stakeholders in comparative effectiveness research: a patient-centered approach to improving diabetes care.

    PubMed

    Schmittdiel, Julie A; Desai, Jay; Schroeder, Emily B; Paolino, Andrea R; Nichols, Gregory A; Lawrence, Jean M; O'Connor, Patrick J; Ohnsorg, Kris A; Newton, Katherine M; Steiner, John F

    2015-06-01

    Engaging stakeholders in the research process has the potential to improve quality of care and the patient care experience. Online patient community surveys can elicit important topic areas for comparative effectiveness research. Stakeholder meetings with substantial patient representation, as well as representation from health care delivery systems and research funding agencies, are a valuable tool for selecting and refining pilot research and quality improvement projects. Giving patient stakeholders a deciding vote in selecting pilot research topics helps ensure their 'voice' is heard. Researchers and health care leaders should continue to develop best practices and strategies for increasing patient involvement in comparative effectiveness and delivery science research.

  11. The effect of different calculation methods of flywheel parameters on the Wingate Anaerobic Test.

    PubMed

    Coleman, S G; Hale, T

    1998-08-01

    Researchers compared different methods of calculating the kinetic parameters of friction-braked cycle ergometers, and the subsequent effects on calculated power outputs in the Wingate Anaerobic Test (WAnT). Three methods of determining flywheel moment of inertia and frictional torque were investigated, requiring "run-down" tests and segmental geometry. The parameters were used to calculate corrected power outputs from 10 males in a 30-s WAnT against a load related to body mass (0.075 kg·kg⁻¹). The Wingate indices of maximum (5 s) power, work, and fatigue were also compared. Significant differences were found between uncorrected and corrected power outputs and between correction methods (p < .05). The same finding was evident for all Wingate indices (p < .05). The results suggest that the WAnT must be corrected to give true power outputs and that choosing an appropriate correction calculation is important. Determining flywheel moment of inertia and frictional torque using unloaded run-down tests is recommended.
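    The correction at issue can be illustrated with a generic torque balance: instantaneous power is angular velocity times total torque, including the flywheel's inertial term that uncorrected calculations omit. This is a textbook sketch under that assumption, not the specific formulas of the three methods compared in the paper; all parameter names are illustrative.

    ```python
    def corrected_power(omega, alpha, braking_torque, friction_torque, inertia):
        """Instantaneous power delivered to a friction-braked ergometer flywheel.
        omega           - flywheel angular velocity (rad/s)
        alpha           - flywheel angular acceleration (rad/s^2)
        braking_torque  - torque from the applied friction load (N*m)
        friction_torque - the rig's own frictional torque (N*m)
        inertia         - flywheel moment of inertia (kg*m^2)
        The inertia * alpha term is what uncorrected calculations leave out."""
        return (braking_torque + friction_torque + inertia * alpha) * omega

    def uncorrected_power(omega, braking_torque):
        """Power from the braking load alone, as in an uncorrected WAnT."""
        return braking_torque * omega
    ```

    During the early acceleration phase alpha > 0, so the uncorrected figure underestimates true power; late in the test, as the flywheel decelerates, the sign flips.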

  12. Gradient light interference microscopy (GLIM) for imaging thick specimens (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Nguyen, Tan H.; Kandel, Mikhail E.; Popescu, Gabriel

    2016-03-01

    Compared to phase contrast, Differential Interference Contrast (DIC) microscopy has been known to give better depth sectioning as well as halo-free images when investigating transparent specimens. Because it relies on generating two slightly shifted replicas of the field, with a shift small enough to fall within the coherence area, DIC is able to operate with very low-coherence light. More importantly, the method is able to work with a very large numerical aperture of the illumination, which offers sectioning capability comparable to bright-field microscopy. However, DIC is still a qualitative method, which limits the potential applications of the technique. In this paper, we introduce a method that extends the capability of DIC by combining it with a phase-shifting module to extract the phase gradient information. A theoretical model of the image formation is developed and the possibility of integrating the gradient function is analyzed. Our method is benchmarked on imaging embryos during their 7-day development, HeLa cells during mitosis, and control samples.

  13. Comparative study on conventional, ultrasonication and microwave assisted extraction of γ-oryzanol from rice bran.

    PubMed

    Kumar, Pramod; Yadav, Devbrat; Kumar, Pradyuman; Panesar, Paramjeet Singh; Bunkar, Durga Shankar; Mishra, Diwaker; Chopra, H K

    2016-04-01

    In the present study, conventional, ultrasonic and microwave assisted extraction methods were compared with the aim of optimizing the best fitting solvent and method, solvent concentration and digestion time for a high yield of γ-oryzanol from rice bran. Petroleum ether, hexane and methanol were used to prepare extracts. Extraction yields were evaluated in terms of crude oil yield, total phenolic content (TPC) and γ-oryzanol content. Gas chromatography-mass spectrometry was used for the determination of γ-oryzanol concentration. The highest concentration of γ-oryzanol was detected in the methanolic extracts of the microwave treatment (85.0 ppm), followed by ultrasonication (82.0 ppm) and the conventional extraction method (73.5 ppm). The concentration of γ-oryzanol present in the extracts was found to be directly proportional to the total phenolic content. A combination of 80 % methanol concentration and a 55-minute digestion time under microwave treatment yielded the best extraction method for TPC and thus γ-oryzanol (105 ppm).

  14. Using budget-friendly methods to analyze sport specific movements

    NASA Astrophysics Data System (ADS)

    Jackson, Lindsay; Williams, Sarah; Ferrara, Davon

    2015-03-01

    When breaking down the physics behind sport specific movements, athletes, usually professional, are often assessed in multimillion-dollar laboratories and facilities. Budget-friendly methods, such as video analysis using low-cost cameras, iPhone sensors, or inexpensive force sensors, can make this process more accessible to amateur athletes, which in turn can give insight into injury mechanisms. Here we present a comparison of two methods of determining the forces experienced by a cheerleader during co-education stunting and by soccer goalies while side-diving. For the cheerleader, accelerometer measurements were taken by an iPhone 5 and compared to video analysis. The measurements done on the soccer players were taken using FlexiForce force sensors and again compared to video analysis. While these budget-friendly methods could use some refining, they show promise for producing usable measurements that may increase our understanding of injury in amateur players. Furthermore, low-cost physics experiments with sports can foster an active learning environment for students with minimal physics and mathematical background.

  15. Comparison of normalization methods for differential gene expression analysis in RNA-Seq experiments

    PubMed Central

    Maza, Elie; Frasse, Pierre; Senin, Pavel; Bouzayen, Mondher; Zouine, Mohamed

    2013-01-01

    In recent years, RNA-Seq technologies have become a powerful tool for transcriptome studies. However, computational methods dedicated to the analysis of high-throughput sequencing data are yet to be standardized. In particular, it is known that the choice of a normalization procedure leads to great variability in the results of differential gene expression analysis. The present study compares the most widespread normalization procedures and proposes a novel one aiming at removing an inherent bias of the studied transcriptomes related to their relative size. Comparisons of the normalization procedures are performed on real and simulated data sets. Analyses of real RNA-Seq data sets, performed with all the different normalization methods, show that only 50% of significantly differentially expressed genes are common. This result highlights the influence of the normalization step on the differential expression analysis. Real and simulated data set analyses give similar results, showing 3 different groups of procedures having the same behavior. The group including the novel method, named "Median Ratio Normalization" (MRN), gives the lowest number of false discoveries. Within this group the MRN method is less sensitive to the modification of parameters related to the relative size of transcriptomes, such as the number of down- and upregulated genes and the gene expression levels. The newly proposed MRN method efficiently deals with the intrinsic bias resulting from the relative size of the studied transcriptomes. Validation with real and simulated data sets confirmed that MRN is more consistent and robust than existing methods. PMID:26442135
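    The abstract does not give MRN's formula. As an illustration of the family of procedures it belongs to, here is the classic median-of-ratios scaling in plain Python: each sample's size factor is the median of its gene-wise ratios to a geometric-mean reference. Treat this as a stand-in sketch of the group MRN falls into, not the paper's exact MRN definition.

    ```python
    import math
    from statistics import median

    def median_ratio_factors(counts):
        """Per-sample scaling factors via median-of-ratios.
        counts: list of samples, each a list of per-gene counts."""
        n_genes = len(counts[0])
        # Reference: geometric mean of each gene across samples
        # (genes with a zero count in any sample are skipped).
        ref = []
        for g in range(n_genes):
            vals = [sample[g] for sample in counts]
            if all(v > 0 for v in vals):
                ref.append(math.exp(sum(math.log(v) for v in vals) / len(vals)))
            else:
                ref.append(0.0)
        factors = []
        for sample in counts:
            ratios = [sample[g] / ref[g] for g in range(n_genes) if ref[g] > 0]
            factors.append(median(ratios))
        return factors
    ```

    Taking the median of the ratios, rather than a total-count ratio, is what makes this family robust to a handful of highly expressed, differentially regulated genes dominating the library size.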

  16. The comparative performance of PMI estimation in skeletal remains by three methods (C-14, luminol test and OHI): analysis of 20 cases.

    PubMed

    Cappella, Annalisa; Gibelli, Daniele; Muccino, Enrico; Scarpulla, Valentina; Cerutti, Elisa; Caruso, Valentina; Sguazza, Emanuela; Mazzarelli, Debora; Cattaneo, Cristina

    2015-01-27

    When estimating the post-mortem interval (PMI) of skeletal remains in forensic anthropology, the only method able to give an unambiguous result is C-14 analysis, although the procedure is expensive. Other methods, such as the luminol test and histological analysis, can be performed as preliminary investigations and may give the operators a preliminary indication of PMI, but they lack scientific verification, although luminol testing has become somewhat more accredited in the past few years. Such methods may in fact provide some help, as they are inexpensive and give a fast response, especially in the phase of preliminary investigations. In this study, 20 court cases of human skeletonized remains were dated by the C-14 method. In two cases, the results were dated after the 1950s; in one case, the analysis was technically not possible. The remaining 17 cases showed an archaeological or historical context. The same bone samples were also screened by histological examination and by the luminol test. The results showed that only four cases were positive to luminol and had a high Oxford Histology Index (OHI) score at the same time: among these, two cases were dated as recent by the radiocarbon analysis. Thus, only two false-positive results were given by the combination of these methods, and no false negatives. The combination of two qualitative methods (luminol test and microscopic analysis) may therefore represent a promising solution for cases where many fragments need to be tested quickly.

  17. Enhancement of lung sounds based on empirical mode decomposition and Fourier transform algorithm.

    PubMed

    Mondal, Ashok; Banerjee, Poulami; Somkuwar, Ajay

    2017-02-01

    Heart sound (HS) signals always interfere with the recording of lung sound (LS) signals. This obscures the features of the LS signals and creates confusion about any pathological state of the lungs. In this work, a new method is proposed for the reduction of heart sound interference, based on the empirical mode decomposition (EMD) technique and a prediction algorithm. In this approach, the mixed signal is first split into several components in terms of intrinsic mode functions (IMFs). Thereafter, HS-included segments are localized and removed from them. The missing values of the gap thus produced are predicted by a new Fast Fourier Transform (FFT) based prediction algorithm, and the time-domain LS signal is reconstructed by taking an inverse FFT of the estimated missing values. The experiments were conducted on simulated and recorded HS-corrupted LS signals at three different flow rates and various SNR levels. The performance of the proposed method is evaluated by qualitative and quantitative analysis of the results. The proposed method is found to be superior to the baseline method in terms of quantitative and qualitative measurement, giving better results across SNR levels. Our method gives a cross correlation index (CCI) of 0.9488, a signal to deviation ratio (SDR) of 9.8262, and a normalized maximum amplitude error (NMAE) of 26.94 at a 0 dB SNR value.
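    The three figures of merit quoted above can be computed as sketched below. The abstract does not give the paper's exact normalizations, so these definitions (zero-lag normalized cross-correlation for CCI, energy ratio in dB for SDR, peak error as a percentage of peak amplitude for NMAE) are common-usage assumptions.

    ```python
    import math

    def cci(x, y):
        """Cross correlation index: normalized zero-lag cross-correlation
        (assumed definition; equals 1.0 for identical signals)."""
        num = sum(a * b for a, b in zip(x, y))
        den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
        return num / den

    def sdr_db(ref, est):
        """Signal to deviation ratio in dB: reference energy over error energy."""
        sig = sum(a * a for a in ref)
        dev = sum((a - b) ** 2 for a, b in zip(ref, est))
        return 10.0 * math.log10(sig / dev)

    def nmae(ref, est):
        """Normalized maximum amplitude error, as a percentage of peak amplitude."""
        peak_err = max(abs(a - b) for a, b in zip(ref, est))
        return 100.0 * peak_err / max(abs(a) for a in ref)
    ```

    Higher CCI and SDR and lower NMAE indicate a reconstruction closer to the clean lung sound, which is the direction of improvement the abstract reports.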

  18. A simplified fourwall interference assessment procedure for airfoil data obtained in the Langley 0.3-meter transonic cryogenic tunnel

    NASA Technical Reports Server (NTRS)

    Murthy, A. V.

    1987-01-01

    A simplified fourwall interference assessment method has been described, and a computer program developed to facilitate correction of the airfoil data obtained in the Langley 0.3-m Transonic Cryogenic Tunnel (TCT). The procedure adopted is to first apply a blockage correction due to sidewall boundary-layer effects by various methods. The sidewall boundary-layer corrected data are then used to calculate the top and bottom wall interference effects by the method of Capallier, Chevallier and Bouinol, using the measured wall pressure distribution and the model force coefficients. The interference corrections obtained by the present method have been compared with other methods and found to give good agreement for the experimental data obtained in the TCT with slotted top and bottom walls.

  19. Simplification of a scoring system maintained overall accuracy but decreased the proportion classified as low risk.

    PubMed

    Sanders, Sharon; Flaws, Dylan; Than, Martin; Pickering, John W; Doust, Jenny; Glasziou, Paul

    2016-01-01

    Scoring systems are developed to assist clinicians in making a diagnosis. However, their uptake is often limited because they are cumbersome to use, requiring information on many predictors or complicated calculations. We examined whether, and how, simplifications affected the performance of a validated score for identifying adults with chest pain in an emergency department who have a low risk of major adverse cardiac events. We simplified the Emergency Department Assessment of Chest pain Score (EDACS) by three methods: (1) giving equal weight to each predictor included in the score, (2) reducing the number of predictors, and (3) using both methods, giving equal weight to a reduced number of predictors. The diagnostic accuracy of the simplified scores was compared with the original score in the derivation (n = 1,974) and validation (n = 909) data sets. There was no difference in the overall accuracy of the simplified versions of the score compared with the original EDACS as measured by the area under the receiver operating characteristic curve (0.74 to 0.75 for the simplified versions vs. 0.75 for the original score in the validation cohort). With score cut-offs set to maintain the sensitivity of the combination of score and tests (electrocardiogram and cardiac troponin) at a level acceptable to clinicians (99%), simplification reduced the proportion of patients classified as low risk from 50% with the original score to between 22% and 42%. Simplification of a clinical score resulted in similar overall accuracy but reduced the proportion classified as low risk and therefore eligible for early discharge compared with the original score. Whether the trade-off is acceptable will depend on the context in which the score is to be used. Developers of clinical scores should consider simplification as a method to increase uptake, but further studies are needed to determine the best methods of deriving and evaluating simplified scores.
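    Simplification method (1), replacing each predictor's derived weight with an equal weight, can be sketched generically. The predictor names and point values below are hypothetical placeholders, not the actual EDACS items or weights.

    ```python
    def weighted_score(present, weights):
        """Original-style score: sum the derived weight of each predictor
        that is present. present: dict name -> bool, weights: dict name -> points."""
        return sum(weights[name] for name, flag in present.items() if flag)

    def equal_weight_score(present):
        """Simplification (1): every predictor that is present scores one point,
        so clinicians only count items instead of summing weights."""
        return sum(1 for flag in present.values() if flag)
    ```

    The trade-off the study quantifies follows directly: the equal-weight score is easier to compute at the bedside, but discarding the weights coarsens the score's granularity, which is one way the proportion classified as low risk can shrink.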

  20. A comparison of accelerated solvent extraction, Soxhlet extraction, and ultrasonic-assisted extraction for analysis of terpenoids and sterols in tobacco.

    PubMed

    Shen, Jinchao; Shao, Xueguang

    2005-11-01

    The performance of accelerated solvent extraction in the analysis of terpenoids and sterols in tobacco samples was investigated and compared with those of Soxhlet extraction and ultrasonically assisted extraction with respect to yield, extraction time, reproducibility and solvent consumption. The results indicate that although the highest yield was achieved by Soxhlet extraction, ASE appears to be a promising alternative to classical methods since it is faster and uses less solvent, especially when applied to the investigation of large batch tobacco samples. However, Soxhlet extraction is still the preferred method for analyzing sterols since it gives a higher extraction efficiency than other methods.

  1. Beta value coupled wave theory for nonslanted reflection gratings.

    PubMed

    Neipp, Cristian; Francés, Jorge; Gallego, Sergi; Bleda, Sergio; Martínez, Francisco Javier; Pascual, Inmaculada; Beléndez, Augusto

    2014-01-01

    We present a modified coupled wave theory to describe the properties of nonslanted reflection volume diffraction gratings. The method is based on the beta value coupled wave theory, corrected by using appropriate boundary conditions. This correction allows prediction of the efficiency of the reflected order for nonslanted reflection gratings embedded in two media with different refractive indices. The results obtained by using this method are compared to those obtained using a matrix method, which gives exact solutions in terms of Mathieu functions, and also to Kogelnik's coupled wave theory. As will be demonstrated, the technique presented in this paper represents a significant improvement over Kogelnik's coupled wave theory.

  2. Beta Value Coupled Wave Theory for Nonslanted Reflection Gratings

    PubMed Central

    Neipp, Cristian; Francés, Jorge; Gallego, Sergi; Bleda, Sergio; Martínez, Francisco Javier; Pascual, Inmaculada; Beléndez, Augusto

    2014-01-01

    We present a modified coupled wave theory to describe the properties of nonslanted reflection volume diffraction gratings. The method is based on the beta value coupled wave theory, corrected by using appropriate boundary conditions. This correction allows prediction of the efficiency of the reflected order for nonslanted reflection gratings embedded in two media with different refractive indices. The results obtained by using this method are compared to those obtained using a matrix method, which gives exact solutions in terms of Mathieu functions, and also to Kogelnik's coupled wave theory. As will be demonstrated, the technique presented in this paper represents a significant improvement over Kogelnik's coupled wave theory. PMID:24723811

  3. Breast volume assessment: comparing five different techniques.

    PubMed

    Bulstrode, N; Bellamy, E; Shrotria, S

    2001-04-01

    Breast volume assessment is not routinely performed pre-operatively because as yet there is no accepted technique. A variety of methods have been published, but this is the first study to compare them. We compared volume measurements obtained from mammograms (previously validated against mastectomy specimens) with estimates of volume obtained from four other techniques: thermoplastic moulding, magnetic resonance imaging, Archimedes' principle and anatomical measurements. We also assessed the acceptability of each method to the patient. Measurements were performed on 10 women, producing results for 20 breasts. We were able to calculate regression lines relating volume measurements obtained from mammography to the other four methods: (1) magnetic resonance imaging (MRI), 379 + (0.75 MRI) [r = 0.48]; (2) thermoplastic moulding, 132 + (1.46 Thermoplastic moulding) [r = 0.82]; (3) anatomical measurements, 168 + (1.55 Anatomical measurements) [r = 0.83]; (4) Archimedes' principle, 359 + (0.6 Archimedes principle) [r = 0.61]; all units in cc. The regression curves for the different techniques are variable, and it is difficult to compare results reliably. A standard method of volume measurement should be used when comparing volumes before and after intervention or between individual patients, and it is unreliable to compare volume measurements made using different methods. Calculating breast volume from mammography has previously been compared to mastectomy samples and shown to be reasonably accurate. However, we feel thermoplastic moulding shows promise and should be further investigated, as it gives not only a volume assessment but also a three-dimensional impression of the breast shape, which may be valuable in assessing cosmesis following breast-conserving surgery.
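    The four regression lines quoted in the abstract can be packaged directly as a lookup. The coefficients are the abstract's own values (all volumes in cc); the dictionary keys are shorthand labels introduced here for illustration.

    ```python
    # (intercept, slope) per technique, from the abstract's regression lines
    # mapping each technique's reading to the mammography-derived volume (cc).
    REGRESSIONS = {
        "mri": (379.0, 0.75),            # r = 0.48
        "thermoplastic": (132.0, 1.46),  # r = 0.82
        "anatomical": (168.0, 1.55),     # r = 0.83
        "archimedes": (359.0, 0.60),     # r = 0.61
    }

    def mammography_equivalent(method, reading_cc):
        """Predicted mammography-equivalent breast volume from one technique's
        reading, using the study's linear regression for that technique."""
        intercept, slope = REGRESSIONS[method]
        return intercept + slope * reading_cc
    ```

    The very different intercepts and slopes make the abstract's caution concrete: the same physical breast yields different raw readings per technique, so volumes are only comparable after mapping through a regression like this, and only within one method's fit.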

  4. Comparing deflection measurements of a magnetically steerable catheter using optical imaging and MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lillaney, Prasheel, E-mail: Prasheel.Lillaney@ucsf.edu; Caton, Curtis; Martin, Alastair J.

    2014-02-15

    Purpose: Magnetic resonance imaging (MRI) is an emerging modality for interventional radiology, giving clinicians another tool for minimally invasive image-guided interventional procedures. Difficulties associated with endovascular catheter navigation using MRI guidance led to the development of a magnetically steerable catheter. The focus of this study was to mechanically characterize deflections of two different prototypes of the magnetically steerable catheter in vitro to better understand their efficacy. Methods: A mathematical model for deflection of the magnetically steerable catheter is formulated based on the principle that at equilibrium the mechanical and magnetic torques are equal to each other. Furthermore, two different image-based methods for empirically measuring the catheter deflection angle are presented. The first, referred to as the absolute tip method, measures the angle of the line that is tangential to the catheter tip. The second, referred to as the base to tip method, is an approximation that is used when it is not possible to measure the angle of the tangent line. Optical images of the catheter deflection are analyzed using the absolute tip method to quantitatively validate the predicted deflections from the mathematical model. Optical images of the catheter deflection are also analyzed using the base to tip method to quantitatively determine the differences between the absolute tip and base to tip methods. Finally, the optical images are compared to MR images using the base to tip method to determine the accuracy of measuring the catheter deflection using MR. Results: The optical catheter deflection angles measured for both catheter prototypes using the absolute tip method fit very well to the mathematical model (R² = 0.91 and 0.86 for each prototype, respectively). It was found that the angles measured using the base to tip method were consistently smaller than those measured using the absolute tip method.
The deflection angles measured using optical data did not demonstrate a significant difference from the angles measured using MR image data when compared using the base to tip method. Conclusions: This study validates the theoretical description of the magnetically steerable catheter, while also giving insight into different methods and modalities for measuring the deflection angles of the prototype catheters. These results can be used to mechanically model future iterations of the design. Quantifying the difference between the different methods for measuring catheter deflection will be important when making deflection measurements in future studies. Finally, MR images can be used to reliably measure deflection angles, since there is no significant difference between the MR and optical measurements.
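The torque-balance principle stated in the Methods can be sketched numerically. The sketch below is a minimal illustration, not the paper's actual formulation: it assumes a point magnetic moment m in a field B at angle alpha from the undeflected tip, a linear elastic restoring torque k·theta, and solves the equilibrium m·B·sin(alpha − theta) = k·theta by bisection. All parameter names and values are hypothetical.

```python
import math

def deflection_angle(m, B, alpha, k, tol=1e-10):
    """Solve m*B*sin(alpha - theta) = k*theta for theta via bisection.

    m     : magnetic moment of the tip [A*m^2] (assumed point dipole)
    B     : field strength [T]
    alpha : angle between field and undeflected tip [rad]
    k     : effective bending stiffness, torque per radian [N*m/rad]
    """
    f = lambda th: m * B * math.sin(alpha - th) - k * th
    lo, hi = 0.0, alpha  # the deflection lies between 0 and the field angle
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta = deflection_angle(m=1.0, B=0.1, alpha=math.pi / 3, k=0.05)
print(math.degrees(theta))
```

Because the magnetic torque falls and the elastic torque rises monotonically with theta, the equilibrium is unique and bisection always converges.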

  5. A comparison of heuristic and model-based clustering methods for dietary pattern analysis.

    PubMed

    Greve, Benjamin; Pigeot, Iris; Huybrechts, Inge; Pala, Valeria; Börnhorst, Claudia

    2016-02-01

    Cluster analysis is widely applied to identify dietary patterns. A new method based on Gaussian mixture models (GMM) seems to be more flexible compared with the commonly applied k-means and Ward's method. In the present paper, these clustering approaches are compared to find the most appropriate one for clustering dietary data. The clustering methods were applied to simulated data sets with different cluster structures to compare their performance, knowing the true cluster membership of observations. Furthermore, the three methods were applied to FFQ data assessed in 1791 children participating in the IDEFICS (Identification and Prevention of Dietary- and Lifestyle-Induced Health Effects in Children and Infants) Study to explore their performance in practice. The GMM outperformed the other methods in the simulation study in 72 % to 100 % of cases, depending on the simulated cluster structure. Comparing the computationally less complex k-means and Ward's methods, the performance of k-means was better in 64-100 % of cases. Applied to real data, all methods identified three similar dietary patterns which may be roughly characterized as a 'non-processed' cluster with a high consumption of fruits, vegetables and wholemeal bread, a 'balanced' cluster with only slight preferences for single foods and a 'junk food' cluster. The simulation study suggests that clustering via GMM should be preferred due to its higher flexibility regarding cluster volume, shape and orientation. k-means seems to be a good alternative, being easier to use while giving similar results when applied to real data.
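The abstract's point about cluster volume, shape and orientation can be reproduced in a few lines: on elongated (anisotropic) simulated clusters, a full-covariance GMM can adapt its orientation, while k-means implicitly assumes spherical clusters and may split along the wrong axis. A minimal scikit-learn sketch; the data and seed are illustrative, not the IDEFICS simulation design.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)

# Two elongated clusters separated along y: the geometry GMM handles
# well but k-means (Euclidean, spherical) tends to mis-partition.
n = 300
stretch = np.array([[4.0, 0.0], [0.0, 0.4]])
a = rng.standard_normal((n, 2)) @ stretch + [0.0, 0.0]
b = rng.standard_normal((n, 2)) @ stretch + [0.0, 3.0]
X = np.vstack([a, b])
truth = np.repeat([0, 1], n)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
gm = GaussianMixture(n_components=2, random_state=0).fit_predict(X)

print("ARI k-means:", adjusted_rand_score(truth, km))
print("ARI GMM    :", adjusted_rand_score(truth, gm))
```

The adjusted Rand index (ARI) plays the role of the "knowing the true cluster membership" comparison in the simulation study.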

  6. A Rapid Method Combining Golgi and Nissl Staining to Study Neuronal Morphology and Cytoarchitecture

    PubMed Central

    Pilati, Nadia; Barker, Matthew; Panteleimonitis, Sofoklis; Donga, Revers; Hamann, Martine

    2008-01-01

    The Golgi silver impregnation technique gives detailed information on the neuronal morphology of the few neurons it labels, whereas the majority remain unstained. In contrast, the Nissl staining technique allows for consistent labeling of the whole neuronal population but gives very limited information on neuronal morphology. Most studies characterizing neuronal cell types in the context of their distribution within the tissue slice tend to use the Golgi silver impregnation technique for neuronal morphology, followed by deimpregnation as a prerequisite for showing the neuron's histological location by subsequent Nissl staining. Here, we describe a rapid method combining Golgi silver impregnation with cresyl violet staining that provides a useful and simple approach to combining cellular morphology with cytoarchitecture without the need for deimpregnating the tissue. Our method allowed us to identify neurons of the facial nucleus and the supratrigeminal nucleus, as well as to assess cellular distribution within layers of the dorsal cochlear nucleus. With this method, we have also been able to directly compare morphological characteristics of neuronal somata in the dorsal cochlear nucleus labeled with cresyl violet with those obtained with the Golgi method, and we found that cresyl violet-labeled cell bodies appear smaller at high cellular densities. Our observation suggests that cresyl violet staining is inadequate for quantifying differences in soma sizes. (J Histochem Cytochem 56:539-550, 2008) PMID:18285350

  7. Vibro-acoustic performance of newly designed tram track structures

    NASA Astrophysics Data System (ADS)

    Haladin, Ivo; Lakušić, Stjepan; Ahac, Maja

    2017-09-01

    Rail vehicles in interaction with a railway structure induce vibrations that propagate to surrounding structures and cause noise disturbance in the surrounding areas. Since tram tracks in urban areas often share the running surface with road vehicles, one of the top priorities is to achieve a low-maintenance, long-lasting structure. The research conducted in the scope of this paper gives an overview of newly designed tram track structures designated for use on the Zagreb tram network and their performance in terms of noise and vibration mitigation. Research has been conducted on a 150 m long test section consisting of three tram track types: the standard tram track structure commonly used on tram lines in Zagreb, a tram structure optimized for better noise and vibration mitigation, and a slab track with double sleepers embedded in a concrete slab, which represents an entirely new approach to tram track construction in Zagreb. The track has been instrumented with acceleration sensors, strain gauges and revision shafts for inspection. Relative deformations give an insight into the dynamic load distribution of the track structure through the exploitation period. Further, the paper describes vibro-acoustic measurements conducted at the test site. To evaluate the track performance from the vibro-acoustical standpoint, the track decay rate has been analysed in detail. As opposed to the measurement technique using an impact hammer for track decay rate measurements, a newly developed measuring technique using vehicle pass-by vibrations as the source of excitation has been proposed and analysed. The paper gives an overview of the method, its benefits compared to the standard method of track decay rate measurements, and an evaluation of the method based on noise measurements of vehicle pass-bys.

  8. Publications on dementia in Medline 1974-2009: a quantitative bibliometric study.

    PubMed

    Theander, Sten S; Gustafson, Lars

    2013-05-01

    The aim is to describe the development of the scientific literature on dementia. We present a quantitative, bibliometric study of the literature on dementia, based on Medline, covering 36 years (1974-2009). Two samples of references to dementia papers were retrieved: the main sample, based on the MeSH term Dementia, holds more than 88,500 references. We have compared the annual additions of references on dementia with the additions to total Medline. Changes in 'the Dementia to Medline ratio' (%) give the best information on the development. Publications on dementia increased 5.6 times faster than Medline. Most of this relative acceleration took place during 1980-1997, when the references on dementia increased from 0.17 to 0.78%. During the most recent 12 years, publications on dementia have been keeping pace with Medline and have stabilized around 0.8%. We have shown a large increase in the literature on dementia, relative both to the development of all medical research and to all psychiatric research. The bibliometric approach may be questioned, as quantitative methods treat articles as being of equal value, which is not true. If, for example, during a certain period, the research output is 'inflated' by a great number of repetitive papers, the quantitative method will give an unfair picture of the development. Our relative method, however, will give relevant results, as, at each point in time, the proportion of 'valuable research' ought to be about the same in the dementia group as in total Medline. Copyright © 2012 John Wiley & Sons, Ltd.
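The 'Dementia to Medline ratio' is simple arithmetic. A sketch with hypothetical annual counts, chosen only so the ratios match the quoted 0.17 % and 0.78 % endpoints; the study's actual counts are not reproduced here.

```python
# Hypothetical annual reference counts (the real study used per-year
# Medline totals); chosen to reproduce the quoted ratio endpoints.
dementia = {1980: 1700, 1997: 15600}
medline = {1980: 1_000_000, 1997: 2_000_000}

# Dementia-to-Medline ratio, in percent of all Medline additions.
ratio = {y: 100 * dementia[y] / medline[y] for y in dementia}

# Relative acceleration over the period 1980-1997.
fold = ratio[1997] / ratio[1980]
print(ratio, f"{fold:.1f}-fold increase in the ratio")
```

The point of the relative measure is visible here: each year's count is normalised by total Medline before any comparison across years is made.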

  9. Analytical Approach to the Fuel Optimal Impulsive Transfer Problem Using Primer Vector Method

    NASA Astrophysics Data System (ADS)

    Fitrianingsih, E.; Armellin, R.

    2018-04-01

    One of the objectives of mission design is selecting an optimum orbital transfer, which is often translated as a transfer requiring minimum propellant consumption. In order to ensure that the selected trajectory meets the requirement, the optimality of the transfer should first be analyzed, either by directly calculating the ΔV of the candidate trajectories and selecting the one that gives a minimum value, or by evaluating the trajectory according to certain criteria of optimality. The second method is performed by analyzing the profile of the modulus of the thrust direction vector, which is known as the primer vector. Both methods come with their own advantages and disadvantages. However, it is possible to use the primer vector method to verify whether the result from the direct method is truly optimal or whether the ΔV can be reduced further by implementing a correction maneuver on the reference trajectory. In addition to its capability to evaluate transfer optimality without the need to calculate the transfer ΔV, the primer vector also enables us to identify the time and position at which to apply a correction maneuver in order to optimize a non-optimum transfer. This paper presents an analytical approach to the fuel optimal impulsive transfer using the primer vector method. The validity of the method is confirmed by comparing the results to those from the numerical method. An investigation of the optimality of direct transfers is used as an example application of the method. The case under study is prograde elliptic transfers from Earth to Mars. The study enables us to identify the optimality of all the possible transfers.
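The "direct method" mentioned first, computing the ΔV of each candidate trajectory and keeping the minimum, can be sketched for a simplified version of the Earth-to-Mars case: two-impulse transfers between circular coplanar heliocentric orbits, with a tangential departure burn. This family, the orbit radii, and the grid are illustrative assumptions; the paper itself works with primer vector theory on elliptic transfers.

```python
import math

MU = 1.32712440018e11        # Sun's GM [km^3/s^2]
R1, R2 = 1.496e8, 2.279e8    # ~Earth and ~Mars circular orbit radii [km]

def dv_total(a):
    """Total dv of a two-impulse transfer whose ellipse has periapsis R1
    and semi-major axis a (first burn tangential, second burn at R2)."""
    e = 1.0 - R1 / a
    if a * (1 + e) < R2 * (1 - 1e-12):   # ellipse never reaches R2
        return float("inf")
    h = math.sqrt(MU * a * (1 - e * e))
    vc1, vc2 = math.sqrt(MU / R1), math.sqrt(MU / R2)
    v1 = math.sqrt(MU * (2 / R1 - 1 / a))
    v2 = math.sqrt(MU * (2 / R2 - 1 / a))
    cos_g2 = min(1.0, h / (R2 * v2))     # flight-path angle at arrival
    dv2 = math.sqrt(v2 * v2 + vc2 * vc2 - 2 * v2 * vc2 * cos_g2)
    return (v1 - vc1) + dv2

a_hohmann = 0.5 * (R1 + R2)
grid = [a_hohmann * (1 + 0.01 * k) for k in range(60)]  # candidate transfers
best = min(grid, key=dv_total)                          # direct method
print(best, dv_total(best))
```

Within this tangential-departure family the minimum falls on the Hohmann ellipse, the textbook result the primer vector test would confirm without evaluating every candidate's ΔV.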

  10. Sensitivity test of derivative matrix isopotential synchronous fluorimetry and least squares fitting methods.

    PubMed

    Makkai, Géza; Buzády, Andrea; Erostyák, János

    2010-01-01

    Determination of the concentrations of spectrally overlapping compounds presents special difficulties. Several methods are available to calculate the constituents' concentrations in moderately complex mixtures. A method which can provide information about spectrally hidden components in mixtures is very useful. Two methods powerful in resolving spectral components are compared in this paper. The first method tested is Derivative Matrix Isopotential Synchronous Fluorimetry (DMISF). It is based on derivative analysis of MISF spectra, which are constructed using isopotential trajectories in the Excitation-Emission Matrix (EEM) of the background solution. For the DMISF method, a mathematical routine fitting the 3D data of EEMs was developed. The other method tested uses a classical Least Squares Fitting (LSF) algorithm, wherein Rayleigh- and Raman-scattering bands may lead to complications. Both methods give excellent sensitivity, and each has advantages over the other. Detection limits of DMISF and LSF have been determined at very different concentration and noise levels.
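The classical LSF step can be sketched as ordinary least squares against known reference spectra. The sketch below recovers the concentrations of two strongly overlapping synthetic bands; the Gaussian shapes, centres and noise level are illustrative, and the Rayleigh/Raman complications the abstract mentions are ignored.

```python
import numpy as np

x = np.linspace(0, 100, 501)
gauss = lambda c, w: np.exp(-0.5 * ((x - c) / w) ** 2)

# Reference spectra of two heavily overlapping components (assumed known).
A = np.column_stack([gauss(45, 12), gauss(55, 12)])
true_conc = np.array([0.3, 0.7])

rng = np.random.default_rng(1)
mixture = A @ true_conc + rng.normal(0, 0.01, x.size)  # measured spectrum

# Least squares fit: concentrations minimising ||A c - mixture||^2.
conc, *_ = np.linalg.lstsq(A, mixture, rcond=None)
print(conc)
```

Even with an ~84 % overlap between the two bands, the linear independence of the reference spectra is enough for a stable fit at this noise level.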

  11. The reliability and validity of the Saliba Postural Classification System

    PubMed Central

    Collins, Cristiana Kahl; Johnson, Vicky Saliba; Godwin, Ellen M.; Pappas, Evangelos

    2016-01-01

    Objectives To determine the reliability and validity of the Saliba Postural Classification System (SPCS). Methods Two physical therapists classified pictures of 100 volunteer participants standing in their habitual posture for inter- and intra-tester reliability. For validity, 54 participants stood on a force plate in a habitual and a corrected posture, while a vertical force was applied through the shoulders until the clinician felt a postural give. Data were extracted at the time the give was felt and at a time in the corrected posture that matched the peak vertical ground reaction force (VGRF) in the habitual posture. Results Inter-tester reliability demonstrated 75% agreement with a Kappa = 0.64 (95% CI = 0.524–0.756, SE = 0.059). Intra-tester reliability demonstrated 87% agreement with a Kappa = 0.8 (95% CI = 0.702–0.898, SE = 0.05) and 80% agreement with a Kappa = 0.706 (95% CI = 0.594–0.818, SE = 0.057). The examiner applied a significantly higher (p < 0.001) peak vertical force in the corrected posture prior to a postural give when compared to the habitual posture. Within the corrected posture, the %VGRF was higher when the test was ongoing vs. when a postural give was felt (p < 0.001). The %VGRF was not different between the two postures when comparing the peaks (p = 0.214). Discussion The SPCS has substantial agreement for inter- and intra-tester reliability and is largely a valid postural classification system as determined by the larger vertical forces in the corrected postures. Further studies on the correlation between the SPCS and diagnostic classifications are indicated. PMID:27559288
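Kappa statistics like those in the Results come directly from an inter-rater agreement table. A minimal implementation of Cohen's kappa; the 3-class table below is hypothetical, chosen only to give 75 % raw agreement as in the study, and is not the authors' data.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa from a square agreement table
    (rows: rater 1's classes, columns: rater 2's classes)."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    po = np.trace(table) / n                       # observed agreement
    pe = (table.sum(0) @ table.sum(1)) / n ** 2    # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical 100-picture, 3-class table with 75 % raw agreement.
t = [[25, 5, 3],
     [4, 25, 4],
     [3, 6, 25]]
print(cohens_kappa(t))   # ≈ 0.63, kappa corrected for chance
```

Kappa is always below raw agreement because the chance-agreement term pe is subtracted from both numerator and denominator.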

  12. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    PubMed

    Kholeif, S A

    2001-06-01

    A new method belonging to the differential category for determining end points from potentiometric titration curves is presented. It uses a preprocess to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method, using linear least-squares method validation and multifactor data analysis, is covered. The new method is generally applied to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. End points calculated from selected experimental titration curves using equivalence point category methods, such as Gran or Fortuin, are also compared with the new method.
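The inverse parabolic interpolation step has a closed form: the extremum of the parabola through three bracketing points. A minimal sketch; the derivative values below are illustrative stand-ins for dE/dV near the inflection, not data from the paper.

```python
def parabola_vertex(p0, p1, p2):
    """Analytic abscissa of the extremum of the parabola through three
    points; p1 should be the extreme of the three derivative values,
    bracketed by p0 and p2."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
    den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
    return x1 - 0.5 * num / den

# Illustrative first-derivative values of a titration curve near the
# inflection; the maximum of dE/dV marks the end point volume.
pts = [(9.8, 41.0), (10.0, 52.0), (10.2, 47.0)]
end_point = parabola_vertex(*pts)
print(end_point)   # shifted toward the higher neighbour, ≈ 10.0375
```

Because the solution is analytic, no iterative search is needed once the derivative values around the inflection are in hand.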

  13. The evaluation of evaporation by infrared thermography: A critical analysis of the measurements on the Crau test site. [France

    NASA Technical Reports Server (NTRS)

    Seguin, B.; Petit, V.; Devillard, R.; Reich, P.; Thouy, G. (Principal Investigator)

    1980-01-01

    Evapotranspiration was calculated for both the dry and irrigated zones by four methods, which were compared with the energy balance method serving as a reference. Two methods did not involve the surface temperature: ETR(n) = R(n), liable to be valid under wet conditions, and ET(eq) = (delta/(delta + gamma)) R(n), i.e., the first term of Penman's equation, adapted to moderately dry conditions. The methods using surface temperature were the combined energy balance aerodynamic approach and a simplified approach proposed by Jackson et al. Tests show that the surface temperature methods give relatively satisfactory results in both the dry and wet zones, with a precision of 10% to 15% compared with the reference method. As was to be expected, ET(eq) gave satisfactory results only in the dry zone and ETR(n) only in the irrigated zone. Thermography increased the precision of the estimate of ET relative to the most suitable classical method by 5% to 8% and is equally suitable for both dry and wet conditions. The Jackson method does not require extensive ground measurements or the evaluation of the surface roughness.
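The equilibrium term ET(eq) = (delta/(delta + gamma)) R(n) is straightforward to evaluate once delta, the slope of the saturation vapour pressure curve, is known. A sketch using the standard Tetens-type approximation for the slope; the constant gamma and the input values are illustrative, not the Crau site data.

```python
import math

GAMMA = 0.066  # psychrometric constant [kPa/degC], near sea level

def svp_slope(t):
    """Slope of the saturation vapour pressure curve [kPa/degC] at air
    temperature t [degC], via the Tetens approximation."""
    es = 0.6108 * math.exp(17.27 * t / (t + 237.3))
    return 4098.0 * es / (t + 237.3) ** 2

def et_eq(rn, t):
    """Equilibrium evaporation, the first term of Penman's equation:
    ET(eq) = delta / (delta + gamma) * Rn (same units as Rn)."""
    d = svp_slope(t)
    return d / (d + GAMMA) * rn

print(et_eq(10.0, 25.0))  # net radiation of 10 MJ m-2 d-1 at 25 degC
```

The weighting factor delta/(delta + gamma) grows with temperature, which is why this term alone over-predicts in wet, cool conditions and works best in the moderately dry regime the abstract describes.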

  14. Metal artifact reduction for CT-based luggage screening.

    PubMed

    Karimi, Seemeen; Martz, Harry; Cosman, Pamela

    2015-01-01

    In aviation security, checked luggage is screened by computed tomography scanning. Metal objects in the bags create artifacts that degrade image quality. Although metal artifact reduction (MAR) methods exist, mainly in the medical imaging literature, they either require knowledge of the materials in the scan or are outlier rejection methods. Our aim is to improve and evaluate a MAR method we previously introduced that does not require knowledge of the materials in the scan and gives good results on data with large quantities and different kinds of metal. We describe in detail an optimization that de-emphasizes metal projections and has a constraint for beam hardening and scatter. This method isolates and reduces artifacts in an intermediate image, which is then fed to a previously published sinogram replacement method. We evaluate the algorithm on luggage data containing multiple and large metal objects. We define measures of artifact reduction and compare this method against others in the MAR literature. Metal artifacts were reduced in our test images, even for multiple and large metal objects, without much loss of structure or resolution. Our MAR method outperforms the methods with which we compared it. Our approach does not make assumptions about image content, nor does it discard metal projections.

  15. Estimating and modelling cure in population-based cancer studies within the framework of flexible parametric survival models

    PubMed Central

    2011-01-01

    Background When the mortality among a cancer patient group returns to the same level as in the general population, that is, the patients no longer experience excess mortality, the patients still alive are considered "statistically cured". Cure models can be used to estimate the cure proportion as well as the survival function of the "uncured". One limitation of parametric cure models is that the functional form of the survival of the "uncured" has to be specified. It can sometimes be hard to find a survival function flexible enough to fit the observed data, for example, when there is high excess hazard within a few months from diagnosis, which is common among older age groups. This has led to the exclusion of older age groups in population-based cancer studies using cure models. Methods Here we have extended the flexible parametric survival model to incorporate cure as a special case to estimate the cure proportion and the survival of the "uncured". Flexible parametric survival models use splines to model the underlying hazard function, and therefore no parametric distribution has to be specified. Results We have compared the fit from standard cure models to our flexible cure model, using data on colon cancer patients in Finland. This new method gives similar results to a standard cure model, when it is reliable, and a better fit when the standard cure model gives biased estimates. Conclusions Cure models within the framework of flexible parametric models enable cure modelling when standard models give biased estimates. These flexible cure models enable inclusion of older age groups and can give stage-specific estimates, which is not always possible with parametric cure models. PMID:21696598

  16. Crystallization mosaic effect generation by superpixels

    NASA Astrophysics Data System (ADS)

    Xie, Yuqi; Bo, Pengbo; Yuan, Ye; Wang, Kuanquan

    2015-03-01

    Art effect generation from digital images using computational tools has been a hot research topic in recent years. We propose a new method for generating crystallization mosaic effects from color images. Two key problems in generating a pleasant mosaic effect are studied: grouping pixels into mosaic tiles and arranging mosaic tiles to adapt to image features. To give a visually pleasant mosaic effect, we propose to create mosaic tiles by pixel clustering in the feature space of color information, taking the compactness of tiles into consideration as well. Moreover, we propose a method for processing feature boundaries in images which gives guidance for arranging mosaic tiles near image features. This method gives a nearly uniform shape of mosaic tiles, adapting to feature lines in an esthetic way. The new approach considers both the color distance and the Euclidean distance of pixels, and thus is capable of giving mosaic tiles in a more pleasing manner. Some experiments are included to demonstrate the computational efficiency of the present method and its capability of generating visually pleasant mosaic tiles. Comparisons with existing approaches are also included to show the superiority of the new method.
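Clustering pixels on both colour distance and Euclidean (spatial) distance can be sketched as k-means in a joint feature space, in the spirit of superpixel methods. This is a generic stand-in, not the paper's algorithm: the random image, the compactness weight, and the tile count are all illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
h, w = 40, 60
img = rng.random((h, w, 3))           # stand-in for a colour photograph

yy, xx = np.mgrid[0:h, 0:w]
compactness = 0.5                     # weight of spatial vs colour distance
feats = np.column_stack([
    img.reshape(-1, 3),               # colour terms
    compactness * yy.ravel() / h,     # spatial terms, normalised to [0, c]
    compactness * xx.ravel() / w,
])

tiles = KMeans(n_clusters=24, n_init=4, random_state=0).fit_predict(feats)
label_img = tiles.reshape(h, w)       # each label = one mosaic tile
```

Raising the compactness weight pulls the clusters toward uniform, roughly convex tiles; lowering it lets tiles follow colour regions more freely, which is the trade-off the abstract describes.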

  17. Characterization of perovskite solar cells: Towards a reliable measurement protocol

    NASA Astrophysics Data System (ADS)

    Zimmermann, Eugen; Wong, Ka Kan; Müller, Michael; Hu, Hao; Ehrenreich, Philipp; Kohlstädt, Markus; Würfel, Uli; Mastroianni, Simone; Mathiazhagan, Gayathri; Hinsch, Andreas; Gujar, Tanaji P.; Thelakkat, Mukundan; Pfadler, Thomas; Schmidt-Mende, Lukas

    2016-09-01

    Lead halide perovskite solar cells have shown a tremendous rise in power conversion efficiency, with reported record efficiencies of over 20%, making this material very promising as a low-cost alternative to conventional inorganic solar cells. However, due to "hysteretic" behaviour of varying severity during current density-voltage measurements, which strongly depends on scan rate, device and measurement history, preparation method, device architecture, etc., commonly used solar cell measurements do not give reliable or even reproducible results. For the prospect of commercialization and the possibility of comparing results of different devices among different laboratories, it is necessary to establish a measurement protocol which gives reproducible results. Therefore, we compare device characteristics derived from standard current density-voltage measurements with stabilized values obtained from adaptive tracking of the maximum power point and the open circuit voltage, as well as characteristics extracted from time-resolved current density-voltage measurements. Our results provide insight into the challenges of a correct determination of device performance and propose a measurement protocol for a reliable characterisation which is easy to implement and has been tested on varying perovskite solar cells fabricated in different laboratories.
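The paper's "adaptive tracking of the maximum power point" is not specified in the abstract; a standard stand-in is the perturb-and-observe algorithm, sketched below on an idealised single-diode current-voltage curve. The diode parameters are hypothetical, not those of a real perovskite cell.

```python
import math

def mpp_track(iv_current, v0=0.1, dv=0.01, steps=500):
    """Perturb-and-observe tracking of the maximum power point of a cell
    described by iv_current(v) -> current. Returns the settled voltage,
    which oscillates within ~2*dv of the true MPP."""
    v, step = v0, dv
    p_prev = v * iv_current(v)
    for _ in range(steps):
        v += step
        p = v * iv_current(v)
        if p < p_prev:       # power dropped: reverse the perturbation
            step = -step
        p_prev = p
    return v

# Idealised single-diode IV curve (illustrative parameters).
isc, i0, vt = 0.020, 1e-12, 0.0258
curve = lambda v: isc - i0 * (math.exp(v / vt) - 1.0)

v_mpp = mpp_track(curve)
print(v_mpp, v_mpp * curve(v_mpp))
```

The stabilized power read at this operating point is the quantity the protocol compares against scan-rate-dependent J-V sweeps.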

  18. Label-free offline versus online activity methods for nucleoside diphosphate kinase b using high performance liquid chromatography.

    PubMed

    Lima, Juliana Maria; Salmazo Vieira, Plínio; Cavalcante de Oliveira, Arthur Henrique; Cardoso, Carmen Lúcia

    2016-08-07

    Nucleoside diphosphate kinase from Leishmania spp. (LmNDKb) has recently been described as a potential drug target to treat leishmaniasis. Therefore, screening of LmNDKb ligands requires methodologies that mimic the conditions under which LmNDKb acts in biological systems. Here, we compare two label-free methodologies that could help screen LmNDKb ligands and measure NDKb activity: an offline LC-UV assay for soluble LmNDKb and an online two-dimensional LC-UV system based on LmNDKb immobilised on a silica capillary. The target enzyme was immobilised on the silica capillary via Schiff base formation (to give LmNDKb-ICER-Schiff) or affinity attachment (to give LmNDKb-ICER-His). Several aspects of the ICERs resulting from these procedures were compared, namely kinetic parameters, stability, and procedure steps. Both LmNDKb immobilisation routes minimised the conformational changes and preserved the substrate binding sites. However, considering the number of steps involved in the immobilisation procedure, the cost of reagents, and the stability of the immobilised enzyme, immobilisation via Schiff base formation proved to be the optimal procedure.

  19. Numerical study focusing on the entropy analysis of MHD squeezing flow of a nanofluid model using Cattaneo–Christov theory

    NASA Astrophysics Data System (ADS)

    Akmal, N.; Sagheer, M.; Hussain, S.

    2018-05-01

    The present study gives an account of the heat transfer characteristics of the squeezing flow of a nanofluid between two flat plates, with the upper plate moving vertically and the lower one horizontally. The Tiwari and Das nanofluid model has been utilized to give a comparative analysis of the heat transfer in Cu-water and Al2O3-water nanofluids with entropy generation. The modeling is carried out with consideration of Lorentz forces to observe the effect of the magnetic field on the flow. The Joule heating effect is included to discuss the heat dissipation in the fluid and its effect on the entropy of the system. The nondimensional ordinary differential equations are solved using the Keller box method, and the numerical results are presented in graphs and tables. An interesting observation is that more entropy is generated near the lower plate than near the upper plate. Also, the heat transfer rate is found to be higher for the Cu nanoparticles in comparison with the Al2O3 nanoparticles.

  20. When measured spin polarization is not spin polarization

    NASA Astrophysics Data System (ADS)

    Dowben, P. A.; Wu, Ning; Binek, Christian

    2011-05-01

    Spin polarization is an unusually ambiguous scientific idiom and, as such, is rarely well defined. A given experimental methodology may allow one to quantify a spin polarization, but only in its particular context. As one might expect, these ambiguities sometimes give rise to inappropriate interpretations when comparing spin polarizations determined through different methods. The spin polarizations of CrO2 and Cr2O3 illustrate some of the complications that hinder comparisons of spin polarization values.

  1. A review and critique of some models used in competing risk analysis.

    PubMed

    Gail, M

    1975-03-01

    We have introduced a notation which allows one to define competing risk models easily and to examine underlying assumptions. We have treated the actuarial model for competing risk in detail, comparing it with other models and giving useful variance formulae both for the case when times of death are available and for the case when they are not. The generality of these methods is illustrated by an example treating two dependent competing risks.

  2. Instrumental color control for metallic coatings

    NASA Astrophysics Data System (ADS)

    Chou, W.; Han, Bing; Cui, Guihua; Rigg, Bryan; Luo, Ming R.

    2002-06-01

    This paper describes work investigating a suitable color quality control method for metallic coatings. A set of psychophysical experiments was carried out based upon 50 pairs of samples. The results were used to test the performance of various color difference formulae. Different techniques were developed by optimising the weights and/or the lightness parametric factors of colour differences calculated from the four measuring angles. The results show that the new techniques give a significant improvement compared to conventional techniques.

  3. Comparison of automatic denoising methods for phonocardiograms with extraction of signal parameters via the Hilbert Transform

    NASA Astrophysics Data System (ADS)

    Messer, Sheila R.; Agzarian, John; Abbott, Derek

    2001-05-01

    Phonocardiograms (PCGs) have many advantages over traditional auscultation (listening to the heart) because they may be replayed, may be analyzed for spectral and frequency content, and may record frequencies inaudible to the human ear. However, various sources of noise may pollute a PCG, including lung sounds, environmental noise and noise generated from contact between the recording device and the skin. Because PCG signals are known to be nonlinear and it is often not possible to determine their noise content, traditional de-noising methods may not be effectively applied. However, other methods, including wavelet de-noising, wavelet packet de-noising and averaging, can be employed to de-noise the PCG. This study examines and compares these de-noising methods. It addresses which de-noising method gives the better SNR, how much signal information is lost as a result of the de-noising process, and the appropriate uses of the different methods, down to such specifics as which wavelets and decomposition levels give the best results in wavelet and wavelet packet de-noising. In general, the wavelet and wavelet packet de-noising performed roughly equally, with optimal de-noising occurring at 3-5 levels of decomposition. Averaging also proved a highly useful de-noising technique; however, in some cases averaging is not appropriate. The Hilbert Transform is used to illustrate the results of the de-noising process and to extract instantaneous features including instantaneous amplitude, frequency, and phase.
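The decompose / threshold / reconstruct pattern behind wavelet de-noising can be sketched with the simplest wavelet, the Haar, using only NumPy. The synthetic "heart sound" below is just a noisy sine, the three-level depth matches the low end of the 3-5 level range the study found optimal, and the threshold rule (a robust MAD noise estimate with soft thresholding) is one common choice among several.

```python
import numpy as np

def haar_denoise(sig, levels=3, k=3.0):
    """1-D wavelet de-noising with the Haar wavelet: decompose,
    soft-threshold the detail coefficients, reconstruct.
    Signal length must be divisible by 2**levels."""
    s = np.asarray(sig, float)
    details = []
    for _ in range(levels):
        a = (s[0::2] + s[1::2]) / np.sqrt(2)    # approximation
        d = (s[0::2] - s[1::2]) / np.sqrt(2)    # detail
        t = k * np.median(np.abs(d)) / 0.6745   # robust noise estimate
        details.append(np.sign(d) * np.maximum(np.abs(d) - t, 0.0))
        s = a
    for d in reversed(details):                 # inverse transform
        out = np.empty(2 * s.size)
        out[0::2] = (s + d) / np.sqrt(2)
        out[1::2] = (s - d) / np.sqrt(2)
        s = out
    return s

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + rng.normal(0, 0.3, t.size)

den = haar_denoise(noisy)
snr = lambda ref, x: 10 * np.log10(np.sum(ref**2) / np.sum((ref - x)**2))
print(snr(clean, noisy), snr(clean, den))
```

Comparing SNR before and after, as above, is the same figure of merit the study uses to rank the de-noising methods.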

  4. Accurate approximation method for prediction of class I MHC affinities for peptides of length 8, 10 and 11 using prediction tools trained on 9mers.

    PubMed

    Lundegaard, Claus; Lund, Ole; Nielsen, Morten

    2008-06-01

    Several accurate prediction systems have been developed for prediction of class I major histocompatibility complex (MHC):peptide binding. Most of these are trained on binding affinity data of primarily 9mer peptides. Here, we show how prediction methods trained on 9mer data can be used for accurate binding affinity prediction of peptides of length 8, 10 and 11. The method makes it possible to predict peptides of a length other than nine for MHC alleles where no such peptides have been measured. As validation, the performance of this approach is compared to predictors trained on peptides of the peptide length in question. In this validation, the approximation method has an accuracy that is comparable to or better than that of methods trained on a peptide length identical to the predicted peptides. The algorithm has been implemented in the web-accessible servers NetMHC-3.0: http://www.cbs.dtu.dk/services/NetMHC-3.0, and NetMHCpan-1.1: http://www.cbs.dtu.dk/services/NetMHCpan-1.1

  5. A method for calculating strut and splitter plate noise in exit ducts: Theory and verification

    NASA Technical Reports Server (NTRS)

    Fink, M. R.

    1978-01-01

    Portions of a four-year analytical and experimental investigation of noise radiation from engine internal components in turbulent flow are summarized. Spectra measured for such airfoils over a range of chord, thickness ratio, flow velocity, and turbulence level were compared with predictions made by an available rigorous thin-airfoil analytical method. This analysis included the effects of flow compressibility and source noncompactness. Generally good agreement was obtained. This noise calculation method for isolated airfoils in turbulent flow was combined with a method for calculating transmission of sound through a subsonic exit duct and with an empirical far-field directivity shape. These three elements were checked separately and were individually shown to give close agreement with data. This combination provides a method for predicting engine internally generated aft-radiated noise from radial struts and stators, and annular splitter rings. Calculated sound power spectra, directivity, and acoustic pressure spectra were compared with the best available data. These data were for noise caused by a fan exit duct annular splitter ring, larger-chord stator blades, and turbine exit struts.

  6. Comparison of dental maturity in children of different ethnic origins: international maturity curves for clinicians.

    PubMed

    Chaillet, Nils; Nyström, Marjatta; Demirjian, Arto

    2005-09-01

    Dental maturity was studied using 9577 dental panoramic tomograms of healthy subjects from 8 countries, aged between 2 and 25 years. Demirjian's method, based on 7 teeth, was used for determining dental maturity scores and for establishing gender-specific tables of maturity scores and development graphs. The aim of this study was to give dental maturity standards for when the ethnic origin is unknown and to compare the efficiency and applicability of this method for forensic sciences and dental clinicians. The second aim was to compare the dental maturity of these different populations. We noted a high efficiency for the international Demirjian method at the 99% CI (0.85% misclassified and a mean accuracy of +/- 2.15 years between 2 and 18 years), which makes it useful for forensic purposes. Nevertheless, this international method is less accurate than Demirjian's method developed for a specific country, because of the inter-ethnic variability introduced by the addition of 8 countries to the dental database. There are inter-ethnic differences, classified in three major groups. Australians have the fastest dental maturation and Koreans the slowest.

  7. The Mixed Finite Element Multigrid Method for Stokes Equations

    PubMed Central

    Muzhinji, K.; Shateyi, S.; Motsa, S. S.

    2015-01-01

    The stable finite element discretization of the Stokes problem produces a symmetric indefinite system of linear algebraic equations. A variety of iterative solvers have been proposed for such systems in an attempt to construct efficient, fast, and robust solution techniques. This paper investigates one such iterative solver, the geometric multigrid solver, to find the approximate solution of these indefinite systems. The main ingredient of the multigrid method is the choice of an appropriate smoothing strategy. This study considers the application of different smoothers and compares their effects on the overall performance of the multigrid solver. We study the multigrid method with the following smoothers: distributed Gauss-Seidel, inexact Uzawa, preconditioned MINRES, and Braess-Sarazin type smoothers. A comparative study shows that the Braess-Sarazin smoothers give the best performance of the multigrid method. We study the problem in a two-dimensional domain using the stable Hood-Taylor Q2-Q1 pair of rectangular finite elements. We also give the main theoretical convergence results and present numerical results that demonstrate the efficiency and robustness of the multigrid method and confirm the theoretical results. PMID:25945361
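
    The Uzawa family of smoothers named above can be illustrated on a tiny saddle-point system of the same [[A, B^T], [B, 0]] form; this is a generic sketch with made-up matrices, not the authors' multigrid implementation:

```python
import numpy as np

# Tiny Stokes-like saddle-point system (illustrative values):
# [A  B^T][u]   [f]
# [B  0  ][p] = [g]
A = np.array([[4.0, 1.0], [1.0, 3.0]])   # SPD "velocity" block
B = np.array([[1.0, 2.0]])               # divergence-like constraint
f = np.array([1.0, 2.0])
g = np.array([3.0])

u = np.zeros(2)
p = np.zeros(1)
omega = 0.5  # pressure relaxation parameter

for _ in range(200):
    # Exact velocity solve; an inexact Uzawa variant would replace
    # this with a few cheap smoothing sweeps on A instead.
    u = np.linalg.solve(A, f - B.T @ p)
    # Pressure update driven by the constraint residual B u - g.
    p = p + omega * (B @ u - g)

# Compare against a monolithic solve of the full indefinite system.
K = np.block([[A, B.T], [B, np.zeros((1, 1))]])
x = np.linalg.solve(K, np.concatenate([f, g]))
```

The inexact variant used as a multigrid smoother replaces the exact velocity solve with a few sweeps on A; the outer structure of the iteration is unchanged.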

  8. System Identification and POD Method Applied to Unsteady Aerodynamics

    NASA Technical Reports Server (NTRS)

    Tang, Deman; Kholodar, Denis; Juang, Jer-Nan; Dowell, Earl H.

    2001-01-01

    The representation of unsteady aerodynamic flow fields in terms of global aerodynamic modes has proven to be a useful method for reducing the size of the aerodynamic model relative to representations that use local variables at discrete grid points in the flow field. Eigenmodes and Proper Orthogonal Decomposition (POD) modes have been used for this purpose with good effect. This suggests that system identification models may also be used to represent the aerodynamic flow field. Implicit in the use of a system identification technique is the notion that a relatively small state space model can be useful in describing a dynamical system. The POD model is first used to show that a reduced order model can indeed be obtained from a much larger numerical aerodynamic model (the vortex lattice method is used for illustrative purposes), and the results from the POD and system identification methods are then compared. For the example considered, the two methods are shown to give comparable results in terms of accuracy and reduced model size. The advantages and limitations of each approach are briefly discussed. Both appear promising and complementary in their characteristics.
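
    The POD reduction step described above can be sketched with a snapshot matrix and an SVD; the synthetic data below stand in for the vortex-lattice flow field and are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "flow field" snapshots: 100 grid points, 40 time steps,
# built from 3 underlying spatial modes (so the true rank is known).
modes_true = rng.standard_normal((100, 3))
coeffs = rng.standard_normal((3, 40))
snapshots = modes_true @ coeffs

# POD: left singular vectors of the snapshot matrix are the modes.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 3                                 # number of retained modes
Phi = U[:, :r]                        # POD basis

# Project onto the reduced basis and reconstruct.
reduced = Phi.T @ snapshots           # r x 40 reduced-order description
reconstructed = Phi @ reduced
err = np.linalg.norm(snapshots - reconstructed) / np.linalg.norm(snapshots)
```

Keeping r modes turns the 100-variable field into an r-variable reduced-order state, which is the size reduction the abstract refers to.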

  9. Comparative method of protein expression and isolation of EBV epitope in E.coli DH5α

    NASA Astrophysics Data System (ADS)

    Anyndita, Nadya V. M.; Dluha, Nurul; Himmah, Karimatul; Rifa'i, Muhaimin; Widodo

    2017-11-01

    Epstein-Barr virus (EBV), or human herpesvirus 4 (HHV-4), is a virus that infects human B cells and can lead to nasopharyngeal carcinoma (NPC). Prevention of this disease remains unsuccessful since no vaccine has yet been developed. The objective of this study was to over-produce the EBV gp350/220 epitope in E. coli DH5α using several methods. EBV epitope sequences were inserted into the pMAL-p5x vector, transformed into E. coli DH5α, and over-produced using 0.3, 1, and 2 mM IPTG. Plasmid transformation was validated using the AflIII restriction enzyme on 0.8% agarose. Periplasmic protein was isolated using 2 comparative methods and then analyzed using SDS-PAGE. Method A produced a protein band around 50 kDa that appeared only in the transformant. Method B failed to isolate the protein, as indicated by the absence of a protein band. In addition, variations in IPTG concentration did not give different results, so it can be concluded that even the lowest IPTG concentration is able to induce protein expression.

  10. Equivalence of the equilibrium and the nonequilibrium molecular dynamics methods for thermal conductivity calculations: From bulk to nanowire silicon

    NASA Astrophysics Data System (ADS)

    Dong, Haikuan; Fan, Zheyong; Shi, Libin; Harju, Ari; Ala-Nissila, Tapio

    2018-03-01

    Molecular dynamics (MD) simulations play an important role in studying heat transport in complex materials. The lattice thermal conductivity can be computed either using the Green-Kubo formula in equilibrium MD (EMD) simulations or using Fourier's law in nonequilibrium MD (NEMD) simulations. These two methods have not been systematically compared for materials with different dimensions, and inconsistencies between them have occasionally been reported in the literature. Here we give an in-depth comparison of them in terms of heat transport in three allotropes of Si: three-dimensional bulk silicon, two-dimensional silicene, and quasi-one-dimensional silicon nanowire. By multiplying the correlation time in the Green-Kubo formula by an appropriate effective group velocity, we can express the running thermal conductivity in the EMD method as a function of an effective length and directly compare it to the length-dependent thermal conductivity in the NEMD method. We find that the two methods quantitatively agree with each other for all the systems studied, firmly establishing their equivalence in computing thermal conductivity.
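
    The Green-Kubo side of the comparison reduces to a running integral of the heat-current autocorrelation function; a minimal sketch of the mechanics, with a synthetic AR(1) series standing in for the MD heat current and the physical prefactor V/(k_B T^2) set to one:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic heat-flux time series: exponentially correlated noise
# (an AR(1) process) in place of a real MD heat current.
n, phi = 20000, 0.9
J = np.empty(n)
J[0] = rng.standard_normal()
for t in range(1, n):
    J[t] = phi * J[t - 1] + rng.standard_normal()

# Autocorrelation function of the flux up to a maximum lag.
max_lag = 200
J0 = J - J.mean()
acf = np.array([np.mean(J0[:n - k] * J0[k:]) for k in range(max_lag)])

# Running thermal conductivity (prefactor set to 1 here):
# kappa(t) = integral_0^t <J(0) J(t')> dt', done as a cumulative sum.
dt = 1.0
kappa_running = np.cumsum(acf) * dt
```

In practice kappa_running is inspected as a function of the upper correlation time, which is the quantity the authors convert to an effective length for comparison with NEMD.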

  11. Multiway spectral community detection in networks

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao; Newman, M. E. J.

    2015-11-01

    One of the most widely used methods for community detection in networks is the maximization of the quality function known as modularity. Of the many maximization techniques that have been used in this context, some of the most conceptually attractive are the spectral methods, which are based on the eigenvectors of the modularity matrix. Spectral algorithms have, however, been limited, by and large, to the division of networks into only two or three communities, with divisions into more than three being achieved by repeated two-way division. Here we present a spectral algorithm that can directly divide a network into any number of communities. The algorithm makes use of a mapping from modularity maximization to a vector partitioning problem, combined with a fast heuristic for vector partitioning. We compare the performance of this spectral algorithm with previous approaches and find it to give superior results, particularly in cases where community sizes are unbalanced. We also give demonstrative applications of the algorithm to two real-world networks and find that it produces results in good agreement with expectations for the networks studied.
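
    The two-way spectral step that the multiway algorithm generalizes can be sketched in a few lines: build the modularity matrix and split on the sign of its leading eigenvector. The toy graph below (two 4-node cliques joined by one edge) is our own illustration, not from the paper:

```python
import numpy as np

# Adjacency matrix of two 4-node cliques joined by a single edge.
A = np.zeros((8, 8))
for group in (range(0, 4), range(4, 8)):
    for i in group:
        for j in group:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0

k = A.sum(axis=1)                    # node degrees
m = A.sum() / 2                      # number of edges
B = A - np.outer(k, k) / (2 * m)     # modularity matrix

# Leading eigenvector of B; its sign pattern gives the bisection.
eigvals, eigvecs = np.linalg.eigh(B)
leading = eigvecs[:, np.argmax(eigvals)]
labels = (leading > 0).astype(int)
```

The multiway algorithm in the paper avoids applying this two-way cut repeatedly by mapping the full modularity maximization to a vector partitioning problem instead.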

  12. Empirical prediction intervals improve energy forecasting

    PubMed Central

    Kaack, Lynn H.; Apt, Jay; Morgan, M. Granger; McSharry, Patrick

    2017-01-01

    Hundreds of organizations and analysts use energy projections, such as those contained in the US Energy Information Administration (EIA)’s Annual Energy Outlook (AEO), for investment and policy decisions. Retrospective analyses of past AEO projections have shown that observed values can differ from the projection by several hundred percent, and thus a thorough treatment of uncertainty is essential. We evaluate the out-of-sample forecasting performance of several empirical density forecasting methods, using the continuous ranked probability score (CRPS). The analysis confirms that a Gaussian density, estimated on past forecasting errors, gives comparatively accurate uncertainty estimates over a variety of energy quantities in the AEO, in particular outperforming scenario projections provided in the AEO. We report probabilistic uncertainties for 18 core quantities of the AEO 2016 projections. Our work frames how to produce, evaluate, and rank probabilistic forecasts in this setting. We propose a log transformation of forecast errors for price projections and a modified nonparametric empirical density forecasting method. Our findings give guidance on how to evaluate and communicate uncertainty in future energy outlooks. PMID:28760997
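
    The Gaussian density forecast evaluated in the paper has a closed-form CRPS; a stdlib-only sketch of that standard formula (not the authors' code):

```python
import math

def crps_gaussian(mu, sigma, x):
    """CRPS of a N(mu, sigma^2) forecast against observation x.

    Closed form: sigma * ( z*(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi) ),
    with z = (x - mu)/sigma, phi/Phi the standard normal pdf/cdf.
    """
    z = (x - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))
```

Lower CRPS is better; an observation landing exactly at the forecast mean still scores sigma*(sqrt(2/pi) - 1/sqrt(pi)), so sharper (smaller sigma) calibrated forecasts are rewarded.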

  13. A comparison of the Sensititre® MYCOTB panel and the agar proportion method for the susceptibility testing of Mycobacterium tuberculosis.

    PubMed

    Abuali, M M; Katariwala, R; LaBombardi, V J

    2012-05-01

    The agar proportion method (APM) for determining Mycobacterium tuberculosis susceptibilities is a qualitative method that requires 21 days to produce results. The Sensititre method allows for a quantitative assessment. Our objective was to compare the accuracy, time to results, and ease of use of the Sensititre method against the APM. 7H10 plates in the APM and 96-well microtiter dry MYCOTB panels containing 12 antibiotics at full dilution ranges in the Sensititre method were inoculated with M. tuberculosis and read for colony growth. Thirty-seven clinical isolates were tested using both methods, and 26 challenge strains with blinded susceptibilities were tested using the Sensititre method only. The Sensititre method displayed 99.3% concordance with the APM. The APM provided reliable results on day 21, whereas the Sensititre method displayed consistent results by day 10. The Sensititre method provides a more rapid, quantitative, and efficient method of testing both first- and second-line drugs when compared to the gold standard. It will give clinicians a sense of the degree of susceptibility, thus guiding the therapeutic decision-making process. Furthermore, the microwell plate format, which requires no instrumentation, will allow its use in resource-poor settings.

  14. Comparison of missing value imputation methods in time series: the case of Turkish meteorological data

    NASA Astrophysics Data System (ADS)

    Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci

    2013-04-01

    This study aims to compare several imputation methods for completing the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria, including accuracy, robustness, precision, and efficiency, for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, simple arithmetic average, normal ratio (NR), and NR weighted with correlations comprise the simple ones, whereas the multilayer perceptron type neural network and the multiple imputation strategy adopted by Markov Chain Monte Carlo based on expectation-maximization (EM-MCMC) are computationally intensive. In addition, we propose a modification of the EM-MCMC method. Besides using a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account, for evaluating imputation performance. Based on detailed graphical and quantitative analysis, we find that although the computational methods, particularly EM-MCMC, are computationally expensive, they are favorable for imputation of meteorological time series across the different missingness periods, for both measures and both series studied. To conclude, using the EM-MCMC algorithm for imputing missing values before conducting statistical analyses of meteorological data will decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be suggested for the performance evaluation of missing data imputation, particularly with computational methods, since it gives more precise results for meteorological time series.
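
    Of the simple schemes compared, the normal ratio (NR) method estimates a missing value at a target station from neighbouring stations, each observation scaled by the ratio of long-term means; a minimal sketch with invented precipitation figures:

```python
import numpy as np

def normal_ratio(target_mean, neighbor_means, neighbor_values):
    """Normal ratio (NR) estimate of a missing value at a target station.

    Each neighbour's observation is rescaled by the ratio of the target's
    long-term mean to that neighbour's long-term mean, then averaged.
    """
    neighbor_means = np.asarray(neighbor_means, dtype=float)
    neighbor_values = np.asarray(neighbor_values, dtype=float)
    ratios = target_mean / neighbor_means
    return np.mean(ratios * neighbor_values)

# Illustrative monthly precipitation figures (mm), not real data.
estimate = normal_ratio(
    target_mean=80.0,
    neighbor_means=[100.0, 60.0, 90.0],
    neighbor_values=[110.0, 55.0, 95.0],
)
```

The correlation-weighted NR variant mentioned in the abstract replaces the plain average with a weighted one, giving more influence to neighbours that correlate strongly with the target station.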

  15. Study of gas-phase O-H bond dissociation enthalpies and ionization potentials of substituted phenols - Applicability of ab initio and DFT/B3LYP methods

    NASA Astrophysics Data System (ADS)

    Klein, Erik; Lukeš, Vladimír

    2006-11-01

    In this paper, a study of phenol and 37 compounds representing various ortho-, para-, and meta-substituted phenols is presented. The molecules and their radical structures were studied using ab initio methods with inclusion of correlation energy, and DFT, in order to calculate the O-H bond dissociation enthalpies (BDEs) and vertical ionization potentials (IPs). Calculated BDEs and IPs were compared with available experimental values to ascertain the suitability of the methods used, especially for the description of substituent-induced changes in BDE and IP. The MP2, MP3, and MP4 methods do not give reliable results, since they significantly underestimate substituent-induced changes in BDE and do not correctly reflect the distinct effects of substituents in the para and meta positions. The DFT/B3LYP method reflects the effect of substituents on BDE satisfactorily, though the ΔBDEs span a narrower range than the experimental values. The BDE of phenol was also calculated using the CCSD(T) method in various basis sets. Both DFT and HF methods describe the effect of substituents on IP identically; however, DFT considerably underestimates the individual values, while the HF method gives IPs in very good agreement with experimental data. The obtained results show that the dependences of BDEs and IPs on the Hammett constants of the substituents are linear. The linearity of the DFT BDE vs. IP dependence is even better than that of the dependences on Hammett constants, and the obtained equations allow estimation of O-H BDEs of meta- and para-substituted phenols from calculated IPs.

  16. A comparative study of Averrhoa bilimbi extraction methods

    NASA Astrophysics Data System (ADS)

    Zulhaimi, H. I.; Rosli, I. R.; Kasim, K. F.; Akmal, H. Muhammad; Nuradibah, M. A.; Sam, S. T.

    2017-09-01

    In recent years, bioactive compounds in plants have become a limelight in the food and pharmaceutical markets, leading to research interest in implementing effective technologies for extracting bioactive substances. This study therefore focuses on the extraction of Averrhoa bilimbi by two different extraction techniques, namely maceration and ultrasound-assisted extraction. A few plant parts of Averrhoa bilimbi were taken as extraction samples: fruits, leaves, and twigs. Different solvents, such as methanol, ethanol, and distilled water, were utilized in the process. Fruit extracts resulted in the highest extraction yield compared to other plant parts. Ethanol and distilled water played a more significant role than methanol for all plant parts and both extraction techniques. The results also show that ultrasound-assisted extraction gave results comparable to maceration, and its shorter extraction time is useful for industrial implementation.

  17. Do procedures for verbal reporting of thinking have to be reactive? A meta-analysis and recommendations for best reporting methods.

    PubMed

    Fox, Mark C; Ericsson, K Anders; Best, Ryan

    2011-03-01

    Since its establishment, psychology has struggled to find valid methods for studying thoughts and subjective experiences. Thirty years ago, Ericsson and Simon (1980) proposed that participants can give concurrent verbal expression to their thoughts (think aloud) while completing tasks without changing objectively measurable performance (accuracy). In contrast, directed requests for concurrent verbal reports, such as explanations or directions to describe particular kinds of information, were predicted to change thought processes as a consequence of the need to generate this information, thus altering performance. By comparing performance of concurrent verbal reporting conditions with their matching silent control condition, Ericsson and Simon found several studies demonstrating that directed verbalization was associated with changes in performance. In contrast, the lack of effects of thinking aloud was merely suggested by a handful of experimental studies. In this article, Ericsson and Simon's model is tested by a meta-analysis of 94 studies comparing performance while giving concurrent verbalizations to a matching condition without verbalization. Findings based on nearly 3,500 participants show that the "think-aloud" effect size is indistinguishable from zero (r = -.03) and that this procedure remains nonreactive even after statistically controlling additional factors such as task type (primarily visual or nonvisual). In contrast, procedures that entail describing or explaining thoughts and actions are significantly reactive, leading to higher performance than silent control conditions. All verbal reporting procedures tend to increase times to complete tasks. These results suggest that think-aloud should be distinguished from other methods in future studies. Theoretical and practical implications are discussed. (c) 2011 APA, all rights reserved.

  18. Changing trends in residents-as-teachers across graduate medical education

    PubMed Central

    Al Achkar, Morhaf; Hanauer, Mathew; Morrison, Elizabeth H; Davies, M Kelly; Oh, Robert C

    2017-01-01

    Background Teaching residents how to teach is a critical part of residents' training in graduate medical education (GME). The purpose of this study was to assess the change in resident-as-teacher (RaT) instruction in GME over the past 15 years in the US. Methods We used a quantitative and qualitative survey of all program directors (PDs) across specialties. We compared our findings with previous work from 2000-2001 that studied the same question. Finally, we qualitatively analyzed PDs' responses regarding the reasons for implementing and not implementing RaT instruction. Results Two hundred and twenty-one PDs completed the survey, yielding a response rate of 12.6%. Over 80% of PDs implement RaT instruction, an increase of 26.34% compared to 2000-2001. RaT instruction uses multiple methods, with didactic lectures reported as the most common, followed by role playing in simulated environments, then observing and giving feedback. Giving feedback, clinical supervision, and bedside teaching were the top three targeted skills. Through our qualitative analysis we identified five main reasons for implementing RaT: teaching is part of the residents' role; learners desire formal RaT training; regulatory bodies require RaT training; RaT improves residents' education; and RaT prepares residents for their current and future roles. Conclusion The use of RaT instruction has increased significantly in GME. More and more PDs are realizing its importance in residents' formative training experience. Future studies should examine the effectiveness of each method of RaT instruction. PMID:28496376

  19. A comparative study of methods for describing non-adiabatic coupling: diabatic representation of the 1Sigma+/1Pi HOH and HHO conical intersections

    NASA Astrophysics Data System (ADS)

    Dobbyn, Abigail J.; Knowles, Peter J.

    A number of established techniques for obtaining diabatic electronic states in small molecules are critically compared for the example of the X and B states in the water molecule, which contribute to the two lowest-energy conical intersections. Integration of the coupling matrix elements and analysis of configuration mixing coefficients both produce reliable diabatic states globally. Methods relying on diagonalization of dipole moment and angular momentum operators are shown to fail in large regions of coordinate space. However, the use of transition angular momentum matrix elements involving the A state, which is degenerate with B at the conical intersections, is successful globally, provided that an appropriate choice of coordinates is made. Long range damping of non-adiabatic coupling to give correct asymptotic mixing angles also is investigated.

  20. Numerical simulations of microwave heating of liquids: enhancements using Krylov subspace methods

    NASA Astrophysics Data System (ADS)

    Lollchund, M. R.; Dookhitram, K.; Sunhaloo, M. S.; Boojhawon, R.

    2013-04-01

    In this paper, we compare the performances of three iterative solvers for large sparse linear systems arising in the numerical computations of incompressible Navier-Stokes (NS) equations. These equations are employed mainly in the simulation of microwave heating of liquids. The emphasis of this work is on the application of Krylov projection techniques such as Generalized Minimal Residual (GMRES) to solve the Pressure Poisson Equations that result from discretisation of the NS equations. The performance of the GMRES method is compared with the traditional Gauss-Seidel (GS) and point successive over relaxation (PSOR) techniques through their application to simulate the dynamics of water housed inside a vertical cylindrical vessel which is subjected to microwave radiation. It is found that as the mesh size increases, GMRES gives the fastest convergence rate in terms of computational times and number of iterations.
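
    As a baseline for the Krylov comparison, a Gauss-Seidel sweep on a small 1-D Poisson system can be sketched as follows; the paper's solvers act on the much larger sparse systems from the NS discretisation, so this only shows the mechanics of the GS iteration:

```python
import numpy as np

# 1-D Poisson matrix: tridiagonal with 2 on the diagonal, -1 off it.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

x = np.zeros(n)
for sweep in range(2000):
    # One Gauss-Seidel sweep: update each unknown in place using
    # the most recently computed values of its neighbours.
    for i in range(n):
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]

residual = np.linalg.norm(b - A @ x)
```

The thousands of sweeps needed even for this tiny system illustrate why stationary methods like GS and PSOR fall behind Krylov solvers such as GMRES as the mesh is refined.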

  1. Optimal control penalty finite elements - Applications to integrodifferential equations

    NASA Astrophysics Data System (ADS)

    Chung, T. J.

    The application of the optimal-control/penalty finite-element (OCPFE) method to the solution of integrodifferential equations in radiative-heat-transfer problems (Chung et al.; Chung and Kim, 1982) is discussed and illustrated. The nonself-adjointness of the convective terms in the governing equations is treated by utilizing optimal-control cost functions and employing penalty functions to constrain auxiliary equations, which permits the reduction of second-order derivatives to first order. The OCPFE method is applied to combined-mode heat transfer by conduction, convection, and radiation, both with and without scattering and viscous dissipation; the results are presented graphically and compared to those obtained by other methods. The OCPFE method is shown to give good results in cases where standard Galerkin FE methods fail, and to facilitate the investigation of scattering and dissipation effects.

  2. An analytical method of estimating turbine performance

    NASA Technical Reports Server (NTRS)

    Kochendorfer, Fred D; Nettles, J Cary

    1949-01-01

    A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and the friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine, and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. Exact agreement between analytical and experimental performance is contingent upon the proper selection of a blading-loss parameter.

  3. Identification of Chemical Toxicity Using Ontology Information of Chemicals.

    PubMed

    Jiang, Zhanpeng; Xu, Rui; Dong, Changchun

    2015-01-01

    With the advance of combinatorial chemistry, a large number of synthetic compounds has emerged, yet we have limited knowledge about them. On the other hand, the pace of designing new drugs is very slow, and one of the key causes is the unacceptable toxicity of many chemicals. If one can correctly identify the toxicity of chemicals, unsuitable candidates can be discarded at an early stage, thereby accelerating the study of new drugs and reducing R&D costs. In this study, a new prediction method for identifying chemical toxicities was built based on ontology information of chemicals. Compared to a previous method, our method is quite effective. We hope that the proposed method may give new insights into the study of chemical toxicity and other attributes of chemicals.

  4. Extrapolation methods for vector sequences

    NASA Technical Reports Server (NTRS)

    Smith, David A.; Ford, William F.; Sidi, Avram

    1987-01-01

    This paper derives, describes, and compares five extrapolation methods for accelerating convergence of vector sequences or transforming divergent vector sequences to convergent ones. These methods are the scalar epsilon algorithm (SEA), vector epsilon algorithm (VEA), topological epsilon algorithm (TEA), minimal polynomial extrapolation (MPE), and reduced rank extrapolation (RRE). MPE and RRE are first derived and proven to give the exact solution for the right 'essential degree' k. Then, Brezinski's (1975) generalization of the Shanks-Schmidt transform is presented; the generalized form leads from systems of equations to TEA. The necessary connections are then made with SEA and VEA. The algorithms are extended to the nonlinear case by cycling, the error analysis for MPE and VEA is sketched, and the theoretical support for quadratic convergence is discussed. Strategies for practical implementation of the methods are considered.
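
    Minimal polynomial extrapolation (MPE) can be sketched in a few lines of linear algebra; the contractive linear fixed-point iteration below is a toy example of our own (for a 2x2 iteration matrix, MPE with k = 2 recovers the limit exactly, as the paper's 'essential degree' result states):

```python
import numpy as np

# Linear fixed-point iteration x_{n+1} = M x_n + b with a known limit.
M = np.array([[0.5, 0.2], [0.1, 0.4]])
b = np.array([1.0, 2.0])
exact = np.linalg.solve(np.eye(2) - M, b)

# Generate k + 2 iterates starting from zero.
k = 2
xs = [np.zeros(2)]
for _ in range(k + 1):
    xs.append(M @ xs[-1] + b)
xs = np.array(xs)                   # shape (k + 2, 2)

# MPE works with the first differences u_j = x_{j+1} - x_j.
U = np.diff(xs, axis=0)             # shape (k + 1, 2)
# Solve min || sum_{j<k} c_j u_j + u_k || in least squares, set c_k = 1.
c, *_ = np.linalg.lstsq(U[:k].T, -U[k], rcond=None)
c = np.append(c, 1.0)
gamma = c / c.sum()                 # normalised weights
s = gamma @ xs[:k + 1]              # extrapolated limit
```

After only k + 2 = 4 iterates, the extrapolated s matches the fixed point, while the raw iterate x_3 is still far from it; for nonlinear problems the same step is applied in cycles, as the paper describes.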

  5. Experimental study of geotextile as plinth beam in a pile group-supported modeled building frame

    NASA Astrophysics Data System (ADS)

    Ravi Kumar Reddy, C.; Gunneswara Rao, T. D.

    2017-12-01

    This paper presents the experimental results of static vertical load tests on a model building frame with geotextile as a plinth beam, supported by pile groups embedded in cohesionless soil (sand). The experimental results have been compared with those obtained from nonlinear FEA and the conventional method of analysis. The results revealed that, for the frame with geotextile as a plinth beam, the conventional method of analysis gives a shear force about 53% higher, a bending moment at the top of the column about 17% higher, and a bending moment at the base of the column about 50-98% higher than the nonlinear FEA.

  6. METHOD AND APPARATUS FOR METABOLIC ASSAY

    DOEpatents

    Tolbert, B.M.; Kirk, M.R.; Baker, E.M.

    1961-09-19

    A method and instrumentation are described for producing an instantaneous and continuous curve of the rate at which any selected carbon-containing substance is metabolized by a living subject. The substance is prepared with a known proportion of carbon-14 and, after administration, the carbon-14 content of the subject's exhalations is continuously monitored along with their total CO2 content. The resulting data are continuously compared and displayed in graphical form to give the desired metabolic information. The invention includes specialized radiation counting means as well as means for assuring exact synchronization of the carbon-14 and CO2 signals.

  7. Vapor-liquid equilibria for hydrogen fluoride + 1,1-difluoroethane at 288.23 and 298.35 K

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, J.; Kim, H.; Lim, J.S.

    1997-07-01

    Isothermal vapor-liquid equilibria for hydrogen fluoride + 1,1-difluoroethane at 288.23 and 298.35 K were measured using a circulation type apparatus equipped with an equilibrium view cell. The compositions of both vapor and liquid phases were analyzed by an on-line gas chromatographic method. They were compared with PTx equilibrium data measured by the total pressure method. The experimental data were correlated with Anderko's equation of state using the Wong-Sandler mixing rule as well as the van der Waals one-fluid mixing rule. The Wong-Sandler mixing rule gives better results, and the relevant parameters are presented.

  8. Restoring the Pauli principle in the random phase approximation ground state

    NASA Astrophysics Data System (ADS)

    Kosov, D. S.

    2017-12-01

    The random phase approximation (RPA) ground state contains electronic configurations in which two (or more) identical electrons can occupy the same molecular spin-orbital, violating the Pauli exclusion principle. This overcounting of electronic configurations arises from the quasiboson approximation in the treatment of electron-hole pair operators. We describe a method to restore the Pauli principle in the RPA wavefunction. The proposed theory is illustrated by calculations of molecular dipole moments and electronic kinetic energies. The Hartree-Fock based RPA, corrected for the Pauli principle, gives results of accuracy comparable to Møller-Plesset second order perturbation theory and the coupled-cluster singles and doubles method.

  9. Methods for Engaging Stakeholders in Comparative Effectiveness Research: A Patient-Centered Approach to Improving Diabetes Care

    PubMed Central

    Schmittdiel, Julie A.; Desai, Jay; Schroeder, Emily B.; Paolino, Andrea R.; Nichols, Gregory A.; Lawrence, Jean M.; O’Connor, Patrick J.; Ohnsorg, Kris A.; Newton, Katherine M.; Steiner, John F.

    2016-01-01

    ABSTRACT/Implementation Lessons: Engaging stakeholders in the research process has the potential to improve quality of care and the patient care experience. Online patient community surveys can elicit important topic areas for comparative effectiveness research. Stakeholder meetings with substantial patient representation, as well as representation from health care delivery systems and research funding agencies, are a valuable tool for selecting and refining pilot research and quality improvement projects. Giving patient stakeholders a deciding vote in selecting pilot research topics helps ensure their 'voice' is heard. Researchers and health care leaders should continue to develop best practices and strategies for increasing patient involvement in comparative effectiveness and delivery science research. PMID:26179728

  10. Advice given by community members to pregnant women: a mixed methods study.

    PubMed

    Verma, Bianca A; Nichols, Lauren P; Plegue, Melissa A; Moniz, Michelle H; Rai, Manisha; Chang, Tammy

    2016-11-09

    Smoking and excess weight gain during pregnancy have been shown to have serious health consequences for both mothers and their infants. Advice from friends and family on these topics influences pregnant women's behaviors. The purpose of our study was to compare the advice that community members give pregnant women about smoking versus the advice they give about pregnancy weight gain. A survey was sent via text messaging to adults in a diverse, low-income primary care clinic in 2015. Respondents were asked what advice (if any) they have given pregnant women about smoking or gestational weight gain and their comfort-level discussing the topics. Descriptive statistics were used to characterize the sample population and to determine response rates. Open-ended responses were analyzed qualitatively using grounded theory analysis with an overall convergent parallel mixed methods design. Respondents (n = 370) were 77 % female, 40 % black, and 25 % reported education of high school or less. More respondents had spoken to pregnant women about smoking (40 %, n = 147) than weight gain (20 %, n = 73). Among individuals who had not discussed either topic (n = 181), more reported discomfort in talking about weight gain (65 %) compared to smoking (34 %; p < 0.0001). Advice about smoking during pregnancy (n = 148) was frequently negative, recommending abstinence and identifying smoking as harmful for baby and/or mother. Advice about weight gain in pregnancy (n = 74) revealed a breadth of messages, from reassurance about all weight gain ("Eat away" or "It's ok if you are gaining weight"), to specific warnings against excess weight gain ("Too much was dangerous for her and the baby."). Many community members give advice to pregnant women. Their advice reveals varied perspectives on the effects of pregnancy weight gain. 
Compared to a nearly ubiquitous understanding of the harms of smoking during pregnancy, community members demonstrated less awareness of and willingness to discuss the harms of excessive weight gain. Beyond educating pregnant women, community-level interventions may also be important to ensure that the information pregnant women receive supports healthy behaviors and promotes the long-term health of both moms and babies.

  11. Level Density in the Complex Scaling Method

    NASA Astrophysics Data System (ADS)

    Suzuki, R.; Myo, T.; Katō, K.

    2005-06-01

    It is shown that the continuum level density (CLD) at unbound energies can be calculated with the complex scaling method (CSM), in which the energy spectra of bound states, resonances and continuum states are obtained in terms of L² basis functions. In this method, the extended completeness relation is applied to the calculation of the Green functions, and the continuum-state part is approximately expressed in terms of discretized complex-scaled continuum solutions. The obtained result is compared with the CLD calculated exactly from the scattering phase shift. The discretization in the CSM is shown to give a very good description of continuum states. We discuss how the scattering phase shifts can inversely be calculated from the discretized CLD using a basis function technique in the CSM.
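The exact continuum level density against which the CSM result is compared follows from a standard phase-shift relation (a textbook result stated here for context, not quoted from the record itself): the change in level density relative to the free system is

```latex
\Delta\rho(E) \;=\; \rho(E) - \rho_{0}(E) \;=\; \frac{1}{\pi}\,\frac{d\delta(E)}{dE}
```

where ρ₀(E) is the free-particle level density and δ(E) is the scattering phase shift.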

  12. A Method of Dynamic Extended Reactive Power Optimization in Distribution Network Containing Photovoltaic-Storage System

    NASA Astrophysics Data System (ADS)

    Wang, Wu; Huang, Wei; Zhang, Yongjun

    2018-03-01

    The grid integration of photovoltaic-storage systems introduces uncertainty into the network. To make full use of the adjusting capability of a Photovoltaic-Storage System (PSS), this paper proposes a reactive power optimization model whose objective function is based on power loss and device adjusting cost, including the energy storage adjusting cost. The optimization problem is solved with a Cataclysmic Genetic Algorithm and compared with other optimization methods. The results show that the proposed method of dynamic extended reactive power optimization enhances the effect of reactive power optimization, reducing both power loss and device adjusting cost, while also preserving voltage safety.

  13. Temperature-programmed technique accompanied with high-throughput methodology for rapidly searching the optimal operating temperature of MOX gas sensors.

    PubMed

    Zhang, Guozhu; Xie, Changsheng; Zhang, Shunping; Zhao, Jianwei; Lei, Tao; Zeng, Dawen

    2014-09-08

    A combinatorial high-throughput temperature-programmed method to obtain the optimal operating temperature (OOT) of gas sensor materials is demonstrated here for the first time. A material library consisting of SnO2, ZnO, WO3, and In2O3 sensor films was fabricated by screen printing. Temperature-dependent conductivity curves were obtained by scanning this gas sensor library from 300 to 700 K in different atmospheres (dry air, formaldehyde, carbon monoxide, nitrogen dioxide, toluene and ammonia), giving the OOT of each sensor formulation as a function of the carrier and analyte gases. A comparative study of the temperature-programmed method and a conventional method showed good agreement in measured OOT.

  14. Methods for solving reasoning problems in abstract argumentation – A survey

    PubMed Central

    Charwat, Günther; Dvořák, Wolfgang; Gaggl, Sarah A.; Wallner, Johannes P.; Woltran, Stefan

    2015-01-01

    Within the last decade, abstract argumentation has emerged as a central field in Artificial Intelligence. Besides providing a core formalism for many advanced argumentation systems, abstract argumentation has also served to capture several non-monotonic logics and other AI related principles. Although the idea of abstract argumentation is appealingly simple, several reasoning problems in this formalism exhibit high computational complexity. This calls for advanced techniques when it comes to implementation issues, a challenge which has been recently faced from different angles. In this survey, we give an overview on different methods for solving reasoning problems in abstract argumentation and compare their particular features. Moreover, we highlight available state-of-the-art systems for abstract argumentation, which put these methods to practice. PMID:25737590

  15. Techniques for the correction of topographical effects in scanning Auger electron microscopy

    NASA Technical Reports Server (NTRS)

    Prutton, M.; Larson, L. A.; Poppa, H.

    1983-01-01

    A number of ratioing methods for correcting Auger images and linescans for topographical contrast are tested using anisotropically etched silicon substrates covered with Au or Ag. Thirteen well-defined angles of incidence are present on each polyhedron produced on the Si by this etching. If N1 electrons are counted at the energy of an Auger peak and N2 are counted in the background above the peak, then N1, N1 - N2, and (N1 - N2)/(N1 + N2) are measured and compared as methods of eliminating topographical contrast. The last of these methods gives the best compensation, but can be further improved by using a measurement of the sample absorption current. Various other improvements are discussed.
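A minimal numerical sketch of why the ratio scheme suppresses topography (synthetic counts, not the paper's measurements): if a facet-dependent factor t scales both the peak and background counts, it cancels in (N1 - N2)/(N1 + N2) but not in N1 or N1 - N2.

```python
import numpy as np

# Hypothetical synthetic counts for illustration only: Auger peak counts N1
# and above-peak background counts N2, each modulated by a topography factor
# t that scales the detected signal on each facet.
rng = np.random.default_rng(0)
t = rng.uniform(0.5, 1.5, size=13)     # 13 facet angles, arbitrary factors
true_signal, background = 100.0, 40.0  # arbitrary units

N1 = t * (true_signal + background)    # peak channel: signal plus background
N2 = t * background                    # background channel

raw   = N1                      # topography still present
diff  = N1 - N2                 # removes background, keeps topography factor
ratio = (N1 - N2) / (N1 + N2)   # topography factor t cancels in the ratio

# The ratio is constant across facets; the other two still vary with t.
print(raw.std(), diff.std(), ratio.std())
```

The ratio's spread across the thirteen facets is zero up to floating-point roundoff, which is the sense in which it "eliminates" topographical contrast.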

  16. Lower bounds for the ground state energy for the PPP and Hubbard models of the benzene molecule

    NASA Astrophysics Data System (ADS)

    Čížek, J.; Vinette, F.

    1988-09-01

    The optimized inner projection (OIP) technique, which is equivalent to the method of intermediate Hamiltonians (MIH), is applied to the PPP and Hubbard models of the benzene molecule. Both these methods are applicable since the electrostatic part of the PPP and Hubbard Hamiltonians is positive definite. Lower energy bounds are calculated using OIP and MIH for all values of the resonance integral β. In this study, β plays the role of a coupling constant. The deviation of the OIP results from exact ones is smaller than 7% for all values of β. The OIP results are also compared with the correlation energies obtained by other techniques. The OIP method gives surprisingly good results even for small |β| values.

  17. Terrain and refractivity effects on non-optical paths

    NASA Astrophysics Data System (ADS)

    Barrios, Amalia E.

    1994-07-01

    The split-step parabolic equation (SSPE) has been used extensively to model tropospheric propagation over the sea, but recent efforts have extended this method to propagation over arbitrary terrain. At the Naval Command, Control and Ocean Surveillance Center (NCCOSC), Research, Development, Test and Evaluation Division, a split-step Terrain Parabolic Equation Model (TPEM) has been developed that takes into account variable terrain and range-dependent refractivity profiles. While TPEM has been previously shown to compare favorably with measured data and other existing terrain models, two alternative methods to model radiowave propagation over terrain, implemented within TPEM, will be presented that give a two to ten-fold decrease in execution time. These two methods are also shown to agree well with measured data.

  18. Automated detection of coronal mass ejections in three-dimensions using multi-viewpoint observations

    NASA Astrophysics Data System (ADS)

    Hutton, J.; Morgan, H.

    2017-03-01

    A new, automated method of detecting coronal mass ejections (CMEs) in three dimensions for the LASCO C2 and STEREO COR2 coronagraphs is presented. By triangulating isolated CME signal from the three coronagraphs over a sliding window of five hours, the most likely region through which CMEs pass at 5 R⊙ is identified. The centre and size of the region give the most likely direction of propagation and approximate angular extent. The Automated CME Triangulation (ACT) method is tested extensively using a series of synthetic CME images created using a wireframe flux rope density model, and on a sample of real coronagraph data, including halo CMEs. The accuracy of the angular difference (σ) between the detection and the true input of the synthetic CMEs is σ = 7.14°, and remains acceptable for a broad range of CME positions relative to the observer, for varying relative separation of the three observers, and even with the loss of one coronagraph. For real data, the method gives results that compare well with the distribution of low coronal sources and with results from another instrument and technique applied further from the Sun. The true, 3D-corrected kinematics and mass/density are discussed. The results of the new method will be incorporated into the CORIMP database in the near future, enabling improved space weather diagnostics and forecasting.

  19. Projection-slice theorem based 2D-3D registration

    NASA Astrophysics Data System (ADS)

    van der Bom, M. J.; Pluim, J. P. W.; Homan, R.; Timmer, J.; Bartels, L. W.

    2007-03-01

    In X-ray guided procedures, the surgeon or interventionalist is dependent on his or her knowledge of the patient's specific anatomy and the projection images acquired during the procedure by a rotational X-ray source. Unfortunately, these X-ray projections fail to give information on the patient's anatomy in the dimension along the projection axis. It would be very profitable to provide the surgeon or interventionalist with a 3D insight of the patient's anatomy that is directly linked to the X-ray images acquired during the procedure. In this paper we present a new robust 2D-3D registration method based on the Projection-Slice Theorem. This theorem gives us a relation between the pre-operative 3D data set and the interventional projection images. Registration is performed by minimizing a translation invariant similarity measure that is applied to the Fourier transforms of the images. The method was tested by performing multiple exhaustive searches on phantom data of the Circle of Willis and on a post-mortem human skull. Validation was performed visually by comparing the test projections to the ones that corresponded to the minimal value of the similarity measure. The Projection-Slice Theorem Based method was shown to be very effective and robust, and provides capture ranges up to 62 degrees. Experiments have shown that the method is capable of retrieving similar results when translations are applied to the projection images.
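The theorem the registration method rests on is easy to verify numerically. The following sketch (generic NumPy, not the authors' implementation) checks that the 1-D Fourier transform of a projection of a 2-D image equals the central slice of the image's 2-D Fourier transform:

```python
import numpy as np

# Numerical check of the projection-slice theorem on an arbitrary 2-D array.
rng = np.random.default_rng(1)
image = rng.random((64, 64))

projection = image.sum(axis=0)          # project along the vertical axis
ft_projection = np.fft.fft(projection)  # 1-D FT of the projection

ft_image = np.fft.fft2(image)
central_slice = ft_image[0, :]          # ky = 0 slice of the 2-D FT

print(np.allclose(ft_projection, central_slice))  # True
```

Because the relation holds for projections at any angle (after rotation), matching Fourier-domain slices of the pre-operative 3D data to Fourier transforms of the X-ray projections is what makes the registration translation-invariant in the similarity measure.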

  20. Wavelet-based 3-D inversion for frequency-domain airborne EM data

    NASA Astrophysics Data System (ADS)

    Liu, Yunhe; Farquharson, Colin G.; Yin, Changchun; Baranwal, Vikas C.

    2018-04-01

    In this paper, we propose a new wavelet-based 3-D inversion method for frequency-domain airborne electromagnetic (FDAEM) data. Instead of inverting for the model in the space domain using a smoothing constraint, this new method recovers the model in the wavelet domain based on a sparsity constraint. In the wavelet domain, the model is represented by two types of coefficients, which contain both large- and fine-scale information about the model, meaning the wavelet-domain inversion has inherent multiresolution. To impose the sparsity constraint, we minimize an L1-norm measure in the wavelet domain, which generally gives a sparse solution. The final inversion system is solved by an iteratively reweighted least-squares method. We investigate different orders of Daubechies wavelets in our inversion algorithm and test them on a synthetic frequency-domain AEM data set. The results show that higher-order wavelets, which have larger vanishing moments and regularity, can deliver a more stable inversion process and give better local resolution, while lower-order wavelets are simpler and less smooth, and thus capable of recovering sharp discontinuities if the model is simple. Finally, we test the new inversion algorithm on a frequency-domain helicopter EM (HEM) field data set acquired in Byneset, Norway. The wavelet-based 3-D inversion of the HEM data is compared with the result of an L2-norm-based 3-D inversion to further investigate the features of the new method.
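The L1-norm minimization by iteratively reweighted least squares can be sketched generically as follows. The operator G, data d, and all parameter values are illustrative stand-ins for a small linear inverse problem, not the paper's AEM forward operator or settings:

```python
import numpy as np

# Sketch of IRLS for an L1-penalized linear inverse problem,
#   min ||G m - d||^2 + lam * ||m||_1,
# which promotes a sparse model vector m (here standing in for
# wavelet coefficients of the model).
def irls_l1(G, d, lam=1e-3, n_iter=50, eps=1e-8):
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        # Approximate |m_i| by m_i^2 / (|m_i| + eps): each iteration is a
        # weighted ridge problem with diagonal reweighting matrix R.
        R = np.diag(1.0 / (np.abs(m) + eps))
        m = np.linalg.solve(G.T @ G + lam * R, G.T @ d)
    return m

# Toy example: recover a sparse vector from noiseless linear measurements.
rng = np.random.default_rng(2)
G = rng.standard_normal((40, 20))
m_true = np.zeros(20)
m_true[[3, 11]] = [1.0, -2.0]
d = G @ m_true

m_est = irls_l1(G, d)
print(np.round(m_est, 2))
```

The reweighting drives near-zero coefficients toward zero while leaving large coefficients essentially unpenalized, which is how the L1 surrogate yields a sparse solution.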

  1. Accelerating ab initio path integral molecular dynamics with multilevel sampling of potential surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geng, Hua Y., E-mail: huay.geng@gmail.com; Department of Chemistry and Chemical Biology, Cornell University, Baker Laboratory, Ithaca, NY 14853

    A multilevel approach to sample the potential energy surface in a path integral formalism is proposed. The purpose is to reduce the required number of ab initio evaluations of energy and forces in ab initio path integral molecular dynamics (AI-PIMD) simulation, without compromising the overall accuracy. To validate the method, the internal energy and free energy of an Einstein crystal are calculated and compared with the analytical solutions. As a preliminary application, we assess the performance of the method in a realistic model, the FCC phase of dense atomic hydrogen, in which the calculated result shows that the acceleration rate is about 3- to 4-fold for a two-level implementation, and can be increased up to 10 times if extrapolation is used. With only 16 beads used for the ab initio potential sampling, this method gives a well-converged internal energy. The residual error in pressure is just about 3 GPa, whereas it is about 20 GPa for a plain AI-PIMD calculation with the same number of beads. The vibrational free energy of the FCC phase of dense hydrogen at 300 K is also calculated with an AI-PIMD thermodynamic integration method, which gives a result of about 0.51 eV/proton at a density of r_s = 0.912.

  2. Determination of Sodium, Potassium, Magnesium, and Calcium Minerals Level in Fresh and Boiled Broccoli and Cauliflower by Atomic Absorption Spectrometry

    NASA Astrophysics Data System (ADS)

    Nerdy

    2018-01-01

    Vegetables from the cabbage family are consumed by many people and are known to be healthful, whether eaten raw, boiled, or cooked (stir-fried or in soup). Vegetables such as broccoli and cauliflower contain vitamins, minerals, and fiber. This study aims to determine the percentage decrease in sodium, potassium, magnesium, and calcium mineral levels caused by boiling broccoli and cauliflower, measured by atomic absorption spectrometry. Boiled broccoli and cauliflower were prepared by boiling in water for 3 minutes. Fresh and boiled broccoli and cauliflower were subjected to dry destruction (ashing), followed by quantitative analysis of the sodium, potassium, magnesium, and calcium minerals at wavelengths of 589.0 nm, 766.5 nm, 285.2 nm, and 422.7 nm, respectively, using atomic absorption spectrometry. The determination of the mineral levels was followed by validation of the analytical method with respect to accuracy, precision, linearity, range, limit of detection (LOD), and limit of quantitation (LOQ). The results show a decrease in the sodium, potassium, magnesium, and calcium mineral levels in boiled broccoli and cauliflower compared with fresh broccoli and cauliflower. Method validation shows that the spectrometric method used for determining the sodium, potassium, magnesium, and calcium mineral levels is valid. It is concluded that boiling significantly decreases the mineral levels in broccoli and cauliflower.

  3. Intra prediction using face continuity in 360-degree video coding

    NASA Astrophysics Data System (ADS)

    Hanhart, Philippe; He, Yuwen; Ye, Yan

    2017-09-01

    This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.

  4. Simulation-Based Evaluation of Hybridization Network Reconstruction Methods in the Presence of Incomplete Lineage Sorting

    PubMed Central

    Kamneva, Olga K; Rosenberg, Noah A

    2017-01-01

    Hybridization events generate reticulate species relationships, giving rise to species networks rather than species trees. We report a comparative study of consensus, maximum parsimony, and maximum likelihood methods of species network reconstruction using gene trees simulated assuming a known species history. We evaluate the role of the divergence time between species involved in a hybridization event, the relative contributions of the hybridizing species, and the error in gene tree estimation. When gene tree discordance is mostly due to hybridization and not due to incomplete lineage sorting (ILS), most of the methods can detect even highly skewed hybridization events between highly divergent species. For recent divergences between hybridizing species, when the influence of ILS is sufficiently high, likelihood methods outperform parsimony and consensus methods, which erroneously identify extra hybridizations. The more sophisticated likelihood methods, however, are affected by gene tree errors to a greater extent than are consensus and parsimony. PMID:28469378

  5. Application of the variational-asymptotical method to composite plates

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Lee, Bok W.; Atilgan, Ali R.

    1992-01-01

    A method is developed for the 3D analysis of laminated plate deformation which is an extension of a variational-asymptotical method by Atilgan and Hodges (1991). Both methods are based on the treatment of plate deformation by splitting the 3D analysis into linear through-the-thickness analysis and 2D plate analysis. Whereas the first technique tackles transverse shear deformation in the second asymptotical approximation, the present method simplifies its treatment and restricts it to the first approximation. Both analytical techniques are applied to the linear cylindrical bending problem, and the strain and stress distributions are derived and compared with those of the exact solution. The present theory provides more accurate results than those of the classical laminated-plate theory for the transverse displacement of 2-, 3-, and 4-layer cross-ply laminated plates. The method can give reliable estimates of the in-plane strain and displacement distributions.

  6. Extraction and Classification of Emotions for Business Research

    NASA Astrophysics Data System (ADS)

    Verma, Rajib

    The commercial study of emotions has not yet embraced Internet/social-media mining, even though it has important applications in management. This is surprising, since the emotional content is freeform, widespread, can give a better indication of feelings (for instance with taboo subjects), and is inexpensive compared to other business research methods. A brief framework for applying text mining to this new research domain is presented, and classification issues are discussed, in an effort to encourage business people and researchers to adopt the mining methodology quickly.

  7. Development of an Automatic Differentiation Version of the FPX Rotor Code

    NASA Technical Reports Server (NTRS)

    Hu, Hong

    1996-01-01

    The ADIFOR2.0 automatic differentiator is applied to the FPX rotor code along with the grid generator GRGN3. FPX is an eXtended Full-Potential CFD code for rotor calculations. An automatic differentiation version of the code is obtained, which provides both non-geometry and geometry sensitivity derivatives. The sensitivity derivatives obtained via automatic differentiation are presented and compared with divided-difference derivatives. The study shows that the automatic differentiation method gives accurate derivative values in an efficient manner.

  8. 1H NMR quantitative determination of photosynthetic pigments from green beans (Phaseolus vulgaris L.).

    PubMed

    Valverde, Juan; This, Hervé

    2008-01-23

    Using 1H nuclear magnetic resonance spectroscopy (1D and 2D), the two types of photosynthetic pigments (chlorophylls, their derivatives, and carotenoids) of "green beans" (immature pods of Phaseolus vulgaris L.) were analyzed. Compared to other analytical methods (light spectroscopy or chromatography), 1H NMR spectroscopy is a fast analytical way that provides more information on chlorophyll derivatives (allomers and epimers) than ultraviolet-visible spectroscopy. Moreover, it gives a large amount of data without prior chromatographic separation.

  9. The Evolving Field of Biodefence: Therapeutic Developments and Diagnostics

    DTIC Science & Technology

    2005-04-01

    several ways. One method would be to interfere with the furin-mediated cleavage of PA to its active form (PA63) following host-cell receptor binding... The inactive form of protective antigen (PA83) binds to a host-cell receptor, where it is cleaved by a furin-related protease, to give active PA63... explore whether a putative target, such as the furin cleavage site of Ebola virus, is essential for viral infection. Compared with filoviruses, poxvirus

  10. Accelerator Tests of the KLEM Prototypes

    NASA Technical Reports Server (NTRS)

    Bashindzhagyan, G.; Adams, J. H.; Bashindzhagyan, P.; Baranova, N.; Christl, M.; Chilingarian, A.; Chupin, I.; Derrickson, J.; Drury, L.; Egorov, N.

    2003-01-01

    The Kinematic Lightweight Energy Meter (KLEM) device is planned for direct measurement of the elemental energy spectra of high-energy (10^11-10^16 eV) cosmic rays. The first KLEM prototype was tested at CERN with a 180 GeV pion beam in 2001. A modified KLEM prototype will be tested in proton and heavy ion beams to give more experimental data on the energy resolution and charge resolution of the KLEM method. The first test results are presented and compared with simulations.

  11. Thymectomy in Myasthenia Gravis

    PubMed Central

    Aydin, Yener; Ulas, Ali Bilal; Mutlu, Vahit; Colak, Abdurrahim; Eroglu, Atilla

    2017-01-01

    In recent years, thymectomy has become a widespread procedure in the treatment of myasthenia gravis (MG). The likelihood of remission is highest for mild preoperative disease (Osserman classification 1, 2A). In the absence of thymoma or hyperplasia, there is no relationship between age or gender and remission after thymectomy. Randomized trials comparing conservative treatment with thymectomy in MG have recently begun. As with the non-randomized trials, remission with thymectomy in MG treatment was better than with conservative medical treatment alone. There are four major surgical approaches: transcervical, minimally invasive, transsternal, and combined transcervical-transsternal thymectomy. The transsternal approach has been the accepted standard surgical approach for many years. In recent years, the use of minimally invasive thymectomy with thoracoscopic and robotic methods has been increasing. There are no randomized, controlled studies comparing the surgical techniques. However, comparison of non-randomized trials suggests that minimally invasive thymectomy approaches give results similar to those of more aggressive approaches. PMID:28416933

  12. Wavelet versus detrended fluctuation analysis of multifractal structures

    NASA Astrophysics Data System (ADS)

    Oświęcimka, Paweł; Kwapień, Jarosław; Drożdż, Stanisław

    2006-07-01

    We perform a comparative study of the applicability of the multifractal detrended fluctuation analysis (MFDFA) and the wavelet transform modulus maxima (WTMM) method for properly detecting the monofractal or multifractal character of data. We quantify the performance of both methods by using different sorts of artificial signals generated according to a few well-known, exactly soluble mathematical models: monofractal fractional Brownian motion, bifractal Lévy flights, and different sorts of multifractal binomial cascades. Our results show that in the majority of situations in which one does not know a priori the fractal properties of a process, choosing MFDFA should be recommended. In particular, WTMM gives biased outcomes for fractional Brownian motion with different values of the Hurst exponent, indicating spurious multifractality. In some cases WTMM can also give different results if one applies different wavelets. We do not exclude using WTMM in real data analysis, but it turns out that while one may apply MFDFA in a more automatic fashion, WTMM must be applied with care. In the second part of our work, we perform an analogous analysis on empirical data from the American and German stock markets. For these data both methods detect rich multifractality in terms of a broad f(α), but MFDFA suggests that this multifractality is poorer than in the case of WTMM.

  13. Quantifying O3 Impacts in Urban Areas Due to Wildfires Using a Generalized Additive Model.

    PubMed

    Gong, Xi; Kaulfus, Aaron; Nair, Udaysankar; Jaffe, Daniel A

    2017-11-21

    Wildfires emit O3 precursors, but there are large variations in emissions, plume heights, and photochemical processing. These factors make it challenging to model O3 production from wildfires using Eulerian models. Here we describe a statistical approach to characterize the maximum daily 8-h average O3 (MDA8) for 8 cities in the U.S. for typical, nonfire conditions. The statistical model represents between 35% and 81% of the variance in MDA8 for each city. We then examine the residual from the model under conditions with elevated particulate matter (PM) and satellite-observed smoke ("smoke days"). For these days, the residuals are elevated by an average of 3-8 ppb (MDA8) compared to nonsmoke days. We found that while smoke days are only 4.1% of all days (May-Sept), they are 19% of days with an MDA8 greater than 75 ppb. We also show that a published method that does not account for transport patterns gives rise to large overestimates in the amount of O3 from fires, particularly for coastal cities. Finally, we apply this method to a case study from August 2015, and show that the method gives results that are directly applicable to the EPA guidance on excluding data due to an uncontrollable source.

  14. Methods to improve traffic flow and noise exposure estimation on minor roads.

    PubMed

    Morley, David W; Gulliver, John

    2016-09-01

    Address-level estimates of exposure to road traffic noise for epidemiological studies are dependent on obtaining data on annual average daily traffic (AADT) flows that is both accurate and with good geographical coverage. National agencies often have reliable traffic count data for major roads, but for residential areas served by minor roads, especially at national scale, such information is often not available or incomplete. Here we present a method to predict AADT at the national scale for minor roads, using a routing algorithm within a geographical information system (GIS) to rank roads by importance based on simulated journeys through the road network. From a training set of known minor road AADT, routing importance is used to predict AADT on all UK minor roads in a regression model along with the road class, urban or rural location and AADT on the nearest major road. Validation with both independent traffic counts and noise measurements shows that this method gives a considerable improvement in noise prediction capability when compared to models that do not give adequate consideration to minor road variability (Spearman's rho increases from 0.46 to 0.72). This has significance for epidemiological cohort studies attempting to link noise exposure to adverse health outcomes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Size of Self-Gravity Wakes from Cassini UVIS Tracking Occultations and Ring Transparency Statistics

    NASA Astrophysics Data System (ADS)

    Esposito, Larry W.; Rehnberg, Morgan; Colwell, Joshua E.; Sremcevic, Miodrag

    2017-10-01

    We compare two methods for determining the size of self-gravity wakes in Saturn's rings. Analysis of gaps seen in UVIS occultations gives a power law distribution from 10-100 m (Rehnberg et al. 2017). Excess variance from UVIS occultations can be related to characteristic clump widths, a method which extends the work of Showalter and Nicholson (1990) to more arbitrary shadow distributions. In the middle A ring, we use results from Colwell et al. (2017) for the variance and results from Jerousek et al. (2016) for the relative size of gaps and wakes to estimate the wake width consistent with the excess variance observed there. Our method gives W = sqrt(A) * E/T² * (1 + S/W), where A is the area observed by UVIS in an integration period, E is the measured excess variance above Poisson statistics, T is the mean transparency, and S and W are the separation and width of self-gravity wakes in the granola bar model of Colwell et al. (2006). We find W ~ 10 m and infer the wavelength of the fastest growing instability λ_Toomre = S + W ~ 30 m. This is consistent with the calculation of the Toomre wavelength from the surface mass density of the A ring, and with the highest resolution UVIS star occultations.
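With the separation-to-width ratio S/W taken as known from independent gap/wake statistics, the quoted expression can be evaluated directly. The numbers below are hypothetical placeholders chosen only to illustrate the arithmetic, not the measured UVIS values:

```python
import math

# Illustrative evaluation of the wake-width estimate
#   W = sqrt(A) * E / T^2 * (1 + S/W),
# with the ratio S/W supplied as a known input. All values are placeholders.
A = 1.0e4      # area observed in one integration period [m^2] (assumed)
E = 0.01       # excess variance above Poisson statistics (assumed)
T = 0.5        # mean transparency (assumed)
s_over_w = 2.0 # assumed separation-to-width ratio S/W

W = math.sqrt(A) * E / T**2 * (1.0 + s_over_w)
S = s_over_w * W
toomre_wavelength = S + W  # wavelength of the fastest-growing instability

print(f"W = {W:.1f} m, lambda_Toomre = {toomre_wavelength:.1f} m")
```

Note that because S/W enters as a fixed ratio, the right-hand side contains no unknown W, so no iteration is needed.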

  16. Size of Self-Gravity Wakes from Cassini UVIS Tracking Occultations and Ring Transparency Statistics

    NASA Astrophysics Data System (ADS)

    Esposito, L. W.; Rehnberg, M.; Colwell, J. E.; Sremcevic, M.

    2017-12-01

    We compare two methods for determining the size of self-gravity wakes in Saturn's rings. Analysis of gaps seen in UVIS occultations gives a power law distribution from 10-100 m (Rehnberg et al. 2017). Excess variance from UVIS occultations can be related to characteristic clump widths, a method which extends the work of Showalter and Nicholson (1990) to more arbitrary shadow distributions. In the middle A ring, we use results from Colwell et al. (2017) for the variance and results from Jerousek et al. (2016) for the relative size of gaps and wakes to estimate the wake width consistent with the excess variance observed there. Our method gives W = sqrt(A) * E/T² * (1 + S/W), where A is the area observed by UVIS in an integration period, E is the measured excess variance above Poisson statistics, T is the mean transparency, and S and W are the separation and width of self-gravity wakes in the granola bar model of Colwell et al. (2006). We find W ~ 10 m and infer the wavelength of the fastest growing instability λ_T = S + W ~ 30 m. This is consistent with the calculation of the Toomre wavelength from the surface mass density of the A ring, and with the highest resolution UVIS star occultations.

  17. Comparative analysis of three different methods for monitoring the use of green bridges by wildlife.

    PubMed

    Gužvica, Goran; Bošnjak, Ivana; Bielen, Ana; Babić, Danijel; Radanović-Gužvica, Biserka; Šver, Lidija

    2014-01-01

    Green bridges are used to decrease the highly negative impact of roads/highways on wildlife populations, and their effectiveness is evaluated by various monitoring methods. Based on 3-year monitoring of four Croatian green bridges, we compared the effectiveness of three indirect monitoring methods: track-pads, camera traps, and an active infrared (IR) trail monitoring system. The ability of the methods to detect different species and to give a good estimate of the number of animal crossings was analyzed. The accuracy of species detection by the track-pad method was influenced by the granulometric composition of the track-pad material, with the best results obtained with a higher percentage of silt and clay. We compared the species composition determined by the track-pad and camera trap methods and found that monitoring by tracks underestimated the ratio of small canids, while camera traps underestimated the ratio of roe deer. Regarding the total number of recorded events, active IR detectors recorded from 11 to 19 times more events than camera traps, and approximately 80% of them were not caused by animal crossings. The camera trap method underestimated the real number of total events. Therefore, an algorithm for filtering the IR dataset was developed to approximate the real number of crossings. The presented results are valuable for future monitoring of wildlife crossings in Croatia and elsewhere, since the advantages and disadvantages of the monitoring methods used are shown. In conclusion, different methods should be chosen or combined depending on the aims of the particular monitoring study.

  18. In situ electronic probing of semiconducting nanowires in an electron microscope.

    PubMed

    Fauske, V T; Erlbeck, M B; Huh, J; Kim, D C; Munshi, A M; Dheeraj, D L; Weman, H; Fimland, B O; Van Helvoort, A T J

    2016-05-01

    For the development of electronic nanoscale structures, feedback on their electronic properties is crucial, but challenging. Here, we present a comparison of various in situ methods for electronically probing single, p-doped GaAs nanowires inside a scanning electron microscope. The methods used include (i) directly probing individual as-grown nanowires with a sharp nano-manipulator, (ii) contacting dispersed nanowires with two metal contacts and (iii) contacting dispersed nanowires with four metal contacts. For the last two cases, we compare the results obtained using conventional ex situ lithography contacting techniques and by in situ, direct-write electron beam induced deposition of a metal (Pt). The comparison shows that two-probe measurements give consistent results even with contacts made by electron beam induced deposition, but that for four-probe measurements, stray deposition can be a problem for shorter nanowires. This comparative study demonstrates that the preferred in situ method depends on the required throughput and reliability. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  19. Robust Framework to Combine Diverse Classifiers Assigning Distributed Confidence to Individual Classifiers at Class Level

    PubMed Central

    Arshad, Sannia; Rho, Seungmin

    2014-01-01

    We have presented a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of various classes whilst identifying and filtering noisy training data. This noise-free data is further used to learn models for other classifiers such as GMM and SVM. A weight learning method is then introduced to learn weights on each class for different classifiers to construct an ensemble. For this purpose, we applied a genetic algorithm to search for an optimal weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets. It is also compared with existing standard ensemble techniques such as AdaBoost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method as compared to its competitors, especially in the presence of class label noise and imbalanced classes. PMID:25295302

  20. Robust framework to combine diverse classifiers assigning distributed confidence to individual classifiers at class level.

    PubMed

    Khalid, Shehzad; Arshad, Sannia; Jabbar, Sohail; Rho, Seungmin

    2014-01-01

    We have presented a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of various classes whilst identifying and filtering noisy training data. This noise-free data is further used to learn models for other classifiers such as GMM and SVM. A weight learning method is then introduced to learn weights on each class for different classifiers to construct an ensemble. For this purpose, we applied a genetic algorithm to search for an optimal weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets. It is also compared with existing standard ensemble techniques such as AdaBoost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method as compared to its competitors, especially in the presence of class label noise and imbalanced classes.

  1. A parallel algorithm for the two-dimensional time fractional diffusion equation with implicit difference method.

    PubMed

    Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie

    2014-01-01

    It is very time-consuming to solve fractional differential equations. The computational complexity of the two-dimensional time fractional diffusion equation (2D-TFDE) with an iterative implicit finite difference method is O(MxMyN²). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and a data layout with virtual boundaries are designed for this parallel algorithm. The experimental results show that the parallel algorithm compares well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency of 81 processes is up to 88.24% compared with 9 processes on a distributed memory cluster system. We believe that parallel computing will become a basic method for computationally intensive fractional applications in the near future.
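
    The scaling figures quoted above follow from the usual definitions of speedup and parallel efficiency. A small helper, with hypothetical runtimes (the abstract reports only the resulting percentages):

```python
def speedup(t_serial, t_parallel):
    # Classic speedup: serial runtime divided by parallel runtime.
    return t_serial / t_parallel

def relative_efficiency(t_ref, p_ref, t_new, p_new):
    # Efficiency of p_new processes relative to a p_ref-process baseline;
    # ideal scaling from p_ref to p_new would cut runtime by a factor p_new/p_ref.
    return (t_ref * p_ref) / (t_new * p_new)

# Hypothetical runtimes for a 9-process baseline and an 81-process run:
eff = relative_efficiency(t_ref=800.0, p_ref=9, t_new=100.0, p_new=81)
```
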

  2. Efficiency test of filtering methods for the removal of transcranial magnetic stimulation artifacts on human electroencephalography with artificially transcranial magnetic stimulation-corrupted signals

    NASA Astrophysics Data System (ADS)

    Zilber, Nicolas A.; Katayama, Yoshinori; Iramina, Keiji; Erich, Wintermantel

    2010-05-01

    A new approach is proposed to test the efficiency of methods, such as the Kalman filter and the independent component analysis (ICA), when applied to remove the artifacts induced by transcranial magnetic stimulation (TMS) from electroencephalography (EEG). By using EEG recordings corrupted by TMS induction, the shape of the artifacts is approximately described with a model based on an equivalent circuit simulation. These modeled artifacts are subsequently added to other EEG signals—this time not influenced by TMS. The resulting signals prove of interest since we also know their form without the pseudo-TMS artifacts. Therefore, they enable us to use a fit test to compare the signals we obtain after removing the artifacts with the original signals. This efficiency test proved very useful for comparing the methods with one another, as well as for determining the filtering parameters that give satisfactory results with the automatic ICA.
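
    The fit test described here can be as simple as comparing residual power to signal power between the cleaned signal and the known pristine one. A minimal sketch with a synthetic sine-wave "EEG" and a decaying-exponential pseudo-artifact (all values illustrative):

```python
import numpy as np

def artifact_removal_score(cleaned, original):
    # Goodness of fit between the artifact-cleaned signal and the pristine one:
    # 1 - residual power / signal power, so 1.0 means perfect recovery.
    cleaned = np.asarray(cleaned, float)
    original = np.asarray(original, float)
    return 1.0 - np.sum((cleaned - original) ** 2) / np.sum(original ** 2)

# Synthetic 10 Hz "EEG" plus a decaying-exponential pseudo-TMS artifact.
t = np.linspace(0.0, 1.0, 500)
eeg = np.sin(2 * np.pi * 10 * t)
artifact = 5.0 * np.exp(-t / 0.05)
corrupted = eeg + artifact
ideal_clean = corrupted - artifact       # a perfect filter recovers the EEG exactly
score = artifact_removal_score(ideal_clean, eeg)
```
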

  3. High-performance parallel approaches for three-dimensional light detection and ranging point clouds gridding

    NASA Astrophysics Data System (ADS)

    Rizki, Permata Nur Miftahur; Lee, Heezin; Lee, Minsu; Oh, Sangyoon

    2017-01-01

    With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing in the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the most suited parallel programming model for a given computing environment. We present our findings and insights identified by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) on time-demanding but accurate kriging interpolation. The performances of the approaches are compared by varying the size of the grid and input data. In our empirical experiment, we demonstrate the significant acceleration by all three approaches compared to a C-implemented sequential-processing method. In addition, we also discuss the pros and cons of each method in terms of usability, complexity, infrastructure, and platform limitations to give readers a better understanding of utilizing those parallel approaches for gridding purposes.
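
    As a rough illustration of row-parallel gridding, the sketch below uses inverse-distance weighting, a much simpler stand-in for the kriging interpolation benchmarked in the paper, and fans grid rows out to a thread pool:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def idw_row(args):
    # Interpolate one grid row by inverse-distance weighting (power 2),
    # a simpler stand-in for kriging.
    ys, xs, zs, row_y, grid_x = args
    d2 = (grid_x[:, None] - xs) ** 2 + (row_y - ys) ** 2
    w = 1.0 / np.maximum(d2, 1e-12)       # clip so on-point distances stay finite
    return (w @ zs) / w.sum(axis=1)

def grid_points(xs, ys, zs, grid_x, grid_y, workers=4):
    # Fan the grid rows out to a thread pool, one task per row of the DEM.
    tasks = [(ys, xs, zs, gy, grid_x) for gy in grid_y]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return np.vstack(list(pool.map(idw_row, tasks)))

# Tiny demo: three scattered elevation points onto a 4 x 4 grid.
xs = np.array([0.0, 1.0, 0.5])
ys = np.array([0.0, 1.0, 0.2])
zs = np.array([1.0, 3.0, 2.0])
dem = grid_points(xs, ys, zs, np.linspace(0, 1, 4), np.linspace(0, 1, 4))
```

    The same row decomposition maps naturally onto MPI ranks or MapReduce tasks; only the per-row interpolation kernel would change for real kriging.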

  4. On the calculation of the complex wavenumber of plane waves in rigid-walled low-Mach-number turbulent pipe flows

    NASA Astrophysics Data System (ADS)

    Weng, Chenyang; Boij, Susann; Hanifi, Ardeshir

    2015-10-01

    A numerical method for calculating the wavenumbers of axisymmetric plane waves in rigid-walled low-Mach-number turbulent flows is proposed, which is based on solving the linearized Navier-Stokes equations with an eddy-viscosity model. In addition, theoretical models for the wavenumbers are reviewed, and the main effects (the viscothermal effects, the mean flow convection and refraction effects, the turbulent absorption, and the moderate compressibility effects) which may influence the sound propagation are discussed. Compared to the theoretical models, the proposed numerical method has the advantage of potentially including more effects in the computed wavenumbers. The numerical results of the wavenumbers are compared with the reviewed theoretical models, as well as experimental data from the literature. The comparison shows that the proposed numerical method gives satisfactory predictions of both the real part (phase shift) and the imaginary part (attenuation) of the measured wavenumbers, especially when the refraction effects or the turbulent absorption effects become important.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azadi, Sam, E-mail: s.azadi@ucl.ac.uk; Cohen, R. E.

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of −2.3(4) and −2.7(3) kcal/mol, respectively. The best estimate of the coupled-cluster theory with perturbative triples/complete basis set limit is −2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.

  6. Extracting the field-effect mobilities of random semiconducting single-walled carbon nanotube networks: A critical comparison of methods

    NASA Astrophysics Data System (ADS)

    Schießl, Stefan P.; Rother, Marcel; Lüttgens, Jan; Zaumseil, Jana

    2017-11-01

    The field-effect mobility is an important figure of merit for semiconductors such as random networks of single-walled carbon nanotubes (SWNTs). However, owing to their network properties and quantum capacitance, the standard models for field-effect transistors cannot be applied without modifications. Several different methods are used to determine the mobility with often very different results. We fabricated and characterized field-effect transistors with different polymer-sorted, semiconducting SWNT network densities ranging from low (≈6 μm⁻¹) to densely packed quasi-monolayers (≈26 μm⁻¹) with a maximum on-conductance of 0.24 μS μm⁻¹ and compared four different techniques to evaluate the field-effect mobility. We demonstrate the limits and requirements for each method with regard to device layout and carrier accumulation. We find that techniques that take into account the measured capacitance on the active device give the most reliable mobility values. Finally, we compare our experimental results to a random-resistor-network model.
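
    For reference, a linear-regime mobility extraction that uses the measured gate capacitance can be sketched as follows; the transfer curve and all device numbers here are synthetic, not values from the paper:

```python
import numpy as np

def linear_regime_mobility(v_g, i_d, c_total, length, v_ds):
    # mu = g_m * L^2 / (C * V_DS): linear-regime field-effect mobility using the
    # total measured gate capacitance C of the active channel (not C per area).
    g_m = np.max(np.gradient(i_d, v_g))      # peak transconductance dI_D/dV_G
    return g_m * length ** 2 / (c_total * v_ds)

# Synthetic transfer curve: turn-on at 1 V, constant transconductance of 2 uS.
v_g = np.linspace(0, 5, 51)                           # gate voltage, V
i_d = 2e-6 * np.clip(v_g - 1.0, 0.0, None)            # drain current, A
mu = linear_regime_mobility(v_g, i_d, c_total=1e-9,   # measured capacitance, F
                            length=20e-4, v_ds=0.1)   # channel length in cm, V_DS in V
# mu comes out in cm^2 / (V s) because the length was given in cm
```
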

  7. Validating fatty acid intake as estimated by an FFQ: how does the 24 h recall perform as reference method compared with the duplicate portion?

    PubMed

    Trijsburg, Laura; de Vries, Jeanne Hm; Hollman, Peter Ch; Hulshof, Paul Jm; van 't Veer, Pieter; Boshuizen, Hendriek C; Geelen, Anouk

    2018-05-08

    To compare the performance of the commonly used 24 h recall (24hR) with the more distinct duplicate portion (DP) as reference method for validation of fatty acid intake estimated with an FFQ. Intakes of SFA, MUFA, n-3 fatty acids and linoleic acid (LA) were estimated by chemical analysis of two DP and by on average five 24hR and two FFQ. Plasma n-3 fatty acids and LA were used to objectively compare ranking of individuals based on DP and 24hR. Multivariate measurement error models were used to estimate validity coefficients and attenuation factors for the FFQ with the DP and 24hR as reference methods. Wageningen, the Netherlands. Ninety-two men and 106 women (aged 20-70 years). Validity coefficients for the fatty acid estimates by the FFQ tended to be lower when using the DP as reference method compared with the 24hR. Attenuation factors for the FFQ tended to be slightly higher based on the DP than those based on the 24hR as reference method. Furthermore, when using plasma fatty acids as reference, the DP showed comparable to slightly better ranking of participants according to their intake of n-3 fatty acids (0·33) and n-3:LA (0·34) than the 24hR (0·22 and 0·24, respectively). The 24hR gives only slightly different results compared with the distinctive but less feasible DP; therefore, use of the 24hR seems appropriate as the reference method for FFQ validation of fatty acid intake.
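
    Under the classical measurement-error model, the attenuation factor for an FFQ is the slope obtained by regressing the reference instrument on the FFQ. A hedged sketch on simulated intakes (all distributions and numbers hypothetical, not the study's data):

```python
import numpy as np

def attenuation_factor(ffq, reference):
    # Slope from regressing the reference instrument on the FFQ; under the
    # classical measurement-error model this scales an observed diet-disease
    # association toward the null.
    ffq = np.asarray(ffq, float)
    reference = np.asarray(reference, float)
    return np.cov(ffq, reference)[0, 1] / np.var(ffq, ddof=1)

# Simulated intakes (hypothetical units, g/day):
rng = np.random.default_rng(0)
true_intake = rng.normal(30, 8, 200)
ffq = 0.7 * true_intake + rng.normal(10, 6, 200)   # biased, noisy FFQ report
dp = true_intake + rng.normal(0, 3, 200)           # duplicate portion as reference
lam = attenuation_factor(ffq, dp)                  # attenuates toward the null (< 1)
```

    Validity coefficients additionally require the full multivariate measurement-error model (or a third instrument, as with the plasma markers used here), so they are not reproduced in this sketch.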

  8. Burden of informal care giving to patients with psychoses: a descriptive and methodological study.

    PubMed

    Flyckt, Lena; Löthman, Anna; Jörgensen, Leif; Rylander, Anders; Koernig, Thomas

    2013-03-01

    There is a lack of studies of the size of the burden associated with informal care giving in psychosis. To evaluate the objective and subjective burden of informal care giving to patients with psychoses, and to compare a diary and a recall method for assessment of objective burden. Patients and their informal caregivers were recruited from nine Swedish psychiatric outpatient centres. Subjective burden was assessed at inclusion using the CarerQoL and COPE index scales. The objective burden (time and money spent) was assessed by the caregivers daily using diaries over four weeks and by recall at the end of weeks 1 and 2. One hundred and seven patients (53% females; mean age 43 ± 11) and 118 informal caregivers (67%; 58 ± 15 years) were recruited. Informal caregivers spent 22.5 hours/week and about 14% of their gross income on care-related activities. The time spent was underestimated by two to 20 hours when assessed by recall compared with daily diary records. The most prominent aspects of the subjective burden were mental problems. Despite a substantial amount of time and money spent on care giving, the informal caregivers perceived the mental aspects of burden as the most troublesome. The informal caregiver burden is considerable and should be taken into account when evaluating effects of health care provided to patients with psychoses.

  9. Giving Back: Exploring Service-Learning in an Online Learning Environment

    ERIC Educational Resources Information Center

    McWhorter, Rochell R.; Delello, Julie A.; Roberts, Paul B.

    2016-01-01

    Service-Learning (SL) as an instructional method is growing in popularity for giving back to the community while connecting the experience to course content. However, little has been published on using SL for online business students. This study highlights an exploratory mixed-methods, multiple case study of an online business leadership and…

  10. Attitudes toward High Achievers, Self Esteem, and Value Priorities for Australian, American, and Canadian Students.

    ERIC Educational Resources Information Center

    Feather, N. T.

    1998-01-01

    Results from a study comparing 114 American, 186 Australian, and 310 Canadian college students show that (1) Americans give more emphasis to achievement, competence, and conformity; (2) Australians give less emphasis to conformity and are more egalitarian; and (3) Canadians give less emphasis to affiliative contentment values. (SLD)

  11. The Straightforwardness of Advice: Advice-Giving in Interactions Between Swedish District Nurses and Patients.

    ERIC Educational Resources Information Center

    Leppanen, Vesa

    1998-01-01

    A study examined advice-giving interactions between Swedish district nurses and patients, comparing these sequences with parallel interactions between British health visitors and first-time mothers in previous research. Analysis focused on how advice-giving is organized in the settings, including how advice is initiated and designed, its…

  12. Photoelectron circular dichroism of bicyclic ketones from multiphoton ionization with femtosecond laser pulses.

    PubMed

    Lux, Christian; Wollenhaupt, Matthias; Sarpe, Cristian; Baumert, Thomas

    2015-01-12

    Photoelectron circular dichroism (PECD) is a CD effect reaching the ten-percent regime and shows contributions from higher-order Legendre polynomials when multiphoton ionization is compared to single-photon ionization. We give a full account of our experimental methodology for measuring the multiphoton PECD and derive quantitative measures that we apply to camphor, fenchone and norcamphor. Different modulations and amplitudes of the contributing Legendre polynomials are observed despite the similarity in chemical structure. In addition, we study PECD for elliptically polarized light employing tomographic reconstruction methods. Intensity studies reveal dissociative ionization as the origin of the observed PECD effect, whereas ionization of the intermediate resonance dominates the signal. As a perspective, we suggest making use of our tomographic data as an experimental basis for a complete photoionization experiment and give a prospect of PECD as an analytical tool. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Searching Remotely Sensed Images for Meaningful Nested Gestalten

    NASA Astrophysics Data System (ADS)

    Michaelsen, E.; Muench, D.; Arens, M.

    2016-06-01

    Even non-expert human observers sometimes still outperform automatic extraction of man-made objects from remotely sensed data. We conjecture that some of this remarkable capability can be explained by Gestalt mechanisms. Gestalt algebra gives a mathematical structure capturing such part-aggregate relations and the laws to form an aggregate called a Gestalt. Primitive Gestalten are obtained from an input image and the space of all possible Gestalt algebra terms is searched for well-assessed instances. This can be a very challenging combinatorial effort. The contribution at hand gives some tools and structures unfolding a finite and comparatively small subset of the possible combinations. Yet, the intended Gestalten still are contained and found with high probability and moderate effort. Experiments are made with images obtained from a virtual globe system, and use the SIFT method for extraction of the primitive Gestalten. Comparison is made with manually extracted ground-truth Gestalten salient to human observers.

  14. Estimating the Gibbs energy of hydration from molecular dynamics trajectories obtained by integral equations of the theory of liquids in the RISM approximation

    NASA Astrophysics Data System (ADS)

    Tikhonov, D. A.; Sobolev, E. V.

    2011-04-01

    A method of integral equations of the theory of liquids in the reference interaction site model (RISM) approximation is used to estimate the Gibbs energy averaged over equilibrium trajectories computed by molecular mechanics. The peptide oxytocin is selected as the object of interest. The Gibbs energy is calculated using all chemical potential formulas introduced in the RISM approach for the excess chemical potential of solvation and is compared with estimates by the generalized Born model. Some formulas are shown to give the wrong sign of the Gibbs energy change when the peptide passes from the gas phase into the water environment; the other formulas give overestimated Gibbs energy changes with the right sign. Note that allowance for the repulsive correction in the approximate analytical expressions for the Gibbs energy derived by thermodynamic perturbation theory is not a remedy.

  15. Mechanical properties of hydrogenated bilayer graphene

    NASA Astrophysics Data System (ADS)

    Andrew, R. C.; Mapasha, R. E.; Chetty, N.

    2013-06-01

    Using first-principles methods, we study the mechanical properties of monolayer and bilayer graphene with 50% and 100% coverage of hydrogen. We employ the vdW-DF, vdW-DF-C09x, and vdW-DF2-C09x van der Waals functionals for the exchange correlation interactions that give significantly improved interlayer spacings and energies. We also use the PBE form for the generalized gradient corrected exchange correlation functional for comparison. We present a consistent theoretical framework for the in-plane layer modulus and the out-of-plane interlayer modulus and we calculate, for the first time, these properties for these systems. This gives a measure of the change of the strength properties when monolayer and bilayer graphene are hydrogenated. Moreover, in comparing the relative performance of these functionals in describing hydrogenated bilayer graphene, we also benchmark how they calculate the properties of graphite.

  16. Measurements of tropospheric NO2 in Romania using a zenith-sky mobile DOAS system and comparisons with satellite observations.

    PubMed

    Constantin, Daniel-Eduard; Merlaud, Alexis; Van Roozendael, Michel; Voiculescu, Mirela; Fayt, Caroline; Hendrick, François; Pinardi, Gaia; Georgescu, Lucian

    2013-03-20

    In this paper we present a new method for retrieving tropospheric NO2 Vertical Column Density (VCD) from zenith-sky Differential Optical Absorption Spectroscopy (DOAS) measurements using mobile observations. This method was used during three days in the summer of 2011 in Romania, being to our knowledge the first mobile DOAS measurements performed in this country. The measurements were carried out over large and different areas using a mobile DOAS system installed in a car. We present here a step-by-step retrieval of tropospheric VCD using complementary observations from ground and space which take into account the stratospheric contribution, which is a step forward compared to other similar studies. The detailed error budget indicates that the typical uncertainty on the retrieved NO2 tropospheric VCD is less than 25%. The resulting ground-based data set is compared to satellite measurements from the Ozone Monitoring Instrument (OMI) and the Global Ozone Monitoring Experiment-2 (GOME-2). For instance, on 18 July 2011, in an industrial area located at 47.03°N, 22.45°E, GOME-2 observes a tropospheric VCD value of (3.4 ± 1.9) × 10¹⁵ molec./cm², while average mobile measurements in the same area give a value of (3.4 ± 0.7) × 10¹⁵ molec./cm². On 22 August 2011, around Ploiesti city (44.99°N, 26.1°E), the tropospheric VCD observed by satellites is (3.3 ± 1.9) × 10¹⁵ molec./cm² (GOME-2) and (3.2 ± 3.2) × 10¹⁵ molec./cm² (OMI), while average mobile measurements give (3.8 ± 0.8) × 10¹⁵ molec./cm². Average ground measurements over "clean areas", on 18 July 2011, give (2.5 ± 0.6) × 10¹⁵ molec./cm² while the satellite observes a value of (1.8 ± 1.3) × 10¹⁵ molec./cm².

  17. Measurements of Tropospheric NO2 in Romania Using a Zenith-Sky Mobile DOAS System and Comparisons with Satellite Observations

    PubMed Central

    Constantin, Daniel-Eduard; Merlaud, Alexis; Van Roozendael, Michel; Voiculescu, Mirela; Fayt, Caroline; Hendrick, François; Pinardi, Gaia; Georgescu, Lucian

    2013-01-01

    In this paper we present a new method for retrieving tropospheric NO2 Vertical Column Density (VCD) from zenith-sky Differential Optical Absorption Spectroscopy (DOAS) measurements using mobile observations. This method was used during three days in the summer of 2011 in Romania, being to our knowledge the first mobile DOAS measurements performed in this country. The measurements were carried out over large and different areas using a mobile DOAS system installed in a car. We present here a step-by-step retrieval of tropospheric VCD using complementary observations from ground and space which take into account the stratospheric contribution, which is a step forward compared to other similar studies. The detailed error budget indicates that the typical uncertainty on the retrieved NO2 tropospheric VCD is less than 25%. The resulting ground-based data set is compared to satellite measurements from the Ozone Monitoring Instrument (OMI) and the Global Ozone Monitoring Experiment-2 (GOME-2). For instance, on 18 July 2011, in an industrial area located at 47.03°N, 22.45°E, GOME-2 observes a tropospheric VCD value of (3.4 ± 1.9) × 10¹⁵ molec./cm², while average mobile measurements in the same area give a value of (3.4 ± 0.7) × 10¹⁵ molec./cm². On 22 August 2011, around Ploiesti city (44.99°N, 26.1°E), the tropospheric VCD observed by satellites is (3.3 ± 1.9) × 10¹⁵ molec./cm² (GOME-2) and (3.2 ± 3.2) × 10¹⁵ molec./cm² (OMI), while average mobile measurements give (3.8 ± 0.8) × 10¹⁵ molec./cm². Average ground measurements over “clean areas”, on 18 July 2011, give (2.5 ± 0.6) × 10¹⁵ molec./cm² while the satellite observes a value of (1.8 ± 1.3) × 10¹⁵ molec./cm². PMID:23519349

  18. A heuristic statistical stopping rule for iterative reconstruction in emission tomography.

    PubMed

    Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D

    2013-01-01

    We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidian distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimate of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a mastered computation time.
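
    The trade-off such a stopping rule addresses can be reproduced on a toy system: MLEM iterates approach the exact object and, with noisy data, later drift away as noise is amplified. A minimal sketch of the MLEM update (noiseless toy data, not a PET simulation, so here the distance simply decreases):

```python
import numpy as np

def mlem(A, y, n_iter):
    # Plain MLEM update: x <- x / (A^T 1) * A^T (y / (A x)), yielding each iterate.
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)  # guard against division by zero
        x = x / sens * (A.T @ ratio)
        yield x.copy()

# Toy system: random nonnegative matrix, known object, noiseless data.
rng = np.random.default_rng(1)
A = rng.random((40, 10))
x_true = rng.random(10) + 0.5
y = A @ x_true
dists = [np.linalg.norm(x - x_true) for x in mlem(A, y, 50)]
best_iter = int(np.argmin(dists))  # with noisy data, this minimum marks the stop point
```
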

  19. Method validation for simultaneous determination of chromium, molybdenum and selenium in infant formulas by ICP-OES and ICP-MS.

    PubMed

    Khan, Naeem; Jeong, In Seon; Hwang, In Min; Kim, Jae Sung; Choi, Sung Hwa; Nho, Eun Yeong; Choi, Ji Yeon; Kwak, Byung-Man; Ahn, Jang-Hyuk; Yoon, Taehyung; Kim, Kyong Su

    2013-12-15

    This study aimed to validate the analytical method for simultaneous determination of chromium (Cr), molybdenum (Mo), and selenium (Se) in infant formulas available in South Korea. Various digestion methods (dry-ashing, wet-digestion and microwave) were evaluated for sample preparation, and both inductively coupled plasma optical emission spectrometry (ICP-OES) and inductively coupled plasma mass spectrometry (ICP-MS) were compared for analysis. The analytical techniques were validated by detection limits, precision, accuracy and recovery experiments. Results showed that the wet-digestion and microwave methods gave satisfactory results for sample preparation, while ICP-MS was found to be a more sensitive and effective technique than ICP-OES. The recoveries (%) of Se, Mo and Cr by ICP-OES were 40.9, 109.4 and 0, compared to 99.1, 98.7 and 98.4, respectively, by ICP-MS. The contents of Cr, Mo and Se in infant formulas determined by ICP-MS were in good agreement with the CODEX nutrient standards for infant formulas. Copyright © 2013 Elsevier Ltd. All rights reserved.
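
    The recovery figures above come from spike-recovery experiments; the computation itself is a one-liner (the numbers below are illustrative, not the study's measurements):

```python
def recovery_percent(measured, spiked):
    # Spike-recovery check used in method validation: 100 * measured / spiked.
    return 100.0 * measured / spiked

# e.g. an element spiked at 50.0 ug/kg and measured back at 49.2 ug/kg:
r = recovery_percent(49.2, 50.0)
```
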

  20. Bayesian Parameter Estimation for Heavy-Duty Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Eric; Konan, Arnaud; Duran, Adam

    2017-03-28

    Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. The modeled road load is then compared to the measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history will give a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets, and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
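
    The acceptance step described here is the Metropolis rule. A self-contained sketch on synthetic drive data, with an assumed road-load form F = m·a + m·g·Crr + ½·ρ·CdA·v² and made-up constants (none of the numbers are from the report):

```python
import numpy as np

RHO, G, CRR = 1.2, 9.81, 0.007   # assumed air density, gravity, rolling resistance

def road_load(params, v, a):
    # Simplified road load: inertia + rolling resistance + aerodynamic drag.
    m, cda = params                          # mass (kg), Cd * frontal area (m^2)
    return m * a + m * G * CRR + 0.5 * RHO * cda * v ** 2

def log_prob(params, v, a, f_meas, sigma=50.0):
    # Gaussian likelihood of the measured load given a candidate parameter set.
    if params[0] <= 0 or params[1] <= 0:
        return -np.inf
    r = f_meas - road_load(params, v, a)
    return -0.5 * np.sum((r / sigma) ** 2)

def metropolis(v, a, f_meas, start, steps, rng):
    # Accept a proposal with probability equal to the posterior ratio to the
    # current state; the chain history is the parameter distribution.
    chain = [np.asarray(start, float)]
    lp = log_prob(chain[0], v, a, f_meas)
    for _ in range(steps):
        prop = chain[-1] + rng.normal(0.0, [50.0, 0.05])
        lp_prop = log_prob(prop, v, a, f_meas)
        if np.log(rng.random()) < lp_prop - lp:
            chain.append(prop); lp = lp_prop
        else:
            chain.append(chain[-1])
    return np.array(chain)

# Synthetic "logged" data for a hypothetical 15-tonne truck with CdA = 5 m^2.
rng = np.random.default_rng(2)
v = rng.uniform(5, 30, 100)                  # speed, m/s
a = rng.normal(0, 0.5, 100)                  # acceleration, m/s^2
f = road_load((15000.0, 5.0), v, a) + rng.normal(0, 50.0, 100)
chain = metropolis(v, a, f, start=(10000.0, 3.0), steps=3000, rng=rng)
m_est, cda_est = chain[1500:].mean(axis=0)   # discard burn-in
```

    The spread of `chain` after burn-in is exactly the "distribution of possible values" the abstract refers to.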

  1. Classification of accelerometer wear and non-wear events in seconds for monitoring free-living physical activity.

    PubMed

    Zhou, Shang-Ming; Hill, Rebecca A; Morgan, Kelly; Stratton, Gareth; Gravenor, Mike B; Bijlsma, Gunnar; Brophy, Sinead

    2015-05-11

    To classify wear and non-wear time of accelerometer data for accurately quantifying physical activity in public health or population level research. A bi-moving-window-based approach was used to combine acceleration and skin temperature data to identify wear and non-wear time events in triaxial accelerometer data that monitor physical activity. Local residents in Swansea, Wales, UK. 50 participants aged under 16 years (n=23) and over 17 years (n=27) were recruited in two phases: phase 1: design of the wear/non-wear algorithm (n=20) and phase 2: validation of the algorithm (n=30). Participants wore a triaxial accelerometer (GeneActiv) against the skin surface on the wrist (adults) or ankle (children). Participants kept a diary to record the timings of wear and non-wear and were asked to ensure that events of wear/non-wear last for a minimum of 15 min. The overall sensitivity of the proposed method was 0.94 (95% CI 0.90 to 0.98) and specificity 0.91 (95% CI 0.88 to 0.94). It performed equally well for children compared with adults, and females compared with males. Using surface skin temperature data in combination with acceleration data significantly improved the classification of wear/non-wear time when compared with methods that used acceleration data only (p<0.01). Using either accelerometer seismic information or temperature information alone is prone to considerable error. Combining both sources of data can give accurate estimates of non-wear periods thus giving better classification of sedentary behaviour. This method can be used in population studies of physical activity in free-living environments. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
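
    The reported sensitivity and specificity reduce to epoch-level counting against the diary. A tiny sketch with toy labels (1 = wear), not the study's data:

```python
def sensitivity_specificity(predicted_wear, actual_wear):
    # Epoch-level agreement with the diary: sensitivity = detected wear / true wear,
    # specificity = detected non-wear / true non-wear.
    pairs = list(zip(predicted_wear, actual_wear))
    tp = sum(1 for p, a in pairs if p and a)
    tn = sum(1 for p, a in pairs if not p and not a)
    fn = sum(1 for p, a in pairs if not p and a)
    fp = sum(1 for p, a in pairs if p and not a)
    return tp / (tp + fn), tn / (tn + fp)

# Toy diary vs. classifier output over ten epochs:
actual    = [1, 1, 1, 1, 0, 0, 0, 1, 1, 0]
predicted = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
sens, spec = sensitivity_specificity(predicted, actual)
```
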

  2. How many fish? Comparison of two underwater visual sampling methods for monitoring fish communities

    PubMed Central

    Sini, Maria; Vatikiotis, Konstantinos; Katsoupis, Christos

    2018-01-01

    Background Underwater visual surveys (UVSs) for monitoring fish communities are preferred over fishing surveys in certain habitats, such as rocky or coral reefs and seagrass beds, and are the standard monitoring tool in many cases, especially in protected areas. However, despite their wide application, there are potential biases, mainly due to imperfect detectability and the behavioral responses of fish to the observers. Methods The performance of two UVS methods was compared to test whether they give similar results in terms of fish population density, occupancy, species richness, and community composition. Distance sampling (line transects) and plot sampling (strip transects) were conducted at 31 rocky reef sites in the Aegean Sea (Greece) using SCUBA diving. Results Line transects generated significantly higher values of occupancy, species richness, and total fish density compared to strip transects. For most species, density estimates differed significantly between the two sampling methods. For secretive species and species avoiding the observers, the line transect method yielded higher estimates, as it accounted for imperfect detectability and utilized a larger survey area compared to the strip transect method. On the other hand, large-scale spatial patterns of species composition were similar for both methods. Discussion Overall, both methods presented a number of advantages and limitations, which should be considered in survey design. Line transects appear to be more suitable for surveying secretive species, while strip transects should be preferred at high fish densities and for species of high mobility. PMID:29942703

  3. Comparative rice seed toxicity tests using filter paper, growth pouch-tm, and seed tray methods

    USGS Publications Warehouse

    Wang, W.

    1993-01-01

    Paper substrate, especially circular filter paper placed inside a Petri dish, has long been used for the plant seed toxicity test (PSTT). Although this method is simple and inexpensive, recent evidence indicates that it gives results that are significantly different from those obtained using a method that does not involve paper, especially when testing metal cations. The study compared PSTT using three methods: filter paper, Growth Pouch-TM, and seed tray. The Growth Pouch-TM is a commercially available device. The seed tray is a newly designed plastic receptacle placed inside a Petri dish. The results of the Growth Pouch-TM method showed no toxic effects on rice for Ag up to 40 mg L-1 and Cd up to 20 mg L-1. Using the seed tray method, IC50 (50% inhibitory effect concentration) values were 0.55 and 1.4 mg L-1 for Ag and Cd, respectively. Although results of filter paper and seed tray methods were nearly identical for NaF, Cr(VI), and phenol, the toxicities of cations Ag and Cd were reduced by using the filter paper method; IC50 values were 22 and 18 mg L-1, respectively. The results clearly indicate that paper substrate is not advisable for PSTT.
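An IC50 like those reported above can be estimated from a small dose-response series by log-linear interpolation between the bracketing concentrations; a hedged sketch (the concentrations and responses are hypothetical, and real assays fit a full dose-response model rather than interpolating):

```python
import math

def ic50(concs, responses):
    """Estimate IC50 by log-linear interpolation between the two test
    concentrations whose responses bracket 50% of the control response.
    `responses` are percent of control (100 = no inhibition); `concs`
    must be sorted ascending. Illustrative only.
    """
    for (c1, r1), (c2, r2) in zip(zip(concs, responses),
                                  zip(concs[1:], responses[1:])):
        if r1 >= 50.0 >= r2:
            # interpolate on log-concentration between the bracketing points
            f = (r1 - 50.0) / (r1 - r2)
            return 10 ** (math.log10(c1)
                          + f * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% response not bracketed by the tested range")

# hypothetical rice root-growth data, % of control
concs = [0.1, 0.3, 1.0, 3.0]     # mg/L
resp  = [95.0, 70.0, 40.0, 10.0]
est = ic50(concs, resp)
```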

  4. Accounting for the Multiple Natures of Missing Values in Label-Free Quantitative Proteomics Data Sets to Compare Imputation Strategies.

    PubMed

    Lazar, Cosmin; Gatto, Laurent; Ferro, Myriam; Bruley, Christophe; Burger, Thomas

    2016-04-01

    Missing values are a genuine issue in label-free quantitative proteomics. Recent works have surveyed the different statistical methods to conduct imputation and have compared them on real or simulated data sets and recommended a list of missing value imputation methods for proteomics applications. Although insightful, these comparisons do not account for two important facts: (i) depending on the proteomics data set, the missingness mechanism may be of different natures and (ii) each imputation method is devoted to a specific type of missingness mechanism. As a result, we believe that the question at stake is not to find the most accurate imputation method in general but instead the most appropriate one. We describe a series of comparisons that support our views: For instance, we show that a supposedly "under-performing" method (i.e., giving baseline average results), if applied at the "appropriate" time in the data-processing pipeline (before or after peptide aggregation) on a data set with the "appropriate" nature of missing values, can outperform a blindly applied, supposedly "better-performing" method (i.e., the reference method from the state-of-the-art). This leads us to formulate a few practical guidelines regarding the choice and the application of an imputation method in a proteomics context.
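The point that the best imputer depends on the missingness mechanism can be illustrated with a toy simulation; everything below (the distribution, missingness rates, and the two naive imputers) is an assumption for illustration, not the paper's benchmark:

```python
import numpy as np

rng = np.random.default_rng(0)
true = rng.normal(20.0, 2.0, size=10_000)   # true log-intensities

# MCAR: values vanish at random; MNAR: the LOWEST values vanish
# (left-censoring, common for low-abundance peptides).
mcar_mask = rng.random(true.size) < 0.2
mnar_mask = true < np.quantile(true, 0.2)

def impute_mean(x, missing):
    out = x.copy()
    out[missing] = x[~missing].mean()   # fill with observed mean
    return out

def impute_min(x, missing):
    out = x.copy()
    out[missing] = x[~missing].min()    # fill with observed minimum
    return out

def rmse(imputed, truth, mask):
    return float(np.sqrt(np.mean((imputed[mask] - truth[mask]) ** 2)))

# Mean imputation suits MCAR; min imputation suits left-censored MNAR.
err_mean_mcar = rmse(impute_mean(true, mcar_mask), true, mcar_mask)
err_min_mcar  = rmse(impute_min(true, mcar_mask), true, mcar_mask)
err_mean_mnar = rmse(impute_mean(true, mnar_mask), true, mnar_mask)
err_min_mnar  = rmse(impute_min(true, mnar_mask), true, mnar_mask)
```

Each imputer wins under the mechanism it was designed for, which is the paper's central argument in miniature.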

  5. Comparison of up-scaling methods in poroelasticity and its generalizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berryman, J G

    2003-12-13

    Four methods of up-scaling coupled equations at the microscale to equations valid at the mesoscale and/or macroscale for fluid-saturated and partially saturated porous media will be discussed, compared, and contrasted. The four methods are: (1) effective medium theory, (2) mixture theory, (3) two-scale and multiscale homogenization, and (4) volume averaging. All these methods have advantages for some applications and disadvantages for others. For example, effective medium theory, mixture theory, and homogenization methods can all give formulas for coefficients in the up-scaled equations, whereas volume averaging methods give the form of the up-scaled equations but generally must be supplemented with physical arguments and/or data in order to determine the coefficients. Homogenization theory requires a great deal of mathematical insight from the user in order to choose appropriate scalings for use in the resulting power-law expansions, while volume averaging requires more physical insight to motivate the steps needed to find coefficients. Homogenization often is performed on periodic models, while volume averaging does not require any assumption of periodicity and can therefore be related very directly to laboratory and/or field measurements. Validity of the homogenization process is often limited to specific ranges of frequency - in order to justify the scaling hypotheses that must be made - and therefore cannot be used easily over wide ranges of frequency. However, volume averaging methods can quite easily be used for wide band data analysis. So, we learn from these comparisons that a researcher in the theory of poroelasticity and its generalizations needs to be conversant with two or more of these methods to solve problems generally.

  6. Energy minimization in medical image analysis: Methodologies and applications.

    PubMed

    Zhao, Feng; Xie, Xianghua

    2016-02-01

    Energy minimization is of particular interest in medical image analysis. In the past two decades, a variety of optimization schemes have been developed. In this paper, we present a comprehensive survey of the state-of-the-art optimization approaches. These algorithms are mainly classified into two categories: continuous methods and discrete methods. The former include the Newton-Raphson method, gradient descent method, conjugate gradient method, proximal gradient method, coordinate descent method, and genetic algorithm-based methods, while the latter cover the graph cuts method, belief propagation method, tree-reweighted message passing method, linear programming method, maximum margin learning method, simulated annealing method, and iterated conditional modes method. We also discuss the minimal surface method, primal-dual method, and the multi-objective optimization method. In addition, we review several comparative studies that evaluate the performance of different minimization techniques in terms of accuracy, efficiency, or complexity. These optimization techniques are widely used in many medical applications, for example, image segmentation, registration, reconstruction, motion tracking, and compressed sensing. We thus give an overview of those applications as well.
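As a minimal example of the continuous family, plain gradient descent on a toy quadratic energy might look like this (the energy, starting point, and step size are illustrative, not from the survey; real image-analysis energies are far larger and often non-smooth):

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, tol=1e-8, max_iter=10_000):
    """Plain gradient descent: repeatedly step against the gradient
    until successive iterates stop moving."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = x - lr * grad(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# toy energy E(x) = 0.5 * x^T A x - b^T x, minimised where A x = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = gradient_descent(lambda x: A @ x - b, x0=[0.0, 0.0])
```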

  7. Comparison of memory thresholds for planar qudit geometries

    NASA Astrophysics Data System (ADS)

    Marks, Jacob; Jochym-O'Connor, Tomas; Gheorghiu, Vlad

    2017-11-01

    We introduce and analyze a new type of decoding algorithm called general color clustering, based on renormalization group methods, to be used in qudit color codes. The performance of this decoder is analyzed under a generalized bit-flip error model, and is used to obtain the first memory threshold estimates for qudit 6-6-6 color codes. The proposed decoder is compared with similar decoding schemes for qudit surface codes as well as the current leading qubit decoders for both sets of codes. We find that, as with surface codes, clustering performs sub-optimally for qubit color codes, giving a threshold of 5.6% compared to the 8.0% obtained through surface projection decoding methods. However, the threshold rate increases by up to 112% for large qudit dimensions, plateauing around 11.9%. All the analysis is performed using QTop, a new open-source software for simulating and visualizing topological quantum error correcting codes.

  8. Comparison of attrition test methods: ASTM standard fluidized bed vs jet cup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, R.; Goodwin, J.G. Jr.; Jothimurugesan, K.

    2000-05-01

    Attrition resistance is one of the key design parameters for catalysts used in fluidized-bed and slurry-phase reactors. The ASTM fluidized-bed test has been one of the most commonly used attrition resistance evaluation methods; however, it requires the use of 50 g samples--a large amount for catalyst development studies. Recently, a test using the jet cup requiring only 5 g samples has been proposed. In the present study, two series of spray-dried iron catalysts were evaluated using both the ASTM fluidized-bed test and a test based on the jet cup to determine their comparability. It is shown that the two tests give comparable results. This paper, by reporting a comparison of the jet-cup test with the ASTM standard, provides a basis for utilizing the more efficient jet cup with confidence in catalyst attrition studies.

  9. Experimental consideration of the Hansen solubility parameters of as-produced multi-walled carbon nanotubes by inverse gas chromatography.

    PubMed

    Lim, Hyeong Jun; Lee, Kunsil; Cho, Young Shik; Kim, Yern Seung; Kim, Taehoon; Park, Chong Rae

    2014-09-07

    The Hansen solubility parameters (HSPs) of as-produced multi-walled carbon nanotubes (APMWCNTs) were determined by means of the inverse gas chromatography (IGC) technique. Due to the non-homogeneous surfaces of the APMWCNTs arising from defects and impurities, it was necessary to establish adequate working conditions for determining the HSPs of the CNTs. We then obtained the HSPs of the APMWCNTs and compared these results with earlier reports determined using sedimentation and molecular dynamics simulation methods. It was found that determining the HSPs of CNTs by IGC can give an enhanced determination range based on the adsorption thermodynamic parameters, compared to the HSPs determined using sedimentation methods. The HSPs of the APMWCNTs determined here also provided good guidelines for selecting feasible solvents that can improve the dispersion of the APMWCNTs.
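The HSP framework ranks candidate solvents by their distance Ra from the solute in Hansen space, using the standard formula with a factor of 4 on the dispersion term; a short sketch with hypothetical parameter values (the numbers below are invented, not the paper's measured HSPs):

```python
import math

def hansen_distance(hsp1, hsp2):
    """Hansen solubility distance Ra between two materials, each given
    as (delta_d, delta_p, delta_h) in MPa^0.5. The factor of 4 on the
    dispersion term is the standard Hansen convention."""
    (d1, p1, h1), (d2, p2, h2) = hsp1, hsp2
    return math.sqrt(4 * (d1 - d2) ** 2 + (p1 - p2) ** 2 + (h1 - h2) ** 2)

# hypothetical HSPs (MPa^0.5): a CNT-like solid vs two candidate solvents
cnt = (18.0, 8.0, 6.0)
solvent_a = (17.8, 8.5, 7.0)    # close in HSP space -> better dispersion
solvent_b = (15.0, 2.0, 14.0)   # far away in HSP space
ra_a = hansen_distance(cnt, solvent_a)
ra_b = hansen_distance(cnt, solvent_b)
```

Smaller Ra predicts better solvent compatibility, which is the basis of the solvent-selection guidelines the abstract mentions.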

  10. A likelihood ratio test for evolutionary rate shifts and functional divergence among proteins

    PubMed Central

    Knudsen, Bjarne; Miyamoto, Michael M.

    2001-01-01

    Changes in protein function can lead to changes in the selection acting on specific residues. This can often be detected as evolutionary rate changes at the sites in question. A maximum-likelihood method for detecting evolutionary rate shifts at specific protein positions is presented. The method determines significance values of the rate differences to give a sound statistical foundation for the conclusions drawn from the analyses. A statistical test for detecting slowly evolving sites is also described. The methods are applied to a set of Myc proteins for the identification of both conserved sites and those with changing evolutionary rates. Those positions with conserved and changing rates are related to the structures and functions of their proteins. The results are compared with an earlier Bayesian method, thereby highlighting the advantages of the new likelihood ratio tests. PMID:11734650
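The likelihood-ratio logic can be sketched with a toy Poisson model of substitution counts (the model, counts, and branch lengths below are invented for illustration; the paper's actual likelihood is a full phylogenetic model, not this two-clade caricature):

```python
import math

def log_likelihood(rates, counts, times):
    """Poisson log-likelihood of substitution counts given per-group
    rates (count-factorial constants dropped, as they cancel in the
    ratio). Toy stand-in for a real phylogenetic likelihood."""
    return sum(c * math.log(r * t) - r * t
               for r, c, t in zip(rates, counts, times))

# One site: H0 = same rate in both clades, H1 = clade-specific rates.
counts = [12, 3]          # substitutions observed in clade A, clade B
times = [10.0, 10.0]      # total branch length per clade

# MLEs: pooled rate under H0, separate rates under H1
r0 = sum(counts) / sum(times)
ll0 = log_likelihood([r0, r0], counts, times)
ll1 = log_likelihood([c / t for c, t in zip(counts, times)], counts, times)

lrt = 2.0 * (ll1 - ll0)          # asymptotically chi-square, 1 d.f.
CRIT_5PCT_1DF = 3.841            # chi-square 95th percentile, 1 d.f.
rate_shift = lrt > CRIT_5PCT_1DF
```

Referring the doubled log-likelihood difference to a chi-square distribution is what gives the test its significance values.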

  11. Analysing the magnetopause internal structure: new possibilities offered by MMS

    NASA Astrophysics Data System (ADS)

    Belmont, G.; Rezeau, L.; Manuzzo, R.; Aunai, N.; Dargent, J.

    2017-12-01

    We explore the structure of the magnetopause using a crossing observed by the MMS spacecraft on October 16th, 2015. Several methods (MVA, BV, CVA) are first applied to compute the normal to the magnetopause considered as a whole. The different results obtained are not identical, and we show that the whole boundary is not stationary and not planar, so that the basic assumptions of these methods are not well satisfied. We then analyse the internal structure more finely to investigate the departures from planarity. Using the basic mathematical definition of a one-dimensional physical problem, we introduce a new method, called LNA (Local Normal Analysis), for determining the varying normal, and we compare the results so obtained with those coming from the MDD tool developed by [Shi et al., 2005]. That method gives the dimensionality of the magnetic variations from multi-point measurements and allows estimating the direction of the local normal using the magnetic field. On the other hand, LNA is a single-spacecraft method which gives the local normal from the magnetic field and particle data. This study shows that the magnetopause does include approximately one-dimensional sub-structures, but also two- and three-dimensional intervals. It also shows that the dimensionality of the magnetic variations can differ from that of the other fields, so that, at some places, the magnetic field can have a 1D structure although all the plasma variations do not verify the properties of a global one-dimensional problem. Finally, a generalisation and a systematic application of the MDD method to the physical quantities of interest is shown.
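The classical MVA step mentioned above reduces to an eigen-decomposition of the magnetic-field variance matrix: the boundary normal is taken along the minimum-variance eigenvector. A minimal sketch on a synthetic 1-D current sheet (the field model and noise level are assumptions for illustration):

```python
import numpy as np

def mva_normal(B):
    """Minimum Variance Analysis: return the eigenvector of the
    magnetic-field covariance matrix with the smallest eigenvalue
    (variance). B is an (N, 3) time series."""
    B = np.asarray(B, dtype=float)
    M = np.cov(B, rowvar=False)            # 3x3 variance matrix
    eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues ascending
    n = eigvecs[:, 0]                      # minimum-variance direction
    return n / np.linalg.norm(n)

# synthetic 1-D current sheet: B rotates in the x-y plane while Bz stays
# nearly constant, so the recovered normal should lie along z
rng = np.random.default_rng(1)
t = np.linspace(-1.0, 1.0, 200)
bx = np.tanh(3 * t)
B = np.c_[bx, np.sqrt(1 - bx ** 2), np.full_like(t, 0.1)]
B += 0.01 * rng.standard_normal(B.shape)
n = mva_normal(B)
```

The sign of the eigenvector is arbitrary, which is why real analyses quote the normal up to orientation.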

  12. Comparison of thermal and microwave paleointensity estimates in specimens that violate Thellier's laws

    NASA Astrophysics Data System (ADS)

    Grappone, J. M., Jr.; Biggin, A. J.; Barrett, T. J.; Hill, M. J.

    2017-12-01

    Deep in the Earth, thermodynamic behavior drives the geodynamo and creates the Earth's magnetic field. Determining how the strength of the field, its paleointensity (PI), varies with time is vital to our understanding of Earth's evolution. Thellier-style paleointensity experiments assume the presence of non-interacting, single-domain (SD) magnetic particles, which follow Thellier's laws. Most natural rocks, however, contain larger, multi-domain (MD) or interacting single-domain (ISD) particles that often violate these laws and cause experiments to fail. Even for samples that pass reliability criteria designed to minimize the impact of MD or ISD grains, different PI techniques can give systematically different estimates, implying violation of Thellier's laws. Our goal is to identify any disparities in PI results that may be explainable by protocol-specific MD and ISD behavior and to determine optimum methods to maximize accuracy. Volcanic samples from the Hawai'ian SOH1 borehole previously produced method-dependent PI estimates. Previous studies showed consistently lower PI values when using a microwave (MW) system and the perpendicular method than when using the original thermal Thellier-Thellier (OT) technique. However, the data were ambiguous regarding the cause of the discrepancy: the diverging estimates appeared to be either the result of using OT instead of the perpendicular method, or the result of using MW protocols instead of thermal protocols. Comparison experiments were conducted using the thermal perpendicular method and the microwave OT technique to bridge the gap. Preliminary data generally show that the perpendicular method gives lower estimates than OT for comparable Hlab values. MW estimates are also generally lower than thermal estimates using the same protocol.

  13. Thermal structure of the Martian atmosphere retrieved from the IR- spectrometry in the 15 mkm CO2 band

    NASA Astrophysics Data System (ADS)

    Zasova, L.; Formisano, V.; Grassi, D.; Igantiev, N.; Moroz, V.

    Thermal IR spectrometry is one of the methods for investigating the Martian atmosphere below 55 km. The temperature profiles retrieved from the 15 μm CO2 band may be used for the MIRA database. This approach gives a vertical resolution of several kilometers and an accuracy of several kelvins. The aerosol abundance, which influences the temperature profiles, is obtained from the continuum of the same spectrum and is taken into account in the temperature retrieval procedure in a self-consistent way. Although this method has limited vertical resolution, it possesses some advantages. For example, the radio occultation method gives temperature profiles with higher vertical resolution, but the radio observations are sparse in space and local time. Direct measurements, which give the most accurate results, yield temperature profiles only at selected points (landing sites). Thermal IR spectrometry is in fact the only method that allows monitoring of temperature profiles with good coverage in both space and local time. The first measurements of this kind were made by IRIS on board Mariner 9. This spectrometer had a rather high spectral resolution (2.4 cm-1). Temperature profiles as a function of local time were retrieved for different latitudes and seasons, including dust-storm conditions, the North polar night, and the Tharsis volcanoes. The retrieved temperature profiles have been compared with temperature profiles for the same conditions taken from the Climate Data Base (European GCM). The Planetary Fourier Spectrometer onboard Mars Express (planned to be launched in 2003) has a spectral range of 1.2-45 μm and a spectral resolution of 1.5 cm-1. Temperature retrieval is one of the main scientific goals of the experiment. It opens the possibility of obtaining a series of temperature profiles for different conditions, which can later be used in producing MIRA.

  14. A comparative study of three pillars system and banking methods in accounting long-term purposes of retiree in Indonesian saving account

    NASA Astrophysics Data System (ADS)

    Hasbullah, E. S.; Suyudi, M.; Halim, N. A.; Sukono; Gustaf, F.; Putra, A. S.

    2018-03-01

    Human productivity is the main capital in economic activity, and the contribution of human resources to the economy is bounded by a limited productive age: once economic agents reach that limit, they enter retirement. The preparation of an ‘old-age’ fund therefore becomes crucial and should be initiated before retirement to keep retirees out of destitution. Two of the simplest and most familiar methods of preparing a pension fund are the Three Pillars system and the banking method. Here we simulate both methods on synthetic investment-program data and analyse the results. The results suggest that the Three Pillars system is the more effective prospect for long-term schemes, whereas the banking method is better suited to short-term plans.
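The long- versus short-horizon contrast can be sketched with a simple compound-interest accumulation of monthly contributions; the rates and contribution amount below are entirely hypothetical stand-ins for the paper's synthetic data, not its actual schemes:

```python
def accumulate(monthly, annual_rate, years):
    """Future value of a fixed monthly contribution at a constant
    annual rate, compounded monthly. Purely illustrative."""
    r = annual_rate / 12.0
    balance = 0.0
    for _ in range(int(years * 12)):
        balance = balance * (1 + r) + monthly   # interest, then deposit
    return balance

# hypothetical figures: a higher-return pension scheme vs a bank account
pension_30y = accumulate(100.0, 0.07, 30)
bank_30y = accumulate(100.0, 0.03, 30)
bank_5y = accumulate(100.0, 0.03, 5)
```

Over 30 years the higher-return scheme pulls far ahead of the bank account, while over 5 years the compounding advantage has little time to act.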

  15. Constrained variation in Jastrow method at high density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owen, J.C.; Bishop, R.F.; Irvine, J.M.

    1976-11-01

    A method is derived for constraining the correlation function in a Jastrow variational calculation which permits the truncation of the cluster expansion after two-body terms, and which permits exact minimization of the two-body cluster by functional variation. This method is compared with one previously proposed by Pandharipande and is found to be superior both theoretically and practically. The method is tested both on liquid ³He, by using the Lennard-Jones potential, and on the model system of neutrons treated as Boltzmann particles (''homework'' problem). Good agreement is found both with experiment and with other calculations involving the explicit evaluation of higher-order terms in the cluster expansion. The method is then applied to a more realistic model of a neutron gas up to a density of 4 neutrons per F³, and is found to give ground-state energies considerably lower than those of Pandharipande. (AIP)

  16. Correlative Fluorescence and Electron Microscopy

    PubMed Central

    Schirra, Randall T.; Zhang, Peijun

    2014-01-01

    Correlative fluorescence and electron microscopy (CFEM) is a multimodal technique that combines dynamic and localization information from fluorescence methods with ultrastructural data from electron microscopy, to give new information about how cellular components change relative to the spatiotemporal dynamics within their environment. In this review, we will discuss some of the basic techniques and tools of the trade for utilizing this attractive research method, which is becoming a very powerful tool for biology labs. The information obtained from correlative methods has proven to be invaluable in creating consensus between the two types of microscopy, extending the capability of each, and cutting the time and expense associated with using each method separately for comparative analysis. The realization of the advantages of these methods in cell biology has led to rapid improvement in the protocols and has ushered in a new generation of instruments to reach the next level of correlation – integration. PMID:25271959

  17. Research on Finite Element Model Generating Method of General Gear Based on Parametric Modelling

    NASA Astrophysics Data System (ADS)

    Lei, Yulong; Yan, Bo; Fu, Yao; Chen, Wei; Hou, Liguo

    2017-06-01

    To address the low efficiency and poor mesh quality of gear meshing in current mainstream finite element software, this paper establishes a universal three-dimensional gear model, explores rules for element and node arrangement, and proposes a parametric method for generating finite element models of general gears. A Visual Basic program is used to perform the finite element meshing, assign material properties, and set boundary/load conditions and other pre-processing steps. Dynamic meshing analysis of the gears is carried out with the proposed method and compared with calculated values to verify the correctness of the method. The method greatly reduces the workload of gear finite element pre-processing, improves the quality of the gear mesh, and provides a new approach to FEM pre-processing.

  18. Determination of antioxidant power of red and white wines by a new electrochemical method and its correlation with polyphenolic content.

    PubMed

    Alonso, Angeles M; Domínguez, Cristina; Guillén, Dominico A; Barroso, Carmelo G

    2002-05-22

    A new method for measuring the antioxidant power of wine has been developed based on the accelerated electrochemical oxidation of 2,2'-azino-bis(3-ethylbenzthiazoline-6-sulfonic acid) (ABTS). The calibration (R = 0.9922) and repeatability study (RSD = 7%) have provided good statistical parameters. The method is easy and quick to apply and gives reliable results, requiring only the monitoring of time and absorbance. It has been applied to various red and white wines of different origins. The results have been compared with those obtained by the total antioxidant status (TAS) method. Both methods reveal that the more antioxidant wines are those with higher polyphenolic content. From the HPLC study of the polyphenolic content of the same samples, it is confirmed that there is a positive correlation between the resveratrol content of a wine and its antioxidant power.

  19. Wall relaxation in growing stems: comparison of four species and assessment of measurement techniques

    NASA Technical Reports Server (NTRS)

    Cosgrove, D. J.

    1987-01-01

    This study was carried out to develop improved methods for measuring in-vivo stress relaxation of growing tissues and to compare relaxation in the stems of four different species. When water uptake by growing tissue is prevented, in-vivo stress relaxation occurs because continued wall loosening reduces wall stress and cell turgor pressure. With this procedure one may measure the yield threshold for growth (Y), the turgor pressure in excess of the yield threshold (P-Y), and the physiological wall extensibility (phi). Three relaxation techniques proved useful: "turgor-relaxation", "balance-pressure" and "pressure-block". In the turgor-relaxation method, water is withheld from growing tissue and the reduction in turgor is measured directly with the pressure probe. This technique gives absolute values for P and Y, but requires tissue excision. In the balance-pressure technique, the excised growing region is sealed in a pressure chamber, and the subsequent reduction in water potential is measured as the applied pressure needed to return xylem sap to the cut surface. This method is simple, but only measures (P-Y), not the individual values of P and Y. In the pressure-block technique, the growing tissue is sealed into a pressure chamber, growth is monitored continuously, and just sufficient pressure is applied to the chamber to block growth. The method gives high-resolution kinetics of relaxation and does not require tissue excision, but only measures (P-Y). The three methods gave similar results when applied to the growing stems of pea (Pisum sativum L.), cucumber (Cucumis sativus L.), soybean (Glycine max (L.) Merr.) and zucchini (Cucurbita pepo L.) seedlings. Values for (P-Y) averaged between 1.4 and 2.7 bar, depending on species. Yield thresholds averaged between 1.3 and 3.0 bar. Compared with the other methods, relaxation by pressure-block was faster and exhibited dynamic changes in wall-yielding properties.
The two pressure-chamber methods were also used to measure the internal water-potential gradient (between the xylem and the epidermis) which drives water uptake for growth. For the four species it was small, between 0.3 and 0.6 bar, and so did not limit growth substantially.
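The quantities phi, P, and Y enter through the Lockhart-type growth relation r = phi(P - Y) for turgor P above the yield threshold. A sketch of recovering both parameters from hypothetical turgor/growth-rate pairs by a straight-line fit (the data values are invented for illustration):

```python
import numpy as np

# Lockhart-type growth equation: relative growth rate r = phi * (P - Y)
# above the yield threshold Y. Fit phi and Y from hypothetical
# (turgor, growth-rate) pairs via linear regression.
P = np.array([2.0, 3.0, 4.0, 5.0, 6.0])        # turgor, bar
r = np.array([0.0, 0.04, 0.08, 0.12, 0.16])    # relative growth rate, h^-1

slope, intercept = np.polyfit(P, r, 1)
phi = slope                   # wall extensibility: slope of r vs P
Y = -intercept / slope        # yield threshold: turgor where r = 0
```

The fitted intercept on the turgor axis is exactly the yield threshold the relaxation techniques are designed to measure.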

  20. Robert Frost as Teacher. A Poet's Interpretation of the Teacher's Task.

    ERIC Educational Resources Information Center

    Larson, Mildred

    1979-01-01

    Robert Frost's method of teaching is explained. He saw all education as self-education, not something a teacher can give a student. Frost believed freedom to be a necessity and his method gives the student much freedom while also placing a heavy burden of responsibility on him. (Article originally published in 1951.) (AF)

  1. Cost Analysis of the Dutch Obstetric System: low-risk nulliparous women preferring home or short-stay hospital birth - a prospective non-randomised controlled study

    PubMed Central

    2009-01-01

    Background In the Netherlands, pregnant women without medical complications can decide where they want to give birth, at home or in a short-stay hospital setting with a midwife. However, a decrease in the home birth rate during the last decennium may have raised the societal costs of giving birth. The objective of this study is to compare the societal costs of home births with those of births in a short-stay hospital setting. Methods This study is a cost analysis based on the findings of a multicenter prospective non-randomised study comparing two groups of nulliparous women with different preferences for where to give birth, at home or in a short-stay hospital setting. Data were collected using cost diaries, questionnaires and birth registration forms. Analysis of the data is divided into a base case analysis and a sensitivity analysis. Results In the group of home births, the total societal costs associated with giving birth at home were €3,695 (per birth), compared with €3,950 per birth in the group for short-stay hospital births. Statistically significant differences between both groups were found regarding the following cost categories: 'Cost of contacts with health care professionals during delivery' (€138.38 vs. €87.94, -50 (2.5-97.5 percentile range (PR)-76;-25), p < 0.05), 'cost of maternity care at home' (€1,551.69 vs. €1,240.69, -311 (PR -485; -150), p < 0.05) and 'cost of hospitalisation mother' (€707.77 vs. €959.06, 251 (PR 69;433), p < 0.05). The highest costs are for hospitalisation (41% of all costs). Because there is a relatively high amount of (partly) missing data, a sensitivity analysis was performed, in which all missing data were included in the analysis by means of general mean substitution. In the sensitivity analysis, the total costs associated with home birth are €4,364 per birth, and €4,541 per birth for short-stay hospital births. 
Conclusion The total costs associated with pregnancy, delivery, and postpartum care are comparable for home birth and short-stay hospital birth. The most important differences in costs between the home birth group and the short-stay hospital birth group are associated with maternity care assistance, hospitalisation, and travelling costs. PMID:19925673

  2. Net energy content of canola meal fed to growing pigs and effect of experimental methodology on energy values.

    PubMed

    Kim, J W; Koo, B; Nyachoti, C M

    2018-04-14

    An experiment was conducted to determine the digestible energy (DE), metabolizable energy (ME), and net energy (NE) contents of canola meal (CM) and to investigate the effects of basal diet [corn diet vs. corn-soybean meal (SBM) diet] and methodology (difference method vs. regression method) on energy values of CM. Thirty-six growing barrows (20.8 ± 1.0 kg initial body weight [BW]) were individually housed in metabolism crates and randomly allotted to one of six dietary treatments to give six replicates per treatment. The six experimental diets included a corn diet, a corn-SBM diet, a corn diet with 15 or 30% of CM, and a corn-SBM diet with 15 or 30% of CM. The DE, ME, and NE of CM were determined using the corn diet or the corn-SBM diet as a basal diet. In each basal diet, two additional diets containing 15 or 30% of CM were formulated to compare the determined energy values by the difference method and estimated energy values from the regression method. Feeding level was set at 550 kcal ME/kg BW0.6 per day. Pigs were fed experimental diets for 16 d including 10 d for adaptation and 6 d for total collection of feces and urine. Pigs were then moved into indirect calorimetry chambers to determine 24 h heat production (HP) and 12 h fasting HP. The DE, ME, and NE of CM determined by the difference method were within the 95% confidence intervals estimated for the DE, ME, and NE of CM by the regression method regardless of the basal diets used, which indicates that the difference and regression methods give equivalent DE, ME, and NE of CM. However, when the goodness of fit for the linear model was compared, the r2 of the regression analysis from the corn-SBM diet (0.78) was relatively greater than that from corn diet (0.40). 
    The estimated NE of CM from the prediction equations generated by either the corn diet or the corn-SBM diet were 2,096 kcal/kg and 1,960 kcal/kg (as-fed basis), respectively, whereas the values determined by the difference method were 2,233 kcal/kg and 2,106 kcal/kg (as-fed basis), respectively. In conclusion, the NE of CM determined in the current study was, on average, 2,099 kcal/kg (as-fed basis). The difference and regression methods do not give different NE values of CM fed to growing pigs. Although the NE values of CM determined using either the corn diet or the corn-SBM diet were not different, the greater r2 of the regression analysis from the corn-SBM diet than from the corn diet suggests that the corn-SBM diet is a more appropriate basal diet for NE determination of ingredients.
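The difference and regression methods compared above can be sketched numerically: the difference method backs out the ingredient's energy from one mixed diet, while the regression method fits diet energy against inclusion level. All diet energy values below are hypothetical and assume simple additivity:

```python
import numpy as np

DE_BASAL = 3900.0           # hypothetical DE of the basal diet, kcal/kg
DE_CM_TRUE = 3100.0         # hypothetical true DE of canola meal

# measured DE of diets at 0, 15 and 30% canola meal inclusion
# (additivity assumed, plus a little invented measurement noise)
incl = np.array([0.0, 0.15, 0.30])
de_diet = (1 - incl) * DE_BASAL + incl * DE_CM_TRUE \
          + np.array([5.0, -8.0, 4.0])

# difference method: back out DE of CM from the 30% diet alone
x = incl[2]
de_cm_diff = (de_diet[2] - (1 - x) * DE_BASAL) / x

# regression method: slope of diet DE vs inclusion is (DE_cm - DE_basal),
# so the diet DE extrapolated to 100% inclusion estimates DE of CM
slope, intercept = np.polyfit(incl, de_diet, 1)
de_cm_reg = intercept + slope
```

Both routes recover roughly the same ingredient value, mirroring the study's finding that the two methods give equivalent estimates.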

  3. Exploring Women’s Personal Experiences of Giving Birth in Gonabad City: A Qualitative Study

    PubMed Central

    Askari, Fariba; Atarodi, Alireza; Torabi, Shirin; Moshki, Mahdi

    2014-01-01

    Background: Women’s health is an important concern for society. The aim of this qualitative study, which used a phenomenological approach, was to explain the personal experiences of women in Gonabad city who had positive experiences of giving birth, in order to establish quality care and identify the related factors of midwifery care for this physiological phenomenon. Methods: The participants were 21 primiparous women who had a normal, uncomplicated birth in the hospital of Gonabad University of Medical Sciences. Following a purposeful sampling approach, in-depth interviews were continued until data saturation was reached. The data were collected through open and semi-structured interactional in-depth interviews with all the participants. All the interviews were taped, transcribed, and then analyzed through qualitative content analysis to identify concepts and themes. Findings: Several categories emerged. A quiet and safe environment was the most urgent need of most women giving birth. Unnecessary routine interventions performed on all women regardless of their needs, such as absolute rest, intravenous line placement, frequent vaginal examinations, fasting, and early amniotomy, were identified as practices that should be avoided. All the women wanted to take an active part in their birth, because they believed it could affect its course. Conclusion: We hope that giving birth will be a pleasant and enjoyable experience for all mothers. PMID:25168980

  4. Classification of large-sized hyperspectral imagery using fast machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Xia, Junshi; Yokoya, Naoto; Iwasaki, Akira

    2017-07-01

    We present a framework of fast machine learning algorithms for the classification of large-sized hyperspectral images, from a theoretical to a practical viewpoint. In particular, we assess the performance of random forest (RF), rotation forest (RoF), and extreme learning machine (ELM), as well as ensembles of RF and ELM. These classifiers are applied to two large-sized hyperspectral images and compared to support vector machines. To give a quantitative analysis, we focus on comparing these methods when working with high input dimensions and a limited/sufficient training set. Moreover, other important issues such as computational cost and robustness against noise are also discussed.
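Of the classifiers named above, the extreme learning machine is the simplest to sketch: a single hidden layer with random, untrained weights, and output weights fit by least squares. A minimal illustration on synthetic two-class data (not the paper's hyperspectral imagery):

```python
import numpy as np

# Minimal extreme learning machine (ELM): random input weights and
# biases, tanh hidden activations, least-squares output weights.
rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50, rng=rng):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares readout
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Two well-separated classes in 5 dimensions.
X0 = rng.normal(loc=-2.0, size=(100, 5))
X1 = rng.normal(loc=+2.0, size=(100, 5))
X = np.vstack([X0, X1])
y = np.array([0.0] * 100 + [1.0] * 100)

W, b, beta = elm_fit(X, y)
pred = (elm_predict(X, W, b, beta) > 0.5).astype(float)
accuracy = (pred == y).mean()
```

Because only the output layer is trained, and by a closed-form solve, ELM training is fast, which is the property the paper exploits for large images.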

  5. RNA-Puzzles: A CASP-like evaluation of RNA three-dimensional structure prediction

    PubMed Central

    Cruz, José Almeida; Blanchet, Marc-Frédérick; Boniecki, Michal; Bujnicki, Janusz M.; Chen, Shi-Jie; Cao, Song; Das, Rhiju; Ding, Feng; Dokholyan, Nikolay V.; Flores, Samuel Coulbourn; Huang, Lili; Lavender, Christopher A.; Lisi, Véronique; Major, François; Mikolajczak, Katarzyna; Patel, Dinshaw J.; Philips, Anna; Puton, Tomasz; Santalucia, John; Sijenyi, Fredrick; Hermann, Thomas; Rother, Kristian; Rother, Magdalena; Serganov, Alexander; Skorupski, Marcin; Soltysinski, Tomasz; Sripakdeevong, Parin; Tuszynska, Irina; Weeks, Kevin M.; Waldsich, Christina; Wildauer, Michael; Leontis, Neocles B.; Westhof, Eric

    2012-01-01

    We report the results of a first, collective, blind experiment in RNA three-dimensional (3D) structure prediction, encompassing three prediction puzzles. The goals are to assess the leading edge of RNA structure prediction techniques; compare existing methods and tools; and evaluate their relative strengths, weaknesses, and limitations in terms of sequence length and structural complexity. The results should give potential users insight into the suitability of available methods for different applications and facilitate efforts in the RNA structure prediction community in ongoing efforts to improve prediction tools. We also report the creation of an automated evaluation pipeline to facilitate the analysis of future RNA structure prediction exercises. PMID:22361291

  6. Analysis of Discontinuities in a Rectangular Waveguide Using Dyadic Green's Function Approach in Conjunction with Method of Moments

    NASA Technical Reports Server (NTRS)

    Deshpande, M. D.

    1997-01-01

    The dyadic Green's function for an electric current source placed in a rectangular waveguide is derived using a magnetic vector potential approach. A complete solution for the electric and magnetic fields, including at the source location, is obtained by simple differentiation of the vector potential around the source location. This simple differentiation approach, which yields electric and magnetic fields identical to an earlier derivation, was overlooked by earlier workers in deriving the dyadic Green's function, particularly around the source location. Numerical results obtained using the Green's function approach are compared with results obtained using the Finite Element Method (FEM).
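For a time-harmonic source, the differentiation referred to above is the standard textbook relation between the fields and the magnetic vector potential (general relation, not reproduced from the paper itself):

```latex
\mathbf{H} = \frac{1}{\mu}\,\nabla\times\mathbf{A},
\qquad
\mathbf{E} = -j\omega\,\mathbf{A}
  + \frac{1}{j\omega\mu\varepsilon}\,\nabla\left(\nabla\cdot\mathbf{A}\right)
```

The second term of \(\mathbf{E}\), involving \(\nabla(\nabla\cdot\mathbf{A})\), is the one that requires care in the immediate vicinity of the source.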

  7. Numerical restoration of surface vortices in Nb films measured by a scanning SQUID microscope

    NASA Astrophysics Data System (ADS)

    Ito, Atsuki; Thanh Huy, Ho; Dang, Vu The; Miyoshi, Hiroki; Hayashi, Masahiko; Ishida, Takekazu

    2017-07-01

    In the present work, we investigated vortex profiles appearing on a pure Nb film (500 nm in thickness, 10 mm x 10 mm) using a scanning SQUID microscope. We found that the local magnetic distribution thus observed is broadened compared to the true vortex profile in the superconducting film. We therefore applied a numerical method to improve the spatial resolution of the scanning SQUID microscope. The method is based on the inverse Biot-Savart law and the Fourier transformation to recover a real-space image. We found that the numerical analysis gives a narrower vortex profile than the raw profile observed by the scanning microscope.
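The restoration idea can be sketched as Fourier-space deconvolution: model the measurement as the true profile convolved with the instrument's pickup kernel, and divide it out with regularization. A 1-D toy version with an assumed Gaussian kernel (the actual SQUID kernel follows from the inverse Biot-Savart law and is not reproduced here):

```python
import numpy as np

# Regularized Fourier deconvolution: sharpen a "measured" profile that
# is the true vortex profile blurred by an (assumed) Gaussian kernel.
n = 256
x = np.arange(n) - n // 2

true_profile = np.exp(-(x / 3.0) ** 2)          # narrow "vortex"
kernel = np.exp(-(x / 12.0) ** 2)
kernel /= kernel.sum()                          # unit-area kernel

# Blurred measurement: circular convolution via FFT.
K = np.fft.fft(np.fft.ifftshift(kernel))
measured = np.real(np.fft.ifft(np.fft.fft(true_profile) * K))

# Wiener-like inverse filter with a constant regularization term eps.
eps = 1e-3
restored = np.real(np.fft.ifft(np.fft.fft(measured) *
                               np.conj(K) / (np.abs(K) ** 2 + eps)))

def fwhm(y):
    """Width in samples above half of the peak value."""
    return np.count_nonzero(y > y.max() / 2.0)

# fwhm(restored) is much smaller than fwhm(measured): the recovered
# vortex is narrower than the raw (blurred) observation.
```

The regularization term `eps` plays the role of noise suppression; without it, dividing by the kernel spectrum amplifies high-frequency noise.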

  8. Comparison of flume and towing methods for verifying the calibration of a suspended-sediment sampler

    USGS Publications Warehouse

    Beverage, J.P.; Futrell, J.C.

    1986-01-01

    Suspended-sediment samplers must sample isokinetically (at stream velocity) in order to collect representative water samples from rivers. Each sampler sold by the Federal Interagency Sedimentation Project or by the U.S. Geological Survey Hydrologic Instrumentation Facility has been adjusted to sample isokinetically and tested in a flume to verify the calibration. The test program for a modified U.S. P-61 sampler provided an opportunity to compare flume and towing tank tests. Although the two tests yielded statistically distinct results, the difference between them was quite small. The conclusion is that verifying the calibration of any suspended-sediment sampler by either the flume or towing method should give acceptable results.

  9. Quantitation of TGF-beta1 mRNA in porcine mesangial cells by comparative kinetic RT/PCR: comparison with ribonuclease protection assay and in situ hybridization.

    PubMed

    Ceol, M; Forino, M; Gambaro, G; Sauer, U; Schleicher, E D; D'Angelo, A; Anglani, F

    2001-01-01

    Gene expression can be examined with different techniques including ribonuclease protection assay (RPA), in situ hybridisation (ISH), and quantitative reverse transcription-polymerase chain reaction (RT/PCR). These methods differ considerably in their sensitivity and precision in detecting and quantifying low-abundance mRNA. Although there is evidence that RT/PCR can be performed in a quantitative manner, the quantitative capacity of this method is generally underestimated. To demonstrate that the comparative kinetic RT/PCR strategy, which uses a housekeeping gene as internal standard, is a quantitative method to detect significant differences in mRNA levels between different samples, the inhibitory effect of heparin on phorbol 12-myristate 13-acetate (PMA)-induced TGF-beta1 mRNA expression was evaluated by RT/PCR and by RPA, the standard method of mRNA quantification, and the results were compared. The reproducibility of RT/PCR amplification was assessed by comparing the quantities of G3PDH and TGF-beta1 PCR products, generated during the exponential phases, estimated from two different RT/PCR runs (G3PDH, r = 0.968, P < 0.0001; TGF-beta1, r = 0.966, P < 0.0001). The quantitative capacity of comparative kinetic RT/PCR was demonstrated by comparing the results obtained from RPA and RT/PCR using linear regression analysis. Starting from the same RNA extraction, but using only 1% of the RNA for RT/PCR compared to RPA, a significant correlation was observed (r = 0.984, P = 0.0004). Moreover, morphometric analysis of the ISH signal was applied for the semi-quantitative evaluation of the expression and localisation of TGF-beta1 mRNA in the entire cell population. Our results demonstrate the close agreement of the RT/PCR and RPA methods in giving quantitative information on mRNA expression and indicate that comparative kinetic RT/PCR can be adopted as a reliable quantitative method of mRNA analysis. Copyright 2001 Wiley-Liss, Inc.
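The agreement statistics quoted above (an r value plus a fitted line) come from standard linear regression of one method's measurements on the other's. A minimal sketch with illustrative paired values (not the study's data):

```python
import numpy as np

# Agreement between two quantification methods: Pearson correlation and
# least-squares line of one method's values against the other's.
rt_pcr = np.array([1.0, 2.1, 2.9, 4.2, 5.0, 6.1])   # arbitrary units
rpa    = np.array([1.1, 2.0, 3.1, 4.0, 5.2, 5.9])

r = np.corrcoef(rt_pcr, rpa)[0, 1]        # Pearson correlation
slope, intercept = np.polyfit(rt_pcr, rpa, 1)
# r near 1 with slope near 1 indicates the two methods track each other
# proportionally, which is the sense of "close agreement" in the study.
```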

  10. GenomeFingerprinter: the genome fingerprint and the universal genome fingerprint analysis for systematic comparative genomics.

    PubMed

    Ai, Yuncan; Ai, Hannan; Meng, Fanmei; Zhao, Lei

    2013-01-01

    No attention has previously been paid to comparing a set of genome sequences across genetic components and biological categories with wide divergence over a large size range. We define this task as systematic comparative genomics and aim to develop the corresponding methodology. First, we create a method, GenomeFingerprinter, to unambiguously produce a set of three-dimensional coordinates from a sequence, followed by one three-dimensional plot and six two-dimensional trajectory projections, to illustrate the genome fingerprint of a given genome sequence. Second, we develop a set of concepts and tools, and thereby establish a method called the universal genome fingerprint analysis (UGFA). In particular, we define the total genetic component configuration (TGCC) (including chromosome, plasmid, and phage) for describing a strain as a systematic unit, the universal genome fingerprint map (UGFM) of TGCC for differentiating strains within a universal system, and the systematic comparative genomics (SCG) for comparing a set of genomes across genetic components and biological categories. Third, we construct a method of quantitative analysis to compare two genomes using the outcome dataset of genome fingerprint analysis. Specifically, we define the geometric center and its geometric mean for a given genome fingerprint map, followed by the Euclidean distance, the differentiate rate, and the weighted differentiate rate to quantitatively describe the difference between two genomes under comparison. Moreover, we demonstrate applications through case studies on various genome sequences, giving insights into critical issues in microbial genomics and taxonomy. We have created a method, GenomeFingerprinter, for rapidly computing, geometrically visualizing, and intuitively comparing a set of genomes at the genome fingerprint level, and have thereby established the universal genome fingerprint analysis, together with a method for quantitative analysis of its outcome dataset. 
Together these establish the methodology of systematic comparative genomics based on genome fingerprint analysis.
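The quantitative step, comparing two fingerprints through their geometric centers, can be sketched as follows. The coordinate sets below are arbitrary stand-ins (the paper's sequence-to-coordinate mapping is not reproduced here), and the normalization used for the "differentiate rate" is an assumption for illustration:

```python
import numpy as np

# Compare two 3-D genome fingerprints via their geometric centers.
rng = np.random.default_rng(1)
fp_a = rng.normal(size=(1000, 3))            # fingerprint of genome A
fp_b = fp_a + np.array([0.5, 0.0, 0.0])      # genome B: shifted copy

center_a = fp_a.mean(axis=0)                 # geometric center of A
center_b = fp_b.mean(axis=0)                 # geometric center of B

euclidean = np.linalg.norm(center_a - center_b)   # 0.5 by construction

# One plausible "differentiate rate": center distance normalized by the
# mean spread of points around their own center (an assumed definition).
scale = np.linalg.norm(fp_a - center_a, axis=1).mean()
diff_rate = euclidean / scale
```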

  11. Measurement of consumer preference for treatments used to induce labour: a willingness-to-pay approach.

    PubMed

    Taylor, Susan J.; Armour, Carol L.

    2000-09-01

    AIM: The purpose of the study was to assess the acceptability to consumers of two methods of induction of labour using a willingness-to-pay (WTP) approach. The methods compared were amniotomy plus oxytocin and prostaglandin E2 vaginal gel, followed by oxytocin if necessary. METHODS: A description of each method was presented, in questionnaire format, to pregnant women attending a public hospital ante-natal clinic. Women were asked to choose one of the two treatments, then give a valuation in dollar terms for both their preferred treatment and the alternative. RESULTS: It was found that 73.7% of patients preferred gel. The mean maximum WTP for amniotomy plus oxytocin was Aus$133 while that for gel was Aus$178 (P=0.0001). Those who chose amniotomy plus oxytocin were WTP 90% more for their preferred treatment compared with the alternative (Aus$180 vs. Aus$95). Similarly, those who preferred gel were WTP 90% more for their preferred treatment compared with the alternative (Aus$222 vs. Aus$119). CONCLUSION: Consumers were able to assess drug information provided on the two therapies, make an informed choice, and value that choice. Information obtained in this way, combined with information on costs, could be used in policy decision-making.

  12. Measurement of consumer preference for treatments used to induce labour: a willingness‐to‐pay approach

    PubMed Central

    Taylor, Susan J.; Armour, Carol L.

    2001-01-01

    Aim The purpose of the study was to assess the acceptability to consumers of two methods of induction of labour using a willingness-to-pay (WTP) approach. The methods compared were amniotomy plus oxytocin and prostaglandin E2 vaginal gel, followed by oxytocin if necessary. Methods A description of each method was presented, in questionnaire format, to pregnant women attending a public hospital ante-natal clinic. Women were asked to choose one of the two treatments, then give a valuation in dollar terms for both their preferred treatment and the alternative. Results It was found that 73.7% of patients preferred gel. The mean maximum WTP for amniotomy plus oxytocin was Aus$133 while that for gel was Aus$178 (P=0.0001). Those who chose amniotomy plus oxytocin were WTP 90% more for their preferred treatment compared with the alternative (Aus$180 vs. Aus$95). Similarly, those who preferred gel were WTP 90% more for their preferred treatment compared with the alternative (Aus$222 vs. Aus$119). Conclusion Consumers were able to assess drug information provided on the two therapies, make an informed choice, and value that choice. Information obtained in this way, combined with information on costs, could be used in policy decision-making. PMID:11281930

  13. Mortality risk prediction in burn injury: Comparison of logistic regression with machine learning approaches.

    PubMed

    Stylianou, Neophytos; Akbarov, Artur; Kontopantelis, Evangelos; Buchan, Iain; Dunn, Ken W

    2015-08-01

    Predicting mortality from burn injury has traditionally employed logistic regression models. Alternative machine learning methods have been introduced in some areas of clinical prediction as the necessary software and computational facilities have become accessible. Here we compare logistic regression and machine learning predictions of mortality from burn. An established logistic mortality model was compared to machine learning methods (artificial neural network, support vector machine, random forests and naïve Bayes) using a population-based (England & Wales) case-cohort registry. Predictive evaluation used: area under the receiver operating characteristic curve; sensitivity; specificity; positive predictive value and Youden's index. All methods had comparable discriminatory abilities, similar sensitivities, specificities and positive predictive values. Although some machine learning methods performed marginally better than logistic regression, the differences were seldom statistically significant and were clinically insubstantial. Random forests were marginally better for high positive predictive value and reasonable sensitivity. Neural networks yielded slightly better prediction overall. Logistic regression gives an optimal mix of performance and interpretability. The established logistic regression model of burn mortality performs well against more complex alternatives. Clinical prediction with a small set of strong, stable, independent predictors is unlikely to gain much from machine learning outside specialist research contexts. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.
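Most of the evaluation metrics listed above derive directly from the 2x2 confusion matrix. A minimal sketch with made-up counts (not the registry data):

```python
# Sensitivity, specificity, positive predictive value, and Youden's
# index from confusion-matrix counts (tp/fp/fn/tn are illustrative).

def burn_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    ppv = tp / (tp + fp)                  # positive predictive value
    youden = sensitivity + specificity - 1
    return sensitivity, specificity, ppv, youden

sens, spec, ppv, youden = burn_metrics(tp=80, fp=20, fn=10, tn=190)
```

Youden's index summarizes sensitivity and specificity in a single number between 0 (uninformative) and 1 (perfect).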

  14. Report on the Activities Associated with FUSE Cycle 3 GI Program C153 While the PI Was at Johns Hopkins University

    NASA Technical Reports Server (NTRS)

    Dreyfus, Barbara

    2003-01-01

    The goal of this program is to determine the intensity of O VI resonance line photons (1032, 1038 A) originating in the Galactic halo. This is being done by measuring the intensity along an unobscured line of sight and subtracting the local intensity from it. Two members of the team, Robin Shelton and Shauna Sallmen, have independently measured the O VI intensity on the unobscured line of sight. Our methods differ in many ways and we are making the extra effort to understand how systematic variations in method are leading to different measurements. We think that this is worthwhile because it will give us a better understanding of how to compare already published results obtained with these various methods.

  15. Directly calculated electrical conductivity of hot dense hydrogen from molecular dynamics simulation beyond Kubo-Greenwood formula

    NASA Astrophysics Data System (ADS)

    Ma, Qian; Kang, Dongdong; Zhao, Zengxiu; Dai, Jiayu

    2018-01-01

    Electrical conductivity of hot dense hydrogen is directly calculated by molecular dynamics simulation with a reduced electron force field method, in which the electrons are represented as Gaussian wave packets with fixed sizes. Here, the temperature is higher than the electron Fermi temperature (T > 300 eV, ρ = 40 g/cc). The present method can avoid the Coulomb catastrophe and give the limit of electrical conductivity based on the Coulomb interaction. We investigate the effect of ion-electron coupled movements, which is lost in static methods such as the density functional theory based Kubo-Greenwood framework. It is found that the ionic dynamics, which contributes to the dynamical electrical microfield and electron-ion collisions, reduces the conductivity significantly compared with fixed-ion-configuration calculations.

  16. Calculation of far-field scattering from nonspherical particles using a geometrical optics approach

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.

    1991-01-01

    A numerical method was developed using geometrical optics to predict far-field optical scattering from particles that are symmetric about the optic axis. The diffractive component of scattering is calculated and combined with the reflective and refractive components to give the total scattering pattern. The phase terms of the scattered light are calculated as well. Verification of the method was achieved by assuming a spherical particle and comparing the results to Mie scattering theory. Agreement with the Mie theory was excellent in the forward-scattering direction. However, small-amplitude oscillations near the rainbow regions were not observed using the numerical method. Numerical data from spheroidal particles and hemispherical particles are also presented. The use of hemispherical particles as a calibration standard for intensity-type optical particle-sizing instruments is discussed.

  17. Fault Tolerant State Machines

    NASA Technical Reports Server (NTRS)

    Burke, Gary R.; Taft, Stephanie

    2004-01-01

    State machines are commonly used to control sequential logic in FPGAs and ASICs. An errant state machine can cause considerable damage to the device it is controlling. For example, in space applications the FPGA might be controlling pyros, which when fired at the wrong time will cause a mission failure. Even a well designed state machine can be subject to random errors as a result of SEUs from the radiation environment in space. There are various ways to encode the states of a state machine, and the type of encoding makes a large difference in the susceptibility of the state machine to radiation. In this paper we compare four methods of state machine encoding to find which method gives the best fault tolerance, as well as determining the resources needed for each method.
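One way to reason about encoding susceptibility is the minimum Hamming distance between state codes: with distance 1, a single flipped bit silently lands in another valid state, while distance 2 makes any single upset detectable as an invalid code. A small four-state illustration (these are generic encodings, not necessarily the four methods the paper compares):

```python
from itertools import combinations

# Minimum pairwise Hamming distance of three common state encodings.
def hamming(a, b):
    return bin(a ^ b).count("1")

encodings = {
    "binary":  [0b00, 0b01, 0b10, 0b11],
    "gray":    [0b00, 0b01, 0b11, 0b10],
    "one-hot": [0b0001, 0b0010, 0b0100, 0b1000],
}

min_dist = {
    name: min(hamming(a, b) for a, b in combinations(codes, 2))
    for name, codes in encodings.items()
}
# Binary and Gray codes have minimum distance 1 (a single SEU can move
# the machine to another valid state); one-hot has distance 2, so any
# single upset produces an invalid, detectable state at the cost of
# more flip-flops.
```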

  18. Optimal color coding for compression of true color images

    NASA Astrophysics Data System (ADS)

    Musatenko, Yurij S.; Kurashov, Vitalij N.

    1998-11-01

    In this paper we present a method that improves lossy compression of true color or other multispectral images. The essence of the method is to project the initial color planes into the Karhunen-Loeve (KL) basis, which gives a completely decorrelated representation of the image, and to compress the basis functions instead of the planes. To do that, a new fast algorithm for true KL basis construction with low memory consumption is suggested, and our recently proposed scheme for finding the optimal losses of KL functions during compression is used. Compared to standard JPEG compression of CMYK images, the method provides a PSNR gain from 0.2 to 2 dB at typical compression ratios. Experimental results are obtained for high resolution CMYK images. It is demonstrated that the presented scheme can work on common hardware.
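The KL (Karhunen-Loeve) projection step can be sketched directly: treat each color plane as one variable, form the covariance across planes, and project onto its eigenvectors; the transformed planes are mutually decorrelated. A toy version on synthetic correlated "planes" standing in for CMYK data (not the paper's fast construction algorithm):

```python
import numpy as np

# KL transform across color planes: eigendecomposition of the
# inter-plane covariance, then projection onto the eigenvectors.
rng = np.random.default_rng(2)
n_pixels = 4096
base = rng.normal(size=n_pixels)
planes = np.stack([
    base + 0.1 * rng.normal(size=n_pixels),   # two strongly correlated
    base + 0.1 * rng.normal(size=n_pixels),   # "planes"
    rng.normal(size=n_pixels),                # one independent plane
])

planes = planes - planes.mean(axis=1, keepdims=True)
cov = planes @ planes.T / n_pixels            # 3x3 inter-plane covariance
_, eigvecs = np.linalg.eigh(cov)

kl_planes = eigvecs.T @ planes                # KL-transformed planes
kl_cov = kl_planes @ kl_planes.T / n_pixels

# Off-diagonal covariances of the KL planes are numerically zero:
off_diag = kl_cov - np.diag(np.diag(kl_cov))
```

Because the KL planes carry no mutual redundancy, bits spent compressing one plane are not wasted re-encoding information already present in another.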

  19. Standardized volume rendering for magnetic resonance angiography measurements in the abdominal aorta.

    PubMed

    Persson, A; Brismar, T B; Lundström, C; Dahlström, N; Othberg, F; Smedby, O

    2006-03-01

    To compare three methods for standardizing volume rendering technique (VRT) protocols by studying aortic diameter measurements in magnetic resonance angiography (MRA) datasets. Datasets from 20 patients previously examined with gadolinium-enhanced MRA and with digital subtraction angiography (DSA) for abdominal aortic aneurysm were retrospectively evaluated by three independent readers. The MRA datasets were viewed using VRT with three different standardized transfer functions: the percentile method (Pc-VRT), the maximum-likelihood method (ML-VRT), and the partial range histogram method (PRH-VRT). The aortic diameters obtained with these three methods were compared with freely chosen VRT parameters (F-VRT) and with maximum intensity projection (MIP) concerning inter-reader variability and agreement with the reference method DSA. F-VRT parameters and PRH-VRT gave significantly higher diameter values than DSA, whereas Pc-VRT gave significantly lower values than DSA. The highest interobserver variability was found for F-VRT parameters and MIP, and the lowest for Pc-VRT and PRH-VRT. All standardized VRT methods were significantly superior to both MIP and F-VRT in this respect. The agreement with DSA was best for PRH-VRT, which was the only method with a mean error below 1 mm and which also had the narrowest limits of agreement (95% of cases between 2.1 mm below and 3.1 mm above DSA). All the standardized VRT methods compare favorably with MIP and VRT with freely selected parameters as regards interobserver variability. The partial range histogram method, although systematically overestimating vessel diameters, gives results closest to those of DSA.

  20. X-ray phase contrast tomography by tracking near field speckle

    PubMed Central

    Wang, Hongchang; Berujon, Sebastien; Herzen, Julia; Atwood, Robert; Laundy, David; Hipp, Alexander; Sawhney, Kawal

    2015-01-01

    X-ray imaging techniques that capture variations in the x-ray phase can yield higher contrast images with lower x-ray dose than is possible with conventional absorption radiography. However, the extraction of phase information is often more difficult than the extraction of absorption information and requires a more sophisticated experimental arrangement. We here report a method for three-dimensional (3D) X-ray phase contrast computed tomography (CT) which gives quantitative volumetric information on the real part of the refractive index. The method is based on the recently developed X-ray speckle tracking technique in which the displacement of near field speckle is tracked using a digital image correlation algorithm. In addition to differential phase contrast projection images, the method allows the dark-field images to be simultaneously extracted. After reconstruction, compared to conventional absorption CT images, the 3D phase CT images show greatly enhanced contrast. This new imaging method has advantages compared to other X-ray imaging methods in simplicity of experimental arrangement, speed of measurement and relative insensitivity to beam movements. These features make the technique an attractive candidate for material imaging such as in-vivo imaging of biological systems containing soft tissue. PMID:25735237
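The displacement-tracking core of the speckle technique can be sketched with whole-image phase correlation: a known shift between two speckle patterns is recovered from the peak of the normalized cross-power spectrum. The real method tracks small local windows with subpixel interpolation; this global, integer-shift version is a simplification:

```python
import numpy as np

# Recover a known shift between two synthetic speckle images by
# phase correlation (a simple form of digital image correlation).
rng = np.random.default_rng(3)
ref = rng.random((64, 64))                     # synthetic speckle
shift = (5, 9)
moved = np.roll(ref, shift, axis=(0, 1))       # displaced copy (circular)

F1 = np.fft.fft2(ref)
F2 = np.fft.fft2(moved)
cross_power = F1 * np.conj(F2)
cross_power /= np.abs(cross_power)             # keep phase only
corr = np.real(np.fft.ifft2(cross_power))

peak = np.unravel_index(np.argmax(corr), corr.shape)
recovered = tuple((-p) % s for p, s in zip(peak, corr.shape))
# recovered equals the applied shift (5, 9)
```

In the imaging method, the local displacement map obtained this way is the differential phase signal, which is then integrated and reconstructed into the 3D refractive index.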

  1. Comparison of methods for measuring atmospheric deposition of arsenic, cadmium, nickel and lead.

    PubMed

    Aas, Wenche; Alleman, Laurent Y; Bieber, Elke; Gladtke, Dieter; Houdret, Jean-Luc; Karlsson, Vuokko; Monies, Christian

    2009-06-01

    A comprehensive field intercomparison at four different types of European sites (two rural, one urban and one industrial) comparing three different collectors (wet-only, bulk and Bergerhoff samplers) was conducted in the framework of the European Committee for Standardization (CEN) to create a European standard for the deposition of the four elements As, Cd, Ni and Pb. The purpose was to determine whether the proposed methods lead to results within the uncertainty required by the EU's daughter directive (70%). The main conclusion is that a different sampling strategy is needed for rural and industrial sites; thus, the conclusions on uncertainties and sampling approach are presented separately for the different approaches. The wet-only and bulk collector ("bulk bottle method") are comparable at wet rural sites where the total deposition arises mainly from precipitation; the expanded uncertainty when comparing these two types of sampler is below 45% for As, Cd and Pb, and 67% for Ni. At industrial sites, and possibly at very dry rural and urban sites, it is necessary to use Bergerhoff samplers or a "bulk bottle + funnel method". It is not possible to address the total deposition estimation with these methods, but they will give the lowest estimate of the total deposition. The expanded uncertainties when comparing the Bergerhoff and the bulk bottle + funnel methods are below 50% for As and Cd, and 63% for Pb. The uncertainty for Ni was not addressed, since the bulk bottle + funnel method did not include a full digestion procedure, which is necessary for sites with high loads of undissolved metals. The lowest estimate can, however, be calculated by comparing parallel Bergerhoff samplers, where the expanded uncertainty for Ni was 24%. The reproducibility is comparable to the between-sampler/method uncertainties. Sampling and sample preparation proved to be the main factors in the uncertainty budget of deposition measurements.

  2. Comparing, optimizing, and benchmarking quantum-control algorithms in a unifying programming framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Machnes, S.; Institute for Theoretical Physics, University of Ulm, D-89069 Ulm; Sander, U.

    2011-08-15

    For paving the way to novel applications in quantum simulation, computation, and technology, increasingly large quantum systems have to be steered with high precision. It is a typical task amenable to numerical optimal control to turn the time course of pulses, i.e., piecewise constant control amplitudes, iteratively into an optimized shape. Here, we present a comparative study of optimal-control algorithms for a wide range of finite-dimensional applications. We focus on the most commonly used algorithms: GRAPE methods, which update all controls concurrently, and Krotov-type methods, which do so sequentially. Guidelines for their use are given and open research questions are pointed out. Moreover, we introduce a unifying algorithmic framework, DYNAMO (dynamic optimization platform), designed to provide the quantum-technology community with a convenient MATLAB-based tool set for optimal control. In addition, it gives researchers in optimal-control techniques a framework for benchmarking and comparing newly proposed algorithms with the state of the art. It allows a mix-and-match approach with various types of gradients, update and step-size methods, as well as subspace choices. Open-source code including examples is made available at http://qlib.info.
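The concurrent-update (GRAPE-style) idea above can be sketched in a toy setting: piecewise-constant amplitudes drive a single qubit, and all amplitudes are updated together from gradients of the gate fidelity. This is an illustrative sketch with finite-difference gradients, unrelated to the DYNAMO code (real GRAPE uses analytic gradients):

```python
import numpy as np

# Toy GRAPE-style loop: H(t) = u[k] * sigma_x on segment k; all u[k]
# are updated concurrently by gradient ascent on the fidelity with a
# target X rotation, exp(-i * pi/2 * sigma_x).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
target = -1j * sx                    # exp(-i * pi/2 * sigma_x)
dt, n_seg = 0.1, 10

def propagate(u):
    U = I2
    for uk in u:                     # exp(-i*uk*dt*sx) in closed form
        theta = uk * dt
        U = (np.cos(theta) * I2 - 1j * np.sin(theta) * sx) @ U
    return U

def fidelity(u):
    return abs(np.trace(target.conj().T @ propagate(u))) / 2.0

u = np.full(n_seg, 0.5)              # initial pulse guess
for _ in range(200):                 # concurrent updates of all segments
    grad = np.array([
        (fidelity(u + 1e-6 * e) - fidelity(u)) / 1e-6
        for e in np.eye(n_seg)
    ])
    u = u + 0.5 * grad

final_fidelity = fidelity(u)         # approaches 1 as the pulse shapes up
```

A Krotov-type method would instead sweep through the segments one at a time, re-propagating between updates; comparing such update orders is exactly what the framework is built for.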

  3. Comparative spectral analysis of veterinary powder product by continuous wavelet and derivative transforms

    NASA Astrophysics Data System (ADS)

    Dinç, Erdal; Kanbur, Murat; Baleanu, Dumitru

    2007-10-01

    Comparative simultaneous determination of chlortetracycline and benzocaine in a commercial veterinary powder product was carried out by continuous wavelet transform (CWT) and classical derivative spectrophotometry (CDS). In this quantitative spectral analysis, the two proposed analytical methods do not require any chemical separation process. In the first step, several wavelet families were tested to find an optimal CWT for processing the overlapping signals of the analyzed compounds. We observed that the coiflets (COIF-CWT) method with dilation parameter a = 400 gives suitable results for this analytical application. For comparison, the CDS approach was also applied to the simultaneous quantitative resolution of the same analytical problem. Calibration functions were obtained by measuring the transform amplitudes corresponding to zero-crossing points for both the CWT and CDS methods. The utility of these two analytical approaches was verified by analyzing various synthetic mixtures of chlortetracycline and benzocaine, and they were applied to real samples of the veterinary powder formulation. The experimental results obtained from the COIF-CWT approach were statistically compared with those obtained by classical derivative spectrophotometry, and successful results were reported.
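The zero-crossing calibration idea shared by both methods can be illustrated with the derivative approach: at a wavelength where one component's derivative spectrum crosses zero, the derivative of a mixture spectrum depends only on the other component's concentration. Synthetic Gaussian bands below, not the real chlortetracycline/benzocaine spectra:

```python
import numpy as np

# Zero-crossing principle of derivative spectrophotometry on two
# overlapping synthetic Gaussian bands.
wl = np.linspace(200, 400, 2001)               # "wavelength" grid, nm
band_a = np.exp(-((wl - 280.0) / 15.0) ** 2)   # component A spectrum
band_b = np.exp(-((wl - 300.0) / 15.0) ** 2)   # overlapping component B

def derivative(spectrum):
    return np.gradient(spectrum, wl)

# A's first derivative crosses zero at A's own band maximum (280 nm),
# so the mixture derivative there is blind to A's concentration.
zc = np.argmin(np.abs(wl - 280.0))

amp_1 = derivative(1.0 * band_b + band_a)[zc]  # mixture with c_B = 1
amp_2 = derivative(2.0 * band_b + band_a)[zc]  # mixture with c_B = 2

ratio = amp_2 / amp_1                          # ~2: linear in c_B alone
```

The CWT variant works the same way, except the calibration amplitudes are read at zero-crossings of the wavelet-transformed spectra rather than the derivative spectra.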

  4. Limitations and Tolerances in Optical Devices

    NASA Astrophysics Data System (ADS)

    Jackman, Neil Allan

    The performance of optical systems is limited by the imperfections of their components. Many of the devices in optical systems, including optical fiber amplifiers, multimode transmission lines and multilayered media such as mirrors, windows and filters, are modeled by coupled line equations. This investigation includes: (i) a study of the limitations imposed on a wavelength-multiplexed unidirectional ring by the non-uniformities of the gain spectra of Erbium-doped optical fiber amplifiers. We find numerical solutions for non-linear coupled power differential equations and use these solutions to compare the signal-to-noise ratios and signal levels at different nodes. (ii) An analytical study of the tolerances of imperfect multimode media which support forward traveling modes. The complex mode amplitudes are related by linear coupled differential equations. We use analytical methods to derive extended equations for the expected mode powers and give heuristic limits for their regions of validity. These results compare favorably to exact solutions found for a special case. (iii) A study of the tolerances of multilayered media in the presence of optical thickness imperfections. We use analytical methods, including Kronecker products, to calculate the reflection and transmission statistics of the media. Monte Carlo simulations compare well with our analytical method.

  5. Comparative study of methods for recognition of an unknown person's action from a video sequence

    NASA Astrophysics Data System (ADS)

    Hori, Takayuki; Ohya, Jun; Kurumisawa, Jun

    2009-02-01

    This paper proposes a tensor-decomposition-based method that can recognize an unknown person's action from a video sequence, where the unknown person is not included in the database (tensor) used for the recognition. The tensor consists of persons, actions and time-series image features. For the observed unknown person's action, one of the actions stored in the tensor is assumed. Using the motion signature obtained from this assumption, the unknown person's actions are synthesized. The actions of one of the persons in the tensor are replaced by the synthesized actions, and the core tensor for the replaced tensor is computed. This process is repeated over the actions and persons; for each iteration, the difference between the replaced and original core tensors is computed. The assumption that gives the minimal difference is the action recognition result. For the time-series image features, both stored in the tensor and extracted from the observed video sequence, a feature based on the contour shape of the human body silhouette is used. To show the validity of our proposed method, it is experimentally compared with the Nearest Neighbor rule and a Principal Component Analysis based method. Experiments on seven kinds of actions performed by 33 persons show that our proposed method achieves better recognition accuracies for the seven actions than the other methods.

  6. Joint modelling compared with two stage methods for analysing longitudinal data and prospective outcomes: A simulation study of childhood growth and BP.

    PubMed

    Sayers, A; Heron, J; Smith, Adac; Macdonald-Wallis, C; Gilthorpe, M S; Steele, F; Tilling, K

    2017-02-01

    There is a growing debate with regards to the appropriate methods of analysis of growth trajectories and their association with prospective dependent outcomes. Using the example of childhood growth and adult BP, we conducted an extensive simulation study to explore four two-stage and two joint modelling methods, and compared their bias and coverage in estimation of the (unconditional) association between birth length and later BP, and the association between growth rate and later BP (conditional on birth length). We show that the two-stage method of using multilevel models to estimate growth parameters and relating these to outcome gives unbiased estimates of the conditional associations between growth and outcome. Using simulations, we demonstrate that the simple methods resulted in bias in the presence of measurement error, as did the two-stage multilevel method when looking at the total (unconditional) association of birth length with outcome. The two joint modelling methods gave unbiased results, but using the re-inflated residuals led to undercoverage of the confidence intervals. We conclude that either joint modelling or the simpler two-stage multilevel approach can be used to estimate conditional associations between growth and later outcomes, but that only joint modelling is unbiased with nominal coverage for unconditional associations.
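A toy simulation (invented parameters, far simpler than the study's multilevel growth models) showing the measurement-error attenuation that biases the simple two-stage methods the abstract refers to:

```python
import numpy as np

# Regressing an outcome on an error-contaminated growth measure biases the
# estimated association towards zero by the reliability ratio
#   var(true) / (var(true) + var(error)).
rng = np.random.default_rng(1)
n = 200_000
growth = rng.standard_normal(n)                # true growth parameter
bp = 0.5 * growth + rng.standard_normal(n)     # later outcome, true slope 0.5
observed = growth + rng.normal(0.0, 1.0, n)    # measured with error (var 1)

def ols_slope(x, y):
    x = x - x.mean()
    return (x @ (y - y.mean())) / (x @ x)

naive = ols_slope(observed, bp)   # attenuated: ~0.5 * reliability = 0.25
ideal = ols_slope(growth, bp)     # unbiased:   ~0.5
```

Multilevel or joint modelling recovers the true association by modelling the measurement error rather than plugging noisy stage-one estimates into the outcome regression.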

  7. Determination of very low levels of 5-(hydroxymethyl)-2-furaldehyde (HMF) in natural honey: comparison between the HPLC technique and the spectrophotometric White method.

    PubMed

    Truzzi, Cristina; Annibaldi, Anna; Illuminati, Silvia; Finale, Carolina; Rossetti, Monica; Scarponi, Giuseppe

    2012-07-01

    In this work we compared 2 official methods for the determination of HMF in honey, the spectrophotometric White method and the HPLC method (International Honey Commission), for the determination of HMF in unifloral honey and honeydew samples with a very low HMF content (<4 mg/kg), which is the most critical determination in terms of accuracy and precision of methods. In honey solutions, the limits of quantification for the HPLC and White methods are 0.83 mg/L and 0.67 mg/L, respectively, and the linearity range is confirmed up to 20 mg/L for the HPLC method and up to 5 mg/L for the White method. In honeys with HMF >5 mg/kg, the molar extinction coefficient is 15369, lower than the literature value of 16830, and should be used for HMF determination. For samples with HMF content in the range 1-4 mg/kg the accuracy of the 2 methods is comparable both for unifloral and honeydew samples, whereas, in terms of precision, the HPLC method gives better results (3.5% compared with 6.4% for the White method). In general, the HPLC method therefore seems to be more appropriate for the determination of HMF in honey in the range 1-4 mg/kg thanks to its greater precision, but for samples with an HMF content of less than 1 mg/kg the analyses are inaccurate for both methods. This work can help governmental and private laboratories that perform food analyses to choose the best method for the determination of HMF at very low levels in unifloral honey and honeydew samples. © 2012 Institute of Food Technologists®
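A hedged sketch of the Beer-Lambert arithmetic behind a spectrophotometric HMF determination. The absorbance value and path length below are made up; only HMF's molar mass (126.11 g/mol) and the two extinction coefficients quoted in the abstract are taken as given:

```python
# Beer-Lambert: A = epsilon * c * l, with c in mol/L; convert to mg/L.
MW_HMF = 126.11          # g/mol
PATH = 1.0               # cuvette path length, cm (assumed)

def hmf_mg_per_l(absorbance, epsilon, path_cm=PATH):
    return absorbance / (epsilon * path_cm) * MW_HMF * 1000.0

A = 0.20                                  # hypothetical corrected absorbance
lit = hmf_mg_per_l(A, 16830.0)            # literature coefficient
low = hmf_mg_per_l(A, 15369.0)            # coefficient found in this study
ratio = low / lit                         # = 16830/15369, about 1.095
```

The lower extinction coefficient reported here raises the computed HMF by roughly 9.5% for the same absorbance, which is why the abstract recommends using it.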

  8. Detection of cyst using image segmentation and building knowledge-based intelligent decision support system as an aid to telemedicine

    NASA Astrophysics Data System (ADS)

    Janet, J.; Natesan, T. R.; Santhosh, Ramamurthy; Ibramsha, Mohideen

    2005-02-01

    An intelligent decision support tool for the Radiologist in telemedicine is described. Medical prescriptions are given based on images of cysts that have been transmitted over computer networks to the remote medical center. The digital image, acquired by sonography, is converted into an intensity image. This image is then subjected to image preprocessing, which involves correction methods to eliminate specific artifacts. The image is resized into a 256 x 256 matrix using the bilinear interpolation method. The background area is detected using distinct block operation. The area of the cyst is calculated by removing the background area from the original image. Boundary enhancement and morphological operations are done to remove unrelated pixels. This gives us the cyst volume. This segmented image of the cyst is sent to the remote medical center for analysis by the Knowledge-based Intelligent Decision Support System (KIDSS). The type of cyst is detected and reported to the control mechanism of KIDSS. Then the inference engine compares this with the knowledge base and gives appropriate medical prescriptions or treatment recommendations by applying reasoning mechanisms at the remote medical center.
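A toy numpy sketch of the threshold-and-count area step on a synthetic image (not the authors' preprocessing or KIDSS pipeline; cysts appear hypoechoic, i.e. dark, on ultrasound):

```python
import numpy as np

# Synthetic 256x256 "intensity image": bright background, dark circular
# region of radius 40 px standing in for the cyst, plus Gaussian noise.
h = w = 256
yy, xx = np.mgrid[0:h, 0:w]
img = np.full((h, w), 200.0)
cyst = (yy - 128) ** 2 + (xx - 128) ** 2 <= 40 ** 2
img[cyst] = 30.0
img += np.random.default_rng(2).normal(0.0, 5.0, img.shape)

# Separate the dark cyst from the background and count its pixels;
# the expected area is about pi * 40^2 ~ 5027 px.
mask = img < 100.0
area_px = int(mask.sum())
```

Real pipelines would add the bilinear resize, artifact correction, and morphological cleanup steps the abstract lists before counting pixels.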

  9. Automated property optimization via ab initio O(N) elongation method: Application to (hyper-)polarizability in DNA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orimoto, Yuuichi, E-mail: orimoto.yuuichi.888@m.kyushu-u.ac.jp; Aoki, Yuriko; Japan Science and Technology Agency, CREST, 4-1-8 Hon-chou, Kawaguchi, Saitama 332-0012

    An automated property optimization method was developed based on the ab initio O(N) elongation (ELG) method and applied to the optimization of nonlinear optical (NLO) properties in DNA as a first test. The ELG method mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer. The ELG-finite field (ELG-FF) method for calculating (hyper-)polarizabilities was used as the engine program of the optimization method, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a random sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF method compared with a conventional method, and it can lead to more feasible NLO property values in the FF treatment. The automated optimization method successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test optimizations for the first order hyper-polarizability (β) in DNA, a substantial difference was observed depending on optimization conditions between “choose-maximum” (choose a base pair giving the maximum β for each step) and “choose-minimum” (choose a base pair giving the minimum β). In contrast, there was an ambiguous difference between these conditions for optimizing the second order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF method. It can be concluded that the ab initio level property optimization method introduced here can be an effective step towards an advanced computer aided material design method as long as the numerical limitation of the FF method is taken into account.

  10. Automated property optimization via ab initio O(N) elongation method: Application to (hyper-)polarizability in DNA.

    PubMed

    Orimoto, Yuuichi; Aoki, Yuriko

    2016-07-14

    An automated property optimization method was developed based on the ab initio O(N) elongation (ELG) method and applied to the optimization of nonlinear optical (NLO) properties in DNA as a first test. The ELG method mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer. The ELG-finite field (ELG-FF) method for calculating (hyper-)polarizabilities was used as the engine program of the optimization method, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a random sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF method compared with a conventional method, and it can lead to more feasible NLO property values in the FF treatment. The automated optimization method successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test optimizations for the first order hyper-polarizability (β) in DNA, a substantial difference was observed depending on optimization conditions between "choose-maximum" (choose a base pair giving the maximum β for each step) and "choose-minimum" (choose a base pair giving the minimum β). In contrast, there was an ambiguous difference between these conditions for optimizing the second order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF method. It can be concluded that the ab initio level property optimization method introduced here can be an effective step towards an advanced computer aided material design method as long as the numerical limitation of the FF method is taken into account.
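A hedged illustration of the finite-field (FF) differentiation both records refer to: (hyper-)polarizabilities as numerical field derivatives of the energy. The model energy E(F) = E0 - μF - αF²/2 - βF³/6 - γF⁴/24 and the field step are made up; real ELG-FF calculations differentiate ab initio energies:

```python
# Recover alpha, beta, gamma by central differences from a synthetic E(F).
E0, mu, a, b, g = -100.0, 1.2, 8.0, 30.0, 0.05   # hypothetical exact values

def E(F):
    return E0 - mu * F - a * F**2 / 2 - b * F**3 / 6 - g * F**4 / 24

h = 1e-2                                          # field step
alpha = -(E(h) - 2 * E(0) + E(-h)) / h**2
beta = -(E(2 * h) - 2 * E(h) + 2 * E(-h) - E(-2 * h)) / (2 * h**3)
gamma = -(E(2 * h) - 4 * E(h) + 6 * E(0) - 4 * E(-h) + E(-2 * h)) / h**4
```

Because γ sits four derivatives deep and is small, its finite-difference estimate is the noisiest, mirroring the abstract's remark about the numerical limitation of FF differentiation for the second hyper-polarizability.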

  11. Twostep-by-twostep PIRK-type PC methods with continuous output formulas

    NASA Astrophysics Data System (ADS)

    Cong, Nguyen Huu; Xuan, Le Ngoc

    2008-11-01

    This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At the nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration methods but also for calculating the step values at the (n+2)th step. In this case, the integration process can proceed twostep-by-twostep. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods or TBTPIRKC methods) give a faster integration process. Fixed stepsize applications of these TBTPIRKC methods to a few widely-used test problems reveal that the new PC methods are much more efficient when compared with the well-known parallel-iterated RK methods (PIRK methods), parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods) and the sequential explicit RK codes DOPRI5 and DOP853 available from the literature.
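A minimal sketch of the predictor-corrector idea behind PIRK-type methods: predict the stage value explicitly, then correct it by fixed-point iteration on the implicit stage equation. This uses a plain one-stage Gauss (implicit midpoint) corrector on a scalar test problem, not the paper's twostep-by-twostep scheme:

```python
def f(t, y):
    return -2.0 * y                      # test problem y' = -2y, y(0) = 1

def pc_step(t, y, h, iters=6):
    k = f(t, y)                          # predictor: explicit Euler stage
    for _ in range(iters):               # corrector: fixed-point iteration on
        k = f(t + h / 2, y + h / 2 * k)  # the implicit midpoint stage equation
    return y + h * k

y, h = 1.0, 0.01
for n in range(100):                     # integrate to t = 1
    y = pc_step(n * h, y, h)
# y approximates exp(-2) = 0.135335... to second-order accuracy
```

In the PIRK setting the stage iterations for all stages run in parallel, and the continuous output formulas supply the predictions.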

  12. Diagnostic utility of the cell block method versus the conventional smear study in pleural fluid cytology.

    PubMed

    Shivakumarswamy, Udasimath; Arakeri, Surekha U; Karigowdar, Mahesh H; Yelikar, Br

    2012-01-01

    The cytological examinations of serous effusions have been well-accepted, and a positive diagnosis is often considered a definitive diagnosis. It helps in staging, prognosis and management of patients with malignancies and also gives information about various inflammatory and non-inflammatory lesions. Diagnostic problems arise in everyday practice in differentiating reactive atypical mesothelial cells from malignant cells by the routine conventional smear (CS) method. To compare the morphological features of the CS method with those of the cell block (CB) method and also to assess the utility and sensitivity of the CB method in the cytodiagnosis of pleural effusions. The study was conducted in the cytology section of the Department of Pathology. Sixty pleural fluid samples were subjected to diagnostic evaluation over a period of 20 months. Along with the conventional smears, cell blocks were prepared by using 10% alcohol-formalin as a fixative agent. Statistical analysis with the 'z test' was performed to compare the cellularity of the CS and CB methods. McNemar's χ² test was used to identify the additional yield for malignancy by the CB method. Cellularity and the additional yield for malignancy were 15% higher with the CB method. The CB method provides high cellularity, better architectural patterns, morphological features and an additional yield of malignant cells, and thereby increases the sensitivity of the cytodiagnosis when compared with the CS method.
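McNemar's test, used here for the additional malignancy yield, compares paired results through the discordant pairs only. The counts below are invented for illustration; the abstract reports only the ~15% added yield, not its 2x2 table:

```python
# McNemar's chi-squared test (with continuity correction) on a paired table.
# b = positive by cell block only, c = positive by smear only.
b, c = 9, 1

chi2 = (abs(b - c) - 1) ** 2 / (b + c)   # McNemar statistic, 1 d.f.
significant = chi2 > 3.841               # chi-square critical value, alpha=0.05
```

Concordant pairs (positive or negative by both methods) drop out of the statistic entirely, which is what makes the test appropriate for paired method comparisons like CS vs CB.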

  13. Methods of verifying net carbon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClung, M.

    1996-10-01

    Problems currently exist with using net carbon as an industrial standard to gauge smelter performance. First, throughout the industry there are a number of different methods used for determining net carbon. Also, until recently there has not been a viable method to cross check or predict change in net carbon. This inherently leads to differences and most likely inaccuracies when comparing performances of different plants using a net carbon number. Ravenswood uses specific methods when calculating the net carbon balance. The R and D Carbon, Ltd. formula developed by Verner Fisher, et al, to predict and cross check net carbon based on baked carbon core analysis has been successfully used. Another method is used, as a cross check, which is based on the raw materials (cokes and pitch) usage as related to the metal produced. The combination of these methods gives a definitive representation of the carbon performance in the reduction cell. This report details the methods Ravenswood Aluminum uses and the information derived from it.

  14. Hermite WENO limiting for multi-moment finite-volume methods using the ADER-DT time discretization for 1-D systems of conservation laws

    DOE PAGES

    Norman, Matthew R.

    2014-11-24

    New Hermite Weighted Essentially Non-Oscillatory (HWENO) interpolants are developed and investigated within the Multi-Moment Finite-Volume (MMFV) formulation using the ADER-DT time discretization. Whereas traditional WENO methods interpolate pointwise, function-based WENO methods explicitly form a non-oscillatory, high-order polynomial over the cell in question. This study chooses a function-based approach and details how fast convergence to optimal weights for smooth flow is ensured. Methods of sixth-, eighth-, and tenth-order accuracy are developed. We compare these against traditional single-moment WENO methods of fifth-, seventh-, ninth-, and eleventh-order accuracy to compare against more familiar methods from the literature. The new HWENO methods improve upon existing HWENO methods (1) by giving a better resolution of unreinforced contact discontinuities and (2) by only needing a single HWENO polynomial to update both the cell mean value and cell mean derivative. Test cases to validate and assess these methods include 1-D linear transport, the 1-D inviscid Burgers' equation, and the 1-D inviscid Euler equations. Smooth and non-smooth flows are used for evaluation. These HWENO methods performed better than comparable literature-standard WENO methods for all regimes of discontinuity and smoothness in all tests herein. They exhibit improved optimal accuracy due to the use of derivatives, and they collapse to solutions similar to typical WENO methods when limiting is required. The study concludes that the new HWENO methods are robust and effective when used in the ADER-DT MMFV framework. Finally, these results are intended to demonstrate capability rather than exhaust all possible implementations.
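For context, the literature-standard fifth-order WENO-JS reconstruction that schemes like these are measured against: given five cell averages v[i-2..i+2], reconstruct the point value at the right interface with nonlinear weights driven by smoothness indicators (a textbook sketch, not the paper's HWENO method):

```python
import numpy as np

def weno5(v, eps=1e-6):
    vm2, vm1, v0, vp1, vp2 = v
    # Candidate third-order reconstructions on the three sub-stencils.
    p = np.array([(2 * vm2 - 7 * vm1 + 11 * v0) / 6,
                  (-vm1 + 5 * v0 + 2 * vp1) / 6,
                  (2 * v0 + 5 * vp1 - vp2) / 6])
    # Jiang-Shu smoothness indicators.
    b = np.array([13/12 * (vm2 - 2*vm1 + v0)**2 + 1/4 * (vm2 - 4*vm1 + 3*v0)**2,
                  13/12 * (vm1 - 2*v0 + vp1)**2 + 1/4 * (vm1 - vp1)**2,
                  13/12 * (v0 - 2*vp1 + vp2)**2 + 1/4 * (3*v0 - 4*vp1 + vp2)**2])
    d = np.array([0.1, 0.6, 0.3])        # optimal linear weights
    a = d / (eps + b) ** 2
    w = a / a.sum()                      # nonlinear weights
    return w @ p

# Smooth (here linear) data: weights collapse to the linear ones and the
# reconstruction at x_{i+1/2} is exact.
val = weno5(np.array([1.0, 2.0, 3.0, 4.0, 5.0]))   # exact value: 3.5
```

HWENO variants additionally carry derivative (moment) information per cell, which is what lets them reach comparable order on a narrower stencil.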

  15. Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems

    NASA Astrophysics Data System (ADS)

    Razzak, M. A.; Alam, M. Z.; Sharif, M. N.

    2018-03-01

    In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered, in order to avoid complexity. The formulation and the solution procedure are easy and straightforward. The classical multiple time scale (MS) method and the multiple scales Lindstedt-Poincare (MSLP) method do not give the desired results for forced vibration systems with strong damping. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solution (considered to be exact) and are better than other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error (first-order approximate external frequency) in this paper is only 0.07% when amplitude A = 1.5, while the relative error given by the MSLP method is 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping effects.
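For reference, the classical multiple-scales machinery the paper modifies expands the solution in two time variables (illustrative general form for a weakly perturbed oscillator, not the paper's specific equations):

```latex
% Two-timing ansatz for \ddot{x} + \omega_0^2 x + \varepsilon f(x,\dot{x})
%   = F\cos\Omega t  (illustrative form only):
x(t;\varepsilon) = x_0(T_0, T_1) + \varepsilon\, x_1(T_0, T_1)
  + \mathcal{O}(\varepsilon^2), \qquad T_0 = t,\quad T_1 = \varepsilon t,
\qquad
\frac{d}{dt} = D_0 + \varepsilon D_1, \quad
\frac{d^2}{dt^2} = D_0^2 + 2\varepsilon D_0 D_1 + \mathcal{O}(\varepsilon^2),
\quad D_n \equiv \frac{\partial}{\partial T_n}.
```

Secular terms are eliminated order by order, which yields the slow-time amplitude and phase equations; the modification reported here adapts this expansion so that it remains accurate when the damping is not weak.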

  16. Attitudes and Perceptions about Private Philanthropic Giving to Arizona Community Colleges and Universities: Implications for Practice

    ERIC Educational Resources Information Center

    Martinez, George Andrew

    2009-01-01

    Wide disparity exists in philanthropic giving to public, two-year community colleges as compared to public, four-year universities. Recent estimates indicate that 0.5 to 5% of all private philanthropic giving to U.S. higher education annually goes to public, two-year community colleges, with the remainder going to public and private four-year…

  17. Technical Note: Improving proton stopping power ratio determination for a deformable silicone-based 3D dosimeter using dual energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taasti, Vicki Trier, E-mail: victaa@rm.dk; Høye, Ellen Marie; Hansen, David Christoffer

    Purpose: The aim of this study was to investigate whether the stopping power ratio (SPR) of a deformable, silicone-based 3D dosimeter could be determined more accurately using dual energy (DE) CT compared to using conventional methods based on single energy (SE) CT. The use of SECT combined with the stoichiometric calibration method was therefore compared to DECT-based determination. Methods: The SPR of the dosimeter was estimated based on its Hounsfield units (HUs) in both a SECT image and a DECT image set. The stoichiometric calibration method was used for converting the HU in the SECT image to a SPR value for the dosimeter, while two published SPR calibration methods for dual energy were applied to the DECT images. Finally, the SPR of the dosimeter was measured in a 60 MeV proton beam by quantifying the range difference with and without the dosimeter in the beam path. Results: The SPR determined from SECT and the stoichiometric method was 1.10, compared to 1.01 with both DECT calibration methods. The measured SPR for the dosimeter material was 0.97. Conclusions: The SPR of the dosimeter was overestimated by 13% using the stoichiometric method and by 3% when using DECT. If the stoichiometric method should be applied for the dosimeter, the HU of the dosimeter must be manually changed in the treatment planning system in order to give a correct SPR estimate. Using a wrong SPR value will cause differences between the calculated and the delivered treatment plans.
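A hedged sketch of the Bethe-formula SPR estimate that DECT calibration methods build on: SPR ≈ relative electron density times the ratio of stopping numbers. The dosimeter's relative electron density and mean excitation energy (I-value) below are invented; only the physical constants and the abstract's 60 MeV beam energy are taken as given:

```python
import math

ME_C2 = 0.511e6          # electron rest energy, eV
MP_C2 = 938.272e6        # proton rest energy, eV
I_WATER = 75.0           # eV, a commonly used I-value for water

def beta2(ekin_ev):
    # Relativistic beta^2 for a proton of the given kinetic energy.
    gamma = 1.0 + ekin_ev / MP_C2
    return 1.0 - 1.0 / gamma ** 2

def stopping_number(b2, i_ev):
    return math.log(2.0 * ME_C2 * b2 / (i_ev * (1.0 - b2))) - b2

def spr(rho_e_rel, i_ev, ekin_ev=60e6):
    b2 = beta2(ekin_ev)
    return rho_e_rel * stopping_number(b2, i_ev) / stopping_number(b2, I_WATER)

# Hypothetical silicone-like material: rho_e,rel = 0.99, I = 130 eV.
estimate = spr(0.99, 130.0)
```

The point the numbers illustrate: a material whose I-value differs strongly from water's (as silicone's does) has an SPR noticeably below its relative electron density, which is exactly what HU-based stoichiometric calibration tuned to tissues gets wrong.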

  18. Mental health after first childbirth in women requesting a caesarean section; a retrospective register-based study.

    PubMed

    Möller, Louise; Josefsson, Ann; Bladh, Marie; Lilliecreutz, Caroline; Andolf, Ellika; Sydsjö, G

    2017-09-29

    Psychiatric illness before delivery increases the risk of giving birth by caesarean section on maternal request (CSMR), but little is known about these women's mental health after childbirth. In this study we aimed to compare the prevalence of psychiatric disorders five years before and after delivery in primiparae giving birth by CS on maternal request with all other primiparae giving birth, irrespective of their mode of delivery. The study population comprised all women born in Sweden 1973-1983 giving birth for the first time in 2002-2004. Psychiatric diagnoses, from both in- and outpatient care, were retrieved from the National Patient Register in Sweden. The risk of psychiatric care after childbirth was estimated using CSMR, previous mental health and sociodemographic variables as covariates. Psychiatric disorders after childbirth were more common in women giving birth by CSMR compared to the other women (11.2% vs 5.5%, p < 0.001). CSMR increased the risk of psychiatric disorders after childbirth (aOR 1.5, 95% CI 1.2-1.9). The prevalence of psychiatric disorders had increased after compared to before childbirth (mean difference 0.02 ± 0.25, 95% CI 0.018-0.022, p < 0.001). Women giving birth by CSMR tended to be diagnosed in inpatient care more often (54.9% vs. 45.8%, p = 0.056) and were more likely to have been diagnosed before childbirth as well (39.8% vs. 24.2%, p < 0.001). Women giving birth by CSMR more often suffer from psychiatric disorders both before and after delivery. This indicates that these women are a vulnerable group requiring special attention from obstetric and general health-care providers. This vulnerability should be taken into account when deciding on mode of delivery.
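A back-of-envelope check on the abstract's figures: the crude (unadjusted) odds ratio implied by 11.2% vs 5.5% prevalence. The reported aOR of 1.5 is adjusted for prior mental health and sociodemographics, so it is expectedly smaller than this crude value:

```python
# Crude odds ratio from the two reported prevalences.
p_csmr, p_other = 0.112, 0.055
crude_or = (p_csmr / (1 - p_csmr)) / (p_other / (1 - p_other))
# crude_or is about 2.17; adjustment attenuates it towards the reported 1.5,
# consistent with much of the excess risk being explained by prior illness.
```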

  19. Women׳s birthplace decision-making, the role of confidence: Part of the Evaluating Maternity Units study, New Zealand.

    PubMed

    Grigg, Celia P; Tracy, Sally K; Schmied, Virginia; Daellenbach, Rea; Kensington, Mary

    2015-06-01

    to explore women׳s birthplace decision-making and identify the factors which enable women to plan to give birth in a freestanding midwifery-led primary level maternity unit rather than in an obstetric-led tertiary level maternity hospital in New Zealand. a mixed methods prospective cohort design. data from eight focus groups (37 women) and a six week postpartum survey (571 women, 82%) were analysed using thematic analysis and descriptive statistics. The qualitative data from the focus groups and survey were the primary data sources and were integrated at the analysis stage; and the secondary qualitative and quantitative data were integrated at the interpretation stage. Christchurch, New Zealand, with one tertiary maternity hospital and four primary level maternity units (2010-2012). well (at 'low risk' of developing complications), pregnant women booked to give birth in one of the primary units or the tertiary hospital. All women received midwifery continuity of care, regardless of their intended or actual birthplace. five core themes were identified: the birth process, women׳s self-belief in their ability to give birth, midwives, the health system and birth place. 'Confidence' was identified as the overarching concept influencing the themes. Women who chose to give birth in a primary maternity unit appeared to differ markedly in their beliefs regarding their optimal birthplace compared to women who chose to give birth in a tertiary maternity hospital. The women who planned a primary maternity unit birth expressed confidence in the birth process, their ability to give birth, their midwife, the maternity system and/or the primary unit itself. The women planning to give birth in a tertiary hospital did not express confidence in the birth process, their ability to give birth, the system for transfers and/or the primary unit as a birthplace, although they did express confidence in their midwife. 
birthplace is a profoundly important aspect of women׳s experience of childbirth. Birthplace decision-making is complex, in common with many other aspects of childbirth. A multiplicity of factors needs to converge in order for all those involved to gain the confidence required to plan what, in this context, might be considered a 'countercultural' decision to give birth at a midwife-led primary maternity unit. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Data processing of qualitative results from an interlaboratory comparison for the detection of “Flavescence dorée” phytoplasma: How the use of statistics can improve the reliability of the method validation process in plant pathology

    PubMed Central

    Renaudin, Isabelle; Poliakoff, Françoise

    2017-01-01

    A working group established in the framework of the EUPHRESCO European collaborative project aimed to compare and validate diagnostic protocols for the detection of “Flavescence dorée” (FD) phytoplasma in grapevines. Seven molecular protocols were compared in an interlaboratory test performance study where each laboratory had to analyze the same panel of samples consisting of DNA extracts prepared by the organizing laboratory. The tested molecular methods consisted of universal and group-specific real-time and end-point nested PCR tests. Different statistical approaches were applied to this collaborative study. Firstly, there was the standard statistical approach consisting in analyzing samples which are known to be positive and samples which are known to be negative and reporting the proportion of false-positive and false-negative results to respectively calculate diagnostic specificity and sensitivity. This approach was supplemented by the calculation of repeatability and reproducibility for qualitative methods based on the notions of accordance and concordance. Other new approaches were also implemented, based, on the one hand, on the probability of detection model, and, on the other hand, on Bayes’ theorem. These various statistical approaches are complementary and give consistent results. Their combination, and in particular, the introduction of new statistical approaches give overall information on the performance and limitations of the different methods, and are particularly useful for selecting the most appropriate detection scheme with regards to the prevalence of the pathogen. Three real-time PCR protocols (methods M4, M5 and M6 respectively developed by Hren (2007), Pelletier (2009) and under patent oligonucleotides) achieved the highest levels of performance for FD phytoplasma detection. This paper also addresses the issue of indeterminate results and the identification of outlier results. 
The statistical tools presented in this paper and their combination can be applied to many other studies concerning plant pathogens and other disciplines that use qualitative detection methods. PMID:28384335

  1. Data processing of qualitative results from an interlaboratory comparison for the detection of "Flavescence dorée" phytoplasma: How the use of statistics can improve the reliability of the method validation process in plant pathology.

    PubMed

    Chabirand, Aude; Loiseau, Marianne; Renaudin, Isabelle; Poliakoff, Françoise

    2017-01-01

    A working group established in the framework of the EUPHRESCO European collaborative project aimed to compare and validate diagnostic protocols for the detection of "Flavescence dorée" (FD) phytoplasma in grapevines. Seven molecular protocols were compared in an interlaboratory test performance study where each laboratory had to analyze the same panel of samples consisting of DNA extracts prepared by the organizing laboratory. The tested molecular methods consisted of universal and group-specific real-time and end-point nested PCR tests. Different statistical approaches were applied to this collaborative study. Firstly, there was the standard statistical approach consisting in analyzing samples which are known to be positive and samples which are known to be negative and reporting the proportion of false-positive and false-negative results to respectively calculate diagnostic specificity and sensitivity. This approach was supplemented by the calculation of repeatability and reproducibility for qualitative methods based on the notions of accordance and concordance. Other new approaches were also implemented, based, on the one hand, on the probability of detection model, and, on the other hand, on Bayes' theorem. These various statistical approaches are complementary and give consistent results. Their combination, and in particular, the introduction of new statistical approaches give overall information on the performance and limitations of the different methods, and are particularly useful for selecting the most appropriate detection scheme with regards to the prevalence of the pathogen. Three real-time PCR protocols (methods M4, M5 and M6 respectively developed by Hren (2007), Pelletier (2009) and under patent oligonucleotides) achieved the highest levels of performance for FD phytoplasma detection. This paper also addresses the issue of indeterminate results and the identification of outlier results. 
The statistical tools presented in this paper and their combination can be applied to many other studies concerning plant pathogens and other disciplines that use qualitative detection methods.
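A sketch of the two complementary calculations both records describe: diagnostic sensitivity and specificity from known-status panels, and Bayes' theorem turning them into a predictive value at a given field prevalence (all counts and prevalences below are invented for illustration):

```python
# (1) Sensitivity and specificity from known-positive / known-negative panels.
tp, fn = 48, 2          # results on known-positive samples
tn, fp = 45, 5          # results on known-negative samples
sensitivity = tp / (tp + fn)           # 0.96
specificity = tn / (tn + fp)           # 0.90

# (2) Bayes' theorem: P(infected | positive test) at a given prevalence.
def ppv(prev, se, sp):
    return se * prev / (se * prev + (1 - sp) * (1 - prev))

low = ppv(0.01, sensitivity, specificity)    # rare pathogen: PPV ~ 0.09
high = ppv(0.30, sensitivity, specificity)   # high prevalence: PPV ~ 0.80
```

The prevalence dependence is why the abstract stresses choosing the detection scheme "with regard to the prevalence of the pathogen": the same test that is convincing in an outbreak gives mostly false positives in routine surveillance.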

  2. [Advances in metabolic engineering of Escherichia coli for isoprene biosynthesis].

    PubMed

    Guo, Jing; Cao, Yujin; Xian, Mo; Liu, Huizhou

    2016-08-25

    As an important industrial chemical, isoprene is mainly used as a precursor for synthetic rubbers. In addition, it also has wide applications in the fields of pharmaceutical and chemical intermediates, food, adhesives and aviation fuel. Compared with conventional petrochemical routes, production of isoprene in microbial systems has become a research focus owing to its environmentally friendly and sustainable character. This article summarizes the metabolic pathways and key enzymes of isoprene biosynthesis, reviews current methods and strategies for improving isoprene production in Escherichia coli, and gives some basic ideas and expectations for future work.

  3. Folding DNA into a Lipid-Conjugated Nanobarrel for Controlled Reconstitution of Membrane Proteins.

    PubMed

    Dong, Yuanchen; Chen, Shuobing; Zhang, Shijian; Sodroski, Joseph; Yang, Zhongqiang; Liu, Dongsheng; Mao, Youdong

    2018-02-19

    Building upon DNA origami technology, we introduce a method to reconstitute a single membrane protein into a self-assembled DNA nanobarrel that scaffolds a nanodisc-like lipid environment. Compared with the membrane-scaffolding-protein nanodisc technique, our approach gives rise to defined stoichiometry, controlled sizes, as well as enhanced stability and homogeneity in membrane protein reconstitution. We further demonstrate potential applications of the DNA nanobarrels in the structural analysis of membrane proteins. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Detecting peroxiredoxin hyperoxidation by one-dimensional isoelectric focusing.

    PubMed

    Cao, Zhenbo; Bulleid, Neil J

    The activity of typical 2-Cys peroxiredoxins (Prxs) can be regulated by hyperoxidation, with a consequent loss of redox activity. Here we developed a simple assay to monitor the level of hyperoxidation of different typical 2-Cys Prxs simultaneously. This assay requires only standard equipment and can compare different samples on the same gel. It requires much less time than conventional 2D gels and gives more information than Western blotting with an antibody specific for hyperoxidized peroxiredoxin. This method could also be used to monitor protein modifications that involve a charge difference, such as phosphorylation.

  5. High pressure melting curve of platinum up to 35 GPa

    NASA Astrophysics Data System (ADS)

    Patel, Nishant N.; Sunder, Meenakshi

    2018-04-01

    The melting curve of platinum (Pt) has been measured up to 35 GPa using our laboratory-based laser-heated diamond anvil cell (LHDAC) facility. The laser speckle method was employed to detect the onset of melting. The high-pressure melting curve of Pt obtained in the present study has been compared with previously reported experimental and theoretical results; the measured curve agrees, within experimental error, with the results of Kavner et al. Fitting the experimental data with the Simon equation gives (∂Tm/∂P) ˜25 K/GPa at P˜1 MPa.
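
    The Simon-equation fit described above can be sketched as follows. The data here are synthetic values generated from plausible Pt parameters (hypothetical, not the paper's measurements); the initial slope follows from ∂Tm/∂P = T0/(a·c) at P → 0.

```python
import numpy as np
from scipy.optimize import curve_fit

def simon(P, T0, a, c):
    """Simon-Glatzel melting equation: Tm(P) = T0 * (P/a + 1)**(1/c)."""
    return T0 * (P / a + 1.0) ** (1.0 / c)

# Synthetic melting points loosely modeled on Pt (hypothetical parameters,
# not the paper's data): T0 ~ 2041 K, a ~ 25 GPa, c ~ 3.3
P = np.linspace(0.0, 35.0, 8)        # pressure, GPa
Tm = simon(P, 2041.0, 25.0, 3.3)     # melting temperature, K

(T0, a, c), _ = curve_fit(simon, P, Tm, p0=(2000.0, 20.0, 3.0))
slope = T0 / (a * c)                 # dTm/dP at ambient pressure, K/GPa
```

On noiseless synthetic data the fit recovers the generating parameters, so the slope lands near 25 K/GPa by construction.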

  6. How many molecules are required to measure a cyclic voltammogram?

    NASA Astrophysics Data System (ADS)

    Cutress, Ian J.; Compton, Richard G.

    2011-05-01

    The stochastic limit at which fully-reversible cyclic voltammetry can accurately be measured is investigated. Specifically, Monte Carlo GPU simulation is used to study low concentration cyclic voltammetry at a microdisk electrode over a range of scan rates and concentrations, and the results compared to the statistical limit as predicted by finite difference simulation based on Fick's Laws of Diffusion. Both Butler-Volmer and Marcus-Hush electrode kinetics are considered, simulated via random-walk methods, and shown to give identical results in the fast kinetic limit.

  7. Stress Analysis of Beams with Shear Deformation of the Flanges

    NASA Technical Reports Server (NTRS)

    Kuhn, Paul

    1937-01-01

    This report discusses the fundamental action of shear deformation of the flanges on the basis of simplifying assumptions. The theory is developed to the point of giving analytical solutions for simple cases of beams and of skin-stringer panels under axial load. Strain-gage tests on a tension panel and on a beam corresponding to these simple cases are described and the results are compared with analytical results. For wing beams, an approximate method of applying the theory is given. As an alternative, the construction of a mechanical analyzer is advocated.

  8. Chirality sensing with stereodynamic biphenolate zinc complexes.

    PubMed

    Bentley, Keith W; de Los Santos, Zeus A; Weiss, Mary J; Wolf, Christian

    2015-10-01

    Two bidentate ligands consisting of a fluxional polyarylacetylene framework with terminal phenol groups were synthesized. Reaction with diethylzinc gives stereodynamic complexes that undergo distinct asymmetric transformation of the first kind upon binding of chiral amines and amino alcohols. The substrate-to-ligand chirality imprinting at the zinc coordination sphere results in characteristic circular dichroism signals that can be used for direct enantiomeric excess (ee) analysis. This chemosensing approach bears potential for high-throughput ee screening with small sample amounts and reduced solvent waste compared to traditional high-performance liquid chromatography methods. © 2015 Wiley Periodicals, Inc.

  9. Detecting the golgi protein 73 of liver cancer with micro cantilever

    NASA Astrophysics Data System (ADS)

    Thanh Tuyen Le, Thi; Pham, Van Tho; Nhat Khoa Phan, Thanh; Binh Pham, Van; Thao Le, Van; Hien Tong, Duy

    2014-12-01

    Golgi protein 73 (GP73) is a potential serum biomarker for diagnosing human hepatocellular carcinoma (HCC). Compared to alpha-fetoprotein, detection of GP73 is expected to give better sensitivity and specificity and thus offers a better method for diagnosing HCC at an early stage. In this paper, a silicon nitride microcantilever was used to detect GP73. The cantilever was functionalized in several steps with the GP73 antibody. The results show that the cantilever can be used as a label-free sensor to detect this kind of biomarker.

  10. Net thrust calculation sensitivity of an afterburning turbofan engine to variations in input parameters

    NASA Technical Reports Server (NTRS)

    Hughes, D. L.; Ray, R. J.; Walton, J. T.

    1985-01-01

    The calculated value of net thrust of an aircraft powered by a General Electric F404-GE-400 afterburning turbofan engine was evaluated for its sensitivity to various input parameters. The effects of a 1.0-percent change in each input parameter on the calculated value of net thrust with two calculation methods are compared. This paper presents the results of these comparisons and also gives the estimated accuracy of the overall net thrust calculation as determined from the influence coefficients and estimated parameter measurement accuracies.
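
    The sensitivity analysis described above can be sketched generically: perturb each input by 1 percent, record the percent change in the output, then root-sum-square the influence coefficients with assumed measurement accuracies. The toy thrust model and accuracy figures below are hypothetical stand-ins, not the F404 gas-path equations.

```python
import math

def influence_coefficients(f, params, delta=0.01):
    """Percent change in f per 1-percent change in each input parameter."""
    base = f(params)
    coeffs = {}
    for name, value in params.items():
        perturbed = dict(params)
        perturbed[name] = value * (1.0 + delta)
        coeffs[name] = ((f(perturbed) - base) / base) / delta
    return coeffs

# Toy stand-in for a net thrust calculation: F = mdot * v_exit
thrust = lambda p: p["mdot"] * p["v_exit"]
params = {"mdot": 75.0, "v_exit": 600.0}

coeffs = influence_coefficients(thrust, params)

# Root-sum-square estimate of overall accuracy from the influence
# coefficients and assumed 1-sigma measurement accuracies (in percent)
accuracy = {"mdot": 1.5, "v_exit": 0.8}
overall = math.sqrt(sum((coeffs[k] * accuracy[k]) ** 2 for k in params))
```

For this linear toy model both influence coefficients are 1.0, so the overall figure is simply the root-sum-square of the two measurement accuracies.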

  11. Validation of the PVSyst Performance Model for the Concentrix CPV Technology

    NASA Astrophysics Data System (ADS)

    Gerstmaier, Tobias; Gomez, María; Gombert, Andreas; Mermoud, André; Lejeune, Thibault

    2011-12-01

    The accuracy of the two-stage PVSyst model for the Concentrix CPV Technology is determined by comparing modeled to measured values. For both stages, i) the module model and ii) the power plant model, the underlying approaches are explained and methods for obtaining the model parameters are presented. The performance of both models is quantified using 19 months of outdoor measurements for the module model and 9 months of measurements at four different sites for the power plant model. Results are presented by giving statistical quantities for the model accuracy.

  12. Barriers and Facilitators to Effective Feedback: A Qualitative Analysis of Data From Multispecialty Resident Focus Groups.

    PubMed

    Reddy, Shalini T; Zegarek, Matthew H; Fromme, H Barrett; Ryan, Michael S; Schumann, Sarah-Anne; Harris, Ilene B

    2015-06-01

    Despite the importance of feedback, the literature suggests that there is inadequate feedback in graduate medical education. We explored barriers and facilitators that residents in anesthesiology, emergency medicine, obstetrics and gynecology, and surgery experience with giving and receiving feedback during their clinical training. Residents from 3 geographically diverse teaching institutions were recruited to participate in focus groups in 2012. Open-ended questions prompted residents to describe their experiences with giving and receiving feedback, and discuss facilitators and barriers. Data were transcribed and analyzed using the constant comparative method associated with a grounded theory approach. A total of 19 residents participated in 1 of 3 focus groups. Five major themes related to feedback were identified: teacher factors, learner factors, feedback process, feedback content, and educational context. Unapproachable attendings, time pressures due to clinical work, and discomfort with giving negative feedback were cited as major barriers in the feedback process. Learner engagement in the process was a major facilitator in the feedback process. Residents provided insights for improving the feedback process based on their dual roles as teachers and learners. Time pressures in the learning environment may be mitigated by efforts to improve the quality of teacher-learner relationships. Forms for collecting written feedback should be augmented by faculty development to ensure meaningful use. Efforts to improve residents' comfort with giving feedback and encouraging learners to engage in the feedback process may foster an environment conducive to increasing feedback.

  13. Comparison of standard moisture loss-on-drying methods for the determination of moisture content of corn distillers dried grains with solubles.

    PubMed

    Ileleji, Klein E; Garcia, Arnoldo A; Kingsly, Ambrose R P; Clementson, Clairmont L

    2010-01-01

    This study quantified the variability among 14 standard moisture loss-on-drying (gravimetric) methods for determination of the moisture content of corn distillers dried grains with solubles (DDGS). The methods were compared with the Karl Fischer (KF) titration method to determine their percent variation from the KF method. Additionally, the thermo-balance method using a halogen moisture analyzer that is routinely used in fuel ethanol plants was included in the methods investigated. Moisture contents by the loss-on-drying methods were significantly different for DDGS samples from three fuel ethanol plants. The percent deviation of the moisture loss-on-drying methods decreased with decrease in drying temperature and, to a lesser extent, drying time. This was attributed to an overestimation of moisture content in DDGS due to the release of volatiles at high temperatures. Our findings indicate that the various methods that have been used for moisture determination by moisture loss-on-drying will not give identical results and therefore, caution should be exercised when selecting a moisture loss-on-drying method for DDGS.
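
    The percent-deviation comparison against the Karl Fischer reference can be sketched with hypothetical readings (illustrative numbers, not the study's data); the ordering mirrors the reported trend that hotter, longer drying overestimates moisture.

```python
# Hypothetical moisture readings (% wet basis) for one DDGS sample;
# the Karl Fischer (KF) titration value serves as the reference
kf = 10.2
loss_on_drying = {
    "135C_2h":  11.4,   # high temperature: released volatiles inflate the reading
    "105C_24h": 10.9,
    "70C_vac":  10.4,   # gentler drying sits closest to KF
}

pct_deviation = {m: 100.0 * (v - kf) / kf for m, v in loss_on_drying.items()}
```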

  14. Methods for assessing wall interference in the 2- by 2-foot adaptive-wall wind tunnel

    NASA Technical Reports Server (NTRS)

    Schairer, E. T.

    1986-01-01

    Discussed are two methods for assessing two-dimensional wall interference in the adaptive-wall test section of the NASA Ames 2- by 2-Foot Transonic Wind Tunnel: (1) a method for predicting free-air conditions near the walls of the test section (adaptive-wall method); and (2) a method for estimating wall-induced velocities near the model (correction method). Both methods are based on measurements of either one or two components of flow velocity near the walls of the test section. Each method is demonstrated using simulated wind-tunnel data and is compared with other methods of the same type. The two-component adaptive-wall and correction methods were found to be preferable to the corresponding one-component methods because: (1) they are more sensitive to, and give a more complete description of, wall interference; (2) they require measurements at fewer locations; (3) they can be used to establish free-stream conditions; and (4) they are independent of a description of the model and constants of integration.

  15. [Possibility of the species identification using blood stains located on the material evidences and bone fragments with the method of solid phase enzyme immunoassay with "IgG general-EIA-BEST" kit and human immunoglobulin G].

    PubMed

    Sidorov, V L; Shvetsova, I V; Isakova, I V

    2007-01-01

    The authors present a comparative analysis of Russian and foreign forensic medical methods for species identification of blood from stains on material evidence and bone fragments. It is shown that human immunoglobulin G (IgG) and solid-phase enzyme immunoassay (EIA) with the "IgG general-EIA-BEST" kit are suitable for this purpose. Compared with the methods used in Russia, this method is more sensitive and more convenient for objective registration and computer processing. Experimental results showed that the "IgG general-EIA-BEST" kit can be used in forensic medicine for species identification of blood from stains on material evidence and bone fragments.

  16. Nonlinear spline wavefront reconstruction through moment-based Shack-Hartmann sensor measurements.

    PubMed

    Viegers, M; Brunner, E; Soloviev, O; de Visser, C C; Verhaegen, M

    2017-05-15

    We propose a spline-based aberration reconstruction method through moment measurements (SABRE-M). The method uses first- and second-moment information from the focal spots of the SH sensor to reconstruct the wavefront with bivariate simplex B-spline basis functions. Because it provides higher-order local wavefront estimates with quadratic and cubic basis functions, the proposed method can achieve the same accuracy on SH arrays with a reduced number of subapertures and, correspondingly, larger lenses, which can be beneficial in low-light conditions. In numerical experiments the performance of SABRE-M is compared to that of the first-moment method SABRE for aberrations of different spatial orders and for different sizes of the SH array. The results show that SABRE-M is superior to SABRE, in particular for the higher-order aberrations, and that SABRE-M can give equal performance to SABRE on an SH grid of halved sampling.

  17. Liquid Li structure and dynamics: A comparison between OFDFT and second nearest-neighbor embedded-atom method

    DOE PAGES

    Chen, Mohan; Vella, Joseph R.; Panagiotopoulos, Athanassios Z.; ...

    2015-04-08

    The structure and dynamics of liquid lithium are studied using two simulation methods: orbital-free (OF) first-principles molecular dynamics (MD), which employs OF density functional theory (DFT), and classical MD utilizing a second nearest-neighbor embedded-atom method potential. The properties studied include the dynamic structure factor, the self-diffusion coefficient, the dispersion relation, the viscosity, and the bond-angle distribution function. Our simulation results were compared to available experimental data when possible. Each method has distinct advantages and disadvantages. For example, OFDFT gives better agreement with experimental dynamic structure factors, yet is more computationally demanding than classical simulations, while classical simulations can access a broader temperature range and longer time scales. The combination of first-principles and classical simulations is a powerful tool for studying properties of liquid lithium.

  18. Systematic approach to cutoff frequency selection in continuous-wave electron paramagnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Hirata, Hiroshi; Itoh, Toshiharu; Hosokawa, Kouichi; Deng, Yuanmu; Susaki, Hitoshi

    2005-08-01

    This article describes a systematic method for determining the cutoff frequency of the low-pass window function that is used for deconvolution in two-dimensional continuous-wave electron paramagnetic resonance (EPR) imaging. An evaluation function for the criterion used to select the cutoff frequency is proposed, and is the product of the effective width of the point spread function for a localized point signal and the noise amplitude of a resultant EPR image. The present method was applied to EPR imaging for a phantom, and the result of cutoff frequency selection was compared with that based on a previously reported method for the same projection data set. The evaluation function has a global minimum point that gives the appropriate cutoff frequency. Images with reasonably good resolution and noise suppression can be obtained from projections with an automatically selected cutoff frequency based on the present method.
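
    The selection criterion described above can be sketched as a one-dimensional minimization of the evaluation function. The width and noise models below are hypothetical stand-ins for the PSF width and image noise measured in the paper; they only capture the qualitative trade-off (a lower cutoff broadens the PSF, a higher cutoff admits more noise) that produces a global minimum.

```python
import numpy as np

def psf_width(fc):
    # Hypothetical model: a lower cutoff broadens the point spread function
    return 1.0 / fc

def noise_amplitude(fc):
    # Hypothetical model: a higher cutoff admits more high-frequency noise
    return 1.0 + 0.04 * fc ** 2

cutoffs = np.linspace(0.5, 10.0, 191)               # candidate cutoff frequencies
J = psf_width(cutoffs) * noise_amplitude(cutoffs)   # evaluation function
best = cutoffs[np.argmin(J)]                        # global minimum -> chosen cutoff
```

For these models J(fc) = 1/fc + 0.04·fc, whose minimum sits at fc = 5, so the grid search picks the cutoff nearest that point.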

  19. Comparative study of original recover and recover KL in separable non-negative matrix factorization for topic detection in Twitter

    NASA Astrophysics Data System (ADS)

    Prabandari, R. D.; Murfi, H.

    2017-07-01

    An increasing amount of information on social media such as Twitter requires an efficient way to find topics so that the information can be well managed. One automated method for topic detection is separable non-negative matrix factorization (SNMF). SNMF assumes that each topic has at least one word that does not appear in other topics. This method uses a direct approach and has polynomial-time complexity, while previous methods use iterative approaches and have NP-hard complexity. The SNMF algorithm has three steps: constructing word co-occurrences, finding anchor words, and recovering topics. In this paper, we examine two topic recovery methods: the original recover, which uses algebraic manipulation, and recover KL, which uses a probabilistic approach with Kullback-Leibler divergence. Our simulations show that recover KL provides better accuracy in terms of topic recall than the original recover.
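
    Topic recall, the evaluation metric mentioned above, can be sketched as follows. The Jaccard-overlap matching rule and the threshold are plausible assumptions for illustration; the paper's exact definition may differ.

```python
def topic_recall(reference_topics, detected_topics, threshold=0.5):
    """Fraction of reference topics matched by at least one detected topic.

    A match is declared when the Jaccard overlap of the two word sets
    reaches the threshold (one plausible definition, assumed here).
    """
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b)

    hits = sum(
        any(jaccard(ref, det) >= threshold for det in detected_topics)
        for ref in reference_topics
    )
    return hits / len(reference_topics)

reference = [["flood", "rain", "river"], ["match", "goal", "league"]]
detected = [["flood", "rain", "storm"], ["stock", "market", "index"]]
recall = topic_recall(reference, detected)   # only the first topic is matched
```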

  20. Incremental harmonic balance method for predicting amplitudes of a multi-d.o.f. non-linear wheel shimmy system with combined Coulomb and quadratic damping

    NASA Astrophysics Data System (ADS)

    Zhou, J. X.; Zhang, L.

    2005-01-01

    Incremental harmonic balance (IHB) formulations are derived for general multiple-degree-of-freedom (d.o.f.) non-linear autonomous systems. These formulations are developed for a four-d.o.f. aircraft wheel shimmy system with combined Coulomb and velocity-squared damping. A multi-harmonic analysis is performed and the amplitudes of limit cycles are predicted. Within a large range of parametric variations with respect to aircraft taxi velocity, the IHB method can, at much lower cost, give results with high accuracy compared with numerical results given by a parametric continuation method. In particular, the IHB method avoids the stiff problems emanating from numerical treatment of the aircraft wheel shimmy system equations. The development is applicable to other vibration control systems that include commonly used dry-friction devices or velocity-squared hydraulic dampers.

  1. A hybrid-stress finite element approach for stress and vibration analysis in linear anisotropic elasticity

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley; Fly, Gerald W.; Mahadevan, L.

    1987-01-01

    A hybrid stress finite element method is developed for accurate stress and vibration analysis of problems in linear anisotropic elasticity. A modified form of the Hellinger-Reissner principle is formulated for dynamic analysis, and an algorithm for determining the anisotropic elastic and compliance constants from experimental data is developed. These schemes were implemented in a finite element program for static and dynamic analysis of linear anisotropic two-dimensional elasticity problems. Specific numerical examples are considered to verify the accuracy of the hybrid stress approach and compare it with that of the standard displacement method, especially for highly anisotropic materials. It is shown that the hybrid stress approach gives much better results than the displacement method. Preliminary work on extensions of this method to three-dimensional elasticity is discussed, and the stress shape functions necessary for this extension are included.

  2. Gaussian mass optimization for kernel PCA parameters

    NASA Astrophysics Data System (ADS)

    Liu, Yong; Wang, Zulin

    2011-10-01

    This paper proposes a novel kernel parameter optimization method based on Gaussian mass, which aims to overcome the current brute-force parameter optimization method in a heuristic way. Generally speaking, the choice of kernel parameter should be tightly related to the target objects, whereas the variance between samples, the most commonly used kernel parameter, does not capture many features of the target; this motivates the Gaussian mass. The Gaussian mass defined in this paper is invariant under rotation and translation and is capable of describing edge, topology, and shape information. Simulation results show that Gaussian mass provides a promising heuristic boost for kernel methods: on the MNIST handwriting database, the recognition rate improves by 1.6% compared with the common kernel method without Gaussian mass optimization. Several other promising directions in which Gaussian mass might help are also proposed at the end of the paper.

  3. Feature Selection for Ridge Regression with Provable Guarantees.

    PubMed

    Paul, Saurabh; Drineas, Petros

    2016-04-01

    We introduce single-set spectral sparsification as a deterministic sampling-based feature selection technique for regularized least-squares classification, which is the classification analog to ridge regression. The method is unsupervised and gives worst-case guarantees of the generalization power of the classification function after feature selection with respect to the classification function obtained using all features. We also introduce leverage-score sampling as an unsupervised randomized feature selection method for ridge regression. We provide risk bounds for both single-set spectral sparsification and leverage-score sampling on ridge regression in the fixed design setting and show that the risk in the sampled space is comparable to the risk in the full-feature space. We perform experiments on synthetic and real-world data sets; a subset of TechTC-300 data sets, to support our theory. Experimental results indicate that the proposed methods perform better than the existing feature selection methods.
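
    Leverage-score feature sampling, the second method above, can be sketched on toy data (random matrix, not the TechTC-300 sets): compute scores from the top-k right singular vectors of the data matrix and sample features with probability proportional to their scores.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))      # n = 50 samples, d = 20 features (toy data)
k = 5                              # target rank for the leverage scores

# Leverage score of feature j: squared norm of the j-th column of the
# top-k right singular vectors of X
_, _, Vt = np.linalg.svd(X, full_matrices=False)
scores = (Vt[:k] ** 2).sum(axis=0)     # one score per feature; sums to k
probs = scores / scores.sum()

# Sample a feature subset with probability proportional to leverage
keep = rng.choice(X.shape[1], size=10, replace=False, p=probs)
X_reduced = X[:, keep]
```

The reduced matrix then feeds the downstream ridge regression; the paper's risk bounds compare that sampled space to the full-feature space.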

  4. Coupling detrended fluctuation analysis for analyzing coupled nonstationary signals.

    PubMed

    Hedayatifar, L; Vahabi, M; Jafari, G R

    2011-08-01

    When many variables are coupled to each other, a single case study cannot give thorough and precise information. When the time series are stationary, various methods from random matrix analysis and complex networks can be used; in nonstationary cases, however, the multifractal detrended cross-correlation analysis (MF-DXA) method was introduced for just two coupled time series. In this article, we extend MF-DXA to the method of coupling detrended fluctuation analysis (CDFA) for the case when more than two series are correlated to each other. We calculate the multifractal properties of the coupled time series, and by comparing the CDFA results of the original series with those of the shuffled and surrogate series, we can estimate the source of multifractality and the extent to which the series are coupled to each other. We illustrate the method with selected examples from air pollution and foreign exchange rates.

  5. Coupling detrended fluctuation analysis for analyzing coupled nonstationary signals

    NASA Astrophysics Data System (ADS)

    Hedayatifar, L.; Vahabi, M.; Jafari, G. R.

    2011-08-01

    When many variables are coupled to each other, a single case study cannot give thorough and precise information. When the time series are stationary, various methods from random matrix analysis and complex networks can be used; in nonstationary cases, however, the multifractal detrended cross-correlation analysis (MF-DXA) method was introduced for just two coupled time series. In this article, we extend MF-DXA to the method of coupling detrended fluctuation analysis (CDFA) for the case when more than two series are correlated to each other. We calculate the multifractal properties of the coupled time series, and by comparing the CDFA results of the original series with those of the shuffled and surrogate series, we can estimate the source of multifractality and the extent to which the series are coupled to each other. We illustrate the method with selected examples from air pollution and foreign exchange rates.
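
    The single-series building block behind this family of methods can be sketched as plain detrended fluctuation analysis (DFA); CDFA generalizes this construction to several coupled series at once. This is a minimal sketch on white noise, whose scaling exponent is close to 0.5.

```python
import numpy as np

def dfa(x, scales):
    """Plain detrended fluctuation analysis of a single series."""
    y = np.cumsum(x - np.mean(x))          # profile of the series
    F = []
    for s in scales:
        n_seg = len(y) // s
        sq = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.array(F)

rng = np.random.default_rng(3)
x = rng.normal(size=4096)                  # white-noise test signal
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # ~0.5 for white noise
```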

  6. Shape Classification Using Wasserstein Distance for Brain Morphometry Analysis.

    PubMed

    Su, Zhengyu; Zeng, Wei; Wang, Yalin; Lu, Zhong-Lin; Gu, Xianfeng

    2015-01-01

    Brain morphometry plays a fundamental role in medical imaging analysis and diagnosis. This work proposes a novel framework for brain cortical surface classification using the Wasserstein distance, based on uniformization theory and Riemannian optimal mass transport theory. By the Poincare uniformization theorem, all shapes can be conformally deformed to one of the three canonical spaces: the unit sphere, the Euclidean plane, or the hyperbolic plane. The uniformization map distorts the surface area elements, and the area-distortion factor gives a probability measure on the canonical uniformization space. All probability measures on a Riemannian manifold form the Wasserstein space. Given any two probability measures, there is a unique optimal mass transport map between them, and the transportation cost defines the Wasserstein distance between them. The Wasserstein distance gives a Riemannian metric for the Wasserstein space; it intrinsically measures the dissimilarity between shapes and thus has potential for shape classification. To the best of our knowledge, this is the first work to introduce the optimal mass transport map to general Riemannian manifolds. The method is based on the geodesic power Voronoi diagram. Compared to conventional methods, our approach depends solely on Riemannian metrics and is invariant under rigid motions and scalings, so it intrinsically measures shape distance. Experimental results on classifying brain cortical surfaces with different intelligence quotients demonstrated the efficiency and efficacy of our method.

  7. Shape Classification Using Wasserstein Distance for Brain Morphometry Analysis

    PubMed Central

    Su, Zhengyu; Zeng, Wei; Wang, Yalin; Lu, Zhong-Lin; Gu, Xianfeng

    2015-01-01

    Brain morphometry plays a fundamental role in medical imaging analysis and diagnosis. This work proposes a novel framework for brain cortical surface classification using the Wasserstein distance, based on uniformization theory and Riemannian optimal mass transport theory. By the Poincare uniformization theorem, all shapes can be conformally deformed to one of the three canonical spaces: the unit sphere, the Euclidean plane, or the hyperbolic plane. The uniformization map distorts the surface area elements, and the area-distortion factor gives a probability measure on the canonical uniformization space. All probability measures on a Riemannian manifold form the Wasserstein space. Given any two probability measures, there is a unique optimal mass transport map between them, and the transportation cost defines the Wasserstein distance between them. The Wasserstein distance gives a Riemannian metric for the Wasserstein space; it intrinsically measures the dissimilarity between shapes and thus has potential for shape classification. To the best of our knowledge, this is the first work to introduce the optimal mass transport map to general Riemannian manifolds. The method is based on the geodesic power Voronoi diagram. Compared to conventional methods, our approach depends solely on Riemannian metrics and is invariant under rigid motions and scalings, so it intrinsically measures shape distance. Experimental results on classifying brain cortical surfaces with different intelligence quotients demonstrated the efficiency and efficacy of our method. PMID:26221691
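
    The core idea, comparing shapes via the transport cost between their associated probability measures, can be illustrated in one dimension with SciPy's `wasserstein_distance`. This is a drastic simplification of the paper's surface-based framework; the samples below are hypothetical stand-ins for area-distortion measures.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# One-dimensional stand-in: treat each "shape" as samples from its
# (hypothetical) area-distortion measure and compare by transport cost
rng = np.random.default_rng(1)
shape_a = rng.normal(0.0, 1.0, 1000)
shape_b = rng.normal(0.5, 1.0, 1000)   # measure close to shape_a
shape_c = rng.normal(3.0, 1.0, 1000)   # measure far from shape_a

d_ab = wasserstein_distance(shape_a, shape_b)
d_ac = wasserstein_distance(shape_a, shape_c)
# nearer distributions give a smaller transport cost
```

A nearest-neighbor or threshold classifier on such distances is the simplest version of distance-based shape classification.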

  8. Effective optimization using sample persistence: A case study on quantum annealers and various Monte Carlo optimization methods

    NASA Astrophysics Data System (ADS)

    Karimi, Hamed; Rosenberg, Gili; Katzgraber, Helmut G.

    2017-10-01

    We present and apply a general-purpose, multistart algorithm for improving the performance of low-energy samplers used for solving optimization problems. The algorithm iteratively fixes the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are smaller and less connected, and samplers tend to give better low-energy samples for these problems. The algorithm is trivially parallelizable since each start in the multistart algorithm is independent, and could be applied to any heuristic solver that can be run multiple times to give a sample. We present results for several classes of hard problems solved using simulated annealing, path-integral quantum Monte Carlo, parallel tempering with isoenergetic cluster moves, and a quantum annealer, and show that the success metrics and the scaling are improved substantially. When combined with this algorithm, the quantum annealer's scaling was substantially improved for native Chimera graph problems. In addition, with this algorithm the scaling of the time to solution of the quantum annealer is comparable to the Hamze-de Freitas-Selby algorithm on the weak-strong cluster problems introduced by Boixo et al. Parallel tempering with isoenergetic cluster moves was able to consistently solve three-dimensional spin glass problems with 8000 variables when combined with our method, whereas without our method it could not solve any.
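
    The variable-fixing step can be sketched as follows, assuming a simple persistence rule: fix every spin whose value is strongly polarized across the low-energy samples. The toy sampler output and the 0.9 threshold are hypothetical; the paper's algorithm chooses the fixed set from actual heuristic-solver samples.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vars, n_samples = 20, 30

# Toy stand-in for low-energy samples from a heuristic solver: +/-1 spins,
# where the sampler happens to agree perfectly on the first five variables
samples = rng.choice([-1, 1], size=(n_samples, n_vars))
samples[:, :5] = 1

# Fix every variable whose sample mean is strongly polarized; the rest
# form a smaller, less connected subproblem for the next round
mean = samples.mean(axis=0)
fixed = {i: int(np.sign(mean[i])) for i in range(n_vars) if abs(mean[i]) >= 0.9}
free = [i for i in range(n_vars) if i not in fixed]
```

Iterating this fix-and-resolve loop, one round per multistart, is what shrinks the problem handed to the sampler.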

  9. Evaluation of two methods for using MR information in PET reconstruction

    NASA Astrophysics Data System (ADS)

    Caldeira, L.; Scheins, J.; Almeida, P.; Herzog, H.

    2013-02-01

    Using magnetic resonance (MR) information in maximum a posteriori (MAP) algorithms for positron emission tomography (PET) image reconstruction has been investigated in recent years. Recently, three methods for introducing this information were evaluated and the Bowsher prior was considered the best; its main advantage is that it does not require image segmentation. Another method that has been widely used for incorporating MR information is using boundaries obtained by segmentation, which has also shown improvements in image quality. In this paper, these two methods for incorporating MR information in PET reconstruction are compared. After a Bayes parameter optimization, the reconstructed images were compared using the mean squared error (MSE) and the coefficient of variation (CV). MSE values are 3% lower with the Bowsher prior than with boundaries, and CV values are 10% lower. Both methods performed better, in terms of MSE and CV, than using no prior, i.e., maximum likelihood expectation maximization (MLEM) or MAP without anatomical information. In conclusion, incorporating MR information using the Bowsher prior gives better results in terms of MSE and CV than using boundaries. MAP algorithms again proved effective for noise reduction and convergence, especially when MR information is incorporated. The robustness of the priors with respect to noise and inhomogeneities in the MR image, however, remains to be assessed.
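
    The two figures of merit used above are straightforward to compute; a minimal sketch on toy "reconstructions" of a flat phantom (hypothetical data, not the paper's images):

```python
import numpy as np

def mse(image, reference):
    """Mean squared error against a reference image."""
    return np.mean((image - reference) ** 2)

def cv(image):
    """Coefficient of variation (a noise proxy) over a uniform region."""
    return np.std(image) / np.mean(image)

# Toy 2-D reconstructions of a flat phantom of intensity 100
rng = np.random.default_rng(5)
truth = np.full((64, 64), 100.0)
recon_noisy = truth + rng.normal(0.0, 10.0, truth.shape)
recon_smooth = truth + rng.normal(0.0, 3.0, truth.shape)
```

The less noisy reconstruction scores lower on both metrics, which is the direction of improvement reported for the Bowsher prior.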

  10. A software tool for determination of breast cancer treatment methods using data mining approach.

    PubMed

    Cakır, Abdülkadir; Demirel, Burçin

    2011-12-01

    In this work, breast cancer treatment methods are determined using data mining. For this purpose, software was developed to help oncologists suggest treatment methods for breast cancer patients. Data from 462 breast cancer patients, obtained from Ankara Oncology Hospital, were used to determine treatment methods for new patients. The dataset was processed with the Weka data mining tool; classification algorithms were applied one by one and the results were compared to find the proper treatment method. The developed software, called "Treatment Assistant", uses different algorithms (IB1, Multilayer Perceptron, and Decision Table) through a Java NetBeans interface to find out which gives the better result for each attribute to be predicted. Treatment methods are determined for the post-surgical treatment of breast cancer patients using this software tool. At the modeling step of the data mining process, different Weka algorithms were used for the output attributes: IB1 showed the best accuracy for the hormonotherapy output, Multilayer Perceptron for the tamoxifen and radiotherapy outputs, and the Decision Table algorithm for the chemotherapy output. In conclusion, this work shows that a data mining approach can be a useful tool for medical applications, particularly at the treatment decision step, helping the doctor decide in a short time.

  11. Thermodynamic Temperature of High-Temperature Fixed Points Traceable to Blackbody Radiation and Synchrotron Radiation

    NASA Astrophysics Data System (ADS)

    Wähmer, M.; Anhalt, K.; Hollandt, J.; Klein, R.; Taubert, R. D.; Thornagel, R.; Ulm, G.; Gavrilov, V.; Grigoryeva, I.; Khlevnoy, B.; Sapritsky, V.

    2017-10-01

    Absolute spectral radiometry is currently the only established primary thermometric method for the temperature range above 1300 K. Up to now, the ongoing improvements of high-temperature fixed points and their formal implementation into an improved temperature scale with the mise en pratique for the definition of the kelvin, rely solely on single-wavelength absolute radiometry traceable to the cryogenic radiometer. Two alternative primary thermometric methods, yielding comparable or possibly even smaller uncertainties, have been proposed in the literature. They use ratios of irradiances to determine the thermodynamic temperature traceable to blackbody radiation and synchrotron radiation. At PTB, a project has been established in cooperation with VNIIOFI to use, for the first time, all three methods simultaneously for the determination of the phase transition temperatures of high-temperature fixed points. For this, a dedicated four-wavelengths ratio filter radiometer was developed. With all three thermometric methods performed independently and in parallel, we aim to compare the potential and practical limitations of all three methods, disclose possibly undetected systematic effects of each method and thereby confirm or improve the previous measurements traceable to the cryogenic radiometer. This will give further and independent confidence in the thermodynamic temperature determination of the high-temperature fixed point's phase transitions.
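
    The irradiance-ratio idea behind the alternative thermometric methods can be illustrated with two-wavelength ratio thermometry under the Wien approximation. This is a textbook simplification, not the project's four-wavelength filter-radiometer procedure; the wavelengths and temperature below are hypothetical.

```python
import math

C2 = 1.4388e-2           # second radiation constant, m*K

def wien_radiance(lam, T):
    """Spectral radiance up to a constant factor, Wien approximation."""
    return lam ** -5 * math.exp(-C2 / (lam * T))

def temperature_from_ratio(lam1, lam2, ratio):
    """Invert the Wien-approximation radiance ratio L(lam1)/L(lam2)."""
    return C2 * (1.0 / lam2 - 1.0 / lam1) / math.log(ratio * (lam1 / lam2) ** 5)

lam1, lam2 = 650e-9, 900e-9          # hypothetical filter wavelengths, m
T_true = 3000.0                      # K, typical high-temperature fixed-point regime
ratio = wien_radiance(lam1, T_true) / wien_radiance(lam2, T_true)
T_rec = temperature_from_ratio(lam1, lam2, ratio)
```

Because the ratio cancels any wavelength-independent scale factor, this route needs no absolute radiometric calibration, which is exactly why it complements single-wavelength absolute radiometry.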

  12. A Method to Determine All Non-Isomorphic Groups of Order 16

    ERIC Educational Resources Information Center

    Valcan, Dumitru

    2012-01-01

    Many students and teachers ask themselves: given a natural number n, how many non-isomorphic groups of order n exist? In general, the answer is not yet known, but the question has been answered for certain values of n. The present work gives a method to determine all non-isomorphic groups of order 16 and gives descriptions of all…

  13. Estimation of effective brain connectivity with dual Kalman filter and EEG source localization methods.

    PubMed

    Rajabioun, Mehdi; Nasrabadi, Ali Motie; Shamsollahi, Mohammad Bagher

    2017-09-01

    Effective connectivity is one of the most important considerations in brain functional mapping via EEG. It demonstrates the effects of a particular active brain region on others. In this paper, a new method based on the dual Kalman filter is proposed. First, a brain source localization method (standardized low-resolution brain electromagnetic tomography) is applied to the EEG signal to extract the active regions, and an appropriate temporal model (a multivariate autoregressive model) is fitted to the extracted active sources to evaluate the activity and the time dependence between sources. Then, a dual Kalman filter is used to estimate the model parameters, i.e. the effective connectivity between active regions. The advantage of this method is that the activity of different brain parts is estimated simultaneously with the calculation of effective connectivity between active regions: by combining the dual Kalman filter with brain source localization, the source activity is updated over time in addition to the connectivity estimation. The method's performance was evaluated first by applying it to simulated EEG signals with known interacting connectivity between active parts. Noisy simulated signals with different signal-to-noise ratios were used to evaluate the method's sensitivity to noise and to compare its performance with other methods. The method was then applied to real signals, and the estimation error over a sweeping window was calculated. In both simulated and real conditions, the proposed method gives acceptable results with the least mean square error.
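    The parameter-estimation half of a dual Kalman scheme can be sketched for a scalar AR(1) model, treating the autoregressive coefficient itself as the hidden state. This is a toy illustration of the idea, not the authors' multivariate method; all constants below are invented:

```python
import random

random.seed(1)

# Simulate an AR(1) source: x_t = a_true * x_{t-1} + noise
a_true = 0.8
x = [1.0]
for _ in range(400):
    x.append(a_true * x[-1] + random.gauss(0.0, 0.1))

# Kalman filter over the *parameter*: the AR coefficient is modelled
# as a (nearly) constant hidden state observed through x_t = a * x_{t-1} + v.
a_hat, P = 0.0, 1.0   # initial estimate and its variance
Q, R = 1e-6, 0.01     # assumed process / measurement noise variances
for t in range(1, len(x)):
    P += Q                        # predict step (random-walk parameter)
    H = x[t - 1]                  # the "observation matrix" is the lagged sample
    K = P * H / (H * H * P + R)   # Kalman gain
    a_hat += K * (x[t] - H * a_hat)
    P *= (1 - K * H)
```

    In the full dual scheme, a second filter simultaneously tracks the source activity, with each filter using the other's current estimate.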

  14. Introduction to a standardized method for the evaluation of the potency of Bacillus thuringiensis serotype H-14 based products*

    PubMed Central

    Rishikesh, N.; Quélennec, G.

    1983-01-01

    Vector resistance and other constraints have necessitated consideration of the use of alternative materials and methods in an integrated approach to vector control. Bacillus thuringiensis serotype H-14 is a promising biological control agent which acts as a conventional larvicide through its delta-endotoxin (active ingredient) and which now has to be suitably formulated for application in vector breeding habitats. The active ingredient in the formulations has so far not been chemically characterized or quantified and therefore recourse has to be taken to a bioassay method. Drawing on past experience and through the assistance mainly of various collaborating centres, the World Health Organization has standardized a bioassay method (described in the Annex), which gives consistent and reproducible results. The method permits the determination of the potency of a B.t. H-14 preparation through comparison with a standard powder. The universal adoption of the standardized bioassay method will ensure comparability of the results of different investigators. PMID:6601545

  15. Damage Based Analysis (DBA) - Theory, Derivation and Practical Application Using Both an Acceleration and Pseudo Velocity Approach

    NASA Technical Reports Server (NTRS)

    Grillo, Vince

    2017-01-01

    The objective of this presentation is to give a brief overview of the theory behind the DBA method, an overview of the derivation and a practical application of the theory using the Python computer language. The theory and derivation will use both acceleration and pseudo-velocity methods to derive a series of equations for processing by Python. We will take the results, compare both acceleration and pseudo-velocity methods, and discuss the implementation of the Python functions. Also, we will discuss the efficiency of the methods and the amount of computer time required for the solution. In conclusion, DBA offers a powerful method to evaluate the amount of energy imparted into a system, in the form of both amplitude and duration, during qualification testing and flight environments. Many forms of steady-state and transient vibratory motion can be characterized using this technique. DBA provides a more robust alternative to traditional methods such as Power Spectral Density (PSD) using a maximax approach.

  16. A consensus reaching model for 2-tuple linguistic multiple attribute group decision making with incomplete weight information

    NASA Astrophysics Data System (ADS)

    Zhang, Wancheng; Xu, Yejun; Wang, Huimin

    2016-01-01

    The aim of this paper is to put forward a consensus reaching method for multi-attribute group decision-making (MAGDM) problems with linguistic information, in which the weight information of experts and attributes is unknown. First, some basic concepts and operational laws of the 2-tuple linguistic label are introduced. Then, a grey relational analysis method and a maximising deviation method are proposed to calculate the incomplete weight information of experts and attributes, respectively. To eliminate conflict in the group, a weight-updating model is employed to derive the weights of experts based on their contribution to the consensus reaching process. After conflict elimination, the final group preference can be obtained, which gives the ranking of the alternatives. The model can effectively avoid the information distortion which occurs regularly in linguistic information processing. Finally, an illustrative example is given to demonstrate the application of the proposed method, and a comparative analysis with existing methods is offered to show its advantages.
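    The maximising-deviation idea, in which attributes that discriminate more strongly between alternatives receive larger weights, can be sketched numerically. The decision matrix below is invented, and plain numeric ratings stand in for the paper's 2-tuple linguistic labels:

```python
# Maximising-deviation attribute weights: the weight of attribute j is
# proportional to the total pairwise deviation of the alternatives'
# ratings on j. The matrix is illustrative (rows: alternatives,
# columns: attributes, already normalised to [0, 1]).
ratings = [
    [0.7, 0.5, 0.9],
    [0.6, 0.5, 0.2],
    [0.9, 0.5, 0.4],
]

def max_deviation_weights(m):
    n_attr = len(m[0])
    dev = []
    for j in range(n_attr):
        col = [row[j] for row in m]
        # sum of |r_ij - r_kj| over all pairs of alternatives
        dev.append(sum(abs(a - b) for a in col for b in col))
    total = sum(dev)
    return [d / total for d in dev]

w = max_deviation_weights(ratings)
```

    Here the second attribute, on which every alternative scores identically, receives zero weight: an attribute that cannot distinguish the alternatives contributes nothing to the ranking.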

  17. Damage Based Analysis (DBA): Theory, Derivation and Practical Application - Using Both an Acceleration and Pseudo-Velocity Approach

    NASA Technical Reports Server (NTRS)

    Grillo, Vince

    2016-01-01

    The objective of this presentation is to give a brief overview of the theory behind the DBA method, an overview of the derivation and a practical application of the theory using the Python computer language. The theory and derivation will use both acceleration and pseudo-velocity methods to derive a series of equations for processing by Python. We will take the results, compare both acceleration and pseudo-velocity methods, and discuss the implementation of the Python functions. Also, we will discuss the efficiency of the methods and the amount of computer time required for the solution. In conclusion, DBA offers a powerful method to evaluate the amount of energy imparted into a system, in the form of both amplitude and duration, during qualification testing and flight environments. Many forms of steady-state and transient vibratory motion can be characterized using this technique. DBA provides a more robust alternative to traditional methods such as Power Spectral Density (PSD) using a maximax approach.

  18. Koopmans' theorem in the Hartree-Fock method. General formulation

    NASA Astrophysics Data System (ADS)

    Plakhutin, Boris N.

    2018-03-01

    This work presents a general formulation of Koopmans' theorem (KT) in the Hartree-Fock (HF) method which is applicable to molecular and atomic systems with arbitrary orbital occupancies and total electronic spin including orbitally degenerate (OD) systems. The new formulation is based on the full set of variational conditions imposed upon the HF orbitals by the variational principle for the total energy and the conditions imposed by KT on the orbitals of an ionized electronic shell [B. N. Plakhutin and E. R. Davidson, J. Chem. Phys. 140, 014102 (2014)]. Based on these conditions, a general form of the restricted open-shell HF method is developed, whose eigenvalues (orbital energies) obey KT for the whole energy spectrum. Particular attention is paid to the treatment of OD systems, for which the new method gives a number of unexpected results. For example, the present method gives four different orbital energies for the triply degenerate atomic level 2p in the second row atoms B to F. Based on both KT conditions and a parallel treatment of atoms B to F within a limited configuration interaction approach, we prove that these four orbital energies, each of which is triply degenerate, are related via KT to the energies of different spin-dependent ionization and electron attachment processes (2p)^N → (2p)^(N±1). A discussion is also presented of specific limitations of the validity of KT in the HF method which arise in OD systems. The practical applicability of the theory is verified by comparing KT estimates of the ionization potentials I2s and I2p for the second row open-shell atoms Li to F with the relevant experimental data.

  19. A comparison of father-infant interaction between primary and non-primary care giving fathers.

    PubMed

    Lewis, S N; West, A F; Stein, A; Malmberg, L-E; Bethell, K; Barnes, J; Sylva, K; Leach, P

    2009-03-01

    This study examined the socio-demographic characteristics and attitudes of primary care giving fathers and non-primary care giving fathers and the quality of their interaction with their infants. Two groups of fathers of 11.9-month-old infants were compared - 25 primary care giving fathers (20 h per week or more of sole infant care) and 75 non-primary care giving fathers - with regard to socio-demographic characteristics, attitudinal differences and father-infant interaction during play and mealtimes. The quality of father-child interaction in relation to the total number of hours of primary care provided by fathers was also examined. Primary care giving fathers had lower occupational status and earned a smaller proportion of the family income but did not differ in educational level or attitudes compared with non-primary care giving fathers. There were no differences between the partners of the two groups of fathers on any variables, and their infants did not differ in temperament. Primary care giving fathers and their infants exhibited more positive emotional tone during play than non-primary care giving fathers, although fathers did not differ in responsivity. There were no differences between the groups during mealtimes. There was a positive association between total number of child care hours provided by all fathers and infant positive emotional tone. Primary and non-primary care giving fathers were similar in many respects, but primary care giving fathers and their infants were happier during play. This suggests a possible link between the involvement of fathers in the care of their children and their children's emotional state. The finding of a trend towards increased paternal happiness with increased hours of child care suggests that there may also be a gain for fathers who are more involved in the care of their infants. Further research is needed to determine whether these differences ultimately have an effect on children's development.

  20. Synthesis and characterization of nanocrystalline mesoporous zirconia using supercritical drying.

    PubMed

    Tyagi, Beena; Sidhpuria, Kalpesh; Shaik, Basha; Jasra, Raksh Vir

    2006-06-01

    Nano-crystalline zirconia aerogel was synthesized by the sol-gel technique with supercritical drying in n-propanol, at and above the supercritical temperature (235-280 degrees C) and pressure (48-52 bar) of n-propanol. Zirconia xerogel samples were also prepared by a conventional thermal drying method for comparison with the supercritically dried samples. The crystalline phase, crystallite size, surface area, pore volume, and pore size distribution were determined for all the samples in detail to understand the effect of the gel drying method on these properties. Supercritical drying of the zirconia gel was observed to give thermally stable, nano-crystalline, tetragonal zirconia aerogels with high specific surface area and porosity and a narrow, uniform pore size distribution as compared to thermally dried zirconia. With supercritical drying, the zirconia samples show the formation of only mesopores, whereas in thermally dried samples a substantial amount of micropores is observed along with mesopores. The samples prepared using supercritical drying yield nano-crystalline zirconia with a smaller crystallite size (4-6 nm) as compared to the larger crystallite size (13-20 nm) observed with thermally dried zirconia.

  1. Review of Methods for Buildings Energy Performance Modelling

    NASA Astrophysics Data System (ADS)

    Krstić, Hrvoje; Teni, Mihaela

    2017-10-01

    Research presented in this paper gives a brief review of methods used for modelling the energy performance of buildings, together with a comprehensive review of the advantages and disadvantages of the available methods and of the input parameters used. The European Directive EPBD obliges the implementation of an energy certification procedure, which gives insight into buildings' energy performance via the existing energy certificate databases. Some of the modelling methods mentioned in this paper were developed using data sets of buildings which have already undergone the energy certification procedure. Such a database is used in this paper; the majority of buildings in it have already undergone some form of partial retrofitting - replacement of windows or installation of thermal insulation - but still have poor energy performance. The case study presented in this paper utilizes an energy certificate database of residential units in Croatia (over 400 buildings) to determine, by means of statistical dependence tests, the relationship between buildings' energy performance and the variables in the database. Energy performance in the database is expressed as a building energy efficiency rating (from A+ to G), based on the specific annual energy need for heating under referential climatic data [kWh/(m2a)]. The independent variables in the database are the surface areas and volume of the conditioned part of the building, the building shape factor, energy used for heating, CO2 emission, building age and year of reconstruction. The research results give an insight into the possibilities of the methods used for modelling buildings' energy performance, together with an analysis of the dependence between energy performance as the dependent variable and the independent variables from the database. The presented results could be used to develop a new predictive model of building energy performance.

  2. Rotational Energy Transfer of N2 Determined Using a New Ab Initio Potential Energy Surface

    NASA Technical Reports Server (NTRS)

    Huo, Winifred M.; Stallcop, James R.; Partridge, Harry; Langhoff, Stephen R. (Technical Monitor)

    1997-01-01

    A new N2-N2 rigid-rotor surface has been determined using extensive ab initio quantum chemistry calculations together with recent experimental data for the second virial coefficient. Rotational energy transfer is studied using the new potential energy surface (PES), employing the close coupling method below 200 cm(exp -1) and the coupled state approximation above that. Compared with a previous calculation based on the PES of van der Avoird et al., it is found that the new PES generally gives larger cross sections for large (delta)J transitions, but for small (delta)J transitions the cross sections are either comparable or smaller. An attempt will be made to correlate the differences in the cross sections with the differences between the two PESs. The computed cross sections will also be compared with available experimental data.

  3. Modified Drop Tower Impact Tests for American Football Helmets.

    PubMed

    Rush, G Alston; Prabhu, R; Rush, Gus A; Williams, Lakiesha N; Horstemeyer, M F

    2017-02-19

    A modified National Operating Committee on Standards for Athletic Equipment (NOCSAE) test method for American football helmet drop impact test standards is presented that would provide better assessment of a helmet's on-field impact performance by including a faceguard on the helmet. In this study, a merger of faceguard and helmet test standards is proposed. The need for a more robust systematic approach to football helmet testing procedures is emphasized by comparing representative results of the Head Injury Criterion (HIC), Severity Index (SI), and peak acceleration values for different helmets at different helmet locations under modified NOCSAE standard drop tower tests. Essentially, these comparative drop test results revealed that the faceguard adds a stiffening kinematic constraint to the shell that lessens total energy absorption. The current NOCSAE standard test methods can be improved to represent on-field helmet hits by attaching the faceguards to helmets and by including two new helmet impact locations (Front Top and Front Top Boss). The reported football helmet test method gives a more accurate representation of a helmet's performance and its ability to mitigate on-field impacts while promoting safer football helmets.
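    The Severity Index used in such comparisons is the Gadd integral SI = ∫ a(t)^2.5 dt, with acceleration in g and time in seconds. A minimal sketch over a synthetic half-sine pulse follows; the pulse shape, peak and duration are invented for illustration, not NOCSAE test data:

```python
import math

# Synthetic half-sine impact pulse: 150 g peak over 10 ms (invented values).
dt = 1e-4                          # 0.1 ms sampling interval, s
duration = 0.010                   # pulse length, s
n = round(duration / dt) + 1
accel = [150.0 * math.sin(math.pi * i * dt / duration) for i in range(n)]

peak_g = max(accel)                # peak acceleration criterion

def severity_index(a, step):
    """Gadd Severity Index via trapezoidal integration of a(t)^2.5."""
    powered = [abs(g) ** 2.5 for g in a]
    return step * (sum(powered) - 0.5 * (powered[0] + powered[-1]))

si = severity_index(accel, dt)
```

    HIC is computed similarly but maximises a windowed average of the acceleration raised to the 2.5 power over all sub-intervals of the pulse, which is why it needs a search over window boundaries rather than a single integral.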

  4. [Comparative analysis of the effectiveness of different methods of allergen-specific immunotherapy of bronchial asthma].

    PubMed

    Besh, O M; Radchenko, O M

    2014-01-01

    The article presents a comparative analysis of the effectiveness of different methods of allergen-specific immunotherapy (ASIT) for mild and moderate persistent asthma, using a dedicated questionnaire on patients' quality of life. It is noted that traditional survey methods involving physical, laboratory and instrumental studies do not give a complete assessment of the patient, because they provide no information about psychological and social adjustment to the illness. A comprehensive description of the physical, psychological and social components of the patient's condition, by contrast, allows quality of life to be assessed. It was established that chronic asthma affects patients' quality of life, creating psychological, emotional and social problems: the disease limits patients' vitality and performance, leading to social exclusion and psychological discomfort. The studies showed that basic treatment combined with any of the ASIT delivery methods positively affects patients' quality of life; however, patients perceived sublingual allergen administration better, and adherence to that form of treatment was higher.

  5. Locally adaptive MR intensity models and MRF-based segmentation of multiple sclerosis lesions

    NASA Astrophysics Data System (ADS)

    Galimzianova, Alfiia; Lesjak, Žiga; Likar, Boštjan; Pernuš, Franjo; Špiclin, Žiga

    2015-03-01

    Neuroimaging biomarkers are an important paraclinical tool used to characterize a number of neurological diseases; however, their extraction requires accurate and reliable segmentation of normal and pathological brain structures. For MR images of healthy brains, intensity models of normal-appearing brain tissue (NABT) in combination with Markov random field (MRF) models are known to give reliable and smooth NABT segmentation. However, the presence of pathology, MR intensity bias and natural tissue-dependent intensity variability together represent difficult challenges for a reliable estimation of the NABT intensity model from MR images. In this paper, we propose a novel method for segmentation of normal and pathological structures in brain MR images of multiple sclerosis (MS) patients that is based on a locally-adaptive NABT model, a robust method for the estimation of model parameters, and a MRF-based segmentation framework. Experiments on multi-sequence brain MR images of 27 MS patients show that, compared to a whole-brain model and to the widely used Expectation-Maximization Segmentation (EMS) method, the locally-adaptive NABT model increases the accuracy of MS lesion segmentation.

  6. Determination of N epsilon-(carboxymethyl)lysine in foods and related systems.

    PubMed

    Ames, Jennifer M

    2008-04-01

    The sensitive and specific determination of advanced glycation end products (AGEs) is of considerable interest because these compounds have been associated with pro-oxidative and proinflammatory effects in vivo. AGEs form when carbonyl compounds, such as glucose and its oxidation products, glyoxal and methylglyoxal, react with the epsilon-amino group of lysine and the guanidino group of arginine to give structures including N epsilon-(carboxymethyl)lysine (CML), N epsilon-(carboxyethyl)lysine, and hydroimidazolones. CML is frequently used as a marker for AGEs in general. It exists in both free and peptide-bound forms. Analysis of CML involves its extraction from the food (including protein hydrolysis to release any peptide-bound adduct) and determination by immunochemical or instrumental means. Various factors must be considered at each step of the analysis. Extraction, hydrolysis, and sample clean-up are all less straightforward for food samples, compared to plasma and tissue. The immunochemical and instrumental methods all have their advantages and disadvantages, and no perfect method exists. Currently, different procedures are being used in different laboratories, and there is an urgent need to compare, improve, and validate methods.

  7. Chemical accuracy from quantum Monte Carlo for the benzene dimer.

    PubMed

    Azadi, Sam; Cohen, R E

    2015-09-14

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of -2.3(4) and -2.7(3) kcal/mol, respectively. The best estimate of the coupled-cluster theory through perturbative triplets/complete basis set limit is -2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.

  8. Comparative study of multimodal biometric recognition by fusion of iris and fingerprint.

    PubMed

    Benaliouche, Houda; Touahria, Mohamed

    2014-01-01

    This research investigates the comparative performance of three different approaches to multimodal recognition of combined iris and fingerprints: the classical sum rule, the weighted sum rule, and a fuzzy logic method. The scores from the iris and fingerprint biometric traits are fused at the matching-score and decision levels, after normalization of both scores using the min-max rule. Our experimental results suggest that the fuzzy logic method for combining the matching scores at the decision level is the best, followed by the classical weighted sum rule and the classical sum rule, in that order. The performance of each method is reported in terms of matching time, error rates, and accuracy after exhaustive tests on the public CASIA-Iris databases V1 and V2 and the FVC 2004 fingerprint database. Experimental results prior to fusion and after fusion are presented, followed by a comparison with related works in the current literature. Fusion by fuzzy logic decision mimics human reasoning in a soft and simple way and gives enhanced results.
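    The min-max normalisation and weighted-sum stages can be sketched as follows. The scores, weights and decision threshold are invented for illustration, not values from the CASIA or FVC experiments:

```python
# Min-max normalisation of each modality's raw match scores, followed
# by a weighted sum and a fixed accept/reject threshold (the simplest
# stand-in for the decision stage). All numbers are hypothetical.

def min_max(scores):
    """Rescale a list of scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

iris_raw = [120.0, 300.0, 210.0, 480.0]   # hypothetical iris matcher outputs
finger_raw = [0.20, 0.90, 0.40, 0.70]     # hypothetical fingerprint scores

iris_n, finger_n = min_max(iris_raw), min_max(finger_raw)

w_iris, w_finger = 0.6, 0.4               # assumed modality weights
fused = [w_iris * a + w_finger * b for a, b in zip(iris_n, finger_n)]
decisions = [s >= 0.5 for s in fused]     # accept (True) / reject (False)
```

    The classical sum rule is the special case of equal weights; the fuzzy-logic variant in the study replaces the fixed threshold with rule-based reasoning over the normalised scores.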

  9. Comparative Study of Multimodal Biometric Recognition by Fusion of Iris and Fingerprint

    PubMed Central

    Benaliouche, Houda; Touahria, Mohamed

    2014-01-01

    This research investigates the comparative performance of three different approaches to multimodal recognition of combined iris and fingerprints: the classical sum rule, the weighted sum rule, and a fuzzy logic method. The scores from the iris and fingerprint biometric traits are fused at the matching-score and decision levels, after normalization of both scores using the min-max rule. Our experimental results suggest that the fuzzy logic method for combining the matching scores at the decision level is the best, followed by the classical weighted sum rule and the classical sum rule, in that order. The performance of each method is reported in terms of matching time, error rates, and accuracy after exhaustive tests on the public CASIA-Iris databases V1 and V2 and the FVC 2004 fingerprint database. Experimental results prior to fusion and after fusion are presented, followed by a comparison with related works in the current literature. Fusion by fuzzy logic decision mimics human reasoning in a soft and simple way and gives enhanced results. PMID:24605065

  10. Quantizing the Toda lattice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siddharthan, R.; Shastry, B.S.

    In this work we study the quantum Toda lattice, developing the asymptotic Bethe ansatz method first used by Sutherland. Despite its known limitations we find, on comparing with Gutzwiller's exact method, that it works well in this particular problem and in fact becomes exact as ħ grows large. We calculate ground state and excitation energies for finite-sized lattices, identify excitations as phonons and solitons on the basis of their quantum numbers, and find their dispersions. These are similar to the classical dispersions for small ħ, and remain similar all the way up to ħ = 1, but then deviate substantially as we go farther into the quantum regime. On comparing the sound velocities for various ħ obtained thus with that predicted by conformal theory, we conclude that the Bethe ansatz gives the energies per particle accurate to O(1/N^2). On that assumption we can find correlation functions. Thus the Bethe ansatz method can be used to yield much more than the thermodynamic properties which previous authors have calculated. © 1997 The American Physical Society

  11. Model reduction for slow–fast stochastic systems with metastable behaviour

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruna, Maria, E-mail: bruna@maths.ox.ac.uk; Computational Science Laboratory, Microsoft Research, Cambridge CB1 2FB; Chapman, S. Jonathan

    2014-05-07

    The quasi-steady-state approximation (or stochastic averaging principle) is a useful tool in the study of multiscale stochastic systems, giving a practical method by which to reduce the number of degrees of freedom in a model. The method is extended here to slow–fast systems in which the fast variables exhibit metastable behaviour. The key parameter that determines the form of the reduced model is the ratio of the timescale for the switching of the fast variables between metastable states to the timescale for the evolution of the slow variables. The method is illustrated with two examples: one from biochemistry (a fast-species-mediated chemical switch coupled to a slower varying species), and one from ecology (a predator–prey system). Numerical simulations of each model reduction are compared with those of the full system.

  12. Comparison of molecular mechanics-Poisson-Boltzmann surface area (MM-PBSA) and molecular mechanics-three-dimensional reference interaction site model (MM-3D-RISM) method to calculate the binding free energy of protein-ligand complexes: Effect of metal ion and advance statistical test

    NASA Astrophysics Data System (ADS)

    Pandey, Preeti; Srivastava, Rakesh; Bandyopadhyay, Pradipta

    2018-03-01

    The relative performance of the MM-PBSA and MM-3D-RISM methods for estimating the binding free energy of protein-ligand complexes is investigated by applying them to three proteins (Dihydrofolate Reductase, Catechol-O-methyltransferase, and Stromelysin-1) differing in the number of metal ions they contain. Neither computational method could distinguish all the ligands based on their calculated binding free energies (as compared to experimental values). The difference between the two comes from both the polar and non-polar parts of solvation. For the charged-ligand case, MM-PBSA and MM-3D-RISM give qualitatively different results for the polar part of solvation.

  13. Application and evaluation of ISVR method in QuickBird image fusion

    NASA Astrophysics Data System (ADS)

    Cheng, Bo; Song, Xiaolu

    2014-05-01

    QuickBird satellite images are widely used in many fields, and applications have put forward high requirements for the integration of the spatial and spectral information of the imagery. A fusion method for high-resolution remote sensing images based on ISVR is presented in this study. The core principle of ISVR is to exploit radiometric calibration to remove the effects of the differing gains and errors of the satellite's sensors: the images are transformed from DN to radiance, and the multi-spectral image's energy is used to simulate the panchromatic band. A linear regression analysis is carried out in the simulation process to find a new synthetic panchromatic image which is highly linearly correlated with the original panchromatic image. To evaluate, test and compare the algorithm's results, this paper applies ISVR and two other fusion methods and gives a comparative study of the spatial and spectral information, taking the average gradient and the correlation coefficient as indicators. Experiments showed that this method can significantly improve the quality of the fused image, especially in preserving spectral information, maximizing the spectral information of the original multispectral images while maintaining abundant spatial information.
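    The two evaluation indicators named above, the correlation coefficient (spectral fidelity) and the average gradient (spatial detail), can be implemented directly. The 3x3 "images" below are invented; real evaluations would run over full bands:

```python
import math

def correlation(a, b):
    """Pearson correlation coefficient between two flattened bands."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a)
                           * sum((y - mb) ** 2 for y in b))

def average_gradient(img):
    """Mean of sqrt((dx^2 + dy^2) / 2) over forward differences."""
    rows, cols = len(img), len(img[0])
    total, count = 0.0, 0
    for i in range(rows - 1):
        for j in range(cols - 1):
            dx = img[i][j + 1] - img[i][j]
            dy = img[i + 1][j] - img[i][j]
            total += math.sqrt((dx * dx + dy * dy) / 2)
            count += 1
    return total / count

flat = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]    # no spatial detail
sharp = [[10, 40, 10], [40, 10, 40], [10, 40, 10]]   # strong local gradients
```

    A higher average gradient on the fused image indicates more injected spatial detail; a correlation coefficient near 1 against the original multispectral band indicates preserved spectral information.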

  14. A Kirchhoff approach to seismic modeling and prestack depth migration

    NASA Astrophysics Data System (ADS)

    Liu, Zhen-Yue

    1993-05-01

    The Kirchhoff integral provides a robust method for implementing seismic modeling and prestack depth migration, which can handle lateral velocity variation and turning waves. With a little extra computation cost, Kirchhoff-type migration can obtain multiple outputs that have the same phase but different amplitudes, compared with other migration methods. The ratio of these amplitudes is helpful in computing quantities such as the reflection angle. I develop a seismic modeling and prestack depth migration method based on the Kirchhoff integral that handles both laterally variant velocity and dips beyond 90 degrees. The method uses a finite-difference algorithm to calculate travel times and WKBJ amplitudes for the Kirchhoff integral. Compared to ray-tracing algorithms, the finite-difference algorithm gives an efficient implementation and single-valued quantities (first arrivals) on output. In my finite-difference algorithm, the upwind scheme is used to calculate travel times, and the Crank-Nicolson scheme is used to calculate amplitudes. Moreover, interpolation is applied to save computation cost. The modeling and migration algorithms require a smooth velocity function. I develop a velocity-smoothing technique based on damped least-squares to aid in obtaining a successful migration.
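    In one dimension, the finite-difference travel-time idea reduces to integrating slowness outward from the source along the grid. The sketch below is a toy version of that principle with an invented velocity model; the actual scheme in the abstract is a multidimensional upwind solver of the eikonal equation:

```python
# 1-D first-arrival travel times: t_{i+1} = t_i + dx / v_avg, marching
# away from a source at the first node. The velocity model is invented.
dx = 100.0                                            # grid spacing, m
velocity = [1500.0, 1800.0, 2200.0, 2600.0, 3000.0]   # m/s, varying with x

times = [0.0]                                         # source node
for v_left, v_right in zip(velocity, velocity[1:]):
    v_avg = 0.5 * (v_left + v_right)                  # simple cell average
    times.append(times[-1] + dx / v_avg)
```

    In higher dimensions the upwind property means each node's time is computed only from already-finalized, smaller neighbouring times, which is what guarantees the single-valued first-arrival field mentioned above.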

  15. COACH: profile-profile alignment of protein families using hidden Markov models.

    PubMed

    Edgar, Robert C; Sjölander, Kimmen

    2004-05-22

    Alignments of two multiple-sequence alignments, or of statistical models of such alignments (profiles), have important applications in computational biology. The increased amount of information in a profile versus a single sequence can lead to more accurate alignments and more sensitive homolog detection in database searches. Several profile-profile alignment methods have been proposed and shown to improve sensitivity and alignment quality compared with sequence-sequence methods (such as BLAST) and profile-sequence methods (e.g. PSI-BLAST). Here we present a new approach to profile-profile alignment that we call Comparison of Alignments by Constructing Hidden Markov Models (HMMs), or COACH. COACH aligns two multiple sequence alignments by constructing a profile HMM from one alignment and aligning the other to that HMM. We compare the alignment accuracy of COACH with two recently published methods: Yona and Levitt's prof_sim and Sadreyev and Grishin's COMPASS. On two sets of reference alignments selected from the FSSP database, we find that COACH is able, on average, to produce alignments giving the best coverage or the fewest errors, depending on the chosen parameter settings. COACH is freely available from www.drive5.com/lobster

  16. Utilizing field-aligned current profiles derived from Swarm to estimate the peak emission height of 630 nm auroral arcs: a comparison of methods and discussion of associated error estimates in the ASI data.

    NASA Astrophysics Data System (ADS)

    Gillies, D. M.; Knudsen, D. J.; Donovan, E.; Jackel, B. J.; Gillies, R.; Spanswick, E.

    2017-12-01

    We compare field-aligned currents (FACs) measured by the Swarm constellation of satellites with the location of red-line (630 nm) auroral arcs observed by all-sky imagers (ASIs) to derive a characteristic emission height for the optical emissions. In our 10 events we find that an altitude of 200 km applied to the ASI maps gives optimal agreement between the two observations. We also compare the new FAC method against the traditional triangulation method using pairs of all-sky imagers (ASIs), and against electron density profiles obtained from the Resolute Bay Incoherent Scatter Radar-Canadian radar (RISR-C), both of which are consistent with a characteristic emission height of 200 km. We also present the spatial error associated with georeferencing REdline Geospace Observatory (REGO) and THEMIS all-sky imagers (ASIs) and how it applies to altitude projections of the mapped image. Utilizing this error we validate the estimated altitude of redline aurora using two methods: triangulation between ASIs and field-aligned current profiles derived from magnetometers on-board the Swarm satellites.

  17. Study on Impact of Electric Vehicles Charging Models on Power Load

    NASA Astrophysics Data System (ADS)

    Cheng, Chen; Hui-mei, Yuan

    2017-05-01

    The rapid increase in the number of electric vehicles will increase the load on the power grid and may affect it adversely. This paper gives a detailed analysis of factors such as the scale of the electric-vehicle fleet, charging mode, initial charging time, initial state of charge, and charging power. A Monte Carlo simulation method is used to compare two charging modes, conventional charging and fast charging, and MATLAB is used to model and simulate the electric-vehicle charging load. The results show that, compared with the conventional charging mode, the fast charging mode meets the demand for rapid charging but also imposes a heavy load on the distribution network, which affects the reliability of the power grid.
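
    The Monte Carlo approach described in this record can be sketched as follows. The paper used MATLAB; the fleet size, charging powers, durations, and uniform plug-in-time distribution below are hypothetical placeholders, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

def daily_load(n_cars, power_kw, duration_h):
    """Aggregate one day's charging load (1-hour resolution) for a fleet
    with randomly drawn plug-in hours."""
    load = np.zeros(24)
    starts = rng.integers(0, 24, size=n_cars)   # random plug-in hour per car
    for s in starts:
        for h in range(duration_h):
            load[(s + h) % 24] += power_kw      # car draws power_kw while charging
    return load

# Hypothetical scenarios: slow 7 kW charging for 6 h vs. fast 50 kW for 1 h.
conventional = daily_load(n_cars=1000, power_kw=7.0, duration_h=6)
fast = daily_load(n_cars=1000, power_kw=50.0, duration_h=1)
print(conventional.max(), fast.max())
```

    Because fast charging concentrates each car's energy into a single hour, its load profile is spikier, illustrating the heavier instantaneous burden on the distribution network that the abstract reports.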

  18. Thermal transmission of camouflage nets revisited

    NASA Astrophysics Data System (ADS)

    Jersblad, Johan; Jacobs, Pieter

    2016-10-01

    In this article we derive, from first principles, the correct formula for the thermal transmission of a camouflage net, based on the setup described in the US standard for lightweight camouflage nets. Furthermore, we compare the results and implications with the use of an incorrect formula that has been seen in several recent tenders. It is shown that the incorrect formulation not only gives rise to large errors, but its result also depends on the surrounding room temperature, which cancels out in the correct derivation. The theoretical results are compared with laboratory measurements and agree with the laboratory results for the correct derivation. To summarize, we discuss the consequences for soldiers on the battlefield if incorrect standards and test methods are used in procurement processes.

  19. Multireference adaptive noise canceling applied to the EEG.

    PubMed

    James, C J; Hagan, M T; Jones, R D; Bones, P J; Carroll, G J

    1997-08-01

    The technique of multireference adaptive noise canceling (MRANC) is applied to enhance transient nonstationarities in the electroencephalogram (EEG), with the adaptation implemented by means of a multilayer-perceptron artificial neural network (ANN). The method was applied to recorded EEG segments and its performance on documented nonstationarities was recorded. The results show that the neural network (nonlinear) implementation gives an improvement in performance (i.e., in the signal-to-noise ratio (SNR) of the nonstationarities) compared to a linear implementation of MRANC, although in both cases an improvement in SNR was obtained. The advantage of the spatial filtering aspect of MRANC is highlighted when its performance is compared to that of inverse autoregressive filtering of the EEG, a purely temporal filter.
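
    The adaptive-noise-cancelling idea behind MRANC can be illustrated with a minimal linear, single-reference LMS canceller; the paper itself used multiple reference channels and a multilayer-perceptron ANN, and the signal, noise coupling, and filter parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
signal = np.zeros(n)
signal[2000:2050] = 1.0                       # transient "nonstationarity"
noise = rng.standard_normal(n)                # reference channel
primary = signal + 0.8 * noise                # primary channel: signal + coupled noise

mu, order = 0.01, 4
w = np.zeros(order)
out = np.zeros(n)
for i in range(order - 1, n):
    x = noise[i - order + 1:i + 1][::-1]      # current and past reference samples
    e = primary[i] - w @ x                    # error signal = cleaned output
    w += 2 * mu * e * x                       # LMS weight update
    out[i] = e

# After convergence, w[0] should approach the 0.8 coupling coefficient,
# the background noise in `out` is attenuated, and the transient survives.
print(np.round(w, 2))
```

    The canceller removes only what correlates with the reference, so the uncorrelated transient passes through enhanced relative to the background, which is the behaviour MRANC exploits.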

  20. Comparative study of Wenner and Schlumberger electrical resistivity method for groundwater investigation: a case study from Dhule district (M.S.), India

    NASA Astrophysics Data System (ADS)

    Vasantrao, Baride Mukund; Bhaskarrao, Patil Jitendra; Mukund, Baride Aarti; Baburao, Golekar Rushikesh; Narayan, Patil Sanjaykumar

    2017-12-01

    The area chosen for the present study is Dhule district, which belongs to the drought-prone area of Maharashtra State, India. Dhule district suffers from a water problem, and therefore no extra water is available to supply agricultural and industrial growth. To understand the lithological characteristics in terms of hydro-geological conditions, it is necessary to understand the geology of the area, and it is an established fact that geophysical methods give better information about subsurface geology. Geophysical electrical surveys with four-electrode configurations, i.e., the Wenner and Schlumberger methods, were carried out at the same selected sites to observe their similarity and to compare the two methods in terms of use and handling in the field. A total of 54 VES soundings were carried out across the Dhule district, representing different lithological units. The VES curves were interpreted using the inverse slope method for the Wenner configuration, while IPI2win software and curve-matching techniques were used for the Schlumberger configuration. Region-wise lithologs were prepared from the resistivities and thicknesses obtained with the Wenner method, and region-wise curves were prepared from the resistivity layers for the Schlumberger method. Comparing the two methods, it is observed that each has its merits and demerits. From the field point of view, the Wenner inverse slope method is handier for calculation and interpretation but requires a lateral spread, which is a constraint; the Schlumberger method is easy to apply but unwieldy to interpret. The work amply proves the applicability of geophysical techniques in the water resource evaluation procedure, and the technique is found to be suitable for areas with a similar geological setup elsewhere.
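
    The two arrays differ only in their geometric factors. The standard textbook apparent-resistivity formulas can be sketched as follows; the spacings and resistivity used are illustrative values, not data from this survey.

```python
import math

def wenner_rho_a(a, resistance):
    """Wenner array: electrodes equally spaced by a (m); resistance = dV/I (ohm)."""
    return 2.0 * math.pi * a * resistance

def schlumberger_rho_a(L, l, resistance):
    """Schlumberger array: L = AB/2 (current electrodes), l = MN/2 (potential
    electrodes), with L >> l."""
    return math.pi * (L**2 - l**2) / (2.0 * l) * resistance

# Sanity check: over a homogeneous half-space of 100 ohm-m, both arrays must
# recover the true resistivity from their respective geometric factors.
rho = 100.0
a = 30.0
R_w = rho / (2.0 * math.pi * a)                      # resistance a Wenner array reads
L_s, l_s = 45.0, 5.0
R_s = rho * 2.0 * l_s / (math.pi * (L_s**2 - l_s**2))
print(wenner_rho_a(a, R_w), schlumberger_rho_a(L_s, l_s, R_s))
```

    The Wenner array needs the full lateral spread 3a for each measurement, which is the field constraint the abstract mentions, while the Schlumberger array expands only the current electrodes between readings.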

  1. Using Parameter Constraints to Choose State Structures in Cost-Effectiveness Modelling.

    PubMed

    Thom, Howard; Jackson, Chris; Welton, Nicky; Sharples, Linda

    2017-09-01

    This article addresses the choice of state structure in a cost-effectiveness multi-state model. Key model outputs, such as treatment recommendations and prioritisation of future research, may be sensitive to state structure choice. For example, it may be uncertain whether to consider similar disease severities or similar clinical events as the same state or as separate states. Standard statistical methods for comparing models require a common reference dataset but merging states in a model aggregates the data, rendering these methods invalid. We propose a method that involves re-expressing a model with merged states as a model on the larger state space in which particular transition probabilities, costs and utilities are constrained to be equal between states. This produces a model that gives identical estimates of cost effectiveness to the model with merged states, while leaving the data unchanged. The comparison of state structures can be achieved by comparing maximised likelihoods or information criteria between constrained and unconstrained models. We can thus test whether the costs and/or health consequences for a patient in two states are the same, and hence if the states can be merged. We note that different structures can be used for rates, costs and utilities, as appropriate. We illustrate our method with applications to two recent models evaluating the cost effectiveness of prescribing anti-depressant medications by depression severity and the cost effectiveness of diagnostic tests for coronary artery disease. State structures in cost-effectiveness models can be compared using standard methods to compare constrained and unconstrained models.

  2. A quantum chemistry study of Qinghaosu

    NASA Astrophysics Data System (ADS)

    Gu, Jian-De; Chen, Kai-Xian; Jiang, Hua-Liang; Zhu, Wei-Liang; Chen, Jian-Zhong; Ji, Ru-Yun

    1997-10-01

    The powerful anti-malarial drug, Qinghaosu (Artemisinin), has been studied using ab initio methods. The DFT B3LYP method with the 6-31G* basis set gives an excellent geometry compared to experiment, especially for the OO bond length and the 1,2,4-trioxane ring subsystem. The R(OO) bond length predicted at this level is 1.460 Å, only 0.018 Å shorter than the experimental measurement. The vibrational analysis shows that the OO stretching mode is combined with the OC vibration mode, having the character of an OOC entity. The OO vibrational band at 722 cm⁻¹ suggested in the experimental studies has been assigned as 1,2,4-trioxane ring breathing.

  3. Chaotic map clustering algorithm for EEG analysis

    NASA Astrophysics Data System (ADS)

    Bellotti, R.; De Carlo, F.; Stramaglia, S.

    2004-03-01

    The non-parametric chaotic map clustering algorithm has been applied to the analysis of electroencephalographic signals in order to recognize Huntington's disease, one of the most dangerous pathologies of the central nervous system. The performance of the method has been compared with those obtained through parametric algorithms, such as K-means and deterministic annealing, and a supervised multi-layer perceptron. While supervised neural networks need a training phase performed by means of data tagged by the genetic test, and the parametric methods require a prior choice of the number of classes to find, chaotic map clustering gives natural evidence of the pathological class without any training or supervision, thus providing a new, efficient methodology for the recognition of patterns affected by Huntington's disease.

  4. Production of zinc oxide nanowire powder with precisely defined morphology

    NASA Astrophysics Data System (ADS)

    Mičová, Júlia; Remeš, Zdeněk; Chan, Yu-Ying

    2017-12-01

    Interest in zinc oxide is increasing thanks to its unique chemical and physical properties. Our attention has focused on the preparation of a powder of 1D ZnO nanowires with precisely defined morphology, including characterization of size (length and diameter) and shape by scanning electron microscopy (SEM). We have compared the SEM results with the dynamic light scattering (DLS) technique and found that the SEM method gives more accurate results. We propose a transformation process from ZnO nanowires on substrates to ZnO nanowire powder by ultrasonic peeling into a colloid followed by lyophilization. This method of mass production of ZnO nanowire powder has several advantages: it is simple, cost-effective, large-scale, and environmentally friendly.

  5. An evaluation of contractor projected and actual costs

    NASA Technical Reports Server (NTRS)

    Kwiatkowski, K. A.; Buffalano, C.

    1974-01-01

    GSFC contractors with cost-plus contracts provide cost estimates for each of the next four quarters on a quarterly basis. Actual expenditures over a two-year period were compared to the estimates, and the data were sorted in different ways to answer several questions and quantify observations, such as: how much does the accuracy of estimates degrade as they are made further into the future? Are estimates for small dollar amounts more accurate than those for large amounts? Other government agencies and private companies with cost-plus contracts may be interested in this analysis as a potential method of contract management for their organizations; it presents the methods one organization is beginning to use to control costs.

  6. Credit allocation for research institutes

    NASA Astrophysics Data System (ADS)

    Wang, J.-P.; Guo, Q.; Yang, K.; Han, J.-T.; Liu, J.-G.

    2017-05-01

    Assessing the research performance of multiple institutes is challenging. Considering that it is unfair to divide credit equally among institutes that appear in different orders on a paper, we present a credit allocation method (CAM) with a weighted order coefficient for multiple institutes. The results for the APS dataset with 18987 institutes show that the top-ranked institutes obtained by the CAM method correspond to well-known universities or research labs with high reputations in physics. Moreover, we evaluate the performance of the CAM method when citation links are added or rewired randomly, quantified by Kendall's tau and the Jaccard index. The experimental results indicate that the CAM method is more robust than the total number of citations (TC) method and Shen's method. Finally, we give the top 20 Chinese universities in physics obtained by the CAM method. The method is valid for any branch of science, not just physics, and it provides universities and policy makers with an effective tool to quantify and balance the academic performance of universities.
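
    The two robustness measures named in this record can be illustrated on toy data: compare an institute ranking before and after perturbing the citation links. The institute names and rankings below are invented, not results from the APS dataset.

```python
from itertools import combinations

def kendall_tau(r1, r2):
    """Kendall rank correlation of two rankings, restricted to common items."""
    common = [u for u in r1 if u in r2]
    pairs = list(combinations(common, 2))
    concordant = sum(
        1 for a, b in pairs
        if (r1.index(a) - r1.index(b)) * (r2.index(a) - r2.index(b)) > 0)
    return (2 * concordant - len(pairs)) / len(pairs)

def jaccard(r1, r2):
    """Jaccard index of two ranked lists treated as sets."""
    return len(set(r1) & set(r2)) / len(set(r1) | set(r2))

before = ["Inst A", "Inst B", "Inst C", "Inst D", "Inst E"]
after_rewire = ["Inst A", "Inst C", "Inst B", "Inst D", "Inst F"]
print(kendall_tau(before, after_rewire), jaccard(before, after_rewire))
```

    A method is judged more robust when both measures stay close to 1 after random perturbation of the citation network.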

  7. Ordinary differential equations.

    PubMed

    Lebl, Jiří

    2013-01-01

    In this chapter we provide an overview of the basic theory of ordinary differential equations (ODE). We give the basics of analytical methods for their solutions and also review numerical methods. The chapter should serve as a primer for the basic application of ODEs and systems of ODEs in practice. As an example, we work out the equations arising in Michaelis-Menten kinetics and give a short introduction to using Matlab for their numerical solution.
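
    The chapter's Michaelis-Menten example can be reproduced numerically; the sketch below uses a hand-written classical fourth-order Runge-Kutta step in Python rather than the chapter's Matlab, and the rate constants and initial substrate concentration are illustrative assumptions.

```python
# Solve the Michaelis-Menten substrate equation dS/dt = -Vmax * S / (Km + S).
def rk4(f, y0, t0, t1, n):
    """Classical fourth-order Runge-Kutta with n fixed steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

Vmax, Km = 1.0, 0.5                         # hypothetical kinetic constants
f = lambda t, s: -Vmax * s / (Km + s)       # substrate depletion rate
s_end = rk4(f, y0=2.0, t0=0.0, t1=1.0, n=1000)
print(round(s_end, 4))
```

    The result can be checked against the implicit analytical solution Km·ln(S0/S) + (S0 − S) = Vmax·t, which the numerical value satisfies to high accuracy.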

  8. The effect of shear wall location in resisting earthquake

    NASA Astrophysics Data System (ADS)

    Tarigan, J.; Manggala, J.; Sitorus, T.

    2018-02-01

    The shear wall is one of the most commonly used lateral load-resisting structures. A shear wall gives high stiffness to a structure, keeping it stable, and applying shear walls can effectively reduce the displacement and story drift of the structure. This reduces the damage caused by lateral loads such as earthquakes. Earlier studies showed that a shear wall performs differently depending on its position in the structure. In this paper, seismic analysis has been performed using the response spectrum method for four structural models: the open frame, a shear wall at the core placed symmetrically, a shear wall at the periphery placed symmetrically, and a shear wall at the periphery placed asymmetrically. The results are evaluated by comparing displacement and story drift. Based on the analysis, placing the shear wall at the core of the structure symmetrically gives the best performance in reducing displacement and story drift: it reduces the displacement by up to 61.16% (X-dir) and 70.60% (Y-dir). Placing the shear wall at the periphery symmetrically reduces the displacement by up to 53.85% (X-dir) and 47.87% (Y-dir), while placing it at the periphery asymmetrically reduces the displacement by up to 59.42% (X-dir) and 66.99% (Y-dir).

  9. THE TOXICITY OF RUBBERS AND PLASTICS USED IN TRANSFUSION-GIVING SETS

    PubMed Central

    Cruickshank, C. N. D.; Hooper, Caroline; Lewis, H. B. M.; MacDougall, J. D. B.

    1960-01-01

    The toxicity of different rubbers and plastics used in transfusion-giving sets has been investigated by examining their effects on (a) cultures of chick embryo tissues, (b) the oxygen uptake of guinea-pig skin tissue cultures, and (c) the growth of Str. pyogenes. The results of the laboratory tests have been compared with the incidence of thrombophlebitis after prolonged transfusions through the various materials. It was found that where the materials inhibited the growth of Str. pyogenes they were also toxic to tissue cultures, but that some materials which were toxic to tissue cultures did not inhibit bacterial growth. The assessments of the relative toxicity of the materials tested by the two tissue culture methods were in agreement. The skin respiration studies, however, gave more information on the early effects of the toxic materials. The relative toxicity of the materials as revealed by these tests could be correlated with the differences in the incidence of thrombophlebitis following intravenous infusions administered through giving-sets assembled with the materials tested. It is suggested therefore that the toxicity revealed by these tests is of clinical importance, and that tissue culture toxicity tests will prove to be of value in selecting rubbers and plastics for clinical purposes. PMID:13813084

  10. Optimization of Ex Vivo Murine Bone Marrow Derived Immature Dendritic Cells: A Comparative Analysis of Flask Culture Method and Mouse CD11c Positive Selection Kit Method

    PubMed Central

    Salwe, Sukeshani; Kothari, Sweta; Chowdhary, Abhay; Deshmukh, Ranjana A.

    2018-01-01

    Culturing bone marrow (BM) cells for 12–14 days with various growth factors is a widely used method for generating dendritic cells (DCs) from a suspended cell population. Here we compared the flask culture method and the commercially available CD11c Positive Selection kit method. Immature BMDCs were generated from both adherent and suspended cell populations in decreasing concentrations of recombinant murine granulocyte-macrophage colony-stimulating factor (rmGM-CSF) in non-treated tissue culture flasks. The expression of CD11c, MHCII, CD40, and CD86 was measured by flow cytometry. We found a significant difference (P < 0.05) between the two methods in the adherent cell population, but no significant difference was observed between the suspended cell populations with respect to CD11c+ count. However, the CD11c+ yield was significantly higher in both adherent and suspended cell populations with the culture method, whereas the kit method gave more CD11c+ cells from the suspended cell population only. With both methods, immature DCs expressed moderate levels of MHC class II molecules and low levels of CD40 and CD86. Our findings suggest that the widely used culture method gives the best results in terms of yield, viability, and purity of BMDCs from both adherent and suspended cell populations, whereas the kit method works well for the suspended cell population. PMID:29682352

  11. Diagnostic utility of the cell block method versus the conventional smear study in pleural fluid cytology

    PubMed Central

    Shivakumarswamy, Udasimath; Arakeri, Surekha U; Karigowdar, Mahesh H; Yelikar, BR

    2012-01-01

    Background: The cytological examination of serous effusions is well accepted, and a positive diagnosis is often considered definitive. It helps in the staging, prognosis, and management of patients with malignancies and also gives information about various inflammatory and non-inflammatory lesions. Diagnostic problems arise in everyday practice in differentiating reactive atypical mesothelial cells from malignant cells by the routine conventional smear (CS) method. Aims: To compare the morphological features seen with the CS method with those of the cell block (CB) method, and to assess the utility and sensitivity of the CB method in the cytodiagnosis of pleural effusions. Materials and Methods: The study was conducted in the cytology section of the Department of Pathology. Sixty pleural fluid samples were subjected to diagnostic evaluation over a period of 20 months. Along with the conventional smears, cell blocks were prepared using 10% alcohol–formalin as a fixative agent. Statistical analysis with the z test was performed to compare cellularity between the CS and CB methods, and McNemar's χ² test was used to assess the additional yield for malignancy by the CB method. Results: Cellularity and the additional yield for malignancy were 15% higher with the CB method. Conclusions: The CB method provides higher cellularity, better architectural patterns and morphological features, and an additional yield of malignant cells, thereby increasing the sensitivity of cytodiagnosis compared with the CS method. PMID:22438610
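
    McNemar's test, as used in this record, compares paired diagnostic results through the discordant counts only. A minimal sketch with made-up counts (b and c below are hypothetical, not the study's data):

```python
# McNemar's chi-square statistic with continuity correction, where
# b = cases positive only by the cell-block method and
# c = cases positive only by the conventional smear.
def mcnemar_chi2(b, c):
    return (abs(b - c) - 1) ** 2 / (b + c)

chi2 = mcnemar_chi2(b=9, c=1)
# Compare against the chi-square critical value with 1 df at alpha = 0.05.
significant = chi2 > 3.841
print(round(chi2, 2), significant)
```

    Only the discordant pairs enter the statistic, which is why the test suits "additional yield" questions where the two methods agree on most samples.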

  12. Optimization of Ex Vivo Murine Bone Marrow Derived Immature Dendritic Cells: A Comparative Analysis of Flask Culture Method and Mouse CD11c Positive Selection Kit Method.

    PubMed

    Gosavi, Rahul Ashok; Salwe, Sukeshani; Mukherjee, Sandeepan; Dahake, Ritwik; Kothari, Sweta; Patel, Vainav; Chowdhary, Abhay; Deshmukh, Ranjana A

    2018-01-01

    Culturing bone marrow (BM) cells for 12-14 days with various growth factors is a widely used method for generating dendritic cells (DCs) from a suspended cell population. Here we compared the flask culture method and the commercially available CD11c Positive Selection kit method. Immature BMDCs were generated from both adherent and suspended cell populations in decreasing concentrations of recombinant murine granulocyte-macrophage colony-stimulating factor (rmGM-CSF) in non-treated tissue culture flasks. The expression of CD11c, MHCII, CD40, and CD86 was measured by flow cytometry. We found a significant difference (P < 0.05) between the two methods in the adherent cell population, but no significant difference was observed between the suspended cell populations with respect to CD11c+ count. However, the CD11c+ yield was significantly higher in both adherent and suspended cell populations with the culture method, whereas the kit method gave more CD11c+ cells from the suspended cell population only. With both methods, immature DCs expressed moderate levels of MHC class II molecules and low levels of CD40 and CD86. Our findings suggest that the widely used culture method gives the best results in terms of yield, viability, and purity of BMDCs from both adherent and suspended cell populations, whereas the kit method works well for the suspended cell population.

  13. Mathematical analysis of compressive/tensile molecular and nuclear structures

    NASA Astrophysics Data System (ADS)

    Wang, Dayu

    Mathematical analysis in chemistry is a fascinating and critical tool to explain experimental observations. In this dissertation, mathematical methods to present chemical bonding and other structures for many-particle systems are discussed at different levels (molecular, atomic, and nuclear). First, the tetrahedral geometry of single, double, or triple carbon-carbon bonds gives an unsatisfying demonstration of bond lengths, compared to experimental trends. To correct this, Platonic solids and Archimedean solids were evaluated as atoms in covalent carbon or nitrogen bond systems in order to find the best solids for geometric fitting. Pentagonal solids, e.g. the dodecahedron and icosidodecahedron, give the best fit with experimental bond lengths; an ideal pyramidal solid which models covalent bonds was also generated. Second, the macroscopic compression/tension architectural approach was applied to forces at the molecular level, considering atomic interactions as compressive (repulsive) and tensile (attractive) forces. Two particle interactions were considered, followed by a model of the dihydrogen molecule (H2; two protons and two electrons). Dihydrogen was evaluated as two different types of compression/tension structures: a coaxial spring model and a ring model. Using similar methods, covalent diatomic molecules (made up of C, N, O, or F) were evaluated. Finally, the compression/tension model was extended to the nuclear level, based on the observation that nuclei with certain numbers of protons/neutrons (magic numbers) have extra stability compared to other nucleon ratios. A hollow spherical model was developed that combines elements of the classic nuclear shell model and liquid drop model. Nuclear structure and the trend of the "island of stability" for the current and extended periodic table were studied.

  14. Design and Computational/Experimental Analysis of Low Sonic Boom Configurations

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Baker, Timothy J.; Hicks, Raymond M.

    1999-01-01

    Recent studies have shown that inviscid CFD codes combined with a planar extrapolation method give accurate sonic boom pressure signatures at distances greater than one body length from supersonic configurations if either adapted grids swept at the approximate Mach angle or very dense non-adapted grids are used. The validation of CFD for computing sonic boom pressure signatures provided the confidence needed to undertake the design of new supersonic transport configurations with low sonic boom characteristics. An aircraft synthesis code in combination with CFD and an extrapolation method were used to close the design. The principal configuration of this study is designated LBWT (Low Boom Wing Tail) and has a highly swept cranked arrow wing with conventional tails, and was designed to accommodate either 3 or 4 engines. The complete configuration including nacelles and boundary layer diverters was evaluated using the AIRPLANE code. This computer program solves the Euler equations on an unstructured tetrahedral mesh. Computations and wind tunnel data for the LBWT and two other low boom configurations designed at NASA Ames Research Center are presented. The two additional configurations are included to provide a basis for comparing the performance and sonic boom level of the LBWT with contemporary low boom designs and to give a broader experiment/CFD correlation study. The computational pressure signatures for the three configurations are contrasted with on-ground-track near-field experimental data from the NASA Ames 9x7 Foot Supersonic Wind Tunnel. Computed pressure signatures for the LBWT are also compared with experiment at approximately 15 degrees off ground track.

  15. Analysis of fracture in sheet bending and roll forming

    NASA Astrophysics Data System (ADS)

    Deole, Aditya D.; Barnett, Matthew; Weiss, Matthias

    2018-05-01

    The bending limit or minimum bending radius of sheet metal is conventionally measured in a wiping (swing-arm) or vee bend test and reported as the minimum radius of the tool over which the sheet can be bent without fracture. Frequently the material kinks while bending, so that the actual inner bend radius of the sheet is smaller than the tool radius, giving rise to inaccuracy in these methods. Previous studies have shown that conventional bend tests may under-estimate formability in bending-dominated processes such as roll forming. A new test procedure is proposed here to improve the understanding and measurement of fracture in bending and roll forming. In this study, conventional wiping and vee bend tests were performed on martensitic steel to determine the minimum bend radius. In addition, the vee bend test was performed in an Erichsen sheet metal tester equipped with the GOM Aramis system to enable strain measurement on the outer surface during bending. The strain measured before the onset of fracture was then used to determine the minimum bend radius. To compare this result with a technological process, a vee channel was roll formed and in-situ strain measurement was carried out with the Vialux Autogrid system. The strain distribution at fracture in the roll forming process is compared with that predicted by the conventional bending tests and by the improved procedure. It is shown that for this forming operation and material, the improved procedure gives a more accurate prediction of fracture.

  16. Comparative analysis of the application of different Low Power Wide Area Network technologies in power grid

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Sui, Hong; Liao, Xing; Li, Junhao

    2018-03-01

    Low Power Wide Area Network (LPWAN) technologies have developed rapidly in recent years, but the principles for applying the different LPWAN technologies in the power grid are still not clear. This paper gives a comparative analysis of two mainstream LPWAN technologies, NB-IoT and LoRa, and offers application suggestions for them that can guide the planning and construction of LPWAN in the power grid.

  17. The comparison of road safety survey answers between web-panel and face-to-face; Dutch results of SARTRE-4 survey.

    PubMed

    Goldenbeld, C; de Craen, S

    2013-09-01

    In the Netherlands, a comparison of an online and a face-to-face sample of car drivers was made to study differences on a number of selected questions from the SARTRE-4 road safety survey. Contrary to expectations, there was no indication that online respondents were more likely to come from higher educated or more privileged social groups. Confirming earlier research, the results indicated that online respondents were less inclined to give socially desirable answers and were less inclined to use more extreme ratings in their opinions about measures. Contrary to expectations, face-to-face respondents did not tend to give more positive answers in judgment of road safety measures. Weighting to make samples comparable on gender, age, and education had almost no effect on outcomes. The implications for a transition from face-to-face survey to online panel method are discussed. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.

  18. Biomimetic self-templating optical structures fabricated by genetically engineered M13 bacteriophage.

    PubMed

    Kim, Won-Geun; Song, Hyerin; Kim, Chuntae; Moon, Jong-Sik; Kim, Kyujung; Lee, Seung-Wuk; Oh, Jin-Woo

    2016-11-15

    Here, we describe a highly sensitive and selective surface plasmon resonance (SPR) sensor system that utilizes the self-assembly of genetically engineered M13 bacteriophage. About 2700 genetically expressed peptide copies give superior selectivity and sensitivity to the M13 phage-based SPR sensor. Furthermore, the sensitivity of the sensor was enhanced by aligning the receptor matrix in a specific direction. Incorporation of a specific binding peptide (His-Pro-Gln: HPQ) gives the M13 bacteriophage high selectivity for streptavidin. Our M13 phage-based SPR sensor takes advantage of the simplicity of self-assembly compared with relatively complex photolithography techniques or chemical conjugations. Additionally, the designed structure composed of functionalized M13 bacteriophage simultaneously improves the sensitivity and selectivity of the SPR sensor. By taking advantage of genetic engineering and self-assembly, we propose a simple method for fabricating a novel M13 phage-based SPR sensor system with high sensitivity and selectivity. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Scaling and modeling of turbulent suspension flows

    NASA Technical Reports Server (NTRS)

    Chen, C. P.

    1989-01-01

    Scaling factors determining various aspects of particle-fluid interactions and the development of physical models to predict gas-solid turbulent suspension flow fields are discussed based on a two-fluid, continuum formulation. The modes of particle-fluid interaction are discussed based on the length and time scale ratios, which depend on the properties of the particles and the characteristics of the flow turbulence. For particle sizes smaller than or comparable to the Kolmogorov length scale and concentrations low enough to neglect direct particle-particle interactions, scaling rules can be established in various parameter ranges. The various particle-fluid interactions give rise to additional mechanisms which affect the fluid mechanics of the conveying gas phase. These extra mechanisms are incorporated into a turbulence modeling method based on the scaling rules. A multiple-scale two-phase turbulence model is developed, which gives reasonable predictions for dilute suspension flows. Much work still needs to be done to account for poly-dispersed effects and the extension to dense suspension flows.

  20. [Methods of protein gradient determination for diagnostic use in the clinical laboratory].

    PubMed

    Schulz, D; Rothenhöfer, C

    1982-02-25

    Computed quotients of the concentrations of individual proteins in serum and in cerebrospinal fluid, recorded semilogarithmically against the hydrodynamic radius of the molecules, may be connected to give straight lines whose positions are believed to estimate the actual function of the blood-brain barrier (Felgenhauer et al. 1974; 1976). Examinations of different samples of blood and cerebrospinal fluid demonstrate that the described method and the simple measurement of total protein in cerebrospinal fluid are equally powerful for estimating the function of the blood-brain barrier. However, the method introduced by Felgenhauer et al. makes it possible to demonstrate immunoglobulins that have not reached the cerebrospinal fluid from the blood stream, and thus to diagnose a local humoral immune response within the central nervous system. In this way the method of Felgenhauer et al. offers advantages for the diagnosis of neurological diseases.

  1. Performance Investigation of A Mix Wind Turbine Using A Clutch Mechanism At Low Wind Speed Condition

    NASA Astrophysics Data System (ADS)

    Jamanun, M. J.; Misaran, M. S.; Rahman, M.; Muzammil, W. K.

    2017-07-01

    Wind energy is one method of generating energy from sustainable resources. The technology has gained prominence because it produces no products harmful to society. Two fundamental types of wind turbine are in general use today: the horizontal axis wind turbine (HAWT) and the vertical axis wind turbine (VAWT). VAWT technology is often preferred over HAWT because it gives better efficiency and cost effectiveness as a whole. However, VAWTs have distinct disadvantages compared with HAWTs: poor self-start ability and low efficiency at low wind speeds. Different solutions have been proposed to address these issues, including custom-designed blades, variable angle-of-attack mechanisms and mixed wind turbines. A new type of clutch device, barely audible in operation compared with a ratchet clutch, was successfully developed at UMS for a mixed Savonius-Darrieus wind turbine configuration. The clutch interconnects the Savonius and Darrieus rotors, allowing the turbine to self-start at low wind speed, unlike a standalone Darrieus turbine. The Savonius height was varied over three sizes in order to understand the effect of the Savonius rotor on the performance of the mixed turbine. The experimental results show that the height of the Savonius rotor affects the RPM of the turbine. The swept area (SA), aspect ratio (AR) and tip speed ratio (TSR) are also calculated in this paper. The highest RPM recorded in this study was 90 RPM for the 0.22-meter Savonius rotor at 2.75 m/s. The 0.22-meter Savonius rotor also gave the highest TSR at each wind speed tested: 1.03 at 0.75 m/s, 0.76 at 1.75 m/s and 0.55 at 2.75 m/s.
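    The tip speed ratio reported above is simply blade tip speed divided by free-stream wind speed; a minimal sketch (in Python; the function and its rotor-radius argument are ours, not taken from the paper):

```python
import math

def tip_speed_ratio(rpm, radius_m, wind_ms):
    """TSR = blade tip speed / free-stream wind speed."""
    omega = rpm * 2.0 * math.pi / 60.0   # rotor angular speed, rad/s
    return omega * radius_m / wind_ms
```

    For example, combining the reported 90 RPM at 2.75 m/s with the reported TSR of 0.55 implies an effective rotor radius of roughly 0.16 m; that radius is our back-calculation, not a dimension given in the abstract.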

  2. Robust covariance estimation of galaxy-galaxy weak lensing: validation and limitation of jackknife covariance

    NASA Astrophysics Data System (ADS)

    Shirasaki, Masato; Takada, Masahiro; Miyatake, Hironao; Takahashi, Ryuichi; Hamana, Takashi; Nishimichi, Takahiro; Murata, Ryoma

    2017-09-01

    We develop a method to simulate galaxy-galaxy weak lensing by utilizing all-sky, light-cone simulations and their inherent halo catalogues. Using the mock catalogue to study the error covariance matrix of galaxy-galaxy weak lensing, we compare the full covariance with the 'jackknife' (JK) covariance, the method often used in the literature that estimates the covariance from resamples of the data itself. We show that there is variation in the JK covariance across realizations of the mock lensing measurements, while the JK covariance averaged over mocks gives a reasonably accurate estimate of the true covariance up to separations comparable with the size of a JK subregion. The scatter in JK covariances is found to be ∼10 per cent after we subtract the lensing measurement around random points. However, the JK method tends to underestimate the covariance at larger separations, increasingly so for a survey with a higher number density of source galaxies. We apply our method to the Sloan Digital Sky Survey (SDSS) data, and show that the 48 mock SDSS catalogues nicely reproduce the signals and the JK covariance measured from the real data. We then argue that the use of the accurate covariance, compared to the JK covariance, allows us to use the lensing signals at large scales beyond the size of a JK subregion, which contain cleaner cosmological information in the linear regime.
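    For reference, the delete-one JK covariance estimator used in such comparisons can be sketched in a few lines of Python/NumPy (the input array is a stand-in for the lensing signal re-measured with each JK subregion excluded; the names are ours):

```python
import numpy as np

def jackknife_covariance(delete_one):
    """Delete-one jackknife covariance.

    delete_one : (N, D) array whose i-th row is the signal measured
    with JK subregion i excluded (N subregions, D separation bins).
    """
    n = delete_one.shape[0]
    diff = delete_one - delete_one.mean(axis=0)
    # the (n - 1)/n prefactor accounts for the strong correlation
    # between delete-one resamples
    return (n - 1) / n * diff.T @ diff
```

    In the scalar case this reduces to the usual jackknife variance of the mean.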

  3. Protocol: systematic review and meta-analyses of birth outcomes for women who intend at the onset of labour to give birth at home compared to women of low obstetrical risk who intend to give birth in hospital.

    PubMed

    Hutton, Eileen K; Reitsma, Angela; Thorpe, Julia; Brunton, Ginny; Kaufman, Karyn

    2014-05-29

    There has been a renewed interest in the place of birth, including intended home birth, for low risk women. In the absence of adequately-sized randomised controlled trials, a recent Cochrane review recommended that a systematic review and meta-analysis, including observational studies, be undertaken to inform this topic. The objective of this review is to determine if women intending at the onset of labour to give birth at home are more or less likely to experience a foetal or neonatal loss compared to a cohort of women who are comparable to the home birth cohort on the absence of risk factors but who intend to give birth in a hospital setting. We will search using Embase, MEDLINE, CINAHL, AMED and the Cochrane Library to find studies published since 1990 that compare foetal, neonatal and maternal outcomes for women who intended at the onset of labour to give birth at home to a comparison cohort of low risk women who intended at the onset of labour to give birth in hospital. We will obtain pooled estimates of effect using Review Manager. Because of the likelihood of differences in outcomes in settings where home birth is integrated into the health care system, we will stratify our results according to jurisdictions that have a health care system that integrates home birth and those where home birth is provided outside the usual health care system. Since parity is known to be associated with birth outcomes, only studies that take parity into account will be included in the meta-analyses. We will provide results by parity to the extent possible. This protocol was registered with PROSPERO at http://www.crd.york.ac.uk/Prospero/ (Registration number: CRD42013004046).

  4. Phylogenic inference using alignment-free methods for applications in microbial community surveys using 16s rRNA gene

    PubMed Central

    2017-01-01

    The diversity of microbiota is best explored by understanding the phylogenetic structure of the microbial communities. Traditionally, sequence alignment has been used for phylogenetic inference. However, alignment-based approaches come with significant challenges and limitations when massive amounts of data are analyzed. In the recent decade, alignment-free approaches have enabled genome-scale phylogenetic inference. Here we evaluate three alignment-free methods: ACS, CVTree, and Kr for phylogenetic inference with 16s rRNA gene data. We use a taxonomic gold standard to compare the accuracy of alignment-free phylogenetic inference with that of common microbiome-wide phylogenetic inference pipelines based on PyNAST and MUSCLE alignments with FastTree and RAxML. We re-simulate fecal communities from Human Microbiome Project data to evaluate the performance of the methods on datasets with properties of real data. Our comparisons show that alignment-free methods are not inferior to alignment-based methods in giving accurate and robust phylogenetic trees. Moreover, consensus ensembles of alignment-free phylogenies are superior to those built from alignment-based methods in their ability to highlight community differences in low power settings. In addition, the overall running times of alignment-based and alignment-free phylogenetic inference are comparable. Taken together, our empirical results suggest that alignment-free methods provide a viable approach for microbiome-wide phylogenetic inference. PMID:29136663

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehtomäki, Jouko; Makkonen, Ilja; Harju, Ari

    We present a computational scheme for orbital-free density functional theory (OFDFT) that simultaneously provides access to all-electron values and preserves the OFDFT linear scaling as a function of the system size. Using the projector augmented-wave method (PAW) in combination with real-space methods, we overcome some obstacles faced by other available implementation schemes. Specifically, the advantages of using the PAW method are twofold. First, PAW reproduces all-electron values offering freedom in adjusting the convergence parameters and the atomic setups allow tuning the numerical accuracy per element. Second, PAW can provide a solution to some of the convergence problems exhibited in other OFDFT implementations based on Kohn-Sham (KS) codes. Using PAW and real-space methods, our orbital-free results agree with the reference all-electron values with a mean absolute error of 10 meV and the number of iterations required by the self-consistent cycle is comparable to the KS method. The comparison of all-electron and pseudopotential bulk modulus and lattice constant reveal an enormous difference, demonstrating that in order to assess the performance of OFDFT functionals it is necessary to use implementations that obtain all-electron values. The proposed combination of methods is the most promising route currently available. We finally show that a parametrized kinetic energy functional can give lattice constants and bulk moduli comparable in accuracy to those obtained by the KS PBE method, exemplified with the case of diamond.

  6. Clinical outcomes of arthroscopic single and double row repair in full thickness rotator cuff tears

    PubMed Central

    Ji, Jong-Hun; Shafi, Mohamed; Kim, Weon-Yoo; Kim, Young-Yul

    2010-01-01

    Background: There has been recent interest in the double row repair method for arthroscopic rotator cuff repair following favourable biomechanical results reported by some studies. The purpose of this study was to compare the clinical results of the arthroscopic single row and double row repair methods in full-thickness rotator cuff tears. Materials and Methods: 22 patients who underwent arthroscopic single row repair (Group I) and 25 patients who underwent double row repair (Group II) from March 2003 to March 2005 were retrospectively evaluated and compared for clinical outcomes. The mean age was 58 years and 56 years for Groups I and II, respectively. The average follow-up in the two groups was 24 months. Evaluation used the University of California Los Angeles (UCLA) rating scale and the shoulder index of the American Shoulder and Elbow Surgeons (ASES). Results: The mean ASES score increased from 30.48 to 87.40 in Group I and from 32.00 to 91.45 in Group II. The mean UCLA score increased from a preoperative 12.23 to 30.82 in Group I and from 12.20 to 32.40 in Group II. No statistically significant clinical difference was found between the two methods, but based on the subscores of the UCLA score, the double row repair method yields better strength and gives more satisfaction to the patients than the single row repair method. Conclusions: Comparing the two methods, the double row repair group showed better clinical results in strength recovery and patient satisfaction, but no statistically significant clinical difference was found between the two methods. PMID:20697485

  7. Visual and digital comparative tooth colour assessment methods and atomic force microscopy surface roughness.

    PubMed

    Grundlingh, A A; Grossman, E S; Shrivastava, S; Witcomb, M J

    2013-10-01

    This study compared digital and visual tooth colour assessment methods in a sample of 99 teeth consisting of incisors, canines and pre-molars. The teeth were equally divided between Control, Ozicure Oxygen Activator bleach and Opalescence Quick bleach and subjected to three treatments. Colour readings were recorded at nine intervals by two assessment methods, VITA Easyshade and VITAPAN 3D MASTER TOOTH GUIDE, giving a total of 1782 colour readings. Descriptive and statistical analysis was undertaken using a GLM test for Analysis of Variance for a Fractional Design set at a significance of P < 0.05. Atomic force microscopy was used to examine treated enamel surfaces and establish surface roughness. Visual tooth colour assessment showed significance for the independent variables of treatment, number of treatments, tooth type and the combination of tooth type and treatment. Digital colour assessment indicated treatment and tooth type to be significant in tooth colour change. Poor agreement was found between visual and digital colour assessment methods for the Control and Ozicure Oxygen Activator treatments. Surface roughness values increased two-fold for Opalescence Quick specimens over the two other treatments, implying that increased light scattering improved digital colour reading. Both digital and visual colour matching methods should be used in tooth bleaching studies to complement each other and to compensate for deficiencies.

  8. Robust Multigrid Smoothers for Three Dimensional Elliptic Equations with Strong Anisotropies

    NASA Technical Reports Server (NTRS)

    Llorente, Ignacio M.; Melson, N. Duane

    1998-01-01

    We discuss the behavior of several plane relaxation methods as multigrid smoothers for the solution of a discrete anisotropic elliptic model problem on cell-centered grids. The methods compared are plane Jacobi with damping, plane Jacobi with partial damping, plane Gauss-Seidel, plane zebra Gauss-Seidel, and line Gauss-Seidel. Based on numerical experiments and local mode analysis, we compare the smoothing factor of the different methods in the presence of strong anisotropies. A four-color Gauss-Seidel method is found to have the best numerical and architectural properties of the methods considered in the present work. Although alternating direction plane relaxation schemes are simpler and more robust than other approaches, they are not currently used in industrial and production codes because they require the solution of a two-dimensional problem for each plane in each direction. We verify the theoretical predictions of Thole and Trottenberg that an exact solution of each plane is not necessary and that a single two-dimensional multigrid cycle gives the same result as an exact solution, in much less execution time. Parallelization of the two-dimensional multigrid cycles, the kernel of the three-dimensional implicit solver, is also discussed. Alternating-plane smoothers are found to be highly efficient multigrid smoothers for anisotropic elliptic problems.
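    The plane smoothers compared above generalize the classic damped (weighted) Jacobi iteration; as a much-simplified illustration (a 1D Poisson analogue in Python, not the paper's 3D plane solver), one damped Jacobi sweep looks like:

```python
import numpy as np

def damped_jacobi(u, f, h, omega=2.0 / 3.0, sweeps=1):
    """Damped Jacobi sweeps for -u'' = f on a uniform grid with
    spacing h; Dirichlet boundary values u[0], u[-1] are held fixed."""
    for _ in range(sweeps):
        u_new = u.copy()
        # pointwise Jacobi update from the 3-point Laplacian stencil
        u_new[1:-1] = 0.5 * (u[:-2] + u[2:] + h ** 2 * f[1:-1])
        u = (1.0 - omega) * u + omega * u_new
    return u
```

    With omega = 2/3 the high-frequency error components are damped fastest, which is what makes this family of iterations useful as multigrid smoothers.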

  9. Comparison of GEOS-5 AGCM planetary boundary layer depths computed with various definitions

    NASA Astrophysics Data System (ADS)

    McGrath-Spangler, E. L.; Molod, A.

    2014-07-01

    Accurate models of planetary boundary layer (PBL) processes are important for forecasting weather and climate. The present study compares seven methods of calculating PBL depth in the GEOS-5 atmospheric general circulation model (AGCM) over land. These methods depend on the eddy diffusion coefficients, bulk and local Richardson numbers, and the turbulent kinetic energy. The computed PBL depths are aggregated to the Köppen-Geiger climate classes, and some limited comparisons are made using radiosonde profiles. Most methods produce similar midday PBL depths, although in the warm, moist climate classes the bulk Richardson number method gives midday results that are lower than those given by the eddy diffusion coefficient methods. Additional analysis revealed that methods sensitive to turbulence driven by radiative cooling produce greater PBL depths, this effect being most significant during the evening transition. Nocturnal PBLs based on Richardson number methods are generally shallower than eddy diffusion coefficient based estimates. The bulk Richardson number estimate is recommended as the PBL height to inform the choice of the turbulent length scale, based on the similarity to other methods during the day, and the improved nighttime behavior.
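    As an illustration of the bulk Richardson number approach recommended above, a minimal profile scan in Python/NumPy (the 0.25 critical value is a commonly used threshold, not necessarily the GEOS-5 setting, and the function is our sketch; it returns the nearest model level without interpolation):

```python
import numpy as np

G = 9.81        # gravitational acceleration, m/s^2
RI_CRIT = 0.25  # assumed critical bulk Richardson number

def pbl_depth_bulk_ri(z, theta_v, u, v):
    """Lowest level where the bulk Richardson number, computed
    relative to the surface, first exceeds RI_CRIT."""
    wind2 = np.maximum(u ** 2 + v ** 2, 1e-6)   # avoid division by zero
    ri = G * (theta_v - theta_v[0]) * (z - z[0]) / (theta_v[0] * wind2)
    above = np.nonzero(ri > RI_CRIT)[0]
    return z[above[0]] if above.size else z[-1]
```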

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Zhao, Haihua; Kim, Seung Jun

    In this study, the classical Welander's oscillatory natural circulation problem is investigated using high-order numerical methods. As originally studied by Welander, the fluid motion in a differentially heated fluid loop can exhibit stable, weakly unstable, and strongly unstable modes. A theoretical stability map was also originally derived from the stability analysis. Numerical results obtained in this paper show very good agreement with Welander's theoretical derivations. For stable cases, numerical results from both the high-order and low-order numerical methods agree well with the analytically derived non-dimensional flow rate. The high-order numerical methods give much smaller numerical errors compared to the low-order methods. For stability analysis, the high-order numerical methods could perfectly predict the stability map, while the low-order numerical methods failed to do so. For all theoretically unstable cases, the low-order methods predicted them to be stable. The results obtained in this paper are strong evidence of the benefits of using high-order numerical methods over low-order ones when simulating natural circulation phenomena, which have gained increasing interest in many future nuclear reactor designs.

  11. A qualitative and quantitative HPTLC densitometry method for the analysis of cannabinoids in Cannabis sativa L.

    PubMed

    Fischedick, Justin T; Glas, Ronald; Hazekamp, Arno; Verpoorte, Rob

    2009-01-01

    Cannabis and cannabinoid based medicines are currently under serious investigation for legitimate development as medicinal agents, necessitating new low-cost, high-throughput analytical methods for quality control. The goal of this study was to develop and validate, according to ICH guidelines, a simple rapid HPTLC method for the quantification of Delta(9)-tetrahydrocannabinol (Delta(9)-THC) and qualitative analysis of other main neutral cannabinoids found in cannabis. The method was developed and validated with the use of pure cannabinoid reference standards and two medicinal cannabis cultivars. Accuracy was determined by comparing results obtained from the HPTLC method with those obtained from a validated HPLC method. Delta(9)-THC gives linear calibration curves in the range of 50-500 ng at 206 nm with a linear regression of y = 11.858x + 125.99 and r(2) = 0.9968. Results have shown that the HPTLC method is reproducible and accurate for the quantification of Delta(9)-THC in cannabis. The method is also useful for the qualitative screening of the main neutral cannabinoids found in cannabis cultivars.
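    The reported regression can be inverted to turn a densitometry response into an amount of Delta(9)-THC; a small sketch using the slope and intercept from the abstract (the range check reflects the validated 50-500 ng interval; the function itself is ours):

```python
SLOPE = 11.858     # response per ng, from the reported regression
INTERCEPT = 125.99

def thc_amount_ng(response):
    """Invert y = 11.858 x + 125.99 to estimate the spotted amount (ng)."""
    amount = (response - INTERCEPT) / SLOPE
    if not 50.0 <= amount <= 500.0:
        raise ValueError("outside the validated 50-500 ng range")
    return amount
```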

  12. Studies of EXAFSSpectra using Copper (II) Schiff Base complexes and Determination of Bond lengths Using Synchrotron Radiation

    NASA Astrophysics Data System (ADS)

    Mishra, A.; Vibhute, V.; Ninama, S.; Parsai, N.; Jha, S. N.; Sharma, P.

    2016-10-01

    X-ray absorption fine structure (XAFS) at the K-edge of copper has been studied in some copper (II) complexes with substituted anilines (2Cl, 4Br, 2NO2, 4NO2 and pure aniline) with o-PDA (orthophenylenediamine) as ligand. The X-ray absorption measurements were performed at the recently developed BL-8 dispersive EXAFS beam line at the 2.5 GeV Indus-2 Synchrotron Source at RRCAT, Indore, India. The data obtained were processed using the EXAFS data analysis program Athena. The graphical method gives useful information about the bond length and also the environment of the absorbing atom. The theoretical bond lengths of the complexes were calculated by interactive fitting of the EXAFS using the fast Fourier inverse transformation (IFEFFIT) method, also called the Fourier transform method. The Lytle, Sayers and Stern method and Levy's method were used to determine the bond lengths of the studied complexes experimentally. The results of both methods have been compared with the theoretical IFEFFIT method.

  13. Error baseline rates of five sample preparation methods used to characterize RNA virus populations.

    PubMed

    Kugelman, Jeffrey R; Wiley, Michael R; Nagle, Elyse R; Reyes, Daniel; Pfeffer, Brad P; Kuhn, Jens H; Sanchez-Lockhart, Mariano; Palacios, Gustavo F

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4-5) of all compared methods.

  14. Error baseline rates of five sample preparation methods used to characterize RNA virus populations

    PubMed Central

    Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4−5) of all compared methods. PMID:28182717

  15. Surveying Europe's Only Cave-Dwelling Chordate Species (Proteus anguinus) Using Environmental DNA.

    PubMed

    Vörös, Judit; Márton, Orsolya; Schmidt, Benedikt R; Gál, Júlia Tünde; Jelić, Dušan

    2017-01-01

    In surveillance of subterranean fauna, especially in the case of rare or elusive aquatic species, traditional techniques used for epigean species are often not feasible. We developed a non-invasive survey method based on environmental DNA (eDNA) to detect the presence of the red-listed cave-dwelling amphibian, Proteus anguinus, in the caves of the Dinaric Karst. We tested the method in fifteen caves in Croatia, from which the species was previously recorded or expected to occur. We successfully confirmed the presence of P. anguinus from ten caves and detected the species for the first time in five others. Using a hierarchical occupancy model we compared the availability and detection probability of eDNA of two water sampling methods, filtration and precipitation. The statistical analysis showed that both availability and detection probability depended on the method and estimates for both probabilities were higher using filter samples than for precipitation samples. Combining reliable field and laboratory methods with robust statistical modeling will give the best estimates of species occurrence.

  16. The complex-scaled multiconfigurational spin-tensor electron propagator method for low-lying shape resonances in Be-, Mg- and Ca-

    NASA Astrophysics Data System (ADS)

    Tsogbayar, Tsednee; Yeager, Danny L.

    2017-01-01

    We further apply the complex scaled multiconfigurational spin-tensor electron propagator method (CMCSTEP) for the theoretical determination of resonance parameters of electron-atom systems including open-shell and highly correlated (non-dynamical correlation) atoms and molecules. The multiconfigurational spin-tensor electron propagator method (MCSTEP), developed and implemented by Yeager and his coworkers in real space, gives very accurate and reliable ionization potentials and electron affinities. CMCSTEP uses a complex scaled multiconfigurational self-consistent field (CMCSCF) state as an initial state along with a dilated Hamiltonian where all of the electronic coordinates are scaled by a complex factor. CMCSTEP is designed for determining resonances. We apply CMCSTEP to obtain the lowest ²P (Be-, Mg-) and ²D (Mg-, Ca-) shape resonances using several different basis sets, each with several complete active spaces. Many of the basis sets we employ have been used by others with different methods. Hence, we can directly compare results across methods using the same basis sets.

  17. Multithreading with separate data to improve the performance of Backpropagation method

    NASA Astrophysics Data System (ADS)

    Dhamma, Mulia; Zarlis, Muhammad; Budhiarti Nababan, Erna

    2017-12-01

    Backpropagation is an artificial neural network method that can make predictions for new data by supervised learning from past data. The learning process of the backpropagation method becomes slow if it is given too much data to learn. Multithreading with separate data inside each thread is used in order to improve the performance of the backpropagation method. Based on experiments with 39 data records, repeated five times with the data split across two threads, the average number of epochs was 6,490 with two threads versus 453,049 with a single thread. The lowest epoch count was 1,295 for two threads and 356,116 for one thread. The improvement comes from comparing the minimum errors of the two threads to select the weight and bias values. This process repeats for as long as backpropagation continues learning.
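    The two-thread scheme described above (train on separate partitions, then keep the weights from the thread with the smaller error) can be sketched with Python's threading module; the single-neuron "network" below is a toy stand-in for the paper's backpropagation network, and all names are ours:

```python
import threading
import numpy as np

def train(partition, results, idx, epochs=200, lr=0.5):
    """Toy single-neuron gradient training on one data partition
    (a stand-in for the paper's backpropagation network)."""
    X, y = partition
    w, b = 0.1, 0.0
    for _ in range(epochs):
        pred = 1.0 / (1.0 + np.exp(-(w * X + b)))  # sigmoid output
        err = pred - y
        w -= lr * np.mean(err * X)                 # gradient step
        b -= lr * np.mean(err)
    results[idx] = (float(np.mean(err ** 2)), w, b)

# split the data set across two threads, one partition each
X = np.linspace(-1.0, 1.0, 40)
y = (X > 0).astype(float)
partitions = [(X[::2], y[::2]), (X[1::2], y[1::2])]
results = [None, None]
threads = [threading.Thread(target=train, args=(p, results, i))
           for i, p in enumerate(partitions)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# keep the weight and bias from the thread with the smaller error
best_error, w, b = min(results)
```

    Note that CPython's global interpreter lock means such threads interleave rather than run truly in parallel for pure-Python arithmetic; the sketch only illustrates the partition-and-select logic.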

  18. Determination of the pure silicon monocarbide content of silicon carbide and products based on silicon carbide

    NASA Technical Reports Server (NTRS)

    Prost, L.; Pauillac, A.

    1978-01-01

    Experience has shown that different methods of analysis of SiC products give different results. Methods identified as AFNOR, FEPA, and manufacturer P, currently used to detect SiC, free C, free Si, free Fe, and SiO2 are reviewed. The AFNOR method gives lower SiC content, attributed to destruction of SiC by grinding. Two products sent to independent labs for analysis by the AFNOR and FEPA methods showed somewhat different results, especially for SiC, SiO2, and Al2O3 content, whereas an X-ray analysis showed a SiC content approximately 10 points lower than by chemical methods.

  19. Shear Lag in Box Beams Methods of Analysis and Experimental Investigations

    NASA Technical Reports Server (NTRS)

    Kuhn, Paul; Chiarito, Patrick T

    1942-01-01

    The bending stresses in the covers of box beams or wide-flange beams differ appreciably from the stresses predicted by the ordinary bending theory on account of shear deformation of the flanges. The problem of predicting these differences has become known as the shear-lag problem. The first part of this paper deals with methods of shear-lag analysis suitable for practical use. The second part of the paper describes strain-gage tests made by the NACA to verify the theory. Three tests published by other investigators are also analyzed by the proposed method. The third part of the paper gives numerical examples illustrating the methods of analysis. An appendix gives comparisons with other methods, particularly with the method of Ebner and Koller.

  20. [Costs of preserved corneal transplants].

    PubMed

    Ardjomand, N; Reich, M E

    1997-10-01

    Organ culture medium and Optisol are the most commonly used corneal storage media. This study compares the costs of these two methods. In calculating costs we took not only the direct costs into account but also tried to determine the fixed costs per transplanted cornea under corresponding assumptions. Assuming that 50 stored corneas are transplanted per year, a cost of 11,660 ATS (1,666 DM, 857 ECU) per organ-cultured cornea and 11,986 ATS (1,712 DM, 881 ECU) per graft preserved in Optisol was calculated. Raising the number of transplanted corneas to 400 per year, each tissue stored in organ culture medium costs 2,811 ATS (402 DM, 207 ECU) and each preserved in Optisol 3,234 ATS (462 DM, 238 ECU). Since organ culture storage reduces costs by more than 15% compared with Optisol when 400 transplantable grafts are preserved, this storage method should be preferred from a business economics standpoint.
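    The volume dependence of the per-graft cost above is what a simple fixed-plus-variable cost model predicts; as an illustration (the decomposition below is back-calculated from the two organ-culture totals in the abstract, not reported by the authors):

```python
def solve_cost_model(n1, total1, n2, total2):
    """Back-calculate annual fixed cost and per-graft variable cost from
    two (volume, per-graft cost) observations, assuming
    per_graft(n) = fixed / n + variable."""
    fixed = (total1 - total2) / (1.0 / n1 - 1.0 / n2)
    variable = total1 - fixed / n1
    return fixed, variable

def per_graft_cost(n, fixed, variable):
    return fixed / n + variable

# organ-culture figures from the abstract (ATS per graft)
fixed, variable = solve_cost_model(50, 11660.0, 400, 2811.0)
```

    This yields a fixed cost of roughly 506,000 ATS per year and a variable cost of roughly 1,547 ATS per graft, reproduces both reported totals, and explains why the per-graft cost falls so steeply with volume.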
